Merge branch 'develop' into bugfix/traypublish-editorial-avoid-audio-track

This commit is contained in:
Jakub Ježek 2024-01-26 16:57:37 +01:00 committed by GitHub
commit acc2f9fc87
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
179 changed files with 4490 additions and 2939 deletions

View file

@ -35,6 +35,20 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.18.5
- 3.18.5-nightly.3
- 3.18.5-nightly.2
- 3.18.5-nightly.1
- 3.18.4
- 3.18.4-nightly.1
- 3.18.3
- 3.18.3-nightly.2
- 3.18.3-nightly.1
- 3.18.2
- 3.18.2-nightly.6
- 3.18.2-nightly.5
- 3.18.2-nightly.4
- 3.18.2-nightly.3
- 3.18.2-nightly.2
- 3.18.2-nightly.1
- 3.18.1
@ -121,20 +135,6 @@ body:
- 3.15.8-nightly.3
- 3.15.8-nightly.2
- 3.15.8-nightly.1
- 3.15.7
- 3.15.7-nightly.3
- 3.15.7-nightly.2
- 3.15.7-nightly.1
- 3.15.6
- 3.15.6-nightly.3
- 3.15.6-nightly.2
- 3.15.6-nightly.1
- 3.15.5
- 3.15.5-nightly.2
- 3.15.5-nightly.1
- 3.15.4
- 3.15.4-nightly.3
- 3.15.4-nightly.2
validations:
required: true
- type: dropdown

View file

@ -1,6 +1,939 @@
# Changelog
## [3.18.5](https://github.com/ynput/OpenPype/tree/3.18.5)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.4...3.18.5)
### **🚀 Enhancements**
<details>
<summary>Chore: Add addons dir only if exists <a href="https://github.com/ynput/OpenPype/pull/6140">#6140</a></summary>
Do not add the addons directory path to addon discovery if the directory does not exist.
___
</details>
<details>
<summary>Hiero: Effect Categories - OP-7397 <a href="https://github.com/ynput/OpenPype/pull/6143">#6143</a></summary>
This PR introduces `Effect Categories` for the Hiero settings. This allows studios to split effect stacks into meaningful subsets.
___
</details>
<details>
<summary>Nuke: Render Workfile Attributes <a href="https://github.com/ynput/OpenPype/pull/6146">#6146</a></summary>
`Workfile Dependency` default value can now be controlled from project settings. `Use Published Workfile` makes using published workfiles for rendering optional.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: Attributes are locked after publishing if they are locked in Camera Family <a href="https://github.com/ynput/OpenPype/pull/6073">#6073</a></summary>
This PR makes sure attributes are unlocked only during the bake context and relocked afterwards, preserving the lock state of the original node being baked.
___
</details>
<details>
<summary>Missing nuke family Windows arguments <a href="https://github.com/ynput/OpenPype/pull/6131">#6131</a></summary>
Default Windows arguments for launching the Nuke family were missing.
___
</details>
<details>
<summary>AYON: Fix the bug on the limit group not being set correctly in Maya Deadline Setting <a href="https://github.com/ynput/OpenPype/pull/6139">#6139</a></summary>
This PR fixes the limit groups in the Maya Deadline settings, which errored out when the user tried to edit the setting.
___
</details>
<details>
<summary>Chore: Transcoding extensions add missing '.tif' extension <a href="https://github.com/ynput/OpenPype/pull/6142">#6142</a></summary>
Image extensions in the transcoding helper were missing the `.tif` extension and listed `.tiff` twice.
___
</details>
<details>
<summary>Blender: Use the new API for override context <a href="https://github.com/ynput/OpenPype/pull/6145">#6145</a></summary>
Blender 4.0 removed the old API for overriding the context. This PR updates the code to use the new API.
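A minimal sketch of the API change, assuming a context dict such as the ones built by `create_blender_context` elsewhere in this codebase (the override values here are illustrative):

```python
import bpy

# Context pieces to override; assumed to come from a helper such as
# create_blender_context() used elsewhere in this repository.
override = {"window": bpy.context.window_manager.windows[0]}

# Old API (removed in Blender 4.0): pass the override dict to the operator.
# bpy.ops.view3d.view_axis(override, type="FRONT")

# New API: temporarily override the context with a context manager.
with bpy.context.temp_override(**override):
    bpy.ops.view3d.view_axis(type="FRONT")
```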
___
</details>
<details>
<summary>BugFix: Include Model in FBX Loader in Houdini <a href="https://github.com/ynput/OpenPype/pull/6150">#6150</a></summary>
A quick bugfix for being unable to load FBX files exported from Blender. The bug was reported here.
___
</details>
<details>
<summary>Blender: Restore actions to objects after update <a href="https://github.com/ynput/OpenPype/pull/6153">#6153</a></summary>
Restore the actions assigned to objects after updating assets from blend files.
___
</details>
<details>
<summary>Chore: Collect template data with hierarchy context <a href="https://github.com/ynput/OpenPype/pull/6154">#6154</a></summary>
Fixed a queue loop that used the wrong variable to pop items from the queue.
___
</details>
<details>
<summary>OP-6382 - Thumbnail Integration Problem <a href="https://github.com/ynput/OpenPype/pull/6156">#6156</a></summary>
This ticket alerted to three different cases of integration issues:
- [x] Using the Tray Publisher with the same image format (extension) for representation and review representation.
- [x] Clash on publish file path from output definitions in `ExtractOIIOTranscode`.
- [x] Clash on publish file from thumbnail in `ExtractThumbnail`.
There might be an issue with this fix if a studio does not use the `{output}` token in their `render` anatomy template, but if they have customized it, they will be responsible for maintaining these edge cases.
___
</details>
<details>
<summary>Max: Bugfix saving camera scene errored out when creating render instance with multi-camera option turned off <a href="https://github.com/ynput/OpenPype/pull/6163">#6163</a></summary>
This PR makes sure the camera-scene-saving integrator is turned off and the render submits successfully when the multi-camera option is turned off in 3dsmax.
___
</details>
<details>
<summary>Chore: Fix duplicated project name on create project structure <a href="https://github.com/ynput/OpenPype/pull/6166">#6166</a></summary>
Small fix in project folder creation. The same variable name is no longer reused to change values, which broke the values on every subsequent loop.
___
</details>
### **Merged pull requests**
<details>
<summary>Maya: Remove duplicate plugin <a href="https://github.com/ynput/OpenPype/pull/6157">#6157</a></summary>
The two plugins below do the same work, so we can remove the one focused solely on lookdev.
https://github.com/ynput/OpenPype/blob/develop/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
https://github.com/ynput/OpenPype/blob/develop/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
___
</details>
<details>
<summary>Publish report viewer: Report items sorting <a href="https://github.com/ynput/OpenPype/pull/6092">#6092</a></summary>
Proposal for item sorting in the Publish report viewer tool. Items are sorted by report creation time, which is also added to the publish report data when saved from the publisher tool.
___
</details>
<details>
<summary>Maya: Extended error message <a href="https://github.com/ynput/OpenPype/pull/6161">#6161</a></summary>
Added more details to the error message.
___
</details>
<details>
<summary>Fusion: Added settings for Fusion creators to legacy OP <a href="https://github.com/ynput/OpenPype/pull/6162">#6162</a></summary>
Added the missing OpenPype variant of settings for the new Fusion creator.
___
</details>
## [3.18.4](https://github.com/ynput/OpenPype/tree/3.18.4)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.3...3.18.4)
### **🚀 Enhancements**
<details>
<summary>multiple render camera supports for 3dsmax <a href="https://github.com/ynput/OpenPype/pull/5124">#5124</a></summary>
Support for rendering with multiple cameras in 3dsmax:
- [x] Add Batch Render Layers functions
- [x] Rewrite lib.rendersetting and lib.renderproduct
- [x] Add multi-camera options in creator.
- [x] Collector with batch render-layer when multi-camera enabled.
- [x] Add instance plugin for saving scene files with different cameras respectively by using subprocess
- [x] Refactor submit_max_deadline
- [x] Check with metadata.json in submit publish job
___
</details>
<details>
<summary>Fusion: new creator for image product type <a href="https://github.com/ynput/OpenPype/pull/6057">#6057</a></summary>
In many DCCs the `render` product type is expected to be a sequence of files. This PR adds a new explicit creator for the `image` product type, which is focused on single-frame images. Workflows for the two product types might differ a bit, so this gives artists more granularity to choose the better workflow.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: Account and ignore free image planes. <a href="https://github.com/ynput/OpenPype/pull/5993">#5993</a></summary>
Free image planes do not have the `->` path separator, so we need to account for that.
___
</details>
<details>
<summary>Blender: Fix long names for instances <a href="https://github.com/ynput/OpenPype/pull/6070">#6070</a></summary>
Changed instance naming to use only the final part of the `folderPath`.
___
</details>
<details>
<summary>Traypublisher & Chore: Instance version on follow workfile version <a href="https://github.com/ynput/OpenPype/pull/6117">#6117</a></summary>
If `follow_workfile_version` is enabled but the context does not have a workfile version filled in, the version on the instance is used instead.
___
</details>
<details>
<summary>Substance Painter: Thumbnail errors with PBR Texture Set <a href="https://github.com/ynput/OpenPype/pull/6127">#6127</a></summary>
When publishing with PBR Metallic Roughness as the Output Template, the Emissive map errors out because the channel is missing in the material and the map can't be generated in Substance Painter. This PR makes sure the image instance's `data["publish"]` is set to `False` so that the related "empty" texture instance is skipped when generating the output.
___
</details>
<details>
<summary>Transcoding: Fix reading image sequences through oiiotool <a href="https://github.com/ynput/OpenPype/pull/6129">#6129</a></summary>
When transcoding image sequences, the oiiotool output for the second image onwards includes the invalid XML line `Reading path/to/file.exr`. This is most likely not the best solution, but it fixes the issue and illustrates the problem. Error:
```
ERROR:pyblish.plugin:Traceback (most recent call last):
File "C:\Users\tokejepsen\AppData\Local\Ynput\AYON\dependency_packages\ayon_2310271602_windows.zip\dependencies\pyblish\plugin.py", line 527, in __explicit_process
runner(*args)
File "C:\Users\tokejepsen\OpenPype\openpype\plugins\publish\extract_color_transcode.py", line 152, in process
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 1136, in convert_colorspace
input_info = get_oiio_info_for_input(input_path, logger=logger)
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 124, in get_oiio_info_for_input
output.append(parse_oiio_xml_output(xml_text, logger=logger))
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 276, in parse_oiio_xml_output
tree = xml.etree.ElementTree.fromstring(xml_string)
File "xml\etree\ElementTree.py", line 1347, in XML
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
Traceback (most recent call last):
File "C:\Users\tokejepsen\AppData\Local\Ynput\AYON\dependency_packages\ayon_2310271602_windows.zip\dependencies\pyblish\plugin.py", line 527, in __explicit_process
runner(*args)
File "<string>", line 152, in process
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 1136, in convert_colorspace
input_info = get_oiio_info_for_input(input_path, logger=logger)
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 124, in get_oiio_info_for_input
output.append(parse_oiio_xml_output(xml_text, logger=logger))
File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 276, in parse_oiio_xml_output
tree = xml.etree.ElementTree.fromstring(xml_string)
File "xml\etree\ElementTree.py", line 1347, in XML
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
```
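One possible way to make the XML parsing tolerant of the extra `Reading …` lines — a sketch only, not necessarily the exact change merged in #6129:

```python
import xml.etree.ElementTree


def parse_oiio_xml_output(xml_text):
    # oiiotool can prepend progress lines such as "Reading path/to/file.exr"
    # before the XML document; drop everything before the first "<".
    start = xml_text.find("<")
    if start > 0:
        xml_text = xml_text[start:]
    return xml.etree.ElementTree.fromstring(xml_text)
```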
___
</details>
<details>
<summary>AYON: Remove 'IntegrateHeroVersion' conversion <a href="https://github.com/ynput/OpenPype/pull/6130">#6130</a></summary>
Remove settings conversion for `IntegrateHeroVersion`.
___
</details>
<details>
<summary>Chore tools: Make sure style object is not garbage collected <a href="https://github.com/ynput/OpenPype/pull/6136">#6136</a></summary>
Minor fix in tool utils to make sure the style's C++ object is not garbage collected when it is not stored in a variable.
___
</details>
## [3.18.3](https://github.com/ynput/OpenPype/tree/3.18.3)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.2...3.18.3)
### **🚀 Enhancements**
<details>
<summary>Maya: Apply initial viewport shader for Redshift Proxy after loading <a href="https://github.com/ynput/OpenPype/pull/6102">#6102</a></summary>
When a published Redshift proxy is loaded, the proxy's shader is missing, unlike a manual load through creating a Redshift proxy for files. This PR assigns the default lambert to the Redshift proxy, replicating the behaviour when the user manually loads the proxy with a file path.
___
</details>
<details>
<summary>General: We should keep current subset version when we switch only the representation type <a href="https://github.com/ynput/OpenPype/pull/4629">#4629</a></summary>
When we switch only the representation type of subsets, we should not get the representation from the last version of the subset.
___
</details>
<details>
<summary>Houdini: Add loader for redshift proxy family <a href="https://github.com/ynput/OpenPype/pull/5948">#5948</a></summary>
Loader for Redshift Proxy in Houdini (thanks to @BigRoy for the contribution).
___
</details>
<details>
<summary>AfterEffects: exposing Deadline pools fields in Publisher UI <a href="https://github.com/ynput/OpenPype/pull/6079">#6079</a></summary>
Deadline pools might be set ad hoc by an artist during publishing; the AfterEffects implementation wasn't providing this.
___
</details>
<details>
<summary>Chore: Event callbacks can have order <a href="https://github.com/ynput/OpenPype/pull/6080">#6080</a></summary>
Event callbacks can now define the order in which they are called; also fixed an issue with getting the function name and file when using a `partial` function as a callback.
___
</details>
<details>
<summary>AYON: OpenPype addon defines runtime dependencies <a href="https://github.com/ynput/OpenPype/pull/6095">#6095</a></summary>
Moved runtime dependencies from ayon-launcher to openpype addon.
___
</details>
<details>
<summary>Max: User's setting for scene unit scale <a href="https://github.com/ynput/OpenPype/pull/6097">#6097</a></summary>
Options for users to set the default scene unit scale for their scenes.
___
</details>
<details>
<summary>Chore: Remove deprecated templates profiles <a href="https://github.com/ynput/OpenPype/pull/6103">#6103</a></summary>
Remove deprecated usage of template profiles from settings.
___
</details>
<details>
<summary>Publisher: Window is not always on top <a href="https://github.com/ynput/OpenPype/pull/6107">#6107</a></summary>
The goal of this PR is to avoid using `WindowStaysOnTopHint`, which causes issues, especially when the DCC shows a popup dialog behind the window; in that case both the Publisher and the DCC are frozen and there is nothing the user can do.
___
</details>
<details>
<summary>Houdini: add split job export support for Redshift ROP <a href="https://github.com/ynput/OpenPype/pull/6108">#6108</a></summary>
This adds support for splitting export and render jobs for Redshift, as is already implemented for V-Ray, Mantra and Arnold.
___
</details>
<details>
<summary>Fusion: automatic installation of PySide2 <a href="https://github.com/ynput/OpenPype/pull/6111">#6111</a></summary>
This PR adds a hook which checks whether PySide2 is installed in the Python used by Fusion and, if not, tries to install it automatically.
___
</details>
<details>
<summary>AYON: OpenPype addon dependencies <a href="https://github.com/ynput/OpenPype/pull/6113">#6113</a></summary>
Added `click` and `six` to requirements of openpype addon, and removed `Qt.py` requirement, which is not used anywhere.
___
</details>
<details>
<summary>Chore: Thumbnail representation has 'outputName' <a href="https://github.com/ynput/OpenPype/pull/6114">#6114</a></summary>
Add a thumbnail output name to the thumbnail representation to prevent identical output filenames during integration.
___
</details>
<details>
<summary>Kitsu: Clear credentials is safe <a href="https://github.com/ynput/OpenPype/pull/6116">#6116</a></summary>
Do not remove non-existent keyring items.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: bug fix the playblast without textures <a href="https://github.com/ynput/OpenPype/pull/5942">#5942</a></summary>
Fixes textures not being displayed when users enable texture placement in the OP/AYON settings.
___
</details>
<details>
<summary>Blender: Workfile instance update fix <a href="https://github.com/ynput/OpenPype/pull/6048">#6048</a></summary>
Make sure the workfile instance always has an 'instance_node' available in its transient data.
___
</details>
<details>
<summary>Publisher: Fix issue with parenting of widgets <a href="https://github.com/ynput/OpenPype/pull/6106">#6106</a></summary>
Don't use publisher window parent (usually main DCC window) as parent for report widget.
___
</details>
<details>
<summary>:wrench: fix and update pydocstyle configuration <a href="https://github.com/ynput/OpenPype/pull/6109">#6109</a></summary>
Fix pydocstyle configuration and move it to `pyproject.toml`
___
</details>
<details>
<summary>Nuke: Create camera node with the latest camera node class in Nuke 14 <a href="https://github.com/ynput/OpenPype/pull/6118">#6118</a></summary>
Creating an instance fails for certain cameras, and this seems to happen only in Nuke 14. The cause is the new camera node class `Camera4`, while the camera creator was working with the `Camera2` class.
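A hedged sketch of how camera nodes could be matched across Nuke versions; the class names and the filtering below are illustrative, not the exact code in the PR:

```python
import nuke

# Nuke 14 creates cameras as "Camera4", older versions use "Camera2"/"Camera3",
# so match on a set of class names instead of a single hardcoded one.
CAMERA_CLASSES = {"Camera", "Camera2", "Camera3", "Camera4"}

cameras = [node for node in nuke.allNodes() if node.Class() in CAMERA_CLASSES]
```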
___
</details>
<details>
<summary>Site Sync: small fixes in Loader <a href="https://github.com/ynput/OpenPype/pull/6119">#6119</a></summary>
Resolves issues:
- local and studio icons were the same; they should be different
- `TypeError: string indices must be integers` error when downloading/uploading workfiles
___
</details>
<details>
<summary>Chore: Template data for editorial publishing <a href="https://github.com/ynput/OpenPype/pull/6120">#6120</a></summary>
Template data for editorial publishing is filled during `CollectInstanceAnatomyData`. The structure for editorial is determined there, as it's required by the ExtractHierarchy AYON/OpenPype plugins.
___
</details>
<details>
<summary>SceneInventory: Fix site sync icon conversion <a href="https://github.com/ynput/OpenPype/pull/6123">#6123</a></summary>
Use 'get_qt_icon' to convert icon definitions from site sync.
___
</details>
## [3.18.2](https://github.com/ynput/OpenPype/tree/3.18.2)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.1...3.18.2)
### **🚀 Enhancements**
<details>
<summary>Testing: Release Maya/Deadline job from pending when testing. <a href="https://github.com/ynput/OpenPype/pull/5988">#5988</a></summary>
When testing we won't put the Deadline jobs into pending with dependencies, so the worker can start as soon as possible.
___
</details>
<details>
<summary>Max: Tweaks on Extractions for the exporters <a href="https://github.com/ynput/OpenPype/pull/5814">#5814</a></summary>
With this PR:
- Suspend Refresh is introduced in the abc & obj extractors for optimization.
- Users can choose the custom attributes to be included in abc exports.
___
</details>
<details>
<summary>Maya: Optional preserve references. <a href="https://github.com/ynput/OpenPype/pull/5994">#5994</a></summary>
Optional preserve references when publishing Maya scenes.
___
</details>
<details>
<summary>AYON ftrack: Expect 'ayon' group in custom attributes <a href="https://github.com/ynput/OpenPype/pull/6066">#6066</a></summary>
Expect the `ayon` group as one of the options for getting custom attributes.
___
</details>
<details>
<summary>AYON Chore: Remove dependencies related to separated addons <a href="https://github.com/ynput/OpenPype/pull/6074">#6074</a></summary>
Removed dependencies from openpype client pyproject.toml that are already defined by addons which require them.
___
</details>
<details>
<summary>Editorial & chore: Stop using pathlib2 <a href="https://github.com/ynput/OpenPype/pull/6075">#6075</a></summary>
Do not use `pathlib2`, which is a Python 2 backport of the Python 3 `pathlib` module.
___
</details>
<details>
<summary>Traypublisher: Correct validator label <a href="https://github.com/ynput/OpenPype/pull/6084">#6084</a></summary>
Use correct label for Validate filepaths.
___
</details>
<details>
<summary>Nuke: Extract Review Intermediate disabled when both Extract Review Mov and Extract Review Intermediate disabled in setting <a href="https://github.com/ynput/OpenPype/pull/6089">#6089</a></summary>
Report in Discord https://discord.com/channels/517362899170230292/563751989075378201/1187874498234556477
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: Bug fix the file from texture node not being collected correctly in Yeti Rig <a href="https://github.com/ynput/OpenPype/pull/5990">#5990</a></summary>
Fix the bug where Collect Yeti Rig couldn't get the file parameter(s) from the texture node(s), resulting in failure to publish the textures to the resource directory.
___
</details>
<details>
<summary>Bug: fix AYON settings for Maya workspace <a href="https://github.com/ynput/OpenPype/pull/6069">#6069</a></summary>
This fixes a bug in the default AYON settings for the Maya workspace, where a missing semicolon caused the workspace not to be set. It also syncs the default workspace settings to OpenPype.
___
</details>
<details>
<summary>Refactor colorspace handling in CollectColorspace plugin <a href="https://github.com/ynput/OpenPype/pull/6033">#6033</a></summary>
Traypublisher is now capable of setting available colorspaces or roles when publishing image sequences or video. This fixes the new implementation where roles were allowed in the enumerator selector.
___
</details>
<details>
<summary>Bugfix: Houdini render split bugs <a href="https://github.com/ynput/OpenPype/pull/6037">#6037</a></summary>
This PR is a follow-up to https://github.com/ynput/OpenPype/pull/5420. This PR does:
- refactor `get_output_parameter` back to what it used to be
- fix a bug with split render
- rename `exportJob` flag to `split_render`
___
</details>
<details>
<summary>Fusion: fix for single frame rendering <a href="https://github.com/ynput/OpenPype/pull/6056">#6056</a></summary>
Fixes publishes of single frame of `render` product type.
___
</details>
<details>
<summary>Photoshop: fix layer publish thumbnail missing in loader <a href="https://github.com/ynput/OpenPype/pull/6061">#6061</a></summary>
Thumbnails from any products (either `review` or separate layer instances) weren't stored in AYON. This resulted in them not being shown in the Loader and the server UI. After this PR thumbnails should be shown in the Loader and on the server (`http://YOUR_AYON_HOSTNAME:5000/projects/YOUR_PROJECT/browser`).
___
</details>
<details>
<summary>AYON Chore: Do not use thumbnailSource for thumbnail integration <a href="https://github.com/ynput/OpenPype/pull/6063">#6063</a></summary>
Do not use `thumbnailSource` for thumbnail integration.
___
</details>
<details>
<summary>Photoshop: fix creation of .mov <a href="https://github.com/ynput/OpenPype/pull/6064">#6064</a></summary>
Generation of .mov file with 1 frame per published layer was failing.
___
</details>
<details>
<summary>Photoshop: fix Collect Color Coded settings <a href="https://github.com/ynput/OpenPype/pull/6065">#6065</a></summary>
Fix for a wrong default value in the `Collect Color Coded Instances` settings.
___
</details>
<details>
<summary>Bug: Fix Publisher parent window in Nuke <a href="https://github.com/ynput/OpenPype/pull/6067">#6067</a></summary>
Fixes an issue where the publisher parent window wasn't set because of wrong use of a version constant.
___
</details>
<details>
<summary>Python console widget: Save registry fix <a href="https://github.com/ynput/OpenPype/pull/6076">#6076</a></summary>
Do not save registry until there is something to save.
___
</details>
<details>
<summary>Ftrack: update asset names for multiple reviewable items <a href="https://github.com/ynput/OpenPype/pull/6077">#6077</a></summary>
Multiple reviewable assetVersion components are now grouped better into the asset version name.
___
</details>
<details>
<summary>Ftrack: DJV action fixes <a href="https://github.com/ynput/OpenPype/pull/6098">#6098</a></summary>
Fix bugs in DJV ftrack action.
___
</details>
<details>
<summary>AYON Workfiles tool: Fix arrow to timezone typo <a href="https://github.com/ynput/OpenPype/pull/6099">#6099</a></summary>
Fix parenthesis typo with arrow local timezone function.
___
</details>
### **🔀 Refactored code**
<details>
<summary>Chore: Update folder-favorite icon to ayon icon <a href="https://github.com/ynput/OpenPype/pull/5718">#5718</a></summary>
Updates the old "Pype-2.0-era" icon (from ancient Greece times) to its AYON logo equivalent. I believe it's only used in Nuke.
___
</details>
### **Merged pull requests**
<details>
<summary>Chore: Maya / Nuke remove publish gui filters from settings <a href="https://github.com/ynput/OpenPype/pull/5570">#5570</a></summary>
- Remove Publish GUI Filters from Nuke settings
- Remove Publish GUI Filters from Maya settings
___
</details>
<details>
<summary>Fusion: Project/User option for output format (create_saver) <a href="https://github.com/ynput/OpenPype/pull/6045">#6045</a></summary>
Adds "Output Image Format" option which can be set via project settings and overwritten by users in "Create" menu. This replaces the current behaviour of being hardcoded to "exr". Replacing the need for people to manually edit the saver path if they require a different extension.
___
</details>
<details>
<summary>Fusion: Output Image Format Updating Instances (create_saver) <a href="https://github.com/ynput/OpenPype/pull/6060">#6060</a></summary>
Adds the ability to update the Saver image output format if it is changed in the Publish UI. ~~Adds an optional validator that compares "Output Image Format" in the Publish menu against the one currently found on the saver. It then offers a repair action to update the output extension on the saver.~~
___
</details>
<details>
<summary>Tests: Fix representation count for AE legacy test <a href="https://github.com/ynput/OpenPype/pull/6072">#6072</a></summary>
___
</details>
## [3.18.1](https://github.com/ynput/OpenPype/tree/3.18.1)

View file

@ -124,23 +124,24 @@ def get_linked_representation_id(
if not versions_to_check:
break
links = con.get_versions_links(
versions_links = con.get_versions_links(
project_name,
versions_to_check,
link_types=link_types,
link_direction="out")
versions_to_check = set()
for link in links:
# Care only about version links
if link["entityType"] != "version":
continue
entity_id = link["entityId"]
# Skip already found linked version ids
if entity_id in linked_version_ids:
continue
linked_version_ids.add(entity_id)
versions_to_check.add(entity_id)
for links in versions_links.values():
for link in links:
# Care only about version links
if link["entityType"] != "version":
continue
entity_id = link["entityId"]
# Skip already found linked version ids
if entity_id in linked_version_ids:
continue
linked_version_ids.add(entity_id)
versions_to_check.add(entity_id)
linked_version_ids.remove(version_id)
if not linked_version_ids:
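For context, the change above reflects that `con.get_versions_links` now returns a mapping of queried version id to its list of links rather than a flat list, hence the extra loop over `versions_links.values()`. A small illustrative sketch; the ids and link dicts below are made up:

```python
# Assumed return shape of con.get_versions_links(...) after the change:
versions_links = {
    "version-id-1": [
        {"entityType": "version", "entityId": "version-id-2"},
        {"entityType": "representation", "entityId": "repre-id-1"},
    ],
}

linked_version_ids = set()
for links in versions_links.values():
    for link in links:
        # Care only about version links
        if link["entityType"] != "version":
            continue
        linked_version_ids.add(link["entityId"])

print(linked_version_ids)  # {'version-id-2'}
```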

View file

@ -127,8 +127,9 @@ def isolate_objects(window, objects):
context = create_blender_context(selected=objects, window=window)
bpy.ops.view3d.view_axis(context, type="FRONT")
bpy.ops.view3d.localview(context)
with bpy.context.temp_override(**context):
bpy.ops.view3d.view_axis(type="FRONT")
bpy.ops.view3d.localview()
deselect_all()
@ -270,10 +271,12 @@ def _independent_window():
"""Create capture-window context."""
context = create_blender_context()
current_windows = set(bpy.context.window_manager.windows)
bpy.ops.wm.window_new(context)
window = list(set(bpy.context.window_manager.windows) - current_windows)[0]
context["window"] = window
try:
yield window
finally:
bpy.ops.wm.window_close(context)
with bpy.context.temp_override(**context):
bpy.ops.wm.window_new()
window = list(
set(bpy.context.window_manager.windows) - current_windows)[0]
context["window"] = window
try:
yield window
finally:
bpy.ops.wm.window_close()

View file

@ -36,6 +36,12 @@ def prepare_scene_name(
if namespace:
name = f"{name}_{namespace}"
name = f"{name}_{subset}"
# Blender name for a collection or object cannot be longer than 63
# characters. If the name is longer, it will raise an error.
if len(name) > 63:
raise ValueError(f"Scene name '{name}' would be too long.")
return name
@ -226,7 +232,7 @@ class BaseCreator(Creator):
# Create asset group
if AYON_SERVER_ENABLED:
asset_name = instance_data["folderPath"]
asset_name = instance_data["folderPath"].split("/")[-1]
else:
asset_name = instance_data["asset"]
@ -305,12 +311,16 @@ class BaseCreator(Creator):
)
return
# Rename the instance node in the scene if subset or asset changed
# Rename the instance node in the scene if subset or asset changed.
# Do not rename the instance if the family is workfile, as the
# workfile instance is included in the AVALON_CONTAINER collection.
if (
"subset" in changes.changed_keys
or asset_name_key in changes.changed_keys
):
) and created_instance.family != "workfile":
asset_name = data[asset_name_key]
if AYON_SERVER_ENABLED:
asset_name = asset_name.split("/")[-1]
name = prepare_scene_name(
asset=asset_name, subset=data["subset"]
)

View file

@ -25,7 +25,7 @@ class CreateWorkfile(BaseCreator, AutoCreator):
def create(self):
"""Create workfile instances."""
existing_instance = next(
workfile_instance = next(
(
instance for instance in self.create_context.instances
if instance.creator_identifier == self.identifier
@ -39,14 +39,14 @@ class CreateWorkfile(BaseCreator, AutoCreator):
host_name = self.create_context.host_name
existing_asset_name = None
if existing_instance is not None:
if workfile_instance is not None:
if AYON_SERVER_ENABLED:
existing_asset_name = existing_instance.get("folderPath")
existing_asset_name = workfile_instance.get("folderPath")
if existing_asset_name is None:
existing_asset_name = existing_instance["asset"]
existing_asset_name = workfile_instance["asset"]
if not existing_instance:
if not workfile_instance:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
task_name, task_name, asset_doc, project_name, host_name
@ -66,19 +66,18 @@ class CreateWorkfile(BaseCreator, AutoCreator):
asset_doc,
project_name,
host_name,
existing_instance,
workfile_instance,
)
)
self.log.info("Auto-creating workfile instance...")
current_instance = CreatedInstance(
workfile_instance = CreatedInstance(
self.family, subset_name, data, self
)
instance_node = bpy.data.collections.get(AVALON_CONTAINERS, {})
current_instance.transient_data["instance_node"] = instance_node
self._add_instance_to_context(current_instance)
self._add_instance_to_context(workfile_instance)
elif (
existing_asset_name != asset_name
or existing_instance["task"] != task_name
or workfile_instance["task"] != task_name
):
# Update instance context if it's different
asset_doc = get_asset_by_name(project_name, asset_name)
@ -86,12 +85,17 @@ class CreateWorkfile(BaseCreator, AutoCreator):
task_name, task_name, asset_doc, project_name, host_name
)
if AYON_SERVER_ENABLED:
existing_instance["folderPath"] = asset_name
workfile_instance["folderPath"] = asset_name
else:
existing_instance["asset"] = asset_name
workfile_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name
workfile_instance["task"] = task_name
workfile_instance["subset"] = subset_name
instance_node = bpy.data.collections.get(AVALON_CONTAINERS)
if not instance_node:
instance_node = bpy.data.collections.new(name=AVALON_CONTAINERS)
workfile_instance.transient_data["instance_node"] = instance_node
def collect_instances(self):

View file

@ -61,5 +61,10 @@ class BlendAnimationLoader(plugin.AssetLoader):
bpy.data.objects.remove(container)
library = bpy.data.libraries.get(bpy.path.basename(libpath))
filename = bpy.path.basename(libpath)
# Blender has a limit of 63 characters for any data name.
# If the filename is longer, it will be truncated.
if len(filename) > 63:
filename = filename[:63]
library = bpy.data.libraries.get(filename)
bpy.data.libraries.remove(library)
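Blender caps data-block names, including `bpy.data.libraries` keys, at 63 characters, which is why the lookup above truncates the file name first. A minimal illustration, assuming it runs inside Blender with a library of that name already linked:

```python
import bpy

libpath = "/assets/characterHeroLongName_modelMain_v012_representation.blend"
filename = bpy.path.basename(libpath)

# Library data-block names longer than 63 characters are truncated by
# Blender, so truncate before looking the library up by name.
if len(filename) > 63:
    filename = filename[:63]

library = bpy.data.libraries.get(filename)
```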

View file

@ -67,7 +67,8 @@ class AudioLoader(plugin.AssetLoader):
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
bpy.ops.sequencer.sound_strip_add(oc, filepath=libpath, frame_start=1)
with bpy.context.temp_override(**oc):
bpy.ops.sequencer.sound_strip_add(filepath=libpath, frame_start=1)
window_manager.windows[-1].screen.areas[0].type = old_type
@ -156,17 +157,18 @@ class AudioLoader(plugin.AssetLoader):
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(oc, action='DESELECT')
scene = bpy.context.scene
scene.sequence_editor.sequences_all[old_audio].select = True
with bpy.context.temp_override(**oc):
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(action='DESELECT')
scene = bpy.context.scene
scene.sequence_editor.sequences_all[old_audio].select = True
bpy.ops.sequencer.delete(oc)
bpy.data.sounds.remove(bpy.data.sounds[old_audio])
bpy.ops.sequencer.delete()
bpy.data.sounds.remove(bpy.data.sounds[old_audio])
bpy.ops.sequencer.sound_strip_add(
oc, filepath=str(libpath), frame_start=1)
bpy.ops.sequencer.sound_strip_add(
filepath=str(libpath), frame_start=1)
window_manager.windows[-1].screen.areas[0].type = old_type
@ -205,12 +207,13 @@ class AudioLoader(plugin.AssetLoader):
oc = bpy.context.copy()
oc["area"] = window_manager.windows[-1].screen.areas[0]
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(oc, action='DESELECT')
bpy.context.scene.sequence_editor.sequences_all[audio].select = True
bpy.ops.sequencer.delete(oc)
with bpy.context.temp_override(**oc):
# We deselect all sequencer strips, and then select the one we
# need to remove.
bpy.ops.sequencer.select_all(action='DESELECT')
scene = bpy.context.scene
scene.sequence_editor.sequences_all[audio].select = True
bpy.ops.sequencer.delete()
window_manager.windows[-1].screen.areas[0].type = old_type

View file

@ -102,11 +102,15 @@ class BlendLoader(plugin.AssetLoader):
# Link all the container children to the collection
for obj in container.children_recursive:
print(obj)
bpy.context.scene.collection.objects.link(obj)
# Remove the library from the blend file
library = bpy.data.libraries.get(bpy.path.basename(libpath))
filepath = bpy.path.basename(libpath)
# Blender has a limit of 63 characters for any data name.
# If the filepath is longer, it will be truncated.
if len(filepath) > 63:
filepath = filepath[:63]
library = bpy.data.libraries.get(filepath)
bpy.data.libraries.remove(library)
return container, members
@ -189,8 +193,20 @@ class BlendLoader(plugin.AssetLoader):
transform = asset_group.matrix_basis.copy()
old_data = dict(asset_group.get(AVALON_PROPERTY))
old_members = old_data.get("members", [])
parent = asset_group.parent
actions = {}
objects_with_anim = [
obj for obj in asset_group.children_recursive
if obj.animation_data]
for obj in objects_with_anim:
# Check if the object has an action and, if so, add it to a dict
# so we can restore it later. Save and restore the action only
# if it wasn't originally loaded from the current asset.
if obj.animation_data.action not in old_members:
actions[obj.name] = obj.animation_data.action
self.exec_remove(container)
asset_group, members = self._process_data(libpath, group_name)
@ -201,6 +217,13 @@ class BlendLoader(plugin.AssetLoader):
asset_group.matrix_basis = transform
asset_group.parent = parent
# Restore the actions
for obj in asset_group.children_recursive:
if obj.name in actions:
if not obj.animation_data:
obj.animation_data_create()
obj.animation_data.action = actions[obj.name]
# Restore the old data, but reset members, as they don't exist anymore
# This avoids a crash, because the memory addresses of those members
# are not valid anymore

View file

@ -60,7 +60,12 @@ class BlendSceneLoader(plugin.AssetLoader):
bpy.context.scene.collection.children.link(container)
# Remove the library from the blend file
library = bpy.data.libraries.get(bpy.path.basename(libpath))
filepath = bpy.path.basename(libpath)
# Blender has a limit of 63 characters for any data name.
# If the filepath is longer, it will be truncated.
if len(filepath) > 63:
filepath = filepath[:63]
library = bpy.data.libraries.get(filepath)
bpy.data.libraries.remove(library)
return container, members

View file

@ -55,13 +55,13 @@ class ExtractAnimationABC(
context = plugin.create_blender_context(
active=asset_group, selected=selected)
# We export the abc
bpy.ops.wm.alembic_export(
context,
filepath=filepath,
selected=True,
flatten=False
)
with bpy.context.temp_override(**context):
# We export the abc
bpy.ops.wm.alembic_export(
filepath=filepath,
selected=True,
flatten=False
)
plugin.deselect_all()

View file

@ -50,19 +50,19 @@ class ExtractCamera(publish.Extractor, publish.OptionalPyblishPluginMixin):
scale_length = bpy.context.scene.unit_settings.scale_length
bpy.context.scene.unit_settings.scale_length = 0.01
# We export the fbx
bpy.ops.export_scene.fbx(
context,
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'CAMERA'},
bake_anim_simplify_factor=0.0
)
with bpy.context.temp_override(**context):
# We export the fbx
bpy.ops.export_scene.fbx(
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'CAMERA'},
bake_anim_simplify_factor=0.0
)
bpy.context.scene.unit_settings.scale_length = scale_length

View file

@ -57,15 +57,15 @@ class ExtractFBX(publish.Extractor, publish.OptionalPyblishPluginMixin):
scale_length = bpy.context.scene.unit_settings.scale_length
bpy.context.scene.unit_settings.scale_length = 0.01
# We export the fbx
bpy.ops.export_scene.fbx(
context,
filepath=filepath,
use_active_collection=False,
use_selection=True,
mesh_smooth_type='FACE',
add_leaf_bones=False
)
with bpy.context.temp_override(**context):
# We export the fbx
bpy.ops.export_scene.fbx(
filepath=filepath,
use_active_collection=False,
use_selection=True,
mesh_smooth_type='FACE',
add_leaf_bones=False
)
bpy.context.scene.unit_settings.scale_length = scale_length

View file

@ -153,17 +153,20 @@ class ExtractAnimationFBX(
override = plugin.create_blender_context(
active=root, selected=[root, armature])
bpy.ops.export_scene.fbx(
override,
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'EMPTY', 'ARMATURE'}
)
with bpy.context.temp_override(**override):
# We export the fbx
bpy.ops.export_scene.fbx(
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'EMPTY', 'ARMATURE'}
)
armature.name = armature_name
asset_group.name = asset_group_name
root.select_set(True)

View file

@ -80,17 +80,18 @@ class ExtractLayout(publish.Extractor, publish.OptionalPyblishPluginMixin):
override = plugin.create_blender_context(
active=asset, selected=[asset, obj])
bpy.ops.export_scene.fbx(
override,
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'EMPTY', 'ARMATURE'}
)
with bpy.context.temp_override(**override):
# We export the fbx
bpy.ops.export_scene.fbx(
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'EMPTY', 'ARMATURE'}
)
obj.name = armature_name
asset.name = asset_group_name
asset.select_set(False)

View file

@ -0,0 +1,221 @@
from copy import deepcopy
import os
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk,
)
from openpype.lib import (
BoolDef,
EnumDef,
)
from openpype.pipeline import (
legacy_io,
Creator,
CreatedInstance
)
class GenericCreateSaver(Creator):
default_variants = ["Main", "Mask"]
description = "Fusion Saver to generate image sequence"
icon = "fa5.eye"
instance_attributes = [
"reviewable"
]
settings_category = "fusion"
image_format = "exr"
# TODO: This should be renamed together with Nuke so it is aligned
temp_rendering_path_template = (
"{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}")
def create(self, subset_name, instance_data, pre_create_data):
self.pass_pre_attributes_to_instance(instance_data, pre_create_data)
instance = CreatedInstance(
family=self.family,
subset_name=subset_name,
data=instance_data,
creator=self,
)
data = instance.data_to_store()
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp):
args = (-32768, -32768) # Magical position numbers
saver = comp.AddTool("Saver", *args)
self._update_tool_with_data(saver, data=data)
# Register the CreatedInstance
self._imprint(saver, data)
# Insert the transient data
instance.transient_data["tool"] = saver
self._add_instance_to_context(instance)
return instance
def collect_instances(self):
comp = get_current_comp()
tools = comp.GetToolList(False, "Saver").values()
for tool in tools:
data = self.get_managed_tool_data(tool)
if not data:
continue
# Add instance
created_instance = CreatedInstance.from_existing(data, self)
# Collect transient data
created_instance.transient_data["tool"] = tool
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
new_data = created_inst.data_to_store()
tool = created_inst.transient_data["tool"]
self._update_tool_with_data(tool, new_data)
self._imprint(tool, new_data)
def remove_instances(self, instances):
for instance in instances:
# Remove the tool from the scene
tool = instance.transient_data["tool"]
if tool:
tool.Delete()
# Remove the collected CreatedInstance to remove from UI directly
self._remove_instance_from_context(instance)
def _imprint(self, tool, data):
# Save all data in a "openpype.{key}" = value data
# Instance id is the tool's name so we don't need to imprint as data
data.pop("instance_id", None)
active = data.pop("active", None)
if active is not None:
# Use active value to set the passthrough state
tool.SetAttrs({"TOOLB_PassThrough": not active})
for key, value in data.items():
tool.SetData(f"openpype.{key}", value)
def _update_tool_with_data(self, tool, data):
"""Update tool node name and output path based on subset data"""
if "subset" not in data:
return
original_subset = tool.GetData("openpype.subset")
original_format = tool.GetData(
"openpype.creator_attributes.image_format"
)
subset = data["subset"]
if (
original_subset != subset
or original_format != data["creator_attributes"]["image_format"]
):
self._configure_saver_tool(data, tool, subset)
def _configure_saver_tool(self, data, tool, subset):
formatting_data = deepcopy(data)
# get frame padding from anatomy templates
frame_padding = self.project_anatomy.templates["frame_padding"]
# get output format
ext = data["creator_attributes"]["image_format"]
# Subset change detected
workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
formatting_data.update({
"workdir": workdir,
"frame": "0" * frame_padding,
"ext": ext,
"product": {
"name": formatting_data["subset"],
"type": formatting_data["family"],
},
})
# build file path to render
filepath = self.temp_rendering_path_template.format(**formatting_data)
comp = get_current_comp()
tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
# Rename tool
if tool.Name != subset:
print(f"Renaming {tool.Name} -> {subset}")
tool.SetAttrs({"TOOLS_Name": subset})
def get_managed_tool_data(self, tool):
"""Return data of the tool if it matches creator identifier"""
data = tool.GetData("openpype")
if not isinstance(data, dict):
return
required = {
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier,
}
for key, value in required.items():
if key not in data or data[key] != value:
return
# Get active state from the actual tool state
attrs = tool.GetAttrs()
passthrough = attrs["TOOLB_PassThrough"]
data["active"] = not passthrough
# Override publisher's UUID generation because tool names are
# already unique in Fusion in a comp
data["instance_id"] = tool.Name
return data
def get_instance_attr_defs(self):
"""Settings for publish page"""
return self.get_pre_create_attr_defs()
def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
creator_attrs = instance_data["creator_attributes"] = {}
for pass_key in pre_create_data.keys():
creator_attrs[pass_key] = pre_create_data[pass_key]
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames",
}
if "farm_rendering" in self.instance_attributes:
rendering_targets["farm"] = "Farm rendering"
return EnumDef(
"render_target", items=rendering_targets, label="Render target"
)
def _get_reviewable_bool(self):
return BoolDef(
"review",
default=("reviewable" in self.instance_attributes),
label="Review",
)
def _get_image_format_enum(self):
image_format_options = ["exr", "tga", "tif", "png", "jpg"]
return EnumDef(
"image_format",
items=image_format_options,
default=self.image_format,
label="Output Image Format",
)
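A small sketch of how the `temp_rendering_path_template` above resolves into a Saver output path; all values below are hypothetical:

```python
template = "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"

formatting_data = {
    "workdir": "C:/projects/demo/work/comp",  # hypothetical AVALON_WORKDIR
    "subset": "renderMain",
    "frame": "0" * 4,                          # zero-padded frame placeholder
    "ext": "exr",
}

print(template.format(**formatting_data))
# C:/projects/demo/work/comp/renders/fusion/renderMain/renderMain.0000.exr
```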

View file

@ -64,5 +64,8 @@ class FusionPrelaunch(PreLaunchHook):
self.launch_context.env[py3_var] = py3_dir
# for hook installing PySide2
self.data["fusion_python3_home"] = py3_dir
self.log.info(f"Setting OPENPYPE_FUSION: {FUSION_HOST_DIR}")
self.launch_context.env["OPENPYPE_FUSION"] = FUSION_HOST_DIR

View file

@ -0,0 +1,186 @@
import os
import subprocess
import platform
import uuid
from openpype.lib.applications import PreLaunchHook, LaunchTypes
class InstallPySideToFusion(PreLaunchHook):
"""Automatically installs Qt binding to fusion's python packages.
Check if fusion has installed PySide2 and will try to install if not.
For pipeline implementation is required to have Qt binding installed in
fusion's python packages.
"""
app_groups = {"fusion"}
order = 2
launch_types = {LaunchTypes.local}
def execute(self):
# Prelaunch hook is not crucial
try:
settings = self.data["project_settings"][self.host_name]
if not settings["hooks"]["InstallPySideToFusion"]["enabled"]:
return
self.inner_execute()
except Exception:
self.log.warning(
"Processing of {} crashed.".format(self.__class__.__name__),
exc_info=True
)
def inner_execute(self):
self.log.debug("Check for PySide2 installation.")
fusion_python3_home = self.data.get("fusion_python3_home")
if not fusion_python3_home:
self.log.warning("'fusion_python3_home' was not provided. "
"Installation of PySide2 not possible")
return
if platform.system().lower() == "windows":
exe_filenames = ["python.exe"]
else:
exe_filenames = ["python3", "python"]
for exe_filename in exe_filenames:
python_executable = os.path.join(fusion_python3_home, exe_filename)
if os.path.exists(python_executable):
break
if not os.path.exists(python_executable):
self.log.warning(
"Couldn't find python executable for fusion. {}".format(
python_executable
)
)
return
# Check if PySide2 is installed and skip if yes
if self._is_pyside_installed(python_executable):
self.log.debug("Fusion has already installed PySide2.")
return
self.log.debug("Installing PySide2.")
# Install PySide2 in fusion's python
if self._windows_require_permissions(
os.path.dirname(python_executable)):
result = self._install_pyside_windows(python_executable)
else:
result = self._install_pyside(python_executable)
if result:
self.log.info("Successfully installed PySide2 module to fusion.")
else:
self.log.warning("Failed to install PySide2 module to fusion.")
def _install_pyside_windows(self, python_executable):
"""Install PySide2 python module to fusion's python.
Installation requires administration rights that's why it is required
to use "pywin32" module which can execute command's and ask for
administration rights.
"""
try:
import win32api
import win32con
import win32process
import win32event
import pywintypes
from win32comext.shell.shell import ShellExecuteEx
from win32comext.shell import shellcon
except Exception:
self.log.warning("Couldn't import \"pywin32\" modules")
return False
try:
# Parameters
# - use "-m pip" as module pip to install PySide2 and argument
# "--ignore-installed" is to force install module to fusion's
# site-packages and make sure it is binary compatible
parameters = "-m pip install --ignore-installed PySide2"
# Execute command and ask for administrator's rights
process_info = ShellExecuteEx(
nShow=win32con.SW_SHOWNORMAL,
fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
lpVerb="runas",
lpFile=python_executable,
lpParameters=parameters,
lpDirectory=os.path.dirname(python_executable)
)
process_handle = process_info["hProcess"]
win32event.WaitForSingleObject(process_handle,
win32event.INFINITE)
returncode = win32process.GetExitCodeProcess(process_handle)
return returncode == 0
except pywintypes.error:
return False
def _install_pyside(self, python_executable):
"""Install PySide2 python module to fusion's python."""
try:
# Parameters
# - use "-m pip" as module pip to install PySide2 and argument
# "--ignore-installed" is to force install module to fusion's
# site-packages and make sure it is binary compatible
env = dict(os.environ)
del env['PYTHONPATH']
args = [
python_executable,
"-m",
"pip",
"install",
"--ignore-installed",
"PySide2",
]
process = subprocess.Popen(
args, stdout=subprocess.PIPE, universal_newlines=True,
env=env
)
process.communicate()
return process.returncode == 0
except PermissionError:
self.log.warning(
"Permission denied with command:"
"\"{}\".".format(" ".join(args))
)
except OSError as error:
self.log.warning(f"OS error has occurred: \"{error}\".")
except subprocess.SubprocessError:
pass
def _is_pyside_installed(self, python_executable):
"""Check if PySide2 module is in fusion's pip list."""
args = [python_executable, "-c", "from qtpy import QtWidgets"]
process = subprocess.Popen(args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
_, stderr = process.communicate()
stderr = stderr.decode()
if stderr:
return False
return True
def _windows_require_permissions(self, dirpath):
if platform.system().lower() != "windows":
return False
try:
# Attempt to create a temporary file in the folder
temp_file_path = os.path.join(dirpath, uuid.uuid4().hex)
with open(temp_file_path, "w"):
pass
os.remove(temp_file_path) # Clean up temporary file
return False
except PermissionError:
return True
except BaseException as exc:
print(("Failed to determine if root requires permissions."
"Unexpected error: {}").format(exc))
return False
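A minimal standalone sketch of a PySide2 availability check similar to what the hook performs (note the hook itself imports via `qtpy`); the interpreter path below is hypothetical:

```python
import subprocess

# Hypothetical location of the Python interpreter shipped with Fusion.
python_executable = "/opt/BlackmagicDesign/Fusion/python3/bin/python3"

# Run the interpreter and see whether the Qt binding imports cleanly;
# a non-zero return code or stderr output means PySide2 is missing.
proc = subprocess.run(
    [python_executable, "-c", "import PySide2"],
    capture_output=True,
    text=True,
)
print("PySide2 available:", proc.returncode == 0 and not proc.stderr)
```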

View file

@ -0,0 +1,64 @@
from openpype.lib import NumberDef
from openpype.hosts.fusion.api.plugin import GenericCreateSaver
from openpype.hosts.fusion.api import get_current_comp
class CreateImageSaver(GenericCreateSaver):
"""Fusion Saver to generate single image.
Created to explicitly separate single frame ('image') or
multi frame ('render') outputs.
This might be a temporary creator until 'alias' functionality is
implemented to limit creation of additional product types with similar,
but not the same, workflows.
"""
identifier = "io.openpype.creators.fusion.imagesaver"
label = "Image (saver)"
name = "image"
family = "image"
description = "Fusion Saver to generate image"
default_frame = 0
def get_detail_description(self):
return """Fusion Saver to generate single image.
This creator is expected for publishing of single frame `image` product
type.
Artist should provide frame number (integer) to specify which frame
should be published. It must be inside of global timeline frame range.
Supports local and deadline rendering.
Supports selection from predefined set of output file extensions:
- exr
- tga
- png
- tif
- jpg
Created to explicitly separate single frame ('image') or
multi frame ('render') outputs.
"""
def get_pre_create_attr_defs(self):
"""Settings for create page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
self._get_frame_int(),
self._get_image_format_enum(),
]
return attr_defs
def _get_frame_int(self):
return NumberDef(
"frame",
default=self.default_frame,
label="Frame",
tooltip="Set frame to be rendered, must be inside of global "
"timeline range"
)

View file

@ -1,187 +1,42 @@
from copy import deepcopy
import os
from openpype.lib import EnumDef
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk,
)
from openpype.lib import (
BoolDef,
EnumDef,
)
from openpype.pipeline import (
legacy_io,
Creator as NewCreator,
CreatedInstance,
Anatomy,
)
from openpype.hosts.fusion.api.plugin import GenericCreateSaver
class CreateSaver(NewCreator):
class CreateSaver(GenericCreateSaver):
"""Fusion Saver to generate image sequence of 'render' product type.
Original Saver creator targeted for the 'render' product type. It keeps the
original, not very descriptive, name because of the values in Settings.
"""
identifier = "io.openpype.creators.fusion.saver"
label = "Render (saver)"
name = "render"
family = "render"
default_variants = ["Main", "Mask"]
description = "Fusion Saver to generate image sequence"
icon = "fa5.eye"
instance_attributes = ["reviewable"]
image_format = "exr"
default_frame_range_option = "asset_db"
# TODO: This should be renamed together with Nuke so it is aligned
temp_rendering_path_template = (
"{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
)
def get_detail_description(self):
return """Fusion Saver to generate image sequence.
def create(self, subset_name, instance_data, pre_create_data):
self.pass_pre_attributes_to_instance(instance_data, pre_create_data)
This creator is expected for publishing of image sequences for 'render'
product type. (But can publish even single frame 'render'.)
instance_data.update(
{"id": "pyblish.avalon.instance", "subset": subset_name}
)
Select what should be source of render range:
- "Current asset context" - values set on Asset in DB (Ftrack)
- "From render in/out" - from node itself
- "From composition timeline" - from timeline
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp):
args = (-32768, -32768) # Magical position numbers
saver = comp.AddTool("Saver", *args)
Supports local and farm rendering.
self._update_tool_with_data(saver, data=instance_data)
# Register the CreatedInstance
instance = CreatedInstance(
family=self.family,
subset_name=subset_name,
data=instance_data,
creator=self,
)
data = instance.data_to_store()
self._imprint(saver, data)
# Insert the transient data
instance.transient_data["tool"] = saver
self._add_instance_to_context(instance)
return instance
def collect_instances(self):
comp = get_current_comp()
tools = comp.GetToolList(False, "Saver").values()
for tool in tools:
data = self.get_managed_tool_data(tool)
if not data:
continue
# Add instance
created_instance = CreatedInstance.from_existing(data, self)
# Collect transient data
created_instance.transient_data["tool"] = tool
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
new_data = created_inst.data_to_store()
tool = created_inst.transient_data["tool"]
self._update_tool_with_data(tool, new_data)
self._imprint(tool, new_data)
def remove_instances(self, instances):
for instance in instances:
# Remove the tool from the scene
tool = instance.transient_data["tool"]
if tool:
tool.Delete()
# Remove the collected CreatedInstance to remove from UI directly
self._remove_instance_from_context(instance)
def _imprint(self, tool, data):
# Save all data in a "openpype.{key}" = value data
# Instance id is the tool's name so we don't need to imprint as data
data.pop("instance_id", None)
active = data.pop("active", None)
if active is not None:
# Use active value to set the passthrough state
tool.SetAttrs({"TOOLB_PassThrough": not active})
for key, value in data.items():
tool.SetData(f"openpype.{key}", value)
def _update_tool_with_data(self, tool, data):
"""Update tool node name and output path based on subset data"""
if "subset" not in data:
return
original_subset = tool.GetData("openpype.subset")
original_format = tool.GetData(
"openpype.creator_attributes.image_format"
)
subset = data["subset"]
if (
original_subset != subset
or original_format != data["creator_attributes"]["image_format"]
):
self._configure_saver_tool(data, tool, subset)
def _configure_saver_tool(self, data, tool, subset):
formatting_data = deepcopy(data)
# get frame padding from anatomy templates
anatomy = Anatomy()
frame_padding = anatomy.templates["frame_padding"]
# get output format
ext = data["creator_attributes"]["image_format"]
# Subset change detected
workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
formatting_data.update(
{"workdir": workdir, "frame": "0" * frame_padding, "ext": ext}
)
# build file path to render
filepath = self.temp_rendering_path_template.format(**formatting_data)
comp = get_current_comp()
tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
# Rename tool
if tool.Name != subset:
print(f"Renaming {tool.Name} -> {subset}")
tool.SetAttrs({"TOOLS_Name": subset})
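For illustration, a minimal sketch of how the rendering path template above resolves. The workdir, subset and frame padding are hypothetical; the real values come from the project Anatomy and the current work session:

template = "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
formatting_data = {
    "workdir": "C:/projects/demo/work/sh010/compositing",  # hypothetical
    "subset": "renderMain",                                # hypothetical
    "frame": "0" * 4,  # frame placeholder built from the anatomy frame padding
    "ext": "exr",
}
print(template.format(**formatting_data))
# C:/projects/demo/work/sh010/compositing/renders/fusion/renderMain/renderMain.0000.exr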
def get_managed_tool_data(self, tool):
"""Return data of the tool if it matches creator identifier"""
data = tool.GetData("openpype")
if not isinstance(data, dict):
return
required = {
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier,
}
for key, value in required.items():
if key not in data or data[key] != value:
return
# Get active state from the actual tool state
attrs = tool.GetAttrs()
passthrough = attrs["TOOLB_PassThrough"]
data["active"] = not passthrough
# Override publisher's UUID generation because tool names are
# already unique in Fusion in a comp
data["instance_id"] = tool.Name
return data
def get_pre_create_attr_defs(self):
"""Settings for create page"""
@ -193,29 +48,6 @@ class CreateSaver(NewCreator):
]
return attr_defs
def get_instance_attr_defs(self):
"""Settings for publish page"""
return self.get_pre_create_attr_defs()
def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
creator_attrs = instance_data["creator_attributes"] = {}
for pass_key in pre_create_data.keys():
creator_attrs[pass_key] = pre_create_data[pass_key]
# These functions below should be moved to another file
# so it can be used by other plugins. plugin.py ?
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames",
}
if "farm_rendering" in self.instance_attributes:
rendering_targets["farm"] = "Farm rendering"
return EnumDef(
"render_target", items=rendering_targets, label="Render target"
)
def _get_frame_range_enum(self):
frame_range_options = {
"asset_db": "Current asset context",
@ -227,42 +59,5 @@ class CreateSaver(NewCreator):
"frame_range_source",
items=frame_range_options,
label="Frame range source",
default=self.default_frame_range_option
)
def _get_reviewable_bool(self):
return BoolDef(
"review",
default=("reviewable" in self.instance_attributes),
label="Review",
)
def _get_image_format_enum(self):
image_format_options = ["exr", "tga", "tif", "png", "jpg"]
return EnumDef(
"image_format",
items=image_format_options,
default=self.image_format,
label="Output Image Format",
)
def apply_settings(self, project_settings):
"""Method called on initialization of plugin to apply settings."""
# plugin settings
plugin_settings = project_settings["fusion"]["create"][
self.__class__.__name__
]
# individual attributes
self.instance_attributes = plugin_settings.get(
"instance_attributes", self.instance_attributes
)
self.default_variants = plugin_settings.get(
"default_variants", self.default_variants
)
self.temp_rendering_path_template = plugin_settings.get(
"temp_rendering_path_template", self.temp_rendering_path_template
)
self.image_format = plugin_settings.get(
"image_format", self.image_format
)

View file

@ -95,7 +95,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.2
hosts = ["fusion"]
families = ["render"]
families = ["render", "image"]
def process(self, instance):

View file

@ -57,6 +57,18 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
start_with_handle = comp_start
end_with_handle = comp_end
frame = instance.data["creator_attributes"].get("frame")
# explicitly publishing only single frame
if frame is not None:
frame = int(frame)
start = frame
end = frame
handle_start = 0
handle_end = 0
start_with_handle = frame
end_with_handle = frame
# Include start and end render frame in label
subset = instance.data["subset"]
label = (

View file

@ -50,7 +50,7 @@ class CollectFusionRender(
continue
family = inst.data["family"]
if family != "render":
if family not in ["render", "image"]:
continue
task_name = context.data["task"]
@ -59,7 +59,7 @@ class CollectFusionRender(
instance_families = inst.data.get("families", [])
subset_name = inst.data["subset"]
instance = FusionRenderInstance(
family="render",
family=family,
tool=tool,
workfileComp=comp,
families=instance_families,

View file

@ -7,7 +7,7 @@ class FusionSaveComp(pyblish.api.ContextPlugin):
label = "Save current file"
order = pyblish.api.ExtractorOrder - 0.49
hosts = ["fusion"]
families = ["render", "workfile"]
families = ["render", "image", "workfile"]
def process(self, context):

View file

@ -17,7 +17,7 @@ class ValidateBackgroundDepth(
order = pyblish.api.ValidatorOrder
label = "Validate Background Depth 32 bit"
hosts = ["fusion"]
families = ["render"]
families = ["render", "image"]
optional = True
actions = [SelectInvalidAction, publish.RepairAction]

View file

@ -9,7 +9,7 @@ class ValidateFusionCompSaved(pyblish.api.ContextPlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Comp Saved"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
def process(self, context):

View file

@ -15,7 +15,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Create Folder Checked"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
actions = [RepairAction, SelectInvalidAction]

View file

@ -17,7 +17,7 @@ class ValidateFilenameHasExtension(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Filename Has Extension"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
actions = [SelectInvalidAction]

View file

@ -0,0 +1,27 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateImageFrame(pyblish.api.InstancePlugin):
"""Validates that `image` product type contains only single frame."""
order = pyblish.api.ValidatorOrder
label = "Validate Image Frame"
families = ["image"]
hosts = ["fusion"]
def process(self, instance):
render_start = instance.data["frameStartHandle"]
render_end = instance.data["frameEndHandle"]
too_many_frames = (isinstance(instance.data["expectedFiles"], list)
and len(instance.data["expectedFiles"]) > 1)
if render_end - render_start > 0 or too_many_frames:
desc = ("Trying to render multiple frames. 'image' product type "
"is meant for single frame. Please use 'render' creator.")
raise PublishValidationError(
title="Frame range outside of comp range",
message=desc,
description=desc
)

View file

@ -7,8 +7,8 @@ class ValidateInstanceFrameRange(pyblish.api.InstancePlugin):
"""Validate instance frame range is within comp's global render range."""
order = pyblish.api.ValidatorOrder
label = "Validate Filename Has Extension"
families = ["render"]
label = "Validate Frame Range"
families = ["render", "image"]
hosts = ["fusion"]
def process(self, instance):

View file

@ -13,7 +13,7 @@ class ValidateSaverHasInput(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Saver Has Input"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
actions = [SelectInvalidAction]

View file

@ -9,7 +9,7 @@ class ValidateSaverPassthrough(pyblish.api.ContextPlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Saver Passthrough"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
actions = [SelectInvalidAction]

View file

@ -64,7 +64,7 @@ class ValidateSaverResolution(
order = pyblish.api.ValidatorOrder
label = "Validate Asset Resolution"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
optional = True
actions = [SelectInvalidAction]

View file

@ -11,7 +11,7 @@ class ValidateUniqueSubsets(pyblish.api.ContextPlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Unique Subsets"
families = ["render"]
families = ["render", "image"]
hosts = ["fusion"]
actions = [SelectInvalidAction]

View file

@ -9,6 +9,8 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
label = "Collect Clip Effects Instances"
families = ["clip"]
effect_categories = []
def process(self, instance):
family = "effect"
effects = {}
@ -70,29 +72,62 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
subset_split.insert(0, "effect")
name = "".join(subset_split)
effect_categories = {
x["name"]: x["effect_classes"] for x in self.effect_categories
}
# create new instance and inherit data
data = {}
for key, value in instance.data.items():
if "clipEffectItems" in key:
category_by_effect = {"": ""}
for key, values in effect_categories.items():
for cls in values:
category_by_effect[cls] = key
effects_categorized = {k: {} for k in effect_categories.keys()}
effects_categorized[""] = {}
for key, value in effects.items():
if key == "assignTo":
continue
data[key] = value
# change names
data["subset"] = name
data["family"] = family
data["families"] = [family]
data["name"] = data["subset"] + "_" + data["asset"]
data["label"] = "{} - {}".format(
data['asset'], data["subset"]
)
data["effects"] = effects
# Some classes can have a number in them. Like Text2.
found_cls = ""
for cls in category_by_effect.keys():
if cls in value["class"]:
found_cls = cls
# create new instance
_instance = instance.context.create_instance(**data)
self.log.info("Created instance `{}`".format(_instance))
self.log.debug("instance.data `{}`".format(_instance.data))
effects_categorized[category_by_effect[found_cls]][key] = value
categories = list(effects_categorized.keys())
for category in categories:
if not effects_categorized[category]:
effects_categorized.pop(category)
continue
effects_categorized[category]["assignTo"] = effects["assignTo"]
for category, effects in effects_categorized.items():
name = "".join(subset_split)
name += category.capitalize()
# create new instance and inherit data
data = {}
for key, value in instance.data.items():
if "clipEffectItems" in key:
continue
data[key] = value
# change names
data["subset"] = name
data["family"] = family
data["families"] = [family]
data["name"] = data["subset"] + "_" + data["asset"]
data["label"] = "{} - {}".format(
data['asset'], data["subset"]
)
data["effects"] = effects
# create new instance
_instance = instance.context.create_instance(**data)
self.log.info("Created instance `{}`".format(_instance))
self.log.debug("instance.data `{}`".format(_instance.data))
def test_overlap(self, effect_t_in, effect_t_out):
covering_exp = bool(

View file

@ -121,62 +121,6 @@ def get_id_required_nodes():
return list(nodes)
def get_export_parameter(node):
"""Return the export output parameter of the given node
Example:
root = hou.node("/obj")
my_alembic_node = root.createNode("alembic")
get_output_parameter(my_alembic_node)
# Result: "output"
Args:
node(hou.Node): node instance
Returns:
hou.Parm
"""
node_type = node.type().description()
# Ensures the proper Take is selected for each ROP to retrieve the correct
# ifd
try:
rop_take = hou.takes.findTake(node.parm("take").eval())
if rop_take is not None:
hou.takes.setCurrentTake(rop_take)
except AttributeError:
# hou object doesn't always have the 'takes' attribute
pass
if node_type == "Mantra" and node.parm("soho_outputmode").eval():
return node.parm("soho_diskfile")
elif node_type == "Alfred":
return node.parm("alf_diskfile")
elif (node_type == "RenderMan" or node_type == "RenderMan RIS"):
pre_ris22 = node.parm("rib_outputmode") and \
node.parm("rib_outputmode").eval()
ris22 = node.parm("diskfile") and node.parm("diskfile").eval()
if pre_ris22 or ris22:
return node.parm("soho_diskfile")
elif node_type == "Redshift" and node.parm("RS_archive_enable").eval():
return node.parm("RS_archive_file")
elif node_type == "Wedge" and node.parm("driver").eval():
return get_export_parameter(node.node(node.parm("driver").eval()))
elif node_type == "Arnold":
return node.parm("ar_ass_file")
elif node_type == "Alembic" and node.parm("use_sop_path").eval():
return node.parm("sop_path")
elif node_type == "Shotgun Mantra" and node.parm("soho_outputmode").eval():
return node.parm("sgtk_soho_diskfile")
elif node_type == "Shotgun Alembic" and node.parm("use_sop_path").eval():
return node.parm("sop_path")
elif node.type().nameWithCategory() == "Driver/vray_renderer":
return node.parm("render_export_filepath")
raise TypeError("Node type '%s' not supported" % node_type)
def get_output_parameter(node):
"""Return the render output parameter of the given node
@ -184,41 +128,59 @@ def get_output_parameter(node):
root = hou.node("/obj")
my_alembic_node = root.createNode("alembic")
get_output_parameter(my_alembic_node)
# Result: "output"
>>> "filename"
Notes:
I'm using node.type().name() to stay on par with the creators,
because the return value of `node.type().name()` is the
same string value used in creators
e.g. instance_data.update({"node_type": "alembic"})
Rop nodes in different network categories have
the same output parameter.
So, I took that into consideration as a hint for
future development.
Args:
node(hou.Node): node instance
Returns:
hou.Parm
"""
node_type = node.type().description()
category = node.type().category().name()
node_type = node.type().name()
# Figure out which type of node is being rendered
if node_type == "Geometry" or node_type == "Filmbox FBX" or \
(node_type == "ROP Output Driver" and category == "Sop"):
return node.parm("sopoutput")
elif node_type == "Composite":
return node.parm("copoutput")
elif node_type == "opengl":
return node.parm("picture")
if node_type in {"alembic", "rop_alembic"}:
return node.parm("filename")
elif node_type == "arnold":
if node.evalParm("ar_ass_export_enable"):
if node_type.evalParm("ar_ass_export_enable"):
return node.parm("ar_ass_file")
elif node_type == "Redshift_Proxy_Output":
return node.parm("RS_archive_file")
elif node_type == "ifd":
return node.parm("ar_picture")
elif node_type in {
"geometry",
"rop_geometry",
"filmboxfbx",
"rop_fbx"
}:
return node.parm("sopoutput")
elif node_type == "comp":
return node.parm("copoutput")
elif node_type in {"karma", "opengl"}:
return node.parm("picture")
elif node_type == "ifd": # Mantra
if node.evalParm("soho_outputmode"):
return node.parm("soho_diskfile")
elif node_type == "Octane":
return node.parm("HO_img_fileName")
elif node_type == "Fetch":
inner_node = node.node(node.parm("source").eval())
if inner_node:
return get_output_parameter(inner_node)
elif node.type().nameWithCategory() == "Driver/vray_renderer":
return node.parm("vm_picture")
elif node_type == "Redshift_Proxy_Output":
return node.parm("RS_archive_file")
elif node_type == "Redshift_ROP":
return node.parm("RS_outputFileNamePrefix")
elif node_type in {"usd", "usd_rop", "usdexport"}:
return node.parm("lopoutput")
elif node_type in {"usdrender", "usdrender_rop"}:
return node.parm("outputimage")
elif node_type == "vray_renderer":
return node.parm("SettingsOutput_img_file_path")
raise TypeError("Node type '%s' not supported" % node_type)

View file

@ -15,6 +15,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
icon = "magic"
ext = "exr"
# Default to split export and render jobs
split_render = True
def create(self, subset_name, instance_data, pre_create_data):
instance_data.pop("active", None)
@ -36,12 +39,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
# Also create the linked Redshift IPR Rop
try:
ipr_rop = instance_node.parent().createNode(
"Redshift_IPR", node_name=basename + "_IPR"
"Redshift_IPR", node_name=f"{basename}_IPR"
)
except hou.OperationFailed:
except hou.OperationFailed as e:
raise plugin.OpenPypeCreatorError(
("Cannot create Redshift node. Is Redshift "
"installed and enabled?"))
(
"Cannot create Redshift node. Is Redshift "
"installed and enabled?"
)
) from e
# Move it to directly under the Redshift ROP
ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1))
@ -74,8 +80,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
for node in self.selected_nodes:
if node.type().name() == "cam":
camera = node.path()
parms.update({
"RS_renderCamera": camera or ""})
parms["RS_renderCamera"] = camera or ""
export_dir = hou.text.expandString("$HIP/pyblish/rs/")
rs_filepath = f"{export_dir}{subset_name}/{subset_name}.$F4.rs"
parms["RS_archive_file"] = rs_filepath
if pre_create_data.get("split_render", self.split_render):
parms["RS_archive_enable"] = 1
instance_node.setParms(parms)
# Lock some Avalon attributes
@ -102,6 +115,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
BoolDef("farm",
label="Submitting to Farm",
default=True),
BoolDef("split_render",
label="Split export and render jobs",
default=self.split_render),
EnumDef("image_format",
image_format_enum,
default=self.ext,

View file

@ -16,8 +16,9 @@ class FbxLoader(load.LoaderPlugin):
order = -10
families = ["staticMesh", "fbx"]
representations = ["fbx"]
families = ["*"]
representations = ["*"]
extensions = {"fbx"}
def load(self, context, name=None, namespace=None, data=None):

View file

@ -0,0 +1,112 @@
import os
import re
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.houdini.api import pipeline
from openpype.pipeline.load import LoadError
import hou
class RedshiftProxyLoader(load.LoaderPlugin):
"""Load Redshift Proxy"""
families = ["redshiftproxy"]
label = "Load Redshift Proxy"
representations = ["rs"]
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
# Get the root node
obj = hou.node("/obj")
# Define node name
namespace = namespace if namespace else context["asset"]["name"]
node_name = "{}_{}".format(namespace, name) if namespace else name
# Create a new geo node
container = obj.createNode("geo", node_name=node_name)
# Check whether the Redshift parameters exist - if not, then likely
# redshift is not set up or initialized correctly
if not container.parm("RS_objprop_proxy_enable"):
container.destroy()
raise LoadError("Unable to initialize geo node with Redshift "
"attributes. Make sure you have the Redshift "
"plug-in set up correctly for Houdini.")
# Enable by default
container.setParms({
"RS_objprop_proxy_enable": True,
"RS_objprop_proxy_file": self.format_path(
self.filepath_from_context(context),
context["representation"])
})
# Remove the file node, it only loads static meshes
# Houdini 17 has removed the file node from the geo node
file_node = container.node("file1")
if file_node:
file_node.destroy()
# Add this stub node inside so it previews ok
proxy_sop = container.createNode("redshift_proxySOP",
node_name=node_name)
proxy_sop.setDisplayFlag(True)
nodes = [container, proxy_sop]
self[:] = nodes
return pipeline.containerise(
node_name,
namespace,
nodes,
context,
self.__class__.__name__,
suffix="",
)
def update(self, container, representation):
# Update the file path
file_path = get_representation_path(representation)
node = container["node"]
node.setParms({
"RS_objprop_proxy_file": self.format_path(
file_path, representation)
})
# Update attribute
node.setParms({"representation": str(representation["_id"])})
def remove(self, container):
node = container["node"]
node.destroy()
@staticmethod
def format_path(path, representation):
"""Format file path correctly for single redshift proxy
or redshift proxy sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
if is_sequence:
filename = re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", path)
filename = os.path.join(path, filename)
else:
filename = path
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
return filename
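For reference, a small sketch (hypothetical path) of what the sequence substitution above produces:

import re

# hypothetical published sequence member
path = "/projects/demo/publish/redshiftproxyMain/v001/redshiftproxyMain.0001.rs"
sequence = re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", path)
print(sequence)
# /projects/demo/publish/redshiftproxyMain/v001/redshiftproxyMain.$F4.rs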

View file

@ -41,11 +41,11 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):
render_products = []
# Store whether we are splitting the render job (export + render)
export_job = bool(rop.parm("ar_ass_export_enable").eval())
instance.data["exportJob"] = export_job
split_render = bool(rop.parm("ar_ass_export_enable").eval())
instance.data["splitRender"] = split_render
export_prefix = None
export_products = []
if export_job:
if split_render:
export_prefix = evalParmNoFrame(
rop, "ar_ass_file", pad_character="0"
)

View file

@ -45,11 +45,11 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):
render_products = []
# Store whether we are splitting the render job (export + render)
export_job = bool(rop.parm("soho_outputmode").eval())
instance.data["exportJob"] = export_job
split_render = bool(rop.parm("soho_outputmode").eval())
instance.data["splitRender"] = split_render
export_prefix = None
export_products = []
if export_job:
if split_render:
export_prefix = evalParmNoFrame(
rop, "soho_diskfile", pad_character="0"
)

View file

@ -31,7 +31,6 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
families = ["redshift_rop"]
def process(self, instance):
rop = hou.node(instance.data.get("instance_node"))
# Collect chunkSize
@ -43,13 +42,29 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
default_prefix = evalParmNoFrame(rop, "RS_outputFileNamePrefix")
beauty_suffix = rop.evalParm("RS_outputBeautyAOVSuffix")
render_products = []
# Store whether we are splitting the render job (export + render)
split_render = bool(rop.parm("RS_archive_enable").eval())
instance.data["splitRender"] = split_render
export_products = []
if split_render:
export_prefix = evalParmNoFrame(
rop, "RS_archive_file", pad_character="0"
)
beauty_export_product = self.get_render_product_name(
prefix=export_prefix,
suffix=None)
export_products.append(beauty_export_product)
self.log.debug(
"Found export product: {}".format(beauty_export_product)
)
instance.data["ifdFile"] = beauty_export_product
instance.data["exportFiles"] = list(export_products)
# Default beauty AOV
beauty_product = self.get_render_product_name(
prefix=default_prefix, suffix=beauty_suffix
)
render_products.append(beauty_product)
render_products = [beauty_product]
files_by_aov = {
"_": self.generate_expected_files(instance,
beauty_product)}
@ -59,11 +74,11 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
i = index + 1
# Skip disabled AOVs
if not rop.evalParm("RS_aovEnable_%s" % i):
if not rop.evalParm(f"RS_aovEnable_{i}"):
continue
aov_suffix = rop.evalParm("RS_aovSuffix_%s" % i)
aov_prefix = evalParmNoFrame(rop, "RS_aovCustomPrefix_%s" % i)
aov_suffix = rop.evalParm(f"RS_aovSuffix_{i}")
aov_prefix = evalParmNoFrame(rop, f"RS_aovCustomPrefix_{i}")
if not aov_prefix:
aov_prefix = default_prefix
@ -85,7 +100,7 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
instance.data["attachTo"] = [] # stub required data
if "expectedFiles" not in instance.data:
instance.data["expectedFiles"] = list()
instance.data["expectedFiles"] = []
instance.data["expectedFiles"].append(files_by_aov)
# update the colorspace data

View file

@ -46,11 +46,11 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):
# TODO: add render elements if render element
# Store whether we are splitting the render job in an export + render
export_job = rop.parm("render_export_mode").eval() == "2"
instance.data["exportJob"] = export_job
split_render = rop.parm("render_export_mode").eval() == "2"
instance.data["splitRender"] = split_render
export_prefix = None
export_products = []
if export_job:
if split_render:
export_prefix = evalParmNoFrame(
rop, "render_export_filepath", pad_character="0"
)

View file

@ -294,6 +294,37 @@ def reset_frame_range(fps: bool = True):
frame_range["frameStartHandle"], frame_range["frameEndHandle"])
def reset_unit_scale():
"""Apply the unit scale setting to 3dsMax
"""
project_name = get_current_project_name()
settings = get_project_settings(project_name).get("max")
scene_scale = settings.get("unit_scale_settings",
{}).get("scene_unit_scale")
if scene_scale:
rt.units.DisplayType = rt.Name("Metric")
rt.units.MetricType = rt.Name(scene_scale)
else:
rt.units.DisplayType = rt.Name("Generic")
def convert_unit_scale():
"""Convert system unit scale in 3dsMax
for fbx export
Returns:
str: unit scale
"""
unit_scale_dict = {
"millimeters": "mm",
"centimeters": "cm",
"meters": "m",
"kilometers": "km"
}
current_unit_scale = rt.Execute("units.MetricType as string")
return unit_scale_dict[current_unit_scale]
def set_context_setting():
"""Apply the project settings from the project definition
@ -310,6 +341,7 @@ def set_context_setting():
reset_scene_resolution()
reset_frame_range()
reset_colorspace()
reset_unit_scale()
def get_max_version():

View file

@ -37,6 +37,95 @@ class RenderProducts(object):
)
}
def get_multiple_beauty(self, outputs, cameras):
beauty_output_frames = dict()
for output, camera in zip(outputs, cameras):
filename, ext = os.path.splitext(output)
filename = filename.replace(".", "")
ext = ext.replace(".", "")
start_frame = int(rt.rendStart)
end_frame = int(rt.rendEnd) + 1
new_beauty = self.get_expected_beauty(
filename, start_frame, end_frame, ext
)
beauty_output = ({
f"{camera}_beauty": new_beauty
})
beauty_output_frames.update(beauty_output)
return beauty_output_frames
def get_multiple_aovs(self, outputs, cameras):
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
aovs_frames = {}
for output, camera in zip(outputs, cameras):
filename, ext = os.path.splitext(output)
filename = filename.replace(".", "")
ext = ext.replace(".", "")
start_frame = int(rt.rendStart)
end_frame = int(rt.rendEnd) + 1
if renderer in [
"ART_Renderer",
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3",
"Default_Scanline_Renderer",
"Quicksilver_Hardware_Renderer",
]:
render_name = self.get_render_elements_name()
if render_name:
for name in render_name:
aovs_frames.update({
f"{camera}_{name}": self.get_expected_aovs(
filename, name, start_frame,
end_frame, ext)
})
elif renderer == "Redshift_Renderer":
render_name = self.get_render_elements_name()
if render_name:
rs_aov_files = rt.Execute("renderers.current.separateAovFiles") # noqa
# this doesn't work, always returns False
# rs_AovFiles = rt.RedShift_Renderer().separateAovFiles
if ext == "exr" and not rs_aov_files:
for name in render_name:
if name == "RsCryptomatte":
aovs_frames.update({
f"{camera}_{name}": self.get_expected_aovs(
filename, name, start_frame,
end_frame, ext)
})
else:
for name in render_name:
aovs_frames.update({
f"{camera}_{name}": self.get_expected_aovs(
filename, name, start_frame,
end_frame, ext)
})
elif renderer == "Arnold":
render_name = self.get_arnold_product_name()
if render_name:
for name in render_name:
aovs_frames.update({
f"{camera}_{name}": self.get_expected_arnold_product( # noqa
filename, name, start_frame,
end_frame, ext)
})
elif renderer in [
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3"
]:
if ext != "exr":
render_name = self.get_render_elements_name()
if render_name:
for name in render_name:
aovs_frames.update({
f"{camera}_{name}": self.get_expected_aovs(
filename, name, start_frame,
end_frame, ext)
})
return aovs_frames
def get_aovs(self, container):
render_dir = os.path.dirname(rt.rendOutputFilename)
@ -63,7 +152,7 @@ class RenderProducts(object):
if render_name:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
name: self.get_expected_aovs(
output_file, name, start_frame,
end_frame, img_fmt)
})
@ -77,14 +166,14 @@ class RenderProducts(object):
for name in render_name:
if name == "RsCryptomatte":
render_dict.update({
name: self.get_expected_render_elements(
name: self.get_expected_aovs(
output_file, name, start_frame,
end_frame, img_fmt)
})
else:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
name: self.get_expected_aovs(
output_file, name, start_frame,
end_frame, img_fmt)
})
@ -95,7 +184,8 @@ class RenderProducts(object):
for name in render_name:
render_dict.update({
name: self.get_expected_arnold_product(
output_file, name, start_frame, end_frame, img_fmt)
output_file, name, start_frame,
end_frame, img_fmt)
})
elif renderer in [
"V_Ray_6_Hotfix_3",
@ -106,7 +196,7 @@ class RenderProducts(object):
if render_name:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
name: self.get_expected_aovs(
output_file, name, start_frame,
end_frame, img_fmt) # noqa
})
@ -169,8 +259,8 @@ class RenderProducts(object):
return render_name
def get_expected_render_elements(self, folder, name,
start_frame, end_frame, fmt):
def get_expected_aovs(self, folder, name,
start_frame, end_frame, fmt):
"""Get all the expected render element output files. """
render_elements = []
for f in range(start_frame, end_frame):

View file

@ -74,13 +74,13 @@ class RenderSettings(object):
output = os.path.join(output_dir, container)
try:
aov_separator = self._aov_chars[(
self._project_settings["maya"]
self._project_settings["max"]
["RenderSettings"]
["aov_separator"]
)]
except KeyError:
aov_separator = "."
output_filename = "{0}..{1}".format(output, img_fmt)
output_filename = f"{output}..{img_fmt}"
output_filename = output_filename.replace("{aov_separator}",
aov_separator)
rt.rendOutputFilename = output_filename
@ -146,13 +146,13 @@ class RenderSettings(object):
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
target, renderpass = str(renderlayer_name).split(":")
aov_name = "{0}_{1}..{2}".format(dir, renderpass, ext)
aov_name = f"{dir}_{renderpass}..{ext}"
render_elem.SetRenderElementFileName(i, aov_name)
def get_render_output(self, container, output_dir):
output = os.path.join(output_dir, container)
img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
output_filename = "{0}..{1}".format(output, img_fmt)
output_filename = f"{output}..{img_fmt}"
return output_filename
def get_render_element(self):
@ -167,3 +167,61 @@ class RenderSettings(object):
orig_render_elem.append(render_element)
return orig_render_elem
def get_batch_render_elements(self, container,
output_dir, camera):
render_element_list = list()
output = os.path.join(output_dir, container)
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num < 0:
return
img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
target, renderpass = str(renderlayer_name).split(":")
aov_name = f"{output}_{camera}_{renderpass}..{img_fmt}"
render_element_list.append(aov_name)
return render_element_list
def get_batch_render_output(self, camera):
target_layer_no = rt.batchRenderMgr.FindView(camera)
target_layer = rt.batchRenderMgr.GetView(target_layer_no)
return target_layer.outputFilename
def batch_render_elements(self, camera):
target_layer_no = rt.batchRenderMgr.FindView(camera)
target_layer = rt.batchRenderMgr.GetView(target_layer_no)
outputfilename = target_layer.outputFilename
directory = os.path.dirname(outputfilename)
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num < 0:
return
ext = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
target, renderpass = str(renderlayer_name).split(":")
aov_name = f"{directory}_{camera}_{renderpass}..{ext}"
render_elem.SetRenderElementFileName(i, aov_name)
def batch_render_layer(self, container,
output_dir, cameras):
outputs = list()
output = os.path.join(output_dir, container)
img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
for cam in cameras:
camera = rt.getNodeByName(cam)
layer_no = rt.batchRenderMgr.FindView(cam)
renderlayer = None
if layer_no == 0:
renderlayer = rt.batchRenderMgr.CreateView(camera)
else:
renderlayer = rt.batchRenderMgr.GetView(layer_no)
# use camera name as renderlayer name
renderlayer.name = cam
renderlayer.outputFilename = f"{output}_{cam}..{img_fmt}"
outputs.append(renderlayer.outputFilename)
return outputs

View file

@ -124,6 +124,10 @@ class OpenPypeMenu(object):
colorspace_action.triggered.connect(self.colorspace_callback)
openpype_menu.addAction(colorspace_action)
unit_scale_action = QtWidgets.QAction("Set Unit Scale", openpype_menu)
unit_scale_action.triggered.connect(self.unit_scale_callback)
openpype_menu.addAction(unit_scale_action)
return openpype_menu
def load_callback(self):
@ -157,3 +161,7 @@ class OpenPypeMenu(object):
def colorspace_callback(self):
"""Callback to reset colorspace"""
return lib.reset_colorspace()
def unit_scale_callback(self):
"""Callback to reset unit scale"""
return lib.reset_unit_scale()

View file

@ -2,6 +2,7 @@
"""Creator plugin for creating camera."""
import os
from openpype.hosts.max.api import plugin
from openpype.lib import BoolDef
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
@ -17,15 +18,33 @@ class CreateRender(plugin.MaxCreator):
file = rt.maxFileName
filename, _ = os.path.splitext(file)
instance_data["AssetName"] = filename
instance_data["multiCamera"] = pre_create_data.get("multi_cam")
num_of_renderlayer = rt.batchRenderMgr.numViews
if num_of_renderlayer > 0:
rt.batchRenderMgr.DeleteView(num_of_renderlayer)
instance = super(CreateRender, self).create(
subset_name,
instance_data,
pre_create_data)
container_name = instance.data.get("instance_node")
sel_obj = self.selected_nodes
if sel_obj:
# set viewport camera for rendering(mandatory for deadline)
RenderSettings(self.project_settings).set_render_camera(sel_obj)
# set output paths for rendering(mandatory for deadline)
RenderSettings().render_output(container_name)
# TODO: create multiple camera options
if self.selected_nodes:
selected_nodes_name = []
for sel in self.selected_nodes:
name = sel.name
selected_nodes_name.append(name)
RenderSettings().batch_render_layer(
container_name, filename,
selected_nodes_name)
def get_pre_create_attr_defs(self):
attrs = super(CreateRender, self).get_pre_create_attr_defs()
return attrs + [
BoolDef("multi_cam",
label="Multiple Cameras Submission",
default=False),
]

View file

@ -4,8 +4,10 @@ import os
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline.publish import KnownPublishError
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
@ -23,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
file = rt.maxFileName
current_file = os.path.join(folder, file)
filepath = current_file.replace("\\", "/")
context.data['currentFile'] = current_file
files_by_aov = RenderProducts().get_beauty(instance.name)
@ -39,6 +40,28 @@ class CollectRender(pyblish.api.InstancePlugin):
instance.data["cameras"] = [camera.name] if camera else None # noqa
if instance.data.get("multiCamera"):
cameras = instance.data.get("members")
if not cameras:
raise KnownPublishError("There should be at least"
" one renderable camera in container")
sel_cam = [
c.name for c in cameras
if rt.classOf(c) in rt.Camera.classes]
container_name = instance.data.get("instance_node")
render_dir = os.path.dirname(rt.rendOutputFilename)
outputs = RenderSettings().batch_render_layer(
container_name, render_dir, sel_cam
)
instance.data["cameras"] = sel_cam
files_by_aov = RenderProducts().get_multiple_beauty(
outputs, sel_cam)
aovs = RenderProducts().get_multiple_aovs(
outputs, sel_cam)
files_by_aov.update(aovs)
if "expectedFiles" not in instance.data:
instance.data["expectedFiles"] = list()
instance.data["files"] = list()

View file

@ -1,55 +0,0 @@
import os
import pyblish.api
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import OptionalPyblishPluginMixin, publish
class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
"""Extract Camera with FbxExporter."""
order = pyblish.api.ExtractorOrder - 0.2
label = "Extract Fbx Camera"
hosts = ["max"]
families = ["camera"]
optional = True
def process(self, instance):
if not self.is_active(instance.data):
return
stagingdir = self.staging_dir(instance)
filename = "{name}.fbx".format(**instance.data)
filepath = os.path.join(stagingdir, filename)
rt.FBXExporterSetParam("Animation", True)
rt.FBXExporterSetParam("Cameras", True)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
with maintained_selection():
# select and export
node_list = instance.data["members"]
rt.Select(node_list)
rt.ExportFile(
filepath,
rt.Name("noPrompt"),
selectedOnly=True,
using=rt.FBXEXP,
)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
"name": "fbx",
"ext": "fbx",
"files": filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info(f"Extracted instance '{instance.name}' to: {filepath}")

View file

@ -3,6 +3,7 @@ import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
from openpype.hosts.max.api.lib import convert_unit_scale
class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
@ -23,14 +24,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
stagingdir = self.staging_dir(instance)
filename = "{name}.fbx".format(**instance.data)
filepath = os.path.join(stagingdir, filename)
rt.FBXExporterSetParam("Animation", False)
rt.FBXExporterSetParam("Cameras", False)
rt.FBXExporterSetParam("Lights", False)
rt.FBXExporterSetParam("PointCache", False)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
self._set_fbx_attributes()
with maintained_selection():
# select and export
@ -56,3 +50,34 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
self.log.info(
"Extracted instance '%s' to: %s" % (instance.name, filepath)
)
def _set_fbx_attributes(self):
unit_scale = convert_unit_scale()
rt.FBXExporterSetParam("Animation", False)
rt.FBXExporterSetParam("Cameras", False)
rt.FBXExporterSetParam("Lights", False)
rt.FBXExporterSetParam("PointCache", False)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
if unit_scale:
rt.FBXExporterSetParam("ConvertUnit", unit_scale)
class ExtractCameraFbx(ExtractModelFbx):
"""Extract Camera with FbxExporter."""
order = pyblish.api.ExtractorOrder - 0.2
label = "Extract Fbx Camera"
families = ["camera"]
optional = True
def _set_fbx_attributes(self):
unit_scale = convert_unit_scale()
rt.FBXExporterSetParam("Animation", True)
rt.FBXExporterSetParam("Cameras", True)
rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
rt.FBXExporterSetParam("UpAxis", "Y")
rt.FBXExporterSetParam("Preserveinstances", True)
if unit_scale:
rt.FBXExporterSetParam("ConvertUnit", unit_scale)

View file

@ -0,0 +1,105 @@
import pyblish.api
import os
import sys
import tempfile
from pymxs import runtime as rt
from openpype.lib import run_subprocess
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
class SaveScenesForCamera(pyblish.api.InstancePlugin):
"""Save scene files for multiple cameras without
editing the original scene before deadline submission
"""
label = "Save Scene files for cameras"
order = pyblish.api.ExtractorOrder - 0.48
hosts = ["max"]
families = ["maxrender"]
def process(self, instance):
if not instance.data.get("multiCamera"):
self.log.debug(
"Multi Camera disabled. "
"Skipping to save scene files for cameras")
return
current_folder = rt.maxFilePath
current_filename = rt.maxFileName
current_filepath = os.path.join(current_folder, current_filename)
camera_scene_files = []
scripts = []
filename, ext = os.path.splitext(current_filename)
fmt = RenderProducts().image_format()
cameras = instance.data.get("cameras")
if not cameras:
return
new_folder = f"{current_folder}_{filename}"
os.makedirs(new_folder, exist_ok=True)
for camera in cameras:
new_output = RenderSettings().get_batch_render_output(camera) # noqa
new_output = new_output.replace("\\", "/")
new_filename = f"{filename}_{camera}{ext}"
new_filepath = os.path.join(new_folder, new_filename)
new_filepath = new_filepath.replace("\\", "/")
camera_scene_files.append(new_filepath)
RenderSettings().batch_render_elements(camera)
rt.rendOutputFilename = new_output
rt.saveMaxFile(current_filepath)
script = ("""
from pymxs import runtime as rt
import os
filename = "{filename}"
new_filepath = "{new_filepath}"
new_output = "{new_output}"
camera = "{camera}"
rt.rendOutputFilename = new_output
directory = os.path.dirname(rt.rendOutputFilename)
directory = os.path.join(directory, filename)
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num > 0:
ext = "{ext}"
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
target, renderpass = str(renderlayer_name).split(":")
aov_name = f"{{directory}}_{camera}_{{renderpass}}..{ext}"
render_elem.SetRenderElementFileName(i, aov_name)
rt.saveMaxFile(new_filepath)
""").format(filename=instance.name,
new_filepath=new_filepath,
new_output=new_output,
camera=camera,
ext=fmt)
scripts.append(script)
maxbatch_exe = os.path.join(
os.path.dirname(sys.executable), "3dsmaxbatch")
maxbatch_exe = maxbatch_exe.replace("\\", "/")
if sys.platform == "windows":
maxbatch_exe += ".exe"
maxbatch_exe = os.path.normpath(maxbatch_exe)
with tempfile.TemporaryDirectory() as tmp_dir_name:
tmp_script_path = os.path.join(
tmp_dir_name, "extract_scene_files.py")
self.log.info("Using script file: {}".format(tmp_script_path))
with open(tmp_script_path, "wt") as tmp:
for script in scripts:
tmp.write(script + "\n")
try:
current_filepath = current_filepath.replace("\\", "/")
tmp_script_path = tmp_script_path.replace("\\", "/")
run_subprocess([maxbatch_exe, tmp_script_path,
"-sceneFile", current_filepath])
except RuntimeError:
self.log.debug("Checking the scene files existing")
for camera_scene in camera_scene_files:
if not os.path.exists(camera_scene):
self.log.error("Camera scene files not existed yet!")
raise RuntimeError("MaxBatch.exe doesn't run as expected")
self.log.debug(f"Found Camera scene:{camera_scene}")

View file

@ -0,0 +1,139 @@
"""Backwards compatible implementation of ExitStack for Python 2.
ExitStack contextmanager was added in Python 3.3.
As long as we support Python 2 hosts we can use this backwards
compatible implementation to support both Python 2 and Python 3.
Instead of using ExitStack from contextlib, use it from this module:
>>> from openpype.hosts.maya.api.exitstack import ExitStack
It will provide the appropriate ExitStack implementation for the current
running Python version.
"""
# TODO: Remove the entire script once dropping Python 2 support.
import contextlib
if getattr(contextlib, "nested", None):
from contextlib import ExitStack # noqa
else:
import sys
from collections import deque
class ExitStack(object):
"""Context manager for dynamic management of a stack of exit callbacks
For example:
with ExitStack() as stack:
files = [stack.enter_context(open(fname))
for fname in filenames]
# All opened files will automatically be closed at the end of
# the with statement, even if attempts to open files later
# in the list raise an exception
"""
def __init__(self):
self._exit_callbacks = deque()
def pop_all(self):
"""Preserve the context stack by transferring
it to a new instance"""
new_stack = type(self)()
new_stack._exit_callbacks = self._exit_callbacks
self._exit_callbacks = deque()
return new_stack
def _push_cm_exit(self, cm, cm_exit):
"""Helper to correctly register callbacks
to __exit__ methods"""
def _exit_wrapper(*exc_details):
return cm_exit(cm, *exc_details)
_exit_wrapper.__self__ = cm
self.push(_exit_wrapper)
def push(self, exit):
"""Registers a callback with the standard __exit__ method signature
Can suppress exceptions the same way __exit__ methods can.
Also accepts any object with an __exit__ method (registering a call
to the method instead of the object itself)
"""
# We use an unbound method rather than a bound method to follow
# the standard lookup behaviour for special methods
_cb_type = type(exit)
try:
exit_method = _cb_type.__exit__
except AttributeError:
# Not a context manager, so assume its a callable
self._exit_callbacks.append(exit)
else:
self._push_cm_exit(exit, exit_method)
return exit # Allow use as a decorator
def callback(self, callback, *args, **kwds):
"""Registers an arbitrary callback and arguments.
Cannot suppress exceptions.
"""
def _exit_wrapper(exc_type, exc, tb):
callback(*args, **kwds)
# We changed the signature, so using @wraps is not appropriate, but
# setting __wrapped__ may still help with introspection
_exit_wrapper.__wrapped__ = callback
self.push(_exit_wrapper)
return callback # Allow use as a decorator
def enter_context(self, cm):
"""Enters the supplied context manager
If successful, also pushes its __exit__ method as a callback and
returns the result of the __enter__ method.
"""
# We look up the special methods on the type to
# match the with statement
_cm_type = type(cm)
_exit = _cm_type.__exit__
result = _cm_type.__enter__(cm)
self._push_cm_exit(cm, _exit)
return result
def close(self):
"""Immediately unwind the context stack"""
self.__exit__(None, None, None)
def __enter__(self):
return self
def __exit__(self, *exc_details):
# We manipulate the exception state so it behaves as though
# we were actually nesting multiple with statements
frame_exc = sys.exc_info()[1]
def _fix_exception_context(new_exc, old_exc):
while 1:
exc_context = new_exc.__context__
if exc_context in (None, frame_exc):
break
new_exc = exc_context
new_exc.__context__ = old_exc
# Callbacks are invoked in LIFO order to match the behaviour of
# nested context managers
suppressed_exc = False
while self._exit_callbacks:
cb = self._exit_callbacks.pop()
try:
if cb(*exc_details):
suppressed_exc = True
exc_details = (None, None, None)
except Exception:
new_exc_details = sys.exc_info()
# simulate the stack of exceptions by setting the context
_fix_exception_context(new_exc_details[1], exc_details[1])
if not self._exit_callbacks:
raise
exc_details = new_exc_details
return suppressed_exc

View file

@ -1,6 +1,7 @@
"""Standalone helper functions"""
import os
import copy
from pprint import pformat
import sys
import uuid
@ -9,6 +10,8 @@ import re
import json
import logging
import contextlib
import capture
from .exitstack import ExitStack
from collections import OrderedDict, defaultdict
from math import ceil
from six import string_types
@ -172,6 +175,216 @@ def maintained_selection():
cmds.select(clear=True)
def reload_all_udim_tile_previews():
"""Regenerate all UDIM tile preview in texture file"""
for texture_file in cmds.ls(type="file"):
if cmds.getAttr("{}.uvTilingMode".format(texture_file)) > 0:
cmds.ogs(regenerateUVTilePreview=texture_file)
@contextlib.contextmanager
def panel_camera(panel, camera):
"""Set modelPanel's camera during the context.
Arguments:
panel (str): modelPanel name.
camera (str): camera name.
"""
original_camera = cmds.modelPanel(panel, query=True, camera=True)
try:
cmds.modelPanel(panel, edit=True, camera=camera)
yield
finally:
cmds.modelPanel(panel, edit=True, camera=original_camera)
def render_capture_preset(preset):
"""Capture playblast with a preset.
To generate the preset use `generate_capture_preset`.
Args:
preset (dict): preset options
Returns:
str: Output path of `capture.capture`
"""
# Force a refresh at the start of the timeline
# TODO (Question): Why do we need to do this? What bug does it solve?
# Is this for simulations?
cmds.refresh(force=True)
refresh_frame_int = int(cmds.playbackOptions(query=True, minTime=True))
cmds.currentTime(refresh_frame_int - 1, edit=True)
cmds.currentTime(refresh_frame_int, edit=True)
log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
)
preset = copy.deepcopy(preset)
# not supported by `capture` so we pop it off of the preset
reload_textures = preset["viewport_options"].pop("loadTextures", False)
panel = preset.pop("panel")
with ExitStack() as stack:
stack.enter_context(maintained_time())
stack.enter_context(panel_camera(panel, preset["camera"]))
stack.enter_context(viewport_default_options(panel, preset))
if reload_textures:
# Force immediate texture loading to ensure
# all textures have loaded before the playblast starts
stack.enter_context(material_loading_mode(mode="immediate"))
# Regenerate all UDIM tiles previews
reload_all_udim_tile_previews()
path = capture.capture(log=log, **preset)
return path
def generate_capture_preset(instance, camera, path,
start=None, end=None, capture_preset=None):
"""Function for getting all the data of preset options for
playblast capturing
Args:
instance (pyblish.api.Instance): instance
camera (str): review camera
path (str): filepath
start (int): frameStart
end (int): frameEnd
capture_preset (dict): capture preset
Returns:
dict: Resulting preset
"""
preset = load_capture_preset(data=capture_preset)
preset["camera"] = camera
preset["start_frame"] = start
preset["end_frame"] = end
preset["filename"] = path
preset["overwrite"] = True
preset["panel"] = instance.data["panel"]
# Disable viewer since we use the rendering logic for publishing
# We don't want to open the generated playblast in a viewer directly.
preset["viewer"] = False
# "isolate_view" will already have been applied at creation, so we'll
# ignore it here.
preset.pop("isolate_view")
# Set resolution variables from capture presets
width_preset = capture_preset["Resolution"]["width"]
height_preset = capture_preset["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
# Use resolution from instance if review width/height is set
# Otherwise use the resolution from preset if it has non-zero values
# Otherwise fall back to asset width x height
# Else define no width, then `capture.capture` will use render resolution
if review_instance_width and review_instance_height:
preset["width"] = review_instance_width
preset["height"] = review_instance_height
elif width_preset and height_preset:
preset["width"] = width_preset
preset["height"] = height_preset
elif asset_width and asset_height:
preset["width"] = asset_width
preset["height"] = asset_height
# Isolate view is requested by having objects in the set besides a
# camera. If there is only 1 member it'll be the camera because we
# validate to have 1 camera only.
if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
preset["isolate"] = instance.data["setMembers"]
# Override camera options
# Enforce persisting camera depth of field
camera_options = preset.setdefault("camera_options", {})
camera_options["depthOfField"] = cmds.getAttr(
"{0}.depthOfField".format(camera)
)
# Use Pan/Zoom from instance data instead of from preset
preset.pop("pan_zoom", None)
camera_options["panZoomEnabled"] = instance.data["panZoom"]
# Override viewport options by instance data
viewport_options = preset.setdefault("viewport_options", {})
viewport_options["displayLights"] = instance.data["displayLights"]
viewport_options["imagePlane"] = instance.data.get("imagePlane", True)
# Override transparency if requested.
transparency = instance.data.get("transparency", 0)
if transparency != 0:
preset["viewport2_options"]["transparencyAlgorithm"] = transparency
# Update preset with current panel setting
# if override_viewport_options is turned off
if not capture_preset["Viewport Options"]["override_viewport_options"]:
panel_preset = capture.parse_view(preset["panel"])
panel_preset.pop("camera")
preset.update(panel_preset)
return preset
@contextlib.contextmanager
def viewport_default_options(panel, preset):
"""Context manager used by `render_capture_preset`.
We need to explicitly enable some viewport changes so the viewport is
refreshed ahead of playblasting.
"""
# TODO: Clarify in the docstring WHY we need to set it ahead of
# playblasting. What issues does it solve?
viewport_defaults = {}
try:
keys = [
"useDefaultMaterial",
"wireframeOnShaded",
"xray",
"jointXray",
"backfaceCulling",
"textures"
]
for key in keys:
viewport_defaults[key] = cmds.modelEditor(
panel, query=True, **{key: True}
)
if preset["viewport_options"].get(key):
cmds.modelEditor(
panel, edit=True, **{key: True}
)
yield
finally:
# Restoring viewport options.
if viewport_defaults:
cmds.modelEditor(
panel, edit=True, **viewport_defaults
)
@contextlib.contextmanager
def material_loading_mode(mode="immediate"):
"""Set material loading mode during context"""
original = cmds.displayPref(query=True, materialLoadingMode=True)
cmds.displayPref(materialLoadingMode=mode)
try:
yield
finally:
cmds.displayPref(materialLoadingMode=original)
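As an illustrative sketch, the two helpers above combine like this when textures must be fully loaded before a capture:

with material_loading_mode(mode="immediate"):
    # force all file textures to load, then refresh their UDIM previews
    reload_all_udim_tile_previews()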
def get_namespace(node):
"""Return namespace of given node"""
node_name = node.rsplit("|", 1)[-1]
@ -2565,9 +2778,37 @@ def bake_to_world_space(nodes,
list: The newly created and baked node names.
"""
@contextlib.contextmanager
def _unlock_attr(attr):
"""Unlock attribute during context if it is locked"""
if not cmds.getAttr(attr, lock=True):
# If not locked, do nothing
yield
return
try:
cmds.setAttr(attr, lock=False)
yield
finally:
cmds.setAttr(attr, lock=True)
def _get_attrs(node):
"""Workaround for buggy shape attribute listing with listAttr"""
"""Workaround for buggy shape attribute listing with listAttr
This will only return keyable settable attributes that have
incoming connections (those that have a reason to be baked).
Technically this *may* fail to return attributes driven by complex
expressions for which maya makes no connections, e.g. doing actual
`setAttr` calls in expressions.
Arguments:
node (str): The node to list attributes for.
Returns:
list: Keyable attributes with incoming connections.
The attribute may be locked.
"""
attrs = cmds.listAttr(node,
write=True,
scalar=True,
@ -2592,14 +2833,14 @@ def bake_to_world_space(nodes,
return valid_attrs
transform_attrs = set(["t", "r", "s",
"tx", "ty", "tz",
"rx", "ry", "rz",
"sx", "sy", "sz"])
transform_attrs = {"t", "r", "s",
"tx", "ty", "tz",
"rx", "ry", "rz",
"sx", "sy", "sz"}
world_space_nodes = []
with delete_after() as delete_bin:
with ExitStack() as stack:
delete_bin = stack.enter_context(delete_after())
# Create the duplicate nodes that are in world-space connected to
# the originals
for node in nodes:
@ -2611,23 +2852,26 @@ def bake_to_world_space(nodes,
name=new_name,
renameChildren=True)[0] # noqa
# Connect all attributes on the node except for transform
# attributes
attrs = _get_attrs(node)
attrs = set(attrs) - transform_attrs if attrs else []
# Parent new node to world
if cmds.listRelatives(new_node, parent=True):
new_node = cmds.parent(new_node, world=True)[0]
# Temporarily unlock and passthrough connect all attributes
# so we can bake them over time
# Skip transform attributes because we will constrain them later
attrs = set(_get_attrs(node)) - transform_attrs
for attr in attrs:
orig_node_attr = '{0}.{1}'.format(node, attr)
new_node_attr = '{0}.{1}'.format(new_node, attr)
# unlock to avoid connection errors
cmds.setAttr(new_node_attr, lock=False)
orig_node_attr = "{}.{}".format(node, attr)
new_node_attr = "{}.{}".format(new_node, attr)
# unlock during context to avoid connection errors
stack.enter_context(_unlock_attr(new_node_attr))
cmds.connectAttr(orig_node_attr,
new_node_attr,
force=True)
# If shapes are also baked then connect those keyable attributes
# If shapes are also baked then also temporarily unlock and
# passthrough connect all shape attributes for baking
if shape:
children_shapes = cmds.listRelatives(new_node,
children=True,
@ -2642,25 +2886,19 @@ def bake_to_world_space(nodes,
children_shapes):
attrs = _get_attrs(orig_shape)
for attr in attrs:
orig_node_attr = '{0}.{1}'.format(orig_shape, attr)
new_node_attr = '{0}.{1}'.format(new_shape, attr)
# unlock to avoid connection errors
cmds.setAttr(new_node_attr, lock=False)
orig_node_attr = "{}.{}".format(orig_shape, attr)
new_node_attr = "{}.{}".format(new_shape, attr)
# unlock during context to avoid connection errors
stack.enter_context(_unlock_attr(new_node_attr))
cmds.connectAttr(orig_node_attr,
new_node_attr,
force=True)
# Parent to world
if cmds.listRelatives(new_node, parent=True):
new_node = cmds.parent(new_node, world=True)[0]
# Unlock transform attributes so constraint can be created
# Constraint transforms
for attr in transform_attrs:
cmds.setAttr('{0}.{1}'.format(new_node, attr), lock=False)
# Constraints
transform_attr = "{}.{}".format(new_node, attr)
stack.enter_context(_unlock_attr(transform_attr))
delete_bin.extend(cmds.parentConstraint(node, new_node, mo=False))
delete_bin.extend(cmds.scaleConstraint(node, new_node, mo=False))
@ -2677,7 +2915,7 @@ def bake_to_world_space(nodes,
return world_space_nodes
def load_capture_preset(data=None):
def load_capture_preset(data):
"""Convert OpenPype Extract Playblast settings to `capture` arguments
Input data is the settings from:
@ -2691,8 +2929,6 @@ def load_capture_preset(data=None):
"""
import capture
options = dict()
viewport_options = dict()
viewport2_options = dict()

View file

@ -137,6 +137,11 @@ class RedshiftProxyLoader(load.LoaderPlugin):
cmds.connectAttr("{}.outMesh".format(rs_mesh),
"{}.inMesh".format(mesh_shape))
# TODO: use the assigned shading group as shaders if existed
# assign default shader to redshift proxy
if cmds.ls("initialShadingGroup", type="shadingEngine"):
cmds.sets(mesh_shape, forceElement="initialShadingGroup")
group_node = cmds.group(empty=True, name="{}_GRP".format(name))
mesh_transform = cmds.listRelatives(mesh_shape,
parent=True, fullPath=True)

View file

@ -6,6 +6,7 @@ from maya import cmds
import pyblish.api
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import KnownPublishError
SETTINGS = {"renderDensity",
@ -116,7 +117,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
resources = []
image_search_paths = cmds.getAttr("{}.imageSearchPath".format(node))
texture_filenames = []
if image_search_paths:
# TODO: Somehow this uses OS environment path separator, `:` vs `;`
@ -127,9 +127,16 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
# find all ${TOKEN} tokens and replace them with $TOKEN env. variable
image_search_paths = self._replace_tokens(image_search_paths)
# List all related textures
texture_filenames = cmds.pgYetiCommand(node, listTextures=True)
self.log.debug("Found %i texture(s)" % len(texture_filenames))
# List all related textures
texture_nodes = cmds.pgYetiGraph(
node, listNodes=True, type="texture")
texture_filenames = [
cmds.pgYetiGraph(
node, node=texture_node,
param="file_name", getParamValue=True)
for texture_node in texture_nodes
]
self.log.debug("Found %i texture(s)" % len(texture_filenames))
# Get all reference nodes
reference_nodes = cmds.pgYetiGraph(node,
@ -137,11 +144,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
type="reference")
self.log.debug("Found %i reference node(s)" % len(reference_nodes))
if texture_filenames and not image_search_paths:
raise ValueError("pgYetiMaya node '%s' is missing the path to the "
"files in the 'imageSearchPath "
"atttribute'" % node)
# Collect all texture files
# find all ${TOKEN} tokens and replace them with $TOKEN env. variable
texture_filenames = self._replace_tokens(texture_filenames)
@ -161,7 +163,7 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
break
if not files:
self.log.warning(
raise KnownPublishError(
"No texture found for: %s "
"(searched: %s)" % (texture, image_search_paths))

View file

@ -265,13 +265,16 @@ def transfer_image_planes(source_cameras, target_cameras,
try:
for source_camera, target_camera in zip(source_cameras,
target_cameras):
image_planes = cmds.listConnections(source_camera,
image_plane_plug = "{}.imagePlane".format(source_camera)
image_planes = cmds.listConnections(image_plane_plug,
source=True,
destination=False,
type="imagePlane") or []
# Split of the parent path they are attached - we want
# the image plane node name.
# the image plane node name if attached to a camera.
# TODO: Does this still mean the image plane name is unique?
image_planes = [x.split("->", 1)[1] for x in image_planes]
image_planes = [x.split("->", 1)[-1] for x in image_planes]
if not image_planes:
continue
@ -282,7 +285,7 @@ def transfer_image_planes(source_cameras, target_cameras,
if source_camera == target_camera:
continue
_attach_image_plane(target_camera, image_plane)
else: # explicitly dettaching image planes
else: # explicitly detach image planes
cmds.imagePlane(image_plane, edit=True, detach=True)
originals[source_camera].append(image_plane)
yield

View file

@ -6,9 +6,11 @@ from maya import cmds
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline import AVALON_CONTAINER_ID, publish
from openpype.pipeline.publish import OpenPypePyblishPluginMixin
from openpype.lib import BoolDef
class ExtractMayaSceneRaw(publish.Extractor):
class ExtractMayaSceneRaw(publish.Extractor, OpenPypePyblishPluginMixin):
"""Extract as Maya Scene (raw).
This will preserve all references, construction history, etc.
@ -23,6 +25,22 @@ class ExtractMayaSceneRaw(publish.Extractor):
"camerarig"]
scene_type = "ma"
@classmethod
def get_attribute_defs(cls):
return [
BoolDef(
"preserve_references",
label="Preserve References",
tooltip=(
"When enabled references will still be references "
"in the published file.\nWhen disabled the references "
"are imported into the published file generating a "
"file without references."
),
default=True
)
]
def process(self, instance):
"""Plugin entry point."""
ext_mapping = (
@ -64,13 +82,18 @@ class ExtractMayaSceneRaw(publish.Extractor):
# Perform extraction
self.log.debug("Performing extraction ...")
attribute_values = self.get_attr_values_from_data(
instance.data
)
with maintained_selection():
cmds.select(selection, noExpand=True)
cmds.file(path,
force=True,
typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501
exportSelected=True,
preserveReferences=True,
preserveReferences=attribute_values[
"preserve_references"
],
constructionHistory=True,
shader=True,
constraints=True,

View file

@ -1,9 +1,6 @@
import os
import json
import contextlib
import clique
import capture
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
@ -11,16 +8,6 @@ from openpype.hosts.maya.api import lib
from maya import cmds
@contextlib.contextmanager
def panel_camera(panel, camera):
original_camera = cmds.modelPanel(panel, query=True, camera=True)
try:
cmds.modelPanel(panel, edit=True, camera=camera)
yield
finally:
cmds.modelPanel(panel, edit=True, camera=original_camera)
class ExtractPlayblast(publish.Extractor):
"""Extract viewport playblast.
@ -36,19 +23,8 @@ class ExtractPlayblast(publish.Extractor):
capture_preset = {}
profiles = None
def _capture(self, preset):
if os.environ.get("OPENPYPE_DEBUG") == "1":
self.log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
)
path = capture.capture(log=self.log, **preset)
self.log.debug("playblast path {}".format(path))
def process(self, instance):
self.log.debug("Extracting capture..")
self.log.debug("Extracting playblast..")
# get scene fps
fps = instance.data.get("fps") or instance.context.data.get("fps")
@ -63,10 +39,6 @@ class ExtractPlayblast(publish.Extractor):
end = cmds.playbackOptions(query=True, animationEndTime=True)
self.log.debug("start: {}, end: {}".format(start, end))
# get cameras
camera = instance.data["review_camera"]
task_data = instance.data["anatomyData"].get("task", {})
capture_preset = lib.get_capture_preset(
task_data.get("name"),
@ -75,174 +47,35 @@ class ExtractPlayblast(publish.Extractor):
instance.context.data["project_settings"],
self.log
)
preset = lib.load_capture_preset(data=capture_preset)
# "isolate_view" will already have been applied at creation, so we'll
# ignore it here.
preset.pop("isolate_view")
# Set resolution variables from capture presets
width_preset = capture_preset["Resolution"]["width"]
height_preset = capture_preset["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset["camera"] = camera
# Tests if project resolution is set,
# if it is a value other than zero, that value is
# used, if not then the asset resolution is
# used
if review_instance_width and review_instance_height:
preset["width"] = review_instance_width
preset["height"] = review_instance_height
elif width_preset and height_preset:
preset["width"] = width_preset
preset["height"] = height_preset
elif asset_width and asset_height:
preset["width"] = asset_width
preset["height"] = asset_height
preset["start_frame"] = start
preset["end_frame"] = end
# Enforce persisting camera depth of field
camera_options = preset.setdefault("camera_options", {})
camera_options["depthOfField"] = cmds.getAttr(
"{0}.depthOfField".format(camera))
stagingdir = self.staging_dir(instance)
filename = "{0}".format(instance.name)
filename = instance.name
path = os.path.join(stagingdir, filename)
self.log.debug("Outputting images to %s" % path)
# get cameras
camera = instance.data["review_camera"]
preset = lib.generate_capture_preset(
instance, camera, path,
start=start, end=end,
capture_preset=capture_preset)
lib.render_capture_preset(preset)
preset["filename"] = path
preset["overwrite"] = True
cmds.refresh(force=True)
refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True))
cmds.currentTime(refreshFrameInt - 1, edit=True)
cmds.currentTime(refreshFrameInt, edit=True)
# Use displayLights setting from instance
key = "displayLights"
preset["viewport_options"][key] = instance.data[key]
# Override transparency if requested.
transparency = instance.data.get("transparency", 0)
if transparency != 0:
preset["viewport2_options"]["transparencyAlgorithm"] = transparency
# Isolate view is requested by having objects in the set besides a
# camera. If there is only 1 member it'll be the camera because we
# validate to have 1 camera only.
if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
preset["isolate"] = instance.data["setMembers"]
# Show/Hide image planes on request.
image_plane = instance.data.get("imagePlane", True)
if "viewport_options" in preset:
preset["viewport_options"]["imagePlane"] = image_plane
else:
preset["viewport_options"] = {"imagePlane": image_plane}
# Disable Pan/Zoom.
pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"]))
preset.pop("pan_zoom", None)
preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"]
# Need to explicitly enable some viewport changes so the viewport is
# refreshed ahead of playblasting.
keys = [
"useDefaultMaterial",
"wireframeOnShaded",
"xray",
"jointXray",
"backfaceCulling"
]
viewport_defaults = {}
for key in keys:
viewport_defaults[key] = cmds.modelEditor(
instance.data["panel"], query=True, **{key: True}
)
if preset["viewport_options"][key]:
cmds.modelEditor(
instance.data["panel"], edit=True, **{key: True}
)
override_viewport_options = (
capture_preset["Viewport Options"]["override_viewport_options"]
)
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
preset["viewer"] = False
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel_preset = capture.parse_view(instance.data["panel"])
panel_preset.pop("camera")
preset.update(panel_preset)
# Need to ensure Python 2 compatibility.
# TODO: Remove once dropping Python 2.
if getattr(contextlib, "nested", None):
# Python 3 compatibility.
with contextlib.nested(
lib.maintained_time(),
panel_camera(instance.data["panel"], preset["camera"])
):
self._capture(preset)
else:
# Python 2 compatibility.
with contextlib.ExitStack() as stack:
stack.enter_context(lib.maintained_time())
stack.enter_context(
panel_camera(instance.data["panel"], preset["camera"])
)
self._capture(preset)
# Restoring viewport options.
if viewport_defaults:
cmds.modelEditor(
instance.data["panel"], edit=True, **viewport_defaults
)
try:
cmds.setAttr(
"{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
except RuntimeError:
self.log.warning("Cannot restore Pan/Zoom settings.")
# Find playblast sequence
collected_files = os.listdir(stagingdir)
patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(collected_files,
minimum_items=1,
patterns=patterns)
filename = preset.get("filename", "%TEMP%")
self.log.debug("filename {}".format(filename))
self.log.debug("Searching playblast collection for: %s", path)
frame_collection = None
for collection in collections:
filebase = collection.format("{head}").rstrip(".")
self.log.debug("collection head {}".format(filebase))
if filebase in filename:
self.log.debug("Checking collection head: %s", filebase)
if filebase in path:
frame_collection = collection
self.log.debug(
"we found collection of interest {}".format(
str(frame_collection)))
if "representations" not in instance.data:
instance.data["representations"] = []
"Found playblast collection: %s", frame_collection
)
tags = ["review"]
if not instance.data.get("keepImages"):
@ -256,6 +89,9 @@ class ExtractPlayblast(publish.Extractor):
if len(collected_files) == 1:
collected_files = collected_files[0]
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
"name": capture_preset["Codec"]["compression"],
"ext": capture_preset["Codec"]["compression"],

View file

@ -1,15 +1,10 @@
import os
import glob
import tempfile
import json
import capture
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
from maya import cmds
class ExtractThumbnail(publish.Extractor):
"""Extract viewport thumbnail.
@ -24,7 +19,7 @@ class ExtractThumbnail(publish.Extractor):
families = ["review"]
def process(self, instance):
self.log.debug("Extracting capture..")
self.log.debug("Extracting thumbnail..")
camera = instance.data["review_camera"]
@ -37,20 +32,24 @@ class ExtractThumbnail(publish.Extractor):
self.log
)
preset = lib.load_capture_preset(data=capture_preset)
# "isolate_view" will already have been applied at creation, so we'll
# ignore it here.
preset.pop("isolate_view")
override_viewport_options = (
capture_preset["Viewport Options"]["override_viewport_options"]
# Create temp directory for thumbnail
# - this is to avoid "override" of source file
dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_thumbnail")
self.log.debug(
"Create temp directory {} for thumbnail".format(dst_staging)
)
# Store new staging to cleanup paths
filename = instance.name
path = os.path.join(dst_staging, filename)
preset["camera"] = camera
preset["start_frame"] = instance.data["frameStart"]
preset["end_frame"] = instance.data["frameStart"]
preset["camera_options"] = {
self.log.debug("Outputting images to %s" % path)
preset = lib.generate_capture_preset(
instance, camera, path,
start=1, end=1,
capture_preset=capture_preset)
preset["camera_options"].update({
"displayGateMask": False,
"displayResolution": False,
"displayFilmGate": False,
@ -60,101 +59,10 @@ class ExtractThumbnail(publish.Extractor):
"displayFilmPivot": False,
"displayFilmOrigin": False,
"overscan": 1.0,
"depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
}
# Set resolution variables from capture presets
width_preset = capture_preset["Resolution"]["width"]
height_preset = capture_preset["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
# Tests if project resolution is set,
# if it is a value other than zero, that value is
# used, if not then the asset resolution is
# used
if review_instance_width and review_instance_height:
preset["width"] = review_instance_width
preset["height"] = review_instance_height
elif width_preset and height_preset:
preset["width"] = width_preset
preset["height"] = height_preset
elif asset_width and asset_height:
preset["width"] = asset_width
preset["height"] = asset_height
})
path = lib.render_capture_preset(preset)
# Create temp directory for thumbnail
# - this is to avoid "override" of source file
dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
self.log.debug(
"Create temp directory {} for thumbnail".format(dst_staging)
)
# Store new staging to cleanup paths
filename = "{0}".format(instance.name)
path = os.path.join(dst_staging, filename)
self.log.debug("Outputting images to %s" % path)
preset["filename"] = path
preset["overwrite"] = True
cmds.refresh(force=True)
refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True))
cmds.currentTime(refreshFrameInt - 1, edit=True)
cmds.currentTime(refreshFrameInt, edit=True)
# Use displayLights setting from instance
key = "displayLights"
preset["viewport_options"][key] = instance.data[key]
# Override transparency if requested.
transparency = instance.data.get("transparency", 0)
if transparency != 0:
preset["viewport2_options"]["transparencyAlgorithm"] = transparency
# Isolate view is requested by having objects in the set besides a
# camera. If there is only 1 member it'll be the camera because we
# validate to have 1 camera only.
if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
preset["isolate"] = instance.data["setMembers"]
# Show or Hide Image Plane
image_plane = instance.data.get("imagePlane", True)
if "viewport_options" in preset:
preset["viewport_options"]["imagePlane"] = image_plane
else:
preset["viewport_options"] = {"imagePlane": image_plane}
# Disable Pan/Zoom.
preset.pop("pan_zoom", None)
preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"]
with lib.maintained_time():
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
preset["viewer"] = False
# Update preset with current panel setting
# if override_viewport_options is turned off
panel = cmds.getPanel(withFocus=True) or ""
if not override_viewport_options and "modelPanel" in panel:
panel_preset = capture.parse_active_view()
preset.update(panel_preset)
cmds.setFocus(panel)
if os.environ.get("OPENPYPE_DEBUG") == "1":
self.log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
)
path = capture.capture(**preset)
playblast = self._fix_playblast_output_path(path)
playblast = self._fix_playblast_output_path(path)
_, thumbnail = os.path.split(playblast)

View file

@ -1,77 +0,0 @@
from collections import defaultdict
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
PublishValidationError, ValidatePipelineOrder)
class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
"""Validate the relational nodes of the look data to ensure every node is
unique.
This ensures the all member ids are unique. Every node id must be from
a single node in the scene.
That means there's only ever one of a specific node inside the look to be
published. For example if you'd have a loaded 3x the same tree and by
accident you're trying to publish them all together in a single look that
would be invalid, because they are the same tree. It should be included
inside the look instance only once.
"""
order = ValidatePipelineOrder
label = 'Look members unique'
hosts = ['maya']
families = ['look']
actions = [openpype.hosts.maya.api.action.SelectInvalidAction,
openpype.hosts.maya.api.action.GenerateUUIDsOnInvalidAction]
def process(self, instance):
"""Process all meshes"""
invalid = self.get_invalid(instance)
if invalid:
raise PublishValidationError(
("Members found without non-unique IDs: "
"{0}").format(invalid))
@staticmethod
def get_invalid(instance):
"""
Check all the relationship members of the objectSets
Example of the lookData relationships:
{"uuid": 59b2bb27bda2cb2776206dd8:79ab0a63ffdf,
"members":[{"uuid": 59b2bb27bda2cb2776206dd8:1b158cc7496e,
"name": |model_GRP|body_GES|body_GESShape}
...,
...]}
Args:
instance:
Returns:
"""
# Get all members from the sets
id_nodes = defaultdict(set)
relationships = instance.data["lookData"]["relationships"]
for relationship in relationships.values():
for member in relationship['members']:
node_id = member["uuid"]
node = member["name"]
id_nodes[node_id].add(node)
# Check if any id has more than 1 node
invalid = []
for nodes in id_nodes.values():
if len(nodes) > 1:
invalid.extend(nodes)
return invalid

View file

@ -44,4 +44,8 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin):
if not is_subdir(scene_name, root_dir):
raise PublishValidationError(
"Maya workspace is not set correctly.")
"Maya workspace is not set correctly.\n\n"
f"Current workfile `{scene_name}` is not inside the "
"current Maya project root directory `{root_dir}`.\n\n"
"Please use Workfile app to re-save."
)

View file

@ -3483,3 +3483,19 @@ def get_filenames_without_hash(filename, frame_start, frame_end):
new_filename = filename_without_hashes.format(frame)
filenames.append(new_filename)
return filenames
def create_camera_node_by_version():
"""Function to create the camera with the latest node class
For Nuke version 14.0 or later, the Camera4 camera node class
would be used
For the version before, the Camera2 camera node class
would be used
Returns:
Node: camera node
"""
nuke_number_version = nuke.NUKE_VERSION_MAJOR
if nuke_number_version >= 14:
return nuke.createNode("Camera4")
else:
return nuke.createNode("Camera2")
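A trivial usage sketch (illustrative only, not part of the diff above): callers no longer branch on the Nuke version themselves; the "label" knob edit is only an example.
# Minimal sketch: the helper picks Camera4 or Camera2 internally.
camera_node = create_camera_node_by_version()
camera_node["label"].setValue("review_camera")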

View file

@ -259,9 +259,7 @@ def _install_menu():
menu.addCommand(
"Create...",
lambda: host_tools.show_publisher(
parent=(
main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
),
parent=main_window,
tab="create"
)
)
@ -270,9 +268,7 @@ def _install_menu():
menu.addCommand(
"Publish...",
lambda: host_tools.show_publisher(
parent=(
main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
),
parent=main_window,
tab="publish"
)
)

View file

@ -12,7 +12,7 @@ def set_context_favorites(favorites=None):
favorites (dict): couples of {name:path}
"""
favorites = favorites or {}
icon_path = resources.get_resource("icons", "folder-favorite3.png")
icon_path = resources.get_resource("icons", "folder-favorite.png")
for name, path in favorites.items():
nuke.addFavoriteDir(
name,

View file

@ -4,6 +4,9 @@ from openpype.hosts.nuke.api import (
NukeCreatorError,
maintained_selection
)
from openpype.hosts.nuke.api.lib import (
create_camera_node_by_version
)
class CreateCamera(NukeCreator):
@ -32,7 +35,7 @@ class CreateCamera(NukeCreator):
"Creator error: Select only camera node type")
created_node = self.selected_nodes[0]
else:
created_node = nuke.createNode("Camera2")
created_node = create_camera_node_by_version()
created_node["tile_color"].setValue(
int(self.node_color, 16))

View file

@ -34,6 +34,11 @@ class ExtractReviewIntermediates(publish.Extractor):
nuke_publish = project_settings["nuke"]["publish"]
deprecated_setting = nuke_publish["ExtractReviewDataMov"]
current_setting = nuke_publish.get("ExtractReviewIntermediates")
if not deprecated_setting["enabled"] and (
not current_setting["enabled"]
):
cls.enabled = False
if deprecated_setting["enabled"]:
# Use deprecated settings if they are still enabled
cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"]

View file

@ -80,6 +80,7 @@ class ValidateOutputMaps(pyblish.api.InstancePlugin):
self.log.warning(f"Disabling texture instance: "
f"{image_instance}")
image_instance.data["active"] = False
image_instance.data["publish"] = False
image_instance.data["integrate"] = False
representation.setdefault("tags", []).append("delete")
continue

View file

@ -216,6 +216,11 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
instance.data["thumbnailSource"] = first_filepath
review_representation["tags"].append("review")
# Adding "review" to representation name since it can clash with main
# representation if they share the same extension.
review_representation["outputName"] = "review"
self.log.debug("Representation {} was marked for review. {}".format(
review_representation["name"], review_path
))

View file

@ -15,7 +15,7 @@ class ValidateFilePath(pyblish.api.InstancePlugin):
This is primarily created for Simple Creator instances.
"""
label = "Validate Workfile"
label = "Validate Filepaths"
order = pyblish.api.ValidatorOrder - 0.49
hosts = ["traypublisher"]

View file

@ -6,13 +6,13 @@ def requests_post(*args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
of defense SSL is providing, and it is not recommended.
"""
if "verify" not in kwargs:
@ -24,13 +24,13 @@ def requests_get(*args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
of defense SSL is providing, and it is not recommended.
"""
if "verify" not in kwargs:

View file

@ -16,6 +16,113 @@ class MissingEventSystem(Exception):
pass
def _get_func_ref(func):
if inspect.ismethod(func):
return WeakMethod(func)
return weakref.ref(func)
def _get_func_info(func):
path = "<unknown path>"
if func is None:
return "<unknown>", path
if hasattr(func, "__name__"):
name = func.__name__
else:
name = str(func)
# Get path to file and fall back to '<unknown path>' if it fails
# NOTE This was added because of 'partial' functions, which are handled,
# but who knows what else can cause this to fail?
try:
path = os.path.abspath(inspect.getfile(func))
except TypeError:
pass
return name, path
class weakref_partial:
"""Partial function with weak reference to the wrapped function.
Can be used as 'functools.partial' but it will store weak reference to
function. That means the function must be kept referenced somewhere
else, otherwise it gets garbage collected.
When the referenced function is garbage collected, calling the
weakref partial (no matter the args/kwargs passed) will do nothing.
It will fail silently, returning `None`. The `is_valid()` method can
be used to detect whether the reference is still valid.
Is useful for object methods. In that case the callback is
deregistered when object is destroyed.
Warnings:
Values passed as *args and **kwargs are stored strongly in memory.
That may "keep alive" objects that should be already destroyed.
It is recommended to pass only immutable objects like 'str',
'bool', 'int' etc.
Args:
func (Callable): Function to wrap.
*args: Arguments passed to the wrapped function.
**kwargs: Keyword arguments passed to the wrapped function.
"""
def __init__(self, func, *args, **kwargs):
self._func_ref = _get_func_ref(func)
self._args = args
self._kwargs = kwargs
def __call__(self, *args, **kwargs):
func = self._func_ref()
if func is None:
return
new_args = tuple(list(self._args) + list(args))
new_kwargs = dict(self._kwargs)
new_kwargs.update(kwargs)
return func(*new_args, **new_kwargs)
def get_func(self):
"""Get wrapped function.
Returns:
Union[Callable, None]: Wrapped function or None if it was
destroyed.
"""
return self._func_ref()
def is_valid(self):
"""Check if wrapped function is still valid.
Returns:
bool: Is wrapped function still valid.
"""
return self._func_ref() is not None
def validate_signature(self, *args, **kwargs):
"""Validate if passed arguments are supported by wrapped function.
Returns:
bool: Are passed arguments supported by wrapped function.
"""
func = self._func_ref()
if func is None:
return False
new_args = tuple(list(self._args) + list(args))
new_kwargs = dict(self._kwargs)
new_kwargs.update(kwargs)
return is_func_signature_supported(
func, *new_args, **new_kwargs
)
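A minimal usage sketch for 'weakref_partial' (illustrative only, not part of the diff; the 'Listener' class is hypothetical, and the behaviour follows the docstring above, assuming CPython's immediate reference-count collection):
class Listener:
    def on_event(self, message, event=None):
        print("received:", message)

listener = Listener()
callback = weakref_partial(listener.on_event, "hello")
callback()                  # calls listener.on_event("hello")
print(callback.is_valid())  # True while 'listener' is alive
del listener                # drop the only strong reference
print(callback.is_valid())  # False - the bound method was collected
callback()                  # fails silently and returns None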
class EventCallback(object):
"""Callback registered to a topic.
@ -34,20 +141,37 @@ class EventCallback(object):
or none arguments. When 1 argument is expected then the processed 'Event'
object is passed in.
The registered callbacks don't keep function in memory so it is not
possible to store lambda function as callback.
Callbacks are stored only as weak references (using the 'weakref'
module), so the callback must be kept referenced somewhere else in
memory. Lambda functions are therefore not supported as valid
callbacks.
You can use 'weakref_partial' objects. In that case the partial object
is stored in the callback object and the weak reference is checked for
the wrapped function.
Args:
topic(str): Topic which will be listened.
func(func): Callback to a topic.
topic (str): Topic which will be listened.
func (Callable): Callback to a topic.
order (Union[int, None]): Order of callback. Lower number means higher
priority.
Raises:
TypeError: When passed function is not a callable object.
"""
def __init__(self, topic, func):
def __init__(self, topic, func, order):
if not callable(func):
raise TypeError((
"Registered callback is not callable. \"{}\""
).format(str(func)))
self._validate_order(order)
self._log = None
self._topic = topic
self._order = order
self._enabled = True
# Replace '*' with any character regex and escape rest of text
# - when callback is registered for '*' topic it will receive all
# events
@ -63,37 +187,38 @@ class EventCallback(object):
topic_regex = re.compile(topic_regex_str)
self._topic_regex = topic_regex
# Convert callback into references
# - deleted functions won't cause crashes
if inspect.ismethod(func):
func_ref = WeakMethod(func)
elif callable(func):
func_ref = weakref.ref(func)
# Callback function prep
if isinstance(func, weakref_partial):
partial_func = func
(name, path) = _get_func_info(func.get_func())
func_ref = None
expect_args = partial_func.validate_signature("fake")
expect_kwargs = partial_func.validate_signature(event="fake")
else:
raise TypeError((
"Registered callback is not callable. \"{}\""
).format(str(func)))
partial_func = None
(name, path) = _get_func_info(func)
# Convert callback into references
# - deleted functions won't cause crashes
func_ref = _get_func_ref(func)
# Collect function name and path to file for logging
func_name = func.__name__
func_path = os.path.abspath(inspect.getfile(func))
# Get expected arguments from function spec
# - positional arguments are always preferred
expect_args = is_func_signature_supported(func, "fake")
expect_kwargs = is_func_signature_supported(func, event="fake")
# Get expected arguments from function spec
# - positional arguments are always preferred
expect_args = is_func_signature_supported(func, "fake")
expect_kwargs = is_func_signature_supported(func, event="fake")
self._func_ref = func_ref
self._func_name = func_name
self._func_path = func_path
self._partial_func = partial_func
self._ref_is_valid = True
self._expect_args = expect_args
self._expect_kwargs = expect_kwargs
self._ref_valid = func_ref is not None
self._enabled = True
self._name = name
self._path = path
def __repr__(self):
return "< {} - {} > {}".format(
self.__class__.__name__, self._func_name, self._func_path
self.__class__.__name__, self._name, self._path
)
@property
@ -104,32 +229,83 @@ class EventCallback(object):
@property
def is_ref_valid(self):
return self._ref_valid
"""
Returns:
bool: Is reference to callback valid.
"""
self._validate_ref()
return self._ref_is_valid
def validate_ref(self):
if not self._ref_valid:
return
"""Validate if reference to callback is valid.
callback = self._func_ref()
if not callback:
self._ref_valid = False
Deprecated:
Reference validity is always checked live via 'is_ref_valid'.
"""
# Trigger validate by getting 'is_valid'
_ = self.is_ref_valid
@property
def enabled(self):
"""Is callback enabled."""
"""Is callback enabled.
Returns:
bool: Is callback enabled.
"""
return self._enabled
def set_enabled(self, enabled):
"""Change if callback is enabled."""
"""Change if callback is enabled.
Args:
enabled (bool): Change enabled state of the callback.
"""
self._enabled = enabled
def deregister(self):
"""Calling this function will cause that callback will be removed."""
# Fake reference
self._ref_valid = False
self._ref_is_valid = False
self._partial_func = None
self._func_ref = None
def get_order(self):
"""Get callback order.
Returns:
Union[int, None]: Callback order.
"""
return self._order
def set_order(self, order):
"""Change callback order.
Args:
order (Union[int, None]): Order of callback. Lower number means
higher priority.
"""
self._validate_order(order)
self._order = order
order = property(get_order, set_order)
def topic_matches(self, topic):
"""Check if event topic matches callback's topic."""
"""Check if event topic matches callback's topic.
Args:
topic (str): Topic name.
Returns:
bool: Topic matches callback's topic.
"""
return self._topic_regex.match(topic)
def process_event(self, event):
@ -139,36 +315,69 @@ class EventCallback(object):
event(Event): Event that was triggered.
"""
# Skip if callback is not enabled or has invalid reference
if not self._ref_valid or not self._enabled:
# Skip if callback is not enabled
if not self._enabled:
return
# Get reference
callback = self._func_ref()
# Check if reference is valid or callback's topic matches the event
if not callback:
# Change state if is invalid so the callback is removed
self._ref_valid = False
# Get reference and skip if is not available
callback = self._get_callback()
if callback is None:
return
elif self.topic_matches(event.topic):
# Try execute callback
try:
if self._expect_args:
callback(event)
if not self.topic_matches(event.topic):
return
elif self._expect_kwargs:
callback(event=event)
# Try to execute callback
try:
if self._expect_args:
callback(event)
else:
callback()
elif self._expect_kwargs:
callback(event=event)
except Exception:
self.log.warning(
"Failed to execute event callback {}".format(
str(repr(self))
),
exc_info=True
)
else:
callback()
except Exception:
self.log.warning(
"Failed to execute event callback {}".format(
str(repr(self))
),
exc_info=True
)
def _validate_order(self, order):
if isinstance(order, int):
return
raise TypeError(
"Expected type 'int' got '{}'.".format(str(type(order)))
)
def _get_callback(self):
if self._partial_func is not None:
return self._partial_func
if self._func_ref is not None:
return self._func_ref()
return None
def _validate_ref(self):
if self._ref_is_valid is False:
return
if self._func_ref is not None:
self._ref_is_valid = self._func_ref() is not None
elif self._partial_func is not None:
self._ref_is_valid = self._partial_func.is_valid()
else:
self._ref_is_valid = False
if not self._ref_is_valid:
self._func_ref = None
self._partial_func = None
# Inherit from 'object' for Python 2 hosts
@ -282,30 +491,39 @@ class Event(object):
class EventSystem(object):
"""Encapsulate event handling into an object.
System wraps registered callbacks and triggered events into single object
so it is possible to create mutltiple independent systems that have their
System wraps registered callbacks and triggered events into single object,
so it is possible to create multiple independent systems that have their
topics and callbacks.
Callbacks are stored by order of their registration, but it is possible to
manually define order of callbacks using 'order' argument within
'add_callback'.
"""
default_order = 100
def __init__(self):
self._registered_callbacks = []
def add_callback(self, topic, callback):
def add_callback(self, topic, callback, order=None):
"""Register callback in event system.
Args:
topic (str): Topic for EventCallback.
callback (Callable): Function or method that will be called
when topic is triggered.
callback (Union[Callable, weakref_partial]): Function or method
that will be called when topic is triggered.
order (Optional[int]): Order of callback. Lower number means
higher priority.
Returns:
EventCallback: Created callback object which can be used to
stop listening.
"""
callback = EventCallback(topic, callback)
if order is None:
order = self.default_order
callback = EventCallback(topic, callback, order)
self._registered_callbacks.append(callback)
return callback
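A minimal ordering sketch (illustrative only, not part of the diff): it assumes the system's 'emit(topic, data, source)' method, whose tail is visible in the hunk below, and uses module-level functions so the weak references stay alive.
# Lower 'order' runs first; the default order is 100.
def log_any_workfile_event(event):
    print("second:", event.topic)

def log_workfile_saved(event):
    print("first:", event.topic)

system = EventSystem()
system.add_callback("workfile.*", log_any_workfile_event)
system.add_callback("workfile.saved", log_workfile_saved, order=10)
system.emit("workfile.saved", {"path": "scene.ma"}, "example")
# Prints "first: workfile.saved" before "second: workfile.saved".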
@ -341,22 +559,6 @@ class EventSystem(object):
event.emit()
return event
def _process_event(self, event):
"""Process event topic and trigger callbacks.
Args:
event (Event): Prepared event with topic and data.
"""
invalid_callbacks = []
for callback in self._registered_callbacks:
callback.process_event(event)
if not callback.is_ref_valid:
invalid_callbacks.append(callback)
for callback in invalid_callbacks:
self._registered_callbacks.remove(callback)
def emit_event(self, event):
"""Emit event object.
@ -366,6 +568,21 @@ class EventSystem(object):
self._process_event(event)
def _process_event(self, event):
"""Process event topic and trigger callbacks.
Args:
event (Event): Prepared event with topic and data.
"""
callbacks = tuple(sorted(
self._registered_callbacks, key=lambda x: x.order
))
for callback in callbacks:
callback.process_event(event)
if not callback.is_ref_valid:
self._registered_callbacks.remove(callback)
class QueuedEventSystem(EventSystem):
"""Events are automatically processed in queue.

View file

@ -269,7 +269,7 @@ def is_func_signature_supported(func, *args, **kwargs):
True
Args:
func (function): A function where the signature should be tested.
func (Callable): A function where the signature should be tested.
*args (Any): Positional arguments for function signature.
**kwargs (Any): Keyword arguments for function signature.

View file

@ -44,17 +44,17 @@ XML_CHAR_REF_REGEX_HEX = re.compile(r"&#x?[0-9a-fA-F]+;")
ARRAY_TYPE_REGEX = re.compile(r"^(int|float|string)\[\d+\]$")
IMAGE_EXTENSIONS = {
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr", ".ras",
".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
".xpm", ".xwd"
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave",
".cal", ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr",
".fits", ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc",
".icer", ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
".jng", ".jpeg", ".jpeg-ls", ".jpeg-hdr", ".2000", ".jpg",
".kra", ".logluv", ".mng", ".miff", ".nrrd", ".ora",
".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr",
".ras", ".rgbe", ".sgi", ".tga",
".tif", ".tiff", ".tiff/ep", ".tiff/it", ".ufo", ".ufp",
".wbmp", ".webp", ".xr", ".xt", ".xbm", ".xcf", ".xpm", ".xwd"
}
VIDEO_EXTENSIONS = {
@ -110,8 +110,9 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False):
if line == "</ImageSpec>":
subimages_lines.append(lines)
lines = []
xml_started = False
if not xml_started:
if not subimages_lines:
raise ValueError(
"Failed to read input file \"{}\".\nOutput:\n{}".format(
filepath, output

View file

@ -542,7 +542,8 @@ def _load_modules():
module_dirs.insert(0, current_dir)
addons_dir = os.path.join(os.path.dirname(current_dir), "addons")
module_dirs.append(addons_dir)
if os.path.exists(addons_dir):
module_dirs.append(addons_dir)
ignored_host_names = set(IGNORED_HOSTS_IN_AYON)
ignored_current_dir_filenames = set(IGNORED_DEFAULT_FILENAMES)
@ -1332,7 +1333,6 @@ class TrayModulesManager(ModulesManager):
"user",
"ftrack",
"kitsu",
"muster",
"launcher_tool",
"avalon",
"clockify",

View file

@ -34,8 +34,8 @@ def requests_post(*args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.
Warning:
@ -55,8 +55,8 @@ def requests_get(*args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.
Warning:
@ -464,7 +464,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin,
self.log.info("Submitted job to Deadline: {}.".format(job_id))
# TODO: Find a way that's more generic and not render type specific
if "exportJob" in instance.data:
if instance.data.get("splitRender"):
self.log.info("Splitting export and render in two jobs")
self.log.info("Export job id: %s", job_id)
render_job_info = self.get_job_info(dependency_job_ids=[job_id])

View file

@ -1,7 +1,4 @@
# -*- coding: utf-8 -*-
"""Collect Deadline pools. Choose default one from Settings
"""
import pyblish.api
from openpype.lib import TextDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin
@ -9,11 +6,35 @@ from openpype.pipeline.publish import OpenPypePyblishPluginMixin
class CollectDeadlinePools(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
"""Collect pools from instance if present, from Setting otherwise."""
"""Collect pools from instance or Publisher attributes, from Setting
otherwise.
Pools are used to control which Deadline workers can render the job.
Pools might be set:
- directly on the instance (set directly in DCC)
- from Publisher attributes
- from defaults from Settings.
Publisher attributes might be shown even for instances that will be
rendered locally, because visibility is driven by the product type of
the instance (most likely `render`).
(Might be resolved in the future and class attribute 'families' should
be cleaned up.)
"""
order = pyblish.api.CollectorOrder + 0.420
label = "Collect Deadline Pools"
families = ["rendering",
hosts = ["aftereffects",
"fusion",
"harmony"
"nuke",
"maya",
"max"]
families = ["render",
"rendering",
"render.farm",
"renderFarm",
"renderlayer",
@ -30,7 +51,6 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin,
cls.secondary_pool = settings.get("secondary_pool", None)
def process(self, instance):
attr_values = self.get_attr_values_from_data(instance.data)
if not instance.data.get("primaryPool"):
instance.data["primaryPool"] = (
@ -60,8 +80,12 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin,
return [
TextDef("primaryPool",
label="Primary Pool",
default=cls.primary_pool),
default=cls.primary_pool,
tooltip="Deadline primary pool, "
"applicable for farm rendering"),
TextDef("secondaryPool",
label="Secondary Pool",
default=cls.secondary_pool)
default=cls.secondary_pool,
tooltip="Deadline secondary pool, "
"applicable for farm rendering")
]

View file

@ -15,6 +15,7 @@ from openpype.lib import (
NumberDef
)
@attr.s
class DeadlinePluginInfo():
SceneFile = attr.ib(default=None)
@ -41,6 +42,12 @@ class VrayRenderPluginInfo():
SeparateFilesPerFrame = attr.ib(default=True)
@attr.s
class RedshiftRenderPluginInfo():
SceneFile = attr.ib(default=None)
Version = attr.ib(default=None)
class HoudiniSubmitDeadline(
abstract_submit_deadline.AbstractSubmitDeadline,
OpenPypePyblishPluginMixin
@ -124,7 +131,7 @@ class HoudiniSubmitDeadline(
# Whether Deadline render submission is being split in two
# (extract + render)
split_render_job = instance.data["exportJob"]
split_render_job = instance.data.get("splitRender")
# If there's some dependency job ids we can assume this is a render job
# and not an export job
@ -132,18 +139,21 @@ class HoudiniSubmitDeadline(
if dependency_job_ids:
is_export_job = False
job_type = "[RENDER]"
if split_render_job and not is_export_job:
# Convert from family to Deadline plugin name
# i.e., arnold_rop -> Arnold
plugin = instance.data["family"].replace("_rop", "").capitalize()
else:
plugin = "Houdini"
if split_render_job:
job_type = "[EXPORT IFD]"
job_info = DeadlineJobInfo(Plugin=plugin)
filepath = context.data["currentFile"]
filename = os.path.basename(filepath)
job_info.Name = "{} - {}".format(filename, instance.name)
job_info.Name = "{} - {} {}".format(filename, instance.name, job_type)
job_info.BatchName = filename
job_info.UserName = context.data.get(
@ -259,6 +269,25 @@ class HoudiniSubmitDeadline(
plugin_info = VrayRenderPluginInfo(
InputFilename=instance.data["ifdFile"],
)
elif family == "redshift_rop":
plugin_info = RedshiftRenderPluginInfo(
SceneFile=instance.data["ifdFile"]
)
# Note: To use different versions of Redshift on Deadline
# set the `REDSHIFT_VERSION` env variable in the Tools
# settings in the AYON Application plugin. You will also
# need to set that version in `Redshift.param` file
# of the Redshift Deadline plugin:
# [Redshift_Executable_*]
# where * is the version number.
if os.getenv("REDSHIFT_VERSION"):
plugin_info.Version = os.getenv("REDSHIFT_VERSION")
else:
self.log.warning((
"REDSHIFT_VERSION env variable is not set"
" - using version configured in Deadline"
))
else:
self.log.error(
"Family '%s' not supported yet to split render job",

View file

@ -15,6 +15,12 @@ from openpype.pipeline import (
from openpype.pipeline.publish.lib import (
replace_with_published_scene_path
)
from openpype.pipeline.publish import KnownPublishError
from openpype.hosts.max.api.lib import (
get_current_renderer,
get_multipass_setting
)
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype_modules.deadline import abstract_submit_deadline
from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
from openpype.lib import is_running_from_build
@ -54,7 +60,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
cls.priority)
cls.chunk_size = settings.get("chunk_size", cls.chunk_size)
cls.group = settings.get("group", cls.group)
# TODO: multiple camera instance, separate job infos
def get_job_info(self):
job_info = DeadlineJobInfo(Plugin="3dsmax")
@ -71,7 +77,6 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
src_filepath = context.data["currentFile"]
src_filename = os.path.basename(src_filepath)
job_info.Name = "%s - %s" % (src_filename, instance.name)
job_info.BatchName = src_filename
job_info.Plugin = instance.data["plugin"]
@ -134,11 +139,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
# Add list of expected files to job
# ---------------------------------
exp = instance.data.get("expectedFiles")
for filepath in self._iter_expected_files(exp):
job_info.OutputDirectory += os.path.dirname(filepath)
job_info.OutputFilename += os.path.basename(filepath)
if not instance.data.get("multiCamera"):
exp = instance.data.get("expectedFiles")
for filepath in self._iter_expected_files(exp):
job_info.OutputDirectory += os.path.dirname(filepath)
job_info.OutputFilename += os.path.basename(filepath)
return job_info
@ -163,11 +168,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
def process_submission(self):
instance = self._instance
filepath = self.scene_path
filepath = instance.context.data["currentFile"]
files = instance.data["expectedFiles"]
if not files:
raise RuntimeError("No Render Elements found!")
raise KnownPublishError("No Render Elements found!")
first_file = next(self._iter_expected_files(files))
output_dir = os.path.dirname(first_file)
instance.data["outputDir"] = output_dir
@ -181,9 +186,17 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
self.log.debug("Submitting 3dsMax render..")
project_settings = instance.context.data["project_settings"]
payload = self._use_published_name(payload_data, project_settings)
job_info, plugin_info = payload
self.submit(self.assemble_payload(job_info, plugin_info))
if instance.data.get("multiCamera"):
self.log.debug("Submitting jobs for multiple cameras..")
payload = self._use_published_name_for_multiples(
payload_data, project_settings)
job_infos, plugin_infos = payload
for job_info, plugin_info in zip(job_infos, plugin_infos):
self.submit(self.assemble_payload(job_info, plugin_info))
else:
payload = self._use_published_name(payload_data, project_settings)
job_info, plugin_info = payload
self.submit(self.assemble_payload(job_info, plugin_info))
def _use_published_name(self, data, project_settings):
# Not all hosts can import these modules.
@ -206,7 +219,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
files = instance.data.get("expectedFiles")
if not files:
raise RuntimeError("No render elements found")
raise KnownPublishError("No render elements found")
first_file = next(self._iter_expected_files(files))
old_output_dir = os.path.dirname(first_file)
output_beauty = RenderSettings().get_render_output(instance.name,
@ -218,6 +231,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
plugin_data["RenderOutput"] = beauty_name
# as 3dsmax has version with different languages
plugin_data["Language"] = "ENU"
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
@ -249,6 +263,120 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
return job_info, plugin_info
def get_job_info_through_camera(self, camera):
"""Get the job parameters for deadline submission when
multi-camera is enabled.
Args:
infos(dict): a dictionary with job info.
"""
instance = self._instance
context = instance.context
job_info = copy.deepcopy(self.job_info)
exp = instance.data.get("expectedFiles")
src_filepath = context.data["currentFile"]
src_filename = os.path.basename(src_filepath)
job_info.Name = "%s - %s - %s" % (
src_filename, instance.name, camera)
for filepath in self._iter_expected_files(exp):
if camera not in filepath:
continue
job_info.OutputDirectory += os.path.dirname(filepath)
job_info.OutputFilename += os.path.basename(filepath)
return job_info
# set the output filepath with the relative camera
def get_plugin_info_through_camera(self, camera):
"""Get the plugin parameters for deadline submission when
multi-camera is enabled.
Args:
camera (str): name of the camera the plugin info is built for.
Returns:
dict: plugin info for the given camera.
"""
instance = self._instance
# set the target camera
plugin_info = copy.deepcopy(self.plugin_info)
plugin_data = {}
# set the output filepath with the relative camera
if instance.data.get("multiCamera"):
scene_filepath = instance.context.data["currentFile"]
scene_filename = os.path.basename(scene_filepath)
scene_directory = os.path.dirname(scene_filepath)
current_filename, ext = os.path.splitext(scene_filename)
camera_scene_name = f"{current_filename}_{camera}{ext}"
camera_scene_filepath = os.path.join(
scene_directory, f"_{current_filename}", camera_scene_name)
plugin_data["SceneFile"] = camera_scene_filepath
files = instance.data.get("expectedFiles")
if not files:
raise KnownPublishError("No render elements found")
first_file = next(self._iter_expected_files(files))
old_output_dir = os.path.dirname(first_file)
rgb_output = RenderSettings().get_batch_render_output(camera) # noqa
rgb_bname = os.path.basename(rgb_output)
dir = os.path.dirname(first_file)
beauty_name = f"{dir}/{rgb_bname}"
beauty_name = beauty_name.replace("\\", "/")
plugin_info["RenderOutput"] = beauty_name
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
if renderer in [
"ART_Renderer",
"Redshift_Renderer",
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3",
"Default_Scanline_Renderer",
"Quicksilver_Hardware_Renderer",
]:
render_elem_list = RenderSettings().get_batch_render_elements(
instance.name, old_output_dir, camera
)
for i, element in enumerate(render_elem_list):
if camera in element:
elem_bname = os.path.basename(element)
new_elem = f"{dir}/{elem_bname}"
new_elem = new_elem.replace("/", "\\")
plugin_info["RenderElementOutputFilename%d" % i] = new_elem # noqa
if camera:
# set the default camera and target camera
# (weird parameters from max)
plugin_data["Camera"] = camera
plugin_data["Camera1"] = camera
plugin_data["Camera0"] = None
plugin_info.update(plugin_data)
return plugin_info
def _use_published_name_for_multiples(self, data, project_settings):
"""Process the parameters submission for deadline when
user enables multi-cameras option.
Args:
job_info_list (list): A list of multiple job infos
plugin_info_list (list): A list of multiple plugin infos
"""
job_info_list = []
plugin_info_list = []
instance = self._instance
cameras = instance.data.get("cameras", [])
plugin_data = {}
multipass = get_multipass_setting(project_settings)
if multipass:
plugin_data["DisableMultipass"] = 0
else:
plugin_data["DisableMultipass"] = 1
for cam in cameras:
job_info = self.get_job_info_through_camera(cam)
plugin_info = self.get_plugin_info_through_camera(cam)
plugin_info.update(plugin_data)
job_info_list.append(job_info)
plugin_info_list.append(plugin_info)
return job_info_list, plugin_info_list
def from_published_scene(self, replace_in_path=True):
instance = self._instance
if instance.data["renderer"] == "Redshift_Renderer":

View file

@ -231,7 +231,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1"
# Adding file dependencies.
if self.asset_dependencies:
if not bool(os.environ.get("IS_TEST")) and self.asset_dependencies:
dependencies = instance.context.data["fileDependencies"]
for dependency in dependencies:
job_info.AssetDependency += dependency
@ -570,7 +570,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
job_info = copy.deepcopy(self.job_info)
if self.asset_dependencies:
if not bool(os.environ.get("IS_TEST")) and self.asset_dependencies:
# Asset dependency to wait for at least the scene file to sync.
job_info.AssetDependency += self.scene_path

View file

@ -47,6 +47,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
env_allowed_keys = []
env_search_replace_values = {}
workfile_dependency = True
use_published_workfile = True
@classmethod
def get_attribute_defs(cls):
@ -85,8 +86,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
),
BoolDef(
"workfile_dependency",
default=True,
default=cls.workfile_dependency,
label="Workfile Dependency"
),
BoolDef(
"use_published_workfile",
default=cls.use_published_workfile,
label="Use Published Workfile"
)
]
@ -125,20 +131,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
render_path = instance.data['path']
script_path = context.data["currentFile"]
for item_ in context:
if "workfile" in item_.data["family"]:
template_data = item_.data.get("anatomyData")
rep = item_.data.get("representations")[0].get("name")
template_data["representation"] = rep
template_data["ext"] = rep
template_data["comment"] = None
anatomy_filled = context.data["anatomy"].format(template_data)
template_filled = anatomy_filled["publish"]["path"]
script_path = os.path.normpath(template_filled)
self.log.info(
"Using published scene for render {}".format(script_path)
)
use_published_workfile = instance.data["attributeValues"].get(
"use_published_workfile", self.use_published_workfile
)
if use_published_workfile:
script_path = self._get_published_workfile_path(context)
# only add main rendering job if target is not frames_farm
r_job_response_json = None
@ -197,6 +194,44 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
families.insert(0, "prerender")
instance.data["families"] = families
def _get_published_workfile_path(self, context):
"""This method is temporary while the class is not inherited from
AbstractSubmitDeadline"""
for instance in context:
if (
instance.data["family"] != "workfile"
# Disabled instances won't be integrated
or instance.data("publish") is False
):
continue
template_data = instance.data["anatomyData"]
# Expect workfile instance has only one representation
representation = instance.data["representations"][0]
# Get workfile extension
repre_file = representation["files"]
self.log.info(repre_file)
ext = os.path.splitext(repre_file)[1].lstrip(".")
# Fill template data
template_data["representation"] = representation["name"]
template_data["ext"] = ext
template_data["comment"] = None
anatomy = context.data["anatomy"]
# WARNING Hardcoded template name 'publish' > may not be used
template_obj = anatomy.templates_obj["publish"]["path"]
template_filled = template_obj.format(template_data)
script_path = os.path.normpath(template_filled)
self.log.info(
"Using published scene for render {}".format(
script_path
)
)
return script_path
return None
def payload_submit(
self,
instance,
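
To make the new _get_published_workfile_path flow above easier to follow, here is a simplified, standalone sketch of the same template fill. It uses a plain format string instead of the real Anatomy "publish" template object, and all example values (template text, project, subset, file names) are invented for illustration.

import os

def fill_publish_path_example(template, template_data, representation):
    # Same keys the plugin fills before formatting the publish template.
    data = dict(template_data)
    data["representation"] = representation["name"]
    data["ext"] = os.path.splitext(representation["files"])[1].lstrip(".")
    data["comment"] = None
    return os.path.normpath(template.format(**data))

print(fill_publish_path_example(
    "{root}/{project}/publish/workfile/{subset}/v{version:03d}/{subset}_v{version:03d}.{ext}",
    {"root": "/mnt/projects", "project": "demo",
     "subset": "workfileCompositing", "version": 5},
    {"name": "nk", "files": "demo_compositing_v005.nk"},
))
# -> /mnt/projects/demo/publish/workfile/workfileCompositing/v005/workfileCompositing_v005.nk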

View file

@ -99,10 +99,6 @@ class ProcessSubmittedCacheJobOnFarm(pyblish.api.InstancePlugin,
def _submit_deadline_post_job(self, instance, job):
"""Submit publish job to Deadline.
Deadline specific code separated from :meth:`process` for sake of
more universal code. Muster post job is sent directly by Muster
submitter, so this type of code isn't necessary for it.
Returns:
(str): deadline_publish_job_id
"""

View file

@ -59,21 +59,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
publish.ColormanagedPyblishPluginMixin):
"""Process Job submitted on farm.
These jobs are dependent on a deadline or muster job
These jobs are dependent on a deadline job
submission prior to this plug-in.
- In case of Deadline, it creates dependent job on farm publishing
rendered image sequence.
- In case of Muster, there is no need for such thing as dependent job,
post action will be executed and rendered sequence will be published.
It creates a dependent job on the farm that publishes the rendered image sequence.
Options in instance.data:
- deadlineSubmissionJob (dict, Required): The returned .json
data from the job submission to deadline.
- musterSubmissionJob (dict, Required): same as deadline.
- outputDir (str, Required): The output directory where the metadata
file should be generated. It's assumed that this will also be
final folder containing the output files.
@ -89,7 +83,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
"""
label = "Submit image sequence jobs to Deadline or Muster"
label = "Submit Image Publishing job to Deadline"
order = pyblish.api.IntegratorOrder + 0.2
icon = "tractor"
@ -161,10 +155,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
def _submit_deadline_post_job(self, instance, job, instances):
"""Submit publish job to Deadline.
Deadline specific code separated from :meth:`process` for sake of
more universal code. Muster post job is sent directly by Muster
submitter, so this type of code isn't necessary for it.
Returns:
(str): deadline_publish_job_id
"""
@ -297,7 +287,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
job_index)] = assembly_id # noqa: E501
job_index += 1
elif instance.data.get("bakingSubmissionJobs"):
self.log.info("Adding baking submission jobs as dependencies...")
self.log.info(
"Adding baking submission jobs as dependencies..."
)
job_index = 0
for assembly_id in instance.data["bakingSubmissionJobs"]:
payload["JobInfo"]["JobDependency{}".format(
@ -582,20 +574,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
'''
render_job = None
submission_type = ""
if instance.data.get("toBeRenderedOn") == "deadline":
render_job = instance.data.pop("deadlineSubmissionJob", None)
submission_type = "deadline"
if instance.data.get("toBeRenderedOn") == "muster":
render_job = instance.data.pop("musterSubmissionJob", None)
submission_type = "muster"
render_job = instance.data.pop("deadlineSubmissionJob", None)
if not render_job and instance.data.get("tileRendering") is False:
raise AssertionError(("Cannot continue without valid Deadline "
"or Muster submission."))
raise AssertionError(("Cannot continue without valid "
"Deadline submission."))
if not render_job:
import getpass
@ -624,21 +606,19 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
"FTRACK_SERVER": os.environ.get("FTRACK_SERVER"),
}
deadline_publish_job_id = None
if submission_type == "deadline":
# get default deadline webservice url from deadline module
self.deadline_url = instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
self.deadline_url = instance.data.get("deadlineUrl")
assert self.deadline_url, "Requires Deadline Webservice URL"
# get default deadline webservice url from deadline module
self.deadline_url = instance.context.data["defaultDeadline"]
# if custom one is set in instance, use that
if instance.data.get("deadlineUrl"):
self.deadline_url = instance.data.get("deadlineUrl")
assert self.deadline_url, "Requires Deadline Webservice URL"
deadline_publish_job_id = \
self._submit_deadline_post_job(instance, render_job, instances)
deadline_publish_job_id = \
self._submit_deadline_post_job(instance, render_job, instances)
# Inject deadline url to instances.
for inst in instances:
inst["deadlineUrl"] = self.deadline_url
# Inject deadline url to instances.
for inst in instances:
inst["deadlineUrl"] = self.deadline_url
# publish job file
publish_job = {
@ -664,15 +644,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
if audio_file and os.path.isfile(audio_file):
publish_job.update({"audio": audio_file})
# pass Ftrack credentials in case of Muster
if submission_type == "muster":
ftrack = {
"FTRACK_API_USER": os.environ.get("FTRACK_API_USER"),
"FTRACK_API_KEY": os.environ.get("FTRACK_API_KEY"),
"FTRACK_SERVER": os.environ.get("FTRACK_SERVER"),
}
publish_job.update({"ftrack": ftrack})
metadata_path, rootless_metadata_path = \
create_metadata_path(instance, anatomy)
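
A rough sketch of the dependency wiring used above: the publish job's JobInfo carries one JobDependency<index> entry per upstream job (the render job plus any tile assembly or baking submissions). The dictionary below is a toy payload, not the full JobInfo the plugin actually builds, and the job name is illustrative.

def build_publish_job_info(render_job_id, extra_job_ids=None):
    # Toy example of the JobDependency{index} pattern; real submissions
    # include many more keys (pool, priority, environment, ...).
    job_info = {
        "Name": "Publish - image sequence",
        "JobDependency0": render_job_id,
    }
    job_index = 1
    for job_id in (extra_job_ids or []):
        job_info["JobDependency{}".format(job_index)] = job_id
        job_index += 1
    return job_info

print(build_publish_job_info("render-123", ["bake-456", "bake-789"]))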

View file

@ -13,7 +13,7 @@ class DJVViewAction(BaseAction):
description = "DJV View Launcher"
icon = statics_icon("app_icons", "djvView.png")
type = 'Application'
type = "Application"
allowed_types = [
"cin", "dpx", "avi", "dv", "gif", "flv", "mkv", "mov", "mpg", "mpeg",
@ -60,7 +60,7 @@ class DJVViewAction(BaseAction):
return False
def interface(self, session, entities, event):
if event['data'].get('values', {}):
if event["data"].get("values", {}):
return
entity = entities[0]
@ -70,32 +70,32 @@ class DJVViewAction(BaseAction):
if entity_type == "assetversion":
if (
entity[
'components'
][0]['file_type'][1:] in self.allowed_types
"components"
][0]["file_type"][1:] in self.allowed_types
):
versions.append(entity)
else:
master_entity = entity
if entity_type == "task":
master_entity = entity['parent']
master_entity = entity["parent"]
for asset in master_entity['assets']:
for version in asset['versions']:
for asset in master_entity["assets"]:
for version in asset["versions"]:
# Get only AssetVersion of selected task
if (
entity_type == "task" and
version['task']['id'] != entity['id']
version["task"]["id"] != entity["id"]
):
continue
# Get only components with allowed type
filetype = version['components'][0]['file_type']
filetype = version["components"][0]["file_type"]
if filetype[1:] in self.allowed_types:
versions.append(version)
if len(versions) < 1:
return {
'success': False,
'message': 'There are no Asset Versions to open.'
"success": False,
"message": "There are no Asset Versions to open."
}
# TODO sort them (somehow?)
@ -134,68 +134,68 @@ class DJVViewAction(BaseAction):
last_available = None
select_value = None
for version in versions:
for component in version['components']:
for component in version["components"]:
label = base_label.format(
str(version['version']).zfill(3),
version['asset']['type']['name'],
component['name']
str(version["version"]).zfill(3),
version["asset"]["type"]["name"],
component["name"]
)
try:
location = component[
'component_locations'
][0]['location']
"component_locations"
][0]["location"]
file_path = location.get_filesystem_path(component)
except Exception:
file_path = component[
'component_locations'
][0]['resource_identifier']
"component_locations"
][0]["resource_identifier"]
if os.path.isdir(os.path.dirname(file_path)):
last_available = file_path
if component['name'] == default_component:
if component["name"] == default_component:
select_value = file_path
version_items.append(
{'label': label, 'value': file_path}
{"label": label, "value": file_path}
)
if len(version_items) == 0:
return {
'success': False,
'message': (
'There are no Asset Versions with accessible path.'
"success": False,
"message": (
"There are no Asset Versions with accessible path."
)
}
item = {
'label': 'Items to view',
'type': 'enumerator',
'name': 'path',
'data': sorted(
"label": "Items to view",
"type": "enumerator",
"name": "path",
"data": sorted(
version_items,
key=itemgetter('label'),
key=itemgetter("label"),
reverse=True
)
}
if select_value is not None:
item['value'] = select_value
item["value"] = select_value
else:
item['value'] = last_available
item["value"] = last_available
items.append(item)
return {'items': items}
return {"items": items}
def launch(self, session, entities, event):
"""Callback method for DJVView action."""
# Launching application
event_data = event["data"]
if "values" not in event_data:
event_values = event["data"].get("values")
if not event_values:
return
djv_app_name = event_data["djv_app_name"]
app = self.applicaion_manager.applications.get(djv_app_name)
djv_app_name = event_values["djv_app_name"]
app = self.application_manager.applications.get(djv_app_name)
executable = None
if app is not None:
executable = app.find_executable()
@ -206,18 +206,21 @@ class DJVViewAction(BaseAction):
"message": "Couldn't find DJV executable."
}
filpath = os.path.normpath(event_data["values"]["path"])
filpath = os.path.normpath(event_values["path"])
cmd = [
# DJV path
executable,
str(executable),
# PATH TO COMPONENT
filpath
]
try:
# Run DJV with these commands
subprocess.Popen(cmd)
_process = subprocess.Popen(cmd)
# Keep process in memory for some time
time.sleep(0.1)
except FileNotFoundError:
return {
"success": False,

View file

@ -64,8 +64,10 @@ def clear_credentials():
user_registry = OpenPypeSecureRegistry("kitsu_user")
# Set local settings
user_registry.delete_item("login")
user_registry.delete_item("password")
if user_registry.get_item("login", None) is not None:
user_registry.delete_item("login")
if user_registry.get_item("password", None) is not None:
user_registry.delete_item("password")
def save_credentials(login: str, password: str):
@ -92,8 +94,9 @@ def load_credentials() -> Tuple[str, str]:
# Get user registry
user_registry = OpenPypeSecureRegistry("kitsu_user")
return user_registry.get_item("login", None), user_registry.get_item(
"password", None
return (
user_registry.get_item("login", None),
user_registry.get_item("password", None)
)
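
The credentials change above only deletes registry keys that actually hold a value, so clearing never-stored credentials no longer raises. A generic sketch of that guard, usable with any object exposing the get_item/delete_item calls shown above; the commented usage assumes the same OpenPypeSecureRegistry name.

def delete_item_if_set(registry, key):
    # Delete a stored key only when it exists, mirroring clear_credentials().
    if registry.get_item(key, None) is not None:
        registry.delete_item(key)

# Usage, following the code above:
#   user_registry = OpenPypeSecureRegistry("kitsu_user")
#   delete_item_if_set(user_registry, "login")
#   delete_item_if_set(user_registry, "password")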

View file

@ -1,6 +0,0 @@
from .muster import MusterModule
__all__ = (
"MusterModule",
)

View file

@ -1,147 +0,0 @@
import os
import json
import appdirs
import requests
from openpype.modules import OpenPypeModule, ITrayModule
class MusterModule(OpenPypeModule, ITrayModule):
"""
Module handling Muster Render credentials. This will display dialog
asking for user credentials for Muster if not already specified.
"""
cred_folder_path = os.path.normpath(
appdirs.user_data_dir('pype-app', 'pype')
)
cred_filename = 'muster_cred.json'
name = "muster"
def initialize(self, modules_settings):
muster_settings = modules_settings[self.name]
self.enabled = muster_settings["enabled"]
self.muster_url = muster_settings["MUSTER_REST_URL"]
self.cred_path = os.path.join(
self.cred_folder_path, self.cred_filename
)
# Tray attributes
self.widget_login = None
self.action_show_login = None
self.rest_api_obj = None
def get_global_environments(self):
return {
"MUSTER_REST_URL": self.muster_url
}
def tray_init(self):
from .widget_login import MusterLogin
self.widget_login = MusterLogin(self)
def tray_start(self):
"""Show login dialog if credentials not found."""
# This should be start of module in tray
cred = self.load_credentials()
if not cred:
self.show_login()
def tray_exit(self):
"""Nothing special for Muster."""
return
# Definition of Tray menu
def tray_menu(self, parent):
"""Add **change credentials** option to tray menu."""
from qtpy import QtWidgets
# Menu for Tray App
menu = QtWidgets.QMenu('Muster', parent)
menu.setProperty('submenu', 'on')
# Actions
self.action_show_login = QtWidgets.QAction(
"Change login", menu
)
menu.addAction(self.action_show_login)
self.action_show_login.triggered.connect(self.show_login)
parent.addMenu(menu)
def load_credentials(self):
"""
Get credentials from JSON file
"""
credentials = {}
try:
file = open(self.cred_path, 'r')
credentials = json.load(file)
except Exception:
file = open(self.cred_path, 'w+')
file.close()
return credentials
def get_auth_token(self, username, password):
"""
Authenticate user with Muster and get authToken from server.
"""
if not self.muster_url:
raise AttributeError("Muster REST API url not set")
params = {
'username': username,
'password': password
}
api_entry = '/api/login'
response = self._requests_post(
self.muster_url + api_entry, params=params)
if response.status_code != 200:
self.log.error(
'Cannot log into Muster: {}'.format(response.status_code))
raise Exception('Cannot login into Muster.')
try:
token = response.json()['ResponseData']['authToken']
except ValueError as e:
self.log.error('Invalid response from Muster server {}'.format(e))
raise Exception('Invalid response from Muster while logging in.')
self.save_credentials(token)
def save_credentials(self, token):
"""Save credentials to JSON file."""
with open(self.cred_path, "w") as f:
json.dump({'token': token}, f)
def show_login(self):
"""
Show dialog to enter credentials
"""
if self.widget_login:
self.widget_login.show()
# Webserver module implementation
def webserver_initialization(self, server_manager):
"""Add routes for Muster login."""
if self.tray_initialized:
from .rest_api import MusterModuleRestApi
self.rest_api_obj = MusterModuleRestApi(self, server_manager)
def _requests_post(self, *args, **kwargs):
""" Wrapper for requests, disabling SSL certificate validation if
DONT_VERIFY_SSL environment variable is found. This is useful when
Deadline or Muster server are running with self-signed certificates
and their certificate is not added to trusted certificates on
client machines.
WARNING: disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
return requests.post(*args, **kwargs)

View file

@ -1,555 +0,0 @@
import os
import json
import getpass
import platform
import appdirs
from maya import cmds
import pyblish.api
from openpype.lib import requests_post
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.lib_rendersettings import RenderSettings
from openpype.pipeline import legacy_io
from openpype.settings import get_system_settings
# mapping between Maya renderer names and Muster template ids
def _get_template_id(renderer):
"""
Return muster template ID based on renderer name.
:param renderer: renderer name
:type renderer: str
:returns: muster template id
:rtype: int
"""
# TODO: Use settings from context?
templates = get_system_settings()["modules"]["muster"]["templates_mapping"]
if not templates:
raise RuntimeError(("Muster template mapping missing in "
"pype-settings"))
try:
template_id = templates[renderer]
except KeyError:
raise RuntimeError("Unmapped renderer - missing template id")
return template_id
def _get_script():
"""Get path to the image sequence script"""
try:
from openpype.scripts import publish_filesequence
except Exception:
raise RuntimeError("Expected module 'publish_deadline'"
"to be available")
module_path = publish_filesequence.__file__
if module_path.endswith(".pyc"):
module_path = module_path[:-len(".pyc")] + ".py"
return module_path
def get_renderer_variables(renderlayer=None):
"""Retrieve the extension which has been set in the VRay settings
Will return None if the current renderer is not VRay
For Maya 2016.5 and up the renderSetup creates renderSetupLayer node which
start with `rs`. Use the actual node name, do NOT use the `nice name`
Args:
renderlayer (str): the node name of the renderlayer.
Returns:
dict
"""
renderer = lib.get_renderer(renderlayer or lib.get_current_renderlayer())
padding = cmds.getAttr(RenderSettings.get_padding_attr(renderer))
filename_0 = cmds.renderSettings(fullPath=True, firstImageName=True)[0]
if renderer == "vray":
# Maya's renderSettings function does not return V-Ray file extension
# so we get the extension from vraySettings
extension = cmds.getAttr("vraySettings.imageFormatStr")
# When V-Ray image format has not been switched once from default .png
# the getAttr command above returns None. As such we explicitly set
# it to `.png`
if extension is None:
extension = "png"
filename_prefix = "<Scene>/<Scene>_<Layer>/<Layer>"
else:
# Get the extension, getAttr defaultRenderGlobals.imageFormat
# returns an index number.
filename_base = os.path.basename(filename_0)
extension = os.path.splitext(filename_base)[-1].strip(".")
filename_prefix = "<Scene>/<RenderLayer>/<RenderLayer>"
return {"ext": extension,
"filename_prefix": filename_prefix,
"padding": padding,
"filename_0": filename_0}
def preview_fname(folder, scene, layer, padding, ext):
"""Return output file path with #### for padding.
Deadline requires the path to be formatted with # in place of numbers.
For example `/path/to/render.####.png`
Args:
folder (str): The root output folder (image path)
scene (str): The scene name
layer (str): The layer name to be rendered
padding (int): The padding length
ext(str): The output file extension
Returns:
str
"""
# Following hardcoded "<Scene>/<Scene>_<Layer>/<Layer>"
output = "{scene}/{layer}/{layer}.{number}.{ext}".format(
scene=scene,
layer=layer,
number="#" * padding,
ext=ext
)
return os.path.join(folder, output)
class MayaSubmitMuster(pyblish.api.InstancePlugin):
"""Submit available render layers to Muster
Renders are submitted to a Muster via HTTP API as
supplied via the environment variable ``MUSTER_REST_URL``.
Also needed is ``MUSTER_USER`` and ``MUSTER_PASSWORD``.
"""
label = "Submit to Muster"
order = pyblish.api.IntegratorOrder + 0.1
hosts = ["maya"]
families = ["renderlayer"]
icon = "satellite-dish"
if not os.environ.get("MUSTER_REST_URL"):
optional = False
active = False
else:
optional = True
_token = None
def _load_credentials(self):
"""
Load Muster credentials from file and set `MUSTER_USER`,
`MUSTER_PASSWORD`, `MUSTER_REST_URL` is loaded from settings.
.. todo::
Show login dialog if access token is invalid or missing.
"""
app_dir = os.path.normpath(
appdirs.user_data_dir('pype-app', 'pype')
)
file_name = 'muster_cred.json'
fpath = os.path.join(app_dir, file_name)
file = open(fpath, 'r')
muster_json = json.load(file)
self._token = muster_json.get('token', None)
if not self._token:
raise RuntimeError("Invalid access token for Muster")
file.close()
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:
raise AttributeError("Muster REST API url not set")
def _get_templates(self):
"""
Get Muster templates from server.
"""
params = {
"authToken": self._token,
"select": "name"
}
api_entry = '/api/templates/list'
response = requests_post(
self.MUSTER_REST_URL + api_entry, params=params)
if response.status_code != 200:
self.log.error(
'Cannot get templates from Muster: {}'.format(
response.status_code))
raise Exception('Cannot get templates from Muster.')
try:
response_templates = response.json()["ResponseData"]["templates"]
except ValueError as e:
self.log.error(
'Muster server returned unexpected data {}'.format(e)
)
raise Exception('Muster server returned unexpected data')
templates = {}
for t in response_templates:
templates[t.get("name")] = t.get("id")
self._templates = templates
def _resolve_template(self, renderer):
"""
Returns template ID based on renderer string.
:param renderer: Name of renderer to match against template names
:type renderer: str
:returns: ID of template
:rtype: int
:raises: Exception if template ID isn't found
"""
self.log.debug("Trying to find template for [{}]".format(renderer))
mapped = _get_template_id(renderer)
self.log.debug("got id [{}]".format(mapped))
return self._templates.get(mapped)
def _submit(self, payload):
"""
Submit job to Muster
:param payload: json with job to submit
:type payload: str
:returns: response
:raises: Exception status is wrong
"""
params = {
"authToken": self._token,
"name": "submit"
}
api_entry = '/api/queue/actions'
response = requests_post(
self.MUSTER_REST_URL + api_entry, params=params, json=payload)
if response.status_code != 200:
self.log.error(
'Cannot submit job to Muster: {}'.format(response.text))
raise Exception('Cannot submit job to Muster.')
return response
def process(self, instance):
"""
Authenticate with Muster, collect all data, prepare path for post
render publish job and submit job to farm.
"""
# setup muster environment
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if self.MUSTER_REST_URL is None:
self.log.error(
"\"MUSTER_REST_URL\" is not found. Skipping "
"[{}]".format(instance)
)
raise RuntimeError("MUSTER_REST_URL not set")
self._load_credentials()
# self._get_templates()
context = instance.context
workspace = context.data["workspaceDir"]
project_name = context.data["projectName"]
asset_name = context.data["asset"]
filepath = None
allInstances = []
for result in context.data["results"]:
if ((result["instance"] is not None) and
(result["instance"] not in allInstances)):
allInstances.append(result["instance"])
for inst in allInstances:
print(inst)
if inst.data['family'] == 'scene':
filepath = inst.data['destination_list'][0]
if not filepath:
filepath = context.data["currentFile"]
self.log.debug(filepath)
filename = os.path.basename(filepath)
comment = context.data.get("comment", "")
scene = os.path.splitext(filename)[0]
dirname = os.path.join(workspace, "renders")
renderlayer = instance.data['renderlayer'] # rs_beauty
renderlayer_name = instance.data['subset'] # beauty
renderglobals = instance.data["renderGlobals"]
# legacy_layers = renderlayer_globals["UseLegacyRenderLayers"]
# deadline_user = context.data.get("deadlineUser", getpass.getuser())
jobname = "%s - %s" % (filename, instance.name)
# Get the variables depending on the renderer
render_variables = get_renderer_variables(renderlayer)
output_filename_0 = preview_fname(folder=dirname,
scene=scene,
layer=renderlayer_name,
padding=render_variables["padding"],
ext=render_variables["ext"])
instance.data["outputDir"] = os.path.dirname(output_filename_0)
self.log.debug("output: {}".format(filepath))
# build path for metadata file
metadata_filename = "{}_metadata.json".format(instance.data["subset"])
output_dir = instance.data["outputDir"]
metadata_path = os.path.join(output_dir, metadata_filename)
pype_root = os.environ["OPENPYPE_SETUP_PATH"]
# we must provide either full path to executable or use musters own
# python named MPython.exe, residing directly in muster bin
# directory.
if platform.system().lower() == "windows":
# for muster, those backslashes must be escaped twice
muster_python = ("\"C:\\\\Program Files\\\\Virtual Vertex\\\\"
"Muster 9\\\\MPython.exe\"")
else:
# we need to run pype as different user then Muster dispatcher
# service is running (usually root).
muster_python = ("/usr/sbin/runuser -u {}"
" -- /usr/bin/python3".format(getpass.getuser()))
# build the path and argument. We are providing separate --pype
# argument with network path to pype as post job actions are run
# but dispatcher (Server) and not render clients. Render clients
# inherit environment from publisher including PATH, so there's
# no problem finding PYPE, but there is now way (as far as I know)
# to set environment dynamically for dispatcher. Therefore this hack.
args = [muster_python,
_get_script().replace('\\', '\\\\'),
"--paths",
metadata_path.replace('\\', '\\\\'),
"--pype",
pype_root.replace('\\', '\\\\')]
postjob_command = " ".join(args)
try:
# Ensure render folder exists
os.makedirs(dirname)
except OSError:
pass
env = self.clean_environment()
payload = {
"RequestData": {
"platform": 0,
"job": {
"jobName": jobname,
"templateId": _get_template_id(
instance.data["renderer"]),
"chunksInterleave": 2,
"chunksPriority": "0",
"chunksTimeoutValue": 320,
"department": "",
"dependIds": [""],
"dependLinkMode": 0,
"dependMode": 0,
"emergencyQueue": False,
"excludedPools": [""],
"includedPools": [renderglobals["Pool"]],
"packetSize": 4,
"packetType": 1,
"priority": 1,
"jobId": -1,
"startOn": 0,
"parentId": -1,
"project": project_name or scene,
"shot": asset_name or scene,
"camera": instance.data.get("cameras")[0],
"dependMode": 0,
"packetSize": 4,
"packetType": 1,
"priority": 1,
"maximumInstances": 0,
"assignedInstances": 0,
"attributes": {
"environmental_variables": {
"value": ", ".join("{!s}={!r}".format(k, v)
for (k, v) in env.items()),
"state": True,
"subst": False
},
"memo": {
"value": comment,
"state": True,
"subst": False
},
"frames_range": {
"value": "{start}-{end}".format(
start=int(instance.data["frameStart"]),
end=int(instance.data["frameEnd"])),
"state": True,
"subst": False
},
"job_file": {
"value": filepath,
"state": True,
"subst": True
},
"job_project": {
"value": workspace,
"state": True,
"subst": True
},
"output_folder": {
"value": dirname.replace("\\", "/"),
"state": True,
"subst": True
},
"post_job_action": {
"value": postjob_command,
"state": True,
"subst": True
},
"MAYADIGITS": {
"value": 1,
"state": True,
"subst": False
},
"ARNOLDMODE": {
"value": "0",
"state": True,
"subst": False
},
"ABORTRENDER": {
"value": "0",
"state": True,
"subst": True
},
"ARNOLDLICENSE": {
"value": "0",
"state": False,
"subst": False
},
"ADD_FLAGS": {
"value": "-rl {}".format(renderlayer),
"state": True,
"subst": True
}
}
}
}
}
self.preflight_check(instance)
self.log.debug("Submitting ...")
self.log.debug(json.dumps(payload, indent=4, sort_keys=True))
response = self._submit(payload)
# response = requests.post(url, json=payload)
if not response.ok:
raise Exception(response.text)
# Store output dir for unified publisher (filesequence)
instance.data["musterSubmissionJob"] = response.json()
def clean_environment(self):
"""
Clean and set environment variables for render job so render clients
work in more or less same environment as publishing machine.
.. warning:: This is not usable for **post job action** as this is
executed on dispatcher machine (server) and not render clients.
"""
keys = [
# This will trigger `userSetup.py` on the slave
# such that proper initialisation happens the same
# way as it does on a local machine.
# TODO(marcus): This won't work if the slaves don't
# have access to these paths, such as if slaves are
# running Linux and the submitter is on Windows.
"PYTHONPATH",
"PATH",
"MTOA_EXTENSIONS_PATH",
"MTOA_EXTENSIONS",
"DYLD_LIBRARY_PATH",
"MAYA_RENDER_DESC_PATH",
"MAYA_MODULE_PATH",
"ARNOLD_PLUGIN_PATH",
"FTRACK_API_KEY",
"FTRACK_API_USER",
"FTRACK_SERVER",
"PYBLISHPLUGINPATH",
# todo: This is a temporary fix for yeti variables
"PEREGRINEL_LICENSE",
"SOLIDANGLE_LICENSE",
"ARNOLD_LICENSE"
"MAYA_MODULE_PATH",
"TOOL_ENV"
]
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **legacy_io.Session)
# self.log.debug("enviro: {}".format(pprint(environment)))
for path in os.environ:
if path.lower().startswith('pype_'):
environment[path] = os.environ[path]
environment["PATH"] = os.environ["PATH"]
# self.log.debug("enviro: {}".format(environment['OPENPYPE_SCRIPTS']))
clean_environment = {}
for key, value in environment.items():
clean_path = ""
self.log.debug("key: {}".format(key))
if "://" in value:
clean_path = value
else:
valid_paths = []
for path in value.split(os.pathsep):
if not path:
continue
try:
path.decode('UTF-8', 'strict')
valid_paths.append(os.path.normpath(path))
except UnicodeDecodeError:
print('path contains non UTF characters')
if valid_paths:
clean_path = os.pathsep.join(valid_paths)
clean_environment[key] = clean_path
return clean_environment
def preflight_check(self, instance):
"""Ensure the startFrame, endFrame and byFrameStep are integers"""
for key in ("frameStart", "frameEnd", "byFrameStep"):
value = instance.data[key]
if int(value) == value:
continue
self.log.warning(
"%f=%d was rounded off to nearest integer"
% (value, int(value))
)
# TODO: Remove hack to avoid this plug-in in new publisher
# This plug-in should actually be in dedicated module
if not os.environ.get("MUSTER_REST_URL"):
del MayaSubmitMuster

View file

@ -1,96 +0,0 @@
import os
import json
import appdirs
import pyblish.api
from openpype.lib import requests_get
from openpype.pipeline.publish import (
context_plugin_should_run,
RepairAction,
)
class ValidateMusterConnection(pyblish.api.ContextPlugin):
"""
Validate Muster REST API Service is running and we have valid auth token
"""
label = "Validate Muster REST API Service"
order = pyblish.api.ValidatorOrder
hosts = ["maya"]
families = ["renderlayer"]
token = None
if not os.environ.get("MUSTER_REST_URL"):
active = False
actions = [RepairAction]
def process(self, context):
# Workaround bug pyblish-base#250
if not context_plugin_should_run(self, context):
return
# test if we have environment set (redundant as this plugin shouldn'
# be active otherwise).
try:
MUSTER_REST_URL = os.environ["MUSTER_REST_URL"]
except KeyError:
self.log.error("Muster REST API url not found.")
raise ValueError("Muster REST API url not found.")
# Load credentials
try:
self._load_credentials()
except RuntimeError:
self.log.error("invalid or missing access token")
assert self._token is not None, "Invalid or missing token"
# We have token, lets do trivial query to web api to see if we can
# connect and access token is valid.
params = {
'authToken': self._token
}
api_entry = '/api/pools/list'
response = requests_get(
MUSTER_REST_URL + api_entry, params=params)
assert response.status_code == 200, "invalid response from server"
assert response.json()['ResponseData'], "invalid data in response"
def _load_credentials(self):
"""
Load Muster credentials from file and set `MUSTER_USER`,
`MUSTER_PASSWORD`, `MUSTER_REST_URL` is loaded from settings.
.. todo::
Show login dialog if access token is invalid or missing.
"""
app_dir = os.path.normpath(
appdirs.user_data_dir('pype-app', 'pype')
)
file_name = 'muster_cred.json'
fpath = os.path.join(app_dir, file_name)
file = open(fpath, 'r')
muster_json = json.load(file)
self._token = muster_json.get('token', None)
if not self._token:
raise RuntimeError("Invalid access token for Muster")
file.close()
self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
if not self.MUSTER_REST_URL:
raise AttributeError("Muster REST API url not set")
@classmethod
def repair(cls, instance):
"""
Renew authentication token by logging into Muster
"""
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
cls.log.debug(api_url)
response = requests_get(api_url, timeout=1)
if response.status_code != 200:
cls.log.error('Cannot show login form to Muster')
raise Exception('Cannot show login form to Muster')

View file

@ -1,22 +0,0 @@
from aiohttp.web_response import Response
class MusterModuleRestApi:
def __init__(self, user_module, server_manager):
self.module = user_module
self.server_manager = server_manager
self.prefix = "/muster"
self.register()
def register(self):
self.server_manager.add_route(
"GET",
self.prefix + "/show_login",
self.show_login_widget
)
async def show_login_widget(self, request):
self.module.action_show_login.trigger()
return Response(status=200)

View file

@ -1,165 +0,0 @@
from qtpy import QtCore, QtGui, QtWidgets
from openpype import resources, style
class MusterLogin(QtWidgets.QWidget):
SIZE_W = 300
SIZE_H = 150
loginSignal = QtCore.Signal(object, object, object)
def __init__(self, module, parent=None):
super(MusterLogin, self).__init__(parent)
self.module = module
# Icon
icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
self.setWindowIcon(icon)
self.setWindowFlags(
QtCore.Qt.WindowCloseButtonHint |
QtCore.Qt.WindowMinimizeButtonHint
)
self._translate = QtCore.QCoreApplication.translate
# Font
self.font = QtGui.QFont()
self.font.setFamily("DejaVu Sans Condensed")
self.font.setPointSize(9)
self.font.setBold(True)
self.font.setWeight(50)
self.font.setKerning(True)
# Size setting
self.resize(self.SIZE_W, self.SIZE_H)
self.setMinimumSize(QtCore.QSize(self.SIZE_W, self.SIZE_H))
self.setMaximumSize(QtCore.QSize(self.SIZE_W+100, self.SIZE_H+100))
self.setStyleSheet(style.load_stylesheet())
self.setLayout(self._main())
self.setWindowTitle('Muster login')
def _main(self):
self.main = QtWidgets.QVBoxLayout()
self.main.setObjectName("main")
self.form = QtWidgets.QFormLayout()
self.form.setContentsMargins(10, 15, 10, 5)
self.form.setObjectName("form")
self.label_username = QtWidgets.QLabel("Username:")
self.label_username.setFont(self.font)
self.label_username.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
self.label_username.setTextFormat(QtCore.Qt.RichText)
self.input_username = QtWidgets.QLineEdit()
self.input_username.setEnabled(True)
self.input_username.setFrame(True)
self.input_username.setPlaceholderText(
self._translate("main", "e.g. John Smith")
)
self.label_password = QtWidgets.QLabel("Password:")
self.label_password.setFont(self.font)
self.label_password.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
self.label_password.setTextFormat(QtCore.Qt.RichText)
self.input_password = QtWidgets.QLineEdit()
self.input_password.setEchoMode(QtWidgets.QLineEdit.Password)
self.input_password.setEnabled(True)
self.input_password.setFrame(True)
self.input_password.setPlaceholderText(
self._translate("main", "e.g. ********")
)
self.error_label = QtWidgets.QLabel("")
self.error_label.setFont(self.font)
self.error_label.setStyleSheet('color: #FC6000')
self.error_label.setWordWrap(True)
self.error_label.hide()
self.form.addRow(self.label_username, self.input_username)
self.form.addRow(self.label_password, self.input_password)
self.form.addRow(self.error_label)
self.btn_group = QtWidgets.QHBoxLayout()
self.btn_group.addStretch(1)
self.btn_group.setObjectName("btn_group")
self.btn_ok = QtWidgets.QPushButton("Ok")
self.btn_ok.clicked.connect(self.click_ok)
self.btn_cancel = QtWidgets.QPushButton("Cancel")
QtWidgets.QShortcut(
QtGui.QKeySequence(
QtCore.Qt.Key_Escape), self).activated.connect(self.close)
self.btn_cancel.clicked.connect(self.close)
self.btn_group.addWidget(self.btn_ok)
self.btn_group.addWidget(self.btn_cancel)
self.main.addLayout(self.form)
self.main.addLayout(self.btn_group)
return self.main
def keyPressEvent(self, key_event):
if key_event.key() == QtCore.Qt.Key_Return:
if self.input_username.hasFocus():
self.input_password.setFocus()
elif self.input_password.hasFocus() or self.btn_ok.hasFocus():
self.click_ok()
elif self.btn_cancel.hasFocus():
self.close()
else:
super().keyPressEvent(key_event)
def setError(self, msg):
self.error_label.setText(msg)
self.error_label.show()
def invalid_input(self, entity):
entity.setStyleSheet("border: 1px solid red;")
def click_ok(self):
# all what should happen - validations and saving into appsdir
username = self.input_username.text()
password = self.input_password.text()
# TODO: more robust validation. Password can be empty in muster?
if not username:
self.setError("Username cannot be empty")
self.invalid_input(self.input_username)
try:
self.save_credentials(username, password)
except Exception as e:
self.setError(
"<b>Cannot get auth token:</b>\n<code>{}</code>".format(e))
else:
self._close_widget()
def save_credentials(self, username, password):
self.module.get_auth_token(username, password)
def showEvent(self, event):
super(MusterLogin, self).showEvent(event)
# Make btns same width
max_width = max(
self.btn_ok.sizeHint().width(),
self.btn_cancel.sizeHint().width()
)
self.btn_ok.setMinimumWidth(max_width)
self.btn_cancel.setMinimumWidth(max_width)
def closeEvent(self, event):
event.ignore()
self._close_widget()
def _close_widget(self):
self.hide()

View file

@ -354,7 +354,7 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
default_width = 1000
default_height = 600
def __init__(self, parent=None):
def __init__(self, allow_save_registry=True, parent=None):
super(PythonInterpreterWidget, self).__init__(parent)
self.setWindowTitle("{} Console".format(
@ -414,6 +414,8 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
self._first_show = True
self._splitter_size_ratio = None
self._allow_save_registry = allow_save_registry
self._registry_saved = True
self._init_from_registry()
@ -457,6 +459,11 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
pass
def save_registry(self):
# Saving disabled or window has not been shown yet
if not self._allow_save_registry or self._registry_saved:
return
self._registry_saved = True
setting_registry = PythonInterpreterRegistry()
setting_registry.set_item("width", self.width())
@ -650,6 +657,7 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
def showEvent(self, event):
self._line_check_timer.start()
self._registry_saved = False
super(PythonInterpreterWidget, self).showEvent(event)
# First show setup
if self._first_show:
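
The flags above make the console persist its geometry only after it has actually been shown, and at most once per show. A GUI-free sketch of the same guard; the class and the print stand in for the Qt widget and the settings registry.

class GeometrySaver:
    def __init__(self, allow_save=True):
        self._allow_save = allow_save
        # Nothing to save until the window has been shown at least once.
        self._saved = True

    def on_show(self):
        self._saved = False

    def save(self, width, height):
        if not self._allow_save or self._saved:
            return
        self._saved = True
        print("Saving geometry {}x{}".format(width, height))

saver = GeometrySaver()
saver.save(1000, 600)  # ignored, window never shown
saver.on_show()
saver.save(1000, 600)  # stored once
saver.save(1000, 600)  # ignored, already saved for this show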

View file

@ -582,16 +582,17 @@ def _create_instances_for_aov(instance, skeleton, aov_filter, additional_data,
group_name = subset
# if there are multiple cameras, we need to add camera name
if isinstance(col, (list, tuple)):
cam = [c for c in cameras if c in col[0]]
else:
# in case of single frame
cam = [c for c in cameras if c in col]
if cam:
if aov:
subset_name = '{}_{}_{}'.format(group_name, cam, aov)
else:
subset_name = '{}_{}'.format(group_name, cam)
expected_filepath = col[0] if isinstance(col, (list, tuple)) else col
cams = [cam for cam in cameras if cam in expected_filepath]
if cams:
for cam in cams:
if aov:
if not aov.startswith(cam):
subset_name = '{}_{}_{}'.format(group_name, cam, aov)
else:
subset_name = "{}_{}".format(group_name, aov)
else:
subset_name = '{}_{}'.format(group_name, cam)
else:
if aov:
subset_name = '{}_{}'.format(group_name, aov)
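
A condensed sketch of the per-camera subset naming introduced above, using invented camera and AOV names. It only mirrors the branching; the real plugin also clones instance data for each resulting subset.

def build_subset_names(group_name, cameras, expected_filepath, aov=None):
    # One subset per camera found in the expected output path; skip the
    # camera token when the AOV name already starts with the camera name.
    names = []
    cams = [cam for cam in cameras if cam in expected_filepath]
    if cams:
        for cam in cams:
            if aov and not aov.startswith(cam):
                names.append("{}_{}_{}".format(group_name, cam, aov))
            elif aov:
                names.append("{}_{}".format(group_name, aov))
            else:
                names.append("{}_{}".format(group_name, cam))
    elif aov:
        names.append("{}_{}".format(group_name, aov))
    else:
        names.append(group_name)
    return names

print(build_subset_names(
    "renderLighting", ["camMain"], "renders/camMain/beauty.1001.exr", aov="beauty"
))
# -> ['renderLighting_camMain_beauty']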

View file

@ -28,13 +28,20 @@ def concatenate_splitted_paths(split_paths, anatomy):
# backward compatibility
if "__project_root__" in path_items:
for root, root_path in anatomy.roots.items():
if not os.path.exists(str(root_path)):
log.debug("Root {} path path {} not exist on \
computer!".format(root, root_path))
if not root_path or not os.path.exists(str(root_path)):
log.debug(
"Root {} path path {} not exist on computer!".format(
root, root_path
)
)
continue
clean_items = ["{{root[{}]}}".format(root),
r"{project[name]}"] + clean_items[1:]
output.append(os.path.normpath(os.path.sep.join(clean_items)))
root_items = [
"{{root[{}]}}".format(root),
"{project[name]}"
]
root_items.extend(clean_items[1:])
output.append(os.path.normpath(os.path.sep.join(root_items)))
continue
output.append(os.path.normpath(os.path.sep.join(clean_items)))
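
For reference, a tiny example of what the __project_root__ expansion above produces for each configured root; the root names and trailing folders are invented.

import os

clean_items = ["__project_root__", "assets", "characters"]
for root in ("work", "publish"):
    root_items = ["{{root[{}]}}".format(root), "{project[name]}"]
    root_items.extend(clean_items[1:])
    print(os.path.normpath(os.path.sep.join(root_items)))
# On Linux this prints:
#   {root[work]}/{project[name]}/assets/characters
#   {root[publish]}/{project[name]}/assets/characters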

View file

@ -58,41 +58,13 @@ def get_template_name_profiles(
if not project_settings:
project_settings = get_project_settings(project_name)
profiles = (
return copy.deepcopy(
project_settings
["global"]
["tools"]
["publish"]
["template_name_profiles"]
)
if profiles:
return copy.deepcopy(profiles)
# Use legacy approach for cases new settings are not filled yet for the
# project
legacy_profiles = (
project_settings
["global"]
["publish"]
["IntegrateAssetNew"]
["template_name_profiles"]
)
if legacy_profiles:
if not logger:
logger = Logger.get_logger("get_template_name_profiles")
logger.warning((
"Project \"{}\" is using legacy access to publish template."
" It is recommended to move settings to new location"
" 'project_settings/global/tools/publish/template_name_profiles'."
).format(project_name))
# Replace "tasks" key with "task_names"
profiles = []
for profile in copy.deepcopy(legacy_profiles):
profile["task_names"] = profile.pop("tasks", [])
profiles.append(profile)
return profiles
def get_hero_template_name_profiles(
@ -121,36 +93,13 @@ def get_hero_template_name_profiles(
if not project_settings:
project_settings = get_project_settings(project_name)
profiles = (
return copy.deepcopy(
project_settings
["global"]
["tools"]
["publish"]
["hero_template_name_profiles"]
)
if profiles:
return copy.deepcopy(profiles)
# Use legacy approach for cases new settings are not filled yet for the
# project
legacy_profiles = copy.deepcopy(
project_settings
["global"]
["publish"]
["IntegrateHeroVersion"]
["template_name_profiles"]
)
if legacy_profiles:
if not logger:
logger = Logger.get_logger("get_hero_template_name_profiles")
logger.warning((
"Project \"{}\" is using legacy access to hero publish template."
" It is recommended to move settings to new location"
" 'project_settings/global/tools/publish/"
"hero_template_name_profiles'."
).format(project_name))
return legacy_profiles
def get_publish_template_name(

View file

@ -1971,7 +1971,6 @@ class PlaceholderCreateMixin(object):
if not placeholder.data.get("keep_placeholder", True):
self.delete_placeholder(placeholder)
def create_failed(self, placeholder, creator_data):
if hasattr(placeholder, "create_failed"):
placeholder.create_failed(creator_data)
@ -2036,7 +2035,7 @@ class CreatePlaceholderItem(PlaceholderItem):
self._failed_created_publish_instances = []
def get_errors(self):
if not self._failed_representations:
if not self._failed_created_publish_instances:
return []
message = (
"Failed to create {} instance using Creator {}"

View file

@ -190,48 +190,25 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
project_task_types = project_doc["config"]["tasks"]
for instance in context:
asset_doc = instance.data.get("assetEntity")
anatomy_updates = {
anatomy_data = copy.deepcopy(context.data["anatomyData"])
anatomy_data.update({
"family": instance.data["family"],
"subset": instance.data["subset"],
}
if asset_doc:
parents = asset_doc["data"].get("parents") or list()
parent_name = project_doc["name"]
if parents:
parent_name = parents[-1]
})
hierarchy = "/".join(parents)
anatomy_updates.update({
"asset": asset_doc["name"],
"hierarchy": hierarchy,
"parent": parent_name,
"folder": {
"name": asset_doc["name"],
},
})
# Task
task_type = None
task_name = instance.data.get("task")
if task_name:
asset_tasks = asset_doc["data"]["tasks"]
task_type = asset_tasks.get(task_name, {}).get("type")
task_code = (
project_task_types
.get(task_type, {})
.get("short_name")
)
anatomy_updates["task"] = {
"name": task_name,
"type": task_type,
"short": task_code
}
self._fill_asset_data(instance, project_doc, anatomy_data)
self._fill_task_data(instance, project_task_types, anatomy_data)
# Define version
version_number = None
if self.follow_workfile_version:
version_number = context.data('version')
else:
version_number = context.data("version")
# Even if 'follow_workfile_version' is enabled, it may not be set
# because workfile version was not collected to 'context.data'
# - that can happen e.g. in 'traypublisher' or other hosts without
# a workfile
if version_number is None:
version_number = instance.data.get("version")
# use latest version (+1) if already any exist
@ -242,6 +219,9 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
# If version is not specified for instance or context
if version_number is None:
task_data = anatomy_data.get("task") or {}
task_name = task_data.get("name")
task_type = task_data.get("type")
version_number = get_versioning_start(
context.data["projectName"],
instance.context.data["hostName"],
@ -250,29 +230,26 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
family=instance.data["family"],
subset=instance.data["subset"]
)
anatomy_updates["version"] = version_number
anatomy_data["version"] = version_number
# Additional data
resolution_width = instance.data.get("resolutionWidth")
if resolution_width:
anatomy_updates["resolution_width"] = resolution_width
anatomy_data["resolution_width"] = resolution_width
resolution_height = instance.data.get("resolutionHeight")
if resolution_height:
anatomy_updates["resolution_height"] = resolution_height
anatomy_data["resolution_height"] = resolution_height
pixel_aspect = instance.data.get("pixelAspect")
if pixel_aspect:
anatomy_updates["pixel_aspect"] = float(
anatomy_data["pixel_aspect"] = float(
"{:0.2f}".format(float(pixel_aspect))
)
fps = instance.data.get("fps")
if fps:
anatomy_updates["fps"] = float("{:0.2f}".format(float(fps)))
anatomy_data = copy.deepcopy(context.data["anatomyData"])
anatomy_data.update(anatomy_updates)
anatomy_data["fps"] = float("{:0.2f}".format(float(fps)))
# Store anatomy data
instance.data["projectEntity"] = project_doc
@ -288,3 +265,157 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
instance_name,
json.dumps(anatomy_data, indent=4)
))
def _fill_asset_data(self, instance, project_doc, anatomy_data):
# QUESTION should we make sure that all asset data are popped if asset
# data cannot be found?
# - 'asset', 'hierarchy', 'parent', 'folder'
asset_doc = instance.data.get("assetEntity")
if asset_doc:
parents = asset_doc["data"].get("parents") or list()
parent_name = project_doc["name"]
if parents:
parent_name = parents[-1]
hierarchy = "/".join(parents)
anatomy_data.update({
"asset": asset_doc["name"],
"hierarchy": hierarchy,
"parent": parent_name,
"folder": {
"name": asset_doc["name"],
},
})
return
if instance.data.get("newAssetPublishing"):
hierarchy = instance.data["hierarchy"]
anatomy_data["hierarchy"] = hierarchy
parent_name = project_doc["name"]
if hierarchy:
parent_name = hierarchy.split("/")[-1]
asset_name = instance.data["asset"].split("/")[-1]
anatomy_data.update({
"asset": asset_name,
"hierarchy": hierarchy,
"parent": parent_name,
"folder": {
"name": asset_name,
},
})
def _fill_task_data(self, instance, project_task_types, anatomy_data):
# QUESTION should we make sure that all task data are popped if task
# data cannot be resolved?
# - 'task'
# Skip if there is no task
task_name = instance.data.get("task")
if not task_name:
return
# Find task data based on asset entity
asset_doc = instance.data.get("assetEntity")
task_data = self._get_task_data_from_asset(
asset_doc, task_name, project_task_types
)
if task_data:
# Fill task data
# - if we're in editorial, make sure the task type is filled
if (
not instance.data.get("newAssetPublishing")
or task_data["type"]
):
anatomy_data["task"] = task_data
return
# New hierarchy is not created, so we can only skip rest of the logic
if not instance.data.get("newAssetPublishing"):
return
# Try to find task data based on hierarchy context and asset name
hierarchy_context = instance.context.data.get("hierarchyContext")
asset_name = instance.data.get("asset")
if not hierarchy_context or not asset_name:
return
project_name = instance.context.data["projectName"]
# OpenPype approach vs AYON approach
if "/" not in asset_name:
tasks_info = self._find_tasks_info_in_hierarchy(
hierarchy_context, asset_name
)
else:
current_data = hierarchy_context.get(project_name, {})
for key in asset_name.split("/"):
if key:
current_data = current_data.get("childs", {}).get(key, {})
tasks_info = current_data.get("tasks", {})
task_info = tasks_info.get(task_name, {})
task_type = task_info.get("type")
task_code = (
project_task_types
.get(task_type, {})
.get("short_name")
)
anatomy_data["task"] = {
"name": task_name,
"type": task_type,
"short": task_code
}
def _get_task_data_from_asset(
self, asset_doc, task_name, project_task_types
):
"""
Args:
asset_doc (Union[dict[str, Any], None]): Asset document.
task_name (Union[str, None]): Task name.
project_task_types (dict[str, dict[str, Any]]): Project task
types.
Returns:
Union[dict[str, str], None]: Task data or None if not found.
"""
if not asset_doc or not task_name:
return None
asset_tasks = asset_doc["data"]["tasks"]
task_type = asset_tasks.get(task_name, {}).get("type")
task_code = (
project_task_types
.get(task_type, {})
.get("short_name")
)
return {
"name": task_name,
"type": task_type,
"short": task_code
}
def _find_tasks_info_in_hierarchy(self, hierarchy_context, asset_name):
"""Find tasks info for an asset in editorial hierarchy.
Args:
hierarchy_context (dict[str, Any]): Editorial hierarchy context.
asset_name (str): Asset name.
Returns:
dict[str, dict[str, Any]]: Tasks info by name.
"""
hierarchy_queue = collections.deque()
hierarchy_queue.append(copy.deepcopy(hierarchy_context))
while hierarchy_queue:
item = hierarchy_queue.popleft()
if asset_name in item:
return item[asset_name].get("tasks") or {}
for subitem in item.values():
hierarchy_queue.append(subitem.get("childs") or {})
return {}
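
To illustrate the breadth-first lookup added above, here is a standalone version run against an invented hierarchyContext; every queued item is a mapping of child name to child data, so the asset name can be tested directly against the keys.

import collections

def find_tasks_info(hierarchy_context, asset_name):
    queue = collections.deque([hierarchy_context])
    while queue:
        item = queue.popleft()
        if asset_name in item:
            return item[asset_name].get("tasks") or {}
        for subitem in item.values():
            childs = subitem.get("childs")
            if childs:
                queue.append(childs)
    return {}

hierarchy_context = {
    "demo": {"childs": {
        "seq010": {"childs": {
            "sh0010": {"tasks": {"compositing": {"type": "Compositing"}}}
        }}
    }}
}
print(find_tasks_info(hierarchy_context, "sh0010"))
# -> {'compositing': {'type': 'Compositing'}}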

View file

@ -19,7 +19,7 @@ class CollectFarmTarget(pyblish.api.InstancePlugin):
farm_name = ""
op_modules = context.data.get("openPypeModules")
for farm_renderer in ["deadline", "royalrender", "muster"]:
for farm_renderer in ["deadline", "royalrender"]:
op_module = op_modules.get(farm_renderer, False)
if op_module and op_module.enabled:

View file

@ -93,14 +93,6 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
assert ctx.get("user") == data.get("user"), ctx_err % "user"
assert ctx.get("version") == data.get("version"), ctx_err % "version"
# ftrack credentials are passed as environment variables by Deadline
# to publish job, but Muster doesn't pass them.
if data.get("ftrack") and not os.environ.get("FTRACK_API_USER"):
ftrack = data.get("ftrack")
os.environ["FTRACK_API_USER"] = ftrack["FTRACK_API_USER"]
os.environ["FTRACK_API_KEY"] = ftrack["FTRACK_API_KEY"]
os.environ["FTRACK_SERVER"] = ftrack["FTRACK_SERVER"]
# now we can just add instances from json file and we are done
any_staging_dir_persistent = False
for instance_data in data.get("instances"):

View file

@ -79,19 +79,6 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
"representation": "TEMP"
})
# Add fill keys for editorial publishing creating new entity
# TODO handle in editorial plugin
if instance.data.get("newAssetPublishing"):
if "hierarchy" not in template_data:
template_data["hierarchy"] = instance.data["hierarchy"]
if "asset" not in template_data:
asset_name = instance.data["asset"].split("/")[-1]
template_data["asset"] = asset_name
template_data["folder"] = {
"name": asset_name
}
publish_templates = anatomy.templates_obj["publish"]
if "folder" in publish_templates:
publish_folder = publish_templates["folder"].format_strict(

Some files were not shown because too many files have changed in this diff.