Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 12:54:40 +01:00)

Merge branch 'develop' into enhancement/keep-version-after-switch-ayon

Commit 29a3b56d46

229 changed files with 5421 additions and 4518 deletions
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 22 changes)

@@ -35,6 +35,17 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.18.6-nightly.2
+        - 3.18.6-nightly.1
+        - 3.18.5
+        - 3.18.5-nightly.3
+        - 3.18.5-nightly.2
+        - 3.18.5-nightly.1
+        - 3.18.4
+        - 3.18.4-nightly.1
+        - 3.18.3
+        - 3.18.3-nightly.2
+        - 3.18.3-nightly.1
         - 3.18.2
         - 3.18.2-nightly.6
         - 3.18.2-nightly.5

@@ -124,17 +135,6 @@ body:
         - 3.15.9-nightly.1
-        - 3.15.8
-        - 3.15.8-nightly.3
-        - 3.15.8-nightly.2
-        - 3.15.8-nightly.1
-        - 3.15.7
-        - 3.15.7-nightly.3
-        - 3.15.7-nightly.2
-        - 3.15.7-nightly.1
-        - 3.15.6
-        - 3.15.6-nightly.3
-        - 3.15.6-nightly.2
         - 3.15.6-nightly.1
         - 3.15.5
       validations:
         required: true
   - type: dropdown
CHANGELOG.md (616 changes)

@@ -1,6 +1,622 @@
# Changelog

## [3.18.5](https://github.com/ynput/OpenPype/tree/3.18.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.4...3.18.5)

### **🚀 Enhancements**

<details>
<summary>Chore: Add addons dir only if exists <a href="https://github.com/ynput/OpenPype/pull/6140">#6140</a></summary>

Do not add the addons directory path to addon discovery if the directory does not exist.

___

</details>

<details>
<summary>Hiero: Effect Categories - OP-7397 <a href="https://github.com/ynput/OpenPype/pull/6143">#6143</a></summary>

This PR introduces `Effect Categories` for the Hiero settings. This allows studios to split effect stacks into meaningful subsets.

___

</details>

<details>
<summary>Nuke: Render Workfile Attributes <a href="https://github.com/ynput/OpenPype/pull/6146">#6146</a></summary>

The default value of `Workfile Dependency` can now be controlled from project settings. `Use Published Workfile` makes using published workfiles for rendering optional.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Attributes are locked after publishing if they are locked in Camera Family <a href="https://github.com/ynput/OpenPype/pull/6073">#6073</a></summary>

This PR makes sure attributes are unlocked only during the bake context and are relocked afterwards, preserving the lock state of the original node being baked.

___

</details>
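A minimal sketch of that lock-preserving pattern, assuming `maya.cmds`; the node and attribute names are hypothetical, not taken from the PR:

```python
from contextlib import contextmanager

from maya import cmds


@contextmanager
def unlocked_attributes(node, attributes):
    """Temporarily unlock attributes, restoring their lock state after."""
    # Remember which attributes were locked before we touch them.
    locked = {
        attr: cmds.getAttr("{}.{}".format(node, attr), lock=True)
        for attr in attributes
    }
    try:
        for attr in attributes:
            cmds.setAttr("{}.{}".format(node, attr), lock=False)
        yield
    finally:
        # Relock only what was locked originally.
        for attr, was_locked in locked.items():
            cmds.setAttr("{}.{}".format(node, attr), lock=was_locked)


# Hypothetical usage: bake while the camera attribute stays editable.
# with unlocked_attributes("cameraShape1", ["focalLength"]):
#     run_bake()
```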
<details>
<summary>Missing nuke family Windows arguments <a href="https://github.com/ynput/OpenPype/pull/6131">#6131</a></summary>

The default Windows arguments for launching the Nuke family were missing.

___

</details>

<details>
<summary>AYON: Fix the bug on the limit group not being set correctly in Maya Deadline Setting <a href="https://github.com/ynput/OpenPype/pull/6139">#6139</a></summary>

This PR fixes the limit groups in the Maya Deadline settings, which errored out when the user tried to edit the setting.

___

</details>

<details>
<summary>Chore: Transcoding extensions add missing '.tif' extension <a href="https://github.com/ynput/OpenPype/pull/6142">#6142</a></summary>

The image extension list in the transcoding helper was missing the `.tif` extension and listed `.tiff` twice.

___

</details>

<details>
<summary>Blender: Use the new API for override context <a href="https://github.com/ynput/OpenPype/pull/6145">#6145</a></summary>

Blender 4.0 removed the old API for overriding the context. This PR updates the code to use the new API.

___

</details>
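For illustration, a minimal sketch of the API change, assuming a `context` override dict built the way the Blender integration builds one elsewhere; the operator is just an example:

```python
import bpy

context = {"window": bpy.context.window_manager.windows[0]}

# Old API (removed in Blender 4.0): pass the override dict positionally.
# bpy.ops.view3d.localview(context)

# New API (available since Blender 3.2): temporary context override.
with bpy.context.temp_override(**context):
    bpy.ops.view3d.localview()
```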
<details>
<summary>BugFix: Include Model in FBX Loader in Houdini <a href="https://github.com/ynput/OpenPype/pull/6150">#6150</a></summary>

A quick bugfix: FBX files exported from Blender could not be loaded. The bug was reported here.

___

</details>

<details>
<summary>Blender: Restore actions to objects after update <a href="https://github.com/ynput/OpenPype/pull/6153">#6153</a></summary>

Restore the actions assigned to objects after updating assets from blend files.

___

</details>

<details>
<summary>Chore: Collect template data with hierarchy context <a href="https://github.com/ynput/OpenPype/pull/6154">#6154</a></summary>

Fixed a queue loop that used the wrong variable to pop items from the queue.

___

</details>
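That bug class is easy to reproduce; a hypothetical sketch with invented names, not the PR's code:

```python
from collections import deque

hierarchy_queue = deque([{"name": "shots", "children": [{"name": "sh010"}]}])
other_queue = deque()

while hierarchy_queue:
    # Buggy version: popping from the wrong queue either raises IndexError
    # or never drains `hierarchy_queue`, looping forever.
    # item = other_queue.popleft()
    item = hierarchy_queue.popleft()  # fixed: pop the queue being iterated
    for child in item.get("children", []):
        hierarchy_queue.append(child)
    print(item["name"])
```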
<details>
<summary>OP-6382 - Thumbnail Integration Problem <a href="https://github.com/ynput/OpenPype/pull/6156">#6156</a></summary>

This ticket alerted to 3 different cases of integration issues:
- [x] Using the Tray Publisher with the same image format (extension) for the representation and the review representation.
- [x] Clash on the publish file path from output definitions in `ExtractOIIOTranscode`.
- [x] Clash on the publish file from the thumbnail in `ExtractThumbnail`.

There might be an issue with this fix if a studio does not use the `{output}` token in their `render` anatomy template, but if they have customized it, they are responsible for maintaining these edge cases.

___

</details>

<details>
<summary>Max: Bugfix saving camera scene errored out when creating render instance with multi-camera option turned off <a href="https://github.com/ynput/OpenPype/pull/6163">#6163</a></summary>

This PR makes sure the camera-scene-saving integrator is skipped and the render submits successfully when the multi-camera option is turned off in 3dsmax.

___

</details>

<details>
<summary>Chore: Fix duplicated project name on create project structure <a href="https://github.com/ynput/OpenPype/pull/6166">#6166</a></summary>

Small fix in project folder creation. The same variable name was reused to change values, which broke the values on every following loop iteration.

___

</details>

### **Merged pull requests**

<details>
<summary>Maya: Remove duplicate plugin <a href="https://github.com/ynput/OpenPype/pull/6157">#6157</a></summary>

The two plugins below are doing the same work, so we can remove the one focused solely on lookdev.
https://github.com/ynput/OpenPype/blob/develop/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
https://github.com/ynput/OpenPype/blob/develop/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py

___

</details>

<details>
<summary>Publish report viewer: Report items sorting <a href="https://github.com/ynput/OpenPype/pull/6092">#6092</a></summary>

Proposal for item sorting in the Publish report viewer tool. Items are sorted by report creation time. The creation time is also added to the publish report data when saved from the publisher tool.

___

</details>

<details>
<summary>Maya: Extended error message <a href="https://github.com/ynput/OpenPype/pull/6161">#6161</a></summary>

Added more details to the error message.

___

</details>

<details>
<summary>Fusion: Added settings for Fusion creators to legacy OP <a href="https://github.com/ynput/OpenPype/pull/6162">#6162</a></summary>

Added the missing OpenPype variant of the settings for the new Fusion creator.

___

</details>

## [3.18.4](https://github.com/ynput/OpenPype/tree/3.18.4)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.3...3.18.4)

### **🚀 Enhancements**

<details>
<summary>multiple render camera supports for 3dsmax <a href="https://github.com/ynput/OpenPype/pull/5124">#5124</a></summary>

Support for rendering with multiple cameras in 3dsmax:
- [x] Add Batch Render Layers functions
- [x] Rewrite lib.rendersetting and lib.renderproduct
- [x] Add multi-camera options in the creator.
- [x] Collector with batch render-layer when multi-camera is enabled.
- [x] Add an instance plugin for saving scene files with different cameras respectively, using a subprocess
- [x] Refactor submit_max_deadline
- [x] Check against metadata.json in the submit publish job

___

</details>

<details>
<summary>Fusion: new creator for image product type <a href="https://github.com/ynput/OpenPype/pull/6057">#6057</a></summary>

In many DCCs the `render` product type is expected to be a sequence of files. This PR adds a new explicit creator for the `image` product type, focused on single-frame images. Workflows for the two product types might differ a bit; this gives artists more granularity to choose the better workflow.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Account and ignore free image planes. <a href="https://github.com/ynput/OpenPype/pull/5993">#5993</a></summary>

Free image planes do not have the `->` path separator, so we need to account for that.

___

</details>
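A hypothetical sketch of the check, assuming image plane paths follow Maya's `camera->imagePlane` convention; this mirrors the symptom, not necessarily the merged code:

```python
def split_image_plane_path(image_plane):
    """Split a Maya image plane path into (camera, plane).

    Attached image planes look like "cameraShape1->imagePlaneShape1";
    free image planes have no "->" separator, so the camera part is None.
    """
    if "->" in image_plane:
        camera, plane = image_plane.split("->", 1)
        return camera, plane
    return None, image_plane


print(split_image_plane_path("cameraShape1->imagePlaneShape1"))
print(split_image_plane_path("imagePlaneShape1"))  # free image plane
```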
<details>
<summary>Blender: Fix long names for instances <a href="https://github.com/ynput/OpenPype/pull/6070">#6070</a></summary>

Changed instance naming to use only the final part of the `folderPath`.

___

</details>

<details>
<summary>Traypublisher & Chore: Instance version on follow workfile version <a href="https://github.com/ynput/OpenPype/pull/6117">#6117</a></summary>

If `follow_workfile_version` is enabled but the context has no workfile version filled in, the version on the instance is used instead.

___

</details>
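The fallback reduces to a small rule; a hypothetical sketch (the key names are illustrative, not the plugin's actual data layout):

```python
def resolve_version(context_data, instance_data, follow_workfile_version):
    """Pick the publish version, preferring the workfile version."""
    if follow_workfile_version:
        workfile_version = context_data.get("version")
        if workfile_version is not None:
            return workfile_version
    # Fall back to whatever version the instance itself carries.
    return instance_data.get("version")


print(resolve_version({"version": 12}, {"version": 3}, True))  # 12
print(resolve_version({}, {"version": 3}, True))               # 3
```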
<details>
<summary>Substance Painter: Thumbnail errors with PBR Texture Set <a href="https://github.com/ynput/OpenPype/pull/6127">#6127</a></summary>

When publishing with PBR Metallic Roughness as the Output Template, the Emissive map errors out because the channel is missing in the material and the map can't be generated in Substance Painter. This PR makes sure the related "empty" texture instance gets `instance.data["publish"] = False` so it is skipped when generating the output.

___

</details>

<details>
<summary>Transcoding: Fix reading image sequences through oiiotool <a href="https://github.com/ynput/OpenPype/pull/6129">#6129</a></summary>

When transcoding image sequences, the output for the second image onwards includes the invalid XML line `Reading path/to/file.exr` from the oiiotool output. This is most likely not the best solution, but it fixes the issue and illustrates the problem. Error:

```
ERROR:pyblish.plugin:Traceback (most recent call last):
  File "C:\Users\tokejepsen\AppData\Local\Ynput\AYON\dependency_packages\ayon_2310271602_windows.zip\dependencies\pyblish\plugin.py", line 527, in __explicit_process
    runner(*args)
  File "C:\Users\tokejepsen\OpenPype\openpype\plugins\publish\extract_color_transcode.py", line 152, in process
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 1136, in convert_colorspace
    input_info = get_oiio_info_for_input(input_path, logger=logger)
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 124, in get_oiio_info_for_input
    output.append(parse_oiio_xml_output(xml_text, logger=logger))
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 276, in parse_oiio_xml_output
    tree = xml.etree.ElementTree.fromstring(xml_string)
  File "xml\etree\ElementTree.py", line 1347, in XML
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
Traceback (most recent call last):
  File "C:\Users\tokejepsen\AppData\Local\Ynput\AYON\dependency_packages\ayon_2310271602_windows.zip\dependencies\pyblish\plugin.py", line 527, in __explicit_process
    runner(*args)
  File "<string>", line 152, in process
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 1136, in convert_colorspace
    input_info = get_oiio_info_for_input(input_path, logger=logger)
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 124, in get_oiio_info_for_input
    output.append(parse_oiio_xml_output(xml_text, logger=logger))
  File "C:\Users\tokejepsen\OpenPype\openpype\lib\transcoding.py", line 276, in parse_oiio_xml_output
    tree = xml.etree.ElementTree.fromstring(xml_string)
  File "xml\etree\ElementTree.py", line 1347, in XML
xml.etree.ElementTree.ParseError: syntax error: line 1, column 0
```

___

</details>
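One way to make the parse robust, sketched under the assumption that only the XML portion of the oiiotool output matters; this illustrates the symptom, not necessarily the merged fix:

```python
import xml.etree.ElementTree as ElementTree


def parse_oiio_xml(output_text):
    """Parse oiiotool XML output, skipping non-XML noise lines.

    oiiotool may prepend lines such as "Reading path/to/file.exr" that
    break ElementTree; keep only the part starting at the first '<'.
    """
    xml_start = output_text.find("<")
    if xml_start == -1:
        raise ValueError("No XML found in oiiotool output")
    return ElementTree.fromstring(output_text[xml_start:])


sample = 'Reading shot/plate.0002.exr\n<ImageSpec version="22"></ImageSpec>'
print(parse_oiio_xml(sample).tag)  # ImageSpec
```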
<details>
<summary>AYON: Remove 'IntegrateHeroVersion' conversion <a href="https://github.com/ynput/OpenPype/pull/6130">#6130</a></summary>

Remove the settings conversion for `IntegrateHeroVersion`.

___

</details>

<details>
<summary>Chore tools: Make sure style object is not garbage collected <a href="https://github.com/ynput/OpenPype/pull/6136">#6136</a></summary>

Minor fix in the tool utils to make sure the style's C++ object is not garbage collected when it is not stored in a variable.

___

</details>
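The pitfall is a classic one with Qt bindings: `QWidget.setStyle` does not take ownership, so without a Python-side reference the wrapper (and its C++ object) can be collected. A minimal sketch, assuming a Qt binding is available via `qtpy`:

```python
from qtpy import QtWidgets


def apply_fusion_style(widget):
    style = QtWidgets.QStyleFactory.create("Fusion")
    widget.setStyle(style)
    # Keep a Python reference on the widget; without it the binding may
    # garbage-collect the wrapper and the underlying C++ style object,
    # which can crash when Qt later uses the style.
    widget._style = style
    return style
```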
## [3.18.3](https://github.com/ynput/OpenPype/tree/3.18.3)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.2...3.18.3)

### **🚀 Enhancements**

<details>
<summary>Maya: Apply initial viewport shader for Redshift Proxy after loading <a href="https://github.com/ynput/OpenPype/pull/6102">#6102</a></summary>

When a published Redshift proxy is loaded, the shader of the proxy is missing. This is different from a manual load through creating a Redshift proxy for files. This PR assigns the default lambert to the Redshift proxy, replicating the approach used when the user manually loads the proxy from a file path.

___

</details>

<details>
<summary>General: We should keep current subset version when we switch only the representation type <a href="https://github.com/ynput/OpenPype/pull/4629">#4629</a></summary>

When we switch only the representation type of subsets, we should not get the representation from the last version of the subset.

___

</details>

<details>
<summary>Houdini: Add loader for redshift proxy family <a href="https://github.com/ynput/OpenPype/pull/5948">#5948</a></summary>

Loader for Redshift Proxy in Houdini (thanks to @BigRoy for the contribution).

___

</details>

<details>
<summary>AfterEffects: exposing Deadline pools fields in Publisher UI <a href="https://github.com/ynput/OpenPype/pull/6079">#6079</a></summary>

Deadline pools may be set ad hoc by an artist during publishing. The AfterEffects implementation wasn't providing this.

___

</details>

<details>
<summary>Chore: Event callbacks can have order <a href="https://github.com/ynput/OpenPype/pull/6080">#6080</a></summary>

Event callbacks can now define the order in which they are called. Also fixed an issue with getting the function name and file when a `partial` function is used as a callback.

___

</details>
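A minimal sketch of order-aware callbacks, including the `functools.partial` introspection wrinkle the PR mentions; the class and names are illustrative, not the actual implementation:

```python
import functools


class EventSystem:
    def __init__(self):
        self._callbacks = []

    def add_callback(self, callback, order=0):
        self._callbacks.append((order, callback))

    def emit(self, event):
        # Lower order runs first; equal orders keep registration order
        # because Python's sort is stable.
        for _order, callback in sorted(self._callbacks, key=lambda i: i[0]):
            callback(event)


def on_event(event, prefix=""):
    print(prefix, event)


system = EventSystem()
# functools.partial has no __name__; introspection must unwrap `.func`.
callback = functools.partial(on_event, prefix="late:")
print(getattr(callback, "__name__", callback.func.__name__))
system.add_callback(callback, order=100)
system.add_callback(on_event, order=-100)  # runs first
system.emit("publish.finished")
```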
<details>
<summary>AYON: OpenPype addon defines runtime dependencies <a href="https://github.com/ynput/OpenPype/pull/6095">#6095</a></summary>

Moved runtime dependencies from ayon-launcher to the openpype addon.

___

</details>

<details>
<summary>Max: User's setting for scene unit scale <a href="https://github.com/ynput/OpenPype/pull/6097">#6097</a></summary>

Options for users to set the default scene unit scale for their scenes.

___

</details>

<details>
<summary>Chore: Remove deprecated templates profiles <a href="https://github.com/ynput/OpenPype/pull/6103">#6103</a></summary>

Remove deprecated usage of template profiles from settings.

___

</details>

<details>
<summary>Publisher: Window is not always on top <a href="https://github.com/ynput/OpenPype/pull/6107">#6107</a></summary>

The goal of this PR is to avoid using `WindowStaysOnTopHint`, which causes issues, especially when a DCC shows a popup dialog behind the window; in that case both the Publisher and the DCC are frozen and there is nothing the user can do.

___

</details>

<details>
<summary>Houdini: add split job export support for Redshift ROP <a href="https://github.com/ynput/OpenPype/pull/6108">#6108</a></summary>

This adds support for splitting export and render jobs for Redshift, as is already implemented for V-Ray, Mantra and Arnold.

___

</details>

<details>
<summary>Fusion: automatic installation of PySide2 <a href="https://github.com/ynput/OpenPype/pull/6111">#6111</a></summary>

This PR adds a hook that checks whether PySide2 is installed in the Python used by Fusion and, if not, tries to install it automatically.

___

</details>

<details>
<summary>AYON: OpenPype addon dependencies <a href="https://github.com/ynput/OpenPype/pull/6113">#6113</a></summary>

Added `click` and `six` to the requirements of the openpype addon, and removed the `Qt.py` requirement, which is not used anywhere.

___

</details>

<details>
<summary>Chore: Thumbnail representation has 'outputName' <a href="https://github.com/ynput/OpenPype/pull/6114">#6114</a></summary>

Add a thumbnail output name to the thumbnail representation to prevent identical output filenames during integration.

___

</details>

<details>
<summary>Kitsu: Clear credentials is safe <a href="https://github.com/ynput/OpenPype/pull/6116">#6116</a></summary>

Do not try to remove keyring items that do not exist.

___

</details>
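A minimal sketch of the safe-clear pattern using the `keyring` package; the service and key names are illustrative:

```python
import keyring
from keyring.errors import PasswordDeleteError


def clear_credential(service, username):
    """Delete a stored credential, ignoring items that do not exist."""
    try:
        keyring.delete_password(service, username)
    except PasswordDeleteError:
        # Nothing stored for this service/username pair; nothing to clear.
        pass


clear_credential("kitsu", "login")
```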
### **🐛 Bug fixes**

<details>
<summary>Maya: bug fix the playblast without textures <a href="https://github.com/ynput/OpenPype/pull/5942">#5942</a></summary>

Fixes textures not being displayed when users enable texture placement in the OP/AYON settings.

___

</details>

<details>
<summary>Blender: Workfile instance update fix <a href="https://github.com/ynput/OpenPype/pull/6048">#6048</a></summary>

Make sure the workfile instance always has an 'instance_node' available in its transient data.

___

</details>

<details>
<summary>Publisher: Fix issue with parenting of widgets <a href="https://github.com/ynput/OpenPype/pull/6106">#6106</a></summary>

Don't use the publisher window's parent (usually the main DCC window) as the parent for the report widget.

___

</details>

<details>
<summary>:wrench: fix and update pydocstyle configuration <a href="https://github.com/ynput/OpenPype/pull/6109">#6109</a></summary>

Fix the pydocstyle configuration and move it to `pyproject.toml`.

___

</details>

<details>
<summary>Nuke: Create camera node with the latest camera node class in Nuke 14 <a href="https://github.com/ynput/OpenPype/pull/6118">#6118</a></summary>

Creating an instance fails for certain cameras, seemingly only in Nuke 14. The cause is the new camera node class `Camera4`, while the camera creator was working with the `Camera2` class.

___

</details>

<details>
<summary>Site Sync: small fixes in Loader <a href="https://github.com/ynput/OpenPype/pull/6119">#6119</a></summary>

Resolves issues:
- local and studio icons were the same; they should be different
- `TypeError: string indices must be integers` error when downloading/uploading workfiles

___

</details>

<details>
<summary>Chore: Template data for editorial publishing <a href="https://github.com/ynput/OpenPype/pull/6120">#6120</a></summary>

Template data for editorial publishing is filled during `CollectInstanceAnatomyData`. The structure for editorial is determined there, as it is required by the ExtractHierarchy AYON/OpenPype plugins.

___

</details>

<details>
<summary>SceneInventory: Fix site sync icon conversion <a href="https://github.com/ynput/OpenPype/pull/6123">#6123</a></summary>

Use 'get_qt_icon' to convert icon definitions from site sync.

___

</details>

## [3.18.2](https://github.com/ynput/OpenPype/tree/3.18.2)
@@ -124,23 +124,24 @@ def get_linked_representation_id(
         if not versions_to_check:
             break

-        links = con.get_versions_links(
+        versions_links = con.get_versions_links(
             project_name,
             versions_to_check,
             link_types=link_types,
             link_direction="out")

         versions_to_check = set()
-        for link in links:
-            # Care only about version links
-            if link["entityType"] != "version":
-                continue
-            entity_id = link["entityId"]
-            # Skip already found linked version ids
-            if entity_id in linked_version_ids:
-                continue
-            linked_version_ids.add(entity_id)
-            versions_to_check.add(entity_id)
+        for links in versions_links.values():
+            for link in links:
+                # Care only about version links
+                if link["entityType"] != "version":
+                    continue
+                entity_id = link["entityId"]
+                # Skip already found linked version ids
+                if entity_id in linked_version_ids:
+                    continue
+                linked_version_ids.add(entity_id)
+                versions_to_check.add(entity_id)

     linked_version_ids.remove(version_id)
     if not linked_version_ids:
@@ -127,8 +127,9 @@ def isolate_objects(window, objects):
     context = create_blender_context(selected=objects, window=window)

-    bpy.ops.view3d.view_axis(context, type="FRONT")
-    bpy.ops.view3d.localview(context)
+    with bpy.context.temp_override(**context):
+        bpy.ops.view3d.view_axis(type="FRONT")
+        bpy.ops.view3d.localview()

     deselect_all()

@@ -270,10 +271,12 @@ def _independent_window():
     """Create capture-window context."""
     context = create_blender_context()
     current_windows = set(bpy.context.window_manager.windows)
-    bpy.ops.wm.window_new(context)
-    window = list(set(bpy.context.window_manager.windows) - current_windows)[0]
-    context["window"] = window
-    try:
-        yield window
-    finally:
-        bpy.ops.wm.window_close(context)
+    with bpy.context.temp_override(**context):
+        bpy.ops.wm.window_new()
+        window = list(
+            set(bpy.context.window_manager.windows) - current_windows)[0]
+        context["window"] = window
+        try:
+            yield window
+        finally:
+            bpy.ops.wm.window_close()

@@ -36,6 +36,12 @@ def prepare_scene_name(
     if namespace:
         name = f"{name}_{namespace}"
     name = f"{name}_{subset}"

+    # Blender name for a collection or object cannot be longer than 63
+    # characters. If the name is longer, it will raise an error.
+    if len(name) > 63:
+        raise ValueError(f"Scene name '{name}' would be too long.")

     return name
@@ -226,7 +232,7 @@ class BaseCreator(Creator):

         # Create asset group
         if AYON_SERVER_ENABLED:
-            asset_name = instance_data["folderPath"]
+            asset_name = instance_data["folderPath"].split("/")[-1]
         else:
             asset_name = instance_data["asset"]

@@ -305,12 +311,16 @@ class BaseCreator(Creator):
             )
             return

-        # Rename the instance node in the scene if subset or asset changed
+        # Rename the instance node in the scene if subset or asset changed.
+        # Do not rename the instance if the family is workfile, as the
+        # workfile instance is included in the AVALON_CONTAINER collection.
         if (
             "subset" in changes.changed_keys
             or asset_name_key in changes.changed_keys
-        ):
+        ) and created_instance.family != "workfile":
             asset_name = data[asset_name_key]
             if AYON_SERVER_ENABLED:
                 asset_name = asset_name.split("/")[-1]
             name = prepare_scene_name(
                 asset=asset_name, subset=data["subset"]
             )

@@ -25,7 +25,7 @@ class CreateWorkfile(BaseCreator, AutoCreator):

     def create(self):
         """Create workfile instances."""
-        existing_instance = next(
+        workfile_instance = next(
             (
                 instance for instance in self.create_context.instances
                 if instance.creator_identifier == self.identifier

@@ -39,14 +39,14 @@ class CreateWorkfile(BaseCreator, AutoCreator):
         host_name = self.create_context.host_name

         existing_asset_name = None
-        if existing_instance is not None:
+        if workfile_instance is not None:
             if AYON_SERVER_ENABLED:
-                existing_asset_name = existing_instance.get("folderPath")
+                existing_asset_name = workfile_instance.get("folderPath")

             if existing_asset_name is None:
-                existing_asset_name = existing_instance["asset"]
+                existing_asset_name = workfile_instance["asset"]

-        if not existing_instance:
+        if not workfile_instance:
             asset_doc = get_asset_by_name(project_name, asset_name)
             subset_name = self.get_subset_name(
                 task_name, task_name, asset_doc, project_name, host_name

@@ -66,19 +66,18 @@ class CreateWorkfile(BaseCreator, AutoCreator):
                     asset_doc,
                     project_name,
                     host_name,
-                    existing_instance,
+                    workfile_instance,
                 )
             )
             self.log.info("Auto-creating workfile instance...")
-            current_instance = CreatedInstance(
+            workfile_instance = CreatedInstance(
                 self.family, subset_name, data, self
             )
-            instance_node = bpy.data.collections.get(AVALON_CONTAINERS, {})
-            current_instance.transient_data["instance_node"] = instance_node
-            self._add_instance_to_context(current_instance)
+            self._add_instance_to_context(workfile_instance)

         elif (
             existing_asset_name != asset_name
-            or existing_instance["task"] != task_name
+            or workfile_instance["task"] != task_name
         ):
             # Update instance context if it's different
             asset_doc = get_asset_by_name(project_name, asset_name)

@@ -86,12 +85,17 @@ class CreateWorkfile(BaseCreator, AutoCreator):
                 task_name, task_name, asset_doc, project_name, host_name
             )
             if AYON_SERVER_ENABLED:
-                existing_instance["folderPath"] = asset_name
+                workfile_instance["folderPath"] = asset_name
             else:
-                existing_instance["asset"] = asset_name
+                workfile_instance["asset"] = asset_name

-            existing_instance["task"] = task_name
-            existing_instance["subset"] = subset_name
+            workfile_instance["task"] = task_name
+            workfile_instance["subset"] = subset_name

+        instance_node = bpy.data.collections.get(AVALON_CONTAINERS)
+        if not instance_node:
+            instance_node = bpy.data.collections.new(name=AVALON_CONTAINERS)
+        workfile_instance.transient_data["instance_node"] = instance_node

     def collect_instances(self):
@@ -61,5 +61,10 @@ class BlendAnimationLoader(plugin.AssetLoader):

         bpy.data.objects.remove(container)

-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filename = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filename is longer, it will be truncated.
+        if len(filename) > 63:
+            filename = filename[:63]
+        library = bpy.data.libraries.get(filename)
         bpy.data.libraries.remove(library)

@@ -67,7 +67,8 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]

-        bpy.ops.sequencer.sound_strip_add(oc, filepath=libpath, frame_start=1)
+        with bpy.context.temp_override(**oc):
+            bpy.ops.sequencer.sound_strip_add(filepath=libpath, frame_start=1)

         window_manager.windows[-1].screen.areas[0].type = old_type

@@ -156,17 +157,18 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]

-        # We deselect all sequencer strips, and then select the one we
-        # need to remove.
-        bpy.ops.sequencer.select_all(oc, action='DESELECT')
-        scene = bpy.context.scene
-        scene.sequence_editor.sequences_all[old_audio].select = True
+        with bpy.context.temp_override(**oc):
+            # We deselect all sequencer strips, and then select the one we
+            # need to remove.
+            bpy.ops.sequencer.select_all(action='DESELECT')
+            scene = bpy.context.scene
+            scene.sequence_editor.sequences_all[old_audio].select = True

-        bpy.ops.sequencer.delete(oc)
-        bpy.data.sounds.remove(bpy.data.sounds[old_audio])
+            bpy.ops.sequencer.delete()
+            bpy.data.sounds.remove(bpy.data.sounds[old_audio])

-        bpy.ops.sequencer.sound_strip_add(
-            oc, filepath=str(libpath), frame_start=1)
+            bpy.ops.sequencer.sound_strip_add(
+                filepath=str(libpath), frame_start=1)

         window_manager.windows[-1].screen.areas[0].type = old_type

@@ -205,12 +207,13 @@ class AudioLoader(plugin.AssetLoader):
         oc = bpy.context.copy()
         oc["area"] = window_manager.windows[-1].screen.areas[0]

-        # We deselect all sequencer strips, and then select the one we
-        # need to remove.
-        bpy.ops.sequencer.select_all(oc, action='DESELECT')
-        bpy.context.scene.sequence_editor.sequences_all[audio].select = True
-
-        bpy.ops.sequencer.delete(oc)
+        with bpy.context.temp_override(**oc):
+            # We deselect all sequencer strips, and then select the one we
+            # need to remove.
+            bpy.ops.sequencer.select_all(action='DESELECT')
+            scene = bpy.context.scene
+            scene.sequence_editor.sequences_all[audio].select = True
+            bpy.ops.sequencer.delete()

         window_manager.windows[-1].screen.areas[0].type = old_type
@@ -102,11 +102,15 @@ class BlendLoader(plugin.AssetLoader):

         # Link all the container children to the collection
         for obj in container.children_recursive:
             print(obj)
             bpy.context.scene.collection.objects.link(obj)

         # Remove the library from the blend file
-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filepath = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filepath is longer, it will be truncated.
+        if len(filepath) > 63:
+            filepath = filepath[:63]
+        library = bpy.data.libraries.get(filepath)
         bpy.data.libraries.remove(library)

         return container, members

@@ -189,8 +193,20 @@ class BlendLoader(plugin.AssetLoader):

         transform = asset_group.matrix_basis.copy()
         old_data = dict(asset_group.get(AVALON_PROPERTY))
+        old_members = old_data.get("members", [])
         parent = asset_group.parent

+        actions = {}
+        objects_with_anim = [
+            obj for obj in asset_group.children_recursive
+            if obj.animation_data]
+        for obj in objects_with_anim:
+            # Check if the object has an action and, if so, add it to a dict
+            # so we can restore it later. Save and restore the action only
+            # if it wasn't originally loaded from the current asset.
+            if obj.animation_data.action not in old_members:
+                actions[obj.name] = obj.animation_data.action
+
         self.exec_remove(container)

         asset_group, members = self._process_data(libpath, group_name)

@@ -201,6 +217,13 @@ class BlendLoader(plugin.AssetLoader):
         asset_group.matrix_basis = transform
         asset_group.parent = parent

+        # Restore the actions
+        for obj in asset_group.children_recursive:
+            if obj.name in actions:
+                if not obj.animation_data:
+                    obj.animation_data_create()
+                obj.animation_data.action = actions[obj.name]
+
         # Restore the old data, but reset members, as they don't exist anymore
         # This avoids a crash, because the memory addresses of those members
         # are not valid anymore

@@ -60,7 +60,12 @@ class BlendSceneLoader(plugin.AssetLoader):
         bpy.context.scene.collection.children.link(container)

         # Remove the library from the blend file
-        library = bpy.data.libraries.get(bpy.path.basename(libpath))
+        filepath = bpy.path.basename(libpath)
+        # Blender has a limit of 63 characters for any data name.
+        # If the filepath is longer, it will be truncated.
+        if len(filepath) > 63:
+            filepath = filepath[:63]
+        library = bpy.data.libraries.get(filepath)
         bpy.data.libraries.remove(library)

         return container, members
@@ -55,13 +55,13 @@ class ExtractAnimationABC(
         context = plugin.create_blender_context(
             active=asset_group, selected=selected)

-        # We export the abc
-        bpy.ops.wm.alembic_export(
-            context,
-            filepath=filepath,
-            selected=True,
-            flatten=False
-        )
+        with bpy.context.temp_override(**context):
+            # We export the abc
+            bpy.ops.wm.alembic_export(
+                filepath=filepath,
+                selected=True,
+                flatten=False
+            )

         plugin.deselect_all()

@@ -50,19 +50,19 @@ class ExtractCamera(publish.Extractor, publish.OptionalPyblishPluginMixin):
         scale_length = bpy.context.scene.unit_settings.scale_length
         bpy.context.scene.unit_settings.scale_length = 0.01

-        # We export the fbx
-        bpy.ops.export_scene.fbx(
-            context,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            bake_anim_use_nla_strips=False,
-            bake_anim_use_all_actions=False,
-            add_leaf_bones=False,
-            armature_nodetype='ROOT',
-            object_types={'CAMERA'},
-            bake_anim_simplify_factor=0.0
-        )
+        with bpy.context.temp_override(**context):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                bake_anim_use_nla_strips=False,
+                bake_anim_use_all_actions=False,
+                add_leaf_bones=False,
+                armature_nodetype='ROOT',
+                object_types={'CAMERA'},
+                bake_anim_simplify_factor=0.0
+            )

         bpy.context.scene.unit_settings.scale_length = scale_length

@@ -57,15 +57,15 @@ class ExtractFBX(publish.Extractor, publish.OptionalPyblishPluginMixin):
         scale_length = bpy.context.scene.unit_settings.scale_length
         bpy.context.scene.unit_settings.scale_length = 0.01

-        # We export the fbx
-        bpy.ops.export_scene.fbx(
-            context,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            mesh_smooth_type='FACE',
-            add_leaf_bones=False
-        )
+        with bpy.context.temp_override(**context):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                mesh_smooth_type='FACE',
+                add_leaf_bones=False
+            )

         bpy.context.scene.unit_settings.scale_length = scale_length

@@ -153,17 +153,20 @@ class ExtractAnimationFBX(

         override = plugin.create_blender_context(
             active=root, selected=[root, armature])
-        bpy.ops.export_scene.fbx(
-            override,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            bake_anim_use_nla_strips=False,
-            bake_anim_use_all_actions=False,
-            add_leaf_bones=False,
-            armature_nodetype='ROOT',
-            object_types={'EMPTY', 'ARMATURE'}
-        )
+
+        with bpy.context.temp_override(**override):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                bake_anim_use_nla_strips=False,
+                bake_anim_use_all_actions=False,
+                add_leaf_bones=False,
+                armature_nodetype='ROOT',
+                object_types={'EMPTY', 'ARMATURE'}
+            )

         armature.name = armature_name
         asset_group.name = asset_group_name
         root.select_set(True)

@@ -80,17 +80,18 @@ class ExtractLayout(publish.Extractor, publish.OptionalPyblishPluginMixin):

         override = plugin.create_blender_context(
             active=asset, selected=[asset, obj])
-        bpy.ops.export_scene.fbx(
-            override,
-            filepath=filepath,
-            use_active_collection=False,
-            use_selection=True,
-            bake_anim_use_nla_strips=False,
-            bake_anim_use_all_actions=False,
-            add_leaf_bones=False,
-            armature_nodetype='ROOT',
-            object_types={'EMPTY', 'ARMATURE'}
-        )
+        with bpy.context.temp_override(**override):
+            # We export the fbx
+            bpy.ops.export_scene.fbx(
+                filepath=filepath,
+                use_active_collection=False,
+                use_selection=True,
+                bake_anim_use_nla_strips=False,
+                bake_anim_use_all_actions=False,
+                add_leaf_bones=False,
+                armature_nodetype='ROOT',
+                object_types={'EMPTY', 'ARMATURE'}
+            )
         obj.name = armature_name
         asset.name = asset_group_name
         asset.select_set(False)
openpype/hosts/fusion/api/plugin.py (new file, 221 lines)

@@ -0,0 +1,221 @@
from copy import deepcopy
import os

from openpype.hosts.fusion.api import (
    get_current_comp,
    comp_lock_and_undo_chunk,
)

from openpype.lib import (
    BoolDef,
    EnumDef,
)
from openpype.pipeline import (
    legacy_io,
    Creator,
    CreatedInstance
)


class GenericCreateSaver(Creator):
    default_variants = ["Main", "Mask"]
    description = "Fusion Saver to generate image sequence"
    icon = "fa5.eye"

    instance_attributes = [
        "reviewable"
    ]

    settings_category = "fusion"

    image_format = "exr"

    # TODO: This should be renamed together with Nuke so it is aligned
    temp_rendering_path_template = (
        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}")

    def create(self, subset_name, instance_data, pre_create_data):
        self.pass_pre_attributes_to_instance(instance_data, pre_create_data)

        instance = CreatedInstance(
            family=self.family,
            subset_name=subset_name,
            data=instance_data,
            creator=self,
        )
        data = instance.data_to_store()
        comp = get_current_comp()
        with comp_lock_and_undo_chunk(comp):
            args = (-32768, -32768)  # Magical position numbers
            saver = comp.AddTool("Saver", *args)

            self._update_tool_with_data(saver, data=data)

            # Register the CreatedInstance
            self._imprint(saver, data)

            # Insert the transient data
            instance.transient_data["tool"] = saver

            self._add_instance_to_context(instance)

        return instance

    def collect_instances(self):
        comp = get_current_comp()
        tools = comp.GetToolList(False, "Saver").values()
        for tool in tools:
            data = self.get_managed_tool_data(tool)
            if not data:
                continue

            # Add instance
            created_instance = CreatedInstance.from_existing(data, self)

            # Collect transient data
            created_instance.transient_data["tool"] = tool

            self._add_instance_to_context(created_instance)

    def update_instances(self, update_list):
        for created_inst, _changes in update_list:
            new_data = created_inst.data_to_store()
            tool = created_inst.transient_data["tool"]
            self._update_tool_with_data(tool, new_data)
            self._imprint(tool, new_data)

    def remove_instances(self, instances):
        for instance in instances:
            # Remove the tool from the scene
            tool = instance.transient_data["tool"]
            if tool:
                tool.Delete()

            # Remove the collected CreatedInstance to remove from UI directly
            self._remove_instance_from_context(instance)

    def _imprint(self, tool, data):
        # Save all data in a "openpype.{key}" = value data

        # Instance id is the tool's name so we don't need to imprint as data
        data.pop("instance_id", None)

        active = data.pop("active", None)
        if active is not None:
            # Use active value to set the passthrough state
            tool.SetAttrs({"TOOLB_PassThrough": not active})

        for key, value in data.items():
            tool.SetData(f"openpype.{key}", value)

    def _update_tool_with_data(self, tool, data):
        """Update tool node name and output path based on subset data"""
        if "subset" not in data:
            return

        original_subset = tool.GetData("openpype.subset")
        original_format = tool.GetData(
            "openpype.creator_attributes.image_format"
        )

        subset = data["subset"]
        if (
            original_subset != subset
            or original_format != data["creator_attributes"]["image_format"]
        ):
            self._configure_saver_tool(data, tool, subset)

    def _configure_saver_tool(self, data, tool, subset):
        formatting_data = deepcopy(data)

        # get frame padding from anatomy templates
        frame_padding = self.project_anatomy.templates["frame_padding"]

        # get output format
        ext = data["creator_attributes"]["image_format"]

        # Subset change detected
        workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
        formatting_data.update({
            "workdir": workdir,
            "frame": "0" * frame_padding,
            "ext": ext,
            "product": {
                "name": formatting_data["subset"],
                "type": formatting_data["family"],
            },
        })

        # build file path to render
        filepath = self.temp_rendering_path_template.format(**formatting_data)

        comp = get_current_comp()
        tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))

        # Rename tool
        if tool.Name != subset:
            print(f"Renaming {tool.Name} -> {subset}")
            tool.SetAttrs({"TOOLS_Name": subset})

    def get_managed_tool_data(self, tool):
        """Return data of the tool if it matches creator identifier"""
        data = tool.GetData("openpype")
        if not isinstance(data, dict):
            return

        required = {
            "id": "pyblish.avalon.instance",
            "creator_identifier": self.identifier,
        }
        for key, value in required.items():
            if key not in data or data[key] != value:
                return

        # Get active state from the actual tool state
        attrs = tool.GetAttrs()
        passthrough = attrs["TOOLB_PassThrough"]
        data["active"] = not passthrough

        # Override publisher's UUID generation because tool names are
        # already unique in Fusion in a comp
        data["instance_id"] = tool.Name

        return data

    def get_instance_attr_defs(self):
        """Settings for publish page"""
        return self.get_pre_create_attr_defs()

    def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
        creator_attrs = instance_data["creator_attributes"] = {}
        for pass_key in pre_create_data.keys():
            creator_attrs[pass_key] = pre_create_data[pass_key]

    def _get_render_target_enum(self):
        rendering_targets = {
            "local": "Local machine rendering",
            "frames": "Use existing frames",
        }
        if "farm_rendering" in self.instance_attributes:
            rendering_targets["farm"] = "Farm rendering"

        return EnumDef(
            "render_target", items=rendering_targets, label="Render target"
        )

    def _get_reviewable_bool(self):
        return BoolDef(
            "review",
            default=("reviewable" in self.instance_attributes),
            label="Review",
        )

    def _get_image_format_enum(self):
        image_format_options = ["exr", "tga", "tif", "png", "jpg"]
        return EnumDef(
            "image_format",
            items=image_format_options,
            default=self.image_format,
            label="Output Image Format",
        )
@@ -64,5 +64,8 @@ class FusionPrelaunch(PreLaunchHook):

         self.launch_context.env[py3_var] = py3_dir

+        # for hook installing PySide2
+        self.data["fusion_python3_home"] = py3_dir
+
         self.log.info(f"Setting OPENPYPE_FUSION: {FUSION_HOST_DIR}")
         self.launch_context.env["OPENPYPE_FUSION"] = FUSION_HOST_DIR
openpype/hosts/fusion/hooks/pre_pyside_install.py (new file, 186 lines)

@@ -0,0 +1,186 @@
import os
import subprocess
import platform
import uuid

from openpype.lib.applications import PreLaunchHook, LaunchTypes


class InstallPySideToFusion(PreLaunchHook):
    """Automatically installs Qt binding to fusion's python packages.

    Check if fusion has installed PySide2 and will try to install if not.

    For pipeline implementation is required to have Qt binding installed in
    fusion's python packages.
    """

    app_groups = {"fusion"}
    order = 2
    launch_types = {LaunchTypes.local}

    def execute(self):
        # Prelaunch hook is not crucial
        try:
            settings = self.data["project_settings"][self.host_name]
            if not settings["hooks"]["InstallPySideToFusion"]["enabled"]:
                return
            self.inner_execute()
        except Exception:
            self.log.warning(
                "Processing of {} crashed.".format(self.__class__.__name__),
                exc_info=True
            )

    def inner_execute(self):
        self.log.debug("Check for PySide2 installation.")

        fusion_python3_home = self.data.get("fusion_python3_home")
        if not fusion_python3_home:
            self.log.warning("'fusion_python3_home' was not provided. "
                             "Installation of PySide2 not possible")
            return

        if platform.system().lower() == "windows":
            exe_filenames = ["python.exe"]
        else:
            exe_filenames = ["python3", "python"]

        for exe_filename in exe_filenames:
            python_executable = os.path.join(fusion_python3_home, exe_filename)
            if os.path.exists(python_executable):
                break

        if not os.path.exists(python_executable):
            self.log.warning(
                "Couldn't find python executable for fusion. {}".format(
                    python_executable
                )
            )
            return

        # Check if PySide2 is installed and skip if yes
        if self._is_pyside_installed(python_executable):
            self.log.debug("Fusion has already installed PySide2.")
            return

        self.log.debug("Installing PySide2.")
        # Install PySide2 in fusion's python
        if self._windows_require_permissions(
                os.path.dirname(python_executable)):
            result = self._install_pyside_windows(python_executable)
        else:
            result = self._install_pyside(python_executable)

        if result:
            self.log.info("Successfully installed PySide2 module to fusion.")
        else:
            self.log.warning("Failed to install PySide2 module to fusion.")

    def _install_pyside_windows(self, python_executable):
        """Install PySide2 python module to fusion's python.

        Installation requires administration rights that's why it is required
        to use "pywin32" module which can execute command's and ask for
        administration rights.
        """
        try:
            import win32api
            import win32con
            import win32process
            import win32event
            import pywintypes
            from win32comext.shell.shell import ShellExecuteEx
            from win32comext.shell import shellcon
        except Exception:
            self.log.warning("Couldn't import \"pywin32\" modules")
            return False

        try:
            # Parameters
            # - use "-m pip" as module pip to install PySide2 and argument
            #   "--ignore-installed" is to force install module to fusion's
            #   site-packages and make sure it is binary compatible
            parameters = "-m pip install --ignore-installed PySide2"

            # Execute command and ask for administrator's rights
            process_info = ShellExecuteEx(
                nShow=win32con.SW_SHOWNORMAL,
                fMask=shellcon.SEE_MASK_NOCLOSEPROCESS,
                lpVerb="runas",
                lpFile=python_executable,
                lpParameters=parameters,
                lpDirectory=os.path.dirname(python_executable)
            )
            process_handle = process_info["hProcess"]
            win32event.WaitForSingleObject(process_handle,
                                           win32event.INFINITE)
            returncode = win32process.GetExitCodeProcess(process_handle)
            return returncode == 0
        except pywintypes.error:
            return False

    def _install_pyside(self, python_executable):
        """Install PySide2 python module to fusion's python."""
        try:
            # Parameters
            # - use "-m pip" as module pip to install PySide2 and argument
            #   "--ignore-installed" is to force install module to fusion's
            #   site-packages and make sure it is binary compatible
            env = dict(os.environ)
            del env['PYTHONPATH']
            args = [
                python_executable,
                "-m",
                "pip",
                "install",
                "--ignore-installed",
                "PySide2",
            ]
            process = subprocess.Popen(
                args, stdout=subprocess.PIPE, universal_newlines=True,
                env=env
            )
            process.communicate()
            return process.returncode == 0
        except PermissionError:
            self.log.warning(
                "Permission denied with command:"
                "\"{}\".".format(" ".join(args))
            )
        except OSError as error:
            self.log.warning(f"OS error has occurred: \"{error}\".")
        except subprocess.SubprocessError:
            pass

    def _is_pyside_installed(self, python_executable):
        """Check if PySide2 module is in fusion's pip list."""
        args = [python_executable, "-c", "from qtpy import QtWidgets"]
        process = subprocess.Popen(args,
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        _, stderr = process.communicate()
        stderr = stderr.decode()
        if stderr:
            return False
        return True

    def _windows_require_permissions(self, dirpath):
        if platform.system().lower() != "windows":
            return False

        try:
            # Attempt to create a temporary file in the folder
            temp_file_path = os.path.join(dirpath, uuid.uuid4().hex)
            with open(temp_file_path, "w"):
                pass
            os.remove(temp_file_path)  # Clean up temporary file
            return False

        except PermissionError:
            return True

        except BaseException as exc:
            print(("Failed to determine if root requires permissions."
                   "Unexpected error: {}").format(exc))
            return False
openpype/hosts/fusion/plugins/create/create_image_saver.py (new file, 64 lines)

@@ -0,0 +1,64 @@
from openpype.lib import NumberDef

from openpype.hosts.fusion.api.plugin import GenericCreateSaver
from openpype.hosts.fusion.api import get_current_comp


class CreateImageSaver(GenericCreateSaver):
    """Fusion Saver to generate single image.

    Created to explicitly separate single ('image') or
    multi frame ('render') outputs.

    This might be temporary creator until 'alias' functionality will be
    implemented to limit creation of additional product types with
    similar, but not the same workflows.
    """
    identifier = "io.openpype.creators.fusion.imagesaver"
    label = "Image (saver)"
    name = "image"
    family = "image"
    description = "Fusion Saver to generate image"

    default_frame = 0

    def get_detail_description(self):
        return """Fusion Saver to generate single image.

    This creator is expected for publishing of single frame `image` product
    type.

    Artist should provide frame number (integer) to specify which frame
    should be published. It must be inside of global timeline frame range.

    Supports local and deadline rendering.

    Supports selection from predefined set of output file extensions:
    - exr
    - tga
    - png
    - tif
    - jpg

    Created to explicitly separate single frame ('image') or
    multi frame ('render') outputs.
    """

    def get_pre_create_attr_defs(self):
        """Settings for create page"""
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool(),
            self._get_frame_int(),
            self._get_image_format_enum(),
        ]
        return attr_defs

    def _get_frame_int(self):
        return NumberDef(
            "frame",
            default=self.default_frame,
            label="Frame",
            tooltip="Set frame to be rendered, must be inside of global "
                    "timeline range"
        )
@@ -1,187 +1,42 @@
-from copy import deepcopy
-import os
+from openpype.lib import EnumDef

-from openpype.hosts.fusion.api import (
-    get_current_comp,
-    comp_lock_and_undo_chunk,
-)
-
-from openpype.lib import (
-    BoolDef,
-    EnumDef,
-)
-from openpype.pipeline import (
-    legacy_io,
-    Creator as NewCreator,
-    CreatedInstance,
-    Anatomy,
-)
+from openpype.hosts.fusion.api.plugin import GenericCreateSaver


-class CreateSaver(NewCreator):
+class CreateSaver(GenericCreateSaver):
+    """Fusion Saver to generate image sequence of 'render' product type.
+
+    Original Saver creator targeted for 'render' product type. It uses
+    the original, not too descriptive, name because of values in Settings.
+    """
     identifier = "io.openpype.creators.fusion.saver"
     label = "Render (saver)"
     name = "render"
     family = "render"
-    default_variants = ["Main", "Mask"]
-    description = "Fusion Saver to generate image sequence"
-    icon = "fa5.eye"
-
-    instance_attributes = ["reviewable"]
-    image_format = "exr"
-
-    # TODO: This should be renamed together with Nuke so it is aligned
-    temp_rendering_path_template = (
-        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
-    )
+    default_frame_range_option = "asset_db"

-    def create(self, subset_name, instance_data, pre_create_data):
-        self.pass_pre_attributes_to_instance(instance_data, pre_create_data)
-
-        instance_data.update(
-            {"id": "pyblish.avalon.instance", "subset": subset_name}
-        )
-
-        comp = get_current_comp()
-        with comp_lock_and_undo_chunk(comp):
-            args = (-32768, -32768)  # Magical position numbers
-            saver = comp.AddTool("Saver", *args)
-
-            self._update_tool_with_data(saver, data=instance_data)
-
-            # Register the CreatedInstance
-            instance = CreatedInstance(
-                family=self.family,
-                subset_name=subset_name,
-                data=instance_data,
-                creator=self,
-            )
-            data = instance.data_to_store()
-            self._imprint(saver, data)
-
-            # Insert the transient data
-            instance.transient_data["tool"] = saver
-
-            self._add_instance_to_context(instance)
-
-        return instance
-
-    def collect_instances(self):
-        comp = get_current_comp()
-        tools = comp.GetToolList(False, "Saver").values()
-        for tool in tools:
-            data = self.get_managed_tool_data(tool)
-            if not data:
-                continue
-
-            # Add instance
-            created_instance = CreatedInstance.from_existing(data, self)
-
-            # Collect transient data
-            created_instance.transient_data["tool"] = tool
-
-            self._add_instance_to_context(created_instance)
-
-    def update_instances(self, update_list):
-        for created_inst, _changes in update_list:
-            new_data = created_inst.data_to_store()
-            tool = created_inst.transient_data["tool"]
-            self._update_tool_with_data(tool, new_data)
-            self._imprint(tool, new_data)
-
-    def remove_instances(self, instances):
-        for instance in instances:
-            # Remove the tool from the scene
-
-            tool = instance.transient_data["tool"]
-            if tool:
-                tool.Delete()
-
-            # Remove the collected CreatedInstance to remove from UI directly
-            self._remove_instance_from_context(instance)
-
-    def _imprint(self, tool, data):
+    def get_detail_description(self):
+        return """Fusion Saver to generate image sequence.
+
+    This creator is expected for publishing of image sequences for 'render'
+    product type. (But can publish even single frame 'render'.)
+
+    Select what should be source of render range:
+    - "Current asset context" - values set on Asset in DB (Ftrack)
+    - "From render in/out" - from node itself
+    - "From composition timeline" - from timeline
+
+    Supports local and farm rendering.
+    """
|
||||
# Save all data in a "openpype.{key}" = value data
|
||||
|
||||
# Instance id is the tool's name so we don't need to imprint as data
|
||||
data.pop("instance_id", None)
|
||||
|
||||
active = data.pop("active", None)
|
||||
if active is not None:
|
||||
# Use active value to set the passthrough state
|
||||
tool.SetAttrs({"TOOLB_PassThrough": not active})
|
||||
|
||||
for key, value in data.items():
|
||||
tool.SetData(f"openpype.{key}", value)
|
||||
|
||||
def _update_tool_with_data(self, tool, data):
|
||||
"""Update tool node name and output path based on subset data"""
|
||||
if "subset" not in data:
|
||||
return
|
||||
|
||||
original_subset = tool.GetData("openpype.subset")
|
||||
original_format = tool.GetData(
|
||||
"openpype.creator_attributes.image_format"
|
||||
)
|
||||
|
||||
subset = data["subset"]
|
||||
if (
|
||||
original_subset != subset
|
||||
or original_format != data["creator_attributes"]["image_format"]
|
||||
):
|
||||
self._configure_saver_tool(data, tool, subset)
|
||||
|
||||
def _configure_saver_tool(self, data, tool, subset):
|
||||
formatting_data = deepcopy(data)
|
||||
|
||||
# get frame padding from anatomy templates
|
||||
anatomy = Anatomy()
|
||||
frame_padding = anatomy.templates["frame_padding"]
|
||||
|
||||
# get output format
|
||||
ext = data["creator_attributes"]["image_format"]
|
||||
|
||||
# Subset change detected
|
||||
workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
|
||||
formatting_data.update(
|
||||
{"workdir": workdir, "frame": "0" * frame_padding, "ext": ext}
|
||||
)
|
||||
|
||||
# build file path to render
|
||||
filepath = self.temp_rendering_path_template.format(**formatting_data)
|
||||
|
||||
comp = get_current_comp()
|
||||
tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
|
||||
|
||||
# Rename tool
|
||||
if tool.Name != subset:
|
||||
print(f"Renaming {tool.Name} -> {subset}")
|
||||
tool.SetAttrs({"TOOLS_Name": subset})
|
||||
|
||||
def get_managed_tool_data(self, tool):
|
||||
"""Return data of the tool if it matches creator identifier"""
|
||||
data = tool.GetData("openpype")
|
||||
if not isinstance(data, dict):
|
||||
return
|
||||
|
||||
required = {
|
||||
"id": "pyblish.avalon.instance",
|
||||
"creator_identifier": self.identifier,
|
||||
}
|
||||
for key, value in required.items():
|
||||
if key not in data or data[key] != value:
|
||||
return
|
||||
|
||||
# Get active state from the actual tool state
|
||||
attrs = tool.GetAttrs()
|
||||
passthrough = attrs["TOOLB_PassThrough"]
|
||||
data["active"] = not passthrough
|
||||
|
||||
# Override publisher's UUID generation because tool names are
|
||||
# already unique in Fusion in a comp
|
||||
data["instance_id"] = tool.Name
|
||||
|
||||
return data
|
||||
Supports selection from predefined set of output file extensions:
|
||||
- exr
|
||||
- tga
|
||||
- png
|
||||
- tif
|
||||
- jpg
|
||||
"""
|
||||
|
||||
def get_pre_create_attr_defs(self):
|
||||
"""Settings for create page"""
|
||||
|
|
@@ -193,29 +48,6 @@ class CreateSaver(NewCreator):
         ]
         return attr_defs

-    def get_instance_attr_defs(self):
-        """Settings for publish page"""
-        return self.get_pre_create_attr_defs()
-
-    def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
-        creator_attrs = instance_data["creator_attributes"] = {}
-        for pass_key in pre_create_data.keys():
-            creator_attrs[pass_key] = pre_create_data[pass_key]
-
-    # These functions below should be moved to another file
-    # so it can be used by other plugins. plugin.py ?
-    def _get_render_target_enum(self):
-        rendering_targets = {
-            "local": "Local machine rendering",
-            "frames": "Use existing frames",
-        }
-        if "farm_rendering" in self.instance_attributes:
-            rendering_targets["farm"] = "Farm rendering"
-
-        return EnumDef(
-            "render_target", items=rendering_targets, label="Render target"
-        )
-
-    def _get_frame_range_enum(self):
-        frame_range_options = {
-            "asset_db": "Current asset context",
@@ -227,42 +59,5 @@ class CreateSaver(NewCreator):
-            "frame_range_source",
-            items=frame_range_options,
-            label="Frame range source",
-            default=self.default_frame_range_option
-        )
-
-    def _get_reviewable_bool(self):
-        return BoolDef(
-            "review",
-            default=("reviewable" in self.instance_attributes),
-            label="Review",
-        )
-
-    def _get_image_format_enum(self):
-        image_format_options = ["exr", "tga", "tif", "png", "jpg"]
-        return EnumDef(
-            "image_format",
-            items=image_format_options,
-            default=self.image_format,
-            label="Output Image Format",
-        )
-
-    def apply_settings(self, project_settings):
-        """Method called on initialization of plugin to apply settings."""
-
-        # plugin settings
-        plugin_settings = project_settings["fusion"]["create"][
-            self.__class__.__name__
-        ]
-
-        # individual attributes
-        self.instance_attributes = plugin_settings.get(
-            "instance_attributes", self.instance_attributes
-        )
-        self.default_variants = plugin_settings.get(
-            "default_variants", self.default_variants
-        )
-        self.temp_rendering_path_template = plugin_settings.get(
-            "temp_rendering_path_template", self.temp_rendering_path_template
-        )
-        self.image_format = plugin_settings.get(
-            "image_format", self.image_format
-        )
@@ -95,7 +95,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
     label = "Collect Inputs"
     order = pyblish.api.CollectorOrder + 0.2
     hosts = ["fusion"]
-    families = ["render"]
+    families = ["render", "image"]

     def process(self, instance):
@@ -57,6 +57,18 @@ class CollectInstanceData(pyblish.api.InstancePlugin):
             start_with_handle = comp_start
             end_with_handle = comp_end

+            frame = instance.data["creator_attributes"].get("frame")
+            # explicitly publishing only single frame
+            if frame is not None:
+                frame = int(frame)
+
+                start = frame
+                end = frame
+                handle_start = 0
+                handle_end = 0
+                start_with_handle = frame
+                end_with_handle = frame
+
             # Include start and end render frame in label
             subset = instance.data["subset"]
             label = (
@@ -50,7 +50,7 @@ class CollectFusionRender(
                 continue

             family = inst.data["family"]
-            if family != "render":
+            if family not in ["render", "image"]:
                 continue

             task_name = context.data["task"]
@@ -59,7 +59,7 @@ class CollectFusionRender(
             instance_families = inst.data.get("families", [])
             subset_name = inst.data["subset"]
             instance = FusionRenderInstance(
-                family="render",
+                family=family,
                 tool=tool,
                 workfileComp=comp,
                 families=instance_families,
@@ -7,7 +7,7 @@ class FusionSaveComp(pyblish.api.ContextPlugin):
     label = "Save current file"
     order = pyblish.api.ExtractorOrder - 0.49
     hosts = ["fusion"]
-    families = ["render", "workfile"]
+    families = ["render", "image", "workfile"]

     def process(self, context):
@@ -17,7 +17,7 @@ class ValidateBackgroundDepth(
     order = pyblish.api.ValidatorOrder
     label = "Validate Background Depth 32 bit"
     hosts = ["fusion"]
-    families = ["render"]
+    families = ["render", "image"]
     optional = True

     actions = [SelectInvalidAction, publish.RepairAction]
@@ -9,7 +9,7 @@ class ValidateFusionCompSaved(pyblish.api.ContextPlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Comp Saved"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]

     def process(self, context):
@@ -15,7 +15,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Create Folder Checked"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     actions = [RepairAction, SelectInvalidAction]
@@ -17,7 +17,7 @@ class ValidateFilenameHasExtension(pyblish.api.InstancePlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Filename Has Extension"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     actions = [SelectInvalidAction]
@@ -0,0 +1,27 @@ (new file)
import pyblish.api

from openpype.pipeline import PublishValidationError


class ValidateImageFrame(pyblish.api.InstancePlugin):
    """Validates that `image` product type contains only single frame."""

    order = pyblish.api.ValidatorOrder
    label = "Validate Image Frame"
    families = ["image"]
    hosts = ["fusion"]

    def process(self, instance):
        render_start = instance.data["frameStartHandle"]
        render_end = instance.data["frameEndHandle"]
        too_many_frames = (isinstance(instance.data["expectedFiles"], list)
                           and len(instance.data["expectedFiles"]) > 1)

        if render_end - render_start > 0 or too_many_frames:
            desc = ("Trying to render multiple frames. 'image' product type "
                    "is meant for single frame. Please use 'render' creator.")
            raise PublishValidationError(
                title="Frame range outside of comp range",
                message=desc,
                description=desc
            )
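The validator above boils down to a small predicate. A standalone sketch with sample values (`is_single_frame` is a hypothetical helper name, not part of the plugin):

```python
def is_single_frame(frame_start_handle, frame_end_handle, expected_files):
    # mirrors the check in ValidateImageFrame.process
    too_many_frames = (isinstance(expected_files, list)
                       and len(expected_files) > 1)
    return frame_end_handle - frame_start_handle == 0 and not too_many_frames

assert is_single_frame(10, 10, ["render.0010.exr"])          # single frame
assert not is_single_frame(10, 20, ["render.0010.exr"])      # range too wide
assert not is_single_frame(10, 10, ["a.exr", "b.exr"])       # too many files
```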
@@ -7,8 +7,8 @@ class ValidateInstanceFrameRange(pyblish.api.InstancePlugin):
     """Validate instance frame range is within comp's global render range."""

     order = pyblish.api.ValidatorOrder
-    label = "Validate Filename Has Extension"
-    families = ["render"]
+    label = "Validate Frame Range"
+    families = ["render", "image"]
     hosts = ["fusion"]

     def process(self, instance):
@@ -13,7 +13,7 @@ class ValidateSaverHasInput(pyblish.api.InstancePlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Saver Has Input"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     actions = [SelectInvalidAction]
@@ -9,7 +9,7 @@ class ValidateSaverPassthrough(pyblish.api.ContextPlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Saver Passthrough"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     actions = [SelectInvalidAction]
@@ -8,55 +8,6 @@ from openpype.hosts.fusion.api.action import SelectInvalidAction
 from openpype.hosts.fusion.api import comp_lock_and_undo_chunk


-def get_tool_resolution(tool, frame):
-    """Return the 2D input resolution to a Fusion tool
-
-    If the current tool hasn't been rendered its input resolution
-    hasn't been saved. To combat this, add an expression in
-    the comments field to read the resolution
-
-    Args
-        tool (Fusion Tool): The tool to query input resolution
-        frame (int): The frame to query the resolution on.
-
-    Returns:
-        tuple: width, height as 2-tuple of integers
-
-    """
-    comp = tool.Composition
-
-    # False undo removes the undo-stack from the undo list
-    with comp_lock_and_undo_chunk(comp, "Read resolution", False):
-        # Save old comment
-        old_comment = ""
-        has_expression = False
-        if tool["Comments"][frame] != "":
-            if tool["Comments"].GetExpression() is not None:
-                has_expression = True
-                old_comment = tool["Comments"].GetExpression()
-                tool["Comments"].SetExpression(None)
-            else:
-                old_comment = tool["Comments"][frame]
-                tool["Comments"][frame] = ""
-
-        # Get input width
-        tool["Comments"].SetExpression("self.Input.OriginalWidth")
-        width = int(tool["Comments"][frame])
-
-        # Get input height
-        tool["Comments"].SetExpression("self.Input.OriginalHeight")
-        height = int(tool["Comments"][frame])
-
-        # Reset old comment
-        tool["Comments"].SetExpression(None)
-        if has_expression:
-            tool["Comments"].SetExpression(old_comment)
-        else:
-            tool["Comments"][frame] = old_comment
-
-        return width, height
-
-
 class ValidateSaverResolution(
     pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
 ):
@@ -64,7 +15,7 @@ class ValidateSaverResolution(

     order = pyblish.api.ValidatorOrder
     label = "Validate Asset Resolution"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     optional = True
     actions = [SelectInvalidAction]
@@ -87,19 +38,79 @@ class ValidateSaverResolution(

     @classmethod
     def get_invalid(cls, instance):
-        resolution = cls.get_resolution(instance)
-        saver = instance.data["tool"]
+        try:
+            resolution = cls.get_resolution(instance)
+        except PublishValidationError:
+            resolution = None
         expected_resolution = cls.get_expected_resolution(instance)
         if resolution != expected_resolution:
+            saver = instance.data["tool"]
             return [saver]

     @classmethod
     def get_resolution(cls, instance):
         saver = instance.data["tool"]
         first_frame = instance.data["frameStartHandle"]
-        return get_tool_resolution(saver, frame=first_frame)
+        return cls.get_tool_resolution(saver, frame=first_frame)

     @classmethod
     def get_expected_resolution(cls, instance):
         data = instance.data["assetEntity"]["data"]
         return data["resolutionWidth"], data["resolutionHeight"]
+
+    @classmethod
+    def get_tool_resolution(cls, tool, frame):
+        """Return the 2D input resolution to a Fusion tool
+
+        If the current tool hasn't been rendered its input resolution
+        hasn't been saved. To combat this, add an expression in
+        the comments field to read the resolution
+
+        Args
+            tool (Fusion Tool): The tool to query input resolution
+            frame (int): The frame to query the resolution on.
+
+        Returns:
+            tuple: width, height as 2-tuple of integers
+
+        """
+        comp = tool.Composition
+
+        # False undo removes the undo-stack from the undo list
+        with comp_lock_and_undo_chunk(comp, "Read resolution", False):
+            # Save old comment
+            old_comment = ""
+            has_expression = False
+
+            if tool["Comments"][frame] not in ["", None]:
+                if tool["Comments"].GetExpression() is not None:
+                    has_expression = True
+                    old_comment = tool["Comments"].GetExpression()
+                    tool["Comments"].SetExpression(None)
+                else:
+                    old_comment = tool["Comments"][frame]
+                    tool["Comments"][frame] = ""
+            # Get input width
+            tool["Comments"].SetExpression("self.Input.OriginalWidth")
+            if tool["Comments"][frame] is None:
+                raise PublishValidationError(
+                    "Cannot get resolution info for frame '{}'.\n\n "
+                    "Please check that saver has connected input.".format(
+                        frame
+                    )
+                )
+
+            width = int(tool["Comments"][frame])
+
+            # Get input height
+            tool["Comments"].SetExpression("self.Input.OriginalHeight")
+            height = int(tool["Comments"][frame])
+
+            # Reset old comment
+            tool["Comments"].SetExpression(None)
+            if has_expression:
+                tool["Comments"].SetExpression(old_comment)
+            else:
+                tool["Comments"][frame] = old_comment
+
+            return width, height
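The `get_tool_resolution` helper above reads resolution by temporarily overriding the tool's Comments field with an expression and restoring it afterwards. A plain-Python model of that save/override/restore pattern, using a dict as a hypothetical stand-in for the Fusion tool:

```python
def with_temporary_value(store, key, temp_value, read):
    original = store.get(key)
    store[key] = temp_value          # override, like SetExpression(...)
    try:
        return read(store[key])      # query while overridden
    finally:
        store[key] = original        # always restore the old comment

store = {"Comments": ""}
width = with_temporary_value(store, "Comments", "1920", int)
assert width == 1920 and store["Comments"] == ""
```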
@@ -11,7 +11,7 @@ class ValidateUniqueSubsets(pyblish.api.ContextPlugin):

     order = pyblish.api.ValidatorOrder
     label = "Validate Unique Subsets"
-    families = ["render"]
+    families = ["render", "image"]
     hosts = ["fusion"]
     actions = [SelectInvalidAction]
@@ -9,6 +9,8 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
     label = "Collect Clip Effects Instances"
     families = ["clip"]

+    effect_categories = []
+
     def process(self, instance):
         family = "effect"
         effects = {}
@@ -70,29 +72,62 @@ class CollectClipEffects(pyblish.api.InstancePlugin):

         subset_split.insert(0, "effect")

-        name = "".join(subset_split)
-
-        # create new instance and inherit data
-        data = {}
-        for key, value in instance.data.items():
-            if "clipEffectItems" in key:
-                continue
-            data[key] = value
-
-        # change names
-        data["subset"] = name
-        data["family"] = family
-        data["families"] = [family]
-        data["name"] = data["subset"] + "_" + data["asset"]
-        data["label"] = "{} - {}".format(
-            data['asset'], data["subset"]
-        )
-        data["effects"] = effects
-
-        # create new instance
-        _instance = instance.context.create_instance(**data)
-        self.log.info("Created instance `{}`".format(_instance))
-        self.log.debug("instance.data `{}`".format(_instance.data))
+        effect_categories = {
+            x["name"]: x["effect_classes"] for x in self.effect_categories
+        }
+
+        category_by_effect = {"": ""}
+        for key, values in effect_categories.items():
+            for cls in values:
+                category_by_effect[cls] = key
+
+        effects_categorized = {k: {} for k in effect_categories.keys()}
+        effects_categorized[""] = {}
+        for key, value in effects.items():
+            if key == "assignTo":
+                continue
+
+            # Some classes can have a number in them. Like Text2.
+            found_cls = ""
+            for cls in category_by_effect.keys():
+                if cls in value["class"]:
+                    found_cls = cls
+
+            effects_categorized[category_by_effect[found_cls]][key] = value
+
+        categories = list(effects_categorized.keys())
+        for category in categories:
+            if not effects_categorized[category]:
+                effects_categorized.pop(category)
+                continue
+
+            effects_categorized[category]["assignTo"] = effects["assignTo"]
+
+        for category, effects in effects_categorized.items():
+            name = "".join(subset_split)
+            name += category.capitalize()
+
+            # create new instance and inherit data
+            data = {}
+            for key, value in instance.data.items():
+                if "clipEffectItems" in key:
+                    continue
+                data[key] = value
+
+            # change names
+            data["subset"] = name
+            data["family"] = family
+            data["families"] = [family]
+            data["name"] = data["subset"] + "_" + data["asset"]
+            data["label"] = "{} - {}".format(
+                data['asset'], data["subset"]
+            )
+            data["effects"] = effects
+
+            # create new instance
+            _instance = instance.context.create_instance(**data)
+            self.log.info("Created instance `{}`".format(_instance))
+            self.log.debug("instance.data `{}`".format(_instance.data))

     def test_overlap(self, effect_t_in, effect_t_out):
         covering_exp = bool(
@@ -15,6 +15,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
     icon = "magic"
     ext = "exr"

+    # Default to split export and render jobs
+    split_render = True
+
     def create(self, subset_name, instance_data, pre_create_data):

         instance_data.pop("active", None)
@@ -36,12 +39,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
         # Also create the linked Redshift IPR Rop
         try:
             ipr_rop = instance_node.parent().createNode(
-                "Redshift_IPR", node_name=basename + "_IPR"
+                "Redshift_IPR", node_name=f"{basename}_IPR"
             )
-        except hou.OperationFailed:
+        except hou.OperationFailed as e:
             raise plugin.OpenPypeCreatorError(
-                ("Cannot create Redshift node. Is Redshift "
-                 "installed and enabled?"))
+                (
+                    "Cannot create Redshift node. Is Redshift "
+                    "installed and enabled?"
+                )
+            ) from e

         # Move it to directly under the Redshift ROP
         ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1))
@@ -74,8 +80,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
             for node in self.selected_nodes:
                 if node.type().name() == "cam":
                     camera = node.path()
-            parms.update({
-                "RS_renderCamera": camera or ""})
+            parms["RS_renderCamera"] = camera or ""
+
+        export_dir = hou.text.expandString("$HIP/pyblish/rs/")
+        rs_filepath = f"{export_dir}{subset_name}/{subset_name}.$F4.rs"
+        parms["RS_archive_file"] = rs_filepath
+
+        if pre_create_data.get("split_render", self.split_render):
+            parms["RS_archive_enable"] = 1

         instance_node.setParms(parms)

         # Lock some Avalon attributes
@@ -102,6 +115,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator):
             BoolDef("farm",
                     label="Submitting to Farm",
                     default=True),
+            BoolDef("split_render",
+                    label="Split export and render jobs",
+                    default=self.split_render),
             EnumDef("image_format",
                     image_format_enum,
                     default=self.ext,
@@ -16,8 +16,9 @@ class FbxLoader(load.LoaderPlugin):

     order = -10

-    families = ["staticMesh", "fbx"]
-    representations = ["fbx"]
+    families = ["*"]
+    representations = ["*"]
+    extensions = {"fbx"}

     def load(self, context, name=None, namespace=None, data=None):
112  openpype/hosts/houdini/plugins/load/load_redshift_proxy.py  (Normal file)
@@ -0,0 +1,112 @@ (new file)
import os
import re
from openpype.pipeline import (
    load,
    get_representation_path,
)
from openpype.hosts.houdini.api import pipeline
from openpype.pipeline.load import LoadError

import hou


class RedshiftProxyLoader(load.LoaderPlugin):
    """Load Redshift Proxy"""

    families = ["redshiftproxy"]
    label = "Load Redshift Proxy"
    representations = ["rs"]
    order = -10
    icon = "code-fork"
    color = "orange"

    def load(self, context, name=None, namespace=None, data=None):

        # Get the root node
        obj = hou.node("/obj")

        # Define node name
        namespace = namespace if namespace else context["asset"]["name"]
        node_name = "{}_{}".format(namespace, name) if namespace else name

        # Create a new geo node
        container = obj.createNode("geo", node_name=node_name)

        # Check whether the Redshift parameters exist - if not, then likely
        # redshift is not set up or initialized correctly
        if not container.parm("RS_objprop_proxy_enable"):
            container.destroy()
            raise LoadError("Unable to initialize geo node with Redshift "
                            "attributes. Make sure you have the Redshift "
                            "plug-in set up correctly for Houdini.")

        # Enable by default
        container.setParms({
            "RS_objprop_proxy_enable": True,
            "RS_objprop_proxy_file": self.format_path(
                self.filepath_from_context(context),
                context["representation"])
        })

        # Remove the file node, it only loads static meshes
        # Houdini 17 has removed the file node from the geo node
        file_node = container.node("file1")
        if file_node:
            file_node.destroy()

        # Add this stub node inside so it previews ok
        proxy_sop = container.createNode("redshift_proxySOP",
                                         node_name=node_name)
        proxy_sop.setDisplayFlag(True)

        nodes = [container, proxy_sop]

        self[:] = nodes

        return pipeline.containerise(
            node_name,
            namespace,
            nodes,
            context,
            self.__class__.__name__,
            suffix="",
        )

    def update(self, container, representation):

        # Update the file path
        file_path = get_representation_path(representation)

        node = container["node"]
        node.setParms({
            "RS_objprop_proxy_file": self.format_path(
                file_path, representation)
        })

        # Update attribute
        node.setParms({"representation": str(representation["_id"])})

    def remove(self, container):

        node = container["node"]
        node.destroy()

    @staticmethod
    def format_path(path, representation):
        """Format file path correctly for single redshift proxy
        or redshift proxy sequence."""
        if not os.path.exists(path):
            raise RuntimeError("Path does not exist: %s" % path)

        is_sequence = bool(representation["context"].get("frame"))
        # The path is either a single file or sequence in a folder.
        if is_sequence:
            filename = re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", path)
            filename = os.path.join(path, filename)
        else:
            filename = path

        filename = os.path.normpath(filename)
        filename = filename.replace("\\", "/")

        return filename
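The `re.sub` in `format_path` above can be checked standalone; the substitution swaps the frame digits for Houdini's `$F4` token and leaves non-sequence paths untouched (sample filenames are made up):

```python
import re

path = "proxy.0001.rs"
assert re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", path) == "proxy.$F4.rs"

# A path without a frame number does not match, so it passes through as-is.
assert re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", "proxy.rs") == "proxy.rs"
```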
@@ -31,7 +31,6 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
     families = ["redshift_rop"]

     def process(self, instance):
-
         rop = hou.node(instance.data.get("instance_node"))

         # Collect chunkSize
@@ -43,13 +42,29 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):

         default_prefix = evalParmNoFrame(rop, "RS_outputFileNamePrefix")
         beauty_suffix = rop.evalParm("RS_outputBeautyAOVSuffix")
-        render_products = []
+
+        # Store whether we are splitting the render job (export + render)
+        split_render = bool(rop.parm("RS_archive_enable").eval())
+        instance.data["splitRender"] = split_render
+        export_products = []
+        if split_render:
+            export_prefix = evalParmNoFrame(
+                rop, "RS_archive_file", pad_character="0"
+            )
+            beauty_export_product = self.get_render_product_name(
+                prefix=export_prefix,
+                suffix=None)
+            export_products.append(beauty_export_product)
+            self.log.debug(
+                "Found export product: {}".format(beauty_export_product)
+            )
+            instance.data["ifdFile"] = beauty_export_product
+            instance.data["exportFiles"] = list(export_products)

         # Default beauty AOV
         beauty_product = self.get_render_product_name(
             prefix=default_prefix, suffix=beauty_suffix
         )
-        render_products.append(beauty_product)
+        render_products = [beauty_product]
         files_by_aov = {
             "_": self.generate_expected_files(instance,
                                               beauty_product)}
@@ -59,11 +74,11 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
             i = index + 1

             # Skip disabled AOVs
-            if not rop.evalParm("RS_aovEnable_%s" % i):
+            if not rop.evalParm(f"RS_aovEnable_{i}"):
                 continue

-            aov_suffix = rop.evalParm("RS_aovSuffix_%s" % i)
-            aov_prefix = evalParmNoFrame(rop, "RS_aovCustomPrefix_%s" % i)
+            aov_suffix = rop.evalParm(f"RS_aovSuffix_{i}")
+            aov_prefix = evalParmNoFrame(rop, f"RS_aovCustomPrefix_{i}")
             if not aov_prefix:
                 aov_prefix = default_prefix
@@ -85,7 +100,7 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):
         instance.data["attachTo"] = []  # stub required data

         if "expectedFiles" not in instance.data:
-            instance.data["expectedFiles"] = list()
+            instance.data["expectedFiles"] = []
         instance.data["expectedFiles"].append(files_by_aov)

         # update the colorspace data
@@ -37,6 +37,95 @@ class RenderProducts(object):
             )
         }

+    def get_multiple_beauty(self, outputs, cameras):
+        beauty_output_frames = dict()
+        for output, camera in zip(outputs, cameras):
+            filename, ext = os.path.splitext(output)
+            filename = filename.replace(".", "")
+            ext = ext.replace(".", "")
+            start_frame = int(rt.rendStart)
+            end_frame = int(rt.rendEnd) + 1
+            new_beauty = self.get_expected_beauty(
+                filename, start_frame, end_frame, ext
+            )
+            beauty_output = ({
+                f"{camera}_beauty": new_beauty
+            })
+            beauty_output_frames.update(beauty_output)
+        return beauty_output_frames
+
+    def get_multiple_aovs(self, outputs, cameras):
+        renderer_class = get_current_renderer()
+        renderer = str(renderer_class).split(":")[0]
+        aovs_frames = {}
+        for output, camera in zip(outputs, cameras):
+            filename, ext = os.path.splitext(output)
+            filename = filename.replace(".", "")
+            ext = ext.replace(".", "")
+            start_frame = int(rt.rendStart)
+            end_frame = int(rt.rendEnd) + 1
+
+            if renderer in [
+                "ART_Renderer",
+                "V_Ray_6_Hotfix_3",
+                "V_Ray_GPU_6_Hotfix_3",
+                "Default_Scanline_Renderer",
+                "Quicksilver_Hardware_Renderer",
+            ]:
+                render_name = self.get_render_elements_name()
+                if render_name:
+                    for name in render_name:
+                        aovs_frames.update({
+                            f"{camera}_{name}": self.get_expected_aovs(
+                                filename, name, start_frame,
+                                end_frame, ext)
+                        })
+            elif renderer == "Redshift_Renderer":
+                render_name = self.get_render_elements_name()
+                if render_name:
+                    rs_aov_files = rt.Execute("renderers.current.separateAovFiles")  # noqa
+                    # this doesn't work, always returns False
+                    # rs_AovFiles = rt.RedShift_Renderer().separateAovFiles
+                    if ext == "exr" and not rs_aov_files:
+                        for name in render_name:
+                            if name == "RsCryptomatte":
+                                aovs_frames.update({
+                                    f"{camera}_{name}": self.get_expected_aovs(
+                                        filename, name, start_frame,
+                                        end_frame, ext)
+                                })
+                    else:
+                        for name in render_name:
+                            aovs_frames.update({
+                                f"{camera}_{name}": self.get_expected_aovs(
+                                    filename, name, start_frame,
+                                    end_frame, ext)
+                            })
+            elif renderer == "Arnold":
+                render_name = self.get_arnold_product_name()
+                if render_name:
+                    for name in render_name:
+                        aovs_frames.update({
+                            f"{camera}_{name}": self.get_expected_arnold_product(  # noqa
+                                filename, name, start_frame,
+                                end_frame, ext)
+                        })
+            elif renderer in [
+                "V_Ray_6_Hotfix_3",
+                "V_Ray_GPU_6_Hotfix_3"
+            ]:
+                if ext != "exr":
+                    render_name = self.get_render_elements_name()
+                    if render_name:
+                        for name in render_name:
+                            aovs_frames.update({
+                                f"{camera}_{name}": self.get_expected_aovs(
+                                    filename, name, start_frame,
+                                    end_frame, ext)
+                            })
+
+        return aovs_frames
+
     def get_aovs(self, container):
         render_dir = os.path.dirname(rt.rendOutputFilename)
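A small sketch of the per-camera key scheme `get_multiple_beauty` uses above — the sample paths are made up; only the `zip` pairing and the `f"{camera}_beauty"` keys mirror the diff:

```python
outputs = ["render_camA..exr", "render_camB..exr"]
cameras = ["camA", "camB"]

files_by_aov = {}
for output, camera in zip(outputs, cameras):
    # each (output, camera) pair contributes one "<camera>_beauty" entry
    files_by_aov[f"{camera}_beauty"] = [output.replace("..", ".0001.")]

assert sorted(files_by_aov) == ["camA_beauty", "camB_beauty"]
```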
@@ -63,7 +152,7 @@ class RenderProducts(object):
             if render_name:
                 for name in render_name:
                     render_dict.update({
-                        name: self.get_expected_render_elements(
+                        name: self.get_expected_aovs(
                             output_file, name, start_frame,
                             end_frame, img_fmt)
                     })
@@ -77,14 +166,14 @@ class RenderProducts(object):
                 for name in render_name:
                     if name == "RsCryptomatte":
                         render_dict.update({
-                            name: self.get_expected_render_elements(
+                            name: self.get_expected_aovs(
                                 output_file, name, start_frame,
                                 end_frame, img_fmt)
                         })
                 else:
                     for name in render_name:
                         render_dict.update({
-                            name: self.get_expected_render_elements(
+                            name: self.get_expected_aovs(
                                 output_file, name, start_frame,
                                 end_frame, img_fmt)
                         })
@@ -95,7 +184,8 @@ class RenderProducts(object):
             for name in render_name:
                 render_dict.update({
                     name: self.get_expected_arnold_product(
-                        output_file, name, start_frame, end_frame, img_fmt)
+                        output_file, name, start_frame,
+                        end_frame, img_fmt)
                 })
         elif renderer in [
             "V_Ray_6_Hotfix_3",
@@ -106,7 +196,7 @@ class RenderProducts(object):
             if render_name:
                 for name in render_name:
                     render_dict.update({
-                        name: self.get_expected_render_elements(
+                        name: self.get_expected_aovs(
                             output_file, name, start_frame,
                             end_frame, img_fmt)  # noqa
                     })
@@ -169,8 +259,8 @@ class RenderProducts(object):

         return render_name

-    def get_expected_render_elements(self, folder, name,
-                                     start_frame, end_frame, fmt):
+    def get_expected_aovs(self, folder, name,
+                          start_frame, end_frame, fmt):
         """Get all the expected render element output files."""
         render_elements = []
         for f in range(start_frame, end_frame):
@@ -74,13 +74,13 @@ class RenderSettings(object):
         output = os.path.join(output_dir, container)
         try:
             aov_separator = self._aov_chars[(
-                self._project_settings["maya"]
+                self._project_settings["max"]
                 ["RenderSettings"]
                 ["aov_separator"]
             )]
         except KeyError:
             aov_separator = "."
-        output_filename = "{0}..{1}".format(output, img_fmt)
+        output_filename = f"{output}..{img_fmt}"
         output_filename = output_filename.replace("{aov_separator}",
                                                   aov_separator)
         rt.rendOutputFilename = output_filename
@@ -146,13 +146,13 @@ class RenderSettings(object):
         for i in range(render_elem_num):
             renderlayer_name = render_elem.GetRenderElement(i)
             target, renderpass = str(renderlayer_name).split(":")
-            aov_name = "{0}_{1}..{2}".format(dir, renderpass, ext)
+            aov_name = f"{dir}_{renderpass}..{ext}"
             render_elem.SetRenderElementFileName(i, aov_name)

     def get_render_output(self, container, output_dir):
         output = os.path.join(output_dir, container)
         img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
-        output_filename = "{0}..{1}".format(output, img_fmt)
+        output_filename = f"{output}..{img_fmt}"
         return output_filename

     def get_render_element(self):
@@ -167,3 +167,61 @@ class RenderSettings(object):
             orig_render_elem.append(render_element)

         return orig_render_elem
+
+    def get_batch_render_elements(self, container,
+                                  output_dir, camera):
+        render_element_list = list()
+        output = os.path.join(output_dir, container)
+        render_elem = rt.maxOps.GetCurRenderElementMgr()
+        render_elem_num = render_elem.NumRenderElements()
+        if render_elem_num < 0:
+            return
+        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+
+        for i in range(render_elem_num):
+            renderlayer_name = render_elem.GetRenderElement(i)
+            target, renderpass = str(renderlayer_name).split(":")
+            aov_name = f"{output}_{camera}_{renderpass}..{img_fmt}"
+            render_element_list.append(aov_name)
+        return render_element_list
+
+    def get_batch_render_output(self, camera):
+        target_layer_no = rt.batchRenderMgr.FindView(camera)
+        target_layer = rt.batchRenderMgr.GetView(target_layer_no)
+        return target_layer.outputFilename
+
+    def batch_render_elements(self, camera):
+        target_layer_no = rt.batchRenderMgr.FindView(camera)
+        target_layer = rt.batchRenderMgr.GetView(target_layer_no)
+        outputfilename = target_layer.outputFilename
+        directory = os.path.dirname(outputfilename)
+        render_elem = rt.maxOps.GetCurRenderElementMgr()
+        render_elem_num = render_elem.NumRenderElements()
+        if render_elem_num < 0:
+            return
+        ext = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+
+        for i in range(render_elem_num):
+            renderlayer_name = render_elem.GetRenderElement(i)
+            target, renderpass = str(renderlayer_name).split(":")
+            aov_name = f"{directory}_{camera}_{renderpass}..{ext}"
+            render_elem.SetRenderElementFileName(i, aov_name)
+
+    def batch_render_layer(self, container,
+                           output_dir, cameras):
+        outputs = list()
+        output = os.path.join(output_dir, container)
+        img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"]  # noqa
+        for cam in cameras:
+            camera = rt.getNodeByName(cam)
+            layer_no = rt.batchRenderMgr.FindView(cam)
+            renderlayer = None
+            if layer_no == 0:
+                renderlayer = rt.batchRenderMgr.CreateView(camera)
+            else:
+                renderlayer = rt.batchRenderMgr.GetView(layer_no)
+            # use camera name as renderlayer name
+            renderlayer.name = cam
+            renderlayer.outputFilename = f"{output}_{cam}..{img_fmt}"
+            outputs.append(renderlayer.outputFilename)
+        return outputs
@@ -2,6 +2,7 @@
 """Creator plugin for creating camera."""
 import os
 from openpype.hosts.max.api import plugin
+from openpype.lib import BoolDef
 from openpype.hosts.max.api.lib_rendersettings import RenderSettings
@@ -17,15 +18,33 @@ class CreateRender(plugin.MaxCreator):
         file = rt.maxFileName
         filename, _ = os.path.splitext(file)
         instance_data["AssetName"] = filename
+        instance_data["multiCamera"] = pre_create_data.get("multi_cam")
+        num_of_renderlayer = rt.batchRenderMgr.numViews
+        if num_of_renderlayer > 0:
+            rt.batchRenderMgr.DeleteView(num_of_renderlayer)

         instance = super(CreateRender, self).create(
             subset_name,
             instance_data,
             pre_create_data)

         container_name = instance.data.get("instance_node")
         sel_obj = self.selected_nodes
         if sel_obj:
             # set viewport camera for rendering(mandatory for deadline)
             RenderSettings(self.project_settings).set_render_camera(sel_obj)
         # set output paths for rendering(mandatory for deadline)
         RenderSettings().render_output(container_name)
         # TODO: create multiple camera options
+        if self.selected_nodes:
+            selected_nodes_name = []
+            for sel in self.selected_nodes:
+                name = sel.name
+                selected_nodes_name.append(name)
+            RenderSettings().batch_render_layer(
+                container_name, filename,
+                selected_nodes_name)
+
+    def get_pre_create_attr_defs(self):
+        attrs = super(CreateRender, self).get_pre_create_attr_defs()
+        return attrs + [
+            BoolDef("multi_cam",
+                    label="Multiple Cameras Submission",
+                    default=False),
+        ]
@@ -4,8 +4,10 @@ import os
 import pyblish.api

 from pymxs import runtime as rt
+from openpype.pipeline.publish import KnownPublishError
 from openpype.hosts.max.api import colorspace
 from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
+from openpype.hosts.max.api.lib_rendersettings import RenderSettings
 from openpype.hosts.max.api.lib_renderproducts import RenderProducts
@@ -23,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
         file = rt.maxFileName
         current_file = os.path.join(folder, file)
-        filepath = current_file.replace("\\", "/")

         context.data['currentFile'] = current_file

         files_by_aov = RenderProducts().get_beauty(instance.name)
@@ -39,6 +40,28 @@ class CollectRender(pyblish.api.InstancePlugin):

         instance.data["cameras"] = [camera.name] if camera else None  # noqa

+        if instance.data.get("multiCamera"):
+            cameras = instance.data.get("members")
+            if not cameras:
+                raise KnownPublishError("There should be at least"
+                                        " one renderable camera in container")
+            sel_cam = [
+                c.name for c in cameras
+                if rt.classOf(c) in rt.Camera.classes]
+            container_name = instance.data.get("instance_node")
+            render_dir = os.path.dirname(rt.rendOutputFilename)
+            outputs = RenderSettings().batch_render_layer(
+                container_name, render_dir, sel_cam
+            )
+
+            instance.data["cameras"] = sel_cam
+
+            files_by_aov = RenderProducts().get_multiple_beauty(
+                outputs, sel_cam)
+            aovs = RenderProducts().get_multiple_aovs(
+                outputs, sel_cam)
+            files_by_aov.update(aovs)
+
         if "expectedFiles" not in instance.data:
             instance.data["expectedFiles"] = list()
             instance.data["files"] = list()
105  openpype/hosts/max/plugins/publish/save_scenes_for_cameras.py  (Normal file)
@@ -0,0 +1,105 @@ (new file)
import pyblish.api
import os
import sys
import tempfile

from pymxs import runtime as rt
from openpype.lib import run_subprocess
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype.hosts.max.api.lib_renderproducts import RenderProducts


class SaveScenesForCamera(pyblish.api.InstancePlugin):
    """Save scene files for multiple cameras without
    editing the original scene before deadline submission

    """

    label = "Save Scene files for cameras"
    order = pyblish.api.ExtractorOrder - 0.48
    hosts = ["max"]
    families = ["maxrender"]

    def process(self, instance):
        if not instance.data.get("multiCamera"):
            self.log.debug(
                "Multi Camera disabled. "
                "Skipping to save scene files for cameras")
            return
        current_folder = rt.maxFilePath
        current_filename = rt.maxFileName
        current_filepath = os.path.join(current_folder, current_filename)
        camera_scene_files = []
        scripts = []
        filename, ext = os.path.splitext(current_filename)
        fmt = RenderProducts().image_format()
        cameras = instance.data.get("cameras")
        if not cameras:
            return
        new_folder = f"{current_folder}_{filename}"
        os.makedirs(new_folder, exist_ok=True)
        for camera in cameras:
            new_output = RenderSettings().get_batch_render_output(camera)  # noqa
            new_output = new_output.replace("\\", "/")
            new_filename = f"{filename}_{camera}{ext}"
            new_filepath = os.path.join(new_folder, new_filename)
            new_filepath = new_filepath.replace("\\", "/")
            camera_scene_files.append(new_filepath)
            RenderSettings().batch_render_elements(camera)
            rt.rendOutputFilename = new_output
            rt.saveMaxFile(current_filepath)
            script = ("""
from pymxs import runtime as rt
import os
filename = "{filename}"
new_filepath = "{new_filepath}"
new_output = "{new_output}"
camera = "{camera}"
rt.rendOutputFilename = new_output
directory = os.path.dirname(rt.rendOutputFilename)
directory = os.path.join(directory, filename)
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num > 0:
    ext = "{ext}"
    for i in range(render_elem_num):
        renderlayer_name = render_elem.GetRenderElement(i)
        target, renderpass = str(renderlayer_name).split(":")
        aov_name = f"{{directory}}_{camera}_{{renderpass}}..{ext}"
        render_elem.SetRenderElementFileName(i, aov_name)
rt.saveMaxFile(new_filepath)
""").format(filename=instance.name,
            new_filepath=new_filepath,
            new_output=new_output,
            camera=camera,
            ext=fmt)
            scripts.append(script)

        maxbatch_exe = os.path.join(
            os.path.dirname(sys.executable), "3dsmaxbatch")
        maxbatch_exe = maxbatch_exe.replace("\\", "/")
        if sys.platform == "win32":  # sys.platform reports "win32" on Windows
            maxbatch_exe += ".exe"
        maxbatch_exe = os.path.normpath(maxbatch_exe)
        with tempfile.TemporaryDirectory() as tmp_dir_name:
            tmp_script_path = os.path.join(
                tmp_dir_name, "extract_scene_files.py")
            self.log.info("Using script file: {}".format(tmp_script_path))

            with open(tmp_script_path, "wt") as tmp:
                for script in scripts:
                    tmp.write(script + "\n")

            try:
                current_filepath = current_filepath.replace("\\", "/")
                tmp_script_path = tmp_script_path.replace("\\", "/")
                run_subprocess([maxbatch_exe, tmp_script_path,
                                "-sceneFile", current_filepath])
            except RuntimeError:
                self.log.debug("Checking that the scene files exist")

        for camera_scene in camera_scene_files:
            if not os.path.exists(camera_scene):
                self.log.error("Camera scene file does not exist yet!")
                raise RuntimeError("MaxBatch.exe doesn't run as expected")
            self.log.debug(f"Found Camera scene:{camera_scene}")
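One subtlety in the new plugin above: the embedded 3ds Max script mixes single and doubled braces. `str.format` fills the single-brace fields now, while `{{...}}` collapses to literal braces that survive for the f-string evaluated later inside `3dsmaxbatch`. A standalone check of that behavior:

```python
# Same brace pattern as the diff; camera/ext values are hypothetical.
template = 'aov_name = f"{{directory}}_{camera}_{{renderpass}}..{ext}"'
result = template.format(camera="camA", ext="exr")
assert result == 'aov_name = f"{directory}_camA_{renderpass}..exr"'
```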
@@ -2778,9 +2778,37 @@ def bake_to_world_space(nodes,
         list: The newly created and baked node names.

     """
+    @contextlib.contextmanager
+    def _unlock_attr(attr):
+        """Unlock attribute during context if it is locked"""
+        if not cmds.getAttr(attr, lock=True):
+            # If not locked, do nothing
+            yield
+            return
+        try:
+            cmds.setAttr(attr, lock=False)
+            yield
+        finally:
+            cmds.setAttr(attr, lock=True)
+
     def _get_attrs(node):
-        """Workaround for buggy shape attribute listing with listAttr"""
+        """Workaround for buggy shape attribute listing with listAttr
+
+        This will only return keyable, settable attributes that have
+        incoming connections (those that have a reason to be baked).
+
+        Technically this *may* fail to return attributes driven by complex
+        expressions for which maya makes no connections, e.g. doing actual
+        `setAttr` calls in expressions.
+
+        Arguments:
+            node (str): The node to list attributes for.
+
+        Returns:
+            list: Keyable attributes with incoming connections.
+                The attribute may be locked.
+
+        """
         attrs = cmds.listAttr(node,
                               write=True,
                               scalar=True,
@@ -2805,14 +2833,14 @@ def bake_to_world_space(nodes,

         return valid_attrs

-    transform_attrs = set(["t", "r", "s",
-                           "tx", "ty", "tz",
-                           "rx", "ry", "rz",
-                           "sx", "sy", "sz"])
+    transform_attrs = {"t", "r", "s",
+                       "tx", "ty", "tz",
+                       "rx", "ry", "rz",
+                       "sx", "sy", "sz"}

     world_space_nodes = []
-    with delete_after() as delete_bin:
-
+    with ExitStack() as stack:
+        delete_bin = stack.enter_context(delete_after())
         # Create the duplicate nodes that are in world-space connected to
         # the originals
         for node in nodes:
@@ -2824,23 +2852,26 @@ def bake_to_world_space(nodes,
                                        name=new_name,
                                        renameChildren=True)[0]  # noqa

-            # Connect all attributes on the node except for transform
-            # attributes
-            attrs = _get_attrs(node)
-            attrs = set(attrs) - transform_attrs if attrs else []
+            # Parent new node to world
+            if cmds.listRelatives(new_node, parent=True):
+                new_node = cmds.parent(new_node, world=True)[0]

+            # Temporarily unlock and passthrough connect all attributes
+            # so we can bake them over time
+            # Skip transform attributes because we will constrain them later
+            attrs = set(_get_attrs(node)) - transform_attrs
             for attr in attrs:
-                orig_node_attr = '{0}.{1}'.format(node, attr)
-                new_node_attr = '{0}.{1}'.format(new_node, attr)
-
-                # unlock to avoid connection errors
-                cmds.setAttr(new_node_attr, lock=False)
+                orig_node_attr = "{}.{}".format(node, attr)
+                new_node_attr = "{}.{}".format(new_node, attr)
+
+                # unlock during context to avoid connection errors
+                stack.enter_context(_unlock_attr(new_node_attr))
                 cmds.connectAttr(orig_node_attr,
                                  new_node_attr,
                                  force=True)

-            # If shapes are also baked then connect those keyable attributes
+            # If shapes are also baked then also temporarily unlock and
+            # passthrough connect all shape attributes for baking
             if shape:
                 children_shapes = cmds.listRelatives(new_node,
                                                      children=True,
@@ -2855,25 +2886,19 @@ def bake_to_world_space(nodes,
                                          children_shapes):
                     attrs = _get_attrs(orig_shape)
                     for attr in attrs:
-                        orig_node_attr = '{0}.{1}'.format(orig_shape, attr)
-                        new_node_attr = '{0}.{1}'.format(new_shape, attr)
-
-                        # unlock to avoid connection errors
-                        cmds.setAttr(new_node_attr, lock=False)
+                        orig_node_attr = "{}.{}".format(orig_shape, attr)
+                        new_node_attr = "{}.{}".format(new_shape, attr)
+
+                        # unlock during context to avoid connection errors
+                        stack.enter_context(_unlock_attr(new_node_attr))
                         cmds.connectAttr(orig_node_attr,
                                          new_node_attr,
                                          force=True)

-            # Parent to world
-            if cmds.listRelatives(new_node, parent=True):
-                new_node = cmds.parent(new_node, world=True)[0]
-
-            # Unlock transform attributes so constraint can be created
+            # Constraint transforms
             for attr in transform_attrs:
-                cmds.setAttr('{0}.{1}'.format(new_node, attr), lock=False)
-
-            # Constraints
+                transform_attr = "{}.{}".format(new_node, attr)
+                stack.enter_context(_unlock_attr(transform_attr))
             delete_bin.extend(cmds.parentConstraint(node, new_node, mo=False))
             delete_bin.extend(cmds.scaleConstraint(node, new_node, mo=False))
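A minimal model of the `ExitStack` pattern introduced above: every `_unlock_attr` context entered on the stack is closed when the stack unwinds, so lock state is restored even if baking raises. Plain Python, no Maya required (the `locks` dict is a hypothetical stand-in for node attributes):

```python
import contextlib

locks = {"node.tx": True}

@contextlib.contextmanager
def unlock(key):
    if not locks[key]:
        yield
        return
    locks[key] = False
    try:
        yield
    finally:
        locks[key] = True  # relock on exit, mirroring _unlock_attr

with contextlib.ExitStack() as stack:
    stack.enter_context(unlock("node.tx"))
    assert locks["node.tx"] is False  # temporarily unlocked
assert locks["node.tx"] is True      # restored after the stack closes
```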
@@ -137,6 +137,11 @@ class RedshiftProxyLoader(load.LoaderPlugin):
         cmds.connectAttr("{}.outMesh".format(rs_mesh),
                          "{}.inMesh".format(mesh_shape))

+        # TODO: use the assigned shading group as shaders if existed
+        # assign default shader to redshift proxy
+        if cmds.ls("initialShadingGroup", type="shadingEngine"):
+            cmds.sets(mesh_shape, forceElement="initialShadingGroup")
+
         group_node = cmds.group(empty=True, name="{}_GRP".format(name))
         mesh_transform = cmds.listRelatives(mesh_shape,
                                             parent=True, fullPath=True)
@@ -265,13 +265,16 @@ def transfer_image_planes(source_cameras, target_cameras,
     try:
         for source_camera, target_camera in zip(source_cameras,
                                                 target_cameras):
-            image_planes = cmds.listConnections(source_camera,
+            image_plane_plug = "{}.imagePlane".format(source_camera)
+            image_planes = cmds.listConnections(image_plane_plug,
                                                 source=True,
                                                 destination=False,
                                                 type="imagePlane") or []

             # Split of the parent path they are attached - we want
-            # the image plane node name.
-            image_planes = [x.split("->", 1)[1] for x in image_planes]
+            # the image plane node name if attached to a camera.
+            # TODO: Does this still mean the image plane name is unique?
+            image_planes = [x.split("->", 1)[-1] for x in image_planes]

             if not image_planes:
                 continue

@@ -282,7 +285,7 @@ def transfer_image_planes(source_cameras, target_cameras,
                 if source_camera == target_camera:
                     continue
                 _attach_image_plane(target_camera, image_plane)
-            else:  # explicitly dettaching image planes
+            else:  # explicitly detach image planes
                 cmds.imagePlane(image_plane, edit=True, detach=True)
             originals[source_camera].append(image_plane)
         yield
@@ -1,77 +0,0 @@ (file removed)
from collections import defaultdict

import pyblish.api

import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
    PublishValidationError, ValidatePipelineOrder)


class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin):
    """Validate the relational nodes of the look data to ensure every node is
    unique.

    This ensures that all member ids are unique. Every node id must be from
    a single node in the scene.

    That means there's only ever one of a specific node inside the look to be
    published. For example if you'd have loaded 3x the same tree and by
    accident you're trying to publish them all together in a single look that
    would be invalid, because they are the same tree. It should be included
    inside the look instance only once.

    """

    order = ValidatePipelineOrder
    label = 'Look members unique'
    hosts = ['maya']
    families = ['look']

    actions = [openpype.hosts.maya.api.action.SelectInvalidAction,
               openpype.hosts.maya.api.action.GenerateUUIDsOnInvalidAction]

    def process(self, instance):
        """Process all meshes"""

        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                ("Members found with non-unique IDs: "
                 "{0}").format(invalid))

    @staticmethod
    def get_invalid(instance):
        """
        Check all the relationship members of the objectSets

        Example of the lookData relationships:
        {"uuid": 59b2bb27bda2cb2776206dd8:79ab0a63ffdf,
         "members": [{"uuid": 59b2bb27bda2cb2776206dd8:1b158cc7496e,
                      "name": |model_GRP|body_GES|body_GESShape}
                     ...,
                     ...]}

        Args:
            instance:

        Returns:

        """

        # Get all members from the sets
        id_nodes = defaultdict(set)
        relationships = instance.data["lookData"]["relationships"]

        for relationship in relationships.values():
            for member in relationship['members']:
                node_id = member["uuid"]
                node = member["name"]
                id_nodes[node_id].add(node)

        # Check if any id has more than 1 node
        invalid = []
        for nodes in id_nodes.values():
            if len(nodes) > 1:
                invalid.extend(nodes)

        return invalid
@@ -44,4 +44,8 @@ class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin):

if not is_subdir(scene_name, root_dir):
    raise PublishValidationError(
        "Maya workspace is not set correctly.")
        "Maya workspace is not set correctly.\n\n"
        f"Current workfile `{scene_name}` is not inside the "
        f"current Maya project root directory `{root_dir}`.\n\n"
        "Please use Workfile app to re-save."
    )

@@ -3483,3 +3483,19 @@ def get_filenames_without_hash(filename, frame_start, frame_end):
        new_filename = filename_without_hashes.format(frame)
        filenames.append(new_filename)
    return filenames


def create_camera_node_by_version():
    """Create a camera node with the node class matching the Nuke version.

    For Nuke 14.0 or later the Camera4 node class is used;
    for earlier versions the Camera2 node class is used.

    Returns:
        Node: camera node
    """
    nuke_number_version = nuke.NUKE_VERSION_MAJOR
    if nuke_number_version >= 14:
        return nuke.createNode("Camera4")
    else:
        return nuke.createNode("Camera2")

@@ -259,9 +259,7 @@ def _install_menu():
menu.addCommand(
    "Create...",
    lambda: host_tools.show_publisher(
        parent=(
            main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
        ),
        parent=main_window,
        tab="create"
    )
)

@@ -270,9 +268,7 @@ def _install_menu():
menu.addCommand(
    "Publish...",
    lambda: host_tools.show_publisher(
        parent=(
            main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
        ),
        parent=main_window,
        tab="publish"
    )
)

@@ -4,6 +4,9 @@ from openpype.hosts.nuke.api import (
    NukeCreatorError,
    maintained_selection
)
from openpype.hosts.nuke.api.lib import (
    create_camera_node_by_version
)


class CreateCamera(NukeCreator):

@@ -32,7 +35,7 @@ class CreateCamera(NukeCreator):
        "Creator error: Select only camera node type")
    created_node = self.selected_nodes[0]
else:
    created_node = nuke.createNode("Camera2")
    created_node = create_camera_node_by_version()

created_node["tile_color"].setValue(
    int(self.node_color, 16))

@@ -80,6 +80,7 @@ class ValidateOutputMaps(pyblish.api.InstancePlugin):
self.log.warning(f"Disabling texture instance: "
                 f"{image_instance}")
image_instance.data["active"] = False
image_instance.data["publish"] = False
image_instance.data["integrate"] = False
representation.setdefault("tags", []).append("delete")
continue

@@ -221,9 +221,16 @@ class SettingsCreator(TrayPublishCreator):
):
    filtered_instance_data.append(instance)

asset_names = {
    instance["asset"]
    for instance in filtered_instance_data}
if AYON_SERVER_ENABLED:
    asset_names = {
        instance["folderPath"]
        for instance in filtered_instance_data
    }
else:
    asset_names = {
        instance["asset"]
        for instance in filtered_instance_data
    }
subset_names = {
    instance["subset"]
    for instance in filtered_instance_data}

@@ -231,7 +238,10 @@ class SettingsCreator(TrayPublishCreator):
    asset_names, subset_names
)
for instance in filtered_instance_data:
    asset_name = instance["asset"]
    if AYON_SERVER_ENABLED:
        asset_name = instance["folderPath"]
    else:
        asset_name = instance["asset"]
    subset_name = instance["subset"]
    version = subset_docs_by_asset_id[asset_name][subset_name]
    instance["creator_attributes"]["version_to_use"] = version

@@ -216,6 +216,11 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
instance.data["thumbnailSource"] = first_filepath

review_representation["tags"].append("review")

# Adding "review" to representation name since it can clash with main
# representation if they share the same extension.
review_representation["outputName"] = "review"

self.log.debug("Representation {} was marked for review. {}".format(
    review_representation["name"], review_path
))

@@ -6,13 +6,13 @@ def requests_post(*args, **kwargs):
"""Wrap request post method.

Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.

Warning:
    Disabling SSL certificate validation is defeating one line
    of defense SSL is providing and it is not recommended.
    of defense SSL is providing, and it is not recommended.

"""
if "verify" not in kwargs:

@@ -24,13 +24,13 @@ def requests_get(*args, **kwargs):
"""Wrap request get method.

Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.

Warning:
    Disabling SSL certificate validation is defeating one line
    of defense SSL is providing and it is not recommended.
    of defense SSL is providing, and it is not recommended.

"""
if "verify" not in kwargs:
@@ -44,17 +44,17 @@ XML_CHAR_REF_REGEX_HEX = re.compile(r"&#x?[0-9a-fA-F]+;")
ARRAY_TYPE_REGEX = re.compile(r"^(int|float|string)\[\d+\]$")

IMAGE_EXTENSIONS = {
    ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
    ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
    ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
    ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
    ".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
    ".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
    ".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
    ".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr", ".ras",
    ".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
    ".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
    ".xpm", ".xwd"
    ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave",
    ".cal", ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr",
    ".fits", ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc",
    ".icer", ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
    ".jng", ".jpeg", ".jpeg-ls", ".jpeg-hdr", ".2000", ".jpg",
    ".kra", ".logluv", ".mng", ".miff", ".nrrd", ".ora",
    ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
    ".pictor", ".png", ".psd", ".psb", ".psp", ".qtvr",
    ".ras", ".rgbe", ".sgi", ".tga",
    ".tif", ".tiff", ".tiff/ep", ".tiff/it", ".ufo", ".ufp",
    ".wbmp", ".webp", ".xr", ".xt", ".xbm", ".xcf", ".xpm", ".xwd"
}

VIDEO_EXTENSIONS = {
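
The hunk ends at the opening of VIDEO_EXTENSIONS. Both sets hold
lowercase, dot-prefixed suffixes, so a lookup is a constant-time
membership test. A small usage sketch (the helper name is illustrative,
not part of this file):

    import os

    def is_image_path(path):
        # Compare the lowercased extension, dot included, against the set.
        return os.path.splitext(path)[1].lower() in IMAGE_EXTENSIONS
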
@@ -110,8 +110,9 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False):
if line == "</ImageSpec>":
    subimages_lines.append(lines)
    lines = []
    xml_started = False

if not xml_started:
if not subimages_lines:
    raise ValueError(
        "Failed to read input file \"{}\".\nOutput:\n{}".format(
            filepath, output

@@ -542,7 +542,8 @@ def _load_modules():
module_dirs.insert(0, current_dir)

addons_dir = os.path.join(os.path.dirname(current_dir), "addons")
module_dirs.append(addons_dir)
if os.path.exists(addons_dir):
    module_dirs.append(addons_dir)

ignored_host_names = set(IGNORED_HOSTS_IN_AYON)
ignored_current_dir_filenames = set(IGNORED_DEFAULT_FILENAMES)

@@ -1332,7 +1333,6 @@ class TrayModulesManager(ModulesManager):
"user",
"ftrack",
"kitsu",
"muster",
"launcher_tool",
"avalon",
"clockify",

@@ -34,8 +34,8 @@ def requests_post(*args, **kwargs):
"""Wrap request post method.

Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.

Warning:

@@ -55,8 +55,8 @@ def requests_get(*args, **kwargs):
"""Wrap request get method.

Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
variable is found. This is useful when Deadline server is
running with self-signed certificates and its certificate is not
added to trusted certificates on client machines.

Warning:

@@ -1,7 +1,4 @@
# -*- coding: utf-8 -*-
"""Collect Deadline pools. Choose default one from Settings

"""
import pyblish.api
from openpype.lib import TextDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin

@@ -9,11 +6,35 @@ from openpype.pipeline.publish import OpenPypePyblishPluginMixin

class CollectDeadlinePools(pyblish.api.InstancePlugin,
                           OpenPypePyblishPluginMixin):
    """Collect pools from instance if present, from Setting otherwise."""
    """Collect pools from instance or Publisher attributes, from Setting
    otherwise.

    Pools are used to control which DL workers could render the job.

    Pools might be set:
    - directly on the instance (set directly in DCC)
    - from Publisher attributes
    - from defaults from Settings.

    Publisher attributes could be shown even for instances that should be
    rendered locally as visibility is driven by product type of the instance
    (which will be `render` most likely).
    (Might be resolved in the future and class attribute 'families' should
    be cleaned up.)

    """

    order = pyblish.api.CollectorOrder + 0.420
    label = "Collect Deadline Pools"
    families = ["rendering",
    hosts = ["aftereffects",
             "fusion",
             "harmony",
             "nuke",
             "maya",
             "max"]

    families = ["render",
                "rendering",
                "render.farm",
                "renderFarm",
                "renderlayer",

@@ -30,7 +51,6 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin,
    cls.secondary_pool = settings.get("secondary_pool", None)

def process(self, instance):

    attr_values = self.get_attr_values_from_data(instance.data)
    if not instance.data.get("primaryPool"):
        instance.data["primaryPool"] = (
@@ -60,8 +80,12 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin,
return [
    TextDef("primaryPool",
            label="Primary Pool",
            default=cls.primary_pool),
            default=cls.primary_pool,
            tooltip="Deadline primary pool, "
                    "applicable for farm rendering"),
    TextDef("secondaryPool",
            label="Secondary Pool",
            default=cls.secondary_pool)
            default=cls.secondary_pool,
            tooltip="Deadline secondary pool, "
                    "applicable for farm rendering")
]

@@ -15,6 +15,7 @@ from openpype.lib import (
    NumberDef
)


@attr.s
class DeadlinePluginInfo():
    SceneFile = attr.ib(default=None)

@@ -41,6 +42,12 @@ class VrayRenderPluginInfo():
    SeparateFilesPerFrame = attr.ib(default=True)


@attr.s
class RedshiftRenderPluginInfo():
    SceneFile = attr.ib(default=None)
    Version = attr.ib(default=None)


class HoudiniSubmitDeadline(
    abstract_submit_deadline.AbstractSubmitDeadline,
    OpenPypePyblishPluginMixin

@@ -262,6 +269,25 @@ class HoudiniSubmitDeadline(
    plugin_info = VrayRenderPluginInfo(
        InputFilename=instance.data["ifdFile"],
    )
elif family == "redshift_rop":
    plugin_info = RedshiftRenderPluginInfo(
        SceneFile=instance.data["ifdFile"]
    )
    # Note: To use different versions of Redshift on Deadline
    # set the `REDSHIFT_VERSION` env variable in the Tools
    # settings in the AYON Application plugin. You will also
    # need to set that version in `Redshift.param` file
    # of the Redshift Deadline plugin:
    # [Redshift_Executable_*]
    # where * is the version number.
    if os.getenv("REDSHIFT_VERSION"):
        plugin_info.Version = os.getenv("REDSHIFT_VERSION")
    else:
        self.log.warning((
            "REDSHIFT_VERSION env variable is not set"
            " - using version configured in Deadline"
        ))

else:
    self.log.error(
        "Family '%s' not supported yet to split render job",
@@ -15,6 +15,12 @@ from openpype.pipeline import (
from openpype.pipeline.publish.lib import (
    replace_with_published_scene_path
)
from openpype.pipeline.publish import KnownPublishError
from openpype.hosts.max.api.lib import (
    get_current_renderer,
    get_multipass_setting
)
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype_modules.deadline import abstract_submit_deadline
from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
from openpype.lib import is_running_from_build

@@ -54,7 +60,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
        cls.priority)
    cls.chunk_size = settings.get("chunk_size", cls.chunk_size)
    cls.group = settings.get("group", cls.group)

# TODO: multiple camera instance, separate job infos
def get_job_info(self):
    job_info = DeadlineJobInfo(Plugin="3dsmax")

@@ -71,7 +77,6 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,

src_filepath = context.data["currentFile"]
src_filename = os.path.basename(src_filepath)

job_info.Name = "%s - %s" % (src_filename, instance.name)
job_info.BatchName = src_filename
job_info.Plugin = instance.data["plugin"]

@@ -134,11 +139,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,

# Add list of expected files to job
# ---------------------------------
exp = instance.data.get("expectedFiles")

for filepath in self._iter_expected_files(exp):
    job_info.OutputDirectory += os.path.dirname(filepath)
    job_info.OutputFilename += os.path.basename(filepath)
if not instance.data.get("multiCamera"):
    exp = instance.data.get("expectedFiles")
    for filepath in self._iter_expected_files(exp):
        job_info.OutputDirectory += os.path.dirname(filepath)
        job_info.OutputFilename += os.path.basename(filepath)

return job_info

@@ -163,11 +168,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
def process_submission(self):

    instance = self._instance
    filepath = self.scene_path
    filepath = instance.context.data["currentFile"]

    files = instance.data["expectedFiles"]
    if not files:
        raise RuntimeError("No Render Elements found!")
        raise KnownPublishError("No Render Elements found!")
    first_file = next(self._iter_expected_files(files))
    output_dir = os.path.dirname(first_file)
    instance.data["outputDir"] = output_dir

@@ -181,9 +186,17 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,

self.log.debug("Submitting 3dsMax render..")
project_settings = instance.context.data["project_settings"]
payload = self._use_published_name(payload_data, project_settings)
job_info, plugin_info = payload
self.submit(self.assemble_payload(job_info, plugin_info))
if instance.data.get("multiCamera"):
    self.log.debug("Submitting jobs for multiple cameras..")
    payload = self._use_published_name_for_multiples(
        payload_data, project_settings)
    job_infos, plugin_infos = payload
    for job_info, plugin_info in zip(job_infos, plugin_infos):
        self.submit(self.assemble_payload(job_info, plugin_info))
else:
    payload = self._use_published_name(payload_data, project_settings)
    job_info, plugin_info = payload
    self.submit(self.assemble_payload(job_info, plugin_info))

def _use_published_name(self, data, project_settings):
    # Not all hosts can import these modules.

@@ -206,7 +219,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,

files = instance.data.get("expectedFiles")
if not files:
    raise RuntimeError("No render elements found")
    raise KnownPublishError("No render elements found")
first_file = next(self._iter_expected_files(files))
old_output_dir = os.path.dirname(first_file)
output_beauty = RenderSettings().get_render_output(instance.name,

@@ -218,6 +231,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
plugin_data["RenderOutput"] = beauty_name
# as 3dsmax has version with different languages
plugin_data["Language"] = "ENU"

renderer_class = get_current_renderer()

renderer = str(renderer_class).split(":")[0]

@@ -249,6 +263,120 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,

    return job_info, plugin_info

def get_job_info_through_camera(self, camera):
    """Get the job parameters for deadline submission when
    multi-camera is enabled.

    Args:
        camera (str): name of the camera to submit a job for.
    """
    instance = self._instance
    context = instance.context
    job_info = copy.deepcopy(self.job_info)
    exp = instance.data.get("expectedFiles")

    src_filepath = context.data["currentFile"]
    src_filename = os.path.basename(src_filepath)
    job_info.Name = "%s - %s - %s" % (
        src_filename, instance.name, camera)
    for filepath in self._iter_expected_files(exp):
        if camera not in filepath:
            continue
        job_info.OutputDirectory += os.path.dirname(filepath)
        job_info.OutputFilename += os.path.basename(filepath)

    return job_info
    # set the output filepath with the relative camera

def get_plugin_info_through_camera(self, camera):
    """Get the plugin parameters for deadline submission when
    multi-camera is enabled.

    Args:
        camera (str): name of the camera the plugin info targets.
    """
    instance = self._instance
    # set the target camera
    plugin_info = copy.deepcopy(self.plugin_info)

    plugin_data = {}
    # set the output filepath with the relative camera
    if instance.data.get("multiCamera"):
        scene_filepath = instance.context.data["currentFile"]
        scene_filename = os.path.basename(scene_filepath)
        scene_directory = os.path.dirname(scene_filepath)
        current_filename, ext = os.path.splitext(scene_filename)
        camera_scene_name = f"{current_filename}_{camera}{ext}"
        camera_scene_filepath = os.path.join(
            scene_directory, f"_{current_filename}", camera_scene_name)
        plugin_data["SceneFile"] = camera_scene_filepath

    files = instance.data.get("expectedFiles")
    if not files:
        raise KnownPublishError("No render elements found")
    first_file = next(self._iter_expected_files(files))
    old_output_dir = os.path.dirname(first_file)
    rgb_output = RenderSettings().get_batch_render_output(camera)  # noqa
    rgb_bname = os.path.basename(rgb_output)
    dir = os.path.dirname(first_file)
    beauty_name = f"{dir}/{rgb_bname}"
    beauty_name = beauty_name.replace("\\", "/")
    plugin_info["RenderOutput"] = beauty_name
    renderer_class = get_current_renderer()

    renderer = str(renderer_class).split(":")[0]
    if renderer in [
        "ART_Renderer",
        "Redshift_Renderer",
        "V_Ray_6_Hotfix_3",
        "V_Ray_GPU_6_Hotfix_3",
        "Default_Scanline_Renderer",
        "Quicksilver_Hardware_Renderer",
    ]:
        render_elem_list = RenderSettings().get_batch_render_elements(
            instance.name, old_output_dir, camera
        )
        for i, element in enumerate(render_elem_list):
            if camera in element:
                elem_bname = os.path.basename(element)
                new_elem = f"{dir}/{elem_bname}"
                new_elem = new_elem.replace("/", "\\")
                plugin_info["RenderElementOutputFilename%d" % i] = new_elem  # noqa

    if camera:
        # set the default camera and target camera
        # (weird parameters from max)
        plugin_data["Camera"] = camera
        plugin_data["Camera1"] = camera
        plugin_data["Camera0"] = None

    plugin_info.update(plugin_data)
    return plugin_info

def _use_published_name_for_multiples(self, data, project_settings):
    """Process the parameters submission for deadline when
    user enables multi-cameras option.

    Args:
        data (dict): payload data.
        project_settings (dict): project settings.

    Returns:
        tuple: list of job infos and list of plugin infos.
    """
    job_info_list = []
    plugin_info_list = []
    instance = self._instance
    cameras = instance.data.get("cameras", [])
    plugin_data = {}
    multipass = get_multipass_setting(project_settings)
    if multipass:
        plugin_data["DisableMultipass"] = 0
    else:
        plugin_data["DisableMultipass"] = 1
    for cam in cameras:
        job_info = self.get_job_info_through_camera(cam)
        plugin_info = self.get_plugin_info_through_camera(cam)
        plugin_info.update(plugin_data)
        job_info_list.append(job_info)
        plugin_info_list.append(plugin_info)

    return job_info_list, plugin_info_list

def from_published_scene(self, replace_in_path=True):
    instance = self._instance
    if instance.data["renderer"] == "Redshift_Renderer":

@@ -47,6 +47,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
env_allowed_keys = []
env_search_replace_values = {}
workfile_dependency = True
use_published_workfile = True

@classmethod
def get_attribute_defs(cls):

@@ -85,8 +86,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
    ),
    BoolDef(
        "workfile_dependency",
        default=True,
        default=cls.workfile_dependency,
        label="Workfile Dependency"
    ),
    BoolDef(
        "use_published_workfile",
        default=cls.use_published_workfile,
        label="Use Published Workfile"
    )
]

@@ -125,20 +131,11 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
render_path = instance.data['path']
script_path = context.data["currentFile"]

for item_ in context:
    if "workfile" in item_.data["family"]:
        template_data = item_.data.get("anatomyData")
        rep = item_.data.get("representations")[0].get("name")
        template_data["representation"] = rep
        template_data["ext"] = rep
        template_data["comment"] = None
        anatomy_filled = context.data["anatomy"].format(template_data)
        template_filled = anatomy_filled["publish"]["path"]
        script_path = os.path.normpath(template_filled)

        self.log.info(
            "Using published scene for render {}".format(script_path)
        )
use_published_workfile = instance.data["attributeValues"].get(
    "use_published_workfile", self.use_published_workfile
)
if use_published_workfile:
    script_path = self._get_published_workfile_path(context)

# only add main rendering job if target is not frames_farm
r_job_response_json = None

@@ -197,6 +194,44 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
        families.insert(0, "prerender")
    instance.data["families"] = families

def _get_published_workfile_path(self, context):
    """This method is temporary while the class is not inherited from
    AbstractSubmitDeadline"""
    for instance in context:
        if (
            instance.data["family"] != "workfile"
            # Disabled instances won't be integrated
            or instance.data.get("publish") is False
        ):
            continue
        template_data = instance.data["anatomyData"]
        # Expect workfile instance has only one representation
        representation = instance.data["representations"][0]
        # Get workfile extension
        repre_file = representation["files"]
        self.log.info(repre_file)
        ext = os.path.splitext(repre_file)[1].lstrip(".")

        # Fill template data
        template_data["representation"] = representation["name"]
        template_data["ext"] = ext
        template_data["comment"] = None

        anatomy = context.data["anatomy"]
        # WARNING Hardcoded template name 'publish' > may not be used
        template_obj = anatomy.templates_obj["publish"]["path"]

        template_filled = template_obj.format(template_data)
        script_path = os.path.normpath(template_filled)
        self.log.info(
            "Using published scene for render {}".format(
                script_path
            )
        )
        return script_path

    return None

def payload_submit(
    self,
    instance,

@@ -99,10 +99,6 @@ class ProcessSubmittedCacheJobOnFarm(pyblish.api.InstancePlugin,
def _submit_deadline_post_job(self, instance, job):
    """Submit publish job to Deadline.

    Deadline specific code separated from :meth:`process` for sake of
    more universal code. Muster post job is sent directly by Muster
    submitter, so this type of code isn't necessary for it.

    Returns:
        (str): deadline_publish_job_id
    """

@@ -59,21 +59,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
                               publish.ColormanagedPyblishPluginMixin):
    """Process Job submitted on farm.

    These jobs are dependent on a deadline or muster job
    These jobs are dependent on a deadline job
    submission prior to this plug-in.

    - In case of Deadline, it creates dependent job on farm publishing
      rendered image sequence.

    - In case of Muster, there is no need for such thing as dependent job,
      post action will be executed and rendered sequence will be published.
    It creates dependent job on farm publishing rendered image sequence.

    Options in instance.data:
    - deadlineSubmissionJob (dict, Required): The returned .json
      data from the job submission to deadline.

    - musterSubmissionJob (dict, Required): same as deadline.

    - outputDir (str, Required): The output directory where the metadata
      file should be generated. It's assumed that this will also be
      final folder containing the output files.
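
A sketch of the two required keys as this plug-in would see them (the
values are illustrative only):

    # deadlineSubmissionJob holds the JSON returned by the Deadline
    # submission; outputDir is where the metadata file is written.
    instance.data["deadlineSubmissionJob"] = {"_id": "5f3e9a...", "Props": {}}
    instance.data["outputDir"] = "/proj/shots/sh010/work/renders/v001"
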
@@ -161,10 +155,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,
def _submit_deadline_post_job(self, instance, job, instances):
    """Submit publish job to Deadline.

    Deadline specific code separated from :meth:`process` for sake of
    more universal code. Muster post job is sent directly by Muster
    submitter, so this type of code isn't necessary for it.

    Returns:
        (str): deadline_publish_job_id
    """

@@ -586,9 +576,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin,

render_job = instance.data.pop("deadlineSubmissionJob", None)
if not render_job and instance.data.get("tileRendering") is False:
    raise AssertionError(("Cannot continue without valid Deadline "
                          "or Muster submission."))

    raise AssertionError(("Cannot continue without valid "
                          "Deadline submission."))
if not render_job:
    import getpass

@@ -64,8 +64,10 @@ def clear_credentials():
user_registry = OpenPypeSecureRegistry("kitsu_user")

# Set local settings
user_registry.delete_item("login")
user_registry.delete_item("password")
if user_registry.get_item("login", None) is not None:
    user_registry.delete_item("login")
if user_registry.get_item("password", None) is not None:
    user_registry.delete_item("password")


def save_credentials(login: str, password: str):

@@ -92,8 +94,9 @@ def load_credentials() -> Tuple[str, str]:
# Get user registry
user_registry = OpenPypeSecureRegistry("kitsu_user")

return user_registry.get_item("login", None), user_registry.get_item(
    "password", None
return (
    user_registry.get_item("login", None),
    user_registry.get_item("password", None)
)

@@ -1,6 +0,0 @@
from .muster import MusterModule


__all__ = (
    "MusterModule",
)

@@ -1,147 +0,0 @@
import os
import json

import appdirs
import requests

from openpype.modules import OpenPypeModule, ITrayModule


class MusterModule(OpenPypeModule, ITrayModule):
    """
    Module handling Muster Render credentials. This will display dialog
    asking for user credentials for Muster if not already specified.
    """
    cred_folder_path = os.path.normpath(
        appdirs.user_data_dir('pype-app', 'pype')
    )
    cred_filename = 'muster_cred.json'

    name = "muster"

    def initialize(self, modules_settings):
        muster_settings = modules_settings[self.name]
        self.enabled = muster_settings["enabled"]
        self.muster_url = muster_settings["MUSTER_REST_URL"]

        self.cred_path = os.path.join(
            self.cred_folder_path, self.cred_filename
        )
        # Tray attributes
        self.widget_login = None
        self.action_show_login = None
        self.rest_api_obj = None

    def get_global_environments(self):
        return {
            "MUSTER_REST_URL": self.muster_url
        }

    def tray_init(self):
        from .widget_login import MusterLogin
        self.widget_login = MusterLogin(self)

    def tray_start(self):
        """Show login dialog if credentials not found."""
        # This should be start of module in tray
        cred = self.load_credentials()
        if not cred:
            self.show_login()

    def tray_exit(self):
        """Nothing special for Muster."""
        return

    # Definition of Tray menu
    def tray_menu(self, parent):
        """Add **change credentials** option to tray menu."""
        from qtpy import QtWidgets

        # Menu for Tray App
        menu = QtWidgets.QMenu('Muster', parent)
        menu.setProperty('submenu', 'on')

        # Actions
        self.action_show_login = QtWidgets.QAction(
            "Change login", menu
        )

        menu.addAction(self.action_show_login)
        self.action_show_login.triggered.connect(self.show_login)

        parent.addMenu(menu)

    def load_credentials(self):
        """
        Get credentials from JSON file
        """
        credentials = {}
        try:
            file = open(self.cred_path, 'r')
            credentials = json.load(file)
        except Exception:
            file = open(self.cred_path, 'w+')
        file.close()

        return credentials

    def get_auth_token(self, username, password):
        """
        Authenticate user with Muster and get authToken from server.
        """
        if not self.muster_url:
            raise AttributeError("Muster REST API url not set")
        params = {
            'username': username,
            'password': password
        }
        api_entry = '/api/login'
        response = self._requests_post(
            self.muster_url + api_entry, params=params)
        if response.status_code != 200:
            self.log.error(
                'Cannot log into Muster: {}'.format(response.status_code))
            raise Exception('Cannot login into Muster.')

        try:
            token = response.json()['ResponseData']['authToken']
        except ValueError as e:
            self.log.error('Invalid response from Muster server {}'.format(e))
            raise Exception('Invalid response from Muster while logging in.')

        self.save_credentials(token)

    def save_credentials(self, token):
        """Save credentials to JSON file."""

        with open(self.cred_path, "w") as f:
            json.dump({'token': token}, f)

    def show_login(self):
        """
        Show dialog to enter credentials
        """
        if self.widget_login:
            self.widget_login.show()

    # Webserver module implementation
    def webserver_initialization(self, server_manager):
        """Add routes for Muster login."""
        if self.tray_initialized:
            from .rest_api import MusterModuleRestApi

            self.rest_api_obj = MusterModuleRestApi(self, server_manager)

    def _requests_post(self, *args, **kwargs):
        """ Wrapper for requests, disabling SSL certificate validation if
        DONT_VERIFY_SSL environment variable is found. This is useful when
        Deadline or Muster server are running with self-signed certificates
        and their certificate is not added to trusted certificates on
        client machines.

        WARNING: disabling SSL certificate validation is defeating one line
        of defense SSL is providing and it is not recommended.
        """
        if 'verify' not in kwargs:
            kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True  # noqa
        return requests.post(*args, **kwargs)

@@ -1,555 +0,0 @@
import os
import json
import getpass
import platform

import appdirs

from maya import cmds

import pyblish.api
from openpype.lib import requests_post
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.lib_rendersettings import RenderSettings
from openpype.pipeline import legacy_io
from openpype.settings import get_system_settings


# mapping between Maya renderer names and Muster template ids
def _get_template_id(renderer):
    """
    Return muster template ID based on renderer name.

    :param renderer: renderer name
    :type renderer: str
    :returns: muster template id
    :rtype: int
    """

    # TODO: Use settings from context?
    templates = get_system_settings()["modules"]["muster"]["templates_mapping"]
    if not templates:
        raise RuntimeError(("Muster template mapping missing in "
                            "pype-settings"))
    try:
        template_id = templates[renderer]
    except KeyError:
        raise RuntimeError("Unmapped renderer - missing template id")

    return template_id


def _get_script():
    """Get path to the image sequence script"""
    try:
        from openpype.scripts import publish_filesequence
    except Exception:
        raise RuntimeError("Expected module 'publish_deadline'"
                           "to be available")

    module_path = publish_filesequence.__file__
    if module_path.endswith(".pyc"):
        module_path = module_path[:-len(".pyc")] + ".py"

    return module_path


def get_renderer_variables(renderlayer=None):
    """Retrieve the extension which has been set in the VRay settings

    Will return None if the current renderer is not VRay
    For Maya 2016.5 and up the renderSetup creates renderSetupLayer node which
    start with `rs`. Use the actual node name, do NOT use the `nice name`

    Args:
        renderlayer (str): the node name of the renderlayer.

    Returns:
        dict
    """

    renderer = lib.get_renderer(renderlayer or lib.get_current_renderlayer())

    padding = cmds.getAttr(RenderSettings.get_padding_attr(renderer))

    filename_0 = cmds.renderSettings(fullPath=True, firstImageName=True)[0]

    if renderer == "vray":
        # Maya's renderSettings function does not return V-Ray file extension
        # so we get the extension from vraySettings
        extension = cmds.getAttr("vraySettings.imageFormatStr")

        # When V-Ray image format has not been switched once from default .png
        # the getAttr command above returns None. As such we explicitly set
        # it to `.png`
        if extension is None:
            extension = "png"

        filename_prefix = "<Scene>/<Scene>_<Layer>/<Layer>"
    else:
        # Get the extension, getAttr defaultRenderGlobals.imageFormat
        # returns an index number.
        filename_base = os.path.basename(filename_0)
        extension = os.path.splitext(filename_base)[-1].strip(".")
        filename_prefix = "<Scene>/<RenderLayer>/<RenderLayer>"

    return {"ext": extension,
            "filename_prefix": filename_prefix,
            "padding": padding,
            "filename_0": filename_0}


def preview_fname(folder, scene, layer, padding, ext):
    """Return output file path with #### for padding.

    Deadline requires the path to be formatted with # in place of numbers.
    For example `/path/to/render.####.png`

    Args:
        folder (str): The root output folder (image path)
        scene (str): The scene name
        layer (str): The layer name to be rendered
        padding (int): The padding length
        ext(str): The output file extension

    Returns:
        str

    """

    # Following hardcoded "<Scene>/<Scene>_<Layer>/<Layer>"
    output = "{scene}/{layer}/{layer}.{number}.{ext}".format(
        scene=scene,
        layer=layer,
        number="#" * padding,
        ext=ext
    )

    return os.path.join(folder, output)


class MayaSubmitMuster(pyblish.api.InstancePlugin):
    """Submit available render layers to Muster

    Renders are submitted to a Muster via HTTP API as
    supplied via the environment variable ``MUSTER_REST_URL``.

    Also needed is ``MUSTER_USER`` and ``MUSTER_PASSWORD``.
    """

    label = "Submit to Muster"
    order = pyblish.api.IntegratorOrder + 0.1
    hosts = ["maya"]
    families = ["renderlayer"]
    icon = "satellite-dish"
    if not os.environ.get("MUSTER_REST_URL"):
        optional = False
        active = False
    else:
        optional = True

    _token = None

    def _load_credentials(self):
        """
        Load Muster credentials from file and set `MUSTER_USER`,
        `MUSTER_PASSWORD`, `MUSTER_REST_URL` is loaded from settings.

        .. todo::

            Show login dialog if access token is invalid or missing.
        """
        app_dir = os.path.normpath(
            appdirs.user_data_dir('pype-app', 'pype')
        )
        file_name = 'muster_cred.json'
        fpath = os.path.join(app_dir, file_name)
        file = open(fpath, 'r')
        muster_json = json.load(file)
        self._token = muster_json.get('token', None)
        if not self._token:
            raise RuntimeError("Invalid access token for Muster")
        file.close()
        self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
        if not self.MUSTER_REST_URL:
            raise AttributeError("Muster REST API url not set")

    def _get_templates(self):
        """
        Get Muster templates from server.
        """
        params = {
            "authToken": self._token,
            "select": "name"
        }
        api_entry = '/api/templates/list'
        response = requests_post(
            self.MUSTER_REST_URL + api_entry, params=params)
        if response.status_code != 200:
            self.log.error(
                'Cannot get templates from Muster: {}'.format(
                    response.status_code))
            raise Exception('Cannot get templates from Muster.')

        try:
            response_templates = response.json()["ResponseData"]["templates"]
        except ValueError as e:
            self.log.error(
                'Muster server returned unexpected data {}'.format(e)
            )
            raise Exception('Muster server returned unexpected data')

        templates = {}
        for t in response_templates:
            templates[t.get("name")] = t.get("id")

        self._templates = templates

    def _resolve_template(self, renderer):
        """
        Returns template ID based on renderer string.

        :param renderer: Name of renderer to match against template names
        :type renderer: str
        :returns: ID of template
        :rtype: int
        :raises: Exception if template ID isn't found
        """
        self.log.debug("Trying to find template for [{}]".format(renderer))
        mapped = _get_template_id(renderer)
        self.log.debug("got id [{}]".format(mapped))
        return self._templates.get(mapped)

    def _submit(self, payload):
        """
        Submit job to Muster

        :param payload: json with job to submit
        :type payload: str
        :returns: response
        :raises: Exception status is wrong
        """
        params = {
            "authToken": self._token,
            "name": "submit"
        }
        api_entry = '/api/queue/actions'
        response = requests_post(
            self.MUSTER_REST_URL + api_entry, params=params, json=payload)

        if response.status_code != 200:
            self.log.error(
                'Cannot submit job to Muster: {}'.format(response.text))
            raise Exception('Cannot submit job to Muster.')

        return response

    def process(self, instance):
        """
        Authenticate with Muster, collect all data, prepare path for post
        render publish job and submit job to farm.
        """
        # setup muster environment
        self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")

        if self.MUSTER_REST_URL is None:
            self.log.error(
                "\"MUSTER_REST_URL\" is not found. Skipping "
                "[{}]".format(instance)
            )
            raise RuntimeError("MUSTER_REST_URL not set")

        self._load_credentials()
        # self._get_templates()

        context = instance.context
        workspace = context.data["workspaceDir"]
        project_name = context.data["projectName"]
        asset_name = context.data["asset"]

        filepath = None

        allInstances = []
        for result in context.data["results"]:
            if ((result["instance"] is not None) and
                    (result["instance"] not in allInstances)):
                allInstances.append(result["instance"])

        for inst in allInstances:
            print(inst)
            if inst.data['family'] == 'scene':
                filepath = inst.data['destination_list'][0]

        if not filepath:
            filepath = context.data["currentFile"]

        self.log.debug(filepath)

        filename = os.path.basename(filepath)
        comment = context.data.get("comment", "")
        scene = os.path.splitext(filename)[0]
        dirname = os.path.join(workspace, "renders")
        renderlayer = instance.data['renderlayer']  # rs_beauty
        renderlayer_name = instance.data['subset']  # beauty
        renderglobals = instance.data["renderGlobals"]
        # legacy_layers = renderlayer_globals["UseLegacyRenderLayers"]
        # deadline_user = context.data.get("deadlineUser", getpass.getuser())
        jobname = "%s - %s" % (filename, instance.name)

        # Get the variables depending on the renderer
        render_variables = get_renderer_variables(renderlayer)
        output_filename_0 = preview_fname(folder=dirname,
                                          scene=scene,
                                          layer=renderlayer_name,
                                          padding=render_variables["padding"],
                                          ext=render_variables["ext"])

        instance.data["outputDir"] = os.path.dirname(output_filename_0)
        self.log.debug("output: {}".format(filepath))
        # build path for metadata file
        metadata_filename = "{}_metadata.json".format(instance.data["subset"])
        output_dir = instance.data["outputDir"]
        metadata_path = os.path.join(output_dir, metadata_filename)

        pype_root = os.environ["OPENPYPE_SETUP_PATH"]

        # we must provide either full path to executable or use musters own
        # python named MPython.exe, residing directly in muster bin
        # directory.
        if platform.system().lower() == "windows":
            # for muster, those backslashes must be escaped twice
            muster_python = ("\"C:\\\\Program Files\\\\Virtual Vertex\\\\"
                             "Muster 9\\\\MPython.exe\"")
        else:
            # we need to run pype as different user then Muster dispatcher
            # service is running (usually root).
            muster_python = ("/usr/sbin/runuser -u {}"
                             " -- /usr/bin/python3".format(getpass.getuser()))

        # build the path and argument. We are providing separate --pype
        # argument with network path to pype as post job actions are run
        # but dispatcher (Server) and not render clients. Render clients
        # inherit environment from publisher including PATH, so there's
        # no problem finding PYPE, but there is now way (as far as I know)
        # to set environment dynamically for dispatcher. Therefore this hack.
        args = [muster_python,
                _get_script().replace('\\', '\\\\'),
                "--paths",
                metadata_path.replace('\\', '\\\\'),
                "--pype",
                pype_root.replace('\\', '\\\\')]

        postjob_command = " ".join(args)

        try:
            # Ensure render folder exists
            os.makedirs(dirname)
        except OSError:
            pass

        env = self.clean_environment()

        payload = {
            "RequestData": {
                "platform": 0,
                "job": {
                    "jobName": jobname,
                    "templateId": _get_template_id(
                        instance.data["renderer"]),
                    "chunksInterleave": 2,
                    "chunksPriority": "0",
                    "chunksTimeoutValue": 320,
                    "department": "",
                    "dependIds": [""],
                    "dependLinkMode": 0,
                    "dependMode": 0,
                    "emergencyQueue": False,
                    "excludedPools": [""],
                    "includedPools": [renderglobals["Pool"]],
                    "packetSize": 4,
                    "packetType": 1,
                    "priority": 1,
                    "jobId": -1,
                    "startOn": 0,
                    "parentId": -1,
                    "project": project_name or scene,
                    "shot": asset_name or scene,
                    "camera": instance.data.get("cameras")[0],
                    "dependMode": 0,
                    "packetSize": 4,
                    "packetType": 1,
                    "priority": 1,
                    "maximumInstances": 0,
                    "assignedInstances": 0,
                    "attributes": {
                        "environmental_variables": {
                            "value": ", ".join("{!s}={!r}".format(k, v)
                                               for (k, v) in env.items()),

                            "state": True,
                            "subst": False
                        },
                        "memo": {
                            "value": comment,
                            "state": True,
                            "subst": False
                        },
                        "frames_range": {
                            "value": "{start}-{end}".format(
                                start=int(instance.data["frameStart"]),
                                end=int(instance.data["frameEnd"])),
                            "state": True,
                            "subst": False
                        },
                        "job_file": {
                            "value": filepath,
                            "state": True,
                            "subst": True
                        },
                        "job_project": {
                            "value": workspace,
                            "state": True,
                            "subst": True
                        },
                        "output_folder": {
                            "value": dirname.replace("\\", "/"),
                            "state": True,
                            "subst": True
                        },
                        "post_job_action": {
                            "value": postjob_command,
                            "state": True,
                            "subst": True
                        },
                        "MAYADIGITS": {
                            "value": 1,
                            "state": True,
                            "subst": False
                        },
                        "ARNOLDMODE": {
                            "value": "0",
                            "state": True,
                            "subst": False
                        },
                        "ABORTRENDER": {
                            "value": "0",
                            "state": True,
                            "subst": True
                        },
                        "ARNOLDLICENSE": {
                            "value": "0",
                            "state": False,
                            "subst": False
                        },
                        "ADD_FLAGS": {
                            "value": "-rl {}".format(renderlayer),
                            "state": True,
                            "subst": True
                        }
                    }
                }
            }
        }

        self.preflight_check(instance)

        self.log.debug("Submitting ...")
        self.log.debug(json.dumps(payload, indent=4, sort_keys=True))

        response = self._submit(payload)
        # response = requests.post(url, json=payload)
        if not response.ok:
            raise Exception(response.text)

        # Store output dir for unified publisher (filesequence)

        instance.data["musterSubmissionJob"] = response.json()

    def clean_environment(self):
        """
        Clean and set environment variables for render job so render clients
        work in more or less same environment as publishing machine.

        .. warning:: This is not usable for **post job action** as this is
            executed on dispatcher machine (server) and not render clients.
        """
        keys = [
            # This will trigger `userSetup.py` on the slave
            # such that proper initialisation happens the same
            # way as it does on a local machine.
            # TODO(marcus): This won't work if the slaves don't
            # have access to these paths, such as if slaves are
            # running Linux and the submitter is on Windows.
            "PYTHONPATH",
            "PATH",

            "MTOA_EXTENSIONS_PATH",
            "MTOA_EXTENSIONS",
            "DYLD_LIBRARY_PATH",
            "MAYA_RENDER_DESC_PATH",
            "MAYA_MODULE_PATH",
            "ARNOLD_PLUGIN_PATH",
            "FTRACK_API_KEY",
            "FTRACK_API_USER",
            "FTRACK_SERVER",
            "PYBLISHPLUGINPATH",

            # todo: This is a temporary fix for yeti variables
            "PEREGRINEL_LICENSE",
            "SOLIDANGLE_LICENSE",
            "ARNOLD_LICENSE"
            "MAYA_MODULE_PATH",
            "TOOL_ENV"
        ]
        environment = dict({key: os.environ[key] for key in keys
                            if key in os.environ}, **legacy_io.Session)
        # self.log.debug("enviro: {}".format(pprint(environment)))
        for path in os.environ:
            if path.lower().startswith('pype_'):
                environment[path] = os.environ[path]

        environment["PATH"] = os.environ["PATH"]
        # self.log.debug("enviro: {}".format(environment['OPENPYPE_SCRIPTS']))
        clean_environment = {}
        for key, value in environment.items():
            clean_path = ""
            self.log.debug("key: {}".format(key))
            if "://" in value:
                clean_path = value
            else:
                valid_paths = []
                for path in value.split(os.pathsep):
                    if not path:
                        continue
                    try:
                        path.decode('UTF-8', 'strict')
                        valid_paths.append(os.path.normpath(path))
                    except UnicodeDecodeError:
                        print('path contains non UTF characters')

                if valid_paths:
                    clean_path = os.pathsep.join(valid_paths)

            clean_environment[key] = clean_path

        return clean_environment

    def preflight_check(self, instance):
        """Ensure the startFrame, endFrame and byFrameStep are integers"""

        for key in ("frameStart", "frameEnd", "byFrameStep"):
            value = instance.data[key]

            if int(value) == value:
                continue

            self.log.warning(
                "%f=%d was rounded off to nearest integer"
                % (value, int(value))
            )


# TODO: Remove hack to avoid this plug-in in new publisher
# This plug-in should actually be in dedicated module
if not os.environ.get("MUSTER_REST_URL"):
    del MayaSubmitMuster

@@ -1,96 +0,0 @@
import os
import json

import appdirs

import pyblish.api
from openpype.lib import requests_get
from openpype.pipeline.publish import (
    context_plugin_should_run,
    RepairAction,
)


class ValidateMusterConnection(pyblish.api.ContextPlugin):
    """
    Validate Muster REST API Service is running and we have valid auth token
    """

    label = "Validate Muster REST API Service"
    order = pyblish.api.ValidatorOrder
    hosts = ["maya"]
    families = ["renderlayer"]
    token = None
    if not os.environ.get("MUSTER_REST_URL"):
        active = False
    actions = [RepairAction]

    def process(self, context):

        # Workaround bug pyblish-base#250
        if not context_plugin_should_run(self, context):
            return

        # test if we have environment set (redundant as this plugin shouldn'
        # be active otherwise).
        try:
            MUSTER_REST_URL = os.environ["MUSTER_REST_URL"]
        except KeyError:
            self.log.error("Muster REST API url not found.")
            raise ValueError("Muster REST API url not found.")

        # Load credentials
        try:
            self._load_credentials()
        except RuntimeError:
            self.log.error("invalid or missing access token")

        assert self._token is not None, "Invalid or missing token"

        # We have token, lets do trivial query to web api to see if we can
        # connect and access token is valid.
        params = {
            'authToken': self._token
        }
        api_entry = '/api/pools/list'
        response = requests_get(
            MUSTER_REST_URL + api_entry, params=params)
        assert response.status_code == 200, "invalid response from server"
        assert response.json()['ResponseData'], "invalid data in response"

    def _load_credentials(self):
        """
        Load Muster credentials from file and set `MUSTER_USER`,
        `MUSTER_PASSWORD`, `MUSTER_REST_URL` is loaded from settings.

        .. todo::

            Show login dialog if access token is invalid or missing.
        """
        app_dir = os.path.normpath(
            appdirs.user_data_dir('pype-app', 'pype')
        )
        file_name = 'muster_cred.json'
        fpath = os.path.join(app_dir, file_name)
        file = open(fpath, 'r')
        muster_json = json.load(file)
        self._token = muster_json.get('token', None)
        if not self._token:
            raise RuntimeError("Invalid access token for Muster")
        file.close()
        self.MUSTER_REST_URL = os.environ.get("MUSTER_REST_URL")
        if not self.MUSTER_REST_URL:
            raise AttributeError("Muster REST API url not set")

    @classmethod
    def repair(cls, instance):
        """
        Renew authentication token by logging into Muster
        """
        api_url = "{}/muster/show_login".format(
            os.environ["OPENPYPE_WEBSERVER_URL"])
        cls.log.debug(api_url)
        response = requests_get(api_url, timeout=1)
        if response.status_code != 200:
            cls.log.error('Cannot show login form to Muster')
            raise Exception('Cannot show login form to Muster')

@@ -1,22 +0,0 @@
-from aiohttp.web_response import Response
-
-
-class MusterModuleRestApi:
-    def __init__(self, user_module, server_manager):
-        self.module = user_module
-        self.server_manager = server_manager
-
-        self.prefix = "/muster"
-
-        self.register()
-
-    def register(self):
-        self.server_manager.add_route(
-            "GET",
-            self.prefix + "/show_login",
-            self.show_login_widget
-        )
-
-    async def show_login_widget(self, request):
-        self.module.action_show_login.trigger()
-        return Response(status=200)
@@ -1,165 +0,0 @@
-from qtpy import QtCore, QtGui, QtWidgets
-from openpype import resources, style
-
-
-class MusterLogin(QtWidgets.QWidget):
-
-    SIZE_W = 300
-    SIZE_H = 150
-
-    loginSignal = QtCore.Signal(object, object, object)
-
-    def __init__(self, module, parent=None):
-
-        super(MusterLogin, self).__init__(parent)
-
-        self.module = module
-
-        # Icon
-        icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
-        self.setWindowIcon(icon)
-
-        self.setWindowFlags(
-            QtCore.Qt.WindowCloseButtonHint |
-            QtCore.Qt.WindowMinimizeButtonHint
-        )
-
-        self._translate = QtCore.QCoreApplication.translate
-
-        # Font
-        self.font = QtGui.QFont()
-        self.font.setFamily("DejaVu Sans Condensed")
-        self.font.setPointSize(9)
-        self.font.setBold(True)
-        self.font.setWeight(50)
-        self.font.setKerning(True)
-
-        # Size setting
-        self.resize(self.SIZE_W, self.SIZE_H)
-        self.setMinimumSize(QtCore.QSize(self.SIZE_W, self.SIZE_H))
-        self.setMaximumSize(QtCore.QSize(self.SIZE_W + 100, self.SIZE_H + 100))
-        self.setStyleSheet(style.load_stylesheet())
-
-        self.setLayout(self._main())
-        self.setWindowTitle('Muster login')
-
-    def _main(self):
-        self.main = QtWidgets.QVBoxLayout()
-        self.main.setObjectName("main")
-
-        self.form = QtWidgets.QFormLayout()
-        self.form.setContentsMargins(10, 15, 10, 5)
-        self.form.setObjectName("form")
-
-        self.label_username = QtWidgets.QLabel("Username:")
-        self.label_username.setFont(self.font)
-        self.label_username.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
-        self.label_username.setTextFormat(QtCore.Qt.RichText)
-
-        self.input_username = QtWidgets.QLineEdit()
-        self.input_username.setEnabled(True)
-        self.input_username.setFrame(True)
-        self.input_username.setPlaceholderText(
-            self._translate("main", "e.g. John Smith")
-        )
-
-        self.label_password = QtWidgets.QLabel("Password:")
-        self.label_password.setFont(self.font)
-        self.label_password.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
-        self.label_password.setTextFormat(QtCore.Qt.RichText)
-
-        self.input_password = QtWidgets.QLineEdit()
-        self.input_password.setEchoMode(QtWidgets.QLineEdit.Password)
-        self.input_password.setEnabled(True)
-        self.input_password.setFrame(True)
-        self.input_password.setPlaceholderText(
-            self._translate("main", "e.g. ********")
-        )
-
-        self.error_label = QtWidgets.QLabel("")
-        self.error_label.setFont(self.font)
-        self.error_label.setStyleSheet('color: #FC6000')
-        self.error_label.setWordWrap(True)
-        self.error_label.hide()
-
-        self.form.addRow(self.label_username, self.input_username)
-        self.form.addRow(self.label_password, self.input_password)
-        self.form.addRow(self.error_label)
-
-        self.btn_group = QtWidgets.QHBoxLayout()
-        self.btn_group.addStretch(1)
-        self.btn_group.setObjectName("btn_group")
-
-        self.btn_ok = QtWidgets.QPushButton("Ok")
-        self.btn_ok.clicked.connect(self.click_ok)
-
-        self.btn_cancel = QtWidgets.QPushButton("Cancel")
-        QtWidgets.QShortcut(
-            QtGui.QKeySequence(
-                QtCore.Qt.Key_Escape), self).activated.connect(self.close)
-        self.btn_cancel.clicked.connect(self.close)
-
-        self.btn_group.addWidget(self.btn_ok)
-        self.btn_group.addWidget(self.btn_cancel)
-
-        self.main.addLayout(self.form)
-        self.main.addLayout(self.btn_group)
-
-        return self.main
-
-    def keyPressEvent(self, key_event):
-        if key_event.key() == QtCore.Qt.Key_Return:
-            if self.input_username.hasFocus():
-                self.input_password.setFocus()
-
-            elif self.input_password.hasFocus() or self.btn_ok.hasFocus():
-                self.click_ok()
-
-            elif self.btn_cancel.hasFocus():
-                self.close()
-        else:
-            super().keyPressEvent(key_event)
-
-    def setError(self, msg):
-        self.error_label.setText(msg)
-        self.error_label.show()
-
-    def invalid_input(self, entity):
-        entity.setStyleSheet("border: 1px solid red;")
-
-    def click_ok(self):
-        # everything that should happen - validation and saving into appdirs
-        username = self.input_username.text()
-        password = self.input_password.text()
-        # TODO: more robust validation. Password can be empty in muster?
-        if not username:
-            self.setError("Username cannot be empty")
-            self.invalid_input(self.input_username)
-        try:
-            self.save_credentials(username, password)
-        except Exception as e:
-            self.setError(
-                "<b>Cannot get auth token:</b>\n<code>{}</code>".format(e))
-        else:
-            self._close_widget()
-
-    def save_credentials(self, username, password):
-        self.module.get_auth_token(username, password)
-
-    def showEvent(self, event):
-        super(MusterLogin, self).showEvent(event)
-
-        # Make buttons the same width
-        max_width = max(
-            self.btn_ok.sizeHint().width(),
-            self.btn_cancel.sizeHint().width()
-        )
-        self.btn_ok.setMinimumWidth(max_width)
-        self.btn_cancel.setMinimumWidth(max_width)
-
-    def closeEvent(self, event):
-        event.ignore()
-        self._close_widget()
-
-    def _close_widget(self):
-        self.hide()
@@ -582,16 +582,17 @@ def _create_instances_for_aov(instance, skeleton, aov_filter, additional_data,
         group_name = subset

         # if there are multiple cameras, we need to add camera name
-        if isinstance(col, (list, tuple)):
-            cam = [c for c in cameras if c in col[0]]
-        else:
-            # in case of single frame
-            cam = [c for c in cameras if c in col]
-        if cam:
-            if aov:
-                subset_name = '{}_{}_{}'.format(group_name, cam, aov)
-            else:
-                subset_name = '{}_{}'.format(group_name, cam)
+        expected_filepath = col[0] if isinstance(col, (list, tuple)) else col
+        cams = [cam for cam in cameras if cam in expected_filepath]
+        if cams:
+            for cam in cams:
+                if aov:
+                    if not aov.startswith(cam):
+                        subset_name = '{}_{}_{}'.format(group_name, cam, aov)
+                    else:
+                        subset_name = "{}_{}".format(group_name, aov)
+                else:
+                    subset_name = '{}_{}'.format(group_name, cam)
         else:
             if aov:
                 subset_name = '{}_{}'.format(group_name, aov)
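Note: the new logic derives camera names from the first expected file path instead of probing the collection twice, and avoids duplicating the camera token when the AOV name already starts with it. A hedged sketch of the naming rules (function name and sample values are illustrative):

```python
def build_subset_names(group_name, cameras, aov, expected_filepath):
    """Return the subset names the AOV/camera naming rules would produce."""
    cams = [cam for cam in cameras if cam in expected_filepath]
    names = []
    for cam in cams:
        if aov and not aov.startswith(cam):
            names.append("{}_{}_{}".format(group_name, cam, aov))
        elif aov:
            # AOV already carries the camera prefix; don't repeat it.
            names.append("{}_{}".format(group_name, aov))
        else:
            names.append("{}_{}".format(group_name, cam))
    if not cams and aov:
        names.append("{}_{}".format(group_name, aov))
    return names

print(build_subset_names(
    "renderMain", ["camMain"], "camMain_beauty",
    "/renders/camMain/beauty.0001.exr"
))  # -> ['renderMain_camMain_beauty']
```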
@@ -28,13 +28,20 @@ def concatenate_splitted_paths(split_paths, anatomy):
         # backward compatibility
         if "__project_root__" in path_items:
             for root, root_path in anatomy.roots.items():
-                if not os.path.exists(str(root_path)):
-                    log.debug("Root {} path {} does not exist on \
-                        this computer!".format(root, root_path))
+                if not root_path or not os.path.exists(str(root_path)):
+                    log.debug(
+                        "Root {} path {} does not exist on this"
+                        " computer!".format(root, root_path)
+                    )
                     continue
-                clean_items = ["{{root[{}]}}".format(root),
-                               r"{project[name]}"] + clean_items[1:]
-                output.append(os.path.normpath(os.path.sep.join(clean_items)))
+
+                root_items = [
+                    "{{root[{}]}}".format(root),
+                    "{project[name]}"
+                ]
+                root_items.extend(clean_items[1:])
+                output.append(os.path.normpath(os.path.sep.join(root_items)))
             continue

         output.append(os.path.normpath(os.path.sep.join(clean_items)))
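Note: each `__project_root__` path spec now expands into one template per anatomy root, skipping roots that are unset or missing on disk. A sketch of the expansion (the roots dict is hypothetical, and the on-disk existence check is omitted so the snippet runs anywhere):

```python
import os

roots = {"work": "/mnt/work", "publish": "/mnt/publish"}
clean_items = ["__project_root__", "assets", "characters"]

output = []
for root in roots:
    root_items = ["{{root[{}]}}".format(root), "{project[name]}"]
    root_items.extend(clean_items[1:])
    output.append(os.path.normpath(os.path.sep.join(root_items)))

print(output)
# ['{root[work]}/{project[name]}/assets/characters',
#  '{root[publish]}/{project[name]}/assets/characters']
```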
@@ -58,41 +58,13 @@ def get_template_name_profiles(
     if not project_settings:
         project_settings = get_project_settings(project_name)

-    profiles = (
+    return copy.deepcopy(
         project_settings
         ["global"]
         ["tools"]
         ["publish"]
         ["template_name_profiles"]
     )
-    if profiles:
-        return copy.deepcopy(profiles)
-
-    # Use legacy approach for cases new settings are not filled yet for the
-    # project
-    legacy_profiles = (
-        project_settings
-        ["global"]
-        ["publish"]
-        ["IntegrateAssetNew"]
-        ["template_name_profiles"]
-    )
-    if legacy_profiles:
-        if not logger:
-            logger = Logger.get_logger("get_template_name_profiles")
-
-        logger.warning((
-            "Project \"{}\" is using legacy access to publish template."
-            " It is recommended to move settings to new location"
-            " 'project_settings/global/tools/publish/template_name_profiles'."
-        ).format(project_name))
-
-        # Replace "tasks" key with "task_names"
-        profiles = []
-        for profile in copy.deepcopy(legacy_profiles):
-            profile["task_names"] = profile.pop("tasks", [])
-            profiles.append(profile)
-        return profiles


 def get_hero_template_name_profiles(
@@ -121,36 +93,13 @@ def get_hero_template_name_profiles(
     if not project_settings:
         project_settings = get_project_settings(project_name)

-    profiles = (
+    return copy.deepcopy(
         project_settings
         ["global"]
         ["tools"]
         ["publish"]
         ["hero_template_name_profiles"]
     )
-    if profiles:
-        return copy.deepcopy(profiles)
-
-    # Use legacy approach for cases new settings are not filled yet for the
-    # project
-    legacy_profiles = copy.deepcopy(
-        project_settings
-        ["global"]
-        ["publish"]
-        ["IntegrateHeroVersion"]
-        ["template_name_profiles"]
-    )
-    if legacy_profiles:
-        if not logger:
-            logger = Logger.get_logger("get_hero_template_name_profiles")
-
-        logger.warning((
-            "Project \"{}\" is using legacy access to hero publish template."
-            " It is recommended to move settings to new location"
-            " 'project_settings/global/tools/publish/"
-            "hero_template_name_profiles'."
-        ).format(project_name))
-        return legacy_profiles


 def get_publish_template_name(
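Note: both helpers now return a deep copy unconditionally, so callers can no longer mutate the settings dictionary shared across the process. A small illustration (the settings content is made up):

```python
import copy

project_settings = {
    "global": {"tools": {"publish": {"template_name_profiles": [
        {"families": ["render"], "template_name": "publish_render"},
    ]}}}
}

def get_template_name_profiles(project_settings):
    # Deep copy: mutations on the returned profiles cannot leak back
    # into the cached settings.
    return copy.deepcopy(
        project_settings["global"]["tools"]["publish"]["template_name_profiles"]
    )

profiles = get_template_name_profiles(project_settings)
profiles[0]["template_name"] = "changed"
# The source settings are untouched:
print(project_settings["global"]["tools"]["publish"]["template_name_profiles"][0])
```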
@@ -1971,7 +1971,6 @@ class PlaceholderCreateMixin(object):
         if not placeholder.data.get("keep_placeholder", True):
             self.delete_placeholder(placeholder)

-
     def create_failed(self, placeholder, creator_data):
         if hasattr(placeholder, "create_failed"):
             placeholder.create_failed(creator_data)

@@ -2036,7 +2035,7 @@ class CreatePlaceholderItem(PlaceholderItem):
         self._failed_created_publish_instances = []

     def get_errors(self):
-        if not self._failed_representations:
+        if not self._failed_created_publish_instances:
             return []
         message = (
             "Failed to create {} instance using Creator {}"
@@ -190,48 +190,25 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
         project_task_types = project_doc["config"]["tasks"]

         for instance in context:
-            asset_doc = instance.data.get("assetEntity")
-            anatomy_updates = {
+            anatomy_data = copy.deepcopy(context.data["anatomyData"])
+            anatomy_data.update({
                 "family": instance.data["family"],
                 "subset": instance.data["subset"],
-            }
-            if asset_doc:
-                parents = asset_doc["data"].get("parents") or list()
-                parent_name = project_doc["name"]
-                if parents:
-                    parent_name = parents[-1]
-
-                hierarchy = "/".join(parents)
-                anatomy_updates.update({
-                    "asset": asset_doc["name"],
-                    "hierarchy": hierarchy,
-                    "parent": parent_name,
-                    "folder": {
-                        "name": asset_doc["name"],
-                    },
-                })
-
-            # Task
-            task_type = None
-            task_name = instance.data.get("task")
-            if task_name:
-                asset_tasks = asset_doc["data"]["tasks"]
-                task_type = asset_tasks.get(task_name, {}).get("type")
-                task_code = (
-                    project_task_types
-                    .get(task_type, {})
-                    .get("short_name")
-                )
-                anatomy_updates["task"] = {
-                    "name": task_name,
-                    "type": task_type,
-                    "short": task_code
-                }
+            })
+
+            self._fill_asset_data(instance, project_doc, anatomy_data)
+            self._fill_task_data(instance, project_task_types, anatomy_data)

             # Define version
             version_number = None
             if self.follow_workfile_version:
-                version_number = context.data('version')
-            else:
+                version_number = context.data("version")
+
+            # Even if 'follow_workfile_version' is enabled, it may not be set
+            # because workfile version was not collected to 'context.data'
+            # - that can happen e.g. in 'traypublisher' or other hosts without
+            #   a workfile
+            if version_number is None:
                 version_number = instance.data.get("version")

             # use latest version (+1) if already any exist
@@ -242,6 +219,9 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):

             # If version is not specified for instance or context
             if version_number is None:
+                task_data = anatomy_data.get("task") or {}
+                task_name = task_data.get("name")
+                task_type = task_data.get("type")
                 version_number = get_versioning_start(
                     context.data["projectName"],
                     instance.context.data["hostName"],
@@ -250,29 +230,26 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
                     family=instance.data["family"],
                     subset=instance.data["subset"]
                 )
-            anatomy_updates["version"] = version_number
+            anatomy_data["version"] = version_number

             # Additional data
             resolution_width = instance.data.get("resolutionWidth")
             if resolution_width:
-                anatomy_updates["resolution_width"] = resolution_width
+                anatomy_data["resolution_width"] = resolution_width

             resolution_height = instance.data.get("resolutionHeight")
             if resolution_height:
-                anatomy_updates["resolution_height"] = resolution_height
+                anatomy_data["resolution_height"] = resolution_height

             pixel_aspect = instance.data.get("pixelAspect")
             if pixel_aspect:
-                anatomy_updates["pixel_aspect"] = float(
+                anatomy_data["pixel_aspect"] = float(
                     "{:0.2f}".format(float(pixel_aspect))
                 )

             fps = instance.data.get("fps")
             if fps:
-                anatomy_updates["fps"] = float("{:0.2f}".format(float(fps)))
-
-            anatomy_data = copy.deepcopy(context.data["anatomyData"])
-            anatomy_data.update(anatomy_updates)
+                anatomy_data["fps"] = float("{:0.2f}".format(float(fps)))

             # Store anatomy data
             instance.data["projectEntity"] = project_doc
@@ -288,3 +265,157 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
             instance_name,
             json.dumps(anatomy_data, indent=4)
         ))
+
+    def _fill_asset_data(self, instance, project_doc, anatomy_data):
+        # QUESTION should we make sure that all asset data are popped if
+        #   asset data cannot be found?
+        # - 'asset', 'hierarchy', 'parent', 'folder'
+        asset_doc = instance.data.get("assetEntity")
+        if asset_doc:
+            parents = asset_doc["data"].get("parents") or list()
+            parent_name = project_doc["name"]
+            if parents:
+                parent_name = parents[-1]
+
+            hierarchy = "/".join(parents)
+            anatomy_data.update({
+                "asset": asset_doc["name"],
+                "hierarchy": hierarchy,
+                "parent": parent_name,
+                "folder": {
+                    "name": asset_doc["name"],
+                },
+            })
+            return
+
+        if instance.data.get("newAssetPublishing"):
+            hierarchy = instance.data["hierarchy"]
+            anatomy_data["hierarchy"] = hierarchy
+
+            parent_name = project_doc["name"]
+            if hierarchy:
+                parent_name = hierarchy.split("/")[-1]
+
+            asset_name = instance.data["asset"].split("/")[-1]
+            anatomy_data.update({
+                "asset": asset_name,
+                "hierarchy": hierarchy,
+                "parent": parent_name,
+                "folder": {
+                    "name": asset_name,
+                },
+            })
+
+    def _fill_task_data(self, instance, project_task_types, anatomy_data):
+        # QUESTION should we make sure that all task data are popped if task
+        #   data cannot be resolved?
+        # - 'task'
+
+        # Skip if there is no task
+        task_name = instance.data.get("task")
+        if not task_name:
+            return
+
+        # Find task data based on asset entity
+        asset_doc = instance.data.get("assetEntity")
+        task_data = self._get_task_data_from_asset(
+            asset_doc, task_name, project_task_types
+        )
+        if task_data:
+            # Fill task data
+            # - if we're in editorial, make sure the task type is filled
+            if (
+                not instance.data.get("newAssetPublishing")
+                or task_data["type"]
+            ):
+                anatomy_data["task"] = task_data
+                return
+
+        # New hierarchy is not created, so we can only skip rest of the logic
+        if not instance.data.get("newAssetPublishing"):
+            return
+
+        # Try to find task data based on hierarchy context and asset name
+        hierarchy_context = instance.context.data.get("hierarchyContext")
+        asset_name = instance.data.get("asset")
+        if not hierarchy_context or not asset_name:
+            return
+
+        project_name = instance.context.data["projectName"]
+        # OpenPype approach vs AYON approach
+        if "/" not in asset_name:
+            tasks_info = self._find_tasks_info_in_hierarchy(
+                hierarchy_context, asset_name
+            )
+        else:
+            current_data = hierarchy_context.get(project_name, {})
+            for key in asset_name.split("/"):
+                if key:
+                    current_data = current_data.get("childs", {}).get(key, {})
+            tasks_info = current_data.get("tasks", {})
+
+        task_info = tasks_info.get(task_name, {})
+        task_type = task_info.get("type")
+        task_code = (
+            project_task_types
+            .get(task_type, {})
+            .get("short_name")
+        )
+        anatomy_data["task"] = {
+            "name": task_name,
+            "type": task_type,
+            "short": task_code
+        }
+
+    def _get_task_data_from_asset(
+        self, asset_doc, task_name, project_task_types
+    ):
+        """
+
+        Args:
+            asset_doc (Union[dict[str, Any], None]): Asset document.
+            task_name (Union[str, None]): Task name.
+            project_task_types (dict[str, dict[str, Any]]): Project task
+                types.
+
+        Returns:
+            Union[dict[str, str], None]: Task data or None if not found.
+        """
+
+        if not asset_doc or not task_name:
+            return None
+
+        asset_tasks = asset_doc["data"]["tasks"]
+        task_type = asset_tasks.get(task_name, {}).get("type")
+        task_code = (
+            project_task_types
+            .get(task_type, {})
+            .get("short_name")
+        )
+        return {
+            "name": task_name,
+            "type": task_type,
+            "short": task_code
+        }
+
+    def _find_tasks_info_in_hierarchy(self, hierarchy_context, asset_name):
+        """Find tasks info for an asset in editorial hierarchy.
+
+        Args:
+            hierarchy_context (dict[str, Any]): Editorial hierarchy context.
+            asset_name (str): Asset name.
+
+        Returns:
+            dict[str, dict[str, Any]]: Tasks info by name.
+        """
+
+        hierarchy_queue = collections.deque()
+        hierarchy_queue.append(copy.deepcopy(hierarchy_context))
+        while hierarchy_queue:
+            item = hierarchy_queue.popleft()
+            if asset_name in item:
+                return item[asset_name].get("tasks") or {}
+
+            for subitem in item.values():
+                hierarchy_queue.extend(subitem.get("childs") or [])
+        return {}
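Note: `_find_tasks_info_in_hierarchy` walks the editorial hierarchy breadth-first through nested "childs" mappings. A runnable sketch of the traversal (the hierarchy data is invented, and this version queues the child mapping itself rather than extending the queue):

```python
import collections

hierarchy_context = {
    "sq01": {
        "childs": {
            "sh010": {"tasks": {"compositing": {"type": "Compositing"}}}
        }
    }
}

def find_tasks_info(hierarchy_context, asset_name):
    """Breadth-first search for an asset's tasks in nested 'childs' dicts."""
    queue = collections.deque()
    queue.append(hierarchy_context)
    while queue:
        item = queue.popleft()
        if asset_name in item:
            return item[asset_name].get("tasks") or {}
        for subitem in item.values():
            # Queue the child mapping for the next level of the search.
            queue.append(subitem.get("childs") or {})
    return {}

print(find_tasks_info(hierarchy_context, "sh010"))
# -> {'compositing': {'type': 'Compositing'}}
```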
@@ -19,7 +19,7 @@ class CollectFarmTarget(pyblish.api.InstancePlugin):
         farm_name = ""
         op_modules = context.data.get("openPypeModules")

-        for farm_renderer in ["deadline", "royalrender", "muster"]:
+        for farm_renderer in ["deadline", "royalrender"]:
            op_module = op_modules.get(farm_renderer, False)

            if op_module and op_module.enabled:
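Note: with Muster removed, the farm target is resolved from the remaining modules only. A hedged sketch of the selection (the module objects are stand-ins, and stopping at the first enabled module is this sketch's assumption, not necessarily the plugin's exact behavior):

```python
class _Module:
    """Stand-in for an OpenPype module object with an 'enabled' flag."""
    def __init__(self, enabled):
        self.enabled = enabled

op_modules = {"deadline": _Module(False), "royalrender": _Module(True)}

farm_name = ""
for farm_renderer in ["deadline", "royalrender"]:
    op_module = op_modules.get(farm_renderer, False)
    if op_module and op_module.enabled:
        farm_name = farm_renderer
        break

print(farm_name)  # -> "royalrender"
```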
@@ -93,14 +93,6 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin):
         assert ctx.get("user") == data.get("user"), ctx_err % "user"
         assert ctx.get("version") == data.get("version"), ctx_err % "version"

-        # ftrack credentials are passed as environment variables by Deadline
-        # to publish job, but Muster doesn't pass them.
-        if data.get("ftrack") and not os.environ.get("FTRACK_API_USER"):
-            ftrack = data.get("ftrack")
-            os.environ["FTRACK_API_USER"] = ftrack["FTRACK_API_USER"]
-            os.environ["FTRACK_API_KEY"] = ftrack["FTRACK_API_KEY"]
-            os.environ["FTRACK_SERVER"] = ftrack["FTRACK_SERVER"]
-
         # now we can just add instances from json file and we are done
         any_staging_dir_persistent = False
         for instance_data in data.get("instances"):
@@ -79,19 +79,6 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
             "representation": "TEMP"
         })

-        # Add fill keys for editorial publishing creating new entity
-        # TODO handle in editorial plugin
-        if instance.data.get("newAssetPublishing"):
-            if "hierarchy" not in template_data:
-                template_data["hierarchy"] = instance.data["hierarchy"]
-
-            if "asset" not in template_data:
-                asset_name = instance.data["asset"].split("/")[-1]
-                template_data["asset"] = asset_name
-                template_data["folder"] = {
-                    "name": asset_name
-                }
-
         publish_templates = anatomy.templates_obj["publish"]
         if "folder" in publish_templates:
             publish_folder = publish_templates["folder"].format_strict(
@@ -189,6 +189,13 @@ class ExtractOIIOTranscode(publish.Extractor):
                 if len(new_repre["files"]) == 1:
                     new_repre["files"] = new_repre["files"][0]

+                # If the source representation has a "review" tag that is
+                # not part of the output definition tags, both
+                # representations would be transcoded in ExtractReview and
+                # their outputs would clash in integration.
+                if "review" in repre.get("tags", []):
+                    added_review = True
+
                 new_representations.append(new_repre)
                 added_representations = True
@@ -30,8 +30,7 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):
         if not AYON_SERVER_ENABLED:
             return

-        hierarchy_context = context.data.get("hierarchyContext")
-        if not hierarchy_context:
+        if not context.data.get("hierarchyContext"):
             self.log.debug("Skipping ExtractHierarchyToAYON")
             return

@@ -231,7 +231,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             "files": jpeg_file,
             "stagingDir": dst_staging,
             "thumbnail": True,
-            "tags": new_repre_tags
+            "tags": new_repre_tags,
+            # If the source image is a jpg, the integrated outputs can
+            # clash, so make the output name explicit.
+            "outputName": "thumbnail"
         }

         # adding representation
@@ -65,7 +65,8 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             "files": dst_filename,
             "stagingDir": dst_staging,
             "thumbnail": True,
-            "tags": ["thumbnail"]
+            "tags": ["thumbnail"],
+            "outputName": "thumbnail",
         }

         # adding representation
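Note: both thumbnail extractors now set an explicit "outputName". A sketch of why this matters (the representation payloads are illustrative; keys mirror the pipeline's representation dicts):

```python
# A jpg source representation and its jpg thumbnail would otherwise both
# resolve to the same integrated file name.
source_repre = {
    "name": "jpg",
    "files": "render.jpg",
}
thumbnail_repre = {
    "name": "thumbnail",
    "files": "render_thumb.jpg",
    "thumbnail": True,
    "tags": ["thumbnail"],
    # Explicit output name keeps the two jpg outputs distinct on disk.
    "outputName": "thumbnail",
}
print(thumbnail_repre["outputName"])
```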
@@ -54,7 +54,6 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
     # permissions error on files (files were used or user didn't have perms)
     # *but all other plugins must be successfully completed

-    template_name_profiles = []
     _default_template_name = "hero"

     def process(self, instance):
@@ -220,22 +220,6 @@ def _convert_deadline_system_settings(
     output["modules"]["deadline"] = deadline_settings


-def _convert_muster_system_settings(
-    ayon_settings, output, addon_versions, default_settings
-):
-    enabled = addon_versions.get("muster") is not None
-    muster_settings = default_settings["modules"]["muster"]
-    muster_settings["enabled"] = enabled
-    if enabled:
-        ayon_muster = ayon_settings["muster"]
-        muster_settings["MUSTER_REST_URL"] = ayon_muster["MUSTER_REST_URL"]
-        muster_settings["templates_mapping"] = {
-            item["name"]: item["value"]
-            for item in ayon_muster["templates_mapping"]
-        }
-    output["modules"]["muster"] = muster_settings
-
-
 def _convert_royalrender_system_settings(
     ayon_settings, output, addon_versions, default_settings
 ):
@@ -261,7 +245,6 @@ def _convert_modules_system(
         _convert_timers_manager_system_settings,
         _convert_clockify_system_settings,
         _convert_deadline_system_settings,
-        _convert_muster_system_settings,
         _convert_royalrender_system_settings,
     ):
         func(ayon_settings, output, addon_versions, default_settings)
@@ -1236,6 +1219,8 @@ def _convert_global_project_settings(ayon_settings, output, default_settings):
     for profile in extract_oiio_transcode_profiles:
         new_outputs = {}
         name_counter = {}
+        if "product_names" in profile:
+            profile["subsets"] = profile.pop("product_names")
         for profile_output in profile["outputs"]:
             if "name" in profile_output:
                 name = profile_output.pop("name")
@@ -1291,12 +1276,6 @@ def _convert_global_project_settings(ayon_settings, output, default_settings):
         for extract_burnin_def in extract_burnin_defs
     }

-    ayon_integrate_hero = ayon_publish["IntegrateHeroVersion"]
-    for profile in ayon_integrate_hero["template_name_profiles"]:
-        if "product_types" not in profile:
-            break
-        profile["families"] = profile.pop("product_types")
-
     if "IntegrateProductGroup" in ayon_publish:
         subset_group = ayon_publish.pop("IntegrateProductGroup")
         subset_group_profiles = subset_group.pop("product_grouping_profiles")
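Note: these converters translate AYON settings shapes into the OpenPype ones, typically by renaming keys in place or reshaping lists into dicts. The removed Muster converter shows the common list-to-dict pattern; in isolation:

```python
# AYON stores mappings as a list of {"name", "value"} items; OpenPype
# settings expect a plain dict. Data below is illustrative.
templates_mapping = [
    {"name": "software", "value": 1},
    {"name": "vray", "value": 37},
]
converted = {item["name"]: item["value"] for item in templates_mapping}
print(converted)  # {'software': 1, 'vray': 37}

# Key renaming works the same way, e.g. "product_names" -> "subsets":
profile = {"product_names": ["renderMain"], "outputs": []}
if "product_names" in profile:
    profile["subsets"] = profile.pop("product_names")
print(profile)  # {'outputs': [], 'subsets': ['renderMain']}
```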
@@ -65,6 +65,8 @@
     "group": "",
     "department": "",
     "use_gpu": true,
+    "workfile_dependency": true,
+    "use_published_workfile": true,
     "env_allowed_keys": [],
     "env_search_replace_values": {},
     "limit_groups": {}
@@ -15,6 +15,11 @@
         "copy_status": false,
         "force_sync": false
     },
+    "hooks": {
+        "InstallPySideToFusion": {
+            "enabled": true
+        }
+    },
     "create": {
         "CreateSaver": {
             "temp_rendering_path_template": "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}",

@@ -26,7 +31,21 @@
                 "reviewable",
                 "farm_rendering"
             ],
-            "image_format": "exr"
+            "image_format": "exr",
+            "default_frame_range_option": "asset_db"
+        },
+        "CreateImageSaver": {
+            "temp_rendering_path_template": "{workdir}/renders/fusion/{subset}/{subset}.{ext}",
+            "default_variants": [
+                "Main",
+                "Mask"
+            ],
+            "instance_attributes": [
+                "reviewable",
+                "farm_rendering"
+            ],
+            "image_format": "exr",
+            "default_frame": 0
         }
     },
     "publish": {
@@ -164,23 +164,6 @@
       "default": "http://127.0.0.1:8082"
     }
   },
-  "muster": {
-    "enabled": false,
-    "MUSTER_REST_URL": "http://127.0.0.1:9890",
-    "templates_mapping": {
-      "file_layers": 7,
-      "mentalray": 2,
-      "mentalray_sf": 6,
-      "redshift": 55,
-      "renderman": 29,
-      "software": 1,
-      "software_sf": 5,
-      "turtle": 10,
-      "vector": 4,
-      "vray": 37,
-      "ffmpeg": 48
-    }
-  },
   "royalrender": {
     "enabled": false,
     "rr_paths": {
@@ -645,7 +645,7 @@ How output of the schema could look like on save:
     },
     "is_group": true,
    "key": "templates_mapping",
-    "label": "Muster - Templates mapping",
+    "label": "Deadline - Templates mapping",
    "is_file": true
}
```

@@ -657,7 +657,7 @@ How output of the schema could look like on save:
    "object_type": "text",
    "is_group": true,
    "key": "templates_mapping",
-    "label": "Muster - Templates mapping",
+    "label": "Deadline - Templates mapping",
    "is_file": true
}
```
@@ -362,6 +362,16 @@
         "key": "use_gpu",
         "label": "Use GPU"
     },
+    {
+        "type": "boolean",
+        "key": "workfile_dependency",
+        "label": "Workfile Dependency"
+    },
+    {
+        "type": "boolean",
+        "key": "use_published_workfile",
+        "label": "Use Published Workfile"
+    },
     {
         "type": "list",
         "key": "env_allowed_keys",
@@ -41,6 +41,29 @@
             }
         ]
     },
+    {
+        "type": "dict",
+        "collapsible": true,
+        "key": "hooks",
+        "label": "Hooks",
+        "children": [
+            {
+                "type": "dict",
+                "collapsible": true,
+                "checkbox_key": "enabled",
+                "key": "InstallPySideToFusion",
+                "label": "Install PySide2",
+                "is_group": true,
+                "children": [
+                    {
+                        "type": "boolean",
+                        "key": "enabled",
+                        "label": "Enabled"
+                    }
+                ]
+            }
+        ]
+    },
     {
         "type": "dict",
         "collapsible": true,
@@ -51,7 +74,7 @@
     "type": "dict",
     "collapsible": true,
     "key": "CreateSaver",
-    "label": "Create Saver",
+    "label": "Create Render Saver",
     "is_group": true,
     "children": [
         {
@@ -93,6 +116,71 @@
                     {"tif": "tif"},
                     {"jpg": "jpg"}
                 ]
             },
+            {
+                "key": "default_frame_range_option",
+                "label": "Default frame range source",
+                "type": "enum",
+                "multiselect": false,
+                "enum_items": [
+                    {"asset_db": "Current asset context"},
+                    {"render_range": "From render in/out"},
+                    {"comp_range": "From composition timeline"}
+                ]
+            }
+        ]
+    },
+    {
+        "type": "dict",
+        "collapsible": true,
+        "key": "CreateImageSaver",
+        "label": "Create Image Saver",
+        "is_group": true,
+        "children": [
+            {
+                "type": "text",
+                "key": "temp_rendering_path_template",
+                "label": "Temporary rendering path template"
+            },
+            {
+                "type": "list",
+                "key": "default_variants",
+                "label": "Default variants",
+                "object_type": {
+                    "type": "text"
+                }
+            },
+            {
+                "key": "instance_attributes",
+                "label": "Instance attributes",
+                "type": "enum",
+                "multiselection": true,
+                "enum_items": [
+                    {
+                        "reviewable": "Reviewable"
+                    },
+                    {
+                        "farm_rendering": "Farm rendering"
+                    }
+                ]
+            },
+            {
+                "key": "image_format",
+                "label": "Output Image Format",
+                "type": "enum",
+                "multiselect": false,
+                "enum_items": [
+                    {"exr": "exr"},
+                    {"tga": "tga"},
+                    {"png": "png"},
+                    {"tif": "tif"},
+                    {"jpg": "jpg"}
+                ]
+            },
+            {
+                "type": "number",
+                "key": "default_frame",
+                "label": "Default rendered frame"
+            }
         ]
     }
@@ -1023,49 +1023,6 @@
         {
             "type": "label",
             "label": "<b>NOTE:</b> Hero publish template profiles settings were moved to <a href=\"settings://project_settings/global/tools/publish/hero_template_name_profiles\"><b>Tools/Publish/Hero template name profiles</b></a>. Please move values there."
         },
-        {
-            "type": "list",
-            "key": "template_name_profiles",
-            "label": "Template name profiles (DEPRECATED)",
-            "use_label_wrap": true,
-            "object_type": {
-                "type": "dict",
-                "children": [
-                    {
-                        "key": "families",
-                        "label": "Families",
-                        "type": "list",
-                        "object_type": "text"
-                    },
-                    {
-                        "type": "hosts-enum",
-                        "key": "hosts",
-                        "label": "Hosts",
-                        "multiselection": true
-                    },
-                    {
-                        "key": "task_types",
-                        "label": "Task types",
-                        "type": "task-types-enum"
-                    },
-                    {
-                        "key": "task_names",
-                        "label": "Task names",
-                        "type": "list",
-                        "object_type": "text"
-                    },
-                    {
-                        "type": "separator"
-                    },
-                    {
-                        "type": "text",
-                        "key": "template_name",
-                        "label": "Template name",
-                        "tooltip": "Name of template from Anatomy templates"
-                    }
-                ]
-            }
-        }
     ]
 },
@@ -207,37 +207,6 @@
             }
         ]
     },
-    {
-        "type": "dict",
-        "key": "muster",
-        "label": "Muster",
-        "require_restart": true,
-        "collapsible": true,
-        "checkbox_key": "enabled",
-        "children": [
-            {
-                "type": "boolean",
-                "key": "enabled",
-                "label": "Enabled"
-            },
-            {
-                "type": "text",
-                "key": "MUSTER_REST_URL",
-                "label": "Muster Rest URL"
-            },
-            {
-                "type": "dict-modifiable",
-                "object_type": {
-                    "type": "number",
-                    "minimum": 0,
-                    "maximum": 300
-                },
-                "is_group": true,
-                "key": "templates_mapping",
-                "label": "Templates mapping"
-            }
-        ]
-    },
     {
         "type": "dict",
         "key": "royalrender",
@@ -140,12 +140,10 @@ class SiteSyncModel:
             Union[dict[str, Any], None]: Site icon definition.
         """

-        if not project_name:
+        if not project_name or not self.is_site_sync_enabled(project_name):
             return None
-
         active_site = self.get_active_site(project_name)
-        provider = self._get_provider_for_site(project_name, active_site)
-        return self._get_provider_icon(provider)
+        return self._get_site_icon_def(project_name, active_site)

     def get_remote_site_icon_def(self, project_name):
         """Remote site icon definition.

@@ -160,7 +158,14 @@ class SiteSyncModel:
         if not project_name or not self.is_site_sync_enabled(project_name):
             return None
         remote_site = self.get_remote_site(project_name)
-        provider = self._get_provider_for_site(project_name, remote_site)
-        return self._get_provider_icon(provider)
+        return self._get_site_icon_def(project_name, remote_site)
+
+    def _get_site_icon_def(self, project_name, site_name):
+        # use different icon for studio even if provider is 'local_drive'
+        if site_name == self._site_sync_addon.DEFAULT_SITE:
+            provider = "studio"
+        else:
+            provider = self._get_provider_for_site(project_name, site_name)
+        return self._get_provider_icon(provider)

     def get_version_sync_availability(self, project_name, version_ids):
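Note: the icon lookup for active and remote sites is now shared through `_get_site_icon_def`, with a special case for the studio site. A standalone sketch of the resolution (DEFAULT_SITE and the provider table are stand-ins for the addon's real attributes):

```python
DEFAULT_SITE = "studio"
providers_by_site = {"studio": "local_drive", "gdrive_site": "gdrive"}

def get_site_icon_provider(site_name):
    # The studio site gets its own icon even though its provider is
    # 'local_drive'.
    if site_name == DEFAULT_SITE:
        return "studio"
    return providers_by_site.get(site_name)

print(get_site_icon_provider("studio"))       # -> "studio"
print(get_site_icon_provider("gdrive_site"))  # -> "gdrive"
```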
@@ -84,9 +84,9 @@ class SceneInventoryController:
     def get_containers(self):
         host = self._host
         if isinstance(host, ILoadHost):
-            return host.get_containers()
+            return list(host.get_containers())
         elif hasattr(host, "ls"):
-            return host.ls()
+            return list(host.ls())
         return []

     # Site Sync methods
@@ -23,6 +23,7 @@ from openpype.pipeline import (
 )
 from openpype.style import get_default_entity_icon_color
 from openpype.tools.utils.models import TreeModel, Item
+from openpype.tools.ayon_utils.widgets import get_qt_icon


 def walk_hierarchy(node):

@@ -71,8 +72,8 @@ class InventoryModel(TreeModel):
         site_icons = self._controller.get_site_provider_icons()

         self._site_icons = {
-            provider: QtGui.QIcon(icon_path)
-            for provider, icon_path in site_icons.items()
+            provider: get_qt_icon(icon_def)
+            for provider, icon_def in site_icons.items()
         }

     def outdated(self, item):
@@ -42,8 +42,8 @@ class SiteSyncModel:

         if not self.is_sync_server_enabled():
             return {}
-        site_sync = self._get_sync_server_module()
-        return site_sync.get_site_icons()
+        site_sync_addon = self._get_sync_server_module()
+        return site_sync_addon.get_site_icons()

     def get_sites_information(self):
         return {

@@ -150,23 +150,23 @@ class SiteSyncModel:
         return self._remote_site_provider

     def _cache_sites(self):
-        site_sync = self._get_sync_server_module()
         active_site = None
         remote_site = None
         active_site_provider = None
         remote_site_provider = None
-        if site_sync is not None:
+        if self.is_sync_server_enabled():
+            site_sync = self._get_sync_server_module()
             project_name = self._controller.get_current_project_name()
             active_site = site_sync.get_active_site(project_name)
             remote_site = site_sync.get_remote_site(project_name)
             active_site_provider = "studio"
             remote_site_provider = "studio"
             if active_site != "studio":
-                active_site_provider = site_sync.get_active_provider(
+                active_site_provider = site_sync.get_provider_for_site(
                     project_name, active_site
                 )
             if remote_site != "studio":
-                remote_site_provider = site_sync.get_active_provider(
+                remote_site_provider = site_sync.get_provider_for_site(
                     project_name, remote_site
                 )

@@ -10,6 +10,7 @@ import inspect
 from abc import ABCMeta, abstractmethod

 import six
+import arrow
 import pyblish.api

 from openpype import AYON_SERVER_ENABLED

@@ -285,6 +286,8 @@ class PublishReportMaker:

     def get_report(self, publish_plugins=None):
         """Report data with all details of current state."""

+        now = arrow.utcnow().to("local")
+
         instances_details = {}
         for instance in self._all_instances_by_id.values():
             instances_details[instance.id] = self._extract_instance_data(

@@ -334,7 +337,8 @@ class PublishReportMaker:
             "context": self._extract_context_data(self._current_context),
             "crashed_file_paths": crashed_file_paths,
             "id": uuid.uuid4().hex,
-            "report_version": "1.0.0"
+            "created_at": now.isoformat(),
+            "report_version": "1.0.1",
         }

     def _extract_context_data(self, context):
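Note: the report envelope now records a local-time creation stamp via the arrow package, alongside the bumped report version. In isolation:

```python
import uuid
import arrow

# Sketch of the new report metadata fields.
report = {
    "id": uuid.uuid4().hex,
    "created_at": arrow.utcnow().to("local").isoformat(),
    "report_version": "1.0.1",
}
print(report["created_at"])  # e.g. 2024-01-31T14:02:11.123456+01:00
```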
@@ -26,14 +26,14 @@ class InstancesModel(QtGui.QStandardItemModel):
         return self._items_by_id

     def set_report(self, report_item):
-        self.clear()
+        root_item = self.invisibleRootItem()
+        if root_item.rowCount() > 0:
+            root_item.removeRows(0, root_item.rowCount())
         self._items_by_id.clear()
         self._plugin_items_by_id.clear()
         if not report_item:
             return

-        root_item = self.invisibleRootItem()
-
         families = set(report_item.instance_items_by_family.keys())
         families.remove(None)
         all_families = list(sorted(families))

@@ -125,14 +125,14 @@ class PluginsModel(QtGui.QStandardItemModel):
         return self._items_by_id

     def set_report(self, report_item):
-        self.clear()
+        root_item = self.invisibleRootItem()
+        if root_item.rowCount() > 0:
+            root_item.removeRows(0, root_item.rowCount())
         self._items_by_id.clear()
         self._plugin_items_by_id.clear()
         if not report_item:
             return

-        root_item = self.invisibleRootItem()
-
         labels_iter = iter(self.order_label_mapping)
         cur_order, cur_label = next(labels_iter)
         cur_plugin_items = []
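Note: both models replaced `self.clear()` with row removal on the invisible root item; `QStandardItemModel.clear()` also resets headers and column configuration, while removing rows only empties the content. A minimal sketch (constructing the model outside a running Qt application is done here only for demonstration):

```python
from qtpy import QtGui

model = QtGui.QStandardItemModel()
model.appendRow(QtGui.QStandardItem("instance A"))

# Remove all rows but keep the model's headers and root item intact.
root_item = model.invisibleRootItem()
if root_item.rowCount() > 0:
    root_item.removeRows(0, root_item.rowCount())

print(model.rowCount())  # -> 0
```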
Some files were not shown because too many files have changed in this diff.