Merge branch 'develop' into enhancement/maya_template

Commit: a129cec522
337 changed files with 11161 additions and 2351 deletions
24  .github/ISSUE_TEMPLATE/bug_report.yml (vendored)

@@ -35,6 +35,16 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.15.7-nightly.2
+        - 3.15.7-nightly.1
+        - 3.15.6
+        - 3.15.6-nightly.3
+        - 3.15.6-nightly.2
+        - 3.15.6-nightly.1
+        - 3.15.5
+        - 3.15.5-nightly.2
+        - 3.15.5-nightly.1
+        - 3.15.4
         - 3.15.4-nightly.3
         - 3.15.4-nightly.2
         - 3.15.4-nightly.1

@@ -125,16 +135,6 @@ body:
-        - 3.14.1-nightly.3
-        - 3.14.1-nightly.2
-        - 3.14.1-nightly.1
-        - 3.14.0
-        - 3.14.0-nightly.1
-        - 3.13.1-nightly.3
-        - 3.13.1-nightly.2
-        - 3.13.1-nightly.1
-        - 3.13.0
-        - 3.13.0-nightly.1
         - 3.12.3-nightly.3
         - 3.12.3-nightly.2
         - 3.12.3-nightly.1
     validations:
       required: true
   - type: dropdown

@@ -166,8 +166,8 @@ body:
       label: Are there any labels you wish to add?
       description: Please search labels and identify those related to your bug.
       options:
-        - label: I have added the relevant labels to the bug report.
-          required: true
+        - label: I have added the relevant labels to the bug report.
+          required: true
   - type: textarea
     id: logs
     attributes:
2  .github/workflows/nightly_merge.yml (vendored)

@@ -25,5 +25,5 @@ jobs:
       - name: Invoke pre-release workflow
         uses: benc-uk/workflow-dispatch@v1
         with:
-          workflow: Nightly Prerelease
+          workflow: prerelease.yml
           token: ${{ secrets.YNPUT_BOT_TOKEN }}
6  .github/workflows/prerelease.yml (vendored)

@@ -65,3 +65,9 @@ jobs:
           source_ref: 'main'
           target_branch: 'develop'
           commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
+
+      - name: Invoke Update bug report workflow
+        uses: benc-uk/workflow-dispatch@v1
+        with:
+          workflow: update_bug_report.yml
+          token: ${{ secrets.YNPUT_BOT_TOKEN }}
8  .github/workflows/update_bug_report.yml (vendored)

@@ -23,3 +23,11 @@ jobs:
           limit_to: 100
           form: .github/ISSUE_TEMPLATE/bug_report.yml
           commit_message: 'chore(): update bug report / version'
+          dry_run: no-push
+
+      - name: Push to protected develop branch
+        uses: CasperWA/push-protected@v2.10.0
+        with:
+          token: ${{ secrets.YNPUT_BOT_TOKEN }}
+          branch: develop
+          unprotect_reviews: true
551  CHANGELOG.md

@@ -1,6 +1,557 @@
# Changelog


## [3.15.6](https://github.com/ynput/OpenPype/tree/3.15.6)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.5...3.15.6)

### **🆕 New features**

<details>
<summary>Substance Painter Integration <a href="https://github.com/ynput/OpenPype/pull/4283">#4283</a></summary>

<strong>This implements a part of #4205 by adding a Substance Painter integration.
</strong>Status:
- [x] Implement host
- [x] Start Substance with the last workfile using the `AddLastWorkfileToLaunchArgs` prelaunch hook
- [x] Implement Qt tools
- [x] Implement loaders
- [x] Implemented a "Set project mesh" loader (this is a relatively special case because a project will always have exactly one mesh - a Substance Painter project cannot exist without a mesh).
- [x] Implement project open callback
- [x] On project open, notify the user if the loaded model is outdated
- [x] Implement publishing logic
- [x] Workfile publishing
- [x] Export texture sets
- [x] Support OCIO using #4195 (draft branch is set up - see comment)
- [ ] Likely needs more testing on the OCIO front
- [x] Validate that all outputs of the export template are exported/generated
- [x] Allow validation to be optional **(issue: there's no API method to detect what maps will be exported without doing an actual export to disk)**
- [x] Support extracting/integration if not all outputs are generated
- [x] Support multiple materials/texture sets per instance
- [ ] Add a validator that can enforce only a single texture set output if a studio prefers that.
- [ ] Implement export file format (extensions) override in the Creator
- [ ] Add settings so an admin can choose which extensions are available.

___

</details>

<details>
<summary>Data Exchange: Geometry in 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4555">#4555</a></summary>

<strong>Introduces and updates a creator, extractors and loaders for the model family.
</strong>Introduces a new creator, extractors and loaders for the model family, while adding model families into the existing max scene loader and extractor.
- [x] creators
- [x] adding model family into max scene loader and extractor
- [x] fbx loader
- [x] fbx extractor
- [x] usd loader
- [x] usd extractor
- [x] validator for model family
- [x] obj loader (update function)
- [x] fix the update function of the loader as in #4675
- [x] add documentation

___

</details>

<details>
<summary>AfterEffects: add review flag to each instance <a href="https://github.com/ynput/OpenPype/pull/4884">#4884</a></summary>

Adds a `mark_for_review` flag to the Creator to allow artists to disable review if necessary. The flag is exposed in Settings and set to True by default (i.e. the same behavior as previously).

___

</details>

### **🚀 Enhancements**

<details>
<summary>Houdini: Fix Validate Output Node (VDB) <a href="https://github.com/ynput/OpenPype/pull/4819">#4819</a></summary>

- Removes a plug-in that was a duplicate of this plug-in.
- Slightly optimizes logging of many prims.
- Fixes error reporting the way https://github.com/ynput/OpenPype/pull/4818 did.

___

</details>

<details>
<summary>Houdini: Add null node as output indicator when using TAB search <a href="https://github.com/ynput/OpenPype/pull/4834">#4834</a></summary>

___

</details>

<details>
<summary>Houdini: Don't error in collect review if camera is not set correctly <a href="https://github.com/ynput/OpenPype/pull/4874">#4874</a></summary>

Do not raise an error in the collector when an invalid path is set as the camera path. Allow the camera path of a review instance to be incorrect until validation, so it is nicely shown in a validation report.

___

</details>

<details>
<summary>Project packager: Backup and restore can store only database <a href="https://github.com/ynput/OpenPype/pull/4879">#4879</a></summary>

The pack project functionality has an option to zip only the project database without project files, and unpack project can skip the project file copy if the folder is not found. Added helper functions to `openpype.client.mongo` that can also be used in tests as a replacement for mongo dump.

___

</details>

<details>
<summary>Houdini: ExtractOpenGL for Review instance not optional <a href="https://github.com/ynput/OpenPype/pull/4881">#4881</a></summary>

Don't make ExtractOpenGL for the review instance optional.

___

</details>

<details>
<summary>Publisher: Small style changes <a href="https://github.com/ynput/OpenPype/pull/4894">#4894</a></summary>

Small changes in styles and form of the publisher UI.

___

</details>

<details>
<summary>Houdini: Workfile icon in new publisher <a href="https://github.com/ynput/OpenPype/pull/4898">#4898</a></summary>

Fix the icon for the workfile instance in the new publisher.

___

</details>

<details>
<summary>Fusion: Simplify creator icons code <a href="https://github.com/ynput/OpenPype/pull/4899">#4899</a></summary>

Simplify the code for setting the icons of the Fusion creators.

___

</details>

<details>
<summary>Enhancement: Fix PySide 6.5 support for loader <a href="https://github.com/ynput/OpenPype/pull/4900">#4900</a></summary>

Fixes PySide 6.5 support in the Loader.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Validate Attributes <a href="https://github.com/ynput/OpenPype/pull/4917">#4917</a></summary>

This plugin was broken due to bad fetching of data and a wrong repair action.

___

</details>

<details>
<summary>Fix: Locally copied version of last published workfile is not incremented <a href="https://github.com/ynput/OpenPype/pull/4722">#4722</a></summary>

### Fix 1
When copied, the local workfile keeps the published version number, when it must be incremented by one to follow OP's naming convention.

### Fix 2
The local workfile version's name is built from anatomy. This avoids getting workfiles named after their publish template.

### Fix 3
When a subset has at least two tasks with published workfiles, for example `Modeling` and `Rigging`, launching `Rigging` picked the first subset returned by `next` (`workfileModeling`) and tried to match the current `task_name` (`Rigging`) with the `representation["context"]["task"]["name"]` of a Modeling representation, which ended up with `workfile_representation` being `None` and exited the process.

Looking for the `task_name` in the `subset['name']` fixes it.

### Fix 4
Fetch input dependencies of the workfile.

Replaces https://github.com/ynput/OpenPype/pull/4102 for changes to bring this home.
___

</details>

<details>
<summary>Maya: soft-fail when pan/zoom locked on camera when playblasting <a href="https://github.com/ynput/OpenPype/pull/4929">#4929</a></summary>

When the pan/zoom enabled attribute on a camera is locked, playblasting with pan/zoom fails because the plugin tries to restore the attribute. This fixes it by skipping over the locked attribute with a warning.

___

</details>
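For reference, a minimal sketch of that soft-fail pattern. `maya.cmds`, the `lock` query flag and the camera's `panZoomEnabled` attribute are real Maya API; the surrounding plugin code is assumed:

```python
from maya import cmds


def restore_pan_zoom(camera, previous_value):
    """Restore a camera's pan/zoom state; warn instead of failing
    when the attribute is locked (sketch of the fix in #4929)."""
    attr = "{}.panZoomEnabled".format(camera)
    if cmds.getAttr(attr, lock=True):
        # Locked attributes cannot be set; skip with a warning.
        cmds.warning(
            "Cannot restore pan/zoom because {} is locked, "
            "skipping.".format(attr)
        )
        return
    cmds.setAttr(attr, previous_value)
```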
### **Merged pull requests**

<details>
<summary>Maya Load References - Add Display Handle Setting <a href="https://github.com/ynput/OpenPype/pull/4904">#4904</a></summary>

When we load a reference in Maya using the OpenPype loader, "display handle" is checked by default, which prevents us from easily selecting the object in the viewport. Since some productions like to keep this option, this adds display handle to the reference loader settings.

___

</details>

<details>
<summary>Photoshop: add autocreators for review and flat image <a href="https://github.com/ynput/OpenPype/pull/4871">#4871</a></summary>

The review and flattened image (produced when no instance of the `image` family was created) used to be created somewhat magically. This PR introduces two new auto creators which allow artists to disable the review or flattened image. A `Review` flag was added to all `image` instances to allow creating a separate review per `image` instance; previously it was only possible to have a separate instance of the `review` family. Review is not enabled on the `image` family by default (i.e. it follows the original behavior). The review auto creator is enabled by default, as it was before. The flatten image creator must be set in Settings in `project_settings/photoshop/create/AutoImageCreator`.

___

</details>


## [3.15.5](https://github.com/ynput/OpenPype/tree/3.15.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.4...3.15.5)

### **🚀 Enhancements**

<details>
<summary>Maya: Playblast profiles <a href="https://github.com/ynput/OpenPype/pull/4777">#4777</a></summary>

Support playblast profiles. This enables studios to customize what the playblast settings should be on a per-task and/or per-subset basis. For example, `modeling` should have `Wireframe On Shaded` enabled, while all other tasks should have it disabled.

___

</details>
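A sketch of how such per-task profile selection typically looks. `filter_profiles` is OpenPype's generic profile-matching helper; the profile keys and values here are illustrative, not the actual setting names:

```python
from openpype.lib import filter_profiles

# Hypothetical profiles as they could come from project settings.
profiles = [
    {"task_types": ["Modeling"], "capture_preset": {"wireframe_on_shaded": True}},
    {"task_types": [], "capture_preset": {"wireframe_on_shaded": False}},
]

# Pick the profile matching the current task; an empty filter value
# acts as a wildcard that matches anything.
profile = filter_profiles(profiles, {"task_types": "Modeling"})
capture_preset = profile["capture_preset"] if profile else {}
```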
<details>
<summary>Maya: Support .abc files directly for Arnold standin look assignment <a href="https://github.com/ynput/OpenPype/pull/4856">#4856</a></summary>

If an `.abc` file is loaded into an Arnold standin, support look assignment through the `cbId` attributes in the alembic file.

___

</details>

<details>
<summary>Maya: Hide animation instance in creator <a href="https://github.com/ynput/OpenPype/pull/4872">#4872</a></summary>

- Hide the animation instance in the creator
- Add an inventory action to recreate the animation publish instance for loaded rigs

___

</details>

<details>
<summary>Unreal: Render Creator enhancements <a href="https://github.com/ynput/OpenPype/pull/4477">#4477</a></summary>

<strong>Improvements to the creator for the render family.
</strong>This PR introduces some enhancements to the creator for the render family in Unreal Engine:
- Added the option to create a new, empty sequence for the render.
- Added the option to not include the whole hierarchy for the selected sequence.
- Improved the error messages.

___

</details>

<details>
<summary>Unreal: Added settings for rendering <a href="https://github.com/ynput/OpenPype/pull/4575">#4575</a></summary>

<strong>Added settings for rendering in Unreal Engine.
</strong>Two settings have been added:
- Pre-roll frames, to set how many frames are used to load the scene before the actual rendering starts.
- Configuration path, to allow saving a preset of settings from Unreal and using it for rendering.

___

</details>

<details>
<summary>Global: Optimize anatomy formatting by only formatting used templates instead <a href="https://github.com/ynput/OpenPype/pull/4784">#4784</a></summary>

Optimization to not format the full anatomy when only a single template is used; instead, format only that template.

___

</details>
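The idea in miniature, in plain Python rather than the actual Anatomy API: resolve only the template that is needed instead of formatting the whole template set up front.

```python
# Hypothetical template set; only the requested template gets formatted.
templates = {
    "work": "{root}/{project}/{asset}/work/{task}",
    "publish": "{root}/{project}/{asset}/publish/{subset}/v{version:0>3}",
}
data = {
    "root": "/mnt/projects",
    "project": "MyProject",
    "asset": "sh010",
    "subset": "modelMain",
    "version": 4,
}

# Before: every template was formatted; after: just the one that is used.
publish_path = templates["publish"].format(**data)
```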
<details>
<summary>Patchelf version locked <a href="https://github.com/ynput/OpenPype/pull/4853">#4853</a></summary>

For the CentOS dockerfile it is necessary to lock the patchelf version to an older one, otherwise the build process fails.

___

</details>

<details>
<summary>Houdini: Implement `switch` method on loaders <a href="https://github.com/ynput/OpenPype/pull/4866">#4866</a></summary>

Implement the `switch` method on loaders.

___

</details>

<details>
<summary>Code: Tweak docstrings and return type hints <a href="https://github.com/ynput/OpenPype/pull/4875">#4875</a></summary>

Tweak docstrings and return type hints for functions in `openpype.client.entities`.

___

</details>

<details>
<summary>Publisher: Clear comment on successful publish and on window close <a href="https://github.com/ynput/OpenPype/pull/4885">#4885</a></summary>

Clear the comment text field on successful publish and on window close.

___

</details>

<details>
<summary>Publisher: Make sure to reset asset widget when hidden and reshown <a href="https://github.com/ynput/OpenPype/pull/4886">#4886</a></summary>

Make sure to reset the asset widget when it is hidden and reshown. Without this, the asset list in the "set asset" widget would never refresh when changing context on an existing instance, and thus would not show assets created after the first launch of that widget.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Fix nested model instances. <a href="https://github.com/ynput/OpenPype/pull/4852">#4852</a></summary>

Fix nested model instances under a review instance, where data collection was not including "Display Lights" and "Focal Length".

___

</details>

<details>
<summary>Maya: Make default namespace naming backwards compatible <a href="https://github.com/ynput/OpenPype/pull/4873">#4873</a></summary>

Namespaces of loaded references are now _by default_ back to what they were before #4511.

___

</details>

<details>
<summary>Nuke: Legacy convertor skips deprecation warnings <a href="https://github.com/ynput/OpenPype/pull/4846">#4846</a></summary>

The Nuke legacy convertor was triggering a deprecated function, which caused a lot of log output and slowed down the whole process. Changed the convertor to skip all nodes without `AVALON_TAB` to avoid the warnings.

___

</details>

<details>
<summary>3dsmax: move startup script logic to hook <a href="https://github.com/ynput/OpenPype/pull/4849">#4849</a></summary>

The startup script for OpenPype was interfering with the Open Last Workfile feature. Moving this logic from a simple command line argument in the Settings to a pre-launch hook fixes the order of command line arguments and makes both features work.

___

</details>

<details>
<summary>Maya: Don't change time slider ranges in `get_frame_range` <a href="https://github.com/ynput/OpenPype/pull/4858">#4858</a></summary>

Don't change the time slider ranges in `get_frame_range`.

___

</details>

<details>
<summary>Maya: Looks - calculate hash for tx texture <a href="https://github.com/ynput/OpenPype/pull/4878">#4878</a></summary>

A texture hash is calculated for textures used in a published look and is used as a key in a dictionary. After recent changes, this hash was not calculated for TX files, resulting in a `None` key in the dictionary and crashing publishing. This PR adds the texture hash for TX files to solve that issue.

___

</details>

<details>
<summary>Houdini: Collect `currentFile` context data separate from workfile instance <a href="https://github.com/ynput/OpenPype/pull/4883">#4883</a></summary>

Fix publishing without an active workfile instance, which failed due to missing `currentFile` data. `currentFile` is now collected into the context in Houdini through a context plugin, regardless of the active instances.

___

</details>

<details>
<summary>Nuke: fixed broken slate workflow once published on deadline <a href="https://github.com/ynput/OpenPype/pull/4887">#4887</a></summary>

The slate workflow is now working as expected, and Validate Sequence Frames no longer raises once the slate frame is included.

___

</details>

<details>
<summary>Add fps as instance.data in collect review in Houdini. <a href="https://github.com/ynput/OpenPype/pull/4888">#4888</a></summary>

Fixes the bug of failing to publish Extract Review in Houdini. Original error:

```python
  File "OpenPype\build\exe.win-amd64-3.9\openpype\plugins\publish\extract_review.py", line 516, in prepare_temp_data
    "fps": float(instance.data["fps"]),
KeyError: 'fps'
```

___

</details>
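A minimal sketch of such a collector. `pyblish.api.InstancePlugin` and Houdini's `hou.fps()` are real APIs; the plugin name and targeting are assumptions:

```python
import pyblish.api


class CollectReviewFps(pyblish.api.InstancePlugin):
    """Hypothetical collector storing fps on review instances."""

    order = pyblish.api.CollectorOrder
    hosts = ["houdini"]
    families = ["review"]

    def process(self, instance):
        import hou  # only importable inside Houdini

        # ExtractReview expects instance.data["fps"] (see traceback above).
        instance.data["fps"] = float(hou.fps())
```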
<details>
<summary>TrayPublisher: Fill missing data for instances with review <a href="https://github.com/ynput/OpenPype/pull/4891">#4891</a></summary>

Fill the required data into a traypublisher instance if the instance has the review family. The data are required by ExtractReview, and it would be complicated to do a proper fix at this moment! The collector does for review instances what https://github.com/ynput/OpenPype/pull/4383 did.

___

</details>

<details>
<summary>Publisher: Keep track about current context and fix context selection widget <a href="https://github.com/ynput/OpenPype/pull/4892">#4892</a></summary>

Change the selected context to the current context on reset. Fix a bug when the context widget is re-enabled.

___

</details>

<details>
<summary>Scene inventory: Model refresh fix with cherry picking <a href="https://github.com/ynput/OpenPype/pull/4895">#4895</a></summary>

Fix a cherry-pick issue in the scene inventory.

___

</details>

<details>
<summary>Nuke: Pre-render and missing review flag on instance causing crash <a href="https://github.com/ynput/OpenPype/pull/4897">#4897</a></summary>

If an instance created in Nuke was missing the `review` flag, the collector crashed.

___

</details>

### **Merged pull requests**

<details>
<summary>After Effects: fix handles KeyError <a href="https://github.com/ynput/OpenPype/pull/4727">#4727</a></summary>

Sometimes when publishing with AE (we only saw this error on AE 2023), we got a KeyError for the handles in the "Collect Workfile" step. So the handles are now taken from the context if there are no handles in the asset entity.

___

</details>
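A rough sketch of that fallback, assuming the usual OpenPype entity layout on the pyblish context; the exact keys used by the Collect Workfile plugin are not shown in this diff:

```python
# Hypothetical fallback: prefer handles from the instance's asset entity,
# fall back to the context's asset data when the keys are missing.
asset_data = instance.data.get("assetEntity", {}).get("data", {})
context_data = instance.context.data["assetEntity"]["data"]

handle_start = asset_data.get("handleStart", context_data.get("handleStart", 0))
handle_end = asset_data.get("handleEnd", context_data.get("handleEnd", 0))
```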


## [3.15.4](https://github.com/ynput/OpenPype/tree/3.15.4)

@@ -415,11 +415,12 @@ def repack_version(directory):
 @main.command()
 @click.option("--project", help="Project name")
 @click.option(
-    "--dirpath", help="Directory where package is stored", default=None
-)
-def pack_project(project, dirpath):
+    "--dirpath", help="Directory where package is stored", default=None)
+@click.option(
+    "--dbonly", help="Store only Database data", default=False, is_flag=True)
+def pack_project(project, dirpath, dbonly):
     """Create a package of project with all files and database dump."""
-    PypeCommands().pack_project(project, dirpath)
+    PypeCommands().pack_project(project, dirpath, dbonly)
 
 
 @main.command()

@@ -427,9 +428,11 @@ def pack_project(project, dirpath):
 @click.option(
-    "--root", help="Replace root which was stored in project", default=None
-)
-def unpack_project(zipfile, root):
+    "--root", help="Replace root which was stored in project", default=None)
+@click.option(
+    "--dbonly", help="Store only Database data", default=False, is_flag=True)
+def unpack_project(zipfile, root, dbonly):
     """Create a package of project with all files and database dump."""
-    PypeCommands().unpack_project(zipfile, root)
+    PypeCommands().unpack_project(zipfile, root, dbonly)
 
 
 @main.command()
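The new flag reaches `PypeCommands` unchanged, so a database-only package can also be produced programmatically. A sketch; the `PypeCommands` import path is assumed, while the call signature is taken from the diff above:

```python
# Equivalent of: openpype_console pack_project --project MyProject --dbonly
from openpype.pype_commands import PypeCommands  # import path assumed

PypeCommands().pack_project("MyProject", dirpath=None, dbonly=True)
```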

@@ -69,6 +69,19 @@ def convert_ids(in_ids):
 
 
 def get_projects(active=True, inactive=False, fields=None):
+    """Yield all project entity documents.
+
+    Args:
+        active (Optional[bool]): Include active projects. Defaults to True.
+        inactive (Optional[bool]): Include inactive projects.
+            Defaults to False.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Yields:
+        dict: Project entity data which can be reduced to specified 'fields'.
+            None is returned if project with specified filters was not found.
+    """
     mongodb = get_project_database()
     for project_name in mongodb.collection_names():
         if project_name in ("system.indexes",):

@@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):
 
 
 def get_project(project_name, active=True, inactive=True, fields=None):
+    """Return project entity document by project name.
+
+    Args:
+        project_name (str): Name of project.
+        active (Optional[bool]): Allow active project. Defaults to True.
+        inactive (Optional[bool]): Allow inactive project. Defaults to True.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        Union[Dict, None]: Project entity data which can be reduced to
+            specified 'fields'. None is returned if project with specified
+            filters was not found.
+    """
     # Skip if both are disabled
     if not active and not inactive:
         return None
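The `fields` parameter documented above reduces what the query returns. A usage sketch against these functions; it assumes a configured OpenPype mongo connection, and the project and field names are illustrative:

```python
from openpype.client.entities import get_project, get_projects

# Only fetch the fields that are needed; passing None returns whole documents.
for project in get_projects(fields=["name", "data.code"]):
    print(project["name"])

project = get_project("MyProject", fields=["name", "config.tasks"])
```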

@@ -124,17 +151,18 @@ def get_whole_project(project_name):
 
 
 def get_asset_by_id(project_name, asset_id, fields=None):
-    """Receive asset data by it's id.
+    """Receive asset data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         asset_id (Union[str, ObjectId]): Asset's id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        dict: Asset entity data.
-        None: Asset was not found by id.
+        Union[Dict, None]: Asset entity data which can be reduced to
+            specified 'fields'. None is returned if asset with specified
+            filters was not found.
     """
 
     asset_id = convert_id(asset_id)

@@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
 
 
 def get_asset_by_name(project_name, asset_name, fields=None):
-    """Receive asset data by it's name.
+    """Receive asset data by its name.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         asset_name (str): Asset's name.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        dict: Asset entity data.
-        None: Asset was not found by name.
+        Union[Dict, None]: Asset entity data which can be reduced to
+            specified 'fields'. None is returned if asset with specified
+            filters was not found.
     """
 
     if not asset_name:

@@ -195,8 +224,8 @@ def _get_assets(
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
         standard (bool): Query standard assets (type 'asset').
         archived (bool): Query archived assets (type 'archived_asset').
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -261,8 +290,8 @@ def get_assets(
         asset_names (Iterable[str]): Name assets that should be found.
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
         archived (bool): Add also archived assets.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -300,8 +329,8 @@ def get_archived_assets(
             be found.
         asset_names (Iterable[str]): Name assets that should be found.
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
 
 
 def get_subset_by_id(project_name, subset_id, fields=None):
-    """Single subset entity data by it's id.
+    """Single subset entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Id of subset which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If subset with specified filters was not found.
-        Dict: Subset document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Subset entity data which can be reduced to
+            specified 'fields'. None is returned if subset with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
 
 
 def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
-    """Single subset entity data by it's name and it's version id.
+    """Single subset entity data by its name and its version id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_name (str): Name of subset.
         asset_id (Union[str, ObjectId]): Id of parent asset.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        Union[None, Dict[str, Any]]: None if subset with specified filters was
-            not found or dict subset document which can be reduced to
-            specified 'fields'.
-
+        Union[Dict, None]: Subset entity data which can be reduced to
+            specified 'fields'. None is returned if subset with specified
+            filters was not found.
     """
     if not subset_name:
         return None

@@ -434,8 +463,8 @@ def get_subsets(
         names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
             using asset ids and list of subset names under the asset.
         archived (bool): Look for archived subsets too.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching subsets.

@@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
 
 
 def get_version_by_id(project_name, version_id, fields=None):
-    """Single version entity data by it's id.
+    """Single version entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
 
 
 def get_version_by_name(project_name, version, subset_id, fields=None):
-    """Single version entity data by it's name and subset id.
+    """Single version entity data by its name and subset id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
-        version (int): name of version entity (it's version).
+        version (int): name of version entity (its version).
         subset_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)
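These queries are typically chained. A sketch resolving a representation; all function names are from this module, the entity names are illustrative:

```python
from openpype.client.entities import (
    get_asset_by_name,
    get_subset_by_name,
    get_last_version_by_subset_id,
    get_representation_by_name,
)

# Asset -> subset -> last version -> representation.
asset = get_asset_by_name("MyProject", "sh010", fields=["_id"])
subset = get_subset_by_name("MyProject", "modelMain", asset["_id"])
version = get_last_version_by_subset_id("MyProject", subset["_id"])
representation = get_representation_by_name("MyProject", "abc", version["_id"])
```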
@@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
 
 
 def version_is_latest(project_name, version_id):
-    """Is version the latest from it's subset.
+    """Is version the latest from its subset.
 
     Note:
         Hero versions are considered as latest.

@@ -680,8 +711,8 @@ def get_versions(
         versions (Iterable[int]): Version names (as integers).
             Filter ignored if 'None' is passed.
         hero (bool): Look also for hero versions.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching versions.

@@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Subset id under which
             is hero version.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If hero version for passed subset id does not exists.
-        Dict: Hero version entity data.
+        Union[Dict, None]: Hero version entity data which can be reduced to
+            specified 'fields'. None is returned if hero version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
 
 
 def get_hero_version_by_id(project_name, version_id, fields=None):
-    """Hero version by it's id.
+    """Hero version by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Hero version id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If hero version with passed id was not found.
-        Dict: Hero version entity data.
+        Union[Dict, None]: Hero version entity data which can be reduced to
+            specified 'fields'. None is returned if hero version with specified
+            filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -773,8 +806,8 @@ def get_hero_versions(
             should look for hero versions. Filter ignored if 'None' is passed.
         version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
             ignored if 'None' is passed.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor|list: Iterable yielding hero versions matching passed filters.

@@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Version id which can be used
             as input link for other versions.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Iterable: Iterable cursor yielding versions that are used as input

@@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         dict[ObjectId, int]: Key is subset id and value is last version name.

@@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
         asset_id (Union[str, ObjectId]): Asset id which is parent of passed
            subset name.
         asset_name (str): Asset name which is parent of passed subset name.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     if not asset_id and not asset_name:

@@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
 
 
 def get_representation_by_id(project_name, representation_id, fields=None):
-    """Representation entity data by it's id.
+    """Representation entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         representation_id (Union[str, ObjectId]): Representation id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If representation with specified filters was not found.
-        Dict: Representation entity data which can be reduced
-            to specified 'fields'.
+        Union[Dict, None]: Representation entity data which can be reduced to
+            specified 'fields'. None is returned if representation with
+            specified filters was not found.
     """
 
     if not representation_id:

@@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
 def get_representation_by_name(
     project_name, representation_name, version_id, fields=None
 ):
-    """Representation entity data by it's name and it's version id.
+    """Representation entity data by its name and its version id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         representation_name (str): Representation name.
         version_id (Union[str, ObjectId]): Id of parent version entity.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If representation with specified filters was not found.
-        Dict: Representation entity data which can be reduced
-            to specified 'fields'.
+        Union[dict[str, Any], None]: Representation entity data which can be
+            reduced to specified 'fields'. None is returned if representation
+            with specified filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -1202,8 +1237,8 @@ def get_representations(
         names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
             using version ids and list of names under the version.
         archived (bool): Output will also contain archived representations.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching representations.

@@ -1247,8 +1282,8 @@ def get_archived_representations(
             representation context fields.
         names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
            using version ids and list of names under the version.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching representations.

@@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
         src_id (Union[str, ObjectId]): Id of source entity.
 
     Returns:
-        ObjectId: Thumbnail id assigned to entity.
-        None: If Source entity does not have any thumbnail id assigned.
+        Union[ObjectId, None]: Thumbnail id assigned to entity. If Source
+            entity does not have any thumbnail id assigned.
     """
 
     if not src_type or not src_id:

@@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
     """Receive thumbnails entity data.
 
     Thumbnail entity can be used to receive binary content of thumbnail based
-    on it's content and ThumbnailResolvers.
+    on its content and ThumbnailResolvers.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
             entities.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         cursor: Cursor of queried documents.

@@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If thumbnail with specified id was not found.
-        Dict: Thumbnail entity data which can be reduced to specified 'fields'.
+        Union[Dict, None]: Thumbnail entity data which can be reduced to
+            specified 'fields'. None is returned if thumbnail with specified
+            filters was not found.
     """
 
     if not thumbnail_id:

@@ -1458,8 +1494,13 @@ def get_workfile_info(
         project_name (str): Name of project where to look for queried entities.
         asset_id (Union[str, ObjectId]): Id of asset entity.
         task_name (str): Task name on asset.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
+        Union[Dict, None]: Workfile entity data which can be reduced to
+            specified 'fields'. None is returned if workfile with specified
+            filters was not found.
     """
 
     if not asset_id or not task_name or not filename:
@@ -5,6 +5,12 @@ import logging
 import pymongo
 import certifi
 
+from bson.json_util import (
+    loads,
+    dumps,
+    CANONICAL_JSON_OPTIONS
+)
+
 if sys.version_info[0] == 2:
     from urlparse import urlparse, parse_qs
 else:

@@ -15,6 +21,49 @@ class MongoEnvNotSet(Exception):
     pass
 
 
+def documents_to_json(docs):
+    """Convert documents to json string.
+
+    Args:
+        Union[list[dict[str, Any]], dict[str, Any]]: Document/s to convert to
+            json string.
+
+    Returns:
+        str: Json string with mongo documents.
+    """
+
+    return dumps(docs, json_options=CANONICAL_JSON_OPTIONS)
+
+
+def load_json_file(filepath):
+    """Load mongo documents from a json file.
+
+    Args:
+        filepath (str): Path to a json file.
+
+    Returns:
+        Union[dict[str, Any], list[dict[str, Any]]]: Loaded content from a
+            json file.
+    """
+
+    if not os.path.exists(filepath):
+        raise ValueError("Path {} was not found".format(filepath))
+
+    with open(filepath, "r") as stream:
+        content = stream.read()
+    return loads("".join(content))
+
+
+def get_project_database_name():
+    """Name of database name where projects are available.
+
+    Returns:
+        str: Name of database name where projects are.
+    """
+
+    return os.environ.get("AVALON_DB") or "avalon"
+
+
 def _decompose_url(url):
     """Decompose mongo url to basic components.
 
@@ -210,12 +259,102 @@ class OpenPypeMongoConnection:
         return mongo_client
 
 
-def get_project_database():
-    db_name = os.environ.get("AVALON_DB") or "avalon"
-    return OpenPypeMongoConnection.get_mongo_client()[db_name]
-
-
-def get_project_connection(project_name):
+# ------ Helper Mongo functions ------
+# Functions can be helpful with custom tools to backup/restore mongo state.
+# Not meant as API functionality that should be used in production codebase!
+def get_collection_documents(database_name, collection_name, as_json=False):
+    """Query all documents from a collection.
+
+    Args:
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where to look for collection.
+        as_json (Optional[bool]): Output should be a json string.
+            Default: 'False'
+
+    Returns:
+        Union[list[dict[str, Any]], str]: Queried documents.
+    """
+
+    client = OpenPypeMongoConnection.get_mongo_client()
+    output = list(client[database_name][collection_name].find({}))
+    if as_json:
+        output = documents_to_json(output)
+    return output
+
+
+def store_collection(filepath, database_name, collection_name):
+    """Store collection documents to a json file.
+
+    Args:
+        filepath (str): Path to a json file where documents will be stored.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection to store.
+    """
+
+    # Make sure directory for output file exists
+    dirpath = os.path.dirname(filepath)
+    if not os.path.isdir(dirpath):
+        os.makedirs(dirpath)
+
+    content = get_collection_documents(database_name, collection_name, True)
+    with open(filepath, "w") as stream:
+        stream.write(content)
+
+
+def replace_collection_documents(docs, database_name, collection_name):
+    """Replace all documents in a collection with passed documents.
+
+    Warnings:
+        All existing documents in collection will be removed if there are any.
+
+    Args:
+        docs (list[dict[str, Any]]): New documents.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where new documents are
+            uploaded.
+    """
+
+    client = OpenPypeMongoConnection.get_mongo_client()
+    database = client[database_name]
+    if collection_name in database.list_collection_names():
+        database.drop_collection(collection_name)
+    col = database[collection_name]
+    col.insert_many(docs)
+
+
+def restore_collection(filepath, database_name, collection_name):
+    """Restore/replace collection from a json filepath.
+
+    Warnings:
+        All existing documents in collection will be removed if there are any.
+
+    Args:
+        filepath (str): Path to a json with documents.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where new documents are
+            uploaded.
+    """
+
+    docs = load_json_file(filepath)
+    replace_collection_documents(docs, database_name, collection_name)
+
+
+def get_project_database(database_name=None):
+    """Database object where project collections are.
+
+    Args:
+        database_name (Optional[str]): Custom name of database.
+
+    Returns:
+        pymongo.database.Database: Collection related to passed project.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    return OpenPypeMongoConnection.get_mongo_client()[database_name]
+
+
+def get_project_connection(project_name, database_name=None):
     """Direct access to mongo collection.
 
     We're trying to avoid using direct access to mongo. This should be used
@@ -223,13 +362,83 @@ def get_project_connection(project_name):
     api calls for that.
 
     Args:
-        project_name(str): Project name for which collection should be
+        project_name (str): Project name for which collection should be
             returned.
+        database_name (Optional[str]): Custom name of database.
 
     Returns:
-        pymongo.Collection: Collection realated to passed project.
+        pymongo.collection.Collection: Collection related to passed project.
     """
 
     if not project_name:
         raise ValueError("Invalid project name {}".format(str(project_name)))
-    return get_project_database()[project_name]
+    return get_project_database(database_name)[project_name]
+
+
+def get_project_documents(project_name, database_name=None):
+    """Query all documents from project collection.
+
+    Args:
+        project_name (str): Name of project.
+        database_name (Optional[str]): Name of mongo database where to look for
+            project.
+
+    Returns:
+        list[dict[str, Any]]: Documents in project collection.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    return get_collection_documents(database_name, project_name)
+
+
+def store_project_documents(project_name, filepath, database_name=None):
+    """Store project documents to a file as json string.
+
+    Args:
+        project_name (str): Name of project to store.
+        filepath (str): Path to a json file where output will be stored.
+        database_name (Optional[str]): Name of mongo database where to look for
+            project.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+
+    store_collection(filepath, database_name, project_name)
+
+
+def replace_project_documents(project_name, docs, database_name=None):
+    """Replace documents in mongo with passed documents.
+
+    Warnings:
+        Existing project collection is removed if exists in mongo.
+
+    Args:
+        project_name (str): Name of project.
+        docs (list[dict[str, Any]]): Documents to restore.
+        database_name (Optional[str]): Name of mongo database where project
+            collection will be created.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    replace_collection_documents(docs, database_name, project_name)
+
+
+def restore_project_documents(project_name, filepath, database_name=None):
+    """Replace documents in mongo with passed documents.
+
+    Warnings:
+        Existing project collection is removed if exists in mongo.
+
+    Args:
+        project_name (str): Name of project.
+        filepath (str): File to json file with project documents.
+        database_name (Optional[str]): Name of mongo database where project
+            collection will be created.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    restore_collection(filepath, database_name, project_name)
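A usage sketch of the backup/restore helpers above; it assumes a configured OpenPype mongo connection, and the path and project name are illustrative. The function names and signatures are taken from the diff:

```python
from openpype.client.mongo import (
    store_project_documents,
    restore_project_documents,
)

backup_path = "/tmp/backups/MyProject.json"

# Dump the project's collection to a json file...
store_project_documents("MyProject", backup_path)

# ...and later replace the collection content from that file.
# Note: this drops the existing project collection first.
restore_project_documents("MyProject", backup_path)
```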

@@ -25,6 +25,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
         "blender",
         "photoshop",
         "tvpaint",
+        "substancepainter",
         "aftereffects"
     ]
 
|
|||
openpype/hooks/pre_host_set_ocio.py (new file, 37 lines)
@ -0,0 +1,37 @@
from openpype.lib import PreLaunchHook

from openpype.pipeline.colorspace import get_imageio_config
from openpype.pipeline.template_data import get_template_data


class PreLaunchHostSetOCIO(PreLaunchHook):
    """Set OCIO environment for the host"""

    order = 0
    app_groups = ["substancepainter"]

    def execute(self):
        """Hook entry method."""

        anatomy_data = get_template_data(
            project_doc=self.data["project_doc"],
            asset_doc=self.data["asset_doc"],
            task_name=self.data["task_name"],
            host_name=self.host_name,
            system_settings=self.data["system_settings"]
        )

        ocio_config = get_imageio_config(
            project_name=self.data["project_doc"]["name"],
            host_name=self.host_name,
            project_settings=self.data["project_settings"],
            anatomy_data=anatomy_data,
            anatomy=self.data["anatomy"]
        )

        if ocio_config:
            ocio_path = ocio_config["path"]
            self.log.info(f"Setting OCIO config path: {ocio_path}")
            self.launch_context.env["OCIO"] = ocio_path
        else:
            self.log.debug("OCIO not set or enabled")
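Any OCIO-aware application launched with this environment then picks the
config up from the standard OCIO variable; a minimal sketch of the
consumer side (the value itself comes from the hook above):

    import os

    ocio_path = os.environ.get("OCIO")
    if ocio_path:
        print(f"Using OCIO config: {ocio_path}")
    else:
        print("OCIO unset; host falls back to its own color management")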
@ -26,12 +26,9 @@ class RenderCreator(Creator):

    create_allow_context_change = True

    def __init__(self, project_settings, *args, **kwargs):
        super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
        self._default_variants = (project_settings["aftereffects"]
                                  ["create"]
                                  ["RenderCreator"]
                                  ["defaults"])
    # Settings
    default_variants = []
    mark_for_review = True

    def create(self, subset_name_from_ui, data, pre_create_data):
        stub = api.get_stub()  # only after After Effects is up
@ -82,28 +79,40 @@ class RenderCreator(Creator):
        use_farm = pre_create_data["farm"]
        new_instance.creator_attributes["farm"] = use_farm

        review = pre_create_data["mark_for_review"]
        new_instance.creator_attributes["mark_for_review"] = review

        api.get_stub().imprint(new_instance.id,
                               new_instance.data_to_store())
        self._add_instance_to_context(new_instance)

        stub.rename_item(comp.id, subset_name)

    def get_default_variants(self):
        return self._default_variants

    def get_instance_attr_defs(self):
        return [BoolDef("farm", label="Render on farm")]

    def get_pre_create_attr_defs(self):
        output = [
            BoolDef("use_selection", default=True, label="Use selection"),
            BoolDef("use_composition_name",
                    label="Use composition name in subset"),
            UISeparatorDef(),
            BoolDef("farm", label="Render on farm")
            BoolDef("farm", label="Render on farm"),
            BoolDef(
                "mark_for_review",
                label="Review",
                default=self.mark_for_review
            )
        ]
        return output

    def get_instance_attr_defs(self):
        return [
            BoolDef("farm", label="Render on farm"),
            BoolDef(
                "mark_for_review",
                label="Review",
                default=False
            )
        ]

    def get_icon(self):
        return resources.get_openpype_splash_filepath()
@ -143,6 +152,13 @@ class RenderCreator(Creator):
        api.get_stub().rename_item(comp_id,
                                   new_comp_name)

    def apply_settings(self, project_settings, system_settings):
        plugin_settings = (
            project_settings["aftereffects"]["create"]["RenderCreator"]
        )

        self.mark_for_review = plugin_settings["mark_for_review"]

    def get_detail_description(self):
        return """Creator for Render instances
@ -201,4 +217,7 @@ class RenderCreator(Creator):
        instance_data["creator_attributes"] = {"farm": is_old_farm}
        instance_data["family"] = self.family

        if instance_data["creator_attributes"].get("mark_for_review") is None:
            instance_data["creator_attributes"]["mark_for_review"] = True

        return instance_data
@ -88,10 +88,11 @@ class CollectAERender(publish.AbstractCollectRender):
            raise ValueError("No file extension set in Render Queue")
        render_item = render_q[0]

        instance_families = inst.data.get("families", [])
        subset_name = inst.data["subset"]
        instance = AERenderInstance(
            family="render",
            families=inst.data.get("families", []),
            families=instance_families,
            version=version,
            time="",
            source=current_file,

@ -109,6 +110,7 @@ class CollectAERender(publish.AbstractCollectRender):
            tileRendering=False,
            tilesX=0,
            tilesY=0,
            review="review" in instance_families,
            frameStart=frame_start,
            frameEnd=frame_end,
            frameStep=1,
@ -139,6 +141,9 @@ class CollectAERender(publish.AbstractCollectRender):
                instance.toBeRenderedOn = "deadline"
                instance.renderer = "aerender"
                instance.farm = True  # to skip integrate
                if "review" in instance.families:
                    # to skip ExtractReview locally
                    instance.families.remove("review")

            instances.append(instance)
            instances_to_remove.append(inst)
@ -218,15 +223,4 @@ class CollectAERender(publish.AbstractCollectRender):
            if fam not in instance.families:
                instance.families.append(fam)

        settings = get_project_settings(os.getenv("AVALON_PROJECT"))
        reviewable_subset_filter = (settings["deadline"]
                                    ["publish"]
                                    ["ProcessSubmittedJobOnFarm"]
                                    ["aov_filter"].get(self.hosts[0]))
        for aov_pattern in reviewable_subset_filter:
            if re.match(aov_pattern, instance.subset):
                instance.families.append("review")
                instance.review = True
                break

        return instance
@ -0,0 +1,25 @@
"""
Requires:
    None

Provides:
    instance -> family ("review")
"""
import pyblish.api


class CollectReview(pyblish.api.ContextPlugin):
    """Add review to families if instance created with 'mark_for_review' flag
    """
    label = "Collect Review"
    hosts = ["aftereffects"]
    order = pyblish.api.CollectorOrder + 0.1

    def process(self, context):
        for instance in context:
            creator_attributes = instance.data.get("creator_attributes") or {}
            if (
                creator_attributes.get("mark_for_review")
                and "review" not in instance.data["families"]
            ):
                instance.data["families"].append("review")
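The collector's effect is easiest to see on bare instance data; a quick
plain-Python illustration of the same check (values illustrative):

    instance_data = {
        "families": ["render"],
        "creator_attributes": {"mark_for_review": True},
    }

    creator_attributes = instance_data.get("creator_attributes") or {}
    if (
        creator_attributes.get("mark_for_review")
        and "review" not in instance_data["families"]
    ):
        instance_data["families"].append("review")

    print(instance_data["families"])  # ['render', 'review']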
@ -66,33 +66,9 @@ class ExtractLocalRender(publish.Extractor):
            first_repre = not representations
            if instance.data["review"] and first_repre:
                repre_data["tags"] = ["review"]
                thumbnail_path = os.path.join(staging_dir, files[0])
                instance.data["thumbnailSource"] = thumbnail_path

            representations.append(repre_data)

        instance.data["representations"] = representations

        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
        # Generate thumbnail.
        thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")

        args = [
            ffmpeg_path, "-y",
            "-i", first_file_path,
            "-vf", "scale=300:-1",
            "-vframes", "1",
            thumbnail_path
        ]
        self.log.debug("Thumbnail args:: {}".format(args))
        try:
            output = run_subprocess(args)
        except TypeError:
            self.log.warning("Error in creating thumbnail")
            six.reraise(*sys.exc_info())

        instance.data["representations"].append({
            "name": "thumbnail",
            "ext": "jpg",
            "files": os.path.basename(thumbnail_path),
            "stagingDir": staging_dir,
            "tags": ["thumbnail"]
        })
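The removed block is the usual one-frame ffmpeg thumbnail pattern; as a
standalone, hedged sketch with plain subprocess instead of OpenPype's
run_subprocess (paths illustrative, ffmpeg assumed on PATH):

    import subprocess

    source = "/tmp/render.0001.png"
    thumbnail = "/tmp/thumbnail.jpg"

    args = [
        "ffmpeg", "-y",
        "-i", source,
        "-vf", "scale=300:-1",  # 300 px wide, keep aspect ratio
        "-vframes", "1",        # a single frame is enough
        thumbnail,
    ]
    subprocess.run(args, check=True)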
@ -1,7 +1,5 @@
import os

import qtawesome

from openpype.hosts.fusion.api import (
    get_current_comp,
    comp_lock_and_undo_chunk,

@ -28,6 +26,7 @@ class CreateSaver(Creator):
    family = "render"
    default_variants = ["Main", "Mask"]
    description = "Fusion Saver to generate image sequence"
    icon = "fa5.eye"

    instance_attributes = ["reviewable"]
@ -89,9 +88,6 @@ class CreateSaver(Creator):

        self._add_instance_to_context(created_instance)

    def get_icon(self):
        return qtawesome.icon("fa.eye", color="white")

    def update_instances(self, update_list):
        for created_inst, _changes in update_list:
            new_data = created_inst.data_to_store()
@ -1,5 +1,3 @@
import qtawesome

from openpype.hosts.fusion.api import (
    get_current_comp
)

@ -15,6 +13,7 @@ class FusionWorkfileCreator(AutoCreator):
    identifier = "workfile"
    family = "workfile"
    label = "Workfile"
    icon = "fa5.file"

    default_variant = "Main"

@ -104,6 +103,3 @@ class FusionWorkfileCreator(AutoCreator):
        existing_instance["asset"] = asset_name
        existing_instance["task"] = task_name
        existing_instance["subset"] = subset_name

    def get_icon(self):
        return qtawesome.icon("fa.file-o", color="white")
@ -1,29 +1,39 @@
import pyblish.api

from openpype.pipeline import OptionalPyblishPluginMixin
from openpype.pipeline import KnownPublishError


class FusionIncrementCurrentFile(pyblish.api.ContextPlugin):

class FusionIncrementCurrentFile(
    pyblish.api.ContextPlugin, OptionalPyblishPluginMixin
):
    """Increment the current file.

    Saves the current file with an increased version number.

    """

    label = "Increment current file"
    label = "Increment workfile version"
    order = pyblish.api.IntegratorOrder + 9.0
    hosts = ["fusion"]
    families = ["workfile"]
    optional = True

    def process(self, context):
        if not self.is_active(context.data):
            return

        from openpype.lib import version_up
        from openpype.pipeline.publish import get_errored_plugins_from_context

        errored_plugins = get_errored_plugins_from_context(context)
        if any(plugin.__name__ == "FusionSubmitDeadline"
               for plugin in errored_plugins):
            raise RuntimeError("Skipping incrementing current file because "
                               "submission to render farm failed.")
        if any(
            plugin.__name__ == "FusionSubmitDeadline"
            for plugin in errored_plugins
        ):
            raise KnownPublishError(
                "Skipping incrementing current file because "
                "submission to render farm failed."
            )

        comp = context.data.get("currentComp")
        assert comp, "Must have comp"
@ -1,12 +1,17 @@
import pyblish.api

from openpype.pipeline.publish import RepairAction
from openpype.pipeline import PublishValidationError
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin,
    PublishValidationError,
)

from openpype.hosts.fusion.api.action import SelectInvalidAction


class ValidateBackgroundDepth(pyblish.api.InstancePlugin):
class ValidateBackgroundDepth(
    pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
):
    """Validate if all Background tool are set to float32 bit"""

    order = pyblish.api.ValidatorOrder

@ -15,11 +20,10 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin):
    families = ["render"]
    optional = True

    actions = [SelectInvalidAction, RepairAction]
    actions = [SelectInvalidAction, publish.RepairAction]

    @classmethod
    def get_invalid(cls, instance):

        context = instance.context
        comp = context.data.get("currentComp")
        assert comp, "Must have Comp object"

@ -31,12 +35,16 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin):
        return [i for i in backgrounds if i.GetInput("Depth") != 4.0]

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                "Found {} Backgrounds tools which"
                " are not set to float32".format(len(invalid)),
                title=self.label)
                title=self.label,
            )

    @classmethod
    def repair(cls, instance):
openpype/hosts/houdini/api/action.py (new file, 46 lines)
@ -0,0 +1,46 @@
import pyblish.api
import hou

from openpype.pipeline.publish import get_errored_instances_from_context


class SelectInvalidAction(pyblish.api.Action):
    """Select invalid nodes in Maya when plug-in failed.

    To retrieve the invalid nodes this assumes a static `get_invalid()`
    method is available on the plugin.

    """
    label = "Select invalid"
    on = "failed"  # This action is only available on a failed plug-in
    icon = "search"  # Icon from Awesome Icon

    def process(self, context, plugin):

        errored_instances = get_errored_instances_from_context(context)

        # Apply pyblish.logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(errored_instances, plugin)

        # Get the invalid nodes for the plug-ins
        self.log.info("Finding invalid nodes..")
        invalid = list()
        for instance in instances:
            invalid_nodes = plugin.get_invalid(instance)
            if invalid_nodes:
                if isinstance(invalid_nodes, (list, tuple)):
                    invalid.extend(invalid_nodes)
                else:
                    self.log.warning("Plug-in returned to be invalid, "
                                     "but has no selectable nodes.")

        hou.clearAllSelected()
        if invalid:
            self.log.info("Selecting invalid nodes: {}".format(
                ", ".join(node.path() for node in invalid)
            ))
            for node in invalid:
                node.setSelected(True)
                node.setCurrent(True)
        else:
            self.log.info("No invalid nodes found.")
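The action relies on a convention rather than a formal interface: any
failing plug-in that wants it simply exposes a get_invalid() classmethod
returning selectable hou nodes. A minimal hedged sketch of such a
validator (the "inside /obj" check is purely illustrative):

    import pyblish.api


    class ValidateExampleNodes(pyblish.api.InstancePlugin):
        label = "Validate Example Nodes"
        order = pyblish.api.ValidatorOrder
        hosts = ["houdini"]
        actions = [SelectInvalidAction]

        @classmethod
        def get_invalid(cls, instance):
            # Return the hou nodes the action should select on failure.
            nodes = instance.data.get("nodes", [])
            return [node for node in nodes
                    if not node.path().startswith("/obj")]

        def process(self, instance):
            invalid = self.get_invalid(instance)
            if invalid:
                raise RuntimeError("Found nodes outside /obj")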
@ -12,26 +12,43 @@ import tempfile
import logging
import os

from openpype.client import get_asset_by_name
from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext
from openpype.resources import get_openpype_icon_filepath

import hou
import stateutils
import soptoolutils
import loptoolutils
import cop2toolutils


log = logging.getLogger(__name__)

CATEGORY_GENERIC_TOOL = {
    hou.sopNodeTypeCategory(): soptoolutils.genericTool,
    hou.cop2NodeTypeCategory(): cop2toolutils.genericTool,
    hou.lopNodeTypeCategory(): loptoolutils.genericTool
}


CREATE_SCRIPT = """
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("{identifier}")
create_interactive("{identifier}", **kwargs)
"""


def create_interactive(creator_identifier):
def create_interactive(creator_identifier, **kwargs):
    """Create a Creator using its identifier interactively.

    This is used by the generated shelf tools as callback when a user selects
    the creator from the node tab search menu.

    The `kwargs` should be what Houdini passes to the tool create scripts
    context. For more information see:
    https://www.sidefx.com/docs/houdini/hom/tool_script.html#arguments

    Args:
        creator_identifier (str): The creator identifier of the Creator plugin
            to create.

@ -58,6 +75,33 @@ def create_interactive(creator_identifier):

    host = registered_host()
    context = CreateContext(host)
    creator = context.manual_creators.get(creator_identifier)
    if not creator:
        raise RuntimeError("Invalid creator identifier: "
                           "{}".format(creator_identifier))

    # TODO: Once more elaborate unique create behavior should exist per Creator
    #   instead of per network editor area then we should move this from here
    #   to a method on the Creators for which this could be the default
    #   implementation.
    pane = stateutils.activePane(kwargs)
    if isinstance(pane, hou.NetworkEditor):
        pwd = pane.pwd()
        subset_name = creator.get_subset_name(
            variant=variant,
            task_name=context.get_current_task_name(),
            asset_doc=get_asset_by_name(
                project_name=context.get_current_project_name(),
                asset_name=context.get_current_asset_name()
            ),
            project_name=context.get_current_project_name(),
            host_name=context.host_name
        )

        tool_fn = CATEGORY_GENERIC_TOOL.get(pwd.childTypeCategory())
        if tool_fn is not None:
            out_null = tool_fn(kwargs, "null")
            out_null.setName("OUT_{}".format(subset_name), unique_name=True)

    before = context.instances_by_id.copy()
@ -135,12 +179,20 @@ def install():

    log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath))
    tools = []

    with shelves_change_block():
        for identifier, creator in create_context.manual_creators.items():

            # TODO: Allow the creator plug-in itself to override the categories
            #   for where they are shown, by e.g. defining
            #   `Creator.get_network_categories()`
            # Allow the creator plug-in itself to override the categories
            # for where they are shown with `Creator.get_network_categories()`
            if not hasattr(creator, "get_network_categories"):
                log.debug("Creator {} has no `get_network_categories` method "
                          "and will not be added to TAB search.")
                continue

            network_categories = creator.get_network_categories()
            if not network_categories:
                continue

            key = "openpype_create.{}".format(identifier)
            log.debug(f"Registering {key}")

@ -153,17 +205,13 @@ def install():
                    creator.label
                ),
                "help_url": None,
                "network_categories": [
                    hou.ropNodeTypeCategory(),
                    hou.sopNodeTypeCategory()
                ],
                "network_categories": network_categories,
                "viewer_categories": [],
                "cop_viewer_categories": [],
                "network_op_type": None,
                "viewer_op_type": None,
                "locations": ["OpenPype"]
            }

            label = "Create {}".format(creator.label)
            tool = hou.shelves.tool(key)
            if tool:
@ -81,7 +81,13 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        # TODO: make sure this doesn't trigger when
        #   opening with last workfile.
        _set_context_settings()
        shelves.generate_shelves()

        if not IS_HEADLESS:
            import hdefereval  # noqa, hdefereval is only available in ui mode
            # Defer generation of shelves due to issue on Windows where shelf
            # initialization during start up delays Houdini UI by minutes
            # making it extremely slow to launch.
            hdefereval.executeDeferred(shelves.generate_shelves)

        if not IS_HEADLESS:
            import hdefereval  # noqa, hdefereval is only available in ui mode
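The same deferral trick applies to anything that would otherwise block
Houdini's startup; a minimal sketch (hdefereval ships with Houdini and
imports only in UI mode, the callable is illustrative):

    import hdefereval


    def slow_setup():
        print("UI is up and idle; safe to do slow setup work now")


    # Runs once the UI event loop is idle instead of during launch.
    hdefereval.executeDeferred(slow_setup)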
@ -276,3 +276,19 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
            color = hou.Color((0.616, 0.871, 0.769))
            node.setUserData('nodeshape', shape)
            node.setColor(color)

    def get_network_categories(self):
        """Return in which network view type this creator should show.

        The node type categories returned here will be used to define where
        the creator will show up in the TAB search for nodes in Houdini's
        Network View.

        This can be overridden in inherited classes to define where that
        particular Creator should be visible in the TAB search.

        Returns:
            list: List of houdini node type categories

        """
        return [hou.ropNodeTypeCategory()]
@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance, CreatorError

import hou


class CreateAlembicCamera(plugin.HoudiniCreator):
    """Single baked camera from Alembic ROP."""

@ -47,3 +49,9 @@ class CreateAlembicCamera(plugin.HoudiniCreator):
        self.lock_parameters(instance_node, to_lock)

        instance_node.parm("trange").set(1)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.objNodeTypeCategory()
        ]
@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating composite sequences."""
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.pipeline import CreatedInstance, CreatorError

import hou


class CreateCompositeSequence(plugin.HoudiniCreator):

@ -35,8 +37,20 @@ class CreateCompositeSequence(plugin.HoudiniCreator):
            "copoutput": filepath
        }

        if self.selected_nodes:
            if len(self.selected_nodes) > 1:
                raise CreatorError("More than one item selected.")
            path = self.selected_nodes[0].path()
            parms["coppath"] = path

        instance_node.setParms(parms)

        # Lock any parameters in this list
        to_lock = ["prim_to_detail_pattern"]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.cop2NodeTypeCategory()
        ]
@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreatePointCache(plugin.HoudiniCreator):
    """Alembic ROP to pointcache"""

@ -49,3 +51,9 @@ class CreatePointCache(plugin.HoudiniCreator):
        # Lock any parameters in this list
        to_lock = ["prim_to_detail_pattern"]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.sopNodeTypeCategory()
        ]
@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreateUSD(plugin.HoudiniCreator):
    """Universal Scene Description"""

@ -13,7 +15,6 @@ class CreateUSD(plugin.HoudiniCreator):
    enabled = False

    def create(self, subset_name, instance_data, pre_create_data):
        import hou  # noqa

        instance_data.pop("active", None)
        instance_data.update({"node_type": "usd"})

@ -43,3 +44,9 @@ class CreateUSD(plugin.HoudiniCreator):
            "id",
        ]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.lopNodeTypeCategory()
        ]
@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreateVDBCache(plugin.HoudiniCreator):
    """OpenVDB from Geometry ROP"""

@ -34,3 +36,9 @@ class CreateVDBCache(plugin.HoudiniCreator):
            parms["soppath"] = self.selected_nodes[0].path()

        instance_node.setParms(parms)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.sopNodeTypeCategory()
        ]
@ -14,7 +14,7 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator):
    identifier = "io.openpype.creators.houdini.workfile"
    label = "Workfile"
    family = "workfile"
    icon = "document"
    icon = "fa5.file"

    default_variant = "Main"
@ -4,15 +4,14 @@ import hou
import pyblish.api


class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    order = pyblish.api.CollectorOrder - 0.01
    order = pyblish.api.CollectorOrder - 0.1
    label = "Houdini Current File"
    hosts = ["houdini"]
    families = ["workfile"]

    def process(self, instance):
    def process(self, context):
        """Inject the current working file"""

        current_file = hou.hipFile.path()

@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
                "saved correctly."
            )

        instance.context.data["currentFile"] = current_file

        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data.update({
            "setMembers": [current_file],
            "frameStart": instance.context.data['frameStart'],
            "frameEnd": instance.context.data['frameEnd'],
            "handleStart": instance.context.data['handleStart'],
            "handleEnd": instance.context.data['handleEnd']
        })

        instance.data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('Scene path: {}'.format(current_file))
        self.log.info('staging Dir: {}'.format(folder))
        context.data["currentFile"] = current_file
        self.log.info('Current workfile path: {}'.format(current_file))
@ -17,6 +17,10 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
        # which isn't the actual frame range that this instance renders.
        instance.data["handleStart"] = 0
        instance.data["handleEnd"] = 0
        instance.data["fps"] = instance.context.data["fps"]

        # Enable ftrack functionality
        instance.data.setdefault("families", []).append('ftrack')

        # Get the camera from the rop node to collect the focal length
        ropnode_path = instance.data["instance_node"]

@ -25,8 +29,9 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
        camera_path = ropnode.parm("camera").eval()
        camera_node = hou.node(camera_path)
        if not camera_node:
            raise RuntimeError("No valid camera node found on review node: "
                               "{}".format(camera_path))
            self.log.warning("No valid camera node found on review node: "
                             "{}".format(camera_path))
            return

        # Collect focal length.
        focal_length_parm = camera_node.parm("focal")

@ -48,5 +53,3 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
        # Store focal length in `burninDataMembers`
        burnin_members = instance.data.setdefault("burninDataMembers", {})
        burnin_members["focalLength"] = focal_length

        instance.data.setdefault("families", []).append('ftrack')
openpype/hosts/houdini/plugins/publish/collect_workfile.py (new file, 36 lines)
@ -0,0 +1,36 @@
import os

import pyblish.api


class CollectWorkfile(pyblish.api.InstancePlugin):
    """Inject workfile representation into instance"""

    order = pyblish.api.CollectorOrder - 0.01
    label = "Houdini Workfile Data"
    hosts = ["houdini"]
    families = ["workfile"]

    def process(self, instance):

        current_file = instance.context.data["currentFile"]
        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data.update({
            "setMembers": [current_file],
            "frameStart": instance.context.data['frameStart'],
            "frameEnd": instance.context.data['frameEnd'],
            "handleStart": instance.context.data['handleStart'],
            "handleEnd": instance.context.data['handleEnd']
        })

        instance.data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('staging Dir: {}'.format(folder))
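For a concrete sense of what this produces, here is the representation
built for a sample workfile path (plain Python, path illustrative):

    import os

    current_file = "/projects/show/work/shot010_v003.hip"
    folder, file = os.path.split(current_file)
    _, ext = os.path.splitext(file)

    representation = {
        "name": ext.lstrip("."),  # "hip"
        "ext": ext.lstrip("."),   # "hip"
        "files": file,            # "shot010_v003.hip"
        "stagingDir": folder,     # "/projects/show/work"
    }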
@ -2,27 +2,20 @@ import os

import pyblish.api

from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop

import hou


class ExtractOpenGL(publish.Extractor,
                    OptionalPyblishPluginMixin):
class ExtractOpenGL(publish.Extractor):

    order = pyblish.api.ExtractorOrder - 0.01
    label = "Extract OpenGL"
    families = ["review"]
    hosts = ["houdini"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        ropnode = hou.node(instance.data.get("instance_node"))

        output = ropnode.evalParm("picture")
@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Scene setting</title>
        <description>
## Invalid input node

VDB input must have the same number of VDBs, points, primitives and vertices as output.

        </description>
        <detail>
### __Detailed Info__ (optional)

A VDB is an inherited type of Prim, holds the following data:
- Primitives: 1
- Points: 1
- Vertices: 1
- VDBs: 1
        </detail>
    </error>
</root>
@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Invalid VDB</title>
        <description>
## Invalid VDB output

All primitives of the output geometry must be VDBs, no other primitive
types are allowed. That means that regardless of the amount of VDBs in the
geometry it will have an equal amount of VDBs, points, primitives and
vertices since each VDB primitive is one point, one vertex and one VDB.

This validation only checks the geometry on the first frame of the export
frame range.


        </description>
        <detail>
### Detailed Info

ROP node `{rop_path}` is set to export SOP path `{sop_path}`.

{message}

        </detail>
    </error>
</root>
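The counting rule in this description can be checked directly on a
cooked SOP with the same hou calls the validator uses. A minimal sketch,
assuming an illustrative node path (runs only inside Houdini):

    import hou

    sop = hou.node("/obj/geo1/OUT")  # illustrative SOP path
    geo = sop.geometry()

    num_prims = geo.intrinsicValue("primitivecount")
    num_points = geo.intrinsicValue("pointcount")
    num_vdbs = geo.countPrimType(hou.primType.VDB)

    # In a pure-VDB output every VDB is one point, one vertex and one
    # primitive, so all three counts must match.
    assert num_prims == num_points == num_vdbs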
@ -16,15 +16,19 @@ class ValidateSceneReview(pyblish.api.InstancePlugin):
    label = "Scene Setting for review"

    def process(self, instance):
        invalid = self.get_invalid_scene_path(instance)

        report = []
        if invalid:
            report.append(
                "Scene path does not exist: '%s'" % invalid[0],
            )
        instance_node = hou.node(instance.data.get("instance_node"))

        invalid = self.get_invalid_resolution(instance)
        invalid = self.get_invalid_scene_path(instance_node)
        if invalid:
            report.append(invalid)

        invalid = self.get_invalid_camera_path(instance_node)
        if invalid:
            report.append(invalid)

        invalid = self.get_invalid_resolution(instance_node)
        if invalid:
            report.extend(invalid)
|
|||
"\n\n".join(report),
|
||||
title=self.label)
|
||||
|
||||
def get_invalid_scene_path(self, instance):
|
||||
|
||||
node = hou.node(instance.data.get("instance_node"))
|
||||
scene_path_parm = node.parm("scenepath")
|
||||
def get_invalid_scene_path(self, rop_node):
|
||||
scene_path_parm = rop_node.parm("scenepath")
|
||||
scene_path_node = scene_path_parm.evalAsNode()
|
||||
if not scene_path_node:
|
||||
return [scene_path_parm.evalAsString()]
|
||||
path = scene_path_parm.evalAsString()
|
||||
return "Scene path does not exist: '{}'".format(path)
|
||||
|
||||
def get_invalid_resolution(self, instance):
|
||||
node = hou.node(instance.data.get("instance_node"))
|
||||
def get_invalid_camera_path(self, rop_node):
|
||||
camera_path_parm = rop_node.parm("camera")
|
||||
camera_node = camera_path_parm.evalAsNode()
|
||||
path = camera_path_parm.evalAsString()
|
||||
if not camera_node:
|
||||
return "Camera path does not exist: '{}'".format(path)
|
||||
type_name = camera_node.type().name()
|
||||
if type_name != "cam":
|
||||
return "Camera path is not a camera: '{}' (type: {})".format(
|
||||
path, type_name
|
||||
)
|
||||
|
||||
def get_invalid_resolution(self, rop_node):
|
||||
|
||||
# The resolution setting is only used when Override Camera Resolution
|
||||
# is enabled. So we skip validation if it is disabled.
|
||||
override = node.parm("tres").eval()
|
||||
override = rop_node.parm("tres").eval()
|
||||
if not override:
|
||||
return
|
||||
|
||||
invalid = []
|
||||
res_width = node.parm("res1").eval()
|
||||
res_height = node.parm("res2").eval()
|
||||
res_width = rop_node.parm("res1").eval()
|
||||
res_height = rop_node.parm("res2").eval()
|
||||
if res_width == 0:
|
||||
invalid.append("Override Resolution width is set to zero.")
|
||||
if res_height == 0:
|
||||
|
|
|
|||
|
|
@ -1,52 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
import pyblish.api
|
||||
from openpype.pipeline import (
|
||||
PublishValidationError
|
||||
)
|
||||
|
||||
|
||||
class ValidateVDBInputNode(pyblish.api.InstancePlugin):
|
||||
"""Validate that the node connected to the output node is of type VDB.
|
||||
|
||||
Regardless of the amount of VDBs create the output will need to have an
|
||||
equal amount of VDBs, points, primitives and vertices
|
||||
|
||||
A VDB is an inherited type of Prim, holds the following data:
|
||||
- Primitives: 1
|
||||
- Points: 1
|
||||
- Vertices: 1
|
||||
- VDBs: 1
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.ValidatorOrder + 0.1
|
||||
families = ["vdbcache"]
|
||||
hosts = ["houdini"]
|
||||
label = "Validate Input Node (VDB)"
|
||||
|
||||
def process(self, instance):
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise PublishValidationError(
|
||||
self,
|
||||
"Node connected to the output node is not of type VDB",
|
||||
title=self.label
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance):
|
||||
|
||||
node = instance.data["output_node"]
|
||||
|
||||
prims = node.geometry().prims()
|
||||
nr_of_prims = len(prims)
|
||||
|
||||
nr_of_points = len(node.geometry().points())
|
||||
if nr_of_points != nr_of_prims:
|
||||
cls.log.error("The number of primitives and points do not match")
|
||||
return [instance]
|
||||
|
||||
for prim in prims:
|
||||
if prim.numVertices() != 1:
|
||||
cls.log.error("Found primitive with more than 1 vertex!")
|
||||
return [instance]
|
||||
|
|
@ -1,14 +1,73 @@
# -*- coding: utf-8 -*-
import contextlib

import pyblish.api
import hou
from openpype.pipeline import PublishValidationError

from openpype.pipeline import PublishXmlValidationError
from openpype.hosts.houdini.api.action import SelectInvalidAction


def group_consecutive_numbers(nums):
    """
    Args:
        nums (list): List of sorted integer numbers.

    Yields:
        str: Group ranges as {start}-{end} if more than one number in the range
            else it yields {end}

    """
    start = None
    end = None

    def _result(a, b):
        if a == b:
            return "{}".format(a)
        else:
            return "{}-{}".format(a, b)

    for num in nums:
        if start is None:
            start = num
            end = num
        elif num == end + 1:
            end = num
        else:
            yield _result(start, end)
            start = num
            end = num
    if start is not None:
        yield _result(start, end)


@contextlib.contextmanager
def update_mode_context(mode):
    original = hou.updateModeSetting()
    try:
        hou.setUpdateMode(mode)
        yield
    finally:
        hou.setUpdateMode(original)


def get_geometry_at_frame(sop_node, frame, force=True):
    """Return geometry at frame but force a cooked value."""
    with update_mode_context(hou.updateMode.AutoUpdate):
        sop_node.cook(force=force, frame_range=(frame, frame))
        return sop_node.geometryAtFrame(frame)


class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
    """Validate that the node connected to the output node is of type VDB.

    Regardless of the amount of VDBs create the output will need to have an
    equal amount of VDBs, points, primitives and vertices
    All primitives of the output geometry must be VDBs, no other primitive
    types are allowed. That means that regardless of the amount of VDBs in the
    geometry it will have an equal amount of VDBs, points, primitives and
    vertices since each VDB primitive is one point, one vertex and one VDB.

    This validation only checks the geometry on the first frame of the export
    frame range for optimization purposes.

    A VDB is an inherited type of Prim, holds the following data:
    - Primitives: 1
@ -22,54 +81,95 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
    families = ["vdbcache"]
    hosts = ["houdini"]
    label = "Validate Output Node (VDB)"
    actions = [SelectInvalidAction]

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                "Node connected to the output node is not" " of type VDB!",
                title=self.label
        invalid_nodes, message = self.get_invalid_with_message(instance)
        if invalid_nodes:

            # instance_node is str, but output_node is hou.Node so we convert
            output = instance.data.get("output_node")
            output_path = output.path() if output else None

            raise PublishXmlValidationError(
                self,
                "Invalid VDB content: {}".format(message),
                formatting_data={
                    "message": message,
                    "rop_path": instance.data.get("instance_node"),
                    "sop_path": output_path
                }
            )

    @classmethod
    def get_invalid(cls, instance):
    def get_invalid_with_message(cls, instance):

        node = instance.data["output_node"]
        node = instance.data.get("output_node")
        if node is None:
            cls.log.error(
            instance_node = instance.data.get("instance_node")
            error = (
                "SOP path is not correctly set on "
                "ROP node '%s'." % instance.data.get("instance_node")
                "ROP node `{}`.".format(instance_node)
            )
            return [instance]
            return [hou.node(instance_node), error]

        frame = instance.data.get("frameStart", 0)
        geometry = node.geometryAtFrame(frame)
        geometry = get_geometry_at_frame(node, frame)
        if geometry is None:
            # No geometry data on this node, maybe the node hasn't cooked?
            cls.log.error(
                "SOP node has no geometry data. "
                "Is it cooked? %s" % node.path()
            error = (
                "SOP node `{}` has no geometry data. "
                "Was it unable to cook?".format(node.path())
            )
            return [node]
            return [node, error]

        prims = geometry.prims()
        nr_of_prims = len(prims)
        num_prims = geometry.intrinsicValue("primitivecount")
        num_points = geometry.intrinsicValue("pointcount")
        if num_prims == 0 and num_points == 0:
            # Since we are only checking the first frame it doesn't mean there
            # won't be VDB prims in a few frames. As such we'll assume for now
            # the user knows what he or she is doing
            cls.log.warning(
                "SOP node `{}` has no primitives on start frame {}. "
                "Validation is skipped and it is assumed elsewhere in the "
                "frame range VDB prims and only VDB prims will exist."
                "".format(node.path(), int(frame))
            )
            return [None, None]

        # All primitives must be hou.VDB
        invalid_prim = False
        for prim in prims:
            if not isinstance(prim, hou.VDB):
                cls.log.error("Found non-VDB primitive: %s" % prim)
                invalid_prim = True
        if invalid_prim:
            return [instance]
        num_vdb_prims = geometry.countPrimType(hou.primType.VDB)
        cls.log.debug("Detected {} VDB primitives".format(num_vdb_prims))
        if num_prims != num_vdb_prims:
            # There's at least one primitive that is not a VDB.
            # Search them and report them to the artist.
            prims = geometry.prims()
            invalid_prims = [prim for prim in prims
                             if not isinstance(prim, hou.VDB)]
            if invalid_prims:
                # Log prim numbers as consecutive ranges so logging isn't very
                # slow for large number of primitives
                error = (
                    "Found non-VDB primitives for `{}`. "
                    "Primitive indices {} are not VDB primitives.".format(
                        node.path(),
                        ", ".join(group_consecutive_numbers(
                            prim.number() for prim in invalid_prims
                        ))
                    )
                )
                return [node, error]

        nr_of_points = len(geometry.points())
        if nr_of_points != nr_of_prims:
            cls.log.error("The number of primitives and points do not match")
            return [instance]
        if num_points != num_vdb_prims:
            # We have points unrelated to the VDB primitives.
            error = (
                "The number of primitives and points do not match in '{}'. "
                "This likely means you have unconnected points, which we do "
                "not allow in the VDB output.".format(node.path()))
            return [node, error]

        for prim in prims:
            if prim.numVertices() != 1:
                cls.log.error("Found primitive with more than 1 vertex!")
                return [instance]
        return [None, None]

    @classmethod
    def get_invalid(cls, instance):
        nodes, _ = cls.get_invalid_with_message(instance)
        return nodes
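The range-grouping helper introduced at the top of that file is plain
Python and easy to sanity-check outside Houdini:

    indices = [0, 1, 2, 5, 7, 8, 9, 42]
    print(", ".join(group_consecutive_numbers(indices)))
    # -> 0-2, 5, 7-9, 42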
openpype/hosts/max/plugins/create/create_model.py (new file, 28 lines)
@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
"""Creator plugin for model."""
from openpype.hosts.max.api import plugin
from openpype.pipeline import CreatedInstance


class CreateModel(plugin.MaxCreator):
    identifier = "io.openpype.creators.max.model"
    label = "Model"
    family = "model"
    icon = "gear"

    def create(self, subset_name, instance_data, pre_create_data):
        from pymxs import runtime as rt
        instance = super(CreateModel, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: CreatedInstance
        container = rt.getNodeByName(instance.data.get("instance_node"))
        # TODO: Disable "Add to Containers?" Panel
        # parent the selected cameras into the container
        sel_obj = None
        if self.selected_nodes:
            sel_obj = list(self.selected_nodes)
            for obj in sel_obj:
                obj.parent = container
        # for additional work on the node:
        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@ -10,7 +10,9 @@ class MaxSceneLoader(load.LoaderPlugin):
    """Max Scene Loader"""

    families = ["camera",
                "maxScene"]
                "maxScene",
                "model"]

    representations = ["max"]
    order = -8
    icon = "code-fork"
openpype/hosts/max/plugins/load/load_model.py (new file, 109 lines)
@ -0,0 +1,109 @@

import os
from openpype.pipeline import (
    load, get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection


class ModelAbcLoader(load.LoaderPlugin):
    """Loading model with the Alembic loader."""

    families = ["model"]
    label = "Load Model(Alembic)"
    representations = ["abc"]
    order = -10
    icon = "code-fork"
    color = "orange"

    def load(self, context, name=None, namespace=None, data=None):
        from pymxs import runtime as rt

        file_path = os.path.normpath(self.fname)

        abc_before = {
            c for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

        abc_import_cmd = (f"""
AlembicImport.ImportToRoot = false
AlembicImport.CustomAttributes = true
AlembicImport.UVs = true
AlembicImport.VertexColors = true

importFile @"{file_path}" #noPrompt
""")

        self.log.debug(f"Executing command: {abc_import_cmd}")
        rt.execute(abc_import_cmd)

        abc_after = {
            c for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

        # This should yield new AlembicContainer node
        abc_containers = abc_after.difference(abc_before)

        if len(abc_containers) != 1:
            self.log.error("Something failed when loading.")

        abc_container = abc_containers.pop()

        return containerise(
            name, [abc_container], context, loader=self.__class__.__name__)

    def update(self, container, representation):
        from pymxs import runtime as rt
        path = get_representation_path(representation)
        node = rt.getNodeByName(container["instance_node"])
        rt.select(node.Children)

        for alembic in rt.selection:
            abc = rt.getNodeByName(alembic.name)
            rt.select(abc.Children)
            for abc_con in rt.selection:
                container = rt.getNodeByName(abc_con.name)
                container.source = path
                rt.select(container.Children)
                for abc_obj in rt.selection:
                    alembic_obj = rt.getNodeByName(abc_obj.name)
                    alembic_obj.source = path

        with maintained_selection():
            rt.select(node)

        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })

    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)

    @staticmethod
    def get_container_children(parent, type_name):
        from pymxs import runtime as rt

        def list_children(node):
            children = []
            for c in node.Children:
                children.append(c)
                children += list_children(c)
            return children

        filtered = []
        for child in list_children(parent):
            class_type = str(rt.classOf(child.baseObject))
            if class_type == type_name:
                filtered.append(child)

        return filtered
openpype/hosts/max/plugins/load/load_model_fbx.py (new file, 77 lines)
@ -0,0 +1,77 @@
import os
from openpype.pipeline import (
    load,
    get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection


class FbxModelLoader(load.LoaderPlugin):
    """Fbx Model Loader"""

    families = ["model"]
    representations = ["fbx"]
    order = -9
    icon = "code-fork"
    color = "white"

    def load(self, context, name=None, namespace=None, data=None):
        from pymxs import runtime as rt

        filepath = os.path.normpath(self.fname)

        fbx_import_cmd = (
            f"""

FBXImporterSetParam "Animation" false
FBXImporterSetParam "Cameras" false
FBXImporterSetParam "AxisConversionMethod" true
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true

importFile @"{filepath}" #noPrompt using:FBXIMP
""")

        self.log.debug(f"Executing command: {fbx_import_cmd}")
        rt.execute(fbx_import_cmd)

        asset = rt.getNodeByName(f"{name}")

        return containerise(
            name, [asset], context, loader=self.__class__.__name__)

    def update(self, container, representation):
        from pymxs import runtime as rt

        path = get_representation_path(representation)
        node = rt.getNodeByName(container["instance_node"])
        rt.select(node.Children)
        fbx_reimport_cmd = (
            f"""
FBXImporterSetParam "Animation" false
FBXImporterSetParam "Cameras" false
FBXImporterSetParam "AxisConversionMethod" true
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true

importFile @"{path}" #noPrompt using:FBXIMP
""")
        rt.execute(fbx_reimport_cmd)

        with maintained_selection():
            rt.select(node)

        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })

    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)
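All of these max loaders share one pattern: format a MaxScript snippet in
Python, then hand it to rt.execute. Stripped to its core (only runs
inside 3ds Max, the path is illustrative):

    from pymxs import runtime as rt

    filepath = "C:/temp/asset.fbx"
    # Any MaxScript statement can be built and run this way.
    rt.execute(f'importFile @"{filepath}" #noPrompt using:FBXIMP')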
openpype/hosts/max/plugins/load/load_model_obj.py (new file, 68 lines)
@ -0,0 +1,68 @@
import os
from openpype.pipeline import (
    load,
    get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection


class ObjLoader(load.LoaderPlugin):
    """Obj Loader"""

    families = ["model"]
    representations = ["obj"]
    order = -9
    icon = "code-fork"
    color = "white"

    def load(self, context, name=None, namespace=None, data=None):
        from pymxs import runtime as rt

        filepath = os.path.normpath(self.fname)
        self.log.debug(f"Executing command to import..")

        rt.execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
        # create "missing" container for obj import
        container = rt.container()
        container.name = f"{name}"

        # get current selection
        for selection in rt.getCurrentSelection():
            selection.Parent = container

        asset = rt.getNodeByName(f"{name}")

        return containerise(
            name, [asset], context, loader=self.__class__.__name__)

    def update(self, container, representation):
        from pymxs import runtime as rt

        path = get_representation_path(representation)
        node_name = container["instance_node"]
        node = rt.getNodeByName(node_name)

        instance_name, _ = node_name.split("_")
        container = rt.getNodeByName(instance_name)
        for n in container.Children:
            rt.delete(n)

        rt.execute(f'importFile @"{path}" #noPrompt using:ObjImp')
        # get current selection
        for selection in rt.getCurrentSelection():
            selection.Parent = container

        with maintained_selection():
            rt.select(node)

        lib.imprint(node_name, {
            "representation": str(representation["_id"])
        })

    def remove(self, container):
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)
openpype/hosts/max/plugins/load/load_model_usd.py
Normal file
78
openpype/hosts/max/plugins/load/load_model_usd.py
Normal file
|
|
@ -0,0 +1,78 @@
|
|||
import os
|
||||
from openpype.pipeline import (
|
||||
load, get_representation_path
|
||||
)
|
||||
from openpype.hosts.max.api.pipeline import containerise
|
||||
from openpype.hosts.max.api import lib
|
||||
from openpype.hosts.max.api.lib import maintained_selection
|
||||
|
||||
|
||||
class ModelUSDLoader(load.LoaderPlugin):
|
||||
"""Loading model with the USD loader."""
|
||||
|
||||
families = ["model"]
|
||||
label = "Load Model(USD)"
|
||||
representations = ["usda"]
|
||||
order = -10
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
from pymxs import runtime as rt
|
||||
# asset_filepath
|
||||
filepath = os.path.normpath(self.fname)
|
||||
import_options = rt.USDImporter.CreateOptions()
|
||||
base_filename = os.path.basename(filepath)
|
||||
filename, ext = os.path.splitext(base_filename)
|
||||
log_filepath = filepath.replace(ext, "txt")
|
||||
|
||||
rt.LogPath = log_filepath
|
||||
rt.LogLevel = rt.name('info')
|
||||
rt.USDImporter.importFile(filepath,
|
||||
importOptions=import_options)
|
||||
|
||||
asset = rt.getNodeByName(f"{name}")
|
||||
|
||||
return containerise(
|
||||
name, [asset], context, loader=self.__class__.__name__)
|
||||
|
||||
def update(self, container, representation):
|
||||
from pymxs import runtime as rt
|
||||
|
||||
path = get_representation_path(representation)
|
||||
node_name = container["instance_node"]
|
||||
node = rt.getNodeByName(node_name)
|
||||
for n in node.Children:
|
||||
for r in n.Children:
|
||||
rt.delete(r)
|
||||
rt.delete(n)
|
||||
instance_name, _ = node_name.split("_")
|
||||
|
||||
import_options = rt.USDImporter.CreateOptions()
|
||||
base_filename = os.path.basename(path)
|
||||
_, ext = os.path.splitext(base_filename)
|
||||
log_filepath = path.replace(ext, "txt")
|
||||
|
||||
rt.LogPath = log_filepath
|
||||
rt.LogLevel = rt.name('info')
|
||||
rt.USDImporter.importFile(path,
|
||||
importOptions=import_options)
|
||||
|
||||
asset = rt.getNodeByName(f"{instance_name}")
|
||||
asset.Parent = node
|
||||
|
||||
with maintained_selection():
|
||||
rt.select(node)
|
||||
|
||||
lib.imprint(node_name, {
|
||||
"representation": str(representation["_id"])
|
||||
})
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
||||
def remove(self, container):
|
||||
from pymxs import runtime as rt
|
||||
|
||||
node = rt.getNodeByName(container["instance_node"])
|
||||
rt.delete(node)
|
||||
|
|
@ -15,8 +15,7 @@ from openpype.hosts.max.api import lib
class AbcLoader(load.LoaderPlugin):
    """Alembic loader."""

    families = ["model",
                "camera",
    families = ["camera",
                "animation",
                "pointcache"]
    label = "Load Alembic"
@ -21,7 +21,8 @@ class ExtractMaxSceneRaw(publish.Extractor,
    label = "Extract Max Scene (Raw)"
    hosts = ["max"]
    families = ["camera",
                "maxScene"]
                "maxScene",
                "model"]
    optional = True

    def process(self, instance):
openpype/hosts/max/plugins/publish/extract_model.py (new file, 74 lines)
@ -0,0 +1,74 @@
|
|||
import os
import pyblish.api
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection,
    get_all_children
)


class ExtractModel(publish.Extractor,
                   OptionalPyblishPluginMixin):
    """
    Extract Geometry in Alembic Format
    """

    order = pyblish.api.ExtractorOrder - 0.1
    label = "Extract Geometry (Alembic)"
    hosts = ["max"]
    families = ["model"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.abc".format(**instance.data)
        filepath = os.path.join(stagingdir, filename)

        # We run the render
        self.log.info("Writing alembic '%s' to '%s'" % (filename,
                                                        stagingdir))

        export_cmd = (
            f"""
AlembicExport.ArchiveType = #ogawa
AlembicExport.CoordinateSystem = #maya
AlembicExport.CustomAttributes = true
AlembicExport.UVs = true
AlembicExport.VertexColors = true
AlembicExport.PreserveInstances = true

exportFile @"{filepath}" #noPrompt selectedOnly:on using:AlembicExport

""")

        self.log.debug(f"Executing command: {export_cmd}")

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            rt.execute(export_cmd)

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'abc',
            'ext': 'abc',
            'files': filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)
        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          filepath))
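The extractor above drives 3ds Max's built-in Alembic exporter by composing a MAXScript snippet and running it through pymxs. A minimal standalone sketch of that pattern, assuming it runs inside 3ds Max; the node name "demo_GRP" and the output path are hypothetical examples, not names from this changeset:

# Sketch only: the MAXScript-through-pymxs export pattern used above.
# Assumes it runs inside 3ds Max and that a node named "demo_GRP" exists;
# both the node name and the output path are hypothetical examples.
from pymxs import runtime as rt

filepath = "C:/temp/demo.abc"
export_cmd = f"""
AlembicExport.ArchiveType = #ogawa
AlembicExport.CoordinateSystem = #maya
exportFile @"{filepath}" #noPrompt selectedOnly:on using:AlembicExport
"""
# Select the children of the hypothetical container, then run the script.
rt.select(rt.getNodeByName("demo_GRP").Children)
rt.execute(export_cmd)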
openpype/hosts/max/plugins/publish/extract_model_fbx.py (new file, 74 lines)
@@ -0,0 +1,74 @@
import os
import pyblish.api
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection,
    get_all_children
)


class ExtractModelFbx(publish.Extractor,
                      OptionalPyblishPluginMixin):
    """
    Extract Geometry in FBX Format
    """

    order = pyblish.api.ExtractorOrder - 0.05
    label = "Extract FBX"
    hosts = ["max"]
    families = ["model"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.fbx".format(**instance.data)
        filepath = os.path.join(stagingdir,
                                filename)
        self.log.info("Writing FBX '%s' to '%s'" % (filepath,
                                                    stagingdir))

        export_fbx_cmd = (
            f"""
FBXExporterSetParam "Animation" false
FBXExporterSetParam "Cameras" false
FBXExporterSetParam "Lights" false
FBXExporterSetParam "PointCache" false
FBXExporterSetParam "AxisConversionMethod" "Animation"
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true

exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP

""")

        self.log.debug(f"Executing command: {export_fbx_cmd}")

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            rt.execute(export_fbx_cmd)

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)
        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          filepath))
openpype/hosts/max/plugins/publish/extract_model_obj.py (new file, 59 lines)
@@ -0,0 +1,59 @@
import os
import pyblish.api
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection,
    get_all_children
)


class ExtractModelObj(publish.Extractor,
                      OptionalPyblishPluginMixin):
    """
    Extract Geometry in OBJ Format
    """

    order = pyblish.api.ExtractorOrder - 0.05
    label = "Extract OBJ"
    hosts = ["max"]
    families = ["model"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.obj".format(**instance.data)
        filepath = os.path.join(stagingdir,
                                filename)
        self.log.info("Writing OBJ '%s' to '%s'" % (filepath,
                                                    stagingdir))

        with maintained_selection():
            # select and export
            rt.select(get_all_children(rt.getNodeByName(container)))
            rt.execute(f'exportFile @"{filepath}" #noPrompt selectedOnly:true using:ObjExp')  # noqa

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'obj',
            'ext': 'obj',
            'files': filename,
            "stagingDir": stagingdir,
        }

        instance.data["representations"].append(representation)
        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          filepath))
openpype/hosts/max/plugins/publish/extract_model_usd.py (new file, 114 lines)
@@ -0,0 +1,114 @@
import os
import pyblish.api
from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.hosts.max.api import (
    maintained_selection
)


class ExtractModelUSD(publish.Extractor,
                      OptionalPyblishPluginMixin):
    """
    Extract Geometry in USDA Format
    """

    order = pyblish.api.ExtractorOrder - 0.05
    label = "Extract Geometry (USD)"
    hosts = ["max"]
    families = ["model"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        container = instance.data["instance_node"]

        self.log.info("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        asset_filename = "{name}.usda".format(**instance.data)
        asset_filepath = os.path.join(stagingdir,
                                      asset_filename)
        self.log.info("Writing USD '%s' to '%s'" % (asset_filepath,
                                                    stagingdir))

        log_filename = "{name}.txt".format(**instance.data)
        log_filepath = os.path.join(stagingdir,
                                    log_filename)
        self.log.info("Writing log '%s' to '%s'" % (log_filepath,
                                                    stagingdir))

        # get the nodes which need to be exported
        export_options = self.get_export_options(log_filepath)
        with maintained_selection():
            # select and export
            node_list = self.get_node_list(container)
            rt.USDExporter.ExportFile(asset_filepath,
                                      exportOptions=export_options,
                                      contentSource=rt.name("selected"),
                                      nodeList=node_list)

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'usda',
            'ext': 'usda',
            'files': asset_filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)

        log_representation = {
            'name': 'txt',
            'ext': 'txt',
            'files': log_filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(log_representation)

        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          asset_filepath))

    def get_node_list(self, container):
        """
        Get the target nodes which are
        the children of the container
        """
        node_list = []

        container_node = rt.getNodeByName(container)
        target_node = container_node.Children
        rt.select(target_node)
        for sel in rt.selection:
            node_list.append(sel)

        return node_list

    def get_export_options(self, log_path):
        """Set Export Options for USD Exporter"""

        export_options = rt.USDExporter.createOptions()

        export_options.Meshes = True
        export_options.Shapes = False
        export_options.Lights = False
        export_options.Cameras = False
        export_options.Materials = False
        export_options.MeshFormat = rt.name('fromScene')
        export_options.FileFormat = rt.name('ascii')
        export_options.UpAxis = rt.name('y')
        export_options.LogLevel = rt.name('info')
        export_options.LogPath = log_path
        export_options.PreserveEdgeOrientation = True
        export_options.TimeMode = rt.name('current')

        rt.USDexporter.UIOptions = export_options

        return export_options
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt


class ValidateModelContent(pyblish.api.InstancePlugin):
    """Validates Model instance contents.

    A model instance may only hold either geometry-related
    object(excluding Shapes) or editable meshes.
    """

    order = pyblish.api.ValidatorOrder
    families = ["model"]
    hosts = ["max"]
    label = "Model Contents"

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError("Model instance must only include"
                                         "Geometry and Editable Mesh")

    def get_invalid(self, instance):
        """
        Get invalid nodes if the instance is not camera
        """
        invalid = list()
        container = instance.data["instance_node"]
        self.log.info("Validating look content for "
                      "{}".format(container))

        con = rt.getNodeByName(container)
        selection_list = list(con.Children) or rt.getCurrentSelection()
        for sel in selection_list:
            if rt.classOf(sel) in rt.Camera.classes:
                invalid.append(sel)
            if rt.classOf(sel) in rt.Light.classes:
                invalid.append(sel)
            if rt.classOf(sel) in rt.Shape.classes:
                invalid.append(sel)

        return invalid
openpype/hosts/max/plugins/publish/validate_usd_plugin.py (new file, 36 lines)
@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt


class ValidateUSDPlugin(pyblish.api.InstancePlugin):
    """Validates if USD plugin is installed or loaded in Max
    """

    order = pyblish.api.ValidatorOrder - 0.01
    families = ["model"]
    hosts = ["max"]
    label = "USD Plugin"

    def process(self, instance):
        plugin_mgr = rt.pluginManager
        plugin_count = plugin_mgr.pluginDllCount
        plugin_info = self.get_plugins(plugin_mgr,
                                       plugin_count)
        usd_import = "usdimport.dli"
        if usd_import not in plugin_info:
            raise PublishValidationError("USD Plugin {}"
                                         " not found".format(usd_import))
        usd_export = "usdexport.dle"
        if usd_export not in plugin_info:
            raise PublishValidationError("USD Plugin {}"
                                         " not found".format(usd_export))

    def get_plugins(self, manager, count):
        plugin_info_list = list()
        for p in range(1, count + 1):
            plugin_info = manager.pluginDllName(p)
            plugin_info_list.append(plugin_info)

        return plugin_info_list
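The validator above walks 3ds Max's plugin manager by 1-based index to collect the loaded plugin DLL names. A minimal sketch of the same query, assuming it runs inside 3ds Max with pymxs available:

# Sketch only: list loaded plugin DLLs the way the validator above does.
# Assumes it runs inside a 3ds Max session.
from pymxs import runtime as rt

mgr = rt.pluginManager
names = [mgr.pluginDllName(i) for i in range(1, mgr.pluginDllCount + 1)]
print("usdimport.dli" in names, "usdexport.dle" in names)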
@@ -43,7 +43,24 @@ class MayaTemplateBuilder(AbstractTemplateBuilder):
         ))

         cmds.sets(name=PLACEHOLDER_SET, empty=True)
-        new_nodes = cmds.file(path, i=True, returnNewNodes=True)
+        new_nodes = cmds.file(
+            path,
+            i=True,
+            returnNewNodes=True,
+            preserveReferences=True,
+            loadReferenceDepth="all",
+        )
+
+        # make default cameras non-renderable
+        default_cameras = [cam for cam in cmds.ls(cameras=True)
+                           if cmds.camera(cam, query=True, startupCamera=True)]
+        for cam in default_cameras:
+            if not cmds.attributeQuery("renderable", node=cam, exists=True):
+                self.log.debug(
+                    "Camera {} has no attribute 'renderable'".format(cam)
+                )
+                continue
+            cmds.setAttr("{}.renderable".format(cam), 0)

         cmds.setAttr(PLACEHOLDER_SET + ".hiddenInOutliner", True)
@@ -162,9 +162,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             with parent_nodes(roots, parent=None):
                 cmds.xform(group_name, zeroTransformPivots=True)

-            cmds.setAttr("{}.displayHandle".format(group_name), 1)
-
             settings = get_project_settings(os.environ['AVALON_PROJECT'])

+            display_handle = settings['maya']['load'].get(
+                'reference_loader', {}
+            ).get('display_handle', True)
+            cmds.setAttr(
+                "{}.displayHandle".format(group_name), display_handle
+            )
+
             colors = settings['maya']['load']['colors']
             c = colors.get(family)
             if c is not None:

@@ -174,7 +180,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                     (float(c[1]) / 255),
                     (float(c[2]) / 255))

-            cmds.setAttr("{}.displayHandle".format(group_name), 1)
+            cmds.setAttr(
+                "{}.displayHandle".format(group_name), display_handle
+            )
             # get bounding box
             bbox = cmds.exactWorldBoundingBox(group_name)
             # get pivot position on world space
@@ -280,7 +280,7 @@ class MakeTX(TextureProcessor):
             # Do nothing if the source file is already a .tx file.
             return TextureResult(
                 path=source,
-                file_hash=None,  # todo: unknown texture hash?
+                file_hash=source_hash(source),
                 colorspace=colorspace,
                 transfer_mode=COPY
             )
@@ -217,7 +217,11 @@ class ExtractPlayblast(publish.Extractor):
             instance.data["panel"], edit=True, **viewport_defaults
         )

-        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
+        try:
+            cmds.setAttr(
+                "{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
+        except RuntimeError:
+            self.log.warning("Cannot restore Pan/Zoom settings.")

        collected_files = os.listdir(stagingdir)
        patterns = [clique.PATTERNS["frames"]]
@@ -6,7 +6,7 @@ import pyblish.api

 from openpype.hosts.maya.api.lib import set_attribute
 from openpype.pipeline.publish import (
-    RepairContextAction,
+    RepairAction,
     ValidateContentsOrder,
 )

@@ -26,7 +26,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
     order = ValidateContentsOrder
     label = "Attributes"
     hosts = ["maya"]
-    actions = [RepairContextAction]
+    actions = [RepairAction]
     optional = True

     attributes = None

@@ -81,7 +81,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
         if node_name not in attributes:
             continue

-        for attr_name, expected in attributes.items():
+        for attr_name, expected in attributes[node_name].items():

             # Skip if attribute does not exist
             if not cmds.attributeQuery(attr_name, node=node, exists=True):
@@ -495,17 +495,17 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
         data (dict)
     """

-    data = {}
-    if AVALON_TAB not in node.knobs():
-        return data
-
-    # check if lists
-    if not isinstance(prefix, list):
-        prefix = list([prefix])
+    data = dict()
+    prefix = [prefix]

     # loop prefix
     for p in prefix:
+        # check if the node is avalon tracked
+        if AVALON_TAB not in node.knobs():
+            continue
         try:
             # check if data available on the node
             test = node[AVALON_DATA_GROUP].value()

@@ -516,8 +516,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
         if create:
             node = set_avalon_knob_data(node)
             return get_avalon_knob_data(node)
-        else:
-            return {}
+        return {}

     # get data from filtered knobs
     data.update({k.replace(p, ''): node[k].value()
@@ -2,7 +2,8 @@ from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
 from openpype.hosts.nuke.api.lib import (
     INSTANCE_DATA_KNOB,
     get_node_data,
-    get_avalon_knob_data
+    get_avalon_knob_data,
+    AVALON_TAB,
 )
 from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces

@@ -17,13 +18,15 @@ class LegacyConverted(SubsetConvertorPlugin):
         legacy_found = False
         # search for first available legacy item
         for node in nuke.allNodes(recurseGroups=True):
-
             if node.Class() in ["Viewer", "Dot"]:
                 continue

             if get_node_data(node, INSTANCE_DATA_KNOB):
                 continue

+            if AVALON_TAB not in node.knobs():
+                continue
+
             # get data from avalon knob
             avalon_knob_data = get_avalon_knob_data(
                 node, ["avalon:", "ak:"], create=False)
@@ -190,7 +190,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin,

         # make sure rendered sequence on farm will
         # be used for extract review
-        if not instance.data["review"]:
+        if not instance.data.get("review"):
             instance.data["useSequenceForReview"] = False

         self.log.debug("instance.data: {}".format(pformat(instance.data)))
@@ -7,28 +7,26 @@ from openpype.pipeline import (
 from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances


-class PSWorkfileCreator(AutoCreator):
-    identifier = "workfile"
-    family = "workfile"
-
-    default_variant = "Main"
-
+class PSAutoCreator(AutoCreator):
+    """Generic autocreator to extend."""
     def get_instance_attr_defs(self):
         return []

     def collect_instances(self):
         for instance_data in cache_and_get_instances(self):
             creator_id = instance_data.get("creator_identifier")

             if creator_id == self.identifier:
-                subset_name = instance_data["subset"]
-                instance = CreatedInstance(
-                    self.family, subset_name, instance_data, self
+                instance = CreatedInstance.from_existing(
+                    instance_data, self
                 )
                 self._add_instance_to_context(instance)

     def update_instances(self, update_list):
-        # nothing to change on workfiles
-        pass
+        self.log.debug("update_list:: {}".format(update_list))
+        for created_inst, _changes in update_list:
+            api.stub().imprint(created_inst.get("instance_id"),
+                               created_inst.data_to_store())

     def create(self, options=None):
         existing_instance = None

@@ -58,6 +56,9 @@ class PSWorkfileCreator(AutoCreator):
             project_name, host_name, None
         ))

+        if not self.active_on_create:
+            data["active"] = False
+
         new_instance = CreatedInstance(
             self.family, subset_name, data, self
         )
openpype/hosts/photoshop/plugins/create/create_flatten_image.py (new file, 120 lines)
@@ -0,0 +1,120 @@
from openpype.pipeline import CreatedInstance

from openpype.lib import BoolDef
import openpype.hosts.photoshop.api as api
from openpype.hosts.photoshop.lib import PSAutoCreator
from openpype.pipeline.create import get_subset_name
from openpype.client import get_asset_by_name


class AutoImageCreator(PSAutoCreator):
    """Creates flatten image from all visible layers.

    Used in simplified publishing as auto created instance.
    Must be enabled in Setting and template for subset name provided
    """
    identifier = "auto_image"
    family = "image"

    # Settings
    default_variant = ""
    # - Mark by default instance for review
    mark_for_review = True
    active_on_create = True

    def create(self, options=None):
        existing_instance = None
        for instance in self.create_context.instances:
            if instance.creator_identifier == self.identifier:
                existing_instance = instance
                break

        context = self.create_context
        project_name = context.get_current_project_name()
        asset_name = context.get_current_asset_name()
        task_name = context.get_current_task_name()
        host_name = context.host_name
        asset_doc = get_asset_by_name(project_name, asset_name)

        if existing_instance is None:
            subset_name = get_subset_name(
                self.family, self.default_variant, task_name, asset_doc,
                project_name, host_name
            )

            publishable_ids = [layer.id for layer in api.stub().get_layers()
                               if layer.visible]
            data = {
                "asset": asset_name,
                "task": task_name,
                # ids are "virtual" layers, won't get grouped as 'members' do
                # same difference in color coded layers in WP
                "ids": publishable_ids
            }

            if not self.active_on_create:
                data["active"] = False

            creator_attributes = {"mark_for_review": self.mark_for_review}
            data.update({"creator_attributes": creator_attributes})

            new_instance = CreatedInstance(
                self.family, subset_name, data, self
            )
            self._add_instance_to_context(new_instance)
            api.stub().imprint(new_instance.get("instance_id"),
                               new_instance.data_to_store())

        elif (  # existing instance from different context
            existing_instance["asset"] != asset_name
            or existing_instance["task"] != task_name
        ):
            subset_name = get_subset_name(
                self.family, self.default_variant, task_name, asset_doc,
                project_name, host_name
            )

            existing_instance["asset"] = asset_name
            existing_instance["task"] = task_name
            existing_instance["subset"] = subset_name

            api.stub().imprint(existing_instance.get("instance_id"),
                               existing_instance.data_to_store())

    def get_pre_create_attr_defs(self):
        return [
            BoolDef(
                "mark_for_review",
                label="Review",
                default=self.mark_for_review
            )
        ]

    def get_instance_attr_defs(self):
        return [
            BoolDef(
                "mark_for_review",
                label="Review"
            )
        ]

    def apply_settings(self, project_settings, system_settings):
        plugin_settings = (
            project_settings["photoshop"]["create"]["AutoImageCreator"]
        )

        self.active_on_create = plugin_settings["active_on_create"]
        self.default_variant = plugin_settings["default_variant"]
        self.mark_for_review = plugin_settings["mark_for_review"]
        self.enabled = plugin_settings["enabled"]

    def get_detail_description(self):
        return """Creator for flatten image.

        Studio might configure simple publishing workflow. In that case
        `image` instance is automatically created which will publish flat
        image from all visible layers.

        Artist might disable this instance from publishing or from creating
        review for it though.
        """
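For context, a hedged sketch of how these creators derive a subset name with the get_subset_name helper imported above. The project, asset and task values here are hypothetical examples, and the exact result depends on the studio's subset name templates in Settings:

# Sketch only: deriving a subset name the way the creator above does.
# "demo_project", "sh010" and "compositing" are hypothetical examples.
from openpype.pipeline.create import get_subset_name
from openpype.client import get_asset_by_name

project_name = "demo_project"
asset_doc = get_asset_by_name(project_name, "sh010")
subset_name = get_subset_name(
    "image", "", "compositing", asset_doc, project_name, "photoshop"
)
print(subset_name)  # actual value depends on studio templates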
@@ -23,6 +23,11 @@ class ImageCreator(Creator):
     family = "image"
     description = "Image creator"

+    # Settings
+    default_variants = ""
+    mark_for_review = False
+    active_on_create = True
+
     def create(self, subset_name_from_ui, data, pre_create_data):
         groups_to_create = []
         top_layers_to_wrap = []

@@ -94,6 +99,12 @@ class ImageCreator(Creator):
         data.update({"layer_name": layer_name})
         data.update({"long_name": "_".join(layer_names_in_hierarchy)})

+        creator_attributes = {"mark_for_review": self.mark_for_review}
+        data.update({"creator_attributes": creator_attributes})
+
+        if not self.active_on_create:
+            data["active"] = False
+
         new_instance = CreatedInstance(self.family, subset_name, data,
                                        self)

@@ -134,11 +145,6 @@ class ImageCreator(Creator):
         self.host.remove_instance(instance)
         self._remove_instance_from_context(instance)

-    def get_default_variants(self):
-        return [
-            "Main"
-        ]
-
     def get_pre_create_attr_defs(self):
         output = [
             BoolDef("use_selection", default=True,

@@ -148,10 +154,34 @@ class ImageCreator(Creator):
                     label="Create separate instance for each selected"),
             BoolDef("use_layer_name",
                     default=False,
-                    label="Use layer name in subset")
+                    label="Use layer name in subset"),
+            BoolDef(
+                "mark_for_review",
+                label="Create separate review",
+                default=False
+            )
         ]
         return output

+    def get_instance_attr_defs(self):
+        return [
+            BoolDef(
+                "mark_for_review",
+                label="Review"
+            )
+        ]
+
+    def apply_settings(self, project_settings, system_settings):
+        plugin_settings = (
+            project_settings["photoshop"]["create"]["ImageCreator"]
+        )
+
+        self.active_on_create = plugin_settings["active_on_create"]
+        self.default_variants = plugin_settings["default_variants"]
+        self.mark_for_review = plugin_settings["mark_for_review"]
+        self.enabled = plugin_settings["enabled"]
+
     def get_detail_description(self):
         return """Creator for Image instances

@@ -180,6 +210,11 @@ class ImageCreator(Creator):
         but layer name should be used (set explicitly in UI or implicitly if
         multiple images should be created), it is added in capitalized form
         as a suffix to subset name.
+
+        Each image could have its separate review created if necessary via
+        `Create separate review` toggle.
+        But more use case is to use separate `review` instance to create review
+        from all published items.
         """

     def _handle_legacy(self, instance_data):
openpype/hosts/photoshop/plugins/create/create_review.py (new file, 28 lines)
@@ -0,0 +1,28 @@
from openpype.hosts.photoshop.lib import PSAutoCreator


class ReviewCreator(PSAutoCreator):
    """Creates review instance which might be disabled from publishing."""
    identifier = "review"
    family = "review"

    default_variant = "Main"

    def get_detail_description(self):
        return """Auto creator for review.

        Photoshop review is created from all published images or from all
        visible layers if no `image` instances got created.

        Review might be disabled by an artist (instance shouldn't be deleted as
        it will get recreated in next publish either way).
        """

    def apply_settings(self, project_settings, system_settings):
        plugin_settings = (
            project_settings["photoshop"]["create"]["ReviewCreator"]
        )

        self.default_variant = plugin_settings["default_variant"]
        self.active_on_create = plugin_settings["active_on_create"]
        self.enabled = plugin_settings["enabled"]
openpype/hosts/photoshop/plugins/create/create_workfile.py (new file, 28 lines)
@@ -0,0 +1,28 @@
from openpype.hosts.photoshop.lib import PSAutoCreator


class WorkfileCreator(PSAutoCreator):
    identifier = "workfile"
    family = "workfile"

    default_variant = "Main"

    def get_detail_description(self):
        return """Auto creator for workfile.

        It is expected that each publish will also publish its source workfile
        for safekeeping. This creator triggers automatically without need for
        an artist to remember and trigger it explicitly.

        Workfile instance could be disabled if it is not required to publish
        workfile. (Instance shouldn't be deleted though as it will be recreated
        in next publish automatically).
        """

    def apply_settings(self, project_settings, system_settings):
        plugin_settings = (
            project_settings["photoshop"]["create"]["WorkfileCreator"]
        )

        self.active_on_create = plugin_settings["active_on_create"]
        self.enabled = plugin_settings["enabled"]
openpype/hosts/photoshop/plugins/publish/collect_auto_image.py (new file, 101 lines)
@@ -0,0 +1,101 @@
import pyblish.api

from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name


class CollectAutoImage(pyblish.api.ContextPlugin):
    """Creates auto image in non artist based publishes (Webpublisher).

    'remotepublish' should be renamed to 'autopublish' or similar in the future
    """

    label = "Collect Auto Image"
    order = pyblish.api.CollectorOrder
    hosts = ["photoshop"]
    order = pyblish.api.CollectorOrder + 0.2

    targets = ["remotepublish"]

    def process(self, context):
        family = "image"
        for instance in context:
            creator_identifier = instance.data.get("creator_identifier")
            if creator_identifier and creator_identifier == "auto_image":
                self.log.debug("Auto image instance found, won't create new")
                return

        project_name = context.data["anatomyData"]["project"]["name"]
        proj_settings = context.data["project_settings"]
        task_name = context.data["anatomyData"]["task"]["name"]
        host_name = context.data["hostName"]
        asset_doc = context.data["assetEntity"]
        asset_name = asset_doc["name"]

        auto_creator = proj_settings.get(
            "photoshop", {}).get(
            "create", {}).get(
            "AutoImageCreator", {})

        if not auto_creator or not auto_creator["enabled"]:
            self.log.debug("Auto image creator disabled, won't create new")
            return

        stub = photoshop.stub()
        stored_items = stub.get_layers_metadata()
        for item in stored_items:
            if item.get("creator_identifier") == "auto_image":
                if not item.get("active"):
                    self.log.debug("Auto_image instance disabled")
                    return

        layer_items = stub.get_layers()

        publishable_ids = [layer.id for layer in layer_items
                           if layer.visible]

        # collect stored image instances
        instance_names = []
        for layer_item in layer_items:
            layer_meta_data = stub.read(layer_item, stored_items)

            # Skip layers without metadata.
            if layer_meta_data is None:
                continue

            # Skip containers.
            if "container" in layer_meta_data["id"]:
                continue

            # active might not be in legacy meta
            if layer_meta_data.get("active", True) and layer_item.visible:
                instance_names.append(layer_meta_data["subset"])

        if len(instance_names) == 0:
            variants = proj_settings.get(
                "photoshop", {}).get(
                "create", {}).get(
                "CreateImage", {}).get(
                "default_variants", [''])
            family = "image"

            variant = context.data.get("variant") or variants[0]

            subset_name = get_subset_name(
                family, variant, task_name, asset_doc,
                project_name, host_name
            )

            instance = context.create_instance(subset_name)
            instance.data["family"] = family
            instance.data["asset"] = asset_name
            instance.data["subset"] = subset_name
            instance.data["ids"] = publishable_ids
            instance.data["publish"] = True
            instance.data["creator_identifier"] = "auto_image"

            if auto_creator["mark_for_review"]:
                instance.data["creator_attributes"] = {"mark_for_review": True}
                instance.data["families"] = ["review"]

            self.log.info("auto image instance: {} ".format(instance.data))
@@ -0,0 +1,92 @@
"""
Requires:
    None

Provides:
    instance     -> family ("review")
"""
import pyblish.api

from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name


class CollectAutoReview(pyblish.api.ContextPlugin):
    """Create review instance in non artist based workflow.

    Called only if PS is triggered in Webpublisher or in tests.
    """

    label = "Collect Auto Review"
    hosts = ["photoshop"]
    order = pyblish.api.CollectorOrder + 0.2
    targets = ["remotepublish"]

    publish = True

    def process(self, context):
        family = "review"
        has_review = False
        for instance in context:
            if instance.data["family"] == family:
                self.log.debug("Review instance found, won't create new")
                has_review = True

            creator_attributes = instance.data.get("creator_attributes", {})
            if (creator_attributes.get("mark_for_review") and
                    "review" not in instance.data["families"]):
                instance.data["families"].append("review")

        if has_review:
            return

        stub = photoshop.stub()
        stored_items = stub.get_layers_metadata()
        for item in stored_items:
            if item.get("creator_identifier") == family:
                if not item.get("active"):
                    self.log.debug("Review instance disabled")
                    return

        auto_creator = context.data["project_settings"].get(
            "photoshop", {}).get(
            "create", {}).get(
            "ReviewCreator", {})

        if not auto_creator or not auto_creator["enabled"]:
            self.log.debug("Review creator disabled, won't create new")
            return

        variant = (context.data.get("variant") or
                   auto_creator["default_variant"])

        project_name = context.data["anatomyData"]["project"]["name"]
        proj_settings = context.data["project_settings"]
        task_name = context.data["anatomyData"]["task"]["name"]
        host_name = context.data["hostName"]
        asset_doc = context.data["assetEntity"]
        asset_name = asset_doc["name"]

        subset_name = get_subset_name(
            family,
            variant,
            task_name,
            asset_doc,
            project_name,
            host_name=host_name,
            project_settings=proj_settings
        )

        instance = context.create_instance(subset_name)
        instance.data.update({
            "subset": subset_name,
            "label": subset_name,
            "name": subset_name,
            "family": family,
            "families": [],
            "representations": [],
            "asset": asset_name,
            "publish": self.publish
        })

        self.log.debug("auto review created::{}".format(instance.data))
@@ -0,0 +1,99 @@
import os
import pyblish.api

from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name


class CollectAutoWorkfile(pyblish.api.ContextPlugin):
    """Collect current script for publish."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Workfile"
    hosts = ["photoshop"]

    targets = ["remotepublish"]

    def process(self, context):
        family = "workfile"
        file_path = context.data["currentFile"]
        _, ext = os.path.splitext(file_path)
        staging_dir = os.path.dirname(file_path)
        base_name = os.path.basename(file_path)
        workfile_representation = {
            "name": ext[1:],
            "ext": ext[1:],
            "files": base_name,
            "stagingDir": staging_dir,
        }

        for instance in context:
            if instance.data["family"] == family:
                self.log.debug("Workfile instance found, won't create new")
                instance.data.update({
                    "label": base_name,
                    "name": base_name,
                    "representations": [],
                })

                # creating representation
                _, ext = os.path.splitext(file_path)
                instance.data["representations"].append(
                    workfile_representation)

                return

        stub = photoshop.stub()
        stored_items = stub.get_layers_metadata()
        for item in stored_items:
            if item.get("creator_identifier") == family:
                if not item.get("active"):
                    self.log.debug("Workfile instance disabled")
                    return

        project_name = context.data["anatomyData"]["project"]["name"]
        proj_settings = context.data["project_settings"]
        auto_creator = proj_settings.get(
            "photoshop", {}).get(
            "create", {}).get(
            "WorkfileCreator", {})

        if not auto_creator or not auto_creator["enabled"]:
            self.log.debug("Workfile creator disabled, won't create new")
            return

        # context.data["variant"] might come only from collect_batch_data
        variant = (context.data.get("variant") or
                   auto_creator["default_variant"])

        task_name = context.data["anatomyData"]["task"]["name"]
        host_name = context.data["hostName"]
        asset_doc = context.data["assetEntity"]
        asset_name = asset_doc["name"]

        subset_name = get_subset_name(
            family,
            variant,
            task_name,
            asset_doc,
            project_name,
            host_name=host_name,
            project_settings=proj_settings
        )

        # Create instance
        instance = context.create_instance(subset_name)
        instance.data.update({
            "subset": subset_name,
            "label": base_name,
            "name": base_name,
            "family": family,
            "families": [],
            "representations": [],
            "asset": asset_name
        })

        # creating representation
        instance.data["representations"].append(workfile_representation)

        self.log.debug("auto workfile review created:{}".format(instance.data))
@@ -1,116 +0,0 @@
import pprint

import pyblish.api

from openpype.settings import get_project_settings
from openpype.hosts.photoshop import api as photoshop
from openpype.lib import prepare_template_data
from openpype.pipeline import legacy_io


class CollectInstances(pyblish.api.ContextPlugin):
    """Gather instances by LayerSet and file metadata

    Collects publishable instances from file metadata or enhance
    already collected by creator (family == "image").

    If no image instances are explicitly created, it looks if there is value
    in `flatten_subset_template` (configurable in Settings), in that case it
    produces flatten image with all visible layers.

    Identifier:
        id (str): "pyblish.avalon.instance"
    """

    label = "Collect Instances"
    order = pyblish.api.CollectorOrder
    hosts = ["photoshop"]
    families_mapping = {
        "image": []
    }
    # configurable in Settings
    flatten_subset_template = ""

    def process(self, context):
        instance_by_layer_id = {}
        for instance in context:
            if (
                instance.data["family"] == "image" and
                instance.data.get("members")):
                layer_id = str(instance.data["members"][0])
                instance_by_layer_id[layer_id] = instance

        stub = photoshop.stub()
        layer_items = stub.get_layers()
        layers_meta = stub.get_layers_metadata()
        instance_names = []

        all_layer_ids = []
        for layer_item in layer_items:
            layer_meta_data = stub.read(layer_item, layers_meta)
            all_layer_ids.append(layer_item.id)

            # Skip layers without metadata.
            if layer_meta_data is None:
                continue

            # Skip containers.
            if "container" in layer_meta_data["id"]:
                continue

            # active might not be in legacy meta
            if not layer_meta_data.get("active", True):
                continue

            instance = instance_by_layer_id.get(str(layer_item.id))
            if instance is None:
                instance = context.create_instance(layer_meta_data["subset"])

            instance.data["layer"] = layer_item
            instance.data.update(layer_meta_data)
            instance.data["families"] = self.families_mapping[
                layer_meta_data["family"]
            ]
            instance.data["publish"] = layer_item.visible
            instance_names.append(layer_meta_data["subset"])

            # Produce diagnostic message for any graphical
            # user interface interested in visualising it.
            self.log.info("Found: \"%s\" " % instance.data["name"])
            self.log.info("instance: {} ".format(
                pprint.pformat(instance.data, indent=4)))

        if len(instance_names) != len(set(instance_names)):
            self.log.warning("Duplicate instances found. " +
                             "Remove unwanted via Publisher")

        if len(instance_names) == 0 and self.flatten_subset_template:
            project_name = context.data["projectEntity"]["name"]
            variants = get_project_settings(project_name).get(
                "photoshop", {}).get(
                "create", {}).get(
                "CreateImage", {}).get(
                "defaults", [''])
            family = "image"
            task_name = legacy_io.Session["AVALON_TASK"]
            asset_name = context.data["assetEntity"]["name"]

            variant = context.data.get("variant") or variants[0]
            fill_pairs = {
                "variant": variant,
                "family": family,
                "task": task_name
            }

            subset = self.flatten_subset_template.format(
                **prepare_template_data(fill_pairs))

            instance = context.create_instance(subset)
            instance.data["family"] = family
            instance.data["asset"] = asset_name
            instance.data["subset"] = subset
            instance.data["ids"] = all_layer_ids
            instance.data["families"] = self.families_mapping[family]
            instance.data["publish"] = True

            self.log.info("flatten instance: {} ".format(instance.data))
@@ -14,10 +14,7 @@ from openpype.pipeline.create import get_subset_name


 class CollectReview(pyblish.api.ContextPlugin):
-    """Gather the active document as review instance.
-
-    Triggers once even if no 'image' is published as by defaults it creates
-    flatten image from a workfile.
+    """Adds review to families for instances marked to be reviewable.
     """

     label = "Collect Review"

@@ -28,25 +25,8 @@ class CollectReview(pyblish.api.ContextPlugin):
     publish = True

     def process(self, context):
-        family = "review"
-        subset = get_subset_name(
-            family,
-            context.data.get("variant", ''),
-            context.data["anatomyData"]["task"]["name"],
-            context.data["assetEntity"],
-            context.data["anatomyData"]["project"]["name"],
-            host_name=context.data["hostName"],
-            project_settings=context.data["project_settings"]
-        )
-
-        instance = context.create_instance(subset)
-        instance.data.update({
-            "subset": subset,
-            "label": subset,
-            "name": subset,
-            "family": family,
-            "families": [],
-            "representations": [],
-            "asset": os.environ["AVALON_ASSET"],
-            "publish": self.publish
-        })
+        for instance in context:
+            creator_attributes = instance.data["creator_attributes"]
+            if (creator_attributes.get("mark_for_review") and
+                    "review" not in instance.data["families"]):
+                instance.data["families"].append("review")
@@ -14,50 +14,19 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
     default_variant = "Main"

     def process(self, context):
-        existing_instance = None
         for instance in context:
             if instance.data["family"] == "workfile":
                 self.log.debug("Workfile instance found, won't create new")
-                existing_instance = instance
-                break
+                file_path = context.data["currentFile"]
+                _, ext = os.path.splitext(file_path)
+                staging_dir = os.path.dirname(file_path)
+                base_name = os.path.basename(file_path)

-        family = "workfile"
-        # context.data["variant"] might come only from collect_batch_data
-        variant = context.data.get("variant") or self.default_variant
-        subset = get_subset_name(
-            family,
-            variant,
-            context.data["anatomyData"]["task"]["name"],
-            context.data["assetEntity"],
-            context.data["anatomyData"]["project"]["name"],
-            host_name=context.data["hostName"],
-            project_settings=context.data["project_settings"]
-        )
-
-        file_path = context.data["currentFile"]
-        staging_dir = os.path.dirname(file_path)
-        base_name = os.path.basename(file_path)
-
-        # Create instance
-        if existing_instance is None:
-            instance = context.create_instance(subset)
-            instance.data.update({
-                "subset": subset,
-                "label": base_name,
-                "name": base_name,
-                "family": family,
-                "families": [],
-                "representations": [],
-                "asset": os.environ["AVALON_ASSET"]
-            })
-        else:
-            instance = existing_instance
-
-        # creating representation
-        _, ext = os.path.splitext(file_path)
-        instance.data["representations"].append({
-            "name": ext[1:],
-            "ext": ext[1:],
-            "files": base_name,
-            "stagingDir": staging_dir,
-        })
+                # creating representation
+                instance.data["representations"].append({
+                    "name": ext[1:],
+                    "ext": ext[1:],
+                    "files": base_name,
+                    "stagingDir": staging_dir,
+                })
+                return
@@ -47,32 +47,42 @@ class ExtractReview(publish.Extractor):
         layers = self._get_layers_from_image_instances(instance)
         self.log.info("Layers image instance found: {}".format(layers))

+        repre_name = "jpg"
+        repre_skeleton = {
+            "name": repre_name,
+            "ext": "jpg",
+            "stagingDir": staging_dir,
+            "tags": self.jpg_options['tags'],
+        }
+
+        if instance.data["family"] != "review":
+            # enable creation of review, without this jpg review would clash
+            # with jpg of the image family
+            output_name = repre_name
+            repre_name = "{}_{}".format(repre_name, output_name)
+            repre_skeleton.update({"name": repre_name,
+                                   "outputName": output_name})
+
         if self.make_image_sequence and len(layers) > 1:
             self.log.info("Extract layers to image sequence.")
             img_list = self._save_sequence_images(staging_dir, layers)

-            instance.data["representations"].append({
-                "name": "jpg",
-                "ext": "jpg",
-                "files": img_list,
+            repre_skeleton.update({
                 "frameStart": 0,
                 "frameEnd": len(img_list),
                 "fps": fps,
-                "stagingDir": staging_dir,
-                "tags": self.jpg_options['tags'],
+                "files": img_list,
             })
+            instance.data["representations"].append(repre_skeleton)
             processed_img_names = img_list
         else:
             self.log.info("Extract layers to flatten image.")
             img_list = self._save_flatten_image(staging_dir, layers)

-            instance.data["representations"].append({
-                "name": "jpg",
-                "ext": "jpg",
-                "files": img_list,  # cannot be [] for single frame
-                "stagingDir": staging_dir,
-                "tags": self.jpg_options['tags']
+            repre_skeleton.update({
+                "files": img_list,
             })
+            instance.data["representations"].append(repre_skeleton)
             processed_img_names = [img_list]

         ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
openpype/hosts/substancepainter/__init__.py (new file, 10 lines)
@@ -0,0 +1,10 @@
from .addon import (
    SubstanceAddon,
    SUBSTANCE_HOST_DIR,
)


__all__ = (
    "SubstanceAddon",
    "SUBSTANCE_HOST_DIR"
)
openpype/hosts/substancepainter/addon.py (new file, 34 lines)
@@ -0,0 +1,34 @@
import os
from openpype.modules import OpenPypeModule, IHostAddon

SUBSTANCE_HOST_DIR = os.path.dirname(os.path.abspath(__file__))


class SubstanceAddon(OpenPypeModule, IHostAddon):
    name = "substancepainter"
    host_name = "substancepainter"

    def initialize(self, module_settings):
        self.enabled = True

    def add_implementation_envs(self, env, _app):
        # Add requirements to SUBSTANCE_PAINTER_PLUGINS_PATH
        plugin_path = os.path.join(SUBSTANCE_HOST_DIR, "deploy")
        plugin_path = plugin_path.replace("\\", "/")
        if env.get("SUBSTANCE_PAINTER_PLUGINS_PATH"):
            plugin_path += os.pathsep + env["SUBSTANCE_PAINTER_PLUGINS_PATH"]

        env["SUBSTANCE_PAINTER_PLUGINS_PATH"] = plugin_path

        # Log in Substance Painter doesn't support custom terminal colors
        env["OPENPYPE_LOG_NO_COLORS"] = "Yes"

    def get_launch_hook_paths(self, app):
        if app.host_name != self.host_name:
            return []
        return [
            os.path.join(SUBSTANCE_HOST_DIR, "hooks")
        ]

    def get_workfile_extensions(self):
        return [".spp", ".toc"]
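A small sketch of the environment merge performed by add_implementation_envs above: OpenPype's bundled deploy folder is prepended so its plugin is found before any pre-existing plugin paths. The studio path below is a hypothetical example:

# Sketch only: how SUBSTANCE_PAINTER_PLUGINS_PATH is composed above.
# "C:/studio/sp_plugins" is a hypothetical pre-existing value.
import os

env = {"SUBSTANCE_PAINTER_PLUGINS_PATH": "C:/studio/sp_plugins"}
plugin_path = "<SUBSTANCE_HOST_DIR>/deploy"  # OpenPype's bundled plugin dir
if env.get("SUBSTANCE_PAINTER_PLUGINS_PATH"):
    plugin_path += os.pathsep + env["SUBSTANCE_PAINTER_PLUGINS_PATH"]
env["SUBSTANCE_PAINTER_PLUGINS_PATH"] = plugin_path
# On Windows this yields "<SUBSTANCE_HOST_DIR>/deploy;C:/studio/sp_plugins".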
openpype/hosts/substancepainter/api/__init__.py (new file, 8 lines)
@@ -0,0 +1,8 @@
from .pipeline import (
    SubstanceHost,
)

__all__ = [
    "SubstanceHost",
]
openpype/hosts/substancepainter/api/colorspace.py (new file, 157 lines)
@@ -0,0 +1,157 @@
"""Substance Painter OCIO management
|
||||
|
||||
Adobe Substance 3D Painter supports OCIO color management using a per project
|
||||
configuration. Output color spaces are defined at the project level
|
||||
|
||||
More information see:
|
||||
- https://substance3d.adobe.com/documentation/spdoc/color-management-223053233.html # noqa
|
||||
- https://substance3d.adobe.com/documentation/spdoc/color-management-with-opencolorio-225969419.html # noqa
|
||||
|
||||
"""
|
||||
import substance_painter.export
|
||||
import substance_painter.js
|
||||
import json
|
||||
|
||||
from .lib import (
|
||||
get_document_structure,
|
||||
get_channel_format
|
||||
)
|
||||
|
||||
|
||||
def _iter_document_stack_channels():
|
||||
"""Yield all stack paths and channels project"""
|
||||
|
||||
for material in get_document_structure()["materials"]:
|
||||
material_name = material["name"]
|
||||
for stack in material["stacks"]:
|
||||
stack_name = stack["name"]
|
||||
if stack_name:
|
||||
stack_path = [material_name, stack_name]
|
||||
else:
|
||||
stack_path = material_name
|
||||
for channel in stack["channels"]:
|
||||
yield stack_path, channel
|
||||
|
||||
|
||||
def _get_first_color_and_data_stack_and_channel():
|
||||
"""Return first found color channel and data channel."""
|
||||
color_channel = None
|
||||
data_channel = None
|
||||
for stack_path, channel in _iter_document_stack_channels():
|
||||
channel_format = get_channel_format(stack_path, channel)
|
||||
if channel_format["color"]:
|
||||
color_channel = (stack_path, channel)
|
||||
else:
|
||||
data_channel = (stack_path, channel)
|
||||
|
||||
if color_channel and data_channel:
|
||||
return color_channel, data_channel
|
||||
|
||||
return color_channel, data_channel
|
||||
|
||||
|
||||
def get_project_channel_data():
|
||||
"""Return colorSpace settings for the current substance painter project.
|
||||
|
||||
In Substance Painter only color channels have Color Management enabled
|
||||
whereas data channels have no color management applied. This can't be
|
||||
changed. The artist can only customize the export color space for color
|
||||
channels per bit-depth for 8 bpc, 16 bpc and 32 bpc.
|
||||
|
||||
As such this returns the color space for 'data' and for per bit-depth
|
||||
for color channels.
|
||||
|
||||
Example output:
|
||||
{
|
||||
"data": {'colorSpace': 'Utility - Raw'},
|
||||
"8": {"colorSpace": "ACES - AcesCG"},
|
||||
"16": {"colorSpace": "ACES - AcesCG"},
|
||||
"16f": {"colorSpace": "ACES - AcesCG"},
|
||||
"32f": {"colorSpace": "ACES - AcesCG"}
|
||||
}
|
||||
|
||||
"""
|
||||
|
||||
keys = ["colorSpace"]
|
||||
query = {key: f"${key}" for key in keys}
|
||||
|
||||
config = {
|
||||
"exportPath": "/",
|
||||
"exportShaderParams": False,
|
||||
"defaultExportPreset": "query_preset",
|
||||
|
||||
"exportPresets": [{
|
||||
"name": "query_preset",
|
||||
|
||||
# List of maps making up this export preset.
|
||||
"maps": [{
|
||||
"fileName": json.dumps(query),
|
||||
# List of source/destination defining which channels will
|
||||
# make up the texture file.
|
||||
"channels": [],
|
||||
"parameters": {
|
||||
"fileFormat": "exr",
|
||||
"bitDepth": "32f",
|
||||
"dithering": False,
|
||||
"sizeLog2": 4,
|
||||
"paddingAlgorithm": "passthrough",
|
||||
"dilationDistance": 16
|
||||
}
|
||||
}]
|
||||
}],
|
||||
}
|
||||
|
||||
def _get_query_output(config):
|
||||
# Return the basename of the single output path we defined
|
||||
result = substance_painter.export.list_project_textures(config)
|
||||
path = next(iter(result.values()))[0]
|
||||
# strip extension and slash since we know relevant json data starts
|
||||
# and ends with { and } characters
|
||||
path = path.strip("/\\.exr")
|
||||
return json.loads(path)
|
||||
|
||||
# Query for each type of channel (color and data)
|
||||
color_channel, data_channel = _get_first_color_and_data_stack_and_channel()
|
||||
colorspaces = {}
|
||||
for key, channel_data in {
|
||||
"data": data_channel,
|
||||
"color": color_channel
|
||||
}.items():
|
||||
if channel_data is None:
|
||||
# No channel of that datatype anywhere in the Stack. We're
|
||||
# unable to identify the output color space of the project
|
||||
colorspaces[key] = None
|
||||
continue
|
||||
|
||||
stack, channel = channel_data
|
||||
|
||||
# Stack must be a string
|
||||
if not isinstance(stack, str):
|
||||
# Assume iterable
|
||||
stack = "/".join(stack)
|
||||
|
||||
# Define the temp output config
|
||||
config["exportList"] = [{"rootPath": stack}]
|
||||
config_map = config["exportPresets"][0]["maps"][0]
|
||||
config_map["channels"] = [
|
||||
{
|
||||
"destChannel": x,
|
||||
"srcChannel": x,
|
||||
"srcMapType": "documentMap",
|
||||
"srcMapName": channel
|
||||
} for x in "RGB"
|
||||
]
|
||||
|
||||
if key == "color":
|
||||
# Query for each bit depth
|
||||
# Color space definition can have a different OCIO config set
|
||||
# for 8-bit, 16-bit and 32-bit outputs so we need to check each
|
||||
# bit depth
|
||||
for depth in ["8", "16", "16f", "32f"]:
|
||||
config_map["parameters"]["bitDepth"] = depth # noqa
|
||||
colorspaces[key + depth] = _get_query_output(config)
|
||||
else:
|
||||
# Data channel (not color managed)
|
||||
colorspaces[key] = _get_query_output(config)
|
||||
|
||||
return colorspaces
|
||||
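A hedged usage sketch of the query above, assuming it runs inside Substance Painter with a project open. The commented values are examples only, and the color keys follow the colorspaces[key + depth] scheme from the code:

# Sketch only: querying the project's export color spaces as above.
# Assumes an open project inside a Substance Painter session.
from openpype.hosts.substancepainter.api.colorspace import (
    get_project_channel_data,
)

spaces = get_project_channel_data()
# e.g. spaces["data"]   -> {"colorSpace": "Utility - Raw"}
#      spaces["color8"] -> {"colorSpace": "ACES - ACEScg"}
print(spaces)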
openpype/hosts/substancepainter/api/lib.py (new file, 649 lines; truncated below)
@@ -0,0 +1,649 @@
import os
import re
import json
from collections import defaultdict

import substance_painter.project
import substance_painter.resource
import substance_painter.js
import substance_painter.export
import substance_painter.ui  # used by _get_new_project_action

from qtpy import QtGui, QtWidgets, QtCore


def get_export_presets():
    """Return Export Preset resource URLs for all available Export Presets.

    Returns:
        dict: {Resource url: GUI Label}

    """
    # TODO: Find a more optimal way to find all export templates

    preset_resources = {}
    for shelf in substance_painter.resource.Shelves.all():
        shelf_path = os.path.normpath(shelf.path())

        presets_path = os.path.join(shelf_path, "export-presets")
        if not os.path.exists(presets_path):
            continue

        for filename in os.listdir(presets_path):
            if filename.endswith(".spexp"):
                template_name = os.path.splitext(filename)[0]

                resource = substance_painter.resource.ResourceID(
                    context=shelf.name(),
                    name=template_name
                )
                resource_url = resource.url()

                preset_resources[resource_url] = template_name

    # Sort by template name
    export_templates = dict(sorted(preset_resources.items(),
                                   key=lambda x: x[1]))

    # Add default built-ins at the start
    # TODO: find the built-ins automatically; scraped with https://gist.github.com/BigRoy/97150c7c6f0a0c916418207b9a2bc8f1  # noqa
    result = {
        "export-preset-generator://viewport2d": "2D View",  # noqa
        "export-preset-generator://doc-channel-normal-no-alpha": "Document channels + Normal + AO (No Alpha)",  # noqa
        "export-preset-generator://doc-channel-normal-with-alpha": "Document channels + Normal + AO (With Alpha)",  # noqa
        "export-preset-generator://sketchfab": "Sketchfab",  # noqa
        "export-preset-generator://adobe-standard-material": "Substance 3D Stager",  # noqa
        "export-preset-generator://usd": "USD PBR Metal Roughness",  # noqa
        "export-preset-generator://gltf": "glTF PBR Metal Roughness",  # noqa
        "export-preset-generator://gltf-displacement": "glTF PBR Metal Roughness + Displacement texture (experimental)"  # noqa
    }
    result.update(export_templates)
    return result

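For illustration, a small sketch of how the returned mapping might be consumed, e.g. resolving a preset's resource URL from its GUI label (the label used here is just an example):

# Sketch: look up a preset resource URL by its GUI label.
presets = get_export_presets()  # {resource_url: label}
url_by_label = {label: url for url, label in presets.items()}
print(url_by_label.get("USD PBR Metal Roughness"))
# e.g. "export-preset-generator://usd"
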
def _convert_stack_path_to_cmd_str(stack_path):
    """Convert stack path `str` or `[str, str]` for a javascript query

    Example usage:
        >>> stack_path = _convert_stack_path_to_cmd_str(stack_path)
        >>> cmd = f"alg.mapexport.channelIdentifiers({stack_path})"
        >>> substance_painter.js.evaluate(cmd)

    Args:
        stack_path (list or str): Path to the stack, could be
            "Texture set name" or ["Texture set name", "Stack name"]

    Returns:
        str: Stack path usable as argument in a javascript query.

    """
    return json.dumps(stack_path)


def get_channel_identifiers(stack_path=None):
    """Return the list of channel identifiers.

    If a context is passed (texture set/stack),
    return only used channels with resolved user channels.

    Channel identifiers are:
        basecolor, height, specular, opacity, emissive, displacement,
        glossiness, roughness, anisotropylevel, anisotropyangle, transmissive,
        scattering, reflection, ior, metallic, normal, ambientOcclusion,
        diffuse, specularlevel, blendingmask, [custom user names].

    Args:
        stack_path (list or str, Optional): Path to the stack, could be
            "Texture set name" or ["Texture set name", "Stack name"]

    Returns:
        list: List of channel identifiers.

    """
    if stack_path is None:
        stack_path = ""
    else:
        stack_path = _convert_stack_path_to_cmd_str(stack_path)
    cmd = f"alg.mapexport.channelIdentifiers({stack_path})"
    return substance_painter.js.evaluate(cmd)


def get_channel_format(stack_path, channel):
    """Retrieve the channel format of a specific stack channel.

    See `alg.mapexport.channelFormat` (javascript API) for more details.

    The channel format data is:
        "label" (str): The channel format label: could be one of
            [sRGB8, L8, RGB8, L16, RGB16, L16F, RGB16F, L32F, RGB32F]
        "color" (bool): True if the format is in color, False if grayscale
        "floating" (bool): True if the format uses floating point
            representation, False otherwise
        "bitDepth" (int): Bits per color channel (could be 8, 16 or 32 bpc)

    Arguments:
        stack_path (list or str): Path to the stack, could be
            "Texture set name" or ["Texture set name", "Stack name"]
        channel (str): Identifier of the channel to export
            (see `get_channel_identifiers`)

    Returns:
        dict: The channel format data.

    """
    stack_path = _convert_stack_path_to_cmd_str(stack_path)
    cmd = f"alg.mapexport.channelFormat({stack_path}, '{channel}')"
    return substance_painter.js.evaluate(cmd)

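Together these two helpers allow walking a stack's channels and inspecting their formats. A brief sketch (the texture set name is illustrative, and this must run inside a Substance Painter session):

# Sketch: list the channels of a stack and print their formats.
stack = "DefaultMaterial"  # example texture set name
for channel in get_channel_identifiers(stack):
    fmt = get_channel_format(stack, channel)
    # fmt is e.g. {"label": "RGB8", "color": True,
    #              "floating": False, "bitDepth": 8}
    print(channel, fmt["label"], fmt["bitDepth"])
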
def get_document_structure():
    """Dump the document structure.

    See `alg.mapexport.documentStructure` (javascript API) for more details.

    Returns:
        dict: Document structure or None when no project is open

    """
    return substance_painter.js.evaluate("alg.mapexport.documentStructure()")

def get_export_templates(config, format="png", strip_folder=True):
    """Return export config outputs.

    This uses the Javascript API `alg.mapexport.getPathsExportDocumentMaps`
    which returns a different output than using the Python equivalent
    `substance_painter.export.list_project_textures(config)`.

    The nice thing about the Javascript API version is that it returns the
    output textures grouped by filename template.

    A downside is that it doesn't return all the UDIM tiles but per template
    always returns a single file.

    Note:
        The file format needs to be explicitly passed to the Javascript API
        but upon exporting through the Python API the file format can be based
        on the output preset. So it's likely the file extension will mismatch.

    Warning:
        Even though the function appears to solely get the expected outputs,
        the Javascript API will actually create the config's texture output
        folder if it does not exist yet. As such, a valid path must be set.

    Example output:
        {
            "DefaultMaterial": {
                "$textureSet_BaseColor(_$colorSpace)(.$udim)": "DefaultMaterial_BaseColor_ACES - ACEScg.1002.png",  # noqa
                "$textureSet_Emissive(_$colorSpace)(.$udim)": "DefaultMaterial_Emissive_ACES - ACEScg.1002.png",  # noqa
                "$textureSet_Height(_$colorSpace)(.$udim)": "DefaultMaterial_Height_Utility - Raw.1002.png",  # noqa
                "$textureSet_Metallic(_$colorSpace)(.$udim)": "DefaultMaterial_Metallic_Utility - Raw.1002.png",  # noqa
                "$textureSet_Normal(_$colorSpace)(.$udim)": "DefaultMaterial_Normal_Utility - Raw.1002.png",  # noqa
                "$textureSet_Roughness(_$colorSpace)(.$udim)": "DefaultMaterial_Roughness_Utility - Raw.1002.png"  # noqa
            }
        }

    Arguments:
        config (dict): Export config
        format (str, Optional): Output format to write to, defaults to 'png'
        strip_folder (bool, Optional): Whether to strip the output folder
            from the output filenames.

    Returns:
        dict: The expected output maps.

    """
    folder = config["exportPath"].replace("\\", "/")
    preset = config["defaultExportPreset"]
    cmd = f'alg.mapexport.getPathsExportDocumentMaps("{preset}", "{folder}", "{format}")'  # noqa
    result = substance_painter.js.evaluate(cmd)

    if strip_folder:
        for _stack, maps in result.items():
            for map_template, map_filepath in maps.items():
                map_filepath = map_filepath.replace("\\", "/")
                assert map_filepath.startswith(folder)
                map_filename = map_filepath[len(folder):].lstrip("/")
                maps[map_template] = map_filename

    return result

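A usage sketch, assuming `config` is a valid Python export config whose `exportPath` points at an existing (or creatable) folder, per the warning above:

# Sketch: inspect the filename templates per stack for an export config.
templates = get_export_templates(config, format="png")
for stack, maps in templates.items():
    for template, example_filename in maps.items():
        print(stack, template, "->", example_filename)
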
def _templates_to_regex(templates,
                        texture_set,
                        colorspaces,
                        project,
                        mesh):
    """Return regexes based on Substance Painter export filename templates.

    This converts Substance Painter export filename templates like
    `$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)` into a regex
    which can be used to query an output filename to help retrieve:

    - Which template filename the file belongs to.
    - Which color space the file is written with.
    - Which udim tile it is exactly.

    This is used by `get_parsed_export_maps` which tries to match the filename
    pattern as explicitly as possible against the known possible outputs.
    That's why Texture Set name, Color spaces, Project path and mesh path must
    be provided. By doing so we get the best shot at correctly matching the
    right template because otherwise $textureSet could basically be any string
    and thus match even that of a color space or mesh.

    Arguments:
        templates (list): List of templates to convert to regex.
        texture_set (str): The texture set to match against.
        colorspaces (list): The colorspaces defined in the current project.
        project (str): Filepath of current substance project.
        mesh (str): Path to mesh file used in current project.

    Returns:
        dict: Template: Template regex pattern

    """
    def _filename_no_ext(path):
        return os.path.splitext(os.path.basename(path))[0]

    if colorspaces and any(colorspaces):
        colorspace_match = "|".join(re.escape(c) for c in set(colorspaces))
        colorspace_match = f"({colorspace_match})"
    else:
        # No colorspace support enabled
        colorspace_match = ""

    # Key to regex valid search values
    key_matches = {
        "$project": re.escape(_filename_no_ext(project)),
        "$mesh": re.escape(_filename_no_ext(mesh)),
        "$textureSet": re.escape(texture_set),
        "$colorSpace": colorspace_match,
        "$udim": "([0-9]{4})"
    }

    # Turn the templates into regexes
    regexes = {}
    for template in templates:

        # Use the escaped template as the basis of the search regex
        search_regex = re.escape(template)

        # Let's assume that any ( and ) character in the file template was
        # intended as an optional template key and do a simple `str.replace`.
        # Note: we are matching against re.escape(template) so will need to
        # search for the escaped brackets.
        search_regex = search_regex.replace(re.escape("("), "(")
        search_regex = search_regex.replace(re.escape(")"), ")?")

        # Substitute each key into a named group
        for key, key_expected_regex in key_matches.items():

            # We want to use the template as a regex basis in the end so we
            # escaped the whole thing first. Note that thus we'll need to
            # search for the escaped versions of the keys too.
            escaped_key = re.escape(key)
            key_label = key[1:]  # key without $ prefix

            key_expected_grp_regex = f"(?P<{key_label}>{key_expected_regex})"
            search_regex = search_regex.replace(escaped_key,
                                                key_expected_grp_regex)

        # The filename templates don't include the extension so we add it
        # to be able to match the output filename beginning to end
        ext_regex = r"(?P<ext>\.[A-Za-z][A-Za-z0-9-]*)"
        search_regex = rf"^{search_regex}{ext_regex}$"

        regexes[template] = search_regex

    return regexes

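To make the conversion concrete, a small worked example (all values are illustrative):

# Sketch: convert one template and parse a matching output filename.
regexes = _templates_to_regex(
    ["$textureSet_BaseColor(.$udim)"],
    texture_set="DefaultMaterial",
    colorspaces=set(),
    project="/path/to/project.spp",
    mesh="/path/to/mesh.fbx",
)
pattern = re.compile(regexes["$textureSet_BaseColor(.$udim)"])
match = pattern.match("DefaultMaterial_BaseColor.1001.png")
print(match.groupdict())
# {'textureSet': 'DefaultMaterial', 'udim': '1001', 'ext': '.png'}
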
def strip_template(template, strip="._ "):
    """Return the static characters in a substance painter filename template.

    >>> strip_template("$textureSet_HELLO(.$udim)")
    # HELLO
    >>> strip_template("$mesh_$textureSet_HELLO_WORLD_$colorSpace(.$udim)")
    # HELLO_WORLD
    >>> strip_template("$textureSet_HELLO(.$udim)", strip=None)
    # _HELLO
    >>> strip_template("$mesh_$textureSet_$colorSpace(.$udim)", strip=None)
    # __

    Arguments:
        template (str): Filename template to strip.
        strip (str, optional): Characters to strip from beginning and end
            of the static string in the template. Defaults to: `._ `.

    Returns:
        str: The static string in the filename template.

    """
    # Return only characters of the template that were static.
    # Remove all keys
    keys = ["$project", "$mesh", "$textureSet", "$udim", "$colorSpace"]
    stripped_template = template
    for key in keys:
        stripped_template = stripped_template.replace(key, "")

    # Everything inside an optional bracket space is excluded since it's not
    # static. We keep a counter to track whether we are currently iterating
    # over parts of the template that are inside an 'optional' group or not.
    counter = 0
    result = ""
    for char in stripped_template:
        if char == "(":
            counter += 1
        elif char == ")":
            counter -= 1
            if counter < 0:
                counter = 0
        else:
            if counter == 0:
                result += char

    if strip:
        # Strip off any leading/trailing characters. Technically these are
        # static but usually start and end separators like space or underscore
        # aren't wanted.
        result = result.strip(strip)

    return result

def get_parsed_export_maps(config):
    """Return Export Config's expected output textures with parsed data.

    This tries to parse the texture outputs using a Python API export config.

    Parses template keys: $project, $mesh, $textureSet, $colorSpace, $udim

    Example:
        {("DefaultMaterial", ""): {
            "$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)": [
                {
                    # OUTPUT DATA FOR FILE #1 OF THE TEMPLATE
                },
                {
                    # OUTPUT DATA FOR FILE #2 OF THE TEMPLATE
                },
            ]
        }}

    File output data (all outputs are `str`).
    1) Parsed tokens: These are parsed tokens from the template, they will
       only exist if found in the filename template and output filename.

        project: Workfile filename without extension
        mesh: Filename of the loaded mesh without extension
        textureSet: The texture set, e.g. "DefaultMaterial"
        colorSpace: The color space, e.g. "ACES - ACEScg"
        udim: The udim tile, e.g. "1001"

    2) Template output and filepath

        filepath: Full path to the resulting texture map, e.g.
            "/path/to/mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png"
        output: "mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png"
            Note: if the template had slashes (folders) then `output` will
            too, so `output` might include a folder.

    Returns:
        dict: {(texture_set, stack): {template: [file1_data, file2_data]}}

    """
    # Import is here to avoid recursive lib <-> colorspace imports
    from .colorspace import get_project_channel_data

    outputs = substance_painter.export.list_project_textures(config)
    templates = get_export_templates(config, strip_folder=False)

    # Get all color spaces set for the current project
    project_colorspaces = set(
        data["colorSpace"] for data in get_project_channel_data().values()
    )

    # Get current project mesh path and project path to explicitly match
    # the $mesh and $project tokens
    project_mesh_path = substance_painter.project.last_imported_mesh_path()
    project_path = substance_painter.project.file_path()

    # Get the current export path to strip this off the beginning of filepath
    # results; since filename templates don't have these we'll match without
    # that part of the filename.
    export_path = config["exportPath"]
    export_path = export_path.replace("\\", "/")
    if not export_path.endswith("/"):
        export_path += "/"

    # Parse the outputs
    result = {}
    for key, filepaths in outputs.items():
        texture_set, stack = key

        if stack:
            stack_path = f"{texture_set}/{stack}"
        else:
            stack_path = texture_set

        stack_templates = list(templates[stack_path].keys())

        template_regex = _templates_to_regex(stack_templates,
                                             texture_set=texture_set,
                                             colorspaces=project_colorspaces,
                                             mesh=project_mesh_path,
                                             project=project_path)

        # Let's precompile the regexes
        for template, regex in template_regex.items():
            template_regex[template] = re.compile(regex)

        stack_results = defaultdict(list)
        for filepath in sorted(filepaths):
            # We strip explicitly using the full parent export path instead of
            # using `os.path.basename` because the export template is allowed
            # to have subfolders which we want to match against
            filepath = filepath.replace("\\", "/")
            assert filepath.startswith(export_path), (
                f"Filepath {filepath} must start with folder {export_path}"
            )
            filename = filepath[len(export_path):]

            for template, regex in template_regex.items():
                match = regex.match(filename)
                if match:
                    parsed = match.groupdict(default={})

                    # Include some special outputs for convenience
                    parsed["filepath"] = filepath
                    parsed["output"] = filename

                    stack_results[template].append(parsed)
                    break
            else:
                raise ValueError(f"Unable to match {filename} against any "
                                 f"template in: {list(template_regex.keys())}")

        result[key] = dict(stack_results)

    return result

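A usage sketch, assuming a project is open and `config` is a valid Python export config:

# Sketch: iterate the parsed outputs per texture set, stack and template.
parsed = get_parsed_export_maps(config)
for (texture_set, stack), stack_templates in parsed.items():
    for template, files in stack_templates.items():
        for data in files:
            print(template, data["output"], data.get("udim"))
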
def load_shelf(path, name=None):
    """Add shelf to substance painter (for current application session)

    This will dynamically add a Shelf for the current session. It's good
    to note however that these will *not* persist on restart of the host.

    Note:
        Consider the loaded shelf a static library of resources.

        The shelf will *not* be visible in application preferences in
        Edit > Settings > Libraries.

        The shelf will *not* show in the Assets browser if it has no existing
        assets.

        The shelf will *not* be a selectable option for selecting it as a
        destination to import resources to.

    """

    # Ensure expanded path with forward slashes
    path = os.path.expandvars(path)
    path = os.path.abspath(path)
    path = path.replace("\\", "/")

    # Path must exist
    if not os.path.isdir(path):
        raise ValueError(f"Path is not an existing folder: {path}")

    # This name must be unique and must only contain lowercase letters,
    # numbers, underscores or hyphens.
    if name is None:
        name = os.path.basename(path)

    name = name.lower()
    name = re.sub(r"[^a-z0-9_\-]", "_", name)  # sanitize to underscores

    if substance_painter.resource.Shelves.exists(name):
        shelf = next(
            shelf for shelf in substance_painter.resource.Shelves.all()
            if shelf.name() == name
        )
        if os.path.normpath(shelf.path()) != os.path.normpath(path):
            raise ValueError(f"Shelf with name '{name}' already exists "
                             f"for a different path: '{shelf.path()}'")

        return

    print(f"Adding Shelf '{name}' to path: {path}")
    substance_painter.resource.Shelves.add(name, path)

    return name

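A usage sketch (the path here is illustrative):

# Sketch: register a studio shelf for the current session only.
shelf_name = load_shelf("/studio/shelves/substance", name="Studio Library")
# The name is sanitized, e.g. "Studio Library" -> "studio_library".
# Returns None when a shelf with that name and path already exists.
print(shelf_name)
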
def _get_new_project_action():
    """Return QAction which triggers Substance Painter's new project dialog"""

    main_window = substance_painter.ui.get_main_window()

    # Find the file menu's New file action
    menubar = main_window.menuBar()
    new_action = None
    for action in menubar.actions():
        menu = action.menu()
        if not menu:
            continue

        if menu.objectName() != "file":
            continue

        # Find the action with the CTRL+N key sequence
        new_action = next(action for action in menu.actions()
                          if action.shortcut() == QtGui.QKeySequence.New)
        break

    return new_action

def prompt_new_file_with_mesh(mesh_filepath):
    """Prompt the user for a new file using Substance Painter's own dialog.

    This will set the mesh path to load to the given mesh and disables the
    dialog box to disallow the user to change the path. This way we can allow
    user configuration of a project but set the mesh path ourselves.

    Warning:
        This is very hacky and experimental.

    Note:
        If a project is currently open using the same mesh filepath we can't
        accurately detect whether the user actually accepted the new project
        dialog or whether the project afterwards is still the original
        project, for example when the user cancelled the operation.

    """

    app = QtWidgets.QApplication.instance()
    assert os.path.isfile(mesh_filepath), \
        f"Mesh filepath does not exist: {mesh_filepath}"

    def _setup_file_dialog():
        """Set filepath in QFileDialog and trigger accept result"""
        file_dialog = app.activeModalWidget()
        assert isinstance(file_dialog, QtWidgets.QFileDialog)

        # Quickly hide the dialog
        file_dialog.hide()
        app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000)

        file_dialog.setDirectory(os.path.dirname(mesh_filepath))
        url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath))
        file_dialog.selectUrl(url)

        # Give the explorer window time to refresh to the folder and select
        # the file
        while not file_dialog.selectedFiles():
            app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000)
        print(f"Selected: {file_dialog.selectedFiles()}")

        # Set it again now we know the path is refreshed - without this
        # accepting the dialog will often not trigger the correct filepath
        file_dialog.setDirectory(os.path.dirname(mesh_filepath))
        url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath))
        file_dialog.selectUrl(url)

        file_dialog.done(file_dialog.Accepted)
        app.processEvents(QtCore.QEventLoop.AllEvents)

    def _setup_prompt():
        app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents)
        dialog = app.activeModalWidget()
        assert dialog.objectName() == "NewProjectDialog"

        # Set the window title
        mesh = os.path.basename(mesh_filepath)
        dialog.setWindowTitle(f"New Project with mesh: {mesh}")

        # Get the select mesh file button
        mesh_select = dialog.findChild(QtWidgets.QPushButton, "meshSelect")

        # Hide the select mesh button from the user to block changing the mesh
        mesh_select.setVisible(False)

        # Ensure UI is visually up-to-date
        app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents)

        # Trigger the 'select file' dialog to set the path and have the
        # new file dialog use the path.
        QtCore.QTimer.singleShot(10, _setup_file_dialog)
        mesh_select.click()

        app.processEvents(QtCore.QEventLoop.AllEvents, 5000)

        mesh_filename = dialog.findChild(QtWidgets.QFrame, "meshFileName")
        mesh_filename_label = mesh_filename.findChild(QtWidgets.QLabel)
        if not mesh_filename_label.text():
            dialog.close()
            raise RuntimeError(f"Failed to set mesh path: {mesh_filepath}")

    new_action = _get_new_project_action()
    if not new_action:
        raise RuntimeError("Unable to detect new file action..")

    QtCore.QTimer.singleShot(0, _setup_prompt)
    new_action.trigger()
    app.processEvents(QtCore.QEventLoop.AllEvents, 5000)

    if not substance_painter.project.is_open():
        return

    # Confirm mesh was set as expected
    project_mesh = substance_painter.project.last_imported_mesh_path()
    if os.path.normpath(project_mesh) != os.path.normpath(mesh_filepath):
        return

    return project_mesh
427
openpype/hosts/substancepainter/api/pipeline.py
Normal file

@ -0,0 +1,427 @@
# -*- coding: utf-8 -*-
"""Pipeline tools for OpenPype Substance Painter integration."""
import os
import logging
from functools import partial

# Substance 3D Painter modules
import substance_painter.ui
import substance_painter.event
import substance_painter.project
import substance_painter.resource  # used by _uninstall_shelves

import pyblish.api

from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost
from openpype.settings import (
    get_current_project_settings,
    get_system_settings
)

from openpype.pipeline.template_data import get_template_data_with_names
from openpype.pipeline import (
    register_creator_plugin_path,
    register_loader_plugin_path,
    AVALON_CONTAINER_ID,
    Anatomy
)
from openpype.lib import (
    StringTemplate,
    register_event_callback,
    emit_event,
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.substancepainter import SUBSTANCE_HOST_DIR

from . import lib

log = logging.getLogger("openpype.hosts.substance")

PLUGINS_DIR = os.path.join(SUBSTANCE_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

OPENPYPE_METADATA_KEY = "OpenPype"
OPENPYPE_METADATA_CONTAINERS_KEY = "containers"  # child key
OPENPYPE_METADATA_CONTEXT_KEY = "context"  # child key
OPENPYPE_METADATA_INSTANCES_KEY = "instances"  # child key


class SubstanceHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
    name = "substancepainter"

    def __init__(self):
        super(SubstanceHost, self).__init__()
        self._has_been_setup = False
        self.menu = None
        self.callbacks = []
        self.shelves = []

    def install(self):
        pyblish.api.register_host("substancepainter")

        pyblish.api.register_plugin_path(PUBLISH_PATH)
        register_loader_plugin_path(LOAD_PATH)
        register_creator_plugin_path(CREATE_PATH)

        log.info("Installing callbacks ... ")
        # register_event_callback("init", on_init)
        self._register_callbacks()
        # register_event_callback("before.save", before_save)
        # register_event_callback("save", on_save)
        register_event_callback("open", on_open)
        # register_event_callback("new", on_new)

        log.info("Installing menu ... ")
        self._install_menu()

        project_settings = get_current_project_settings()
        self._install_shelves(project_settings)

        self._has_been_setup = True

    def uninstall(self):
        self._uninstall_shelves()
        self._uninstall_menu()
        self._deregister_callbacks()

    def has_unsaved_changes(self):

        if not substance_painter.project.is_open():
            return False

        return substance_painter.project.needs_saving()

    def get_workfile_extensions(self):
        return [".spp", ".toc"]

    def save_workfile(self, dst_path=None):

        if not substance_painter.project.is_open():
            return False

        if not dst_path:
            dst_path = self.get_current_workfile()

        full_save_mode = substance_painter.project.ProjectSaveMode.Full
        substance_painter.project.save_as(dst_path, full_save_mode)

        return dst_path

    def open_workfile(self, filepath):

        if not os.path.exists(filepath):
            raise RuntimeError("File does not exist: {}".format(filepath))

        # We must first explicitly close the current project before opening
        # another
        if substance_painter.project.is_open():
            substance_painter.project.close()

        substance_painter.project.open(filepath)
        return filepath

    def get_current_workfile(self):
        if not substance_painter.project.is_open():
            return None

        filepath = substance_painter.project.file_path()
        if filepath and filepath.endswith(".spt"):
            # When currently in a Substance Painter template assume our
            # scene isn't saved. This can be the case directly after doing
            # "New project", the path will then be the template used. This
            # avoids the Workfiles tool trying to save as .spt extension if
            # the file hasn't been saved before.
            return

        return filepath

    def get_containers(self):

        if not substance_painter.project.is_open():
            return

        metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
        containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY)
        if containers:
            for key, container in containers.items():
                container["objectName"] = key
                yield container

    def update_context_data(self, data, changes):

        if not substance_painter.project.is_open():
            return

        metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
        metadata.set(OPENPYPE_METADATA_CONTEXT_KEY, data)

    def get_context_data(self):

        if not substance_painter.project.is_open():
            return

        metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
        return metadata.get(OPENPYPE_METADATA_CONTEXT_KEY) or {}

    def _install_menu(self):
        from PySide2 import QtWidgets
        from openpype.tools.utils import host_tools

        parent = substance_painter.ui.get_main_window()

        menu = QtWidgets.QMenu("OpenPype")

        action = menu.addAction("Create...")
        action.triggered.connect(
            lambda: host_tools.show_publisher(parent=parent,
                                              tab="create")
        )

        action = menu.addAction("Load...")
        action.triggered.connect(
            lambda: host_tools.show_loader(parent=parent, use_context=True)
        )

        action = menu.addAction("Publish...")
        action.triggered.connect(
            lambda: host_tools.show_publisher(parent=parent,
                                              tab="publish")
        )

        action = menu.addAction("Manage...")
        action.triggered.connect(
            lambda: host_tools.show_scene_inventory(parent=parent)
        )

        action = menu.addAction("Library...")
        action.triggered.connect(
            lambda: host_tools.show_library_loader(parent=parent)
        )

        menu.addSeparator()
        action = menu.addAction("Work Files...")
        action.triggered.connect(
            lambda: host_tools.show_workfiles(parent=parent)
        )

        substance_painter.ui.add_menu(menu)

        def on_menu_destroyed():
            self.menu = None

        menu.destroyed.connect(on_menu_destroyed)

        self.menu = menu

    def _uninstall_menu(self):
        if self.menu:
            self.menu.destroy()
            self.menu = None

    def _register_callbacks(self):
        # Prepare emit event callbacks
        open_callback = partial(emit_event, "open")

        # Connect to the Substance Painter events
        dispatcher = substance_painter.event.DISPATCHER
        for event, callback in [
            (substance_painter.event.ProjectOpened, open_callback)
        ]:
            dispatcher.connect(event, callback)
            # Keep a reference so we can deregister if needed
            self.callbacks.append((event, callback))

    def _deregister_callbacks(self):
        for event, callback in self.callbacks:
            substance_painter.event.DISPATCHER.disconnect(event, callback)
        self.callbacks.clear()

    def _install_shelves(self, project_settings):

        shelves = project_settings["substancepainter"].get("shelves", {})
        if not shelves:
            return

        # Prepare formatting data if we detect any path which might have
        # template tokens like {asset} in there.
        formatting_data = {}
        has_formatting_entries = any("{" in path for path in shelves.values())
        if has_formatting_entries:
            project_name = self.get_current_project_name()
            asset_name = self.get_current_asset_name()
            task_name = self.get_current_task_name()
            system_settings = get_system_settings()
            formatting_data = get_template_data_with_names(project_name,
                                                           asset_name,
                                                           task_name,
                                                           system_settings)
            anatomy = Anatomy(project_name)
            formatting_data["root"] = anatomy.roots

        for name, path in shelves.items():
            shelf_name = None

            # Allow formatting with anatomy for the paths
            if "{" in path:
                path = StringTemplate.format_template(path, formatting_data)

            try:
                shelf_name = lib.load_shelf(path, name=name)
            except ValueError as exc:
                print(f"Failed to load shelf -> {exc}")

            if shelf_name:
                self.shelves.append(shelf_name)

    def _uninstall_shelves(self):
        for shelf_name in self.shelves:
            substance_painter.resource.Shelves.remove(shelf_name)
        self.shelves.clear()


def on_open():
    log.info("Running callback on open..")

    if any_outdated_containers():
        from openpype.widgets import popup

        log.warning("Scene has outdated content.")

        # Get main window
        parent = substance_painter.ui.get_main_window()
        if parent is None:
            log.info("Skipping outdated content pop-up "
                     "because Substance window can't be found.")
        else:

            # Show outdated pop-up
            def _on_show_inventory():
                from openpype.tools.utils import host_tools
                host_tools.show_scene_inventory(parent=parent)

            dialog = popup.Popup(parent=parent)
            dialog.setWindowTitle("Substance scene has outdated content")
            dialog.setMessage("There are outdated containers in "
                              "your Substance scene.")
            dialog.on_clicked.connect(_on_show_inventory)
            dialog.show()


def imprint_container(container,
                      name,
                      namespace,
                      context,
                      loader):
    """Imprint a loaded container with metadata.

    Containerisation enables tracking of version, author and origin
    for loaded assets.

    Arguments:
        container (dict): The (substance metadata) dictionary to imprint into.
        name (str): Name of resulting assembly
        namespace (str): Namespace under which to host container
        context (dict): Asset information
        loader (load.LoaderPlugin): Loader instance used to produce container.

    Returns:
        None

    """

    data = [
        ("schema", "openpype:container-2.0"),
        ("id", AVALON_CONTAINER_ID),
        ("name", str(name)),
        ("namespace", str(namespace) if namespace else None),
        ("loader", str(loader.__class__.__name__)),
        ("representation", str(context["representation"]["_id"])),
    ]
    for key, value in data:
        container[key] = value


def set_container_metadata(object_name, container_data, update=False):
    """Helper method to directly set the data for a specific container

    Args:
        object_name (str): The unique object name identifier for the container
        container_data (dict): The data for the container.
            Note: 'objectName' is derived from `object_name` and that key in
            `container_data` will be ignored.
        update (bool): Whether to only update the dict data.

    """
    # The objectName is derived from the key in the metadata so won't be
    # stored in the container's data itself.
    container_data.pop("objectName", None)

    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
    containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) or {}
    if update:
        existing_data = containers.setdefault(object_name, {})
        existing_data.update(container_data)  # mutable dict, in-place update
    else:
        containers[object_name] = container_data
    metadata.set(OPENPYPE_METADATA_CONTAINERS_KEY, containers)


def remove_container_metadata(object_name):
    """Helper method to remove the data for a specific container"""
    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
    containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY)
    if containers:
        containers.pop(object_name, None)
        metadata.set(OPENPYPE_METADATA_CONTAINERS_KEY, containers)


def set_instance(instance_id, instance_data, update=False):
    """Helper method to directly set the data for a specific instance

    Args:
        instance_id (str): Unique identifier for the instance
        instance_data (dict): The instance data to store in the metadata.
    """
    set_instances({instance_id: instance_data}, update=update)


def set_instances(instance_data_by_id, update=False):
    """Store data for multiple instances at the same time.

    This is more optimal than querying and setting them in the metadata one
    by one.
    """
    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
    instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}

    for instance_id, instance_data in instance_data_by_id.items():
        if update:
            # `setdefault` so an update for a not-yet-stored id isn't lost
            existing_data = instances.setdefault(instance_id, {})
            existing_data.update(instance_data)
        else:
            instances[instance_id] = instance_data

    metadata.set(OPENPYPE_METADATA_INSTANCES_KEY, instances)


def remove_instance(instance_id):
    """Helper method to remove the data for a specific instance"""
    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
    instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}
    instances.pop(instance_id, None)
    metadata.set(OPENPYPE_METADATA_INSTANCES_KEY, instances)


def get_instances_by_id():
    """Return all instances stored in the project instances metadata"""
    if not substance_painter.project.is_open():
        return {}

    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
    return metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}


def get_instances():
    """Return all instances stored in the project instances metadata as a list"""
    return list(get_instances_by_id().values())

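To tie the metadata helpers together, a minimal round-trip sketch (assumes an open project; the identifier is illustrative):

# Sketch: store, update, re-read and remove instance data.
set_instance("example-instance-id", {"family": "textureSet",
                                     "variant": "Main"})
set_instance("example-instance-id", {"variant": "Hero"}, update=True)
print(get_instances_by_id()["example-instance-id"])
# {'family': 'textureSet', 'variant': 'Hero'}
remove_instance("example-instance-id")
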
@ -0,0 +1,36 @@


def cleanup_openpype_qt_widgets():
    """
    Workaround for Substance failing to shut down correctly
    when a Qt window was still open at the time of shutting down.

    This seems to work sometimes, but not all the time.

    """
    # TODO: Create a more reliable method to close down all OpenPype Qt widgets
    from PySide2 import QtWidgets
    import substance_painter.ui

    # Kill OpenPype Qt widgets
    print("Killing OpenPype Qt widgets..")
    for widget in QtWidgets.QApplication.topLevelWidgets():
        if widget.__module__.startswith("openpype."):
            print(f"Deleting widget: {widget.__class__.__name__}")
            substance_painter.ui.delete_ui_element(widget)


def start_plugin():
    from openpype.pipeline import install_host
    from openpype.hosts.substancepainter.api import SubstanceHost
    install_host(SubstanceHost())


def close_plugin():
    from openpype.pipeline import uninstall_host
    cleanup_openpype_qt_widgets()
    uninstall_host()


if __name__ == "__main__":
    start_plugin()
@ -0,0 +1,43 @@
"""Ease the OpenPype on-boarding process by loading the plug-in on first run"""

OPENPYPE_PLUGIN_NAME = "openpype_plugin"


def start_plugin():
    try:
        # This isn't exposed in the official API so we keep it in a try-except
        from painter_plugins_ui import (
            get_settings,
            LAUNCH_AT_START_KEY,
            ON_STATE,
            PLUGINS_MENU,
            plugin_manager
        )

        # The `painter_plugins_ui` plug-in itself is also a startup plug-in,
        # so we need to take into account that it could run either earlier or
        # later than this startup script; we check whether its menu initialized
        is_before_plugins_menu = PLUGINS_MENU is None

        settings = get_settings(OPENPYPE_PLUGIN_NAME)
        if settings.value(LAUNCH_AT_START_KEY, None) is None:
            print("Initializing OpenPype plug-in on first run...")
            if is_before_plugins_menu:
                print("- running before 'painter_plugins_ui'")
                # Delay the launch to the painter_plugins_ui initialization
                settings.setValue(LAUNCH_AT_START_KEY, ON_STATE)
            else:
                # Launch now
                print("- running after 'painter_plugins_ui'")
                plugin_manager(OPENPYPE_PLUGIN_NAME)(True)

                # Set the checked state in the menu to avoid confusion
                action = next((action for action in PLUGINS_MENU._menu.actions()
                               if action.text() == OPENPYPE_PLUGIN_NAME), None)
                if action is not None:
                    action.blockSignals(True)
                    action.setChecked(True)
                    action.blockSignals(False)

    except Exception as exc:
        print(exc)
@ -0,0 +1,162 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating textures."""

from openpype.pipeline import CreatedInstance, Creator, CreatorError
from openpype.lib import (
    EnumDef,
    UILabelDef,
    NumberDef,
    BoolDef
)

from openpype.hosts.substancepainter.api.pipeline import (
    get_instances,
    set_instance,
    set_instances,
    remove_instance
)
from openpype.hosts.substancepainter.api.lib import get_export_presets

import substance_painter.project


class CreateTextures(Creator):
    """Create a texture set."""
    identifier = "io.openpype.creators.substancepainter.textureset"
    label = "Textures"
    family = "textureSet"
    icon = "picture-o"

    default_variant = "Main"

    def create(self, subset_name, instance_data, pre_create_data):

        if not substance_painter.project.is_open():
            raise CreatorError("Can't create a Texture Set instance without "
                               "an open project.")

        instance = self.create_instance_in_context(subset_name,
                                                   instance_data)
        set_instance(
            instance_id=instance["instance_id"],
            instance_data=instance.data_to_store()
        )

    def collect_instances(self):
        for instance in get_instances():
            if (instance.get("creator_identifier") == self.identifier or
                    instance.get("family") == self.family):
                self.create_instance_in_context_from_existing(instance)

    def update_instances(self, update_list):
        instance_data_by_id = {}
        for instance, _changes in update_list:
            # Persist the data
            instance_id = instance.get("instance_id")
            instance_data = instance.data_to_store()
            instance_data_by_id[instance_id] = instance_data
        set_instances(instance_data_by_id, update=True)

    def remove_instances(self, instances):
        for instance in instances:
            remove_instance(instance["instance_id"])
            self._remove_instance_from_context(instance)

    # Helper methods (this might get moved into Creator class)
    def create_instance_in_context(self, subset_name, data):
        instance = CreatedInstance(
            self.family, subset_name, data, self
        )
        self.create_context.creator_adds_instance(instance)
        return instance

    def create_instance_in_context_from_existing(self, data):
        instance = CreatedInstance.from_existing(data, self)
        self.create_context.creator_adds_instance(instance)
        return instance

    def get_instance_attr_defs(self):

        return [
            EnumDef("exportPresetUrl",
                    items=get_export_presets(),
                    label="Output Template"),
            BoolDef("allowSkippedMaps",
                    label="Allow Skipped Output Maps",
                    tooltip="When enabled this allows the publish to ignore "
                            "output maps in the used output template if one "
                            "or more maps are skipped due to the required "
                            "channels not being present in the current file.",
                    default=True),
            EnumDef("exportFileFormat",
                    items={
                        None: "Based on output template",
                        # TODO: Get available extensions from substance API
                        "bmp": "bmp",
                        "ico": "ico",
                        "jpeg": "jpeg",
                        "jng": "jng",
                        "pbm": "pbm",
                        "pgm": "pgm",
                        "png": "png",
                        "ppm": "ppm",
                        "tga": "targa",
                        "tif": "tiff",
                        "wap": "wap",
                        "wbmp": "wbmp",
                        "xpm": "xpm",
                        "gif": "gif",
                        "hdr": "hdr",
                        "exr": "exr",
                        "j2k": "j2k",
                        "jp2": "jp2",
                        "pfm": "pfm",
                        "webp": "webp",
                        # TODO: Unsure why jxr format fails to export
                        # "jxr": "jpeg-xr",
                        # TODO: File formats that combine the exported textures
                        #       like psd are not correctly supported due to
                        #       publishing only a single file
                        # "psd": "psd",
                        # "sbsar": "sbsar",
                    },
                    default=None,
                    label="File type"),
            EnumDef("exportSize",
                    items={
                        None: "Based on each Texture Set's size",
                        # The key is the size of the texture file in log2,
                        # i.e. 10 means 2^10 = 1024
                        7: "128",
                        8: "256",
                        9: "512",
                        10: "1024",
                        11: "2048",
                        12: "4096"
                    },
                    default=None,
                    label="Size"),

            EnumDef("exportPadding",
                    items={
                        "passthrough": "No padding (passthrough)",
                        "infinite": "Dilation infinite",
                        "transparent": "Dilation + transparent",
                        "color": "Dilation + default background color",
                        "diffusion": "Dilation + diffusion"
                    },
                    default="infinite",
                    label="Padding"),
            NumberDef("exportDilationDistance",
                      minimum=0,
                      maximum=256,
                      decimals=0,
                      default=16,
                      label="Dilation Distance"),
            UILabelDef("*only used with 'Dilation + <x>' padding"),
        ]

    def get_pre_create_attr_defs(self):
        # Use same attributes as for instance attributes
        return self.get_instance_attr_defs()
@ -0,0 +1,101 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating workfiles."""

from openpype.pipeline import CreatedInstance, AutoCreator
from openpype.client import get_asset_by_name

from openpype.hosts.substancepainter.api.pipeline import (
    set_instances,
    set_instance,
    get_instances
)

import substance_painter.project


class CreateWorkfile(AutoCreator):
    """Workfile auto-creator."""
    identifier = "io.openpype.creators.substancepainter.workfile"
    label = "Workfile"
    family = "workfile"
    icon = "document"

    default_variant = "Main"

    def create(self):

        if not substance_painter.project.is_open():
            return

        variant = self.default_variant
        project_name = self.project_name
        asset_name = self.create_context.get_current_asset_name()
        task_name = self.create_context.get_current_task_name()
        host_name = self.create_context.host_name

        # A workfile instance should always exist and must only exist once.
        # As such we'll first check if it already exists and is collected.
        current_instance = next(
            (
                instance for instance in self.create_context.instances
                if instance.creator_identifier == self.identifier
            ), None)

        if current_instance is None:
            self.log.info("Auto-creating workfile instance...")
            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )
            data = {
                "asset": asset_name,
                "task": task_name,
                "variant": variant
            }
            current_instance = self.create_instance_in_context(subset_name,
                                                               data)
        elif (
            current_instance["asset"] != asset_name
            or current_instance["task"] != task_name
        ):
            # Update instance context if it is not the same
            asset_doc = get_asset_by_name(project_name, asset_name)
            subset_name = self.get_subset_name(
                variant, task_name, asset_doc, project_name, host_name
            )
            current_instance["asset"] = asset_name
            current_instance["task"] = task_name
            current_instance["subset"] = subset_name

        set_instance(
            instance_id=current_instance.get("instance_id"),
            instance_data=current_instance.data_to_store()
        )

    def collect_instances(self):
        for instance in get_instances():
            if (instance.get("creator_identifier") == self.identifier or
                    instance.get("family") == self.family):
                self.create_instance_in_context_from_existing(instance)

    def update_instances(self, update_list):
        instance_data_by_id = {}
        for instance, _changes in update_list:
            # Persist the data
            instance_id = instance.get("instance_id")
            instance_data = instance.data_to_store()
            instance_data_by_id[instance_id] = instance_data
        set_instances(instance_data_by_id, update=True)

    # Helper methods (this might get moved into Creator class)
    def create_instance_in_context(self, subset_name, data):
        instance = CreatedInstance(
            self.family, subset_name, data, self
        )
        self.create_context.creator_adds_instance(instance)
        return instance

    def create_instance_in_context_from_existing(self, data):
        instance = CreatedInstance.from_existing(data, self)
        self.create_context.creator_adds_instance(instance)
        return instance
124
openpype/hosts/substancepainter/plugins/load/load_mesh.py
Normal file

@ -0,0 +1,124 @@
from openpype.pipeline import (
    load,
    get_representation_path,
)
from openpype.pipeline.load import LoadError
from openpype.hosts.substancepainter.api.pipeline import (
    imprint_container,
    set_container_metadata,
    remove_container_metadata
)
from openpype.hosts.substancepainter.api.lib import prompt_new_file_with_mesh

import substance_painter.project
import qargparse


class SubstanceLoadProjectMesh(load.LoaderPlugin):
    """Load mesh for project"""

    families = ["*"]
    representations = ["abc", "fbx", "obj", "gltf"]

    label = "Load mesh"
    order = -10
    icon = "code-fork"
    color = "orange"

    options = [
        qargparse.Boolean(
            "preserve_strokes",
            default=True,
            help="Preserve strokes positions on mesh.\n"
                 "(only relevant when loading into an existing project)"
        ),
        qargparse.Boolean(
            "import_cameras",
            default=True,
            help="Import cameras from the mesh file."
        )
    ]

    def load(self, context, name, namespace, data):

        # Get user inputs
        import_cameras = data.get("import_cameras", True)
        preserve_strokes = data.get("preserve_strokes", True)

        if not substance_painter.project.is_open():
            # Allow to 'initialize' a new project
            result = prompt_new_file_with_mesh(mesh_filepath=self.fname)
            if not result:
                self.log.info("User cancelled new project prompt.")
                return

        else:
            # Reload the mesh
            settings = substance_painter.project.MeshReloadingSettings(
                import_cameras=import_cameras,
                preserve_strokes=preserve_strokes
            )

            def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus):  # noqa
                if status == substance_painter.project.ReloadMeshStatus.SUCCESS:  # noqa
                    self.log.info("Reload succeeded")
                else:
                    raise LoadError("Reload of mesh failed")

            path = self.fname
            substance_painter.project.reload_mesh(path,
                                                  settings,
                                                  on_mesh_reload)

        # Store container
        container = {}
        project_mesh_object_name = "_ProjectMesh_"
        imprint_container(container,
                          name=project_mesh_object_name,
                          namespace=project_mesh_object_name,
                          context=context,
                          loader=self)

        # We want to store some options for updating to keep consistent
        # behavior from the user's original choice. We don't store
        # 'preserve_strokes' as we always preserve strokes on updates.
        container["options"] = {
            "import_cameras": import_cameras,
        }

        set_container_metadata(project_mesh_object_name, container)

    def switch(self, container, representation):
        self.update(container, representation)

    def update(self, container, representation):

        path = get_representation_path(representation)

        # Reload the mesh
        container_options = container.get("options", {})
        settings = substance_painter.project.MeshReloadingSettings(
            import_cameras=container_options.get("import_cameras", True),
            preserve_strokes=True
        )

        def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus):
            if status == substance_painter.project.ReloadMeshStatus.SUCCESS:
                self.log.info("Reload succeeded")
            else:
                raise LoadError("Reload of mesh failed")

        substance_painter.project.reload_mesh(path, settings, on_mesh_reload)

        # Update container representation
        object_name = container["objectName"]
        update_data = {"representation": str(representation["_id"])}
        set_container_metadata(object_name, update_data, update=True)

    def remove(self, container):

        # Remove OpenPype related settings about what model was loaded
        # or close the project?
        # TODO: This is likely best 'hidden' away from the user because
        #       this will leave the project's mesh unmanaged.
        remove_container_metadata(container["objectName"])
@ -0,0 +1,17 @@
import pyblish.api

from openpype.pipeline import registered_host


class CollectCurrentFile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    order = pyblish.api.CollectorOrder - 0.49
    label = "Current Workfile"
    hosts = ["substancepainter"]

    def process(self, context):
        host = registered_host()
        path = host.get_current_workfile()
        context.data["currentFile"] = path
        self.log.debug(f"Current workfile: {path}")
|
|
@@ -0,0 +1,196 @@
import os
import copy
import pyblish.api

from openpype.pipeline import publish

import substance_painter.textureset
from openpype.hosts.substancepainter.api.lib import (
    get_parsed_export_maps,
    strip_template
)
from openpype.pipeline.create import get_subset_name
from openpype.client import get_asset_by_name


class CollectTextureSet(pyblish.api.InstancePlugin):
    """Extract Textures using an output template config"""
    # TODO: Production-test usage of color spaces
    # TODO: Detect what source data channels end up in each file

    label = "Collect Texture Set images"
    hosts = ["substancepainter"]
    families = ["textureSet"]
    order = pyblish.api.CollectorOrder

    def process(self, instance):

        config = self.get_export_config(instance)
        asset_doc = get_asset_by_name(
            project_name=instance.context.data["projectName"],
            asset_name=instance.data["asset"]
        )

        instance.data["exportConfig"] = config
        maps = get_parsed_export_maps(config)

        # Let's break the instance into multiple instances to integrate
        # a subset per generated texture or texture UDIM sequence
        for (texture_set_name, stack_name), template_maps in maps.items():
            self.log.info(f"Processing {texture_set_name}/{stack_name}")
            for template, outputs in template_maps.items():
                self.log.info(f"Processing {template}")
                self.create_image_instance(instance, template, outputs,
                                           asset_doc=asset_doc,
                                           texture_set_name=texture_set_name,
                                           stack_name=stack_name)

    def create_image_instance(self, instance, template, outputs,
                              asset_doc, texture_set_name, stack_name):
        """Create a new instance per image or UDIM sequence.

        The new instances will be of family `image`.

        """

        context = instance.context
        first_filepath = outputs[0]["filepath"]
        fnames = [os.path.basename(output["filepath"]) for output in outputs]
        ext = os.path.splitext(first_filepath)[1]
        assert ext.lstrip("."), f"No extension: {ext}"

        always_include_texture_set_name = False  # todo: make this configurable
        all_texture_sets = substance_painter.textureset.all_texture_sets()
        texture_set = substance_painter.textureset.TextureSet.from_name(
            texture_set_name
        )

        # Define the suffix we want to give this particular texture
        # set and set up a remapped subset naming for it.
        suffix = ""
        if always_include_texture_set_name or len(all_texture_sets) > 1:
            # More than one texture set, include texture set name
            suffix += f".{texture_set_name}"
        if texture_set.is_layered_material() and stack_name:
            # More than one stack, include stack name
            suffix += f".{stack_name}"

        # Always include the map identifier
        map_identifier = strip_template(template)
        suffix += f".{map_identifier}"

        image_subset = get_subset_name(
            # TODO: The family actually isn't 'texture' currently but for now
            # this is only done so the subset name starts with 'texture'
            family="texture",
            variant=instance.data["variant"] + suffix,
            task_name=instance.data.get("task"),
            asset_doc=asset_doc,
            project_name=context.data["projectName"],
            host_name=context.data["hostName"],
            project_settings=context.data["project_settings"]
        )

        # Prepare representation
        representation = {
            "name": ext.lstrip("."),
            "ext": ext.lstrip("."),
            "files": fnames if len(fnames) > 1 else fnames[0],
        }

        # Mark as UDIM explicitly if it has UDIM tiles.
        if bool(outputs[0].get("udim")):
            # The representation for a UDIM sequence should have a `udim` key
            # that is a list of all udim tiles (str) like: ["1001", "1002"]
            # strings. See CollectTextures plug-in and Integrators.
            representation["udim"] = [output["udim"] for output in outputs]

        # Set up the representation for thumbnail generation
        # TODO: Simplify this once thumbnail extraction is refactored
        staging_dir = os.path.dirname(first_filepath)
        representation["tags"] = ["review"]
        representation["stagingDir"] = staging_dir

        # Clone the instance
        image_instance = context.create_instance(image_subset)
        image_instance[:] = instance[:]
        image_instance.data.update(copy.deepcopy(instance.data))
        image_instance.data["name"] = image_subset
        image_instance.data["label"] = image_subset
        image_instance.data["subset"] = image_subset
        image_instance.data["family"] = "image"
        image_instance.data["families"] = ["image", "textures"]
        image_instance.data["representations"] = [representation]

        # Group the textures together in the loader
        image_instance.data["subsetGroup"] = instance.data["subset"]

        # Store the texture set name and stack name on the instance
        image_instance.data["textureSetName"] = texture_set_name
        image_instance.data["textureStackName"] = stack_name

        # Store color space with the instance
        # Note: The extractor will assign it to the representation
        colorspace = outputs[0].get("colorSpace")
        if colorspace:
            self.log.debug(f"{image_subset} colorspace: {colorspace}")
            image_instance.data["colorspace"] = colorspace

        # Store the instance in the original instance as a member
        instance.append(image_instance)

    def get_export_config(self, instance):
        """Return an export configuration dict for texture exports.

        This config can be supplied to:
            - `substance_painter.export.export_project_textures`
            - `substance_painter.export.list_project_textures`

        See documentation on substance_painter.export module about the
        formatting of the configuration dictionary.

        Args:
            instance (pyblish.api.Instance): Texture Set instance to be
                published.

        Returns:
            dict: Export config

        """

        creator_attrs = instance.data["creator_attributes"]
        preset_url = creator_attrs["exportPresetUrl"]
        self.log.debug(f"Exporting using preset: {preset_url}")

        # See: https://substance3d.adobe.com/documentation/ptpy/api/substance_painter/export  # noqa
        config = {  # noqa
            "exportShaderParams": True,
            "exportPath": publish.get_instance_staging_dir(instance),
            "defaultExportPreset": preset_url,

            # Custom overrides to the exporter
            "exportParameters": [
                {
                    "parameters": {
                        "fileFormat": creator_attrs["exportFileFormat"],
                        "sizeLog2": creator_attrs["exportSize"],
                        "paddingAlgorithm": creator_attrs["exportPadding"],
                        "dilationDistance": creator_attrs["exportDilationDistance"]  # noqa
                    }
                }
            ]
        }

        # Create the list of Texture Sets to export.
        config["exportList"] = []
        for texture_set in substance_painter.textureset.all_texture_sets():
            config["exportList"].append({"rootPath": texture_set.name()})

        # Consider None values from the creator attributes optionals
        for override in config["exportParameters"]:
            parameters = override.get("parameters")
            for key, value in dict(parameters).items():
                if value is None:
                    parameters.pop(key)

        return config
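
As an aside, the suffix logic in create_image_instance() above composes the final subset name from the variant, texture set, stack and map identifier. A minimal standalone sketch of that rule (the texture set, stack and map names below are made up; the always_include_texture_set_name toggle and the layered-material check are omitted):

    def subset_suffix(all_texture_sets, texture_set_name, stack_name, map_id):
        suffix = ""
        if len(all_texture_sets) > 1:
            # More than one texture set, include texture set name
            suffix += f".{texture_set_name}"
        if stack_name:
            # More than one stack, include stack name
            suffix += f".{stack_name}"
        # The map identifier is always appended
        return f"{suffix}.{map_id}"

    # Variant "Main" with two texture sets yields an image subset name like
    # "textureMain.body.paint.BaseColor":
    print("textureMain" + subset_suffix(["body", "head"], "body", "paint", "BaseColor"))
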
@@ -0,0 +1,26 @@
import os
import pyblish.api


class CollectWorkfileRepresentation(pyblish.api.InstancePlugin):
    """Create a publish representation for the current workfile instance."""

    order = pyblish.api.CollectorOrder
    label = "Workfile representation"
    hosts = ["substancepainter"]
    families = ["workfile"]

    def process(self, instance):

        context = instance.context
        current_file = context.data["currentFile"]

        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data["representations"] = [{
            "name": ext.lstrip("."),
            "ext": ext.lstrip("."),
            "files": file,
            "stagingDir": folder,
        }]
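
For a hypothetical workfile path, the representation produced above carries values like these (self-contained, runnable sketch):

    import os

    current_file = "/projects/hero/work/texturing_v001.spp"  # hypothetical path
    folder, file = os.path.split(current_file)
    filename, ext = os.path.splitext(file)
    representation = {
        "name": ext.lstrip("."),   # "spp"
        "ext": ext.lstrip("."),    # "spp"
        "files": file,             # "texturing_v001.spp"
        "stagingDir": folder,      # "/projects/hero/work"
    }
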
@@ -0,0 +1,62 @@
import substance_painter.export

from openpype.pipeline import KnownPublishError, publish


class ExtractTextures(publish.Extractor,
                      publish.ColormanagedPyblishPluginMixin):
    """Extract Textures using an output template config.

    Note:
        This Extractor assumes that `collect_textureset_images` has prepared
        the relevant export config and has also collected the individual image
        instances for publishing including its representation. That is why this
        particular Extractor doesn't specify representations to integrate.

    """

    label = "Extract Texture Set"
    hosts = ["substancepainter"]
    families = ["textureSet"]

    # Run before thumbnail extractors
    order = publish.Extractor.order - 0.1

    def process(self, instance):

        config = instance.data["exportConfig"]
        result = substance_painter.export.export_project_textures(config)

        if result.status != substance_painter.export.ExportStatus.Success:
            raise KnownPublishError(
                "Failed to export texture set: {}".format(result.message)
            )

        # Log what files we generated
        for (texture_set_name, stack_name), maps in result.textures.items():
            # Log our texture outputs
            self.log.info(f"Exported stack: {texture_set_name} {stack_name}")
            for texture_map in maps:
                self.log.info(f"Exported texture: {texture_map}")

        # We'll insert the color space data for each image instance that we
        # added into this texture set. The collector couldn't do so because
        # some anatomy and other instance data needs to be collected prior
        context = instance.context
        for image_instance in instance:
            representation = next(iter(image_instance.data["representations"]))

            colorspace = image_instance.data.get("colorspace")
            if not colorspace:
                self.log.debug("No color space data present for instance: "
                               f"{image_instance}")
                continue

            self.set_representation_colorspace(representation,
                                               context=context,
                                               colorspace=colorspace)

        # The TextureSet instance should not be integrated. It generates no
        # output data. Instead the separated texture instances are generated
        # from it which themselves integrate into the database.
        instance.data["integrate"] = False

@@ -0,0 +1,23 @@
import pyblish.api

from openpype.lib import version_up
from openpype.pipeline import registered_host


class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
    """Increment current workfile version."""

    order = pyblish.api.IntegratorOrder + 1
    label = "Increment Workfile Version"
    optional = True
    hosts = ["substancepainter"]

    def process(self, context):

        assert all(result["success"] for result in context.data["results"]), (
            "Publishing not successful so version is not increased.")

        host = registered_host()
        path = context.data["currentFile"]
        self.log.info(f"Incrementing current workfile to: {path}")
        host.save_workfile(version_up(path))
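
The heavy lifting above is done by version_up, which bumps the version token embedded in the file name. A hedged usage sketch (the workfile path is hypothetical and the exact token pattern version_up matches is an assumption here):

    from openpype.lib import version_up

    path = "/projects/hero/work/texturing_v001.spp"  # hypothetical workfile
    new_path = version_up(path)
    # expected result: "/projects/hero/work/texturing_v002.spp"
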
@@ -0,0 +1,27 @@
import pyblish.api

from openpype.pipeline import (
    registered_host,
    KnownPublishError
)


class SaveCurrentWorkfile(pyblish.api.ContextPlugin):
    """Save current workfile"""

    label = "Save current workfile"
    order = pyblish.api.ExtractorOrder - 0.49
    hosts = ["substancepainter"]

    def process(self, context):

        host = registered_host()
        if context.data["currentFile"] != host.get_current_workfile():
            raise KnownPublishError("Workfile has changed during publishing!")

        if host.has_unsaved_changes():
            self.log.info("Saving current file..")
            host.save_workfile()
        else:
            self.log.debug("Skipping workfile save because there are no "
                           "unsaved changes.")

@@ -0,0 +1,109 @@
import copy
import os

import pyblish.api

import substance_painter.export

from openpype.pipeline import PublishValidationError


class ValidateOutputMaps(pyblish.api.InstancePlugin):
    """Validate all output maps for Output Template are generated.

    Output maps will be skipped by Substance Painter if it is an output
    map in the Substance Output Template which uses channels that the current
    substance painter project has not painted or generated.

    """

    order = pyblish.api.ValidatorOrder
    label = "Validate output maps"
    hosts = ["substancepainter"]
    families = ["textureSet"]

    def process(self, instance):

        config = instance.data["exportConfig"]

        # Substance Painter API does not allow to query the actual output maps
        # it will generate without actually exporting the files. So we try to
        # generate the smallest size / fastest export as possible
        config = copy.deepcopy(config)
        parameters = config["exportParameters"][0]["parameters"]
        parameters["sizeLog2"] = [1, 1]  # output 2x2 images (smallest)
        parameters["paddingAlgorithm"] = "passthrough"  # no dilation (faster)
        parameters["dithering"] = False  # no dithering (faster)

        result = substance_painter.export.export_project_textures(config)
        if result.status != substance_painter.export.ExportStatus.Success:
            raise PublishValidationError(
                "Failed to export texture set: {}".format(result.message)
            )

        generated_files = set()
        for texture_maps in result.textures.values():
            for texture_map in texture_maps:
                generated_files.add(os.path.normpath(texture_map))
                # Directly clean up our temporary export
                os.remove(texture_map)

        creator_attributes = instance.data.get("creator_attributes", {})
        allow_skipped_maps = creator_attributes.get("allowSkippedMaps", True)
        error_report_missing = []
        for image_instance in instance:

            # Confirm whether the instance has its expected files generated.
            # We assume there's just one representation and that it is
            # the actual texture representation from the collector.
            representation = next(iter(image_instance.data["representations"]))
            staging_dir = representation["stagingDir"]
            filenames = representation["files"]
            if not isinstance(filenames, (list, tuple)):
                # Convert single file to list
                filenames = [filenames]

            missing = []
            for filename in filenames:
                filepath = os.path.join(staging_dir, filename)
                filepath = os.path.normpath(filepath)
                if filepath not in generated_files:
                    self.log.warning(f"Missing texture: {filepath}")
                    missing.append(filepath)

            if not missing:
                continue

            if allow_skipped_maps:
                # TODO: This is changing state on the instance's which
                # should not be done during validation.
                self.log.warning(f"Disabling texture instance: "
                                 f"{image_instance}")
                image_instance.data["active"] = False
                image_instance.data["integrate"] = False
                representation.setdefault("tags", []).append("delete")
                continue
            else:
                error_report_missing.append((image_instance, missing))

        if error_report_missing:

            message = (
                "The Texture Set skipped exporting some output maps which are "
                "defined in the Output Template. This happens if the Output "
                "Templates exports maps from channels which you do not "
                "have in your current Substance Painter project.\n\n"
                "To allow this enable the *Allow Skipped Output Maps* setting "
                "on the instance.\n\n"
                f"Instance {instance} skipped exporting output maps:\n"
                ""
            )

            for image_instance, missing in error_report_missing:
                missing_str = ", ".join(missing)
                message += f"- **{image_instance}** skipped: {missing_str}\n"

            raise PublishValidationError(
                message=message,
                title="Missing output maps"
            )

@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import pyblish.api


class CollectReviewInfo(pyblish.api.InstancePlugin):
    """Collect data required for review instances.

    ExtractReview plugin requires frame start/end, fps on instance data which
    are missing on instances from TrayPublishes.

    Warning:
        This is temporary solution to "make it work". Contains removed changes
        from https://github.com/ynput/OpenPype/pull/4383 reduced only for
        review instances.
    """

    label = "Collect Review Info"
    order = pyblish.api.CollectorOrder + 0.491
    families = ["review"]
    hosts = ["traypublisher"]

    def process(self, instance):
        asset_entity = instance.data.get("assetEntity")
        if instance.data.get("frameStart") is not None or not asset_entity:
            self.log.debug("Missing required data on instance")
            return

        asset_data = asset_entity["data"]
        # Store collected data for logging
        collected_data = {}
        for key in (
            "fps",
            "frameStart",
            "frameEnd",
            "handleStart",
            "handleEnd",
        ):
            if key in instance.data or key not in asset_data:
                continue
            value = asset_data[key]
            collected_data[key] = value
            instance.data[key] = value
        self.log.debug("Collected data: {}".format(str(collected_data)))

@@ -4,6 +4,6 @@ Supported Unreal Engine version is 4.26+ (mainly because of major Python changes

### Project naming
Unreal doesn't support project names starting with non-alphabetic character. So names like `123_myProject` are
invalid. If OpenPype detects such name it automatically prepends letter **P** to make it valid name, so `123_myProject`
invalid. If Ayon detects such name it automatically prepends letter **P** to make it valid name, so `123_myProject`
will become `P123_myProject`. There is also soft-limit on project name length to be shorter than 20 characters.
Longer names will issue warning in Unreal Editor that there might be possible side effects.
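
The renaming rule described above boils down to a check like the following sketch (illustrative only, not the actual Ayon implementation):

    def sanitize_unreal_project_name(name: str) -> str:
        # Prepend "P" when the name does not start with an alphabetic character.
        if not name[:1].isalpha():
            name = f"P{name}"
        return name

    assert sanitize_unreal_project_name("123_myProject") == "P123_myProject"
    # Names of 20 or more characters are only warned about, not renamed.
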
@@ -1,5 +1,8 @@
import os
from openpype.modules import OpenPypeModule, IHostAddon
from pathlib import Path

from openpype.modules import IHostAddon, OpenPypeModule
from .lib import get_compatible_integration

UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

@@ -13,15 +16,22 @@ class UnrealAddon(OpenPypeModule, IHostAddon):

    def add_implementation_envs(self, env, app):
        """Modify environments to contain all required for implementation."""
        # Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation
        # Set AYON_UNREAL_PLUGIN required for Unreal implementation

        ue_plugin = "UE_5.0" if app.name[:1] == "5" else "UE_4.7"
        ue_version = app.name.replace("-", ".")
        unreal_plugin_path = os.path.join(
            UNREAL_ROOT_DIR, "integration", ue_plugin, "OpenPype"
            UNREAL_ROOT_DIR, "integration", f"UE_{ue_version}", "Ayon"
        )
        if not env.get("OPENPYPE_UNREAL_PLUGIN") or \
                env.get("OPENPYPE_UNREAL_PLUGIN") != unreal_plugin_path:
            env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path
        if not Path(unreal_plugin_path).exists():
            if compatible_versions := get_compatible_integration(
                ue_version, Path(UNREAL_ROOT_DIR) / "integration"
            ):
                unreal_plugin_path = compatible_versions[-1] / "Ayon"
                unreal_plugin_path = unreal_plugin_path.as_posix()

        if not env.get("AYON_UNREAL_PLUGIN") or \
                env.get("AYON_UNREAL_PLUGIN") != unreal_plugin_path:
            env["AYON_UNREAL_PLUGIN"] = unreal_plugin_path

        # Set default environments if are not set via settings
        defaults = {

@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
"""Unreal Editor OpenPype host API."""
"""Unreal Editor Ayon host API."""

from .plugin import (
    UnrealActorCreator,

@@ -2,15 +2,15 @@
import unreal  # noqa


class OpenPypeUnrealException(Exception):
class AyonUnrealException(Exception):
    pass


@unreal.uclass()
class OpenPypeHelpers(unreal.OpenPypeLib):
    """Class wrapping some useful functions for OpenPype.
class AyonHelpers(unreal.AyonLib):
    """Class wrapping some useful functions for Ayon.

    This class is extending native BP class in OpenPype Integration Plugin.
    This class is extending native BP class in Ayon Integration Plugin.

    """

@@ -29,13 +29,13 @@ class OpenPypeHelpers(unreal.OpenPypeLib):

        Example:

            OpenPypeHelpers().set_folder_color(
            AyonHelpers().set_folder_color(
                "/Game/Path", unreal.LinearColor(a=1.0, r=1.0, g=0.5, b=0)
            )

        Note:
            This will take effect only after Editor is restarted. I couldn't
            find a way to refresh it. Also this saves the color definition
            find a way to refresh it. Also, this saves the color definition
            into the project config, binding this path with color. So if you
            delete this path and later re-create, it will set this color
            again.

@@ -14,7 +14,7 @@ from openpype.pipeline import (
    register_creator_plugin_path,
    deregister_loader_plugin_path,
    deregister_creator_plugin_path,
    AVALON_CONTAINER_ID,
    AYON_CONTAINER_ID,
)
from openpype.tools.utils import host_tools
import openpype.hosts.unreal

@@ -22,12 +22,13 @@ from openpype.host import HostBase, ILoadHost, IPublishHost

import unreal  # noqa

# Rename to Ayon once parent module renames
logger = logging.getLogger("openpype.hosts.unreal")

OPENPYPE_CONTAINERS = "OpenPypeContainers"
CONTEXT_CONTAINER = "OpenPype/context.json"
AYON_CONTAINERS = "AyonContainers"
CONTEXT_CONTAINER = "Ayon/context.json"
UNREAL_VERSION = semver.VersionInfo(
    *os.getenv("OPENPYPE_UNREAL_VERSION").split(".")
    *os.getenv("AYON_UNREAL_VERSION").split(".")
)

HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.unreal.__file__))

@@ -53,14 +54,14 @@ class UnrealHost(HostBase, ILoadHost, IPublishHost):
    def get_containers(self):
        return ls()

    def show_tools_popup(self):
    @staticmethod
    def show_tools_popup():
        """Show tools popup with actions leading to show other tools."""

        show_tools_popup()

    def show_tools_dialog(self):
    @staticmethod
    def show_tools_dialog():
        """Show tools dialog with actions leading to show other tools."""

        show_tools_dialog()

    def update_context_data(self, data, changes):

@@ -72,9 +73,10 @@ class UnrealHost(HostBase, ILoadHost, IPublishHost):
                with open(op_ctx, "w+") as f:
                    json.dump(data, f)
                break
            except IOError:
            except IOError as e:
                if i == attempts - 1:
                    raise Exception("Failed to write context data. Aborting.")
                    raise Exception(
                        "Failed to write context data. Aborting.") from e
                unreal.log_warning("Failed to write context data. Retrying...")
                i += 1
                time.sleep(3)

@@ -95,19 +97,30 @@ def install():
    print("-=" * 40)
    logo = '''.
.
____________
/ \\ __ \\
\\ \\ \\/_\\ \\
\\ \\ _____/ ______
\\ \\ \\___// \\ \\
\\ \\____\\ \\ \\_____\\
\\/_____/ \\/______/ PYPE Club .
·
│
·∙/
·-∙•∙-·
/ \\ /∙· / \\
∙ \\ │ / ∙
\\ \\ · / /
\\\\ ∙ ∙ //
\\\\/ \\//
___
│ │
│ │
│ │
│___│
-·

·-─═─-∙ A Y O N ∙-─═─-·
by YNPUT
.
'''
    print(logo)
    print("installing OpenPype for Unreal ...")
    print("installing Ayon for Unreal ...")
    print("-=" * 40)
    logger.info("installing OpenPype for Unreal")
    logger.info("installing Ayon for Unreal")
    pyblish.api.register_host("unreal")
    pyblish.api.register_plugin_path(str(PUBLISH_PATH))
    register_loader_plugin_path(str(LOAD_PATH))

@@ -117,7 +130,7 @@ def install():


def uninstall():
    """Uninstall Unreal configuration for Avalon."""
    """Uninstall Unreal configuration for Ayon."""
    pyblish.api.deregister_plugin_path(str(PUBLISH_PATH))
    deregister_loader_plugin_path(str(LOAD_PATH))
    deregister_creator_plugin_path(str(CREATE_PATH))

@@ -125,14 +138,14 @@ def uninstall():

def _register_callbacks():
    """
    TODO: Implement callbacks if supported by UE4
    TODO: Implement callbacks if supported by UE
    """
    pass


def _register_events():
    """
    TODO: Implement callbacks if supported by UE4
    TODO: Implement callbacks if supported by UE
    """
    pass


@@ -146,32 +159,30 @@ def ls():
    """
    ar = unreal.AssetRegistryHelpers.get_asset_registry()
    # UE 5.1 changed how class name is specified
    class_name = ["/Script/OpenPype", "AssetContainer"] if UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 else "AssetContainer"  # noqa
    openpype_containers = ar.get_assets_by_class(class_name, True)
    class_name = ["/Script/Ayon", "AyonAssetContainer"] if UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 else "AyonAssetContainer"  # noqa
    ayon_containers = ar.get_assets_by_class(class_name, True)

    # get_asset_by_class returns AssetData. To get all metadata we need to
    # load asset. get_tag_values() work only on metadata registered in
    # Asset Registry Project settings (and there is no way to set it with
    # python short of editing ini configuration file).
    for asset_data in openpype_containers:
    for asset_data in ayon_containers:
        asset = asset_data.get_asset()
        data = unreal.EditorAssetLibrary.get_metadata_tag_values(asset)
        data["objectName"] = asset_data.asset_name
        data = cast_map_to_str_dict(data)

        yield data
        yield cast_map_to_str_dict(data)


def ls_inst():
    ar = unreal.AssetRegistryHelpers.get_asset_registry()
    # UE 5.1 changed how class name is specified
    class_name = [
        "/Script/OpenPype",
        "OpenPypePublishInstance"
        "/Script/Ayon",
        "AyonPublishInstance"
    ] if (
        UNREAL_VERSION.major == 5
        and UNREAL_VERSION.minor > 0
    ) else "OpenPypePublishInstance"  # noqa
    ) else "AyonPublishInstance"  # noqa
    instances = ar.get_assets_by_class(class_name, True)

    # get_asset_by_class returns AssetData. To get all metadata we need to
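
The version-dependent class identifier appears in both ls() and ls_inst() above; the rule could be captured by a small helper like this sketch (not part of the commit):

    # UE 5.1+ expects the class as a [package, class-name] pair, while earlier
    # versions take a bare class name. `version` is a semver.VersionInfo.
    def registry_class_name(base_name, version):
        if version.major == 5 and version.minor > 0:
            return ["/Script/Ayon", base_name]
        return base_name
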
@@ -182,13 +193,11 @@ def ls_inst():
        asset = asset_data.get_asset()
        data = unreal.EditorAssetLibrary.get_metadata_tag_values(asset)
        data["objectName"] = asset_data.asset_name
        data = cast_map_to_str_dict(data)

        yield data
        yield cast_map_to_str_dict(data)


def parse_container(container):
    """To get data from container, AssetContainer must be loaded.
    """To get data from container, AyonAssetContainer must be loaded.

    Args:
        container(str): path to container

@@ -217,7 +226,7 @@ def containerise(name, namespace, nodes, context, loader=None, suffix="_CON"):
    Unreal doesn't support *groups* of assets that you can add metadata to.
    But it does support folders that helps to organize asset. Unfortunately
    those folders are just that - you cannot add any additional information
    to them. OpenPype Integration Plugin is providing way out - Implementing
    to them. Ayon Integration Plugin is providing way out - Implementing
    `AssetContainer` Blueprint class. This class when added to folder can
    handle metadata on it using standard
    :func:`unreal.EditorAssetLibrary.set_metadata_tag()` and

@@ -226,30 +235,30 @@ def containerise(name, namespace, nodes, context, loader=None, suffix="_CON"):
    those assets is available as `assets` property.

    This is list of strings starting with asset type and ending with its path:
    `Material /Game/OpenPype/Test/TestMaterial.TestMaterial`
    `Material /Game/Ayon/Test/TestMaterial.TestMaterial`

    """
    # 1 - create directory for container
    root = "/Game"
    container_name = "{}{}".format(name, suffix)
    container_name = f"{name}{suffix}"
    new_name = move_assets_to_path(root, container_name, nodes)

    # 2 - create Asset Container there
    path = "{}/{}".format(root, new_name)
    path = f"{root}/{new_name}"
    create_container(container=container_name, path=path)

    namespace = path

    data = {
        "schema": "openpype:container-2.0",
        "id": AVALON_CONTAINER_ID,
        "schema": "ayon:container-2.0",
        "id": AYON_CONTAINER_ID,
        "name": new_name,
        "namespace": namespace,
        "loader": str(loader),
        "representation": context["representation"]["_id"],
    }
    # 3 - imprint data
    imprint("{}/{}".format(path, container_name), data)
    imprint(f"{path}/{container_name}", data)
    return path


@@ -257,7 +266,7 @@ def instantiate(root, name, data, assets=None, suffix="_INS"):
    """Bundles *nodes* into *container*.

    Marking it with metadata as publishable instance. If assets are provided,
    they are moved to new path where `OpenPypePublishInstance` class asset is
    they are moved to new path where `AyonPublishInstance` class asset is
    created and imprinted with metadata.

    This can then be collected for publishing by Pyblish for example.

@@ -271,7 +280,7 @@ def instantiate(root, name, data, assets=None, suffix="_INS"):
        suffix (str): suffix string to append to instance name

    """
    container_name = "{}{}".format(name, suffix)
    container_name = f"{name}{suffix}"

    # if we specify assets, create new folder and move them there. If not,
    # just create empty folder

@@ -280,10 +289,10 @@ def instantiate(root, name, data, assets=None, suffix="_INS"):
    else:
        new_name = create_folder(root, name)

    path = "{}/{}".format(root, new_name)
    path = f"{root}/{new_name}"
    create_publish_instance(instance=container_name, path=path)

    imprint("{}/{}".format(path, container_name), data)
    imprint(f"{path}/{container_name}", data)


def imprint(node, data):

@@ -299,7 +308,7 @@ def imprint(node, data):
            loaded_asset, key, str(value)
        )

    with unreal.ScopedEditorTransaction("OpenPype containerising"):
    with unreal.ScopedEditorTransaction("Ayon containerising"):
        unreal.EditorAssetLibrary.save_asset(node)


@@ -366,11 +375,11 @@ def create_folder(root: str, name: str) -> str:
    eal = unreal.EditorAssetLibrary
    index = 1
    while True:
        if eal.does_directory_exist("{}/{}".format(root, name)):
            name = "{}{}".format(name, index)
        if eal.does_directory_exist(f"{root}/{name}"):
            name = f"{name}{index}"
            index += 1
        else:
            eal.make_directory("{}/{}".format(root, name))
            eal.make_directory(f"{root}/{name}")
            break

    return name

@@ -403,9 +412,7 @@ def move_assets_to_path(root: str, name: str, assets: List[str]) -> str:
    unreal.log(assets)
    for asset in assets:
        loaded = eal.load_asset(asset)
        eal.rename_asset(
            asset, "{}/{}/{}".format(root, name, loaded.get_name())
        )
        eal.rename_asset(asset, f"{root}/{name}/{loaded.get_name()}")

    return name


@@ -432,17 +439,16 @@ def create_container(container: str, path: str) -> unreal.Object:
        )

    """
    factory = unreal.AssetContainerFactory()
    factory = unreal.AyonAssetContainerFactory()
    tools = unreal.AssetToolsHelpers().get_asset_tools()

    asset = tools.create_asset(container, path, None, factory)
    return asset
    return tools.create_asset(container, path, None, factory)


def create_publish_instance(instance: str, path: str) -> unreal.Object:
    """Helper function to create OpenPype Publish Instance on given path.
    """Helper function to create Ayon Publish Instance on given path.

    This behaves similarly as :func:`create_openpype_container`.
    This behaves similarly as :func:`create_ayon_container`.

    Args:
        path (str): Path where to create Publish Instance.

@@ -460,10 +466,9 @@ def create_publish_instance(instance: str, path: str) -> unreal.Object:
        )

    """
    factory = unreal.OpenPypePublishInstanceFactory()
    factory = unreal.AyonPublishInstanceFactory()
    tools = unreal.AssetToolsHelpers().get_asset_tools()
    asset = tools.create_asset(instance, path, None, factory)
    return asset
    return tools.create_asset(instance, path, None, factory)


def cast_map_to_str_dict(umap) -> dict:

@@ -494,11 +499,14 @@ def get_subsequences(sequence: unreal.LevelSequence):

    """
    tracks = sequence.get_master_tracks()
    subscene_track = None
    for t in tracks:
        if t.get_class() == unreal.MovieSceneSubTrack.static_class():
            subscene_track = t
            break
    subscene_track = next(
        (
            t
            for t in tracks
            if t.get_class() == unreal.MovieSceneSubTrack.static_class()
        ),
        None,
    )
    if subscene_track is not None and subscene_track.get_sections():
        return subscene_track.get_sections()
    return []

@@ -31,7 +31,7 @@ from openpype.pipeline import (
@six.add_metaclass(ABCMeta)
class UnrealBaseCreator(Creator):
    """Base class for Unreal creator plugins."""
    root = "/Game/OpenPype/PublishInstances"
    root = "/Game/Ayon/AyonPublishInstances"
    suffix = "_INS"

    @staticmethod

@@ -243,5 +243,5 @@ class UnrealActorCreator(UnrealBaseCreator):


class Loader(LoaderPlugin, ABC):
    """This serves as skeleton for future OpenPype specific functionality"""
    """This serves as skeleton for future Ayon specific functionality"""
    pass

@@ -2,8 +2,10 @@ import os

import unreal

from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
from openpype.widgets.message_window import Window


queue = None

@@ -32,15 +34,24 @@ def start_rendering():
    """
    Start the rendering process.
    """
    print("Starting rendering...")
    unreal.log("Starting rendering...")

    # Get selected sequences
    assets = unreal.EditorUtilityLibrary.get_selected_assets()

    if not assets:
        Window(
            parent=None,
            title="No assets selected",
            message="No assets selected. Select a render instance.",
            level="warning")
        raise RuntimeError(
            "No assets selected. You need to select a render instance.")

    # instances = pipeline.ls_inst()
    instances = [
        a for a in assets
        if a.get_class().get_name() == "OpenPypePublishInstance"]
        if a.get_class().get_name() == "AyonPublishInstance"]

    inst_data = []

@@ -53,8 +64,9 @@ def start_rendering():
        project = os.environ.get("AVALON_PROJECT")
        anatomy = Anatomy(project)
        root = anatomy.roots['renders']
    except Exception:
        raise Exception("Could not find render root in anatomy settings.")
    except Exception as e:
        raise Exception(
            "Could not find render root in anatomy settings.") from e

    render_dir = f"{root}/{project}"

@@ -66,6 +78,13 @@ def start_rendering():

    ar = unreal.AssetRegistryHelpers.get_asset_registry()

    data = get_project_settings(project)
    config = None
    config_path = str(data.get("unreal").get("render_config_path"))
    if config_path and unreal.EditorAssetLibrary.does_asset_exist(config_path):
        unreal.log("Found saved render configuration")
        config = ar.get_asset_by_object_path(config_path).get_asset()

    for i in inst_data:
        sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset()

@@ -81,55 +100,80 @@ def start_rendering():
        # Get all the sequences to render. If there are subsequences,
        # add them and their frame ranges to the render list. We also
        # use the names for the output paths.
        for s in sequences:
            subscenes = pipeline.get_subsequences(s.get('sequence'))
        for seq in sequences:
            subscenes = pipeline.get_subsequences(seq.get('sequence'))

            if subscenes:
                for ss in subscenes:
                for sub_seq in subscenes:
                    sequences.append({
                        "sequence": ss.get_sequence(),
                        "output": (f"{s.get('output')}/"
                                   f"{ss.get_sequence().get_name()}"),
                        "sequence": sub_seq.get_sequence(),
                        "output": (f"{seq.get('output')}/"
                                   f"{sub_seq.get_sequence().get_name()}"),
                        "frame_range": (
                            ss.get_start_frame(), ss.get_end_frame())
                            sub_seq.get_start_frame(), sub_seq.get_end_frame())
                    })
            else:
                # Avoid rendering camera sequences
                if "_camera" not in s.get('sequence').get_name():
                    render_list.append(s)
                if "_camera" not in seq.get('sequence').get_name():
                    render_list.append(seq)

    # Create the rendering jobs and add them to the queue.
    for r in render_list:
    for render_setting in render_list:
        job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
        job.sequence = unreal.SoftObjectPath(i["master_sequence"])
        job.map = unreal.SoftObjectPath(i["master_level"])
        job.author = "OpenPype"
        job.author = "Ayon"

        # If we have a saved configuration, copy it to the job.
        if config:
            job.get_configuration().copy_from(config)

        # User data could be used to pass data to the job, that can be
        # read in the job's OnJobFinished callback. We could,
        # for instance, pass the AvalonPublishInstance's path to the job.
        # for instance, pass the AyonPublishInstance's path to the job.
        # job.user_data = ""

        output_dir = render_setting.get('output')
        shot_name = render_setting.get('sequence').get_name()

        settings = job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineOutputSetting)
        settings.output_resolution = unreal.IntPoint(1920, 1080)
        settings.custom_start_frame = r.get("frame_range")[0]
        settings.custom_end_frame = r.get("frame_range")[1]
        settings.custom_start_frame = render_setting.get("frame_range")[0]
        settings.custom_end_frame = render_setting.get("frame_range")[1]
        settings.use_custom_playback_range = True
        settings.file_name_format = "{sequence_name}.{frame_number}"
        settings.output_directory.path = f"{render_dir}/{r.get('output')}"

        renderPass = job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineDeferredPassBase)
        renderPass.disable_multisample_effects = True
        settings.file_name_format = f"{shot_name}" + ".{frame_number}"
        settings.output_directory.path = f"{render_dir}/{output_dir}"

        job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineImageSequenceOutput_PNG)
            unreal.MoviePipelineDeferredPassBase)

        render_format = data.get("unreal").get("render_format", "png")

        if render_format == "png":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_PNG)
        elif render_format == "exr":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_EXR)
        elif render_format == "jpg":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_JPG)
        elif render_format == "bmp":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_BMP)

    # If there are jobs in the queue, start the rendering process.
    if queue.get_jobs():
        global executor
        executor = unreal.MoviePipelinePIEExecutor()

        preroll_frames = data.get("unreal").get("preroll_frames", 0)

        settings = unreal.MoviePipelinePIEExecutorSettings()
        settings.set_editor_property(
            "initial_delay_frame_count", preroll_frames)

        executor.on_executor_finished_delegate.add_callable_unique(
            _queue_finish_callback)
        executor.on_individual_job_finished_delegate.add_callable_unique(
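
The render format dispatch in this hunk could also be table-driven; a sketch reusing the same Movie Pipeline classes (not part of the commit):

    # Maps the project setting value to the output setting class to add.
    FORMAT_SETTING_CLASSES = {
        "png": unreal.MoviePipelineImageSequenceOutput_PNG,
        "exr": unreal.MoviePipelineImageSequenceOutput_EXR,
        "jpg": unreal.MoviePipelineImageSequenceOutput_JPG,
        "bmp": unreal.MoviePipelineImageSequenceOutput_BMP,
    }
    setting_class = FORMAT_SETTING_CLASSES.get(render_format)
    if setting_class:
        job.get_configuration().find_or_add_setting_by_class(setting_class)
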
@@ -64,7 +64,7 @@ class ToolsDialog(QtWidgets.QDialog):
    def __init__(self, *args, **kwargs):
        super(ToolsDialog, self).__init__(*args, **kwargs)

        self.setWindowTitle("OpenPype tools")
        self.setWindowTitle("Ayon tools")
        icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
        self.setWindowIcon(icon)

@@ -186,15 +186,15 @@ class UnrealPrelaunchHook(PreLaunchHook):

        project_path.mkdir(parents=True, exist_ok=True)

        # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for
        # Set "AYON_UNREAL_PLUGIN" to current process environment for
        # execution of `create_unreal_project`

        if self.launch_context.env.get("OPENPYPE_UNREAL_PLUGIN"):
        if self.launch_context.env.get("AYON_UNREAL_PLUGIN"):
            self.log.info((
                f"{self.signature} using OpenPype plugin from "
                f"{self.launch_context.env.get('OPENPYPE_UNREAL_PLUGIN')}"
                f"{self.signature} using Ayon plugin from "
                f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}"
            ))
        env_key = "OPENPYPE_UNREAL_PLUGIN"
        env_key = "AYON_UNREAL_PLUGIN"
        if self.launch_context.env.get(env_key):
            os.environ[env_key] = self.launch_context.env[env_key]

@@ -213,7 +213,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
            engine_path,
            project_path)

        self.launch_context.env["OPENPYPE_UNREAL_VERSION"] = engine_version
        self.launch_context.env["AYON_UNREAL_VERSION"] = engine_version
        # Append project file to launch arguments
        self.launch_context.launch_args.append(
            f"\"{project_file.as_posix()}\"")

10
openpype/hosts/unreal/integration/README.md
Normal file

@@ -0,0 +1,10 @@
# Building the plugin

In order to successfully build the plugin, make sure that the path to the UnrealBuildTool.exe is specified correctly.
After the UBT path, specify for which platform it will be compiled. In the -Project parameter, specify the path to the
CommandletProject.uproject file. Next, the build type has to be specified (DebugGame, Development, Package, etc.) and then the -TargetType (Editor, Runtime, etc.).

`BuildPlugin_[Ver].bat` runs the building process in the background. If you want to show the progress inside the
command prompt, use the `BuildPlugin_[Ver]_Window.bat` file.
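
Putting the parameters above together, an invocation could look like the following sketch (all paths are illustrative; adjust them to your engine install and checkout, and note the argument order simply follows the description above):

    import subprocess

    # Hypothetical locations for UnrealBuildTool.exe and the project file.
    ubt = r"C:\UE_5.0\Engine\Binaries\DotNET\UnrealBuildTool\UnrealBuildTool.exe"
    subprocess.run([
        ubt,
        "Win64",               # platform to compile for
        r"-Project=C:\src\CommandletProject\CommandletProject.uproject",
        "Development",         # build type (DebugGame, Development, Package, ...)
        "-TargetType=Editor",  # Editor, Runtime, ...
    ], check=True)
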
Some files were not shown because too many files have changed in this diff.