mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00
Merge remote-tracking branch 'origin/develop' into bugfix/OP-6851_use-colorspace-for-rstexbin
This commit is contained in: commit aba4642e98
224 changed files with 17058 additions and 1161 deletions
14 .github/ISSUE_TEMPLATE/bug_report.yml vendored
@ -35,6 +35,13 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.17.2
        - 3.17.2-nightly.4
        - 3.17.2-nightly.3
        - 3.17.2-nightly.2
        - 3.17.2-nightly.1
        - 3.17.1
        - 3.17.1-nightly.3
        - 3.17.1-nightly.2
        - 3.17.1-nightly.1
        - 3.17.0
@ -128,13 +135,6 @@ body:
        - 3.14.11-nightly.2
        - 3.14.11-nightly.1
        - 3.14.10
        - 3.14.10-nightly.9
        - 3.14.10-nightly.8
        - 3.14.10-nightly.7
        - 3.14.10-nightly.6
        - 3.14.10-nightly.5
        - 3.14.10-nightly.4
        - 3.14.10-nightly.3
    validations:
      required: true
  - type: dropdown
735 CHANGELOG.md
@ -1,6 +1,741 @@
# Changelog

## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.1...3.17.2)

### **🆕 New features**

<details>
<summary>Maya: Add MayaPy application. <a href="https://github.com/ynput/OpenPype/pull/5705">#5705</a></summary>

This adds MayaPy to the applications that can be launched from a task.

___

</details>

<details>
<summary>Feature: Copy resources when downloading last workfile <a href="https://github.com/ynput/OpenPype/pull/4944">#4944</a></summary>

When the last published workfile is downloaded as a prelaunch hook, all resource files referenced in the workfile representation are copied to the `resources` folder, which is inside the local workfile folder.

___

</details>

<details>
<summary>Blender: Deadline support <a href="https://github.com/ynput/OpenPype/pull/5438">#5438</a></summary>

Add Deadline support for Blender.

___

</details>

<details>
<summary>Fusion: implement toggle to use Deadline plugin FusionCmd <a href="https://github.com/ynput/OpenPype/pull/5678">#5678</a></summary>

Fusion 17 doesn't work in Deadline 10.3, but FusionCmd does, and it is probably the better option as a headless variant. The Fusion plugin seems to close and reopen the application when the worker runs on an artist machine; this does not happen with FusionCmd. Added configuration to Project Settings so an admin can select the appropriate Deadline plugin.

___

</details>

<details>
<summary>Loader tool: Refactor loader tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5729">#5729</a></summary>

Refactored the loader tool into a new tool. Separated backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. The tool also replaces the library loader.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Maya: implement matchmove publishing <a href="https://github.com/ynput/OpenPype/pull/5445">#5445</a></summary>

Add the possibility to export multiple cameras in a single `matchmove` family instance, both as `abc` and `ma`. Exposed a 'Keep image planes' flag to control export of image planes.

___

</details>

<details>
<summary>Maya: Add optional Fbx extractors in Rig and Animation family <a href="https://github.com/ynput/OpenPype/pull/5589">#5589</a></summary>

This PR allows users to optionally export control rigs (optionally with meshes) and animated rigs to FBX by attaching the rig objects to the two newly introduced sets.

___

</details>

<details>
<summary>Maya: Optional Resolution Validator for Render <a href="https://github.com/ynput/OpenPype/pull/5693">#5693</a></summary>

Adds an optional resolution validator for the Maya render family, similar to the one in Max. It checks whether the resolution in the render settings matches the one configured in the database.

___

</details>

<details>
<summary>Use host's node uniqueness for instance id in new publisher <a href="https://github.com/ynput/OpenPype/pull/5490">#5490</a></summary>

Instead of writing `instance_id` as a parm or attribute on the publish instances we can, for some hosts, rely on a unique name or path within the scene to refer to that particular instance. By doing so we fix #4820, because when such a publish instance is duplicated using the host's (DCC) functionality, uniqueness for the duplicate is already ensured, instead of the attributes keeping the exact same value as the instance they were duplicated from, which made `instance_id` a non-unique value.

___

</details>
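
The gist of the change, as a minimal sketch (the helper names `list_instance_nodes` and `read_instance_data` are illustrative, not the actual OpenPype API):

```python
# Derive the instance id from the node's unique name/path instead of a
# stored attribute, so a duplicated node gets a new id automatically.
def collect_instances(list_instance_nodes, read_instance_data):
    instances = []
    for node in list_instance_nodes():
        data = read_instance_data(node)
        data["instance_id"] = node.path()  # unique within the scene
        instances.append(data)
    return instances
```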

<details>
<summary>Max: Implementation of OCIO configuration <a href="https://github.com/ynput/OpenPype/pull/5499">#5499</a></summary>

Resolves #5473. Implementation of OCIO configuration for Max 2024, following the Max 2024 update.

___

</details>

<details>
<summary>Nuke: Multiple format supports for ExtractReviewDataMov <a href="https://github.com/ynput/OpenPype/pull/5623">#5623</a></summary>

This PR fixes the bug that the `ExtractReviewDataMov` plugin could not support extensions other than `mov`. The plugin is also renamed to `ExtractReviewDataBakingStreams`, as it provides multiple format support.

___

</details>

<details>
<summary>Bugfix: houdini switching context doesnt update variables <a href="https://github.com/ynput/OpenPype/pull/5651">#5651</a></summary>

Allows admins to have a list of vars (e.g. JOB) with (dynamic) values that will be updated on context changes, e.g. when switching to another asset or task. Using template keys is supported, but capitalization variants of the formatting keys are not, e.g. {Asset} and {ASSET} won't work. Disabling the "Update Houdini vars on context change" feature will leave all Houdini vars unmanaged, and no context update changes will occur. Also, this PR adds a new menu button to update vars on demand.

___

</details>

<details>
<summary>Publisher: Fix report maker memory leak + optimize lookups using set <a href="https://github.com/ynput/OpenPype/pull/5667">#5667</a></summary>

Fixes a memory leak where resetting the publisher did not clear the plugins stored for the Publish Report Maker. Also changes the stored plugins to a `set` to optimize lookup speed.

___

</details>

<details>
<summary>Add openpype_mongo command flag for testing. <a href="https://github.com/ynput/OpenPype/pull/5676">#5676</a></summary>

Instead of changing the environment, this command flag allows for changing the database.

___

</details>

<details>
<summary>Nuke: minor docstring and code tweaks for ExtractReviewMov <a href="https://github.com/ynput/OpenPype/pull/5695">#5695</a></summary>

Code and docstring tweaks on https://github.com/ynput/OpenPype/pull/5623

___

</details>

<details>
<summary>AYON: Small settings fixes <a href="https://github.com/ynput/OpenPype/pull/5699">#5699</a></summary>

Small changes/fixes related to AYON settings. All Foundry app variants `13-0` have the label `13.0`. The `"ExtractReviewIntermediates"` key is not mandatory in settings.

___

</details>

<details>
<summary>Blender: Alembic Animation loader <a href="https://github.com/ynput/OpenPype/pull/5711">#5711</a></summary>

Implemented loading Alembic Animations in Blender.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Missing "data" field and enabling of audio <a href="https://github.com/ynput/OpenPype/pull/5618">#5618</a></summary>

When updating audio containers, the "data" field was missing and the audio node was not enabled on the timeline.

___

</details>

<details>
<summary>Maya: Bug in validate Plug-in Path Attribute <a href="https://github.com/ynput/OpenPype/pull/5687">#5687</a></summary>

Overwriting a list with a string was causing `TypeError: string indices must be integers` in subsequent iterations, crashing the validator plugin.

___

</details>

<details>
<summary>General: Avoid fallback if value is 0 for handle start/end <a href="https://github.com/ynput/OpenPype/pull/5652">#5652</a></summary>

There's a bug in `pyblish_functions.get_time_data_from_instance_or_context`: if `handleStart` or `handleEnd` on the instance is set to 0, it falls back to grabbing the handles from the instance context. Instead, the logic should only fall back to `instance.context` if the key doesn't exist. This change only affected me on `handleStart`/`handleEnd`, and it's unlikely to cause issues for `frameStart`, `frameEnd` or `fps`, but regardless, the `get` logic was wrong.

___

</details>

<details>
<summary>Fusion: added missing env vars to Deadline submission <a href="https://github.com/ynput/OpenPype/pull/5659">#5659</a></summary>

Environment variables discerning the type of job were missing. Without them, injection of environment variables won't start.

___

</details>

<details>
<summary>Nuke: workfile version synchronization settings fixed <a href="https://github.com/ynput/OpenPype/pull/5662">#5662</a></summary>

The setting for synchronizing the workfile version to published products is fixed.

___

</details>

<details>
<summary>AYON Workfiles Tool: Open workfile changes context <a href="https://github.com/ynput/OpenPype/pull/5671">#5671</a></summary>

Change context when a workfile is opened.

___

</details>

<details>
<summary>Blender: Fix remove/update in new layout instance <a href="https://github.com/ynput/OpenPype/pull/5679">#5679</a></summary>

Fixes an error that occurs when removing or updating an asset in a new layout instance.

___

</details>

<details>
<summary>AYON Launcher tool: Fix refresh btn <a href="https://github.com/ynput/OpenPype/pull/5685">#5685</a></summary>

The refresh button now propagates refreshed content properly. Folders and tasks are cached for 60 seconds instead of 10 seconds. Auto-refresh in the launcher refreshes only actions and their related data, which is the project and project settings.

___

</details>

<details>
<summary>Deadline: handle all valid paths in RenderExecutable <a href="https://github.com/ynput/OpenPype/pull/5694">#5694</a></summary>

This commit enhances the path resolution mechanism in the RenderExecutable function of the Ayon plugin. Previously, the function only considered paths starting with a tilde (~), ignoring other valid paths listed in exe_list. This limitation led to an empty expanded_paths list when none of the paths in exe_list started with a tilde, causing the function to fail to find the Ayon executable. With this fix, the RenderExecutable function correctly processes and includes all valid paths from exe_list, improving its reliability and preventing unnecessary errors related to locating the Ayon executable.

___

</details>

<details>
<summary>AYON Launcher tool: Fix skip last workfile boolean <a href="https://github.com/ynput/OpenPype/pull/5700">#5700</a></summary>

The skip last workfile boolean works as expected.

___

</details>

<details>
<summary>Chore: Explore here action can work without task <a href="https://github.com/ynput/OpenPype/pull/5703">#5703</a></summary>

The Explore here action no longer crashes when no task is selected, and the error message was changed slightly.

___

</details>

<details>
<summary>Testing: Inject mongo_url argument earlier <a href="https://github.com/ynput/OpenPype/pull/5706">#5706</a></summary>

Fix for https://github.com/ynput/OpenPype/pull/5676. The Mongo URL is used earlier in the execution.

___

</details>

<details>
<summary>Blender: Add support to auto-install PySide2 in blender 4 <a href="https://github.com/ynput/OpenPype/pull/5723">#5723</a></summary>

Change the version regex to support the Blender 4 subfolder.

___

</details>

<details>
<summary>Fix: Hardcoded main site and wrongly copied workfile <a href="https://github.com/ynput/OpenPype/pull/5733">#5733</a></summary>

Fixing these two issues:
- Hardcoded main site -> Replaced by `anatomy.fill_root`.
- Workfiles could sometimes be copied when they shouldn't be.

___

</details>

<details>
<summary>Bugfix: ServerDeleteOperation asset -> folder conversion typo <a href="https://github.com/ynput/OpenPype/pull/5735">#5735</a></summary>

Fix ServerDeleteOperation asset -> folder conversion typo

___

</details>

<details>
<summary>Nuke: loaders are filtering correctly <a href="https://github.com/ynput/OpenPype/pull/5739">#5739</a></summary>

The variable name for filtering by extensions was not correct: it is supposed to be plural. It is fixed now and filtering works as expected.

___

</details>

<details>
<summary>Nuke: failing multiple thumbnails integration <a href="https://github.com/ynput/OpenPype/pull/5741">#5741</a></summary>

This handles the situation when `ExtractReviewIntermediates` (previously `ExtractReviewDataMov`) has multiple outputs, including thumbnails that need to be integrated. Previously, integrating the thumbnail representation caused an issue in the integration process. We have now resolved this by no longer integrating thumbnails as loadable representations. The default is now that thumbnail representations are NOT integrated (e.g. they will not show up in the DB and cannot be loaded in the Loader) and no `_thumb.jpg` will be left in the (most likely `render`) publish folder. If there is a need to override this behavior, use `project_settings/global/publish/PreIntegrateThumbnails`.

___

</details>

<details>
<summary>AYON Settings: Fix global overrides <a href="https://github.com/ynput/OpenPype/pull/5745">#5745</a></summary>

The `output` dictionary that gets passed into `ayon_settings._convert_global_project_settings` was replaced when converting the settings for `ExtractOIIOTranscode`. This resulted in `global` not being in the output dictionary, and thus the defaults being used instead of the project overrides.

___

</details>

<details>
<summary>Chore: AYON query functions arguments <a href="https://github.com/ynput/OpenPype/pull/5752">#5752</a></summary>

Fixed how the `archived` argument is handled in the get subsets/assets functions.

___

</details>

### **🔀 Refactored code**

<details>
<summary>Publisher: Refactor Report Maker plugin data storage to be a dict by plugin.id <a href="https://github.com/ynput/OpenPype/pull/5668">#5668</a></summary>

Refactor Report Maker plugin data storage to be a dict keyed by `plugin.id`. Also fixes the `_current_plugin_data` type in `__init__`.

___

</details>

<details>
<summary>Chore: Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost <a href="https://github.com/ynput/OpenPype/pull/5701">#5701</a></summary>

Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost

___

</details>

### **Merged pull requests**

<details>
<summary>Chore: Maya reduce get project settings calls <a href="https://github.com/ynput/OpenPype/pull/5669">#5669</a></summary>

Re-use system settings / project settings where we can instead of re-querying.

___

</details>

<details>
<summary>Extended error message when getting subset name <a href="https://github.com/ynput/OpenPype/pull/5649">#5649</a></summary>

Each Creator uses the `get_subset_name` functions, which collect context data and fill the configured template's placeholders. If any key is missing for the template, a non-descriptive error is thrown. This should provide a more verbose message.

___

</details>

<details>
<summary>Tests: Remove checks for env var <a href="https://github.com/ynput/OpenPype/pull/5696">#5696</a></summary>

The env var will be filled in the `env_var` fixture; here it is too early to check.

___

</details>

## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.0...3.17.1)

### **🆕 New features**

<details>
<summary>Unreal: Yeti support <a href="https://github.com/ynput/OpenPype/pull/5643">#5643</a></summary>

Implemented Yeti support for Unreal.

___

</details>

<details>
<summary>Houdini: Add Static Mesh product-type (family) <a href="https://github.com/ynput/OpenPype/pull/5481">#5481</a></summary>

This PR adds support to publish Unreal Static Meshes in Houdini as FBX. Quick recap:
- [x] Add UE Static Mesh Creator
- [x] Dynamic subset name like in Maya
- [x] Collect Static Mesh Type
- [x] Update collect output node
- [x] Validate FBX output node
- [x] Validate mesh is static
- [x] Validate Unreal Static Mesh Name
- [x] Validate Subset Name
- [x] FBX Extractor
- [x] FBX Loader
- [x] Update OP Settings
- [x] Update AYON Settings

___

</details>

<details>
<summary>Launcher tool: Refactor launcher tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5612">#5612</a></summary>

Refactored the launcher tool into a new tool. Separated backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Maya: Use custom staging dir function for Maya renders - OP-5265 <a href="https://github.com/ynput/OpenPype/pull/5186">#5186</a></summary>

Check for a custom staging dir when setting the renders output folder in Maya.

___

</details>

<details>
<summary>Colorspace: updating file path detection methods <a href="https://github.com/ynput/OpenPype/pull/5273">#5273</a></summary>

Support for OCIO v2 file rules integrated into the available color management API.

___

</details>

<details>
<summary>Chore: add default isort config <a href="https://github.com/ynput/OpenPype/pull/5572">#5572</a></summary>

Add default configuration for the isort tool.

___

</details>

<details>
<summary>Deadline: set PATH environment in deadline jobs by GlobalJobPreLoad <a href="https://github.com/ynput/OpenPype/pull/5622">#5622</a></summary>

This PR makes `GlobalJobPreLoad` set the `PATH` environment variable in Deadline jobs so that we don't have to use the full executable path for Deadline to launch the DCC app. This trick should save us from adding logic to pass the Houdini patch version and from modifying the Houdini Deadline plugin. It should work with other DCCs too.

___

</details>

<details>
<summary>nuke: extract review data mov read node with expression <a href="https://github.com/ynput/OpenPype/pull/5635">#5635</a></summary>

Some productions might have default values set for read nodes; those settings no longer collide.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Support new publisher for colorsets validation. <a href="https://github.com/ynput/OpenPype/pull/5630">#5630</a></summary>

Fix `validate_color_sets` for the new publisher. In current `develop` the repair option does not appear due to wrong error raising.

___

</details>

<details>
<summary>Houdini: Camera Loader fix mismatch for Maya cameras <a href="https://github.com/ynput/OpenPype/pull/5584">#5584</a></summary>

This PR adds
- A workaround to match the Maya render mask in Houdini
- A `SetCameraResolution` inventory action
- Setting the camera resolution when loading or updating a camera

___

</details>

<details>
<summary>Nuke: fix set colorspace on writes <a href="https://github.com/ynput/OpenPype/pull/5634">#5634</a></summary>

Colorspace is now set correctly on any write node created from the publisher.

___

</details>

<details>
<summary>TVPaint: Fix review family extraction <a href="https://github.com/ynput/OpenPype/pull/5637">#5637</a></summary>

The extractor marks the representation of a review instance with the review tag.

___

</details>

<details>
<summary>AYON settings: Extract OIIO transcode settings <a href="https://github.com/ynput/OpenPype/pull/5639">#5639</a></summary>

Output definitions of Extract OIIO transcode have names to match OpenPype settings, and the settings are converted to a dictionary in the settings conversion.

___

</details>

<details>
<summary>AYON: Fix task type short name conversion <a href="https://github.com/ynput/OpenPype/pull/5641">#5641</a></summary>

Convert the AYON task type short name for OpenPype correctly.

___

</details>

<details>
<summary>colorspace: missing `allowed_exts` fix <a href="https://github.com/ynput/OpenPype/pull/5646">#5646</a></summary>

The colorspace module no longer fails due to the missing `allowed_exts` attribute.

___

</details>

<details>
<summary>Photoshop: remove trailing underscore in subset name <a href="https://github.com/ynput/OpenPype/pull/5647">#5647</a></summary>

If the {layer} placeholder is at the end of the subset name template and not used (for example in `auto_image`, where separating by layer doesn't make any sense), a trailing '_' was kept. This updates the cleaning logic and extracts it, as the regular `image` instance may behave similarly.

___

</details>

<details>
<summary>traypublisher: missing `assetEntity` in context data <a href="https://github.com/ynput/OpenPype/pull/5648">#5648</a></summary>

The issue with the missing `assetEntity` key in context data is no longer a problem.

___

</details>

<details>
<summary>AYON: Workfiles tool save button works <a href="https://github.com/ynput/OpenPype/pull/5653">#5653</a></summary>

Fix the Save As button in the workfiles tool. (It is a mystery why this stopped working.)

___

</details>

<details>
<summary>Max: bug fix delete items from container <a href="https://github.com/ynput/OpenPype/pull/5658">#5658</a></summary>

Fix the bug shown when clicking "Delete Items from Container", selecting nothing, and pressing OK.

___

</details>

### **🔀 Refactored code**

<details>
<summary>Chore: Remove unused functions from Fusion integration <a href="https://github.com/ynput/OpenPype/pull/5617">#5617</a></summary>

Clean up unused code from the Fusion integration.

___

</details>

### **Merged pull requests**

<details>
<summary>Increase timeout for deadline test <a href="https://github.com/ynput/OpenPype/pull/5654">#5654</a></summary>

Deadline picks up jobs quite slowly, so bump up the delay.

___

</details>

## [3.17.0](https://github.com/ynput/OpenPype/tree/3.17.0)
@ -290,11 +290,15 @@ def run(script):
              "--setup_only",
              help="Only create dbs, do not run tests",
              default=None)
+@click.option("--mongo_url",
+              help="MongoDB for testing.",
+              default=None)
def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant,
-             timeout, setup_only):
+             timeout, setup_only, mongo_url):
    """Run all automatic tests after proper initialization via start.py"""
    PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
-                             persist, app_variant, timeout, setup_only)
+                             persist, app_variant, timeout, setup_only,
+                             mongo_url)


@main.command(help="DEPRECATED - run sync server")
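
A usage sketch of the new flag (the URL value is illustrative):

```
openpype runtests --mongo_url "mongodb://localhost:27017"
```

The value is forwarded through `PypeCommands().run_tests`, so a test run can target a different database without changing the environment.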
@ -75,9 +75,9 @@ def _get_subsets(
):
        fields.add(key)

-    active = None
+    active = True
    if archived:
-        active = False
+        active = None

    for subset in con.get_products(
        project_name,

@ -196,7 +196,7 @@ def get_assets(

    active = True
    if archived:
-        active = False
+        active = None

    con = get_server_api_connection()
    fields = folder_fields_v3_to_v4(fields, con)
@ -422,7 +422,7 @@ def failed_json_default(value):


class ServerCreateOperation(CreateOperation):
-    """Opeartion to create an entity.
+    """Operation to create an entity.

    Args:
        project_name (str): On which project operation will happen.

@ -634,7 +634,7 @@ class ServerUpdateOperation(UpdateOperation):


class ServerDeleteOperation(DeleteOperation):
-    """Opeartion to delete an entity.
+    """Operation to delete an entity.

    Args:
        project_name (str): On which project operation will happen.

@ -647,7 +647,7 @@ class ServerDeleteOperation(DeleteOperation):
        self._session = session

        if entity_type == "asset":
-            entity_type == "folder"
+            entity_type = "folder"

        elif entity_type == "hero_version":
            entity_type = "version"
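
The one-character fix above is easy to miss, so spelled out:

```python
entity_type = "asset"
entity_type == "folder"  # comparison only; evaluates to False and is discarded
entity_type = "folder"   # assignment; actually converts the entity type
```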
@ -2,7 +2,7 @@ import subprocess
from openpype.lib.applications import PreLaunchHook, LaunchTypes


-class LaunchFoundryAppsWindows(PreLaunchHook):
+class LaunchNewConsoleApps(PreLaunchHook):
    """Foundry applications have specific way how to launch them.

    Nuke is executed "like" python process so it is required to pass

@ -13,13 +13,15 @@ class LaunchFoundryAppsWindows(PreLaunchHook):

    # Should be as last hook because must change launch arguments to string
    order = 1000
-    app_groups = {"nuke", "nukeassist", "nukex", "hiero", "nukestudio"}
+    app_groups = {
+        "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "mayapy"
+    }
    platforms = {"windows"}
    launch_types = {LaunchTypes.local}

    def execute(self):
        # Change `creationflags` to CREATE_NEW_CONSOLE
-        # - on Windows nuke will create new window using its console
+        # - on Windows some apps will create new window using its console
        # Set `stdout` and `stderr` to None so new created console does not
        # have redirected output to DEVNULL in build
        self.launch_context.kwargs.update({

@ -13,7 +13,7 @@ class OCIOEnvHook(PreLaunchHook):
        "fusion",
        "blender",
        "aftereffects",
-        "max",
+        "3dsmax",
        "houdini",
        "maya",
        "nuke",

@ -38,6 +38,8 @@ from .lib import (

from .capture import capture

+from .render_lib import prepare_rendering
+

__all__ = [
    "install",

@ -66,4 +68,5 @@ __all__ = [
    "get_selection",
    "capture",
    # "unique_name",
+    "prepare_rendering",
]
51 openpype/hosts/blender/api/colorspace.py Normal file
@ -0,0 +1,51 @@
import attr

import bpy


@attr.s
class LayerMetadata(object):
    """Data class for Render Layer metadata."""
    frameStart = attr.ib()
    frameEnd = attr.ib()


@attr.s
class RenderProduct(object):
    """
    Getting Colorspace as Specific Render Product Parameter for submitting
    publish job.
    """
    colorspace = attr.ib()  # colorspace
    view = attr.ib()  # OCIO view transform
    productName = attr.ib(default=None)


class ARenderProduct(object):
    def __init__(self):
        """Constructor."""
        # Initialize
        self.layer_data = self._get_layer_data()
        self.layer_data.products = self.get_render_products()

    def _get_layer_data(self):
        scene = bpy.context.scene

        return LayerMetadata(
            frameStart=int(scene.frame_start),
            frameEnd=int(scene.frame_end),
        )

    def get_render_products(self):
        """To be implemented by renderer class.

        This should return a list of RenderProducts.

        Returns:
            list: List of RenderProduct
        """
        return [
            RenderProduct(
                colorspace="sRGB",
                view="ACES 1.0",
                productName=""
            )
        ]
@ -16,6 +16,7 @@ import bpy
import bpy.utils.previews

from openpype import style
+from openpype import AYON_SERVER_ENABLED
from openpype.pipeline import get_current_asset_name, get_current_task_name
from openpype.tools.utils import host_tools

@ -331,10 +332,11 @@ class LaunchWorkFiles(LaunchQtApp):

    def execute(self, context):
        result = super().execute(context)
-        self._window.set_context({
-            "asset": get_current_asset_name(),
-            "task": get_current_task_name()
-        })
+        if not AYON_SERVER_ENABLED:
+            self._window.set_context({
+                "asset": get_current_asset_name(),
+                "task": get_current_task_name()
+            })
        return result

    def before_window_show(self):

@ -460,36 +460,6 @@ def ls() -> Iterator:
        yield parse_container(container)


-def update_hierarchy(containers):
-    """Hierarchical container support
-
-    This is the function to support Scene Inventory to draw hierarchical
-    view for containers.
-
-    We need both parent and children to visualize the graph.
-
-    """
-
-    all_containers = set(ls())  # lookup set
-
-    for container in containers:
-        # Find parent
-        # FIXME (jasperge): re-evaluate this. How would it be possible
-        # to 'nest' assets? Collections can have several parents, for
-        # now assume it has only 1 parent
-        parent = [
-            coll for coll in bpy.data.collections if container in coll.children
-        ]
-        for node in parent:
-            if node in all_containers:
-                container["parent"] = node
-                break
-
-        log.debug("Container: %s", container)
-
-        yield container


def publish():
    """Shorthand to publish from within host."""
255 openpype/hosts/blender/api/render_lib.py Normal file
@ -0,0 +1,255 @@
import os

import bpy

from openpype.settings import get_project_settings
from openpype.pipeline import get_current_project_name


def get_default_render_folder(settings):
    """Get default render folder from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["default_render_image_folder"])


def get_aov_separator(settings):
    """Get aov separator from blender settings."""

    aov_sep = (settings["blender"]
                       ["RenderSettings"]
                       ["aov_separator"])

    if aov_sep == "dash":
        return "-"
    elif aov_sep == "underscore":
        return "_"
    elif aov_sep == "dot":
        return "."
    else:
        raise ValueError(f"Invalid aov separator: {aov_sep}")


def get_image_format(settings):
    """Get image format from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["image_format"])


def get_multilayer(settings):
    """Get multilayer from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["multilayer_exr"])


def get_render_product(output_path, name, aov_sep):
    """
    Generate the path to the render product. Blender interprets the `#`
    as the frame number, when it renders.

    Args:
        output_path (str): The root folder for the rendered images.
        name (str): The name of the render product.
        aov_sep (str): The AOV separator set in settings.
    """
    filepath = os.path.join(output_path, name)
    render_product = f"{filepath}{aov_sep}beauty.####"
    render_product = render_product.replace("\\", "/")

    return render_product


def set_render_format(ext, multilayer):
    # Set Blender to save the file with the right extension
    bpy.context.scene.render.use_file_extension = True

    image_settings = bpy.context.scene.render.image_settings

    if ext == "exr":
        image_settings.file_format = (
            "OPEN_EXR_MULTILAYER" if multilayer else "OPEN_EXR")
    elif ext == "bmp":
        image_settings.file_format = "BMP"
    elif ext == "rgb":
        image_settings.file_format = "IRIS"
    elif ext == "png":
        image_settings.file_format = "PNG"
    elif ext == "jpeg":
        image_settings.file_format = "JPEG"
    elif ext == "jp2":
        image_settings.file_format = "JPEG2000"
    elif ext == "tga":
        image_settings.file_format = "TARGA"
    elif ext == "tif":
        image_settings.file_format = "TIFF"


def set_render_passes(settings):
    aov_list = (settings["blender"]
                        ["RenderSettings"]
                        ["aov_list"])

    custom_passes = (settings["blender"]
                             ["RenderSettings"]
                             ["custom_passes"])

    vl = bpy.context.view_layer

    vl.use_pass_combined = "combined" in aov_list
    vl.use_pass_z = "z" in aov_list
    vl.use_pass_mist = "mist" in aov_list
    vl.use_pass_normal = "normal" in aov_list
    vl.use_pass_diffuse_direct = "diffuse_light" in aov_list
    vl.use_pass_diffuse_color = "diffuse_color" in aov_list
    vl.use_pass_glossy_direct = "specular_light" in aov_list
    vl.use_pass_glossy_color = "specular_color" in aov_list
    vl.eevee.use_pass_volume_direct = "volume_light" in aov_list
    vl.use_pass_emit = "emission" in aov_list
    vl.use_pass_environment = "environment" in aov_list
    vl.use_pass_shadow = "shadow" in aov_list
    vl.use_pass_ambient_occlusion = "ao" in aov_list

    cycles = vl.cycles

    cycles.denoising_store_passes = "denoising" in aov_list
    cycles.use_pass_volume_direct = "volume_direct" in aov_list
    cycles.use_pass_volume_indirect = "volume_indirect" in aov_list

    aovs_names = [aov.name for aov in vl.aovs]
    for cp in custom_passes:
        cp_name = cp[0]
        if cp_name not in aovs_names:
            aov = vl.aovs.add()
            aov.name = cp_name
        else:
            aov = vl.aovs[cp_name]
        aov.type = cp[1].get("type", "VALUE")

    return aov_list, custom_passes


def set_node_tree(output_path, name, aov_sep, ext, multilayer):
    # Set the scene to use the compositor node tree to render
    bpy.context.scene.use_nodes = True

    tree = bpy.context.scene.node_tree

    # Get the Render Layers node
    rl_node = None
    for node in tree.nodes:
        if node.bl_idname == "CompositorNodeRLayers":
            rl_node = node
            break

    # If there's not a Render Layers node, we create it
    if not rl_node:
        rl_node = tree.nodes.new("CompositorNodeRLayers")

    # Get the enabled output sockets, that are the active passes for the
    # render.
    # We also exclude some layers.
    exclude_sockets = ["Image", "Alpha", "Noisy Image"]
    passes = [
        socket
        for socket in rl_node.outputs
        if socket.enabled and socket.name not in exclude_sockets
    ]

    # Remove all output nodes
    for node in tree.nodes:
        if node.bl_idname == "CompositorNodeOutputFile":
            tree.nodes.remove(node)

    # Create a new output node
    output = tree.nodes.new("CompositorNodeOutputFile")

    image_settings = bpy.context.scene.render.image_settings
    output.format.file_format = image_settings.file_format

    # In case of a multilayer exr, we don't need to use the output node,
    # because the blender render already outputs a multilayer exr.
    if ext == "exr" and multilayer:
        output.layer_slots.clear()
        return []

    output.file_slots.clear()
    output.base_path = output_path

    aov_file_products = []

    # For each active render pass, we add a new socket to the output node
    # and link it
    for render_pass in passes:
        filepath = f"{name}{aov_sep}{render_pass.name}.####"

        output.file_slots.new(filepath)

        aov_file_products.append(
            (render_pass.name, os.path.join(output_path, filepath)))

        node_input = output.inputs[-1]

        tree.links.new(render_pass, node_input)

    return aov_file_products


def imprint_render_settings(node, data):
    RENDER_DATA = "render_data"
    if not node.get(RENDER_DATA):
        node[RENDER_DATA] = {}
    for key, value in data.items():
        if value is None:
            continue
        node[RENDER_DATA][key] = value


def prepare_rendering(asset_group):
    name = asset_group.name

    filepath = bpy.data.filepath
    assert filepath, "Workfile not saved. Please save the file first."

    file_path = os.path.dirname(filepath)
    file_name = os.path.basename(filepath)
    file_name, _ = os.path.splitext(file_name)

    project = get_current_project_name()
    settings = get_project_settings(project)

    render_folder = get_default_render_folder(settings)
    aov_sep = get_aov_separator(settings)
    ext = get_image_format(settings)
    multilayer = get_multilayer(settings)

    set_render_format(ext, multilayer)
    aov_list, custom_passes = set_render_passes(settings)

    output_path = os.path.join(file_path, render_folder, file_name)

    render_product = get_render_product(output_path, name, aov_sep)
    aov_file_product = set_node_tree(
        output_path, name, aov_sep, ext, multilayer)

    bpy.context.scene.render.filepath = render_product

    render_settings = {
        "render_folder": render_folder,
        "aov_separator": aov_sep,
        "image_format": ext,
        "multilayer_exr": multilayer,
        "aov_list": aov_list,
        "custom_passes": custom_passes,
        "render_product": render_product,
        "aov_file_product": aov_file_product,
        "review": True,
    }

    imprint_render_settings(asset_group, render_settings)
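
A minimal call sketch of how these helpers tie together (normally the Render creator below passes the instance collection it just created, and the workfile must already be saved for the assert to pass):

```python
import bpy

from openpype.hosts.blender.api.render_lib import prepare_rendering

asset_group = bpy.data.collections.new(name="renderingMain")
bpy.context.scene.collection.children.link(asset_group)

# Configures output format, view layer passes, the compositor File Output
# node tree, the scene render filepath, and imprints "render_data".
prepare_rendering(asset_group)
```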
@ -31,7 +31,7 @@ class InstallPySideToBlender(PreLaunchHook):

    def inner_execute(self):
        # Get blender's python directory
-        version_regex = re.compile(r"^[2-3]\.[0-9]+$")
+        version_regex = re.compile(r"^[2-4]\.[0-9]+$")

        platform = system().lower()
        executable = self.launch_context.executable.executable_path

53 openpype/hosts/blender/plugins/create/create_render.py Normal file
@ -0,0 +1,53 @@
"""Create render."""
import bpy

from openpype.pipeline import get_current_task_name
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.render_lib import prepare_rendering
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES


class CreateRenderlayer(plugin.Creator):
    """Single baked camera"""

    name = "renderingMain"
    label = "Render"
    family = "render"
    icon = "eye"

    def process(self):
        # Get Instance Container or create it if it does not exist
        instances = bpy.data.collections.get(AVALON_INSTANCES)
        if not instances:
            instances = bpy.data.collections.new(name=AVALON_INSTANCES)
            bpy.context.scene.collection.children.link(instances)

        # Create instance object
        asset = self.data["asset"]
        subset = self.data["subset"]
        name = plugin.asset_name(asset, subset)
        asset_group = bpy.data.collections.new(name=name)

        try:
            instances.children.link(asset_group)
            self.data['task'] = get_current_task_name()
            lib.imprint(asset_group, self.data)

            prepare_rendering(asset_group)
        except Exception:
            # Remove the instance if there was an error
            bpy.data.collections.remove(asset_group)
            raise

        # TODO: this is undesirable, but it's the only way to be sure that
        # the file is saved before the render starts.
        # Blender, by design, doesn't set the file as dirty if modifications
        # happen by script. So, when creating the instance and setting the
        # render settings, the file is not marked as dirty. This means that
        # there is the risk of sending to deadline a file without the right
        # settings. Even the validator to check that the file is saved will
        # detect the file as saved, even if it isn't. The only solution for
        # now is to force the file to be saved.
        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)

        return asset_group
@ -26,8 +26,7 @@ class CacheModelLoader(plugin.AssetLoader):
    Note:
        At least for now it only supports Alembic files.
    """
-
-    families = ["model", "pointcache"]
+    families = ["model", "pointcache", "animation"]
    representations = ["abc"]

    label = "Load Alembic"

@ -53,16 +52,12 @@ class CacheModelLoader(plugin.AssetLoader):
    def _process(self, libpath, asset_group, group_name):
        plugin.deselect_all()

-        collection = bpy.context.view_layer.active_layer_collection.collection
-
        relative = bpy.context.preferences.filepaths.use_relative_paths
        bpy.ops.wm.alembic_import(
            filepath=libpath,
            relative_path=relative
        )

-        parent = bpy.context.scene.collection
-
        imported = lib.get_selection()

        # Children must be linked before parents,

@ -79,6 +74,10 @@ class CacheModelLoader(plugin.AssetLoader):
        objects.reverse()

        for obj in objects:
+            # Unlink the object from all collections
+            collections = obj.users_collection
+            for collection in collections:
+                collection.objects.unlink(obj)
            name = obj.name
            obj.name = f"{group_name}:{name}"
            if obj.type != 'EMPTY':

@ -90,7 +89,7 @@ class CacheModelLoader(plugin.AssetLoader):
                material_slot.material.name = f"{group_name}:{name_mat}"

            if not obj.get(AVALON_PROPERTY):
-                obj[AVALON_PROPERTY] = dict()
+                obj[AVALON_PROPERTY] = {}

            avalon_info = obj[AVALON_PROPERTY]
            avalon_info.update({"container_name": group_name})

@ -99,6 +98,18 @@ class CacheModelLoader(plugin.AssetLoader):

        return objects

+    def _link_objects(self, objects, collection, containers, asset_group):
+        # Link the imported objects to any collection where the asset group is
+        # linked to, except the AVALON_CONTAINERS collection
+        group_collections = [
+            collection
+            for collection in asset_group.users_collection
+            if collection != containers]
+
+        for obj in objects:
+            for collection in group_collections:
+                collection.objects.link(obj)
+
    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None

@ -120,18 +131,21 @@ class CacheModelLoader(plugin.AssetLoader):
        group_name = plugin.asset_name(asset, subset, unique_number)
        namespace = namespace or f"{asset}_{unique_number}"

-        avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS)
-        if not avalon_containers:
-            avalon_containers = bpy.data.collections.new(
-                name=AVALON_CONTAINERS)
-            bpy.context.scene.collection.children.link(avalon_containers)
+        containers = bpy.data.collections.get(AVALON_CONTAINERS)
+        if not containers:
+            containers = bpy.data.collections.new(name=AVALON_CONTAINERS)
+            bpy.context.scene.collection.children.link(containers)

        asset_group = bpy.data.objects.new(group_name, object_data=None)
-        avalon_containers.objects.link(asset_group)
+        containers.objects.link(asset_group)

        objects = self._process(libpath, asset_group, group_name)

-        bpy.context.scene.collection.objects.link(asset_group)
+        # Link the asset group to the active collection
+        collection = bpy.context.view_layer.active_layer_collection.collection
+        collection.objects.link(asset_group)
+
+        self._link_objects(objects, asset_group, containers, asset_group)

        asset_group[AVALON_PROPERTY] = {
            "schema": "openpype:container-2.0",

@ -207,7 +221,11 @@ class CacheModelLoader(plugin.AssetLoader):
        mat = asset_group.matrix_basis.copy()
        self._remove(asset_group)

-        self._process(str(libpath), asset_group, object_name)
+        objects = self._process(str(libpath), asset_group, object_name)
+
+        containers = bpy.data.collections.get(AVALON_CONTAINERS)
+        self._link_objects(objects, asset_group, containers, asset_group)

        asset_group.matrix_basis = mat

        metadata["libpath"] = str(libpath)
@ -244,7 +244,7 @@ class BlendLoader(plugin.AssetLoader):
        for parent in parent_containers:
            parent.get(AVALON_PROPERTY)["members"] = list(filter(
                lambda i: i not in members,
-                parent.get(AVALON_PROPERTY)["members"]))
+                parent.get(AVALON_PROPERTY).get("members", [])))

        for attr in attrs:
            for data in getattr(bpy.data, attr):
123 openpype/hosts/blender/plugins/publish/collect_render.py Normal file
@ -0,0 +1,123 @@
# -*- coding: utf-8 -*-
"""Collect render data."""

import os
import re

import bpy

from openpype.hosts.blender.api import colorspace
import pyblish.api


class CollectBlenderRender(pyblish.api.InstancePlugin):
    """Gather all publishable render layers from renderSetup."""

    order = pyblish.api.CollectorOrder + 0.01
    hosts = ["blender"]
    families = ["render"]
    label = "Collect Render Layers"
    sync_workfile_version = False

    @staticmethod
    def generate_expected_beauty(
        render_product, frame_start, frame_end, frame_step, ext
    ):
        """
        Generate the expected files for the render product for the beauty
        render. This returns a list of files that should be rendered. It
        replaces the sequence of `#` with the frame number.
        """
        path = os.path.dirname(render_product)
        file = os.path.basename(render_product)

        expected_files = []

        for frame in range(frame_start, frame_end + 1, frame_step):
            frame_str = str(frame).rjust(4, "0")
            filename = re.sub("#+", frame_str, file)
            expected_file = f"{os.path.join(path, filename)}.{ext}"
            expected_files.append(expected_file.replace("\\", "/"))

        return {
            "beauty": expected_files
        }

    @staticmethod
    def generate_expected_aovs(
        aov_file_product, frame_start, frame_end, frame_step, ext
    ):
        """
        Generate the expected files for each AOV file product. This returns
        a dictionary mapping each AOV name to the list of files that should
        be rendered for it, replacing the sequence of `#` with the frame
        number.
        """
        expected_files = {}

        for aov_name, aov_file in aov_file_product:
            path = os.path.dirname(aov_file)
            file = os.path.basename(aov_file)

            aov_files = []

            for frame in range(frame_start, frame_end + 1, frame_step):
                frame_str = str(frame).rjust(4, "0")
                filename = re.sub("#+", frame_str, file)
                expected_file = f"{os.path.join(path, filename)}.{ext}"
                aov_files.append(expected_file.replace("\\", "/"))

            expected_files[aov_name] = aov_files

        return expected_files

    def process(self, instance):
        context = instance.context

        render_data = bpy.data.collections[str(instance)].get("render_data")

        assert render_data, "No render data found."

        self.log.info(f"render_data: {dict(render_data)}")

        render_product = render_data.get("render_product")
        aov_file_product = render_data.get("aov_file_product")
        ext = render_data.get("image_format")
        multilayer = render_data.get("multilayer_exr")

        frame_start = context.data["frameStart"]
        frame_end = context.data["frameEnd"]
        frame_handle_start = context.data["frameStartHandle"]
        frame_handle_end = context.data["frameEndHandle"]

        expected_beauty = self.generate_expected_beauty(
            render_product, int(frame_start), int(frame_end),
            int(bpy.context.scene.frame_step), ext)

        expected_aovs = self.generate_expected_aovs(
            aov_file_product, int(frame_start), int(frame_end),
            int(bpy.context.scene.frame_step), ext)

        expected_files = expected_beauty | expected_aovs

        instance.data.update({
            "family": "render.farm",
            "frameStart": frame_start,
            "frameEnd": frame_end,
            "frameStartHandle": frame_handle_start,
            "frameEndHandle": frame_handle_end,
            "fps": context.data["fps"],
            "byFrameStep": bpy.context.scene.frame_step,
            "review": render_data.get("review", False),
            "multipartExr": ext == "exr" and multilayer,
            "farm": True,
            "expectedFiles": [expected_files],
            # OCIO not currently implemented in Blender, but the following
            # settings are required by the schema, so they are hardcoded.
            # TODO: Implement OCIO in Blender
            "colorspaceConfig": "",
            "colorspaceDisplay": "sRGB",
            "colorspaceView": "ACES 1.0 SDR-video",
            "renderProducts": colorspace.ARenderProduct(),
        })

        self.log.info(f"data: {instance.data}")
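
A standalone illustration of the frame expansion used by the collector (hypothetical values; the plugin reads these from the imprinted render data):

```python
import os
import re

render_product = "/proj/work/renders/shot010-beauty.####"
path, file = os.path.split(render_product)

# Replace the `#` padding with a zero-padded frame number and append ext
expected = [
    f"{os.path.join(path, re.sub('#+', str(f).rjust(4, '0'), file))}.exr"
    for f in range(1001, 1004)
]
# ['/proj/work/renders/shot010-beauty.1001.exr', ..., '...1003.exr']
```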
@ -9,7 +9,8 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
    label = "Increment Workfile Version"
    optional = True
    hosts = ["blender"]
-    families = ["animation", "model", "rig", "action", "layout", "blendScene"]
+    families = ["animation", "model", "rig", "action", "layout", "blendScene",
+                "render"]

    def process(self, context):
@ -0,0 +1,47 @@
import os

import bpy

import pyblish.api
from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from openpype.hosts.blender.api.render_lib import prepare_rendering


class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
                              OptionalPyblishPluginMixin):
    """Validates that the render file directory is
    not the same in every submission.
    """

    order = ValidateContentsOrder
    families = ["render.farm"]
    hosts = ["blender"]
    label = "Validate Render Output for Deadline"
    optional = True
    actions = [RepairAction]

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        filepath = bpy.data.filepath
        file = os.path.basename(filepath)
        filename, ext = os.path.splitext(file)
        if filename not in bpy.context.scene.render.filepath:
            raise PublishValidationError(
                "Render output folder doesn't match the blender scene name! "
                "Use the Repair action to fix the folder file path."
            )

    @classmethod
    def repair(cls, instance):
        container = bpy.data.collections[str(instance)]
        prepare_rendering(container)
        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)
        cls.log.debug("Reset the render output folder...")
@ -0,0 +1,20 @@
import bpy

import pyblish.api


class ValidateFileSaved(pyblish.api.InstancePlugin):
    """Validate that the workfile has been saved."""

    order = pyblish.api.ValidatorOrder - 0.01
    hosts = ["blender"]
    label = "Validate File Saved"
    optional = False
    exclude_families = []

    def process(self, instance):
        if [ef for ef in self.exclude_families
                if instance.data["family"] in ef]:
            return
        if bpy.data.is_dirty:
            raise RuntimeError("Workfile is not saved.")
@ -0,0 +1,17 @@
import bpy

import pyblish.api


class ValidateRenderCameraIsSet(pyblish.api.InstancePlugin):
    """Validate that there is a camera set as active for rendering."""

    order = pyblish.api.ValidatorOrder
    hosts = ["blender"]
    families = ["render"]
    label = "Validate Render Camera Is Set"
    optional = False

    def process(self, instance):
        if not bpy.context.scene.camera:
            raise RuntimeError("No camera is active for rendering.")
@@ -123,6 +123,9 @@ class CreateSaver(NewCreator):
     def _imprint(self, tool, data):
         # Save all data in a "openpype.{key}" = value data

+        # Instance id is the tool's name so we don't need to imprint as data
+        data.pop("instance_id", None)
+
         active = data.pop("active", None)
         if active is not None:
             # Use active value to set the passthrough state

@@ -188,6 +191,10 @@ class CreateSaver(NewCreator):
         passthrough = attrs["TOOLB_PassThrough"]
         data["active"] = not passthrough

+        # Override publisher's UUID generation because tool names are
+        # already unique in Fusion in a comp
+        data["instance_id"] = tool.Name
+
         return data

     def get_pre_create_attr_defs(self):
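Note: the round-trip is easiest to see outside Fusion. A small plain-Python sketch of the imprint/collect symmetry (the tool name is illustrative):

    data = {"subset": "renderMain", "instance_id": "Saver1", "active": True}

    # _imprint: never persist the id; store "active" as inverted passthrough
    data.pop("instance_id", None)
    passthrough = not data.pop("active")

    # on collect both values are re-derived from the tool itself:
    # data["active"] = not passthrough; data["instance_id"] = tool.Name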
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 import sys
 import os
+import errno
 import re
 import uuid
 import logging

@@ -9,10 +10,15 @@ import json

 import six

+from openpype.lib import StringTemplate
 from openpype.client import get_asset_by_name
+from openpype.settings import get_current_project_settings
 from openpype.pipeline import get_current_project_name, get_current_asset_name
-from openpype.pipeline.context_tools import get_current_project_asset
+from openpype.pipeline.context_tools import (
+    get_current_context_template_data,
+    get_current_project_asset
+)
+from openpype.widgets import popup
 import hou

@@ -160,8 +166,6 @@ def validate_fps():

     if current_fps != fps:
-
-        from openpype.widgets import popup

         # Find main window
         parent = hou.ui.mainQtWindow()
         if parent is None:
@@ -747,3 +751,99 @@ def get_camera_from_container(container):

    assert len(cameras) == 1, "Camera instance must have only one camera"
    return cameras[0]


def get_context_var_changes():
    """Get context var changes."""

    houdini_vars_to_update = {}

    project_settings = get_current_project_settings()
    houdini_vars_settings = \
        project_settings["houdini"]["general"]["update_houdini_var_context"]

    if not houdini_vars_settings["enabled"]:
        return houdini_vars_to_update

    houdini_vars = houdini_vars_settings["houdini_vars"]

    # No vars specified - nothing to do
    if not houdini_vars:
        return houdini_vars_to_update

    # Get template data
    template_data = get_current_context_template_data()

    # Set Houdini vars
    for item in houdini_vars:
        # For consistency reasons we always force all vars to be uppercase
        # and remove any leading and trailing whitespace.
        var = item["var"].strip().upper()

        # get and resolve template in value
        item_value = StringTemplate.format_template(
            item["value"],
            template_data
        )

        if var == "JOB" and item_value == "":
            # sync $JOB to $HIP if $JOB is empty
            item_value = os.environ["HIP"]

        if item["is_directory"]:
            item_value = item_value.replace("\\", "/")

        current_value = hou.hscript("echo -n `${}`".format(var))[0]

        if current_value != item_value:
            houdini_vars_to_update[var] = (
                current_value, item_value, item["is_directory"]
            )

    return houdini_vars_to_update


def update_houdini_vars_context():
    """Update asset context variables"""

    for var, (_old, new, is_directory) in get_context_var_changes().items():
        if is_directory:
            try:
                os.makedirs(new)
            except OSError as e:
                if e.errno != errno.EEXIST:
                    print(
                        "Failed to create ${} dir. Maybe due to "
                        "insufficient permissions.".format(var)
                    )

        hou.hscript("set {}={}".format(var, new))
        os.environ[var] = new
        print("Updated ${} to {}".format(var, new))


def update_houdini_vars_context_dialog():
    """Show pop-up to update asset context variables"""
    update_vars = get_context_var_changes()
    if not update_vars:
        # Nothing to change
        print("Nothing to change, Houdini vars are already up to date.")
        return

    message = "\n".join(
        "${}: {} -> {}".format(var, old or "None", new or "None")
        for var, (old, new, _is_directory) in update_vars.items()
    )

    # TODO: Use better UI!
    parent = hou.ui.mainQtWindow()
    dialog = popup.Popup(parent=parent)
    dialog.setModal(True)
    dialog.setWindowTitle("Houdini scene has outdated asset variables")
    dialog.setMessage(message)
    dialog.setButtonText("Fix")

    # Clicking the Fix button triggers the update callback
    dialog.on_clicked.connect(update_houdini_vars_context)

    dialog.show()
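Note: per-item resolution boils down to strip/uppercase the var name and format the template value. A standalone sketch using string.Template as a stand-in for openpype's StringTemplate (the settings item is hypothetical):

    from string import Template  # stand-in for openpype.lib.StringTemplate

    item = {"var": " job ", "value": "$root/$project", "is_directory": True}
    template_data = {"root": "P:/projects", "project": "myproject"}

    var = item["var"].strip().upper()                         # "JOB"
    value = Template(item["value"]).substitute(template_data)
    if item["is_directory"]:
        value = value.replace("\\", "/")
    print(var, value)  # JOB P:/projects/myproject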
@@ -300,6 +300,9 @@ def on_save():

     log.info("Running callback on save..")

+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     nodes = lib.get_id_required_nodes()
     for node, new_id in lib.generate_ids(nodes):
         lib.set_id(node, new_id, overwrite=False)

@@ -335,6 +338,9 @@ def on_open():

     log.info("Running callback on open..")

+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     # Validate FPS after update_task_from_path to
     # ensure it is using correct FPS for the asset
     lib.validate_fps()

@@ -399,6 +405,7 @@ def _set_context_settings():
     """

     lib.reset_framerange()
+    lib.update_houdini_vars_context()


 def on_pyblish_instance_toggled(instance, new_value, old_value):
@@ -187,13 +187,14 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
             self.customize_node_look(instance_node)

             instance_data["instance_node"] = instance_node.path()
+            instance_data["instance_id"] = instance_node.path()
             instance = CreatedInstance(
                 self.family,
                 subset_name,
                 instance_data,
                 self)
             self._add_instance_to_context(instance)
-            imprint(instance_node, instance.data_to_store())
+            self.imprint(instance_node, instance.data_to_store())
             return instance

         except hou.Error as er:

@@ -222,25 +223,41 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
         self.cache_subsets(self.collection_shared_data)
         for instance in self.collection_shared_data[
                 "houdini_cached_subsets"].get(self.identifier, []):

+            node_data = read(instance)
+
+            # Node paths are always the full node path since that is unique
+            # Because it's the node's path it's not written into attributes
+            # but explicitly collected
+            node_path = instance.path()
+            node_data["instance_id"] = node_path
+            node_data["instance_node"] = node_path
+
             created_instance = CreatedInstance.from_existing(
-                read(instance), self
+                node_data, self
             )
             self._add_instance_to_context(created_instance)

     def update_instances(self, update_list):
         for created_inst, changes in update_list:
             instance_node = hou.node(created_inst.get("instance_node"))

             new_values = {
                 key: changes[key].new_value
                 for key in changes.changed_keys
             }
-            imprint(
+            self.imprint(
                 instance_node,
                 new_values,
                 update=True
             )

+    def imprint(self, node, values, update=False):
+        # Never store instance node and instance id since that data comes
+        # from the node's path
+        values.pop("instance_node", None)
+        values.pop("instance_id", None)
+        imprint(node, values, update=update)
+
     def remove_instances(self, instances):
         """Remove specified instance from the scene.
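Note: the new imprint() override exists because both identifiers derive from the node path, so persisting them as node attributes would go stale on renames. A plain-Python equivalent of the filtering:

    values = {"subset": "pointcacheMain",
              "instance_id": "/out/geo1", "instance_node": "/out/geo1"}
    for key in ("instance_node", "instance_id"):
        values.pop(key, None)
    print(values)  # {'subset': 'pointcacheMain'}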
@@ -86,6 +86,14 @@ openpype.hosts.houdini.api.lib.reset_framerange()
 ]]></scriptCode>
   </scriptItem>

+  <scriptItem id="update_context_vars">
+    <label>Update Houdini Vars</label>
+    <scriptCode><![CDATA[
+import openpype.hosts.houdini.api.lib
+openpype.hosts.houdini.api.lib.update_houdini_vars_context_dialog()
+]]></scriptCode>
+  </scriptItem>
+
   <separatorItem/>
   <scriptItem id="experimental_tools">
     <label>Experimental tools...</label>
@@ -1,15 +1,35 @@
# -*- coding: utf-8 -*-
"""Library of functions useful for 3dsmax pipeline."""
import contextlib
import logging
import json
from typing import Any, Dict, Union

import six
from openpype.pipeline import get_current_project_name, colorspace
from openpype.settings import get_project_settings
from openpype.pipeline.context_tools import (
    get_current_project, get_current_project_asset)
from openpype.style import load_stylesheet
from pymxs import runtime as rt


JSON_PREFIX = "JSON::"
log = logging.getLogger("openpype.hosts.max")


def get_main_window():
    """Acquire Max's main window"""
    from qtpy import QtWidgets
    top_widgets = QtWidgets.QApplication.topLevelWidgets()
    name = "QmaxApplicationWindow"
    for widget in top_widgets:
        if (
            widget.inherits("QMainWindow")
            and widget.metaObject().className() == name
        ):
            return widget
    raise RuntimeError('Could not find 3dsMax main window.')


def imprint(node_name: str, data: dict) -> bool:

@@ -277,6 +297,7 @@ def set_context_setting():
     """
     reset_scene_resolution()
     reset_frame_range()
+    reset_colorspace()


 def get_max_version():

@@ -292,6 +313,14 @@ def get_max_version():
     return max_info[7]


+def is_headless():
+    """Check if 3dsMax runs in batch mode.
+
+    If it returns True, it runs in 3dsbatch.exe
+    If it returns False, it runs in 3dsmax.exe
+    """
+    return rt.maxops.isInNonInteractiveMode()
+
+
 @contextlib.contextmanager
 def viewport_camera(camera):
     original = rt.viewport.getCamera()
@@ -314,6 +343,51 @@ def set_timeline(frameStart, frameEnd):
    return rt.animationRange


def reset_colorspace():
    """OCIO Configuration
    Supported in 3dsMax 2024+

    """
    if int(get_max_version()) < 2024:
        return
    project_name = get_current_project_name()
    colorspace_mgr = rt.ColorPipelineMgr
    project_settings = get_project_settings(project_name)

    max_config_data = colorspace.get_imageio_config(
        project_name, "max", project_settings)
    if max_config_data:
        ocio_config_path = max_config_data["path"]
        colorspace_mgr = rt.ColorPipelineMgr
        colorspace_mgr.Mode = rt.Name("OCIO_Custom")
        colorspace_mgr.OCIOConfigPath = ocio_config_path


def check_colorspace():
    parent = get_main_window()
    if parent is None:
        log.info("Skipping outdated pop-up "
                 "because Max main window can't be found.")
    if int(get_max_version()) >= 2024:
        color_mgr = rt.ColorPipelineMgr
        project_name = get_current_project_name()
        project_settings = get_project_settings(project_name)
        max_config_data = colorspace.get_imageio_config(
            project_name, "max", project_settings)
        if max_config_data and color_mgr.Mode != rt.Name("OCIO_Custom"):
            if not is_headless():
                from openpype.widgets import popup
                dialog = popup.Popup(parent=parent)
                dialog.setWindowTitle("Warning: Wrong OCIO Mode")
                dialog.setMessage("This scene has wrong OCIO "
                                  "Mode setting.")
                dialog.setButtonText("Fix")
                dialog.setStyleSheet(load_stylesheet())
                dialog.on_clicked.connect(reset_colorspace)
                dialog.show()


def unique_namespace(namespace, format="%02d",
                     prefix="", suffix="", con_suffix="CON"):
    """Return unique namespace
@@ -119,6 +119,10 @@ class OpenPypeMenu(object):
         frame_action.triggered.connect(self.frame_range_callback)
         openpype_menu.addAction(frame_action)

+        colorspace_action = QtWidgets.QAction("Set Colorspace", openpype_menu)
+        colorspace_action.triggered.connect(self.colorspace_callback)
+        openpype_menu.addAction(colorspace_action)
+
         return openpype_menu

     def load_callback(self):

@@ -148,3 +152,7 @@ class OpenPypeMenu(object):
     def frame_range_callback(self):
         """Callback to reset frame range"""
         return lib.reset_frame_range()
+
+    def colorspace_callback(self):
+        """Callback to reset colorspace"""
+        return lib.reset_colorspace()
@@ -57,6 +57,9 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         rt.callbacks.addScript(rt.Name('systemPostNew'),
                                context_setting)

+        rt.callbacks.addScript(rt.Name('filePostOpen'),
+                               lib.check_colorspace)
+
     def has_unsaved_changes(self):
         # TODO: how to get it from 3dsmax?
         return True
@@ -65,12 +65,12 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"

         on button_add pressed do
         (
-            current_selection = selectByName title:"Select Objects to add to
+            current_sel = selectByName title:"Select Objects to add to
             the Container" buttontext:"Add" filter:nodes_to_add
-            if current_selection == undefined then return False
+            if current_sel == undefined then return False
             temp_arr = #()
             i_node_arr = #()
-            for c in current_selection do
+            for c in current_sel do
             (
                 handle_name = node_to_name c
                 node_ref = NodeTransformMonitor node:c

@@ -89,15 +89,18 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"

         on button_del pressed do
         (
-            current_selection = selectByName title:"Select Objects to remove
+            current_sel = selectByName title:"Select Objects to remove
             from the Container" buttontext:"Remove" filter: nodes_to_rmv
-            if current_selection == undefined then return False
+            if current_sel == undefined or current_sel.count == 0 then
+            (
+                return False
+            )
             temp_arr = #()
             i_node_arr = #()
             new_i_node_arr = #()
             new_temp_arr = #()

-            for c in current_selection do
+            for c in current_sel do
             (
                 node_ref = NodeTransformMonitor node:c as string
                 handle_name = node_to_name c
@@ -34,6 +34,12 @@ class CollectRender(pyblish.api.InstancePlugin):
         files_by_aov.update(aovs)

+        camera = rt.viewport.GetCamera()
+        if instance.data.get("members"):
+            camera_list = [member for member in instance.data["members"]
+                           if rt.ClassOf(member) == rt.Camera.Classes]
+            if camera_list:
+                camera = camera_list[-1]
+
+        instance.data["cameras"] = [camera.name] if camera else None  # noqa

         if "expectedFiles" not in instance.data:

@@ -63,6 +69,17 @@ class CollectRender(pyblish.api.InstancePlugin):
         instance.data["colorspaceConfig"] = ""
         instance.data["colorspaceDisplay"] = "sRGB"
         instance.data["colorspaceView"] = "ACES 1.0 SDR-video"

+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         instance.data["renderProducts"] = colorspace.ARenderProduct()
         instance.data["publishJobState"] = "Suspended"
         instance.data["attachTo"] = []
@@ -4,6 +4,7 @@ import pyblish.api

 from pymxs import runtime as rt
 from openpype.lib import BoolDef
+from openpype.hosts.max.api.lib import get_max_version
 from openpype.pipeline.publish import OpenPypePyblishPluginMixin


@@ -43,6 +44,17 @@ class CollectReview(pyblish.api.InstancePlugin,
             "dspSafeFrame": attr_values.get("dspSafeFrame"),
             "dspFrameNums": attr_values.get("dspFrameNums")
         }

+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         # Enable ftrack functionality
         instance.data.setdefault("families", []).append('ftrack')

@@ -54,7 +66,6 @@ class CollectReview(pyblish.api.InstancePlugin,

     @classmethod
     def get_attribute_defs(cls):
-
         return [
             BoolDef("dspGeometry",
                     label="Geometry",
@@ -6,6 +6,7 @@ from pyblish.api import Instance

 from maya import cmds  # noqa
 import maya.mel as mel  # noqa
+from openpype.hosts.maya.api.lib import maintained_selection


 class FBXExtractor:

@@ -53,7 +54,6 @@ class FBXExtractor:
             "bakeComplexEnd": int,
             "bakeComplexStep": int,
             "bakeResampleAnimation": bool,
-            "animationOnly": bool,
             "useSceneName": bool,
             "quaternion": str,  # "euler"
             "shapes": bool,

@@ -63,7 +63,10 @@ class FBXExtractor:
             "embeddedTextures": bool,
             "inputConnections": bool,
             "upAxis": str,  # x, y or z,
-            "triangulate": bool
+            "triangulate": bool,
+            "fileVersion": str,
+            "skeletonDefinitions": bool,
+            "referencedAssetsContent": bool
         }

     @property

@@ -94,7 +97,6 @@ class FBXExtractor:
             "bakeComplexEnd": end_frame,
             "bakeComplexStep": 1,
             "bakeResampleAnimation": True,
-            "animationOnly": False,
             "useSceneName": False,
             "quaternion": "euler",
             "shapes": True,

@@ -104,7 +106,10 @@ class FBXExtractor:
             "embeddedTextures": False,
             "inputConnections": True,
             "upAxis": "y",
-            "triangulate": False
+            "triangulate": False,
+            "fileVersion": "FBX202000",
+            "skeletonDefinitions": False,
+            "referencedAssetsContent": False
         }

     def __init__(self, log=None):

@@ -198,5 +203,9 @@ class FBXExtractor:
             path (str): Path to use for export.

         """
-        cmds.select(members, r=True, noExpand=True)
-        mel.eval('FBXExport -f "{}" -s'.format(path))
+        # The export requires forward slashes because we need
+        # to format it into a string in a mel expression
+        path = path.replace("\\", "/")
+        with maintained_selection():
+            cmds.select(members, r=True, noExpand=True)
+            mel.eval('FBXExport -f "{}" -s'.format(path))
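Note: a hedged usage sketch of the extractor after this change - options now come from the instance and the export runs inside maintained_selection() (the node path and output path below are hypothetical):

    from openpype.hosts.maya.api import fbx

    fbx_exporter = fbx.FBXExtractor()
    fbx_exporter.set_options_from_instance(instance)  # `instance` from pyblish
    fbx_exporter.export(["|rig_GRP|root_JNT"], "C:\\temp\\out.fbx")
    # export() normalizes the path to forward slashes before the mel call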
@@ -183,6 +183,51 @@ def maintained_selection():
        cmds.select(clear=True)


def get_namespace(node):
    """Return namespace of given node"""
    node_name = node.rsplit("|", 1)[-1]
    if ":" in node_name:
        return node_name.rsplit(":", 1)[0]
    else:
        return ""


def strip_namespace(node, namespace):
    """Strip given namespace from node path.

    The namespace will only be stripped from names
    if it starts with that namespace. If the namespace
    occurs within another namespace it's not removed.

    Examples:
        >>> strip_namespace("namespace:node", namespace="namespace:")
        "node"
        >>> strip_namespace("hello:world:node", namespace="hello:world")
        "node"
        >>> strip_namespace("hello:world:node", namespace="hello")
        "world:node"
        >>> strip_namespace("hello:world:node", namespace="world")
        "hello:world:node"
        >>> strip_namespace("ns:group|ns:node", namespace="ns")
        "group|node"

    Returns:
        str: Node name without given starting namespace.

    """

    # Ensure namespace ends with `:`
    if not namespace.endswith(":"):
        namespace = "{}:".format(namespace)

    # The long path for a node can also have the namespace
    # in its parents so we need to remove it from each
    return "|".join(
        name[len(namespace):] if name.startswith(namespace) else name
        for name in node.split("|")
    )


def get_custom_namespace(custom_namespace):
    """Return unique namespace.


@@ -922,7 +967,7 @@ def no_display_layers(nodes):


 @contextlib.contextmanager
-def namespaced(namespace, new=True):
+def namespaced(namespace, new=True, relative_names=None):
     """Work inside namespace during context

     Args:

@@ -934,15 +979,19 @@ def namespaced(namespace, new=True):

     """
     original = cmds.namespaceInfo(cur=True, absoluteName=True)
+    original_relative_names = cmds.namespace(query=True, relativeNames=True)
     if new:
         namespace = unique_namespace(namespace)
         cmds.namespace(add=namespace)
+    if relative_names is not None:
+        cmds.namespace(relativeNames=relative_names)
     try:
         cmds.namespace(set=namespace)
         yield namespace
     finally:
         cmds.namespace(set=original)
+        if relative_names is not None:
+            cmds.namespace(relativeNames=original_relative_names)


 @contextlib.contextmanager
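Note: a combined usage sketch of the three helpers, assuming a rig referenced under the namespace "char_001" inside a running Maya session:

    from maya import cmds
    from openpype.hosts.maya.api.lib import (
        namespaced, get_namespace, strip_namespace)

    nodes = ["char_001:root", "char_001:geo"]
    namespace = get_namespace(nodes[0])                        # "char_001"
    relative = [strip_namespace(n, namespace) for n in nodes]  # ['root', 'geo']
    with namespaced(":" + namespace, new=False, relative_names=True):
        # names resolve relative to the current namespace while inside
        cmds.select(relative, noExpand=True)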
@@ -2571,7 +2620,7 @@ def bake_to_world_space(nodes,
         new_name = "{0}_baked".format(short_name)
         new_node = cmds.duplicate(node,
                                   name=new_name,
-                                  renameChildren=True)[0]
+                                  renameChildren=True)[0]  # noqa

         # Connect all attributes on the node except for transform
         # attributes
@@ -4100,14 +4149,19 @@ def create_rig_animation_instance(
     """
     if options is None:
         options = {}

     name = context["representation"]["name"]
     output = next((node for node in nodes if
                    node.endswith("out_SET")), None)
     controls = next((node for node in nodes if
                      node.endswith("controls_SET")), None)
-    assert output, "No out_SET in rig, this is a bug."
-    assert controls, "No controls_SET in rig, this is a bug."
+    if name != "fbx":
+        assert output, "No out_SET in rig, this is a bug."
+        assert controls, "No controls_SET in rig, this is a bug."
+
+    anim_skeleton = next((node for node in nodes if
+                          node.endswith("skeletonAnim_SET")), None)
+    skeleton_mesh = next((node for node in nodes if
+                          node.endswith("skeletonMesh_SET")), None)

     # Find the roots amongst the loaded nodes
     roots = (

@@ -4119,9 +4173,7 @@ def create_rig_animation_instance(
     custom_subset = options.get("animationSubsetName")
     if custom_subset:
         formatting_data = {
             # TODO remove 'asset_type' and replace 'asset_name' with 'asset'
             "asset_name": context['asset']['name'],
             "asset_type": context['asset']['type'],
             "asset": context["asset"],
             "subset": context['subset']['name'],
             "family": (
                 context['subset']['data'].get('family') or

@@ -4142,10 +4194,12 @@ def create_rig_animation_instance(

     host = registered_host()
     create_context = CreateContext(host)

     # Create the animation instance
+    rig_sets = [output, controls, anim_skeleton, skeleton_mesh]
+    # Remove sets that this particular rig does not have
+    rig_sets = [s for s in rig_sets if s is not None]
     with maintained_selection():
-        cmds.select([output, controls] + roots, noExpand=True)
+        cmds.select(rig_sets + roots, noExpand=True)
         create_context.create(
             creator_identifier=creator_identifier,
             variant=namespace,
@@ -1,14 +1,13 @@
 import os
 import logging
+from functools import partial

 from qtpy import QtWidgets, QtGui

 import maya.utils
 import maya.cmds as cmds

-from openpype.settings import get_project_settings
 from openpype.pipeline import (
     get_current_project_name,
     get_current_asset_name,
     get_current_task_name
 )

@@ -46,12 +45,12 @@ def get_context_label():
     )


-def install():
+def install(project_settings):
     if cmds.about(batch=True):
         log.info("Skipping openpype.menu initialization in batch mode..")
         return

-    def deferred():
+    def add_menu():
         pyblish_icon = host_tools.get_pyblish_icon()
         parent_widget = get_main_window()
         cmds.menu(

@@ -191,7 +190,7 @@ def install():

         cmds.setParent(MENU_NAME, menu=True)

-    def add_scripts_menu():
+    def add_scripts_menu(project_settings):
         try:
             import scriptsmenu.launchformaya as launchformaya
         except ImportError:

@@ -201,9 +200,6 @@ def install():
         )
         return

-        # load configuration of custom menu
-        project_name = get_current_project_name()
-        project_settings = get_project_settings(project_name)
         config = project_settings["maya"]["scriptsmenu"]["definition"]
         _menu = project_settings["maya"]["scriptsmenu"]["name"]

@@ -225,8 +221,9 @@ def install():
     # so that it only gets called after Maya UI has initialized too.
     # This is crucial with Maya 2020+ which initializes without UI
     # first as a QCoreApplication
-    maya.utils.executeDeferred(deferred)
-    cmds.evalDeferred(add_scripts_menu, lowestPriority=True)
+    maya.utils.executeDeferred(add_menu)
+    cmds.evalDeferred(partial(add_scripts_menu, project_settings),
+                      lowestPriority=True)


 def uninstall():
@@ -28,8 +28,6 @@ from openpype.lib import (
 from openpype.pipeline import (
     legacy_io,
-    get_current_project_name,
-    get_current_asset_name,
     get_current_task_name,
     register_loader_plugin_path,
     register_inventory_action_path,
     register_creator_plugin_path,

@@ -97,6 +95,8 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         self.log.info("Installing callbacks ... ")
         register_event_callback("init", on_init)

+        _set_project()
+
         if lib.IS_HEADLESS:
             self.log.info((
                 "Running in headless mode, skipping Maya save/open/new"

@@ -105,10 +105,9 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

             return

-        _set_project()
         self._register_callbacks()

-        menu.install()
+        menu.install(project_settings)

         register_event_callback("save", on_save)
         register_event_callback("open", on_open)
@@ -151,6 +151,7 @@ class MayaCreatorBase(object):
         # We never store the instance_node as value on the node since
         # it's the node name itself
         data.pop("instance_node", None)
+        data.pop("instance_id", None)

         # Don't store `families` since it's up to the creator itself
         # to define the initial publish families - not a stored attribute of

@@ -227,6 +228,7 @@ class MayaCreatorBase(object):

         # Explicitly re-parse the node name
         node_data["instance_node"] = node
+        node_data["instance_id"] = node

         # If the creator plug-in specifies
         families = self.get_publish_families()

@@ -601,6 +603,13 @@ class RenderlayerCreator(NewCreator, MayaCreatorBase):
 class Loader(LoaderPlugin):
     hosts = ["maya"]

+    load_settings = {}  # defined in settings
+
+    @classmethod
+    def apply_settings(cls, project_settings, system_settings):
+        super(Loader, cls).apply_settings(project_settings, system_settings)
+        cls.load_settings = project_settings['maya']['load']
+
     def get_custom_namespace_and_group(self, context, options, loader_key):
         """Queries Settings to get custom template for namespace and group.


@@ -613,12 +622,9 @@ class Loader(LoaderPlugin):
             loader_key (str): key to get separate configuration from Settings
                 ('reference_loader'|'import_loader')
         """
-        options["attach_to_root"] = True
-
-        asset = context['asset']
-        subset = context['subset']
-        settings = get_project_settings(context['project']['name'])
-        custom_naming = settings['maya']['load'][loader_key]
+        options["attach_to_root"] = True
+        custom_naming = self.load_settings[loader_key]

         if not custom_naming['namespace']:
             raise LoadError("No namespace specified in "

@@ -627,6 +633,8 @@ class Loader(LoaderPlugin):
             self.log.debug("No custom group_name, no group will be created.")
             options["attach_to_root"] = False

+        asset = context['asset']
+        subset = context['subset']
         formatting_data = {
             "asset_name": asset['name'],
             "asset_type": asset['type'],
@@ -7,7 +7,7 @@ class PreCopyMel(PreLaunchHook):

     Hook `GlobalHostDataHook` must be executed before this hook.
     """
-    app_groups = {"maya"}
+    app_groups = {"maya", "mayapy"}
     launch_types = {LaunchTypes.local}

     def execute(self):
openpype/hosts/maya/plugins/create/create_matchmove.py (new file, 32 lines)
@@ -0,0 +1,32 @@
from openpype.hosts.maya.api import (
    lib,
    plugin
)
from openpype.lib import BoolDef


class CreateMatchmove(plugin.MayaCreator):
    """Instance for more complex setup of cameras.

    Might contain multiple cameras, geometries etc.

    It is expected to be extracted into .abc or .ma
    """

    identifier = "io.openpype.creators.maya.matchmove"
    label = "Matchmove"
    family = "matchmove"
    icon = "video-camera"

    def get_instance_attr_defs(self):

        defs = lib.collect_animation_defs()

        defs.extend([
            BoolDef("bakeToWorldSpace",
                    label="Bake Cameras to World-Space",
                    tooltip="Bake Cameras to World-Space",
                    default=True),
        ])

        return defs
@@ -20,6 +20,13 @@ class CreateRig(plugin.MayaCreator):
         instance_node = instance.get("instance_node")

         self.log.info("Creating Rig instance set up ...")
+        # TODO:change name (_controls_SET -> _rigs_SET)
         controls = cmds.sets(name=subset_name + "_controls_SET", empty=True)
+        # TODO:change name (_out_SET -> _geo_SET)
         pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True)
-        cmds.sets([controls, pointcache], forceElement=instance_node)
+        skeleton = cmds.sets(
+            name=subset_name + "_skeletonAnim_SET", empty=True)
+        skeleton_mesh = cmds.sets(
+            name=subset_name + "_skeletonMesh_SET", empty=True)
+        cmds.sets([controls, pointcache,
+                   skeleton, skeleton_mesh], forceElement=instance_node)
@@ -0,0 +1,39 @@
from openpype.hosts.maya.api import (
    lib,
    plugin
)
from openpype.lib import NumberDef


class CreateYetiCache(plugin.MayaCreator):
    """Output for procedural plugin nodes of Yeti"""

    identifier = "io.openpype.creators.maya.unrealyeticache"
    label = "Unreal - Yeti Cache"
    family = "yeticacheUE"
    icon = "pagelines"

    def get_instance_attr_defs(self):

        defs = [
            NumberDef("preroll",
                      label="Preroll",
                      minimum=0,
                      default=0,
                      decimals=0)
        ]

        # Add animation data without step and handles
        defs.extend(lib.collect_animation_defs())
        remove = {"step", "handleStart", "handleEnd"}
        defs = [attr_def for attr_def in defs if attr_def.key not in remove]

        # Add samples after frame range
        defs.append(
            NumberDef("samples",
                      label="Samples",
                      default=3,
                      decimals=0)
        )

        return defs
@@ -1,4 +1,46 @@
 import openpype.hosts.maya.api.plugin
 import maya.cmds as cmds


def _process_reference(file_url, name, namespace, options):
    """Load files by referencing scene in Maya.

    Args:
        file_url (str): filepath of the objects to be loaded
        name (str): subset name
        namespace (str): namespace
        options (dict): loader option parameters

    Returns:
        list: list of object nodes
    """
    from openpype.hosts.maya.api.lib import unique_namespace
    # Get name from asset being loaded
    # Assuming name is subset name from the animation, we split the number
    # suffix from the name to ensure the namespace is unique
    name = name.split("_")[0]
    ext = file_url.split(".")[-1]
    namespace = unique_namespace(
        "{}_".format(name),
        format="%03d",
        suffix="_{}".format(ext)
    )

    attach_to_root = options.get("attach_to_root", True)
    group_name = options["group_name"]

    # no group shall be created
    if not attach_to_root:
        group_name = namespace

    nodes = cmds.file(file_url,
                      namespace=namespace,
                      sharedReferenceFile=False,
                      groupReference=attach_to_root,
                      groupName=group_name,
                      reference=True,
                      returnNewNodes=True)
    return nodes


class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

@@ -16,44 +58,42 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

     def process_reference(self, context, name, namespace, options):

-        import maya.cmds as cmds
-        from openpype.hosts.maya.api.lib import unique_namespace
-
         cmds.loadPlugin("AbcImport.mll", quiet=True)
         # Prevent identical alembic nodes from being shared
         # Create unique namespace for the cameras

-        # Get name from asset being loaded
-        # Assuming name is subset name from the animation, we split the number
-        # suffix from the name to ensure the namespace is unique
-        name = name.split("_")[0]
-        namespace = unique_namespace(
-            "{}_".format(name),
-            format="%03d",
-            suffix="_abc"
-        )
-
-        attach_to_root = options.get("attach_to_root", True)
-        group_name = options["group_name"]
-
-        # no group shall be created
-        if not attach_to_root:
-            group_name = namespace
-
         # hero_001 (abc)
         # asset_counter{optional}
         path = self.filepath_from_context(context)
         file_url = self.prepare_root_value(path,
                                            context["project"]["name"])
-        nodes = cmds.file(file_url,
-                          namespace=namespace,
-                          sharedReferenceFile=False,
-                          groupReference=attach_to_root,
-                          groupName=group_name,
-                          reference=True,
-                          returnNewNodes=True)
-
+        nodes = _process_reference(file_url, name, namespace, options)
         # load colorbleed ID attribute
         self[:] = nodes

         return nodes


class FbxLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
    """Loader to reference Fbx files"""

    families = ["animation",
                "camera"]
    representations = ["fbx"]

    label = "Reference animation"
    order = -10
    icon = "code-fork"
    color = "orange"

    def process_reference(self, context, name, namespace, options):

        cmds.loadPlugin("fbx4maya.mll", quiet=True)

        path = self.filepath_from_context(context)
        file_url = self.prepare_root_value(path,
                                           context["project"]["name"])

        nodes = _process_reference(file_url, name, namespace, options)

        self[:] = nodes

        return nodes
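Note: the namespace the shared helper builds is derived from the subset name and the file extension. A plain-Python sketch of those two derivations (paths and names are hypothetical):

    name = "animationMain_01".split("_")[0]    # "animationMain"
    ext = "/path/to/hero.abc".split(".")[-1]   # "abc"
    # unique_namespace("animationMain_", format="%03d", suffix="_abc")
    # would then yield e.g. "animationMain_001_abc"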
@@ -1,12 +1,6 @@
 from maya import cmds, mel

-from openpype.client import (
-    get_asset_by_id,
-    get_subset_by_id,
-    get_version_by_id,
-)
 from openpype.pipeline import (
-    get_current_project_name,
     load,
     get_representation_path,
 )

@@ -18,7 +12,7 @@ class AudioLoader(load.LoaderPlugin):
     """Specific loader of audio."""

     families = ["audio"]
-    label = "Import audio"
+    label = "Load audio"
     representations = ["wav"]
     icon = "volume-up"
     color = "orange"

@@ -27,10 +21,10 @@ class AudioLoader(load.LoaderPlugin):

         start_frame = cmds.playbackOptions(query=True, min=True)
         sound_node = cmds.sound(
-            file=context["representation"]["data"]["path"], offset=start_frame
+            file=self.filepath_from_context(context), offset=start_frame
         )
         cmds.timeControl(
-            mel.eval("$tmpVar=$gPlayBackSlider"),
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
             edit=True,
             sound=sound_node,
             displaySound=True

@@ -59,32 +53,50 @@ class AudioLoader(load.LoaderPlugin):
         assert audio_nodes is not None, "Audio node not found."
         audio_node = audio_nodes[0]

+        current_sound = cmds.timeControl(
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
+            query=True,
+            sound=True
+        )
+        activate_sound = current_sound == audio_node
+
         path = get_representation_path(representation)
-        cmds.setAttr("{}.filename".format(audio_node), path, type="string")
+
+        cmds.sound(
+            audio_node,
+            edit=True,
+            file=path
+        )
+
+        # The source start + end does not automatically update itself to the
+        # length of the new audio file, even though Maya does do that when
+        # creating a new audio node. So to update we compute it manually.
+        # This would however override any source start and source end a user
+        # might have done on the original audio node after load.
+        audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node))
+        audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node))
+        duration_in_seconds = audio_frame_count / audio_sample_rate
+        fps = mel.eval('currentTimeUnitToFPS()')  # workfile FPS
+        source_start = 0
+        source_end = (duration_in_seconds * fps)
+        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
+        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
+
+        if activate_sound:
+            # maya by default deactivates it from timeline on file change
+            cmds.timeControl(
+                mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
+                edit=True,
+                sound=audio_node,
+                displaySound=True
+            )
+
         cmds.setAttr(
             container["objectName"] + ".representation",
             str(representation["_id"]),
             type="string"
         )

-        # Set frame range.
-        project_name = get_current_project_name()
-        version = get_version_by_id(
-            project_name, representation["parent"], fields=["parent"]
-        )
-        subset = get_subset_by_id(
-            project_name, version["parent"], fields=["parent"]
-        )
-        asset = get_asset_by_id(
-            project_name, subset["parent"], fields=["parent"]
-        )
-
-        source_start = 1 - asset["data"]["frameStart"]
-        source_end = asset["data"]["frameEnd"]
-
-        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
-        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
-
     def switch(self, container, representation):
         self.update(container, representation)
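Note: the recomputed source range is straightforward arithmetic on the new file's metadata; a standalone version with made-up values:

    audio_frame_count = 132300   # samples in the file (getAttr .frameCount)
    audio_sample_rate = 44100    # Hz (getAttr .sampleRate)
    fps = 24.0                   # workfile FPS

    duration_in_seconds = audio_frame_count / audio_sample_rate  # 3.0
    source_end = duration_in_seconds * fps                       # 72.0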
@@ -101,7 +101,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                 "camerarig",
                 "staticMesh",
                 "skeletalMesh",
-                "mvLook"]
+                "mvLook",
+                "matchmove"]

     representations = ["ma", "abc", "fbx", "mb"]
openpype/hosts/maya/plugins/publish/collect_fbx_animation.py (new file, 36 lines)
@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
from maya import cmds  # noqa
import pyblish.api
from openpype.pipeline import OptionalPyblishPluginMixin


class CollectFbxAnimation(pyblish.api.InstancePlugin,
                          OptionalPyblishPluginMixin):
    """Collect Animated Rig Data for FBX Extractor."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Fbx Animation"
    hosts = ["maya"]
    families = ["animation"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        skeleton_sets = [
            i for i in instance
            if i.endswith("skeletonAnim_SET")
        ]
        if not skeleton_sets:
            return

        instance.data["families"].append("animation.fbx")
        instance.data["animated_skeleton"] = []
        for skeleton_set in skeleton_sets:
            skeleton_content = cmds.sets(skeleton_set, query=True)
            self.log.debug(
                "Collected animated skeleton data: {}".format(
                    skeleton_content
                ))
            if skeleton_content:
                instance.data["animated_skeleton"] = skeleton_content
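Note: the set filter is a plain suffix match over the instance members; an equivalent snippet with illustrative names:

    members = ["rig_01:skeletonAnim_SET", "rig_01:out_SET"]
    skeleton_sets = [m for m in members if m.endswith("skeletonAnim_SET")]
    print(skeleton_sets)  # ['rig_01:skeletonAnim_SET']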
@@ -22,7 +22,8 @@ class CollectRigSets(pyblish.api.InstancePlugin):
     def process(self, instance):

         # Find required sets by suffix
-        searching = {"controls_SET", "out_SET"}
+        searching = {"controls_SET", "out_SET",
+                     "skeletonAnim_SET", "skeletonMesh_SET"}
         found = {}
         for node in cmds.ls(instance, exactType="objectSet"):
             for suffix in searching:
openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py (new file, 44 lines)
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
from maya import cmds  # noqa
import pyblish.api


class CollectSkeletonMesh(pyblish.api.InstancePlugin):
    """Collect Static Rig Data for FBX Extractor."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Skeleton Mesh"
    hosts = ["maya"]
    families = ["rig"]

    def process(self, instance):
        skeleton_mesh_set = instance.data["rig_sets"].get(
            "skeletonMesh_SET")
        if not skeleton_mesh_set:
            self.log.debug(
                "No skeletonMesh_SET found. "
                "Skipping collecting of skeleton mesh..."
            )
            return

        # Store current frame to ensure single frame export
        frame = cmds.currentTime(query=True)
        instance.data["frameStart"] = frame
        instance.data["frameEnd"] = frame

        instance.data["skeleton_mesh"] = []

        skeleton_mesh_content = cmds.sets(
            skeleton_mesh_set, query=True) or []
        if not skeleton_mesh_content:
            self.log.debug(
                "No object nodes in skeletonMesh_SET. "
                "Skipping collecting of skeleton mesh..."
            )
            return
        instance.data["families"] += ["rig.fbx"]
        instance.data["skeleton_mesh"] = skeleton_mesh_content
        self.log.debug(
            "Collected skeletonMesh_SET members: {}".format(
                skeleton_mesh_content
            ))
@@ -39,7 +39,7 @@ class CollectYetiCache(pyblish.api.InstancePlugin):

     order = pyblish.api.CollectorOrder + 0.45
     label = "Collect Yeti Cache"
-    families = ["yetiRig", "yeticache"]
+    families = ["yetiRig", "yeticache", "yeticacheUE"]
     hosts = ["maya"]

     def process(self, instance):
@@ -6,17 +6,21 @@ from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib


-class ExtractCameraAlembic(publish.Extractor):
+class ExtractCameraAlembic(publish.Extractor,
+                           publish.OptionalPyblishPluginMixin):
     """Extract a Camera as Alembic.

-    The cameras gets baked to world space by default. Only when the instance's
+    The camera gets baked to world space by default. Only when the instance's
     `bakeToWorldSpace` is set to False it will include its full hierarchy.

+    The 'camera' family expects only a single camera; if multiple cameras
+    are needed, the 'matchmove' family is the better choice.
+
     """

-    label = "Camera (Alembic)"
+    label = "Extract Camera (Alembic)"
     hosts = ["maya"]
-    families = ["camera"]
+    families = ["camera", "matchmove"]
     bake_attributes = []

     def process(self, instance):

@@ -35,10 +39,11 @@ class ExtractCameraAlembic(publish.Extractor):

         # validate required settings
         assert isinstance(step, float), "Step must be a float value"
-        camera = cameras[0]

         # Define extract output file path
         dir_path = self.staging_dir(instance)
+        if not os.path.exists(dir_path):
+            os.makedirs(dir_path)
         filename = "{0}.abc".format(instance.name)
         path = os.path.join(dir_path, filename)

@@ -64,9 +69,10 @@ class ExtractCameraAlembic(publish.Extractor):

         # if baked, drop the camera hierarchy to maintain
         # clean output and backwards compatibility
-        camera_root = cmds.listRelatives(
-            camera, parent=True, fullPath=True)[0]
-        job_str += ' -root {0}'.format(camera_root)
+        camera_roots = cmds.listRelatives(
+            cameras, parent=True, fullPath=True)
+        for camera_root in camera_roots:
+            job_str += ' -root {0}'.format(camera_root)

         for member in members:
             descendants = cmds.listRelatives(member,
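Note: with the matchmove family a single instance may carry several cameras, so each camera root becomes its own -root argument in the AbcExport job string. A standalone sketch of that assembly (root paths are hypothetical):

    camera_roots = ["|camMain", "|camWitness"]
    job_str = "-frameRange 1001 1100"
    for camera_root in camera_roots:
        job_str += ' -root {0}'.format(camera_root)
    print(job_str)  # -frameRange 1001 1100 -root |camMain -root |camWitness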
@@ -2,11 +2,15 @@
 """Extract camera as Maya Scene."""
 import os
 import itertools
+import contextlib

 from maya import cmds

 from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib
+from openpype.lib import (
+    BoolDef
+)


 def massage_ma_file(path):

@@ -78,7 +82,8 @@ def unlock(plug):
     cmds.disconnectAttr(source, destination)


-class ExtractCameraMayaScene(publish.Extractor):
+class ExtractCameraMayaScene(publish.Extractor,
+                             publish.OptionalPyblishPluginMixin):
     """Extract a Camera as Maya Scene.

     This will create a duplicate of the camera that will be baked *with*

@@ -88,17 +93,22 @@ class ExtractCameraMayaScene(publish.Extractor):
     The camera gets baked to world space by default. Only when the instance's
     `bakeToWorldSpace` is set to False it will include its full hierarchy.

+    The 'camera' family expects only a single camera; if multiple cameras
+    are needed, the 'matchmove' family is the better choice.
+
     Note:
         The extracted Maya ascii file gets "massaged" removing the uuid values
         so they are valid for older versions of Fusion (e.g. 6.4)

     """

-    label = "Camera (Maya Scene)"
+    label = "Extract Camera (Maya Scene)"
     hosts = ["maya"]
-    families = ["camera"]
+    families = ["camera", "matchmove"]
     scene_type = "ma"

+    keep_image_planes = True
+
     def process(self, instance):
         """Plugin entry point."""
         # get settings

@@ -131,15 +141,15 @@ class ExtractCameraMayaScene(publish.Extractor):
                           "bake to world space is ignored...")

         # get cameras
-        members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True,
-                          long=True, dag=True)
-        cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
-                          dag=True, type="camera")
+        members = set(cmds.ls(instance.data['setMembers'], leaf=True,
+                              shapes=True, long=True, dag=True))
+        cameras = set(cmds.ls(members, leaf=True, shapes=True, long=True,
+                              dag=True, type="camera"))

         # validate required settings
         assert isinstance(step, float), "Step must be a float value"
-        camera = cameras[0]
-        transform = cmds.listRelatives(camera, parent=True, fullPath=True)
+        transforms = cmds.listRelatives(list(cameras),
+                                        parent=True, fullPath=True)

         # Define extract output file path
         dir_path = self.staging_dir(instance)

@@ -151,23 +161,21 @@ class ExtractCameraMayaScene(publish.Extractor):
         with lib.evaluation("off"):
             with lib.suspended_refresh():
                 if bake_to_worldspace:
-                    self.log.debug(
-                        "Performing camera bakes: {}".format(transform))
                     baked = lib.bake_to_world_space(
-                        transform,
+                        transforms,
                         frame_range=[start, end],
                         step=step
                     )
-                    baked_camera_shapes = cmds.ls(baked,
-                                                  type="camera",
-                                                  dag=True,
-                                                  shapes=True,
-                                                  long=True)
+                    baked_camera_shapes = set(cmds.ls(baked,
+                                                      type="camera",
+                                                      dag=True,
+                                                      shapes=True,
+                                                      long=True))

-                    members = members + baked_camera_shapes
-                    members.remove(camera)
+                    members.update(baked_camera_shapes)
+                    members.difference_update(cameras)
                 else:
-                    baked_camera_shapes = cmds.ls(cameras,
+                    baked_camera_shapes = cmds.ls(list(cameras),
                                                   type="camera",
                                                   dag=True,
                                                   shapes=True,

@@ -186,19 +194,28 @@ class ExtractCameraMayaScene(publish.Extractor):
                         unlock(plug)
                         cmds.setAttr(plug, value)

-                    self.log.debug("Performing extraction..")
-                    cmds.select(cmds.ls(members, dag=True,
-                                        shapes=True, long=True), noExpand=True)
-                    cmds.file(path,
-                              force=True,
-                              typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
-                              exportSelected=True,
-                              preserveReferences=False,
-                              constructionHistory=False,
-                              channels=True,  # allow animation
-                              constraints=False,
-                              shader=False,
-                              expressions=False)
+                    attr_values = self.get_attr_values_from_data(
+                        instance.data)
+                    keep_image_planes = attr_values.get("keep_image_planes")
+
+                    with transfer_image_planes(sorted(cameras),
+                                               sorted(baked_camera_shapes),
+                                               keep_image_planes):
+
+                        self.log.info("Performing extraction..")
+                        cmds.select(cmds.ls(list(members), dag=True,
+                                            shapes=True, long=True),
+                                    noExpand=True)
+                        cmds.file(path,
+                                  force=True,
+                                  typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
+                                  exportSelected=True,
+                                  preserveReferences=False,
+                                  constructionHistory=False,
+                                  channels=True,  # allow animation
+                                  constraints=False,
+                                  shader=False,
+                                  expressions=False)

                     # Delete the baked hierarchy
                     if bake_to_worldspace:

@@ -219,3 +236,62 @@ class ExtractCameraMayaScene(publish.Extractor):

         self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))

    @classmethod
    def get_attribute_defs(cls):
        defs = super(ExtractCameraMayaScene, cls).get_attribute_defs()

        defs.extend([
            BoolDef("keep_image_planes",
                    label="Keep Image Planes",
                    tooltip="Preserve connected image planes on the camera",
                    default=cls.keep_image_planes),

        ])

        return defs


@contextlib.contextmanager
def transfer_image_planes(source_cameras, target_cameras,
                          keep_input_connections):
    """Reattach image planes to baked or original cameras.

    Baked cameras are duplicates of the original ones.
    This attaches the image plane to the duplicated camera properly and,
    after export, reattaches it to the original to keep the image plane
    in the workfile.
    """
    originals = {}
    try:
        for source_camera, target_camera in zip(source_cameras,
                                                target_cameras):
            image_planes = cmds.listConnections(source_camera,
                                                type="imagePlane") or []

            # Split off the parent path they are attached to - we want
            # the image plane node name.
            # TODO: Does this still mean the image plane name is unique?
            image_planes = [x.split("->", 1)[1] for x in image_planes]

            if not image_planes:
                continue

            originals[source_camera] = []
            for image_plane in image_planes:
                if keep_input_connections:
                    if source_camera == target_camera:
                        continue
                    _attach_image_plane(target_camera, image_plane)
                else:  # explicitly detaching image planes
                    cmds.imagePlane(image_plane, edit=True, detach=True)
                originals[source_camera].append(image_plane)
        yield
    finally:
        for camera, image_planes in originals.items():
            for image_plane in image_planes:
                _attach_image_plane(camera, image_plane)


def _attach_image_plane(camera, image_plane):
    cmds.imagePlane(image_plane, edit=True, detach=True)
    cmds.imagePlane(image_plane, edit=True, camera=camera)
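Note: a hedged usage sketch of the context manager above - image planes hop onto the baked duplicates only for the duration of the export, then return to the originals (export_selection is a hypothetical stand-in for the cmds.file export the plugin runs at this point):

    with transfer_image_planes(sorted(cameras),
                               sorted(baked_camera_shapes),
                               keep_input_connections=True):
        export_selection()  # hypothetical; the plugin calls cmds.file(...) here
    # on exit, every image plane is reattached to its original camera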
openpype/hosts/maya/plugins/publish/extract_fbx_animation.py (new file, 65 lines)
@@ -0,0 +1,65 @@
# -*- coding: utf-8 -*-
import os

from maya import cmds  # noqa
import pyblish.api

from openpype.pipeline import publish
from openpype.hosts.maya.api import fbx
from openpype.hosts.maya.api.lib import (
    namespaced, get_namespace, strip_namespace
)


class ExtractFBXAnimation(publish.Extractor):
    """Extract Rig in FBX format from Maya.

    This extracts the rig in fbx with the constraints
    and referenced asset content included.
    This also optionally extracts the animated rig in fbx with
    geometries included.
    """
    order = pyblish.api.ExtractorOrder
    label = "Extract Animation (FBX)"
    hosts = ["maya"]
    families = ["animation.fbx"]

    def process(self, instance):
        # Define output path
        staging_dir = self.staging_dir(instance)
        filename = "{0}.fbx".format(instance.name)
        path = os.path.join(staging_dir, filename)
        path = path.replace("\\", "/")

        fbx_exporter = fbx.FBXExtractor(log=self.log)
        out_members = instance.data.get("animated_skeleton", [])
        # Export
        instance.data["constraints"] = True
        instance.data["skeletonDefinitions"] = True
        instance.data["referencedAssetsContent"] = True
        fbx_exporter.set_options_from_instance(instance)
        # Export from the rig's namespace so that the exported
        # FBX does not include the namespace but preserves the node
        # names as existing in the rig workfile
        namespace = get_namespace(out_members[0])
        relative_out_members = [
            strip_namespace(node, namespace) for node in out_members
        ]
        with namespaced(
            ":" + namespace,
            new=False,
            relative_names=True
        ) as namespace:
            fbx_exporter.export(relative_out_members, path)

        representations = instance.data.setdefault("representations", [])
        representations.append({
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": staging_dir
        })

        self.log.debug(
            "Extracted FBX animation to: {0}".format(path))
54  openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py  (Normal file)

@@ -0,0 +1,54 @@
# -*- coding: utf-8 -*-
import os

from maya import cmds  # noqa
import pyblish.api

from openpype.pipeline import publish
from openpype.pipeline.publish import OptionalPyblishPluginMixin
from openpype.hosts.maya.api import fbx


class ExtractSkeletonMesh(publish.Extractor,
                          OptionalPyblishPluginMixin):
    """Extract Rig in FBX format from Maya.

    This extracts the rig in fbx with the constraints
    and referenced asset content included.
    This also optionally extracts the animated rig in fbx
    with geometries included.
    """
    order = pyblish.api.ExtractorOrder
    label = "Extract Skeleton Mesh"
    hosts = ["maya"]
    families = ["rig.fbx"]

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        # Define output path
        staging_dir = self.staging_dir(instance)
        filename = "{0}.fbx".format(instance.name)
        path = os.path.join(staging_dir, filename)

        fbx_exporter = fbx.FBXExtractor(log=self.log)
        out_set = instance.data.get("skeleton_mesh", [])

        instance.data["constraints"] = True
        instance.data["skeletonDefinitions"] = True

        fbx_exporter.set_options_from_instance(instance)

        # Export
        fbx_exporter.export(out_set, path)

        representations = instance.data.setdefault("representations", [])
        representations.append({
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": staging_dir
        })

        self.log.debug("Extract FBX to: {0}".format(path))
@@ -0,0 +1,61 @@
import os

from maya import cmds

from openpype.pipeline import publish


class ExtractYetiCache(publish.Extractor):
    """Producing Yeti cache files using scene time range.

    This will extract Yeti cache file sequence and fur settings.
    """

    label = "Extract Yeti Cache"
    hosts = ["maya"]
    families = ["yeticacheUE"]

    def process(self, instance):

        yeti_nodes = cmds.ls(instance, type="pgYetiMaya")
        if not yeti_nodes:
            raise RuntimeError("No pgYetiMaya nodes found in the instance")

        # Define extract output file path
        dirname = self.staging_dir(instance)

        # Collect information for writing cache
        start_frame = instance.data["frameStartHandle"]
        end_frame = instance.data["frameEndHandle"]
        preroll = instance.data["preroll"]
        if preroll > 0:
            start_frame -= preroll

        kwargs = {}
        samples = instance.data.get("samples", 0)
        if samples == 0:
            kwargs.update({"sampleTimes": "0.0 1.0"})
        else:
            kwargs.update({"samples": samples})

        self.log.debug(f"Writing out cache {start_frame} - {end_frame}")
        filename = f"{instance.name}.abc"
        path = os.path.join(dirname, filename)
        cmds.pgYetiCommand(yeti_nodes,
                           writeAlembic=path,
                           range=(start_frame, end_frame),
                           asUnrealAbc=True,
                           **kwargs)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'abc',
            'ext': 'abc',
            'files': filename,
            'stagingDir': dirname
        }
        instance.data["representations"].append(representation)

        self.log.debug(f"Extracted {instance} to {dirname}")
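The preroll and sampling decisions above are easy to verify in isolation. A plain-Python sketch with illustrative values (the helper name is made up for the example):

```python
def yeti_cache_args(start_frame, end_frame, preroll, samples):
    """Mirror the range/sampling logic of ExtractYetiCache.process()."""
    if preroll > 0:
        # Preroll extends the cached range backwards in time.
        start_frame -= preroll
    kwargs = {}
    if samples == 0:
        # No sample count configured: fall back to explicit sample times.
        kwargs["sampleTimes"] = "0.0 1.0"
    else:
        kwargs["samples"] = samples
    return (start_frame, end_frame), kwargs

print(yeti_cache_args(1001, 1050, preroll=10, samples=0))
# ((991, 1050), {'sampleTimes': '0.0 1.0'})
```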
@@ -0,0 +1,66 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
    PublishValidationError,
    ValidateContentsOrder
)
from maya import cmds


class ValidateAnimatedReferenceRig(pyblish.api.InstancePlugin):
    """Validate all nodes in skeletonAnim_SET are referenced"""

    order = ValidateContentsOrder
    hosts = ["maya"]
    families = ["animation.fbx"]
    label = "Animated Reference Rig"
    accepted_controllers = ["transform", "locator"]
    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]

    def process(self, instance):
        animated_sets = instance.data.get("animated_skeleton", [])
        if not animated_sets:
            self.log.debug(
                "No nodes found in skeletonAnim_SET. "
                "Skipping validation of animated reference rig..."
            )
            return

        for animated_reference in animated_sets:
            is_referenced = cmds.referenceQuery(
                animated_reference, isNodeReferenced=True)
            if not bool(is_referenced):
                raise PublishValidationError(
                    "All the content in skeletonAnim_SET"
                    " should be referenced nodes"
                )
        invalid_controls = self.validate_controls(animated_sets)
        if invalid_controls:
            raise PublishValidationError(
                "All the content in skeletonAnim_SET"
                " should be transforms"
            )

    @classmethod
    def validate_controls(cls, set_members):
        """Check if the controller set contains only accepted node types.

        Checks if the node types of the set members are valid.

        Args:
            set_members: list of nodes of the skeleton_anim_set

        Returns:
            errors (list)
        """

        # Validate control types
        invalid = []
        set_members = cmds.ls(set_members, long=True)
        for node in set_members:
            if cmds.nodeType(node) not in cls.accepted_controllers:
                invalid.append(node)

        return invalid
@@ -30,18 +30,21 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):
         invalid = list()
 
-        file_attr = cls.attribute
-        if not file_attr:
+        file_attrs = cls.attribute
+        if not file_attrs:
             return invalid
 
         # Consider only valid node types to avoid "Unknown object type" warning
         all_node_types = set(cmds.allNodeTypes())
-        node_types = [key for key in file_attr.keys() if key in all_node_types]
+        node_types = [
+            key for key in file_attrs.keys()
+            if key in all_node_types
+        ]
 
         for node, node_type in pairwise(cmds.ls(type=node_types,
                                                 showType=True)):
             # get the filepath
-            file_attr = "{}.{}".format(node, file_attr[node_type])
+            file_attr = "{}.{}".format(node, file_attrs[node_type])
             filepath = cmds.getAttr(file_attr)
 
             if filepath and not os.path.exists(filepath):
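The `pairwise` call above relies on `cmds.ls(..., showType=True)` returning a flat list that alternates node name and node type. A small plain-Python sketch of that grouping (the node names are hypothetical):

```python
def pairwise(iterable):
    """Group a flat list into 2-tuples: [a, b, c, d] -> (a, b), (c, d)."""
    it = iter(iterable)
    return zip(it, it)

# cmds.ls(type=node_types, showType=True) returns data shaped like this:
flat = ["file1", "file", "aiImage1", "aiImage"]
print(list(pairwise(flat)))  # [('file1', 'file'), ('aiImage1', 'aiImage')]
```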
117  openpype/hosts/maya/plugins/publish/validate_resolution.py  (Normal file)

@@ -0,0 +1,117 @@
import pyblish.api
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from maya import cmds
from openpype.pipeline.publish import RepairAction
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.lib import reset_scene_resolution


class ValidateResolution(pyblish.api.InstancePlugin,
                         OptionalPyblishPluginMixin):
    """Validate that the render resolution setting is aligned with the DB"""

    order = pyblish.api.ValidatorOrder
    families = ["renderlayer"]
    hosts = ["maya"]
    label = "Validate Resolution"
    actions = [RepairAction]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        invalid = self.get_invalid_resolution(instance)
        if invalid:
            raise PublishValidationError(
                "Render resolution is invalid. See log for details.",
                description=(
                    "Wrong render resolution setting. "
                    "Please use repair button to fix it.\n\n"
                    "If current renderer is V-Ray, "
                    "make sure vraySettings node has been created."
                )
            )

    @classmethod
    def get_invalid_resolution(cls, instance):
        width, height, pixelAspect = cls.get_db_resolution(instance)
        current_renderer = instance.data["renderer"]
        layer = instance.data["renderlayer"]
        invalid = False
        if current_renderer == "vray":
            vray_node = "vraySettings"
            if cmds.objExists(vray_node):
                current_width = lib.get_attr_in_layer(
                    "{}.width".format(vray_node), layer=layer)
                current_height = lib.get_attr_in_layer(
                    "{}.height".format(vray_node), layer=layer)
                current_pixelAspect = lib.get_attr_in_layer(
                    "{}.pixelAspect".format(vray_node), layer=layer
                )
            else:
                cls.log.error(
                    "Can't detect VRay resolution because there is no node "
                    "named: `{}`".format(vray_node)
                )
                return True
        else:
            current_width = lib.get_attr_in_layer(
                "defaultResolution.width", layer=layer)
            current_height = lib.get_attr_in_layer(
                "defaultResolution.height", layer=layer)
            current_pixelAspect = lib.get_attr_in_layer(
                "defaultResolution.pixelAspect", layer=layer
            )
        if current_width != width or current_height != height:
            cls.log.error(
                "Render resolution {}x{} does not match "
                "asset resolution {}x{}".format(
                    current_width, current_height,
                    width, height
                ))
            invalid = True
        if current_pixelAspect != pixelAspect:
            cls.log.error(
                "Render pixel aspect {} does not match "
                "asset pixel aspect {}".format(
                    current_pixelAspect, pixelAspect
                ))
            invalid = True
        return invalid

    @classmethod
    def get_db_resolution(cls, instance):
        asset_doc = instance.data["assetEntity"]
        project_doc = instance.context.data["projectEntity"]
        for data in [asset_doc["data"], project_doc["data"]]:
            if (
                "resolutionWidth" in data and
                "resolutionHeight" in data and
                "pixelAspect" in data
            ):
                width = data["resolutionWidth"]
                height = data["resolutionHeight"]
                pixelAspect = data["pixelAspect"]
                return int(width), int(height), float(pixelAspect)

        # Defaults if not found in asset document or project document
        return 1920, 1080, 1.0

    @classmethod
    def repair(cls, instance):
        # Usually without renderlayer overrides the renderlayers
        # all share the same resolution value - so fixing the first
        # will have fixed all the others too. It's much faster to
        # check whether it's invalid first instead of switching
        # into all layers individually
        if not cls.get_invalid_resolution(instance):
            cls.log.debug(
                "Nothing to repair on instance: {}".format(instance)
            )
            return
        layer_node = instance.data['setMembers']
        with lib.renderlayer(layer_node):
            reset_scene_resolution()
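The asset-then-project fallback in `get_db_resolution()` can be illustrated with plain dictionaries (the values are hypothetical):

```python
asset_data = {"fps": 25}  # asset document without resolution keys
project_data = {
    "resolutionWidth": 3840,
    "resolutionHeight": 2160,
    "pixelAspect": 1,
}

for data in (asset_data, project_data):
    # Only use a document that carries all three keys.
    if {"resolutionWidth", "resolutionHeight", "pixelAspect"} <= set(data):
        resolution = (
            int(data["resolutionWidth"]),
            int(data["resolutionHeight"]),
            float(data["pixelAspect"]),
        )
        break
else:
    resolution = (1920, 1080, 1.0)  # defaults used by the plugin above

print(resolution)  # (3840, 2160, 1.0)
```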
@@ -1,6 +1,6 @@
 import pyblish.api
 from maya import cmds
 
 import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     PublishValidationError,
     ValidateContentsOrder

@@ -20,33 +20,27 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
     label = "Rig Contents"
     hosts = ["maya"]
     families = ["rig"]
     action = [openpype.hosts.maya.api.action.SelectInvalidAction]
 
     accepted_output = ["mesh", "transform"]
     accepted_controllers = ["transform"]
 
     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
             raise PublishValidationError(
                 "Invalid rig content. See log for details.")
 
     @classmethod
     def get_invalid(cls, instance):
 
-        # Find required sets by suffix
-        required = ["controls_SET", "out_SET"]
-        missing = [
-            key for key in required if key not in instance.data["rig_sets"]
-        ]
-        if missing:
-            raise PublishValidationError(
-                "%s is missing sets: %s" % (instance, ", ".join(missing))
-            )
+        required, rig_sets = cls.get_nodes(instance)
 
-        controls_set = instance.data["rig_sets"]["controls_SET"]
-        out_set = instance.data["rig_sets"]["out_SET"]
+        cls.validate_missing_objectsets(instance, required, rig_sets)
 
-        # Ensure there are at least some transforms or dag nodes
-        # in the rig instance
-        set_members = instance.data['setMembers']
-        if not cmds.ls(set_members, type="dagNode", long=True):
-            raise PublishValidationError(
-                "No dag nodes in the pointcache instance. "
-                "(Empty instance?)"
-            )
+        controls_set = rig_sets["controls_SET"]
+        out_set = rig_sets["out_SET"]
 
         # Ensure contents in sets and retrieve long path for all objects
         output_content = cmds.sets(out_set, query=True) or []
@@ -61,49 +55,92 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
         )
         controls_content = cmds.ls(controls_content, long=True)
 
-        # Validate members are inside the hierarchy from root node
-        root_nodes = cmds.ls(set_members, assemblies=True, long=True)
-        hierarchy = cmds.listRelatives(root_nodes, allDescendents=True,
-                                       fullPath=True) + root_nodes
-        hierarchy = set(hierarchy)
-
-        invalid_hierarchy = []
-        for node in output_content:
-            if node not in hierarchy:
-                invalid_hierarchy.append(node)
-        for node in controls_content:
-            if node not in hierarchy:
-                invalid_hierarchy.append(node)
+        rig_content = output_content + controls_content
+        invalid_hierarchy = cls.invalid_hierarchy(instance, rig_content)
 
         # Additional validations
-        invalid_geometry = self.validate_geometry(output_content)
-        invalid_controls = self.validate_controls(controls_content)
+        invalid_geometry = cls.validate_geometry(output_content)
+        invalid_controls = cls.validate_controls(controls_content)
 
         error = False
         if invalid_hierarchy:
-            self.log.error("Found nodes which reside outside of root group "
-                           "while they are set up for publishing."
-                           "\n%s" % invalid_hierarchy)
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
             error = True
 
         if invalid_controls:
-            self.log.error("Only transforms can be part of the controls_SET."
-                           "\n%s" % invalid_controls)
+            cls.log.error("Only transforms can be part of the controls_SET."
+                          "\n%s" % invalid_controls)
             error = True
 
         if invalid_geometry:
-            self.log.error("Only meshes can be part of the out_SET\n%s"
-                           % invalid_geometry)
+            cls.log.error("Only meshes can be part of the out_SET\n%s"
+                          % invalid_geometry)
             error = True
 
         if error:
             return invalid_hierarchy + invalid_controls + invalid_geometry
 
+    @classmethod
+    def validate_missing_objectsets(cls, instance,
+                                    required_objsets, rig_sets):
+        """Validate missing objectsets in rig sets
+
+        Args:
+            instance (str): instance
+            required_objsets (list): list of objectset names
+            rig_sets (list): list of rig sets
+
+        Raises:
+            PublishValidationError: When the error is raised, it will show
+                which instance has the missing object sets
+        """
+        missing = [
+            key for key in required_objsets if key not in rig_sets
+        ]
+        if missing:
+            raise PublishValidationError(
+                "%s is missing sets: %s" % (instance, ", ".join(missing))
+            )
 
-    def validate_geometry(self, set_members):
-        """Check if the out set passes the validations
+    @classmethod
+    def invalid_hierarchy(cls, instance, content):
+        """
+        Check if all rig set members are within the hierarchy of the rig root
 
-        Checks if all its set members are within the hierarchy of the root
+        Args:
+            instance (str): instance
+            content (list): list of content from rig sets
+
+        Raises:
+            PublishValidationError: It means no dag nodes in
+                the rig instance
+
+        Returns:
+            list: invalid hierarchy
+        """
+        # Ensure there are at least some transforms or dag nodes
+        # in the rig instance
+        set_members = instance.data['setMembers']
+        if not cmds.ls(set_members, type="dagNode", long=True):
+            raise PublishValidationError(
+                "No dag nodes in the rig instance. "
+                "(Empty instance?)"
+            )
+        # Validate members are inside the hierarchy from root node
+        root_nodes = cmds.ls(set_members, assemblies=True, long=True)
+        hierarchy = cmds.listRelatives(root_nodes, allDescendents=True,
+                                       fullPath=True) + root_nodes
+        hierarchy = set(hierarchy)
+        invalid_hierarchy = []
+        for node in content:
+            if node not in hierarchy:
+                invalid_hierarchy.append(node)
+        return invalid_hierarchy
+
+    @classmethod
+    def validate_geometry(cls, set_members):
+        """
+        Checks if the node types of the set members valid
 
         Args:
@@ -122,15 +159,13 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
                                      fullPath=True) or []
         all_shapes = cmds.ls(set_members + shapes, long=True, shapes=True)
         for shape in all_shapes:
-            if cmds.nodeType(shape) not in self.accepted_output:
+            if cmds.nodeType(shape) not in cls.accepted_output:
                 invalid.append(shape)
 
         return invalid
 
-    def validate_controls(self, set_members):
-        """Check if the controller set passes the validations
-
-        Checks if all its set members are within the hierarchy of the root
+    @classmethod
+    def validate_controls(cls, set_members):
+        """
+        Checks if the control set members are allowed node types.
         Checks if the node types of the set members valid
 
         Args:
@@ -144,7 +179,80 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
         # Validate control types
         invalid = []
         for node in set_members:
-            if cmds.nodeType(node) not in self.accepted_controllers:
+            if cmds.nodeType(node) not in cls.accepted_controllers:
                 invalid.append(node)
 
         return invalid
 
+    @classmethod
+    def get_nodes(cls, instance):
+        """Get the target objectsets and rig sets nodes
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            tuple: 2-tuple of list of objectsets,
+                list of rig sets nodes
+        """
+        objectsets = ["controls_SET", "out_SET"]
+        rig_sets_nodes = instance.data.get("rig_sets", [])
+        return objectsets, rig_sets_nodes
+
+
+class ValidateSkeletonRigContents(ValidateRigContents):
+    """Ensure skeleton rigs contains pipeline-critical content
+
+    The rigs optionally contain at least two object sets:
+        "skeletonMesh_SET" - Set of the skinned meshes
+            with bone hierarchies
+
+    """
+
+    order = ValidateContentsOrder
+    label = "Skeleton Rig Contents"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    @classmethod
+    def get_invalid(cls, instance):
+        objectsets, skeleton_mesh_nodes = cls.get_nodes(instance)
+        cls.validate_missing_objectsets(
+            instance, objectsets, instance.data["rig_sets"])
+
+        # Ensure contents in sets and retrieve long path for all objects
+        output_content = instance.data.get("skeleton_mesh", [])
+        output_content = cmds.ls(skeleton_mesh_nodes, long=True)
+
+        invalid_hierarchy = cls.invalid_hierarchy(
+            instance, output_content)
+        invalid_geometry = cls.validate_geometry(output_content)
+
+        error = False
+        if invalid_hierarchy:
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
+            error = True
+        if invalid_geometry:
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
+            error = True
+        if error:
+            return invalid_hierarchy + invalid_geometry
+
+    @classmethod
+    def get_nodes(cls, instance):
+        """Get the target objectsets and rig sets nodes
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            tuple: 2-tuple of list of objectsets,
+                list of rig sets nodes
+        """
+        objectsets = ["skeletonMesh_SET"]
+        skeleton_mesh_nodes = instance.data.get("skeleton_mesh", [])
+        return objectsets, skeleton_mesh_nodes
@@ -59,7 +59,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
     @classmethod
     def get_invalid(cls, instance):
 
-        controls_set = instance.data["rig_sets"].get("controls_SET")
+        controls_set = cls.get_node(instance)
         if not controls_set:
             cls.log.error(
                 "Must have 'controls_SET' in rig instance"
@@ -189,7 +189,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
     @classmethod
     def repair(cls, instance):
 
-        controls_set = instance.data["rig_sets"].get("controls_SET")
+        controls_set = cls.get_node(instance)
         if not controls_set:
             cls.log.error(
                 "Unable to repair because no 'controls_SET' found in rig "
@@ -228,3 +228,64 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
             default = cls.CONTROLLER_DEFAULTS[attr]
             cls.log.info("Setting %s to %s" % (plug, default))
             cmds.setAttr(plug, default)
 
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from controls_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from controls_SET
+        """
+        return instance.data["rig_sets"].get("controls_SET")
+
+
+class ValidateSkeletonRigControllers(ValidateRigControllers):
+    """Validate rig controller for skeletonAnim_SET
+
+    Controls must have the transformation attributes on their default
+    values of translate zero, rotate zero and scale one when they are
+    unlocked attributes.
+
+    Unlocked keyable attributes may not have any incoming connections. If
+    these connections are required for the rig then lock the attributes.
+
+    The visibility attribute must be locked.
+
+    Note that `repair` will:
+        - Lock all visibility attributes
+        - Reset all default values for translate, rotate, scale
+        - Break all incoming connections to keyable attributes
+
+    """
+    order = ValidateContentsOrder + 0.05
+    label = "Skeleton Rig Controllers"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    # Default controller values
+    CONTROLLER_DEFAULTS = {
+        "translateX": 0,
+        "translateY": 0,
+        "translateZ": 0,
+        "rotateX": 0,
+        "rotateY": 0,
+        "rotateZ": 0,
+        "scaleX": 1,
+        "scaleY": 1,
+        "scaleZ": 1
+    }
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get("skeletonMesh_SET")
@@ -46,7 +46,7 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):
         """Get all nodes which do not match the criteria"""
 
-        out_set = instance.data["rig_sets"].get("out_SET")
+        out_set = cls.get_node(instance)
         if not out_set:
             return []
@@ -85,3 +85,45 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
                 continue
 
             lib.set_id(node, sibling_id, overwrite=True)
 
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from out_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from out_SET
+        """
+        return instance.data["rig_sets"].get("out_SET")
+
+
+class ValidateSkeletonRigOutSetNodeIds(ValidateRigOutSetNodeIds):
+    """Validate if deformed shapes have related IDs to the original shapes
+    from skeleton set.
+
+    When a deformer is applied in the scene on a referenced mesh that already
+    had deformers then Maya will create a new shape node for the mesh that
+    does not have the original id. This validator checks whether the ids are
+    valid on all the shape nodes in the instance.
+
+    """
+
+    order = ValidateContentsOrder
+    families = ["rig.fbx"]
+    hosts = ['maya']
+    label = 'Skeleton Rig Out Set Node Ids'
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get(
+            "skeletonMesh_SET")
@@ -47,7 +47,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
         invalid = {}
 
         if compute:
-            out_set = instance.data["rig_sets"].get("out_SET")
+            out_set = cls.get_node(instance)
             if not out_set:
                 instance.data["mismatched_output_ids"] = invalid
                 return invalid
@@ -115,3 +115,40 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
                     "Multiple matched ids found. Please repair manually: "
                     "{}".format(multiple_ids_match)
                 )
 
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from out_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from out_SET
+        """
+        return instance.data["rig_sets"].get("out_SET")
+
+
+class ValidateSkeletonRigOutputIds(ValidateRigOutputIds):
+    """Validate rig output ids from the skeleton sets.
+
+    Ids must share the same id as similarly named nodes in the scene. This is
+    to ensure the id from the model is preserved through animation.
+
+    """
+    order = ValidateContentsOrder + 0.05
+    label = "Skeleton Rig Output Ids"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get("skeletonMesh_SET")
@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
"""Plugin for validating naming conventions."""
from maya import cmds

import pyblish.api

from openpype.pipeline.publish import (
    ValidateContentsOrder,
    OptionalPyblishPluginMixin,
    PublishValidationError
)


class ValidateSkeletonTopGroupHierarchy(pyblish.api.InstancePlugin,
                                        OptionalPyblishPluginMixin):
    """Validates top group hierarchy in the SETs

    Make sure the objects inside the SETs are always the top
    group of the hierarchy.
    """
    order = ValidateContentsOrder + 0.05
    label = "Skeleton Rig Top Group Hierarchy"
    families = ["rig.fbx"]

    def process(self, instance):
        invalid = []
        skeleton_mesh_data = instance.data("skeleton_mesh", [])
        if skeleton_mesh_data:
            invalid = self.get_top_hierarchy(skeleton_mesh_data)
            if invalid:
                raise PublishValidationError(
                    "The skeletonMesh_SET includes the object which "
                    "is not at the top hierarchy: {}".format(invalid))

    def get_top_hierarchy(self, targets):
        targets = cmds.ls(targets, long=True)  # ensure long names
        non_top_hierarchy_list = [
            target for target in targets if target.count("|") > 2
        ]
        return non_top_hierarchy_list
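The depth check in `get_top_hierarchy()` counts `|` separators in Maya long names: a direct child of a top-level group has exactly two, anything deeper has more. For example:

```python
targets = [
    "|rig_GRP|geo_GRP",           # count("|") == 2 -> accepted
    "|rig_GRP|geo_GRP|body_GEO",  # count("|") == 3 -> flagged
]
print([t for t in targets if t.count("|") > 2])
# ['|rig_GRP|geo_GRP|body_GEO']
```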
@@ -69,11 +69,8 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,
 
         invalid = []
 
-        project_settings = get_project_settings(
-            legacy_io.Session["AVALON_PROJECT"]
-        )
         collision_prefixes = (
-            project_settings
+            instance.context.data["project_settings"]
             ["maya"]
             ["create"]
             ["CreateUnrealStaticMesh"]
@@ -50,6 +50,8 @@ from .utils import (
     get_colorspace_list
 )
 
+from .actions import SelectInvalidAction
+
 __all__ = (
     "file_extensions",
     "has_unsaved_changes",

@@ -92,5 +94,7 @@ __all__ = (
     "create_write_node",
 
     "colorspace_exists_on_node",
-    "get_colorspace_list"
+    "get_colorspace_list",
+
+    "SelectInvalidAction",
 )
@@ -20,33 +20,31 @@ class SelectInvalidAction(pyblish.api.Action):
 
     def process(self, context, plugin):
 
         try:
             import nuke
         except ImportError:
             raise ImportError("Current host is not Nuke")
 
-        errored_instances = get_errored_instances_from_context(context,
-                                                               plugin=plugin)
+        # Get the errored instances for the plug-in
+        errored_instances = get_errored_instances_from_context(
+            context, plugin)
 
         # Get the invalid nodes for the plug-ins
         self.log.info("Finding invalid nodes..")
-        invalid = list()
+        invalid_nodes = set()
         for instance in errored_instances:
-            invalid_nodes = plugin.get_invalid(instance)
+            invalid = plugin.get_invalid(instance)
 
-            if invalid_nodes:
-                if isinstance(invalid_nodes, (list, tuple)):
-                    invalid.append(invalid_nodes[0])
-                else:
-                    self.log.warning("Plug-in returned to be invalid, "
-                                     "but has no selectable nodes.")
+            if not invalid:
+                continue
 
-        # Ensure unique (process each node only once)
-        invalid = list(set(invalid))
+            select_node = instance.data.get("transientData", {}).get("node")
+            if not select_node:
+                raise RuntimeError(
+                    "No transientData['node'] found on instance: {}".format(
+                        instance)
+                )
+            invalid_nodes.add(select_node)
 
-        if invalid:
-            self.log.info("Selecting invalid nodes: {}".format(invalid))
+        if invalid_nodes:
+            self.log.info("Selecting invalid nodes: {}".format(invalid_nodes))
             reset_selection()
-            select_nodes(invalid)
+            select_nodes(list(invalid_nodes))
         else:
             self.log.info("No invalid nodes found.")
@@ -2316,27 +2316,53 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
         ''' Adds correct colorspace to write node dict
 
         '''
-        for node in nuke.allNodes(filter="Group"):
+        for node in nuke.allNodes(filter="Group", group=self._root_node):
             log.info("Setting colorspace to `{}`".format(node.name()))
 
             # get data from avalon knob
             avalon_knob_data = read_avalon_data(node)
+            node_data = get_node_data(node, INSTANCE_DATA_KNOB)
 
-            if avalon_knob_data.get("id") != "pyblish.avalon.instance":
+            if (
+                # backward compatibility
+                # TODO: remove this once old avalon data api will be removed
+                avalon_knob_data
+                and avalon_knob_data.get("id") != "pyblish.avalon.instance"
+            ):
                 continue
+            elif (
+                node_data
+                and node_data.get("id") != "pyblish.avalon.instance"
+            ):
+                continue
 
-            if "creator" not in avalon_knob_data:
+            if (
+                # backward compatibility
+                # TODO: remove this once old avalon data api will be removed
+                avalon_knob_data
+                and "creator" not in avalon_knob_data
+            ):
                 continue
+            elif (
+                node_data
+                and "creator_identifier" not in node_data
+            ):
+                continue
 
-            # establish families
-            families = [avalon_knob_data["family"]]
-            if avalon_knob_data.get("families"):
-                families.append(avalon_knob_data.get("families"))
+            nuke_imageio_writes = None
+            if avalon_knob_data:
+                # establish families
+                families = [avalon_knob_data["family"]]
+                if avalon_knob_data.get("families"):
+                    families.append(avalon_knob_data.get("families"))
 
-            nuke_imageio_writes = get_imageio_node_setting(
-                node_class=avalon_knob_data["families"],
-                plugin_name=avalon_knob_data["creator"],
-                subset=avalon_knob_data["subset"]
-            )
+                nuke_imageio_writes = get_imageio_node_setting(
+                    node_class=avalon_knob_data["families"],
+                    plugin_name=avalon_knob_data["creator"],
+                    subset=avalon_knob_data["subset"]
+                )
+            elif node_data:
+                nuke_imageio_writes = get_write_node_template_attr(node)
 
             log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes))
@@ -3397,3 +3423,27 @@ def create_viewer_profile_string(viewer, display=None, path_like=False):
     if path_like:
         return "{}/{}".format(display, viewer)
     return "{} ({})".format(viewer, display)
+
+
+def get_filenames_without_hash(filename, frame_start, frame_end):
+    """Get filenames without frame hash
+    i.e. "renderCompositingMain.baking.0001.exr"
+
+    Args:
+        filename (str): filename with frame hash
+        frame_start (str): first frame of the sequence
+        frame_end (str): last frame of the sequence
+
+    Returns:
+        list: filename per frame of the sequence
+    """
+    filenames = []
+    for frame in range(int(frame_start), (int(frame_end) + 1)):
+        if "#" in filename:
+            # use regex to convert #### to {:0>4}
+            def replace(match):
+                return "{{:0>{}}}".format(len(match.group()))
+            filename_without_hashes = re.sub("#+", replace, filename)
+            new_filename = filename_without_hashes.format(frame)
+            filenames.append(new_filename)
+    return filenames
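A quick usage example of the helper above (the filename is illustrative):

```python
print(get_filenames_without_hash(
    "renderCompositingMain.baking.####.exr", 1, 3))
# ['renderCompositingMain.baking.0001.exr',
#  'renderCompositingMain.baking.0002.exr',
#  'renderCompositingMain.baking.0003.exr']
```

Each `#` run becomes a zero-padded format field of the same width, so `####` pads to four digits.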
@@ -21,6 +21,9 @@ from openpype.pipeline import (
     CreatedInstance,
     get_current_task_name
 )
+from openpype.lib.transcoding import (
+    VIDEO_EXTENSIONS
+)
 from .lib import (
     INSTANCE_DATA_KNOB,
     Knobby,

@@ -35,7 +38,8 @@ from .lib import (
     get_node_data,
     get_view_process_node,
     get_viewer_config_from_string,
-    deprecated
+    deprecated,
+    get_filenames_without_hash
 )
 from .pipeline import (
     list_instances,
@@ -580,18 +584,25 @@ class ExporterReview(object):
     def get_file_info(self):
         if self.collection:
             # get path
-            self.fname = os.path.basename(self.collection.format(
-                "{head}{padding}{tail}"))
+            self.fname = os.path.basename(
+                self.collection.format("{head}{padding}{tail}")
+            )
             self.fhead = self.collection.format("{head}")
 
             # get first and last frame
             self.first_frame = min(self.collection.indexes)
             self.last_frame = max(self.collection.indexes)
+
+            # make sure slate frame is not included
+            frame_start_handle = self.instance.data["frameStartHandle"]
+            if frame_start_handle > self.first_frame:
+                self.first_frame = frame_start_handle
+
         else:
             self.fname = os.path.basename(self.path_in)
             self.fhead = os.path.splitext(self.fname)[0] + "."
-            self.first_frame = self.instance.data.get("frameStartHandle", None)
-            self.last_frame = self.instance.data.get("frameEndHandle", None)
+            self.first_frame = self.instance.data["frameStartHandle"]
+            self.last_frame = self.instance.data["frameEndHandle"]
 
         if "#" in self.fhead:
             self.fhead = self.fhead.replace("#", "")[:-1]
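The trailing-hash cleanup at the end of `get_file_info` is worth a quick illustration; it strips the frame hashes together with the separator dot that gets duplicated once the hashes are gone:

```python
fhead = "render.baking.####."  # illustrative head with frame hashes
print(fhead.replace("#", "")[:-1])  # 'render.baking.'
```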
@@ -627,6 +638,10 @@ class ExporterReview(object):
             "frameStart": self.first_frame,
             "frameEnd": self.last_frame,
         })
+        if ".{}".format(self.ext) not in VIDEO_EXTENSIONS:
+            filenames = get_filenames_without_hash(
+                self.file, self.first_frame, self.last_frame)
+            repre["files"] = filenames
 
         if self.multiple_presets:
             repre["outputName"] = self.name
@@ -800,7 +815,20 @@ class ExporterReviewMov(ExporterReview):
 
         self.log.info("File info was set...")
 
-        self.file = self.fhead + self.name + ".{}".format(self.ext)
+        if ".{}".format(self.ext) in VIDEO_EXTENSIONS:
+            self.file = "{}{}.{}".format(
+                self.fhead, self.name, self.ext)
+        else:
+            # Output is image (or image sequence)
+            # When the file is an image it's possible it
+            # has extra information after the `fhead` that
+            # we want to preserve, e.g. like frame numbers
+            # or frames hashes like `####`
+            filename_no_ext = os.path.splitext(
+                os.path.basename(self.path_in))[0]
+            after_head = filename_no_ext[len(self.fhead):]
+            self.file = "{}{}.{}.{}".format(
+                self.fhead, self.name, after_head, self.ext)
         self.path = os.path.join(
             self.staging_dir, self.file).replace("\\", "/")
@@ -869,6 +897,11 @@ class ExporterReviewMov(ExporterReview):
             r_node["origlast"].setValue(self.last_frame)
             r_node["colorspace"].setValue(self.write_colorspace)
 
+            # do not rely on defaults, set explicitly
+            # to be sure it is set correctly
+            r_node["frame_mode"].setValue("expression")
+            r_node["frame"].setValue("")
+
             if read_raw:
                 r_node["raw"].setValue(1)

@@ -921,7 +954,6 @@ class ExporterReviewMov(ExporterReview):
         self.log.debug("Path: {}".format(self.path))
         write_node["file"].setValue(str(self.path))
         write_node["file_type"].setValue(str(self.ext))
 
         # Knobs `meta_codec` and `mov64_codec` are not available on centos.
         # TODO shouldn't this come from settings on outputs?
         try:
@@ -17,7 +17,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
                 "yeticache",
                 "pointcache"]
     representations = ["*"]
-    extension = {"*"}
+    extensions = {"*"}
 
     label = "Set frame range"
     order = 11
@@ -27,7 +27,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
 
     families = ["workfile", "nukenodes"]
     representations = ["*"]
-    extension = {"nk"}
+    extensions = {"nk"}
 
     label = "Import Nuke Nodes"
     order = 0

@@ -26,7 +26,7 @@ class AlembicCameraLoader(load.LoaderPlugin):
 
     families = ["camera"]
     representations = ["*"]
-    extension = {"abc"}
+    extensions = {"abc"}
 
     label = "Load Alembic Camera"
     icon = "camera"

@@ -24,7 +24,7 @@ class LoadEffects(load.LoaderPlugin):
 
     families = ["effect"]
     representations = ["*"]
-    extension = {"json"}
+    extensions = {"json"}
 
     label = "Load Effects - nodes"
     order = 0

@@ -25,7 +25,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
 
     families = ["effect"]
     representations = ["*"]
-    extension = {"json"}
+    extensions = {"json"}
 
     label = "Load Effects - Input Process"
     order = 0

@@ -26,7 +26,7 @@ class LoadGizmo(load.LoaderPlugin):
 
     families = ["gizmo"]
     representations = ["*"]
-    extension = {"gizmo"}
+    extensions = {"gizmo"}
 
     label = "Load Gizmo"
     order = 0

@@ -28,7 +28,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
 
     families = ["gizmo"]
     representations = ["*"]
-    extension = {"gizmo"}
+    extensions = {"gizmo"}
 
     label = "Load Gizmo - Input Process"
     order = 0

@@ -9,7 +9,7 @@ class MatchmoveLoader(load.LoaderPlugin):
 
     families = ["matchmove"]
     representations = ["*"]
-    extension = {"py"}
+    extensions = {"py"}
 
     defaults = ["Camera", "Object"]

@@ -24,7 +24,7 @@ class AlembicModelLoader(load.LoaderPlugin):
 
     families = ["model", "pointcache", "animation"]
     representations = ["*"]
-    extension = {"abc"}
+    extensions = {"abc"}
 
     label = "Load Alembic"
     icon = "cube"

@@ -22,7 +22,7 @@ class LinkAsGroup(load.LoaderPlugin):
 
     families = ["workfile", "nukenodes"]
     representations = ["*"]
-    extension = {"nk"}
+    extensions = {"nk"}
 
     label = "Load Precomp"
     order = 0
@@ -2,7 +2,7 @@ import nuke
 import pyblish.api
 
 
-class CollectNukeInstanceData(pyblish.api.InstancePlugin):
+class CollectInstanceData(pyblish.api.InstancePlugin):
     """Collect Nuke instance data
 
     """
@@ -8,15 +8,16 @@ from openpype.hosts.nuke.api import plugin
 from openpype.hosts.nuke.api.lib import maintained_selection
 
 
-class ExtractReviewDataMov(publish.Extractor):
-    """Extracts movie and thumbnail with baked in luts
+class ExtractReviewIntermediates(publish.Extractor):
+    """Extracting intermediate videos or sequences with
+    thumbnail for transcoding.
 
     must be run after extract_render_local.py
 
     """
 
     order = pyblish.api.ExtractorOrder + 0.01
-    label = "Extract Review Data Mov"
+    label = "Extract Review Intermediates"
 
     families = ["review"]
     hosts = ["nuke"]

@@ -25,6 +26,24 @@ class ExtractReviewDataMov(publish.Extractor):
     viewer_lut_raw = None
     outputs = {}
 
+    @classmethod
+    def apply_settings(cls, project_settings):
+        """Apply the settings from the deprecated
+        ExtractReviewDataMov plugin for backwards compatibility
+        """
+        nuke_publish = project_settings["nuke"]["publish"]
+        deprecated_setting = nuke_publish["ExtractReviewDataMov"]
+        current_setting = nuke_publish.get("ExtractReviewIntermediates")
+        if deprecated_setting["enabled"]:
+            # Use deprecated settings if they are still enabled
+            cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"]
+            cls.outputs = deprecated_setting["outputs"]
+        elif current_setting is None:
+            pass
+        elif current_setting["enabled"]:
+            cls.viewer_lut_raw = current_setting["viewer_lut_raw"]
+            cls.outputs = current_setting["outputs"]
+
     def process(self, instance):
         families = set(instance.data["families"])
@@ -8,6 +8,7 @@ from openpype.hosts.nuke import api as napi
 from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings
 
 
+# Python 2/3 compatibility
 if sys.version_info[0] >= 3:
     unicode = str
@@ -45,11 +46,12 @@ class ExtractThumbnail(publish.Extractor):
             for o_name, o_data in instance.data["bakePresets"].items():
                 self.render_thumbnail(instance, o_name, **o_data)
         else:
-            viewer_process_swithes = {
+            viewer_process_switches = {
                 "bake_viewer_process": True,
                 "bake_viewer_input_process": True
             }
-            self.render_thumbnail(instance, None, **viewer_process_swithes)
+            self.render_thumbnail(
+                instance, None, **viewer_process_switches)
 
     def render_thumbnail(self, instance, output_name=None, **kwargs):
         first_frame = instance.data["frameStartHandle"]

@@ -61,8 +63,6 @@ class ExtractThumbnail(publish.Extractor):
 
         # solve output name if any is set
         output_name = output_name or ""
-        if output_name:
-            output_name = "_" + output_name
 
         bake_viewer_process = kwargs["bake_viewer_process"]
         bake_viewer_input_process_node = kwargs[
@@ -166,26 +166,42 @@ class ExtractThumbnail(publish.Extractor):
             previous_node = dag_node
             temporary_nodes.append(dag_node)
 
+        thumb_name = "thumbnail"
+        # only add output name and
+        # if there are more than one bake preset
+        if (
+            output_name
+            and len(instance.data.get("bakePresets", {}).keys()) > 1
+        ):
+            thumb_name = "{}_{}".format(output_name, thumb_name)
+
         # create write node
         write_node = nuke.createNode("Write")
-        file = fhead[:-1] + output_name + ".jpg"
-        name = "thumbnail"
-        path = os.path.join(staging_dir, file).replace("\\", "/")
-        instance.data["thumbnail"] = path
-        write_node["file"].setValue(path)
+        file = fhead[:-1] + thumb_name + ".jpg"
+        thumb_path = os.path.join(staging_dir, file).replace("\\", "/")
+
+        # add thumbnail to cleanup
+        instance.context.data["cleanupFullPaths"].append(thumb_path)
+
+        # make sure only one thumbnail path is set
+        # and it is existing file
+        instance_thumb_path = instance.data.get("thumbnailPath")
+        if not instance_thumb_path or not os.path.isfile(instance_thumb_path):
+            instance.data["thumbnailPath"] = thumb_path
+
+        write_node["file"].setValue(thumb_path)
         write_node["file_type"].setValue("jpg")
         write_node["raw"].setValue(1)
         write_node.setInput(0, previous_node)
         temporary_nodes.append(write_node)
-        tags = ["thumbnail", "publish_on_farm"]
 
         repre = {
-            'name': name,
+            'name': thumb_name,
             'ext': "jpg",
-            "outputName": "thumb",
+            "outputName": thumb_name,
             'files': file,
             "stagingDir": staging_dir,
-            "tags": tags
+            "tags": ["thumbnail", "publish_on_farm", "delete"]
         }
         instance.data["representations"].append(repre)
@@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Shot/Asset name</title>
        <description>
## Publishing to a different asset context

There are publish instances present which are publishing into a different asset than your current context.

Usually this is not what you want but there can be cases where you might want to publish into another asset/shot or task.

If that's the case you can disable the validation on the instance to ignore it.

The wrong node's name is: \`{node_name}\`

### Correct context keys and values:

\`{correct_values}\`

### Wrong keys and values:

\`{wrong_values}\`.


## How to repair?

1. Use \"Repair\" button.
2. Hit Reload button on the publisher.
        </description>
    </error>
</root>
@@ -1,18 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Shot/Asset name</title>
        <description>
## Invalid Shot/Asset name in subset

Following Node with name `{node_name}`:
Is in context of `{correct_name}` but Node _asset_ knob is set as `{wrong_name}`.

### How to repair?

1. Either use Repair or Select button.
2. If you chose Select then rename asset knob to correct name.
3. Hit Reload button on the publisher.
        </description>
    </error>
</root>
112  openpype/hosts/nuke/plugins/publish/validate_asset_context.py  (Normal file)

@@ -0,0 +1,112 @@
# -*- coding: utf-8 -*-
"""Validate if instance asset is the same as context asset."""
from __future__ import absolute_import

import pyblish.api

from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
    PublishXmlValidationError,
    OptionalPyblishPluginMixin
)
from openpype.hosts.nuke.api import SelectInvalidAction


class ValidateCorrectAssetContext(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Validator to check if instance asset context match context asset.

    When working in per-shot style you always publish data in context of
    current asset (shot). This validator checks if this is so. It is optional
    so it can be disabled when needed.

    Checking `asset` and `task` keys.
    """
    order = ValidateContentsOrder
    label = "Validate asset context"
    hosts = ["nuke"]
    actions = [
        RepairAction,
        SelectInvalidAction
    ]
    optional = True

    @classmethod
    def apply_settings(cls, project_settings):
        """Apply deprecated settings from project settings."""
        nuke_publish = project_settings["nuke"]["publish"]
        if "ValidateCorrectAssetName" in nuke_publish:
            settings = nuke_publish["ValidateCorrectAssetName"]
        else:
            settings = nuke_publish["ValidateCorrectAssetContext"]

        cls.enabled = settings["enabled"]
        cls.optional = settings["optional"]
        cls.active = settings["active"]

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        invalid_keys = self.get_invalid(instance)

        if not invalid_keys:
            return

        message_values = {
            "node_name": instance.data["transientData"]["node"].name(),
            "correct_values": ", ".join([
                "{} > {}".format(_key, instance.context.data[_key])
                for _key in invalid_keys
            ]),
            "wrong_values": ", ".join([
                "{} > {}".format(_key, instance.data.get(_key))
                for _key in invalid_keys
            ])
        }

        msg = (
            "Instance `{node_name}` has wrong context keys:\n"
            "Correct: `{correct_values}` | Wrong: `{wrong_values}`").format(
            **message_values)

        self.log.debug(msg)

        raise PublishXmlValidationError(
            self, msg, formatting_data=message_values
        )

    @classmethod
    def get_invalid(cls, instance):
        """Get invalid keys from instance data and context data."""

        invalid_keys = []
        testing_keys = ["asset", "task"]
        for _key in testing_keys:
            if _key not in instance.data:
                invalid_keys.append(_key)
                continue
            if instance.data[_key] != instance.context.data[_key]:
                invalid_keys.append(_key)

        return invalid_keys

    @classmethod
    def repair(cls, instance):
        """Repair instance data with context data."""
        invalid_keys = cls.get_invalid(instance)

        create_context = instance.context.data["create_context"]

        instance_id = instance.data.get("instance_id")
        created_instance = create_context.get_instance_by_id(
            instance_id
        )
        for _key in invalid_keys:
            created_instance[_key] = instance.context.data[_key]

        create_context.save_changes()
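The key comparison in `get_invalid()` above is plain dictionary work; a sketch with hypothetical data:

```python
instance_data = {"asset": "sh010", "task": "comp"}
context_data = {"asset": "sh020", "task": "comp"}

invalid = [
    key for key in ("asset", "task")
    if key not in instance_data or instance_data[key] != context_data[key]
]
print(invalid)  # ['asset']
```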
@@ -1,138 +0,0 @@
# -*- coding: utf-8 -*-
"""Validate if instance asset is the same as context asset."""
from __future__ import absolute_import

import pyblish.api

import openpype.hosts.nuke.api.lib as nlib

from openpype.pipeline.publish import (
    ValidateContentsOrder,
    PublishXmlValidationError,
    OptionalPyblishPluginMixin
)


class SelectInvalidInstances(pyblish.api.Action):
    """Select invalid instances in Outliner."""

    label = "Select"
    icon = "briefcase"
    on = "failed"

    def process(self, context, plugin):
        """Process invalid validators and select invalid instances."""
        # Get the errored instances
        failed = []
        for result in context.data["results"]:
            if (
                result["error"] is None
                or result["instance"] is None
                or result["instance"] in failed
                or result["plugin"] != plugin
            ):
                continue

            failed.append(result["instance"])

        # Apply pyblish.logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(failed, plugin)

        if instances:
            self.deselect()
            self.log.info(
                "Selecting invalid nodes: %s" % ", ".join(
                    [str(x) for x in instances]
                )
            )
            self.select(instances)
        else:
            self.log.info("No invalid nodes found.")
            self.deselect()

    def select(self, instances):
        for inst in instances:
            if inst.data.get("transientData", {}).get("node"):
                select_node = inst.data["transientData"]["node"]
                select_node["selected"].setValue(True)

    def deselect(self):
        nlib.reset_selection()


class RepairSelectInvalidInstances(pyblish.api.Action):
    """Repair the instance asset."""

    label = "Repair"
    icon = "wrench"
    on = "failed"

    def process(self, context, plugin):
        # Get the errored instances
        failed = []
        for result in context.data["results"]:
            if (
                result["error"] is None
                or result["instance"] is None
                or result["instance"] in failed
                or result["plugin"] != plugin
            ):
                continue

            failed.append(result["instance"])

        # Apply pyblish.logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(failed, plugin)
        self.log.debug(instances)

        context_asset = context.data["assetEntity"]["name"]
        for instance in instances:
            node = instance.data["transientData"]["node"]
            node_data = nlib.get_node_data(node, nlib.INSTANCE_DATA_KNOB)
            node_data["asset"] = context_asset
            nlib.set_node_data(node, nlib.INSTANCE_DATA_KNOB, node_data)


class ValidateCorrectAssetName(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Validator to check if instance asset match context asset.

    When working in per-shot style you always publish data in context of
    current asset (shot). This validator checks if this is so. It is optional
    so it can be disabled when needed.

    Action on this validator will select invalid instances in Outliner.
    """
    order = ValidateContentsOrder
    label = "Validate correct asset name"
    hosts = ["nuke"]
    actions = [
        SelectInvalidInstances,
        RepairSelectInvalidInstances
    ]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        asset = instance.data.get("asset")
        context_asset = instance.context.data["assetEntity"]["name"]
        node = instance.data["transientData"]["node"]

        msg = (
            "Instance `{}` has wrong shot/asset name:\n"
            "Correct: `{}` | Wrong: `{}`").format(
                instance.name, asset, context_asset)

        self.log.debug(msg)

        if asset != context_asset:
            raise PublishXmlValidationError(
                self, msg, formatting_data={
                    "node_name": node.name(),
                    "wrong_name": asset,
                    "correct_name": context_asset
                }
            )
@@ -23,7 +23,7 @@ class ValidateOutputResolution(
     order = pyblish.api.ValidatorOrder
     optional = True
     families = ["render"]
-    label = "Write resolution"
+    label = "Validate Write resolution"
     hosts = ["nuke"]
     actions = [RepairAction]
@@ -82,12 +82,6 @@ class ValidateNukeWriteNode(
         correct_data = get_write_node_template_attr(write_group_node)
 
         check = []
-        self.log.debug("__ write_node: {}".format(
-            write_node
-        ))
-        self.log.debug("__ correct_data: {}".format(
-            correct_data
-        ))
 
         # Collect key values of same type in a list.
         values_by_name = defaultdict(list)

@@ -96,9 +90,6 @@ class ValidateNukeWriteNode(
 
         for knob_data in correct_data["knobs"]:
             knob_type = knob_data["type"]
-            self.log.debug("__ knob_type: {}".format(
-                knob_type
-            ))
 
             if (
                 knob_type == "__legacy__"

@@ -134,9 +125,6 @@ class ValidateNukeWriteNode(
 
                 fixed_values.append(value)
 
-            self.log.debug("__ key: {} | values: {}".format(
-                key, fixed_values
-            ))
             if (
                 node_value not in fixed_values
                 and key != "file"

@@ -144,8 +132,6 @@ class ValidateNukeWriteNode(
             ):
                 check.append([key, value, write_node[key].value()])
 
-        self.log.info(check)
-
         if check:
             self._make_error(check)
|
|||
|
|
@@ -1,5 +1,8 @@
+import re
+
 import openpype.hosts.photoshop.api as api
 from openpype.client import get_asset_by_name
+from openpype.lib import prepare_template_data
 from openpype.pipeline import (
     AutoCreator,
     CreatedInstance

@@ -78,3 +81,17 @@ class PSAutoCreator(AutoCreator):
         existing_instance["asset"] = asset_name
         existing_instance["task"] = task_name
         existing_instance["subset"] = subset_name
+
+
+def clean_subset_name(subset_name):
+    """Clean all variants of leftover {layer} from subset name."""
+    dynamic_data = prepare_template_data({"layer": "{layer}"})
+    for value in dynamic_data.values():
+        if value in subset_name:
+            subset_name = (subset_name.replace(value, "")
+                                       .replace("__", "_")
+                                       .replace("..", "."))
+    # clean any trailing separator, as in "Main_"
+    pattern = r'[\W_]+$'
+    replacement = ''
+    return re.sub(pattern, replacement, subset_name)
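To make the shared helper concrete, a few hypothetical inputs and outputs (assuming prepare_template_data expands the key into its lowercase/capitalized/uppercase variants, which is what "all variants" in the docstring refers to; not part of the commit):

# Hypothetical behaviour of clean_subset_name():
clean_subset_name("image{Layer}Main")   # -> "imageMain"
clean_subset_name("imageMain_{layer}")  # -> "imageMain"  (trailing "_" stripped)
clean_subset_name("imageMain")          # -> "imageMain"  (unchanged)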
@@ -2,7 +2,7 @@ from openpype.pipeline import CreatedInstance

 from openpype.lib import BoolDef
 import openpype.hosts.photoshop.api as api
-from openpype.hosts.photoshop.lib import PSAutoCreator
+from openpype.hosts.photoshop.lib import PSAutoCreator, clean_subset_name
 from openpype.pipeline.create import get_subset_name
 from openpype.lib import prepare_template_data
 from openpype.client import get_asset_by_name

@@ -129,14 +129,4 @@ class AutoImageCreator(PSAutoCreator):
             self.family, variant, task_name, asset_doc,
             project_name, host_name, dynamic_data=dynamic_data
         )
-        return self._clean_subset_name(subset_name)
-
-    def _clean_subset_name(self, subset_name):
-        """Clean all variants leftover {layer} from subset name."""
-        dynamic_data = prepare_template_data({"layer": "{layer}"})
-        for value in dynamic_data.values():
-            if value in subset_name:
-                return (subset_name.replace(value, "")
-                        .replace("__", "_")
-                        .replace("..", "."))
-        return subset_name
+        return clean_subset_name(subset_name)
@@ -10,6 +10,7 @@ from openpype.pipeline import (
 from openpype.lib import prepare_template_data
 from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
 from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances
+from openpype.hosts.photoshop.lib import clean_subset_name


 class ImageCreator(Creator):

@@ -88,6 +89,7 @@ class ImageCreator(Creator):

         layer_fill = prepare_template_data({"layer": layer_name})
         subset_name = subset_name.format(**layer_fill)
+        subset_name = clean_subset_name(subset_name)

         if group.long_name:
             for directory in group.long_name[::-1]:

@@ -184,7 +186,6 @@ class ImageCreator(Creator):
         self.mark_for_review = plugin_settings["mark_for_review"]
         self.enabled = plugin_settings["enabled"]

-
     def get_detail_description(self):
         return """Creator for Image instances
@@ -6,13 +6,10 @@ from .utils import (
 )

 from .pipeline import (
-    install,
-    uninstall,
+    ResolveHost,
     ls,
     containerise,
     update_container,
-    publish,
-    launch_workfiles_app,
     maintained_selection,
     remove_instance,
     list_instances

@@ -76,14 +73,10 @@ __all__ = [
     "bmdvf",

     # pipeline
-    "install",
-    "uninstall",
+    "ResolveHost",
     "ls",
     "containerise",
     "update_container",
-    "reload_pipeline",
-    "publish",
-    "launch_workfiles_app",
     "maintained_selection",
     "remove_instance",
     "list_instances",
@@ -5,11 +5,6 @@ from qtpy import QtWidgets, QtCore

 from openpype.tools.utils import host_tools

-from .pipeline import (
-    publish,
-    launch_workfiles_app
-)
-

 def load_stylesheet():
     path = os.path.join(os.path.dirname(__file__), "menu_style.qss")

@@ -113,7 +108,7 @@ class OpenPypeMenu(QtWidgets.QWidget):

     def on_workfile_clicked(self):
         print("Clicked Workfile")
-        launch_workfiles_app()
+        host_tools.show_workfiles()

     def on_create_clicked(self):
         print("Clicked Create")

@@ -121,7 +116,7 @@ class OpenPypeMenu(QtWidgets.QWidget):

     def on_publish_clicked(self):
         print("Clicked Publish")
-        publish(None)
+        host_tools.show_publish(parent=None)

     def on_load_clicked(self):
         print("Clicked Load")
@@ -12,14 +12,24 @@ from openpype.pipeline import (
     schema,
     register_loader_plugin_path,
     register_creator_plugin_path,
     deregister_loader_plugin_path,
     deregister_creator_plugin_path,
     AVALON_CONTAINER_ID,
 )
-from openpype.tools.utils import host_tools
+from openpype.host import (
+    HostBase,
+    IWorkfileHost,
+    ILoadHost
+)

 from . import lib
 from .utils import get_resolve_module
+from .workio import (
+    open_file,
+    save_file,
+    file_extensions,
+    has_unsaved_changes,
+    work_root,
+    current_file
+)

 log = Logger.get_logger(__name__)

@@ -32,53 +42,56 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
 AVALON_CONTAINERS = ":AVALON_CONTAINERS"


-def install():
-    """Install resolve-specific functionality of avalon-core.
-
-    This is where you install menus and register families, data
-    and loaders into resolve.
-
-    It is called automatically when installing via `api.install(resolve)`.
-
-    See the Maya equivalent for inspiration on how to implement this.
-
-    """
-    log.info("openpype.hosts.resolve installed")
-
-    pyblish.register_host("resolve")
-    pyblish.register_plugin_path(PUBLISH_PATH)
-    log.info("Registering DaVinci Resovle plug-ins..")
-
-    register_loader_plugin_path(LOAD_PATH)
-    register_creator_plugin_path(CREATE_PATH)
-
-    # register callback for switching publishable
-    pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
-
-    get_resolve_module()
-
-
-def uninstall():
-    """Uninstall all that was installed
-
-    This is where you undo everything that was done in `install()`.
-    That means, removing menus, deregistering families and data
-    and everything. It should be as though `install()` was never run,
-    because odds are calling this function means the user is interested
-    in re-installing shortly afterwards. If, for example, he has been
-    modifying the menu or registered families.
-
-    """
-    pyblish.deregister_host("resolve")
-    pyblish.deregister_plugin_path(PUBLISH_PATH)
-    log.info("Deregistering DaVinci Resovle plug-ins..")
-
-    deregister_loader_plugin_path(LOAD_PATH)
-    deregister_creator_plugin_path(CREATE_PATH)
-
-    # register callback for switching publishable
-    pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
+class ResolveHost(HostBase, IWorkfileHost, ILoadHost):
+    name = "resolve"
+
+    def install(self):
+        """Install resolve-specific functionality of avalon-core.
+
+        This is where you install menus and register families, data
+        and loaders into resolve.
+
+        It is called automatically when installing via `api.install(resolve)`.
+
+        See the Maya equivalent for inspiration on how to implement this.
+
+        """
+
+        log.info("openpype.hosts.resolve installed")
+
+        pyblish.register_host(self.name)
+        pyblish.register_plugin_path(PUBLISH_PATH)
+        print("Registering DaVinci Resolve plug-ins..")
+
+        register_loader_plugin_path(LOAD_PATH)
+        register_creator_plugin_path(CREATE_PATH)
+
+        # register callback for switching publishable
+        pyblish.register_callback("instanceToggled",
+                                  on_pyblish_instance_toggled)
+
+        get_resolve_module()
+
+    def open_workfile(self, filepath):
+        return open_file(filepath)
+
+    def save_workfile(self, filepath=None):
+        return save_file(filepath)
+
+    def work_root(self, session):
+        return work_root(session)
+
+    def get_current_workfile(self):
+        return current_file()
+
+    def workfile_has_unsaved_changes(self):
+        return has_unsaved_changes()
+
+    def get_workfile_extensions(self):
+        return file_extensions()
+
+    def get_containers(self):
+        return ls()

@@ -206,15 +219,6 @@ def update_container(timeline_item, data=None):
     return bool(lib.set_timeline_item_pype_tag(timeline_item, container))


-def launch_workfiles_app(*args):
-    host_tools.show_workfiles()
-
-
-def publish(parent):
-    """Shorthand to publish from within host"""
-    return host_tools.show_publish()
-
-
 @contextlib.contextmanager
 def maintained_selection():
     """Maintain selection during context
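For reference, a minimal sketch of how the refactored host is registered from now on; this mirrors the utility-script hunks further below:

from openpype.pipeline import install_host
from openpype.hosts.resolve.api import ResolveHost

host = ResolveHost()
install_host(host)  # replaces the old module-level install()/uninstall() flow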
@@ -17,7 +17,7 @@ def get_resolve_module():
     # dont run if already loaded
     if api.bmdvr:
         log.info(("resolve module is assigned to "
-                  f"`pype.hosts.resolve.api.bmdvr`: {api.bmdvr}"))
+                  f"`openpype.hosts.resolve.api.bmdvr`: {api.bmdvr}"))
         return api.bmdvr
     try:
         """

@@ -41,6 +41,10 @@ def get_resolve_module():
             )
         elif sys.platform.startswith("linux"):
             expected_path = "/opt/resolve/libs/Fusion/Modules"
+        else:
+            raise NotImplementedError(
+                "Unsupported platform: {}".format(sys.platform)
+            )

         # check if the default path has it...
         print(("Unable to find module DaVinciResolveScript from "

@@ -74,6 +78,6 @@ def get_resolve_module():
     api.bmdvr = bmdvr
     api.bmdvf = bmdvf
     log.info(("Assigning resolve module to "
-              f"`pype.hosts.resolve.api.bmdvr`: {api.bmdvr}"))
+              f"`openpype.hosts.resolve.api.bmdvr`: {api.bmdvr}"))
     log.info(("Assigning resolve module to "
-              f"`pype.hosts.resolve.api.bmdvf`: {api.bmdvf}"))
+              f"`openpype.hosts.resolve.api.bmdvf`: {api.bmdvf}"))
@@ -27,7 +27,8 @@ def ensure_installed_host():
     if host:
         return host

-    install_host(openpype.hosts.resolve.api)
+    host = openpype.hosts.resolve.api.ResolveHost()
+    install_host(host)
     return registered_host()


@@ -37,10 +38,10 @@ def launch_menu():
     openpype.hosts.resolve.api.launch_pype_menu()


-def open_file(path):
+def open_workfile(path):
     # Avoid the need to "install" the host
     host = ensure_installed_host()
-    host.open_file(path)
+    host.open_workfile(path)


 def main():

@@ -49,7 +50,7 @@ def main():

     if workfile_path and os.path.exists(workfile_path):
         log.info(f"Opening last workfile: {workfile_path}")
-        open_file(workfile_path)
+        open_workfile(workfile_path)
     else:
         log.info("No last workfile set to open. Skipping..")
@@ -8,12 +8,13 @@ log = Logger.get_logger(__name__)


 def main(env):
-    import openpype.hosts.resolve.api as bmdvr
+    from openpype.hosts.resolve.api import ResolveHost, launch_pype_menu

     # activate resolve from openpype
-    install_host(bmdvr)
+    host = ResolveHost()
+    install_host(host)

-    bmdvr.launch_pype_menu()
+    launch_pype_menu()


 if __name__ == "__main__":
@@ -1,28 +1,21 @@
 import pyblish.api
-from openpype.pipeline import OptionalPyblishPluginMixin


-class CollectMissingFrameDataFromAssetEntity(
-    pyblish.api.InstancePlugin,
-    OptionalPyblishPluginMixin
-):
-    """Collect Missing Frame Range data From Asset Entity
+class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin):
+    """Collect Frame Data From AssetEntity found in context

     Frame range data will only be collected if the keys
     are not yet collected for the instance.
     """

     order = pyblish.api.CollectorOrder + 0.491
-    label = "Collect Missing Frame Data From Asset Entity"
+    label = "Collect Missing Frame Data From Asset"
     families = ["plate", "pointcache",
                 "vdbcache", "online",
                 "render"]
     hosts = ["traypublisher"]
-    optional = True

     def process(self, instance):
-        if not self.is_active(instance.data):
-            return
         missing_keys = []
         for key in (
             "fps",
@@ -1,33 +1,47 @@
 import pyblish.api
 import clique

+from openpype.pipeline import OptionalPyblishPluginMixin
+

-class CollectSequenceFrameData(pyblish.api.InstancePlugin):
-    """Collect Sequence Frame Data
+class CollectSequenceFrameData(
+    pyblish.api.InstancePlugin,
+    OptionalPyblishPluginMixin
+):
+    """Collect Original Sequence Frame Data

     If the representation includes files with frame numbers,
     then set `frameStart` and `frameEnd` for the instance to the
     start and end frame respectively
     """

-    order = pyblish.api.CollectorOrder + 0.2
-    label = "Collect Sequence Frame Data"
+    order = pyblish.api.CollectorOrder + 0.4905
+    label = "Collect Original Sequence Frame Data"
     families = ["plate", "pointcache",
                 "vdbcache", "online",
                 "render"]
     hosts = ["traypublisher"]
+    optional = True

     def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
         frame_data = self.get_frame_data_from_repre_sequence(instance)

         if not frame_data:
            # if no dict data skip collecting the frame range data
            return

         for key, value in frame_data.items():
-            if key not in instance.data:
-                instance.data[key] = value
-                self.log.debug(f"Collected Frame range data '{key}':{value} ")
+            instance.data[key] = value
+            self.log.debug(f"Collected Frame range data '{key}':{value} ")

     def get_frame_data_from_repre_sequence(self, instance):
         repres = instance.data.get("representations")
+        asset_data = instance.data["assetEntity"]["data"]
+
         if repres:
             first_repre = repres[0]
             if "ext" not in first_repre:

@@ -36,7 +50,7 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
             return

         files = first_repre["files"]
-        collections, remainder = clique.assemble(files)
+        collections, _ = clique.assemble(files)
         if not collections:
             # No sequences detected and we can't retrieve
             # frame range

@@ -52,5 +66,5 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin):
             "frameEnd": repres_frames[-1],
             "handleStart": 0,
             "handleEnd": 0,
-            "fps": instance.context.data["assetEntity"]["data"]["fps"]
+            "fps": asset_data["fps"]
         }
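For context, a small sketch of the clique call used above, with hypothetical file names; clique.assemble() returns the detected sequences plus a remainder of files that do not belong to any sequence:

import clique

files = ["shot010.%04d.exr" % frame for frame in range(1001, 1006)]
collections, _ = clique.assemble(files)

col = collections[0]
repres_frames = sorted(col.indexes)  # [1001, 1002, 1003, 1004, 1005]
frame_start, frame_end = repres_frames[0], repres_frames[-1]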
179 openpype/hosts/unreal/plugins/load/load_yeticache.py Normal file

@@ -0,0 +1,179 @@
# -*- coding: utf-8 -*-
"""Loader for Yeti Cache."""
import os
import json

from openpype.pipeline import (
    get_representation_path,
    AYON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
import unreal  # noqa


class YetiLoader(plugin.Loader):
    """Load Yeti Cache"""

    families = ["yeticacheUE"]
    label = "Import Yeti"
    representations = ["abc"]
    icon = "pagelines"
    color = "orange"

    @staticmethod
    def get_task(filename, asset_dir, asset_name, replace):
        task = unreal.AssetImportTask()
        options = unreal.AbcImportSettings()

        task.set_editor_property('filename', filename)
        task.set_editor_property('destination_path', asset_dir)
        task.set_editor_property('destination_name', asset_name)
        task.set_editor_property('replace_existing', replace)
        task.set_editor_property('automated', True)
        task.set_editor_property('save', True)

        task.options = options

        return task

    @staticmethod
    def is_groom_module_active():
        """Check if the Groom plugin is active.

        This is a workaround, because the Unreal python API doesn't have
        any method to check whether a plugin is active.
        """
        prj_file = unreal.Paths.get_project_file_path()

        with open(prj_file, "r") as fp:
            data = json.load(fp)

        plugins = data.get("Plugins")

        if not plugins:
            return False

        plugin_names = [p.get("Name") for p in plugins]

        return "HairStrands" in plugin_names

    def load(self, context, name, namespace, options):
        """Load and containerise representation into Content Browser.

        This is a two-step process. First, import the Alembic cache to a
        temporary path, then call `containerise()` on it - this moves all
        content to a new directory, creates an AssetContainer there and
        imprints it with metadata. This marks the path as a container.

        Args:
            context (dict): application context
            name (str): subset name
            namespace (str): in Unreal this is basically path to container.
                             This is not passed here, so namespace is set
                             by `containerise()` because only then we know
                             real path.
            data (dict): Those would be data to be imprinted. This is not used
                         now, data are imprinted by `containerise()`.

        Returns:
            list(str): list of container content
        """
        # Check if the Groom plugin is active
        if not self.is_groom_module_active():
            raise RuntimeError("Groom plugin is not activated.")

        # Create directory for asset and Ayon container
        root = "/Game/Ayon/Assets"
        asset = context.get('asset').get('name')
        suffix = "_CON"
        asset_name = f"{asset}_{name}" if asset else f"{name}"

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            f"{root}/{asset}/{name}", suffix="")

        unique_number = 1
        while unreal.EditorAssetLibrary.does_directory_exist(
            f"{asset_dir}_{unique_number:02}"
        ):
            unique_number += 1

        asset_dir = f"{asset_dir}_{unique_number:02}"
        container_name = f"{container_name}_{unique_number:02}{suffix}"

        if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
            unreal.EditorAssetLibrary.make_directory(asset_dir)

        path = self.filepath_from_context(context)
        task = self.get_task(path, asset_dir, asset_name, False)

        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501

        # Create Asset Container
        unreal_pipeline.create_container(
            container=container_name, path=asset_dir)

        data = {
            "schema": "ayon:container-2.0",
            "id": AYON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
        }
        unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
        )

        for a in asset_content:
            unreal.EditorAssetLibrary.save_asset(a)

        return asset_content

    def update(self, container, representation):
        name = container["asset_name"]
        source_path = get_representation_path(representation)
        destination_path = container["namespace"]

        task = self.get_task(source_path, destination_path, name, True)

        # import the Alembic cache and replace existing data
        unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

        container_path = f'{container["namespace"]}/{container["objectName"]}'
        # update metadata
        unreal_pipeline.imprint(
            container_path,
            {
                "representation": str(representation["_id"]),
                "parent": str(representation["parent"])
            })

        asset_content = unreal.EditorAssetLibrary.list_assets(
            destination_path, recursive=True, include_folder=True
        )

        for a in asset_content:
            unreal.EditorAssetLibrary.save_asset(a)

    def remove(self, container):
        path = container["namespace"]
        parent_path = os.path.dirname(path)

        unreal.EditorAssetLibrary.delete_directory(path)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            parent_path, recursive=False
        )

        if len(asset_content) == 0:
            unreal.EditorAssetLibrary.delete_directory(parent_path)
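The is_groom_module_active() check above only scans the project file's plug-in list; a hypothetical .uproject payload that would pass it:

import json

# Hypothetical .uproject content; only the "Plugins" list matters here.
uproject = json.loads('''
{
    "FileVersion": 3,
    "Plugins": [
        {"Name": "HairStrands", "Enabled": true}
    ]
}
''')
plugin_names = [p.get("Name") for p in uproject.get("Plugins", [])]
assert "HairStrands" in plugin_names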
@@ -1,5 +1,6 @@
 import os

+from openpype import AYON_SERVER_ENABLED
 from openpype.modules import OpenPypeModule, ITrayModule


@@ -75,20 +76,11 @@ class AvalonModule(OpenPypeModule, ITrayModule):

     def show_library_loader(self):
         if self._library_loader_window is None:
-            from qtpy import QtCore
-            from openpype.tools.libraryloader import LibraryLoaderWindow
             from openpype.pipeline import install_openpype_plugins

-            libraryloader = LibraryLoaderWindow(
-                show_projects=True,
-                show_libraries=True
-            )
-            # Remove always on top flag for tray
-            window_flags = libraryloader.windowFlags()
-            if window_flags | QtCore.Qt.WindowStaysOnTopHint:
-                window_flags ^= QtCore.Qt.WindowStaysOnTopHint
-            libraryloader.setWindowFlags(window_flags)
-            self._library_loader_window = libraryloader
+            if AYON_SERVER_ENABLED:
+                self._init_ayon_loader()
+            else:
+                self._init_library_loader()

             install_openpype_plugins()

@@ -106,3 +98,25 @@ class AvalonModule(OpenPypeModule, ITrayModule):
         if self.tray_initialized:
             from .rest_api import AvalonRestApiResource
             self.rest_api_obj = AvalonRestApiResource(self, server_manager)
+
+    def _init_library_loader(self):
+        from qtpy import QtCore
+        from openpype.tools.libraryloader import LibraryLoaderWindow
+
+        libraryloader = LibraryLoaderWindow(
+            show_projects=True,
+            show_libraries=True
+        )
+        # Remove always on top flag for tray
+        window_flags = libraryloader.windowFlags()
+        if window_flags | QtCore.Qt.WindowStaysOnTopHint:
+            window_flags ^= QtCore.Qt.WindowStaysOnTopHint
+        libraryloader.setWindowFlags(window_flags)
+        self._library_loader_window = libraryloader
+
+    def _init_ayon_loader(self):
+        from openpype.tools.ayon_loader.ui import LoaderWindow
+
+        libraryloader = LoaderWindow()
+
+        self._library_loader_window = libraryloader
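One caveat in the code moved above: the always-on-top test uses a bitwise OR, which is truthy whenever any flag is set, so the XOR below it can also switch the hint on. A sketch of the presumably intended test (an assumption, not part of the commit):

# Clear the hint only when it is actually set:
if window_flags & QtCore.Qt.WindowStaysOnTopHint:
    window_flags ^= QtCore.Qt.WindowStaysOnTopHint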
@@ -0,0 +1,181 @@
# -*- coding: utf-8 -*-
"""Submitting render job to Deadline."""

import os
import getpass
import attr
from datetime import datetime

import bpy

from openpype.lib import is_running_from_build
from openpype.pipeline import legacy_io
from openpype.pipeline.farm.tools import iter_expected_files
from openpype.tests.lib import is_in_tests

from openpype_modules.deadline import abstract_submit_deadline
from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo


@attr.s
class BlenderPluginInfo():
    SceneFile = attr.ib(default=None)  # Input
    Version = attr.ib(default=None)  # Mandatory for Deadline
    SaveFile = attr.ib(default=True)


class BlenderSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
    label = "Submit Render to Deadline"
    hosts = ["blender"]
    families = ["render.farm"]

    use_published = True
    priority = 50
    chunk_size = 1
    jobInfo = {}
    pluginInfo = {}
    group = None

    def get_job_info(self):
        job_info = DeadlineJobInfo(Plugin="Blender")

        job_info.update(self.jobInfo)

        instance = self._instance
        context = instance.context

        # Always use the original work file name for the Job name even when
        # rendering is done from the published Work File. The original work
        # file name is clearer because it can also have subversion strings,
        # etc. which are stripped for the published file.
        src_filepath = context.data["currentFile"]
        src_filename = os.path.basename(src_filepath)

        if is_in_tests():
            src_filename += datetime.now().strftime("%d%m%Y%H%M%S")

        job_info.Name = f"{src_filename} - {instance.name}"
        job_info.BatchName = src_filename
        # Honor a per-instance override of the Deadline plugin name.
        job_info.Plugin = instance.data.get("blenderRenderPlugin", "Blender")
        job_info.UserName = context.data.get("deadlineUser", getpass.getuser())

        # Deadline requires integers in frame range
        frames = "{start}-{end}x{step}".format(
            start=int(instance.data["frameStartHandle"]),
            end=int(instance.data["frameEndHandle"]),
            step=int(instance.data["byFrameStep"]),
        )
        job_info.Frames = frames

        job_info.Pool = instance.data.get("primaryPool")
        job_info.SecondaryPool = instance.data.get("secondaryPool")
        job_info.Comment = context.data.get("comment")
        job_info.Priority = instance.data.get("priority", self.priority)

        if self.group != "none" and self.group:
            job_info.Group = self.group

        attr_values = self.get_attr_values_from_data(instance.data)
        render_globals = instance.data.setdefault("renderGlobals", {})
        machine_list = attr_values.get("machineList", "")
        if machine_list:
            if attr_values.get("whitelist", True):
                machine_list_key = "Whitelist"
            else:
                machine_list_key = "Blacklist"
            render_globals[machine_list_key] = machine_list

        job_info.Priority = attr_values.get("priority")
        job_info.ChunkSize = attr_values.get("chunkSize")

        # Add options from RenderGlobals
        render_globals = instance.data.get("renderGlobals", {})
        job_info.update(render_globals)

        keys = [
            "FTRACK_API_KEY",
            "FTRACK_API_USER",
            "FTRACK_SERVER",
            "OPENPYPE_SG_USER",
            "AVALON_PROJECT",
            "AVALON_ASSET",
            "AVALON_TASK",
            "AVALON_APP_NAME",
            "OPENPYPE_DEV",
            "IS_TEST"
        ]

        # Add OpenPype version if we are running from build.
        if is_running_from_build():
            keys.append("OPENPYPE_VERSION")

        # Add mongo url if it's enabled
        if self._instance.context.data.get("deadlinePassMongoUrl"):
            keys.append("OPENPYPE_MONGO")

        environment = dict({key: os.environ[key] for key in keys
                            if key in os.environ}, **legacy_io.Session)

        for key in keys:
            value = environment.get(key)
            if not value:
                continue
            job_info.EnvironmentKeyValue[key] = value

        # to recognize job from PYPE for turning Event On/Off
        job_info.add_render_job_env_var()
        job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1"

        # Adding file dependencies.
        if self.asset_dependencies:
            dependencies = instance.context.data["fileDependencies"]
            for dependency in dependencies:
                job_info.AssetDependency += dependency

        # Add list of expected files to job
        # ---------------------------------
        exp = instance.data.get("expectedFiles")
        for filepath in iter_expected_files(exp):
            job_info.OutputDirectory += os.path.dirname(filepath)
            job_info.OutputFilename += os.path.basename(filepath)

        return job_info

    def get_plugin_info(self):
        plugin_info = BlenderPluginInfo(
            SceneFile=self.scene_path,
            Version=bpy.app.version_string,
            SaveFile=True,
        )

        plugin_payload = attr.asdict(plugin_info)

        # Patching with pluginInfo from settings
        for key, value in self.pluginInfo.items():
            plugin_payload[key] = value

        return plugin_payload

    def process_submission(self):
        instance = self._instance

        expected_files = instance.data["expectedFiles"]
        if not expected_files:
            raise RuntimeError("No Render Elements found!")

        first_file = next(iter_expected_files(expected_files))
        output_dir = os.path.dirname(first_file)
        instance.data["outputDir"] = output_dir
        instance.data["toBeRenderedOn"] = "deadline"

        payload = self.assemble_payload()
        return self.submit(payload)

    def from_published_scene(self):
        """
        This is needed to set the correct path for the json metadata. Because
        the rendering path is set in the blend file during the collection,
        and the path is adjusted to use the published scene, this ensures that
        the metadata and the rendered files are in the same location.
        """
        return super().from_published_scene(False)
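As a quick illustration of the Deadline frame-list syntax assembled in get_job_info() above (hypothetical values):

# frameStartHandle=1001, frameEndHandle=1100, byFrameStep=1 would yield:
frames = "{start}-{end}x{step}".format(start=1001, end=1100, step=1)
print(frames)  # "1001-1100x1"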
@@ -6,6 +6,7 @@ import requests

 import pyblish.api

+from openpype import AYON_SERVER_ENABLED
 from openpype.pipeline import legacy_io
 from openpype.pipeline.publish import (
     OpenPypePyblishPluginMixin

@@ -34,6 +35,8 @@ class FusionSubmitDeadline(
     targets = ["local"]

+    # presets
+    plugin = None
     priority = 50
     chunk_size = 1
     concurrent_tasks = 1

@@ -173,7 +176,7 @@ class FusionSubmitDeadline(
             "SecondaryPool": instance.data.get("secondaryPool"),
             "Group": self.group,

-            "Plugin": "Fusion",
+            "Plugin": self.plugin,
             "Frames": "{start}-{end}".format(
                 start=int(instance.data["frameStartHandle"]),
                 end=int(instance.data["frameEndHandle"])

@@ -216,16 +219,29 @@ class FusionSubmitDeadline(

         # Include critical variables with submission
         keys = [
-            # TODO: This won't work if the slaves don't have access to
-            # these paths, such as if slaves are running Linux and the
-            # submitter is on Windows.
-            "PYTHONPATH",
-            "OFX_PLUGIN_PATH",
-            "FUSION9_MasterPrefs"
+            "FTRACK_API_KEY",
+            "FTRACK_API_USER",
+            "FTRACK_SERVER",
+            "AVALON_PROJECT",
+            "AVALON_ASSET",
+            "AVALON_TASK",
+            "AVALON_APP_NAME",
+            "OPENPYPE_DEV",
+            "OPENPYPE_LOG_NO_COLORS",
+            "IS_TEST"
         ]
         environment = dict({key: os.environ[key] for key in keys
                             if key in os.environ}, **legacy_io.Session)

+        # to recognize render jobs
+        if AYON_SERVER_ENABLED:
+            environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"]
+            render_job_label = "AYON_RENDER_JOB"
+        else:
+            render_job_label = "OPENPYPE_RENDER_JOB"
+
+        environment[render_job_label] = "1"
+
         payload["JobInfo"].update({
             "EnvironmentKeyValue%d" % index: "{key}={value}".format(
                 key=key,
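A sketch of what the new attribute changes in the submission payload; the value is meant to come from the project settings toggle described in the changelog entry (values illustrative, not part of the commit):

# With the admin toggle set to the headless variant:
plugin = "FusionCmd"  # or "Fusion" (the previous hard-coded value)
payload = {"JobInfo": {"Plugin": plugin}}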
Some files were not shown because too many files have changed in this diff.