Mirror of https://github.com/ynput/ayon-core.git (synced 2026-01-02 08:54:53 +01:00)
Merge branch 'develop' into enhancement/OP-5751_render-multiple-cameras

commit 1c95ecaa6a
264 changed files with 17365 additions and 1732 deletions
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 18 changes)
@@ -35,6 +35,15 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.17.3-nightly.2
+        - 3.17.3-nightly.1
+        - 3.17.2
+        - 3.17.2-nightly.4
+        - 3.17.2-nightly.3
+        - 3.17.2-nightly.2
+        - 3.17.2-nightly.1
+        - 3.17.1
+        - 3.17.1-nightly.3
         - 3.17.1-nightly.2
         - 3.17.1-nightly.1
         - 3.17.0
@@ -126,15 +135,6 @@ body:
         - 3.14.11-nightly.4
         - 3.14.11-nightly.3
         - 3.14.11-nightly.2
-        - 3.14.11-nightly.1
-        - 3.14.10
-        - 3.14.10-nightly.9
-        - 3.14.10-nightly.8
-        - 3.14.10-nightly.7
-        - 3.14.10-nightly.6
-        - 3.14.10-nightly.5
-        - 3.14.10-nightly.4
-        - 3.14.10-nightly.3
     validations:
       required: true
   - type: dropdown
CHANGELOG.md (735 changes)

@@ -1,6 +1,741 @@
# Changelog

## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.1...3.17.2)

### **🆕 New features**

<details>
<summary>Maya: Add MayaPy application. <a href="https://github.com/ynput/OpenPype/pull/5705">#5705</a></summary>

This adds mayapy to the applications that can be launched from a task.

___

</details>

<details>
<summary>Feature: Copy resources when downloading last workfile <a href="https://github.com/ynput/OpenPype/pull/4944">#4944</a></summary>

When the last published workfile is downloaded as a prelaunch hook, all resource files referenced in the workfile representation are copied to the `resources` folder, which is inside the local workfile folder.

___

</details>

<details>
<summary>Blender: Deadline support <a href="https://github.com/ynput/OpenPype/pull/5438">#5438</a></summary>

Add Deadline support for Blender.

___

</details>

<details>
<summary>Fusion: implement toggle to use Deadline plugin FusionCmd <a href="https://github.com/ynput/OpenPype/pull/5678">#5678</a></summary>

Fusion 17 doesn't work in Deadline 10.3, but FusionCmd does, and it is probably the better option as a headless variant. The Fusion plugin seems to close and reopen the application when the worker runs on an artist machine; this doesn't happen with FusionCmd. Added a configuration to Project Settings so an admin can select the appropriate Deadline plugin.

___

</details>

<details>
<summary>Loader tool: Refactor loader tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5729">#5729</a></summary>

Refactored the loader tool into a new tool, separating backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. The tool also replaces the library loader.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Maya: implement matchmove publishing <a href="https://github.com/ynput/OpenPype/pull/5445">#5445</a></summary>

Adds the possibility to export multiple cameras in a single `matchmove` family instance, both as `abc` and `ma`. Exposes a 'Keep image planes' flag to control the export of image planes.

___

</details>

<details>
<summary>Maya: Add optional Fbx extractors in Rig and Animation family <a href="https://github.com/ynput/OpenPype/pull/5589">#5589</a></summary>

This PR lets users optionally export control rigs (optionally with meshes) and animated rigs as FBX by attaching the rig objects to two newly introduced sets.

___

</details>

<details>
<summary>Maya: Optional Resolution Validator for Render <a href="https://github.com/ynput/OpenPype/pull/5693">#5693</a></summary>

Adds an optional resolution validator for the Maya render family, similar to the one in Max. It checks that the resolution in the render settings matches the resolution stored in the database.

___

</details>

<details>
<summary>Use host's node uniqueness for instance id in new publisher <a href="https://github.com/ynput/OpenPype/pull/5490">#5490</a></summary>

Instead of writing `instance_id` as a parm or attribute on publish instances we can, for some hosts, rely on a unique name or path within the scene to refer to a particular instance. This fixes #4820: when such a publish instance is duplicated using the host's (DCC) functionality, the uniqueness of the duplicate is already ensured, instead of the attributes keeping the exact values they were duplicated from and making `instance_id` non-unique.

___

</details>
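As a sketch of the idea (the scene model and helper below are hypothetical, not the actual host integrations): deriving the instance id from the node's scene-unique name means a duplicate made in the DCC automatically carries a new id.

```python
# Hypothetical scene model: maps a scene-unique node path to its publish data.
scene = {
    "/out/renderMain": {"family": "render"},
    "/out/renderMain1": {"family": "render"},  # duplicated inside the DCC
}


def collect_instances(scene):
    """Collect publish instances, using the node path as the instance id."""
    instances = []
    for node_path, node_data in scene.items():
        data = dict(node_data)
        # The scene-unique path *is* the id, so no stored attribute can
        # go stale or be copied along with a duplicate.
        data["instance_id"] = node_path
        instances.append(data)
    return instances


print([i["instance_id"] for i in collect_instances(scene)])
# ['/out/renderMain', '/out/renderMain1']
```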

<details>
<summary>Max: Implementation of OCIO configuration <a href="https://github.com/ynput/OpenPype/pull/5499">#5499</a></summary>

Resolves #5473. Implements OCIO configuration for Max 2024, following the Max 2024 color management update.

___

</details>

<details>
<summary>Nuke: Multiple format supports for ExtractReviewDataMov <a href="https://github.com/ynput/OpenPype/pull/5623">#5623</a></summary>

This PR fixes the `ExtractReviewDataMov` plugin not being able to support extensions other than `mov`. The plugin is also renamed to `ExtractReviewDataBakingStreams`, as it now provides multiple format support.

___

</details>

<details>
<summary>Bugfix: houdini switching context doesn't update variables <a href="https://github.com/ynput/OpenPype/pull/5651">#5651</a></summary>

Allows admins to define a list of vars (e.g. JOB) with (dynamic) values that are updated on context changes, e.g. when switching to another asset or task. Template keys are supported, but capitalization variants of formatting keys are not, e.g. {Asset} and {ASSET} won't work. Disabling the "Update Houdini vars on context change" feature leaves all Houdini vars unmanaged, so no updates happen on context changes. This PR also adds a new menu button to update vars on demand.

___

</details>

<details>
<summary>Publisher: Fix report maker memory leak + optimize lookups using set <a href="https://github.com/ynput/OpenPype/pull/5667">#5667</a></summary>

Fixes a memory leak where resetting the publisher did not clear the plugins stored by the Publish Report Maker. Also changes the stored plugins to a `set` to optimize lookup speed.

___

</details>

<details>
<summary>Add openpype_mongo command flag for testing. <a href="https://github.com/ynput/OpenPype/pull/5676">#5676</a></summary>

Instead of changing the environment, this command flag allows changing the database.

___

</details>

<details>
<summary>Nuke: minor docstring and code tweaks for ExtractReviewMov <a href="https://github.com/ynput/OpenPype/pull/5695">#5695</a></summary>

Code and docstring tweaks on https://github.com/ynput/OpenPype/pull/5623

___

</details>

<details>
<summary>AYON: Small settings fixes <a href="https://github.com/ynput/OpenPype/pull/5699">#5699</a></summary>

Small changes/fixes related to AYON settings. All Foundry apps variant `13-0` have the label `13.0`. The `"ExtractReviewIntermediates"` key is not mandatory in settings.

___

</details>

<details>
<summary>Blender: Alembic Animation loader <a href="https://github.com/ynput/OpenPype/pull/5711">#5711</a></summary>

Implemented loading Alembic animations in Blender.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Missing "data" field and enabling of audio <a href="https://github.com/ynput/OpenPype/pull/5618">#5618</a></summary>

When updating audio containers, the "data" field was missing and the audio node was not enabled on the timeline.

___

</details>

<details>
<summary>Maya: Bug in validate Plug-in Path Attribute <a href="https://github.com/ynput/OpenPype/pull/5687">#5687</a></summary>

Overwriting a list with a string caused `TypeError: string indices must be integers` in subsequent iterations, crashing the validator plugin.

___

</details>

<details>
<summary>General: Avoid fallback if value is 0 for handle start/end <a href="https://github.com/ynput/OpenPype/pull/5652">#5652</a></summary>

There's a bug in `pyblish_functions.get_time_data_from_instance_or_context`: if `handleStart` or `handleEnd` on the instance is set to 0, it falls back to grabbing the handles from the instance context. Instead, the logic should only fall back to `instance.context` if the key doesn't exist. This change only affected `handleStart`/`handleEnd` in practice, and it's unlikely to cause issues with `frameStart`, `frameEnd` or `fps`, but the `get` logic was wrong regardless.

___

</details>
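A minimal sketch of the corrected lookup described above (instance and context data modeled as plain dicts; the real objects are pyblish instances): fall back to the context only when the key is absent, never when the value is merely falsy.

```python
def get_time_value(instance_data, context_data, key):
    # Wrong: `instance_data.get(key) or context_data.get(key)` falls back
    # to the context for a stored value of 0, because 0 is falsy.
    # Correct: fall back only when the key does not exist at all.
    if key in instance_data:
        return instance_data[key]
    return context_data.get(key)


instance_data = {"handleStart": 0}
context_data = {"handleStart": 10}
print(get_time_value(instance_data, context_data, "handleStart"))  # 0, not 10
```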

<details>
<summary>Fusion: added missing env vars to Deadline submission <a href="https://github.com/ynput/OpenPype/pull/5659">#5659</a></summary>

The environment variables discerning the type of job were missing. Without them, the injection of environment variables won't start.

___

</details>

<details>
<summary>Nuke: workfile version synchronization settings fixed <a href="https://github.com/ynput/OpenPype/pull/5662">#5662</a></summary>

The settings for synchronizing the workfile version to published products are fixed.

___

</details>

<details>
<summary>AYON Workfiles Tool: Open workfile changes context <a href="https://github.com/ynput/OpenPype/pull/5671">#5671</a></summary>

Change the context when a workfile is opened.

___

</details>

<details>
<summary>Blender: Fix remove/update in new layout instance <a href="https://github.com/ynput/OpenPype/pull/5679">#5679</a></summary>

Fixes an error that occurs when removing or updating an asset in a new layout instance.

___

</details>

<details>
<summary>AYON Launcher tool: Fix refresh btn <a href="https://github.com/ynput/OpenPype/pull/5685">#5685</a></summary>

The refresh button now propagates refreshed content properly. Folders and tasks are cached for 60 seconds instead of 10 seconds. Auto-refresh in the launcher refreshes only the actions and their related data, which is the project and project settings.

___

</details>

<details>
<summary>Deadline: handle all valid paths in RenderExecutable <a href="https://github.com/ynput/OpenPype/pull/5694">#5694</a></summary>

This commit enhances the path resolution mechanism in the RenderExecutable function of the Ayon plugin. Previously, the function only considered paths starting with a tilde (~), ignoring other valid paths listed in `exe_list`. This limitation led to an empty `expanded_paths` list when none of the paths in `exe_list` started with a tilde, causing the function to fail to find the Ayon executable. With this fix, the RenderExecutable function correctly processes and includes all valid paths from `exe_list`, improving its reliability and preventing unnecessary errors when locating the Ayon executable.

___

</details>
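A sketch of the described fix (simplified; the function name and structure of the actual Deadline plugin code differ): expand the tilde in every candidate path instead of keeping only tilde-prefixed ones. `os.path.expanduser` leaves paths without a leading `~` untouched, so nothing is dropped.

```python
import os


def find_executable(exe_list):
    # Expand "~" where present; absolute paths pass through unchanged
    # instead of being filtered out of the candidate list.
    expanded_paths = [os.path.expanduser(path) for path in exe_list]
    for path in expanded_paths:
        if os.path.isfile(path):
            return path
    return None


# e.g. find_executable(["~/ayon/ayon", "/opt/ayon/ayon"]) now also
# considers the second, tilde-free candidate.
```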

<details>
<summary>AYON Launcher tool: Fix skip last workfile boolean <a href="https://github.com/ynput/OpenPype/pull/5700">#5700</a></summary>

The skip last workfile boolean works as expected.

___

</details>

<details>
<summary>Chore: Explore here action can work without task <a href="https://github.com/ynput/OpenPype/pull/5703">#5703</a></summary>

The Explore here action no longer crashes when no task is selected, and its error message was adjusted slightly.

___

</details>

<details>
<summary>Testing: Inject mongo_url argument earlier <a href="https://github.com/ynput/OpenPype/pull/5706">#5706</a></summary>

Fix for https://github.com/ynput/OpenPype/pull/5676. The Mongo URL is used earlier in the execution.

___

</details>

<details>
<summary>Blender: Add support to auto-install PySide2 in blender 4 <a href="https://github.com/ynput/OpenPype/pull/5723">#5723</a></summary>

Change the version regex to support the Blender 4 subfolder.

___

</details>

<details>
<summary>Fix: Hardcoded main site and wrongly copied workfile <a href="https://github.com/ynput/OpenPype/pull/5733">#5733</a></summary>

Fixing these two issues:
- Hardcoded main site -> replaced by `anatomy.fill_root`.
- Workfiles could sometimes be copied when they shouldn't be.

___

</details>

<details>
<summary>Bugfix: ServerDeleteOperation asset -> folder conversion typo <a href="https://github.com/ynput/OpenPype/pull/5735">#5735</a></summary>

Fix the asset -> folder conversion typo in ServerDeleteOperation.

___

</details>

<details>
<summary>Nuke: loaders are filtering correctly <a href="https://github.com/ynput/OpenPype/pull/5739">#5739</a></summary>

The variable name for filtering by extensions was incorrect (it was supposed to be plural). It is fixed now and filtering works as intended.

___

</details>

<details>
<summary>Nuke: failing multiple thumbnails integration <a href="https://github.com/ynput/OpenPype/pull/5741">#5741</a></summary>

This handles the situation where `ExtractReviewIntermediates` (previously `ExtractReviewDataMov`) has multiple outputs, including thumbnails that need to be integrated. Previously, integrating the thumbnail representation caused an issue in the integration process. That is now resolved by no longer integrating thumbnails as loadable representations. The new default is that thumbnail representations are NOT integrated (they will not show up in the DB and cannot be loaded in the Loader) and no `_thumb.jpg` is left in the (most likely `render`) publish folder. If you need to override this behavior, use `project_settings/global/publish/PreIntegrateThumbnails`.

___

</details>

<details>
<summary>AYON Settings: Fix global overrides <a href="https://github.com/ynput/OpenPype/pull/5745">#5745</a></summary>

The `output` dictionary that gets passed into `ayon_settings._convert_global_project_settings` was replaced when converting the settings for `ExtractOIIOTranscode`. This resulted in `global` not being in the output dictionary, so the defaults were used instead of the project overrides.

___

</details>

<details>
<summary>Chore: AYON query functions arguments <a href="https://github.com/ynput/OpenPype/pull/5752">#5752</a></summary>

Fixed how the `archived` argument is handled in the get subsets/assets functions.

___

</details>

### **🔀 Refactored code**

<details>
<summary>Publisher: Refactor Report Maker plugin data storage to be a dict by plugin.id <a href="https://github.com/ynput/OpenPype/pull/5668">#5668</a></summary>

Refactors the Report Maker plugin data storage to be a dict keyed by `plugin.id`. Also fixes the `_current_plugin_data` type in `__init__`.

___

</details>

<details>
<summary>Chore: Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost <a href="https://github.com/ynput/OpenPype/pull/5701">#5701</a></summary>

Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost.

___

</details>

### **Merged pull requests**

<details>
<summary>Chore: Maya reduce get project settings calls <a href="https://github.com/ynput/OpenPype/pull/5669">#5669</a></summary>

Re-use system settings / project settings where we can instead of re-querying.

___

</details>

<details>
<summary>Extended error message when getting subset name <a href="https://github.com/ynput/OpenPype/pull/5649">#5649</a></summary>

Each Creator uses the `get_subset_name` function, which collects context data and fills the configured template's placeholders. If any key is missing in the template data, a non-descriptive error is thrown. This change provides a more verbose message.

___

</details>
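A sketch of the kind of verbose message this refers to (names are illustrative, not the actual implementation): catch the `KeyError` raised by template formatting and report which key is missing and which keys were available.

```python
def fill_subset_template(template, data):
    try:
        return template.format(**data)
    except KeyError as exc:
        raise KeyError(
            "Subset name template {!r} is missing key {}. "
            "Available keys: {}".format(template, exc, sorted(data))
        ) from exc


try:
    fill_subset_template("{family}{Task}", {"family": "render"})
except KeyError as error:
    print(error)
    # Subset name template '{family}{Task}' is missing key 'Task'.
    # Available keys: ['family']
```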

<details>
<summary>Tests: Remove checks for env var <a href="https://github.com/ynput/OpenPype/pull/5696">#5696</a></summary>

The env var will be filled in the `env_var` fixture; here it is too early to check it.

___

</details>


## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.0...3.17.1)

### **🆕 New features**

<details>
<summary>Unreal: Yeti support <a href="https://github.com/ynput/OpenPype/pull/5643">#5643</a></summary>

Implemented Yeti support for Unreal.

___

</details>

<details>
<summary>Houdini: Add Static Mesh product-type (family) <a href="https://github.com/ynput/OpenPype/pull/5481">#5481</a></summary>

This PR adds support for publishing Unreal Static Meshes in Houdini as FBX. Quick recap:
- [x] Add UE Static Mesh Creator
- [x] Dynamic subset name like in Maya
- [x] Collect Static Mesh Type
- [x] Update collect output node
- [x] Validate FBX output node
- [x] Validate mesh is static
- [x] Validate Unreal Static Mesh Name
- [x] Validate Subset Name
- [x] FBX Extractor
- [x] FBX Loader
- [x] Update OP Settings
- [x] Update AYON Settings

___

</details>

<details>
<summary>Launcher tool: Refactor launcher tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5612">#5612</a></summary>

Refactored the launcher tool into a new tool, separating backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype.

___

</details>

### **🚀 Enhancements**

<details>
<summary>Maya: Use custom staging dir function for Maya renders - OP-5265 <a href="https://github.com/ynput/OpenPype/pull/5186">#5186</a></summary>

Check for a custom staging dir when setting the renders output folder in Maya.

___

</details>

<details>
<summary>Colorspace: updating file path detection methods <a href="https://github.com/ynput/OpenPype/pull/5273">#5273</a></summary>

Support for OCIO v2 file rules, integrated into the available color management API.

___

</details>

<details>
<summary>Chore: add default isort config <a href="https://github.com/ynput/OpenPype/pull/5572">#5572</a></summary>

Add a default configuration for the isort tool.

___

</details>

<details>
<summary>Deadline: set PATH environment in deadline jobs by GlobalJobPreLoad <a href="https://github.com/ynput/OpenPype/pull/5622">#5622</a></summary>

This PR makes `GlobalJobPreLoad` set the `PATH` environment variable in Deadline jobs so that we don't have to use the full executable path for Deadline to launch the DCC app. This trick saves us from adding logic to pass the Houdini patch version and from modifying the Houdini Deadline plugin, and it should work with other DCCs as well.

___

</details>
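A sketch of the mechanism (simplified; `GlobalJobPreLoad` actually works through Deadline's job environment API rather than a plain dict): prepend the DCC executable's folder to `PATH` in the job environment, so the Deadline plugin can launch the app by bare executable name.

```python
import os


def prepend_executable_dir_to_path(job_env, executable_path):
    """Prepend the executable's folder to PATH in a job environment dict."""
    exe_dir = os.path.dirname(executable_path)
    old_path = job_env.get("PATH", "")
    job_env["PATH"] = exe_dir + os.pathsep + old_path if old_path else exe_dir
    return job_env


# Hypothetical Houdini install path used purely for illustration.
env = prepend_executable_dir_to_path(
    {"PATH": "/usr/bin"}, "/opt/hfs19.5.716/bin/houdini")
print(env["PATH"])  # /opt/hfs19.5.716/bin:/usr/bin
```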

<details>
<summary>nuke: extract review data mov read node with expression <a href="https://github.com/ynput/OpenPype/pull/5635">#5635</a></summary>

Some productions might have default values set for Read nodes; those settings no longer collide.

___

</details>

### **🐛 Bug fixes**

<details>
<summary>Maya: Support new publisher for colorsets validation. <a href="https://github.com/ynput/OpenPype/pull/5630">#5630</a></summary>

Fix `validate_color_sets` for the new publisher. In current `develop` the repair option does not appear due to wrong error raising.

___

</details>

<details>
<summary>Houdini: Camera Loader fix mismatch for Maya cameras <a href="https://github.com/ynput/OpenPype/pull/5584">#5584</a></summary>

This PR adds:
- A workaround to match the Maya render mask in Houdini
- A `SetCameraResolution` inventory action
- Setting the camera resolution when loading or updating a camera

___

</details>

<details>
<summary>Nuke: fix set colorspace on writes <a href="https://github.com/ynput/OpenPype/pull/5634">#5634</a></summary>

Colorspace is set correctly on any write node created from the publisher.

___

</details>

<details>
<summary>TVPaint: Fix review family extraction <a href="https://github.com/ynput/OpenPype/pull/5637">#5637</a></summary>

The extractor marks the representation of a review instance with the review tag.

___

</details>

<details>
<summary>AYON settings: Extract OIIO transcode settings <a href="https://github.com/ynput/OpenPype/pull/5639">#5639</a></summary>

Output definitions of Extract OIIO transcode have names matching the OpenPype settings, and the settings are converted to a dictionary during settings conversion.

___

</details>

<details>
<summary>AYON: Fix task type short name conversion <a href="https://github.com/ynput/OpenPype/pull/5641">#5641</a></summary>

Convert the AYON task type short name for OpenPype correctly.

___

</details>

<details>
<summary>colorspace: missing `allowed_exts` fix <a href="https://github.com/ynput/OpenPype/pull/5646">#5646</a></summary>

The colorspace module no longer fails due to the missing `allowed_exts` attribute.

___

</details>

<details>
<summary>Photoshop: remove trailing underscore in subset name <a href="https://github.com/ynput/OpenPype/pull/5647">#5647</a></summary>

If the {layer} placeholder is at the end of the subset name template and not used (for example in `auto_image`, where separating by layer doesn't make sense), a trailing '_' was kept. This updates the cleaning logic and extracts it, as the regular `image` instance might behave similarly.

___

</details>
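A sketch of the cleanup logic (illustrative only, not the plugin's actual code): when the {layer} placeholder formats to an empty string, strip the separator that would otherwise trail the subset name.

```python
def build_subset_name(template, **data):
    name = template.format(**data)
    # An unused {Layer} at the end of the template would leave a dangling
    # separator; strip it.
    return name.rstrip("_")


print(build_subset_name("image{Task}_{Layer}", Task="comp", Layer="beauty"))
# imagecomp_beauty
print(build_subset_name("image{Task}_{Layer}", Task="comp", Layer=""))
# imagecomp  (instead of "imagecomp_")
```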

<details>
<summary>traypublisher: missing `assetEntity` in context data <a href="https://github.com/ynput/OpenPype/pull/5648">#5648</a></summary>

The issue with the missing `assetEntity` key in context data is no longer a problem.

___

</details>

<details>
<summary>AYON: Workfiles tool save button works <a href="https://github.com/ynput/OpenPype/pull/5653">#5653</a></summary>

Fix the Save As button in the workfiles tool. (It is a mystery why this stopped working.)

___

</details>

<details>
<summary>Max: bug fix delete items from container <a href="https://github.com/ynput/OpenPype/pull/5658">#5658</a></summary>

Fix the bug shown when clicking "Delete Items from Container", selecting nothing, and pressing OK.

___

</details>

### **🔀 Refactored code**

<details>
<summary>Chore: Remove unused functions from Fusion integration <a href="https://github.com/ynput/OpenPype/pull/5617">#5617</a></summary>

Clean up unused code from the Fusion integration.

___

</details>

### **Merged pull requests**

<details>
<summary>Increase timeout for deadline test <a href="https://github.com/ynput/OpenPype/pull/5654">#5654</a></summary>

Deadline picks up jobs quite slowly, so bump up the delay.

___

</details>


## [3.17.0](https://github.com/ynput/OpenPype/tree/3.17.0)
@@ -290,11 +290,15 @@ def run(script):
               "--setup_only",
               help="Only create dbs, do not run tests",
               default=None)
+@click.option("--mongo_url",
+              help="MongoDB for testing.",
+              default=None)
 def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant,
-             timeout, setup_only):
+             timeout, setup_only, mongo_url):
     """Run all automatic tests after proper initialization via start.py"""
     PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
-                             persist, app_variant, timeout, setup_only)
+                             persist, app_variant, timeout, setup_only,
+                             mongo_url)


 @main.command(help="DEPRECATED - run sync server")
@@ -75,9 +75,9 @@ def _get_subsets(
 ):
         fields.add(key)

-    active = None
+    active = True
     if archived:
-        active = False
+        active = None

     for subset in con.get_products(
         project_name,
@@ -196,7 +196,7 @@ def get_assets(

     active = True
     if archived:
-        active = False
+        active = None

     con = get_server_api_connection()
     fields = folder_fields_v3_to_v4(fields, con)
@@ -422,7 +422,7 @@ def failed_json_default(value):


 class ServerCreateOperation(CreateOperation):
-    """Opeartion to create an entity.
+    """Operation to create an entity.

     Args:
         project_name (str): On which project operation will happen.
@@ -634,7 +634,7 @@ class ServerUpdateOperation(UpdateOperation):


 class ServerDeleteOperation(DeleteOperation):
-    """Opeartion to delete an entity.
+    """Operation to delete an entity.

     Args:
         project_name (str): On which project operation will happen.
@@ -647,7 +647,7 @@ class ServerDeleteOperation(DeleteOperation):
         self._session = session

         if entity_type == "asset":
-            entity_type == "folder"
+            entity_type = "folder"

         elif entity_type == "hero_version":
             entity_type = "version"
@@ -2,7 +2,7 @@ import subprocess
 from openpype.lib.applications import PreLaunchHook, LaunchTypes


-class LaunchFoundryAppsWindows(PreLaunchHook):
+class LaunchNewConsoleApps(PreLaunchHook):
     """Foundry applications have specific way how to launch them.

     Nuke is executed "like" python process so it is required to pass

@@ -13,13 +13,15 @@ class LaunchFoundryAppsWindows(PreLaunchHook):

     # Should be as last hook because must change launch arguments to string
     order = 1000
-    app_groups = {"nuke", "nukeassist", "nukex", "hiero", "nukestudio"}
+    app_groups = {
+        "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "mayapy"
+    }
     platforms = {"windows"}
     launch_types = {LaunchTypes.local}

     def execute(self):
         # Change `creationflags` to CREATE_NEW_CONSOLE
-        # - on Windows nuke will create new window using its console
+        # - on Windows some apps will create new window using its console
         # Set `stdout` and `stderr` to None so new created console does not
         # have redirected output to DEVNULL in build
         self.launch_context.kwargs.update({
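A standalone sketch of what this hook effectively configures on Windows (the executable name is illustrative): launch the process with its own console window, leaving stdout/stderr unredirected.

```python
import subprocess
import sys

if sys.platform == "win32":
    # CREATE_NEW_CONSOLE gives the app its own console window; keeping
    # stdout/stderr as None means the new console is not redirected
    # to DEVNULL.
    subprocess.Popen(
        ["Nuke14.0.exe"],
        creationflags=subprocess.CREATE_NEW_CONSOLE,
        stdout=None,
        stderr=None,
    )
```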

@@ -13,7 +13,7 @@ class OCIOEnvHook(PreLaunchHook):
         "fusion",
         "blender",
         "aftereffects",
-        "max",
+        "3dsmax",
         "houdini",
         "maya",
         "nuke",
@ -38,6 +38,8 @@ from .lib import (
|
|||
|
||||
from .capture import capture
|
||||
|
||||
from .render_lib import prepare_rendering
|
||||
|
||||
|
||||
__all__ = [
|
||||
"install",
|
||||
|
|
@ -66,4 +68,5 @@ __all__ = [
|
|||
"get_selection",
|
||||
"capture",
|
||||
# "unique_name",
|
||||
"prepare_rendering",
|
||||
]
|
||||
|
|
|
|||
openpype/hosts/blender/api/colorspace.py (new file, 51 lines)

@@ -0,0 +1,51 @@
import attr

import bpy


@attr.s
class LayerMetadata(object):
    """Data class for Render Layer metadata."""
    frameStart = attr.ib()
    frameEnd = attr.ib()


@attr.s
class RenderProduct(object):
    """Getting colorspace as a specific render product parameter for
    submitting the publish job.
    """
    colorspace = attr.ib()  # colorspace
    view = attr.ib()  # OCIO view transform
    productName = attr.ib(default=None)


class ARenderProduct(object):
    def __init__(self):
        """Constructor."""
        # Initialize
        self.layer_data = self._get_layer_data()
        self.layer_data.products = self.get_render_products()

    def _get_layer_data(self):
        scene = bpy.context.scene

        return LayerMetadata(
            frameStart=int(scene.frame_start),
            frameEnd=int(scene.frame_end),
        )

    def get_render_products(self):
        """To be implemented by renderer class.

        This should return a list of RenderProducts.

        Returns:
            list: List of RenderProduct
        """
        return [
            RenderProduct(
                colorspace="sRGB",
                view="ACES 1.0",
                productName=""
            )
        ]
@@ -16,6 +16,7 @@ import bpy
 import bpy.utils.previews

 from openpype import style
+from openpype import AYON_SERVER_ENABLED
 from openpype.pipeline import get_current_asset_name, get_current_task_name
 from openpype.tools.utils import host_tools

@@ -331,10 +332,11 @@ class LaunchWorkFiles(LaunchQtApp):

     def execute(self, context):
         result = super().execute(context)
-        self._window.set_context({
-            "asset": get_current_asset_name(),
-            "task": get_current_task_name()
-        })
+        if not AYON_SERVER_ENABLED:
+            self._window.set_context({
+                "asset": get_current_asset_name(),
+                "task": get_current_task_name()
+            })
         return result

     def before_window_show(self):
@@ -460,36 +460,6 @@ def ls() -> Iterator:
         yield parse_container(container)


-def update_hierarchy(containers):
-    """Hierarchical container support
-
-    This is the function to support Scene Inventory to draw hierarchical
-    view for containers.
-
-    We need both parent and children to visualize the graph.
-
-    """
-
-    all_containers = set(ls())  # lookup set
-
-    for container in containers:
-        # Find parent
-        # FIXME (jasperge): re-evaluate this. How would it be possible
-        # to 'nest' assets? Collections can have several parents, for
-        # now assume it has only 1 parent
-        parent = [
-            coll for coll in bpy.data.collections if container in coll.children
-        ]
-        for node in parent:
-            if node in all_containers:
-                container["parent"] = node
-                break
-
-        log.debug("Container: %s", container)
-
-        yield container
-
-
 def publish():
     """Shorthand to publish from within host."""
openpype/hosts/blender/api/render_lib.py (new file, 255 lines)

@@ -0,0 +1,255 @@
import os

import bpy

from openpype.settings import get_project_settings
from openpype.pipeline import get_current_project_name


def get_default_render_folder(settings):
    """Get default render folder from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["default_render_image_folder"])


def get_aov_separator(settings):
    """Get aov separator from blender settings."""

    aov_sep = (settings["blender"]
                       ["RenderSettings"]
                       ["aov_separator"])

    if aov_sep == "dash":
        return "-"
    elif aov_sep == "underscore":
        return "_"
    elif aov_sep == "dot":
        return "."
    else:
        raise ValueError(f"Invalid aov separator: {aov_sep}")


def get_image_format(settings):
    """Get image format from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["image_format"])


def get_multilayer(settings):
    """Get multilayer from blender settings."""

    return (settings["blender"]
                    ["RenderSettings"]
                    ["multilayer_exr"])


def get_render_product(output_path, name, aov_sep):
    """Generate the path to the render product. Blender interprets the `#`
    as the frame number, when it renders.

    Args:
        output_path (str): The folder the render will be output to.
        name (str): The name of the render instance.
        aov_sep (str): The AOV separator configured in settings.
    """
    filepath = os.path.join(output_path, name)
    render_product = f"{filepath}{aov_sep}beauty.####"
    render_product = render_product.replace("\\", "/")

    return render_product


def set_render_format(ext, multilayer):
    # Set Blender to save the file with the right extension
    bpy.context.scene.render.use_file_extension = True

    image_settings = bpy.context.scene.render.image_settings

    if ext == "exr":
        image_settings.file_format = (
            "OPEN_EXR_MULTILAYER" if multilayer else "OPEN_EXR")
    elif ext == "bmp":
        image_settings.file_format = "BMP"
    elif ext == "rgb":
        image_settings.file_format = "IRIS"
    elif ext == "png":
        image_settings.file_format = "PNG"
    elif ext == "jpeg":
        image_settings.file_format = "JPEG"
    elif ext == "jp2":
        image_settings.file_format = "JPEG2000"
    elif ext == "tga":
        image_settings.file_format = "TARGA"
    elif ext == "tif":
        image_settings.file_format = "TIFF"


def set_render_passes(settings):
    aov_list = (settings["blender"]
                        ["RenderSettings"]
                        ["aov_list"])

    custom_passes = (settings["blender"]
                             ["RenderSettings"]
                             ["custom_passes"])

    vl = bpy.context.view_layer

    vl.use_pass_combined = "combined" in aov_list
    vl.use_pass_z = "z" in aov_list
    vl.use_pass_mist = "mist" in aov_list
    vl.use_pass_normal = "normal" in aov_list
    vl.use_pass_diffuse_direct = "diffuse_light" in aov_list
    vl.use_pass_diffuse_color = "diffuse_color" in aov_list
    vl.use_pass_glossy_direct = "specular_light" in aov_list
    vl.use_pass_glossy_color = "specular_color" in aov_list
    vl.eevee.use_pass_volume_direct = "volume_light" in aov_list
    vl.use_pass_emit = "emission" in aov_list
    vl.use_pass_environment = "environment" in aov_list
    vl.use_pass_shadow = "shadow" in aov_list
    vl.use_pass_ambient_occlusion = "ao" in aov_list

    cycles = vl.cycles

    cycles.denoising_store_passes = "denoising" in aov_list
    cycles.use_pass_volume_direct = "volume_direct" in aov_list
    cycles.use_pass_volume_indirect = "volume_indirect" in aov_list

    aovs_names = [aov.name for aov in vl.aovs]
    for cp in custom_passes:
        cp_name = cp[0]
        if cp_name not in aovs_names:
            aov = vl.aovs.add()
            aov.name = cp_name
        else:
            aov = vl.aovs[cp_name]
        aov.type = cp[1].get("type", "VALUE")

    return aov_list, custom_passes


def set_node_tree(output_path, name, aov_sep, ext, multilayer):
    # Set the scene to use the compositor node tree to render
    bpy.context.scene.use_nodes = True

    tree = bpy.context.scene.node_tree

    # Get the Render Layers node
    rl_node = None
    for node in tree.nodes:
        if node.bl_idname == "CompositorNodeRLayers":
            rl_node = node
            break

    # If there's no Render Layers node, we create it
    if not rl_node:
        rl_node = tree.nodes.new("CompositorNodeRLayers")

    # Get the enabled output sockets, which are the active passes for the
    # render. We also exclude some layers.
    exclude_sockets = ["Image", "Alpha", "Noisy Image"]
    passes = [
        socket
        for socket in rl_node.outputs
        if socket.enabled and socket.name not in exclude_sockets
    ]

    # Remove all output nodes
    for node in tree.nodes:
        if node.bl_idname == "CompositorNodeOutputFile":
            tree.nodes.remove(node)

    # Create a new output node
    output = tree.nodes.new("CompositorNodeOutputFile")

    image_settings = bpy.context.scene.render.image_settings
    output.format.file_format = image_settings.file_format

    # In case of a multilayer exr, we don't need to use the output node,
    # because the blender render already outputs a multilayer exr.
    if ext == "exr" and multilayer:
        output.layer_slots.clear()
        return []

    output.file_slots.clear()
    output.base_path = output_path

    aov_file_products = []

    # For each active render pass, we add a new socket to the output node
    # and link it
    for render_pass in passes:
        filepath = f"{name}{aov_sep}{render_pass.name}.####"

        output.file_slots.new(filepath)

        aov_file_products.append(
            (render_pass.name, os.path.join(output_path, filepath)))

        node_input = output.inputs[-1]

        tree.links.new(render_pass, node_input)

    return aov_file_products


def imprint_render_settings(node, data):
    RENDER_DATA = "render_data"
    if not node.get(RENDER_DATA):
        node[RENDER_DATA] = {}
    for key, value in data.items():
        if value is None:
            continue
        node[RENDER_DATA][key] = value


def prepare_rendering(asset_group):
    name = asset_group.name

    filepath = bpy.data.filepath
    assert filepath, "Workfile not saved. Please save the file first."

    file_path = os.path.dirname(filepath)
    file_name = os.path.basename(filepath)
    file_name, _ = os.path.splitext(file_name)

    project = get_current_project_name()
    settings = get_project_settings(project)

    render_folder = get_default_render_folder(settings)
    aov_sep = get_aov_separator(settings)
    ext = get_image_format(settings)
    multilayer = get_multilayer(settings)

    set_render_format(ext, multilayer)
    aov_list, custom_passes = set_render_passes(settings)

    output_path = os.path.join(file_path, render_folder, file_name)

    render_product = get_render_product(output_path, name, aov_sep)
    aov_file_product = set_node_tree(
        output_path, name, aov_sep, ext, multilayer)

    bpy.context.scene.render.filepath = render_product

    render_settings = {
        "render_folder": render_folder,
        "aov_separator": aov_sep,
        "image_format": ext,
        "multilayer_exr": multilayer,
        "aov_list": aov_list,
        "custom_passes": custom_passes,
        "render_product": render_product,
        "aov_file_product": aov_file_product,
        "review": True,
    }

    imprint_render_settings(asset_group, render_settings)
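A quick illustration of the beauty product path produced above (the paths are hypothetical): `prepare_rendering` joins the workfile folder, the configured render folder and the scene name into `output_path`, and `get_render_product` appends the instance name, the AOV separator and the `####` frame token.

```python
from openpype.hosts.blender.api.render_lib import get_render_product

# "####" is replaced with the zero-padded frame number by Blender at
# render time.
product = get_render_product(
    "/proj/work/sh010/renders/sh010_lighting_v001",  # output_path
    "renderingMain",                                 # instance name
    "_",                                             # AOV separator
)
print(product)
# /proj/work/sh010/renders/sh010_lighting_v001/renderingMain_beauty.####
```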

@@ -31,7 +31,7 @@ class InstallPySideToBlender(PreLaunchHook):

     def inner_execute(self):
         # Get blender's python directory
-        version_regex = re.compile(r"^[2-3]\.[0-9]+$")
+        version_regex = re.compile(r"^[2-4]\.[0-9]+$")

         platform = system().lower()
         executable = self.launch_context.executable.executable_path
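The widened character class now also matches the version-numbered folder that Blender 4 ships next to its executable. A quick check (folder names are illustrative):

```python
import re

version_regex = re.compile(r"^[2-4]\.[0-9]+$")

for folder in ("2.93", "3.6", "4.0", "5.0", "blender"):
    print(folder, bool(version_regex.match(folder)))
# 2.93 True / 3.6 True / 4.0 True / 5.0 False / blender False
```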

openpype/hosts/blender/plugins/create/create_render.py (new file, 53 lines)

@@ -0,0 +1,53 @@
"""Create render."""
import bpy

from openpype.pipeline import get_current_task_name
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.render_lib import prepare_rendering
from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES


class CreateRenderlayer(plugin.Creator):
    """Single baked camera"""

    name = "renderingMain"
    label = "Render"
    family = "render"
    icon = "eye"

    def process(self):
        # Get Instance Container or create it if it does not exist
        instances = bpy.data.collections.get(AVALON_INSTANCES)
        if not instances:
            instances = bpy.data.collections.new(name=AVALON_INSTANCES)
            bpy.context.scene.collection.children.link(instances)

        # Create instance object
        asset = self.data["asset"]
        subset = self.data["subset"]
        name = plugin.asset_name(asset, subset)
        asset_group = bpy.data.collections.new(name=name)

        try:
            instances.children.link(asset_group)
            self.data['task'] = get_current_task_name()
            lib.imprint(asset_group, self.data)

            prepare_rendering(asset_group)
        except Exception:
            # Remove the instance if there was an error
            bpy.data.collections.remove(asset_group)
            raise

        # TODO: this is undesirable, but it's the only way to be sure that
        # the file is saved before the render starts.
        # Blender, by design, doesn't set the file as dirty if modifications
        # happen by script. So, when creating the instance and setting the
        # render settings, the file is not marked as dirty. This means that
        # there is the risk of sending to deadline a file without the right
        # settings. Even the validator to check that the file is saved will
        # detect the file as saved, even if it isn't. The only solution for
        # now is to force the file to be saved.
        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)

        return asset_group
@@ -26,8 +26,7 @@ class CacheModelLoader(plugin.AssetLoader):
     Note:
         At least for now it only supports Alembic files.
     """
-
-    families = ["model", "pointcache"]
+    families = ["model", "pointcache", "animation"]
     representations = ["abc"]

     label = "Load Alembic"

@@ -53,16 +52,12 @@ class CacheModelLoader(plugin.AssetLoader):
     def _process(self, libpath, asset_group, group_name):
         plugin.deselect_all()

-        collection = bpy.context.view_layer.active_layer_collection.collection
-
         relative = bpy.context.preferences.filepaths.use_relative_paths
         bpy.ops.wm.alembic_import(
             filepath=libpath,
             relative_path=relative
         )

-        parent = bpy.context.scene.collection
-
         imported = lib.get_selection()

         # Children must be linked before parents,

@@ -79,6 +74,10 @@ class CacheModelLoader(plugin.AssetLoader):
         objects.reverse()

         for obj in objects:
+            # Unlink the object from all collections
+            collections = obj.users_collection
+            for collection in collections:
+                collection.objects.unlink(obj)
             name = obj.name
             obj.name = f"{group_name}:{name}"
             if obj.type != 'EMPTY':

@@ -90,7 +89,7 @@ class CacheModelLoader(plugin.AssetLoader):
                 material_slot.material.name = f"{group_name}:{name_mat}"

             if not obj.get(AVALON_PROPERTY):
-                obj[AVALON_PROPERTY] = dict()
+                obj[AVALON_PROPERTY] = {}

             avalon_info = obj[AVALON_PROPERTY]
             avalon_info.update({"container_name": group_name})

@@ -99,6 +98,18 @@ class CacheModelLoader(plugin.AssetLoader):

         return objects

+    def _link_objects(self, objects, collection, containers, asset_group):
+        # Link the imported objects to any collection where the asset group is
+        # linked to, except the AVALON_CONTAINERS collection
+        group_collections = [
+            collection
+            for collection in asset_group.users_collection
+            if collection != containers]
+
+        for obj in objects:
+            for collection in group_collections:
+                collection.objects.link(obj)
+
     def process_asset(
         self, context: dict, name: str, namespace: Optional[str] = None,
         options: Optional[Dict] = None

@@ -120,18 +131,21 @@ class CacheModelLoader(plugin.AssetLoader):
         group_name = plugin.asset_name(asset, subset, unique_number)
         namespace = namespace or f"{asset}_{unique_number}"

-        avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS)
-        if not avalon_containers:
-            avalon_containers = bpy.data.collections.new(
-                name=AVALON_CONTAINERS)
-            bpy.context.scene.collection.children.link(avalon_containers)
+        containers = bpy.data.collections.get(AVALON_CONTAINERS)
+        if not containers:
+            containers = bpy.data.collections.new(name=AVALON_CONTAINERS)
+            bpy.context.scene.collection.children.link(containers)

         asset_group = bpy.data.objects.new(group_name, object_data=None)
-        avalon_containers.objects.link(asset_group)
+        containers.objects.link(asset_group)

         objects = self._process(libpath, asset_group, group_name)

-        bpy.context.scene.collection.objects.link(asset_group)
+        # Link the asset group to the active collection
+        collection = bpy.context.view_layer.active_layer_collection.collection
+        collection.objects.link(asset_group)
+
+        self._link_objects(objects, asset_group, containers, asset_group)

         asset_group[AVALON_PROPERTY] = {
             "schema": "openpype:container-2.0",

@@ -207,7 +221,11 @@ class CacheModelLoader(plugin.AssetLoader):
             mat = asset_group.matrix_basis.copy()
             self._remove(asset_group)

-            self._process(str(libpath), asset_group, object_name)
+            objects = self._process(str(libpath), asset_group, object_name)
+
+            containers = bpy.data.collections.get(AVALON_CONTAINERS)
+            self._link_objects(objects, asset_group, containers, asset_group)

             asset_group.matrix_basis = mat

             metadata["libpath"] = str(libpath)
@@ -244,7 +244,7 @@ class BlendLoader(plugin.AssetLoader):
         for parent in parent_containers:
             parent.get(AVALON_PROPERTY)["members"] = list(filter(
                 lambda i: i not in members,
-                parent.get(AVALON_PROPERTY)["members"]))
+                parent.get(AVALON_PROPERTY).get("members", [])))

         for attr in attrs:
             for data in getattr(bpy.data, attr):
openpype/hosts/blender/plugins/publish/collect_render.py (new file, 123 lines)

@@ -0,0 +1,123 @@
# -*- coding: utf-8 -*-
"""Collect render data."""

import os
import re

import bpy

from openpype.hosts.blender.api import colorspace
import pyblish.api


class CollectBlenderRender(pyblish.api.InstancePlugin):
    """Gather all publishable render layers from renderSetup."""

    order = pyblish.api.CollectorOrder + 0.01
    hosts = ["blender"]
    families = ["render"]
    label = "Collect Render Layers"
    sync_workfile_version = False

    @staticmethod
    def generate_expected_beauty(
        render_product, frame_start, frame_end, frame_step, ext
    ):
        """Generate the expected files for the beauty render product.
        This returns a list of files that should be rendered. It replaces
        the sequence of `#` with the frame number.
        """
        path = os.path.dirname(render_product)
        file = os.path.basename(render_product)

        expected_files = []

        for frame in range(frame_start, frame_end + 1, frame_step):
            frame_str = str(frame).rjust(4, "0")
            filename = re.sub("#+", frame_str, file)
            expected_file = f"{os.path.join(path, filename)}.{ext}"
            expected_files.append(expected_file.replace("\\", "/"))

        return {
            "beauty": expected_files
        }

    @staticmethod
    def generate_expected_aovs(
        aov_file_product, frame_start, frame_end, frame_step, ext
    ):
        """Generate the expected files for the AOV render products.
        This returns a dict of lists of files that should be rendered. It
        replaces the sequence of `#` with the frame number.
        """
        expected_files = {}

        for aov_name, aov_file in aov_file_product:
            path = os.path.dirname(aov_file)
            file = os.path.basename(aov_file)

            aov_files = []

            for frame in range(frame_start, frame_end + 1, frame_step):
                frame_str = str(frame).rjust(4, "0")
                filename = re.sub("#+", frame_str, file)
                expected_file = f"{os.path.join(path, filename)}.{ext}"
                aov_files.append(expected_file.replace("\\", "/"))

            expected_files[aov_name] = aov_files

        return expected_files

    def process(self, instance):
        context = instance.context

        render_data = bpy.data.collections[str(instance)].get("render_data")

        assert render_data, "No render data found."

        self.log.info(f"render_data: {dict(render_data)}")

        render_product = render_data.get("render_product")
        aov_file_product = render_data.get("aov_file_product")
        ext = render_data.get("image_format")
        multilayer = render_data.get("multilayer_exr")

        frame_start = context.data["frameStart"]
        frame_end = context.data["frameEnd"]
        frame_handle_start = context.data["frameStartHandle"]
        frame_handle_end = context.data["frameEndHandle"]

        expected_beauty = self.generate_expected_beauty(
            render_product, int(frame_start), int(frame_end),
            int(bpy.context.scene.frame_step), ext)

        expected_aovs = self.generate_expected_aovs(
            aov_file_product, int(frame_start), int(frame_end),
            int(bpy.context.scene.frame_step), ext)

        expected_files = expected_beauty | expected_aovs

        instance.data.update({
            "family": "render.farm",
            "frameStart": frame_start,
            "frameEnd": frame_end,
            "frameStartHandle": frame_handle_start,
            "frameEndHandle": frame_handle_end,
            "fps": context.data["fps"],
            "byFrameStep": bpy.context.scene.frame_step,
            "review": render_data.get("review", False),
            "multipartExr": ext == "exr" and multilayer,
            "farm": True,
            "expectedFiles": [expected_files],
            # OCIO not currently implemented in Blender, but the following
            # settings are required by the schema, so they are hardcoded.
            # TODO: Implement OCIO in Blender
            "colorspaceConfig": "",
            "colorspaceDisplay": "sRGB",
            "colorspaceView": "ACES 1.0 SDR-video",
            "renderProducts": colorspace.ARenderProduct(),
        })

        self.log.info(f"data: {instance.data}")
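A quick illustration of the frame substitution performed by `generate_expected_beauty` above (the product path is hypothetical): each run of `#` is replaced with the zero-padded frame number and the extension is appended.

```python
import re

render_product = "/renders/sh010/renderingMain_beauty.####"
ext = "exr"

for frame in range(1001, 1004):
    frame_str = str(frame).rjust(4, "0")
    filename = re.sub("#+", frame_str, render_product)
    print(f"{filename}.{ext}")
# /renders/sh010/renderingMain_beauty.1001.exr
# /renders/sh010/renderingMain_beauty.1002.exr
# /renders/sh010/renderingMain_beauty.1003.exr
```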

@@ -9,7 +9,8 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
     label = "Increment Workfile Version"
     optional = True
     hosts = ["blender"]
-    families = ["animation", "model", "rig", "action", "layout", "blendScene"]
+    families = ["animation", "model", "rig", "action", "layout", "blendScene",
+                "render"]

     def process(self, context):
@@ -0,0 +1,47 @@
import os

import bpy

import pyblish.api
from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from openpype.hosts.blender.api.render_lib import prepare_rendering


class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
                              OptionalPyblishPluginMixin):
    """Validate that the render file directory is
    not the same in every submission.
    """

    order = ValidateContentsOrder
    families = ["render.farm"]
    hosts = ["blender"]
    label = "Validate Render Output for Deadline"
    optional = True
    actions = [RepairAction]

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        filepath = bpy.data.filepath
        file = os.path.basename(filepath)
        filename, ext = os.path.splitext(file)
        if filename not in bpy.context.scene.render.filepath:
            raise PublishValidationError(
                "Render output folder "
                "doesn't match the blender scene name! "
                "Use the Repair action to "
                "fix the folder file path."
            )

    @classmethod
    def repair(cls, instance):
        container = bpy.data.collections[str(instance)]
        prepare_rendering(container)
        bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath)
        cls.log.debug("Reset the render output folder...")
@@ -0,0 +1,20 @@
import bpy

import pyblish.api


class ValidateFileSaved(pyblish.api.InstancePlugin):
    """Validate that the workfile has been saved."""

    order = pyblish.api.ValidatorOrder - 0.01
    hosts = ["blender"]
    label = "Validate File Saved"
    optional = False
    exclude_families = []

    def process(self, instance):
        if [ef for ef in self.exclude_families
                if instance.data["family"] in ef]:
            return
        if bpy.data.is_dirty:
            raise RuntimeError("Workfile is not saved.")
@@ -0,0 +1,17 @@
import bpy

import pyblish.api


class ValidateRenderCameraIsSet(pyblish.api.InstancePlugin):
    """Validate that there is a camera set as active for rendering."""

    order = pyblish.api.ValidatorOrder
    hosts = ["blender"]
    families = ["render"]
    label = "Validate Render Camera Is Set"
    optional = False

    def process(self, instance):
        if not bpy.context.scene.camera:
            raise RuntimeError("No camera is active for rendering.")
@ -123,6 +123,9 @@ class CreateSaver(NewCreator):
|
|||
def _imprint(self, tool, data):
|
||||
# Save all data in a "openpype.{key}" = value data
|
||||
|
||||
# Instance id is the tool's name so we don't need to imprint as data
|
||||
data.pop("instance_id", None)
|
||||
|
||||
active = data.pop("active", None)
|
||||
if active is not None:
|
||||
# Use active value to set the passthrough state
|
||||
|
|
@ -162,7 +165,8 @@ class CreateSaver(NewCreator):
|
|||
filepath = self.temp_rendering_path_template.format(
|
||||
**formatting_data)
|
||||
|
||||
tool["Clip"] = os.path.normpath(filepath)
|
||||
comp = get_current_comp()
|
||||
tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
|
||||
|
||||
# Rename tool
|
||||
if tool.Name != subset:
|
||||
|
|
@ -188,6 +192,10 @@ class CreateSaver(NewCreator):
|
|||
passthrough = attrs["TOOLB_PassThrough"]
|
||||
data["active"] = not passthrough
|
||||
|
||||
# Override publisher's UUID generation because tool names are
|
||||
# already unique in Fusion in a comp
|
||||
data["instance_id"] = tool.Name
|
||||
|
||||
return data
|
||||
|
||||
def get_pre_create_attr_defs(self):
|
||||
|
|
|
|||
|
|
@@ -161,7 +161,7 @@ class FusionLoadSequence(load.LoaderPlugin):
         with comp_lock_and_undo_chunk(comp, "Create Loader"):
             args = (-32768, -32768)
             tool = comp.AddTool("Loader", *args)
-            tool["Clip"] = path
+            tool["Clip"] = comp.ReverseMapPath(path)

             # Set global in point to start frame (if in version.data)
             start = self._get_start(context["version"], tool)

@@ -244,7 +244,7 @@ class FusionLoadSequence(load.LoaderPlugin):
                 "TimeCodeOffset",
             ),
         ):
-            tool["Clip"] = path
+            tool["Clip"] = comp.ReverseMapPath(path)

             # Set the global in to the start frame of the sequence
             global_in_changed = loader_shift(tool, start, relative=False)

@@ -145,9 +145,11 @@ class CollectFusionRender(
         start = render_instance.frameStart - render_instance.handleStart
         end = render_instance.frameEnd + render_instance.handleEnd

-        path = (
-            render_instance.tool["Clip"]
-            [render_instance.workfileComp.TIME_UNDEFINED]
+        comp = render_instance.workfileComp
+        path = comp.MapPath(
+            render_instance.tool["Clip"][
+                render_instance.workfileComp.TIME_UNDEFINED
+            ]
         )
         output_dir = os.path.dirname(path)
         render_instance.outputDir = output_dir

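All three Fusion changes above revolve around the comp's path mapping: `ReverseMapPath` converts an absolute path to the comp-relative (mapped) form before it is stored on a tool, and `MapPath` expands it back when collecting for the farm. A rough round-trip sketch, meant for Fusion's script console where `comp` is provided by the application; the path and the mapped result are illustrative and depend on the comp's path map settings:

```python
absolute = "C:/projects/show/comp/renders/beauty.0001.exr"

# Store the comp-relative form on a tool so the comp stays portable:
mapped = comp.ReverseMapPath(absolute)
print(mapped)   # e.g. "Comp:/renders/beauty.0001.exr"

# Expand it back to an absolute path when collecting render output:
print(comp.MapPath(mapped))
```
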
@@ -1,6 +1,7 @@
 # -*- coding: utf-8 -*-
 import sys
 import os
+import errno
 import re
 import uuid
 import logging

@@ -9,9 +10,21 @@ import json

 import six

+from openpype.lib import StringTemplate
 from openpype.client import get_asset_by_name
-from openpype.pipeline import get_current_project_name, get_current_asset_name
-from openpype.pipeline.context_tools import get_current_project_asset
 from openpype.settings import get_current_project_settings
+from openpype.pipeline import (
+    get_current_project_name,
+    get_current_asset_name,
+    registered_host
+)
+from openpype.pipeline.context_tools import (
+    get_current_context_template_data,
+    get_current_project_asset
+)
+from openpype.widgets import popup
+from openpype.tools.utils.host_tools import get_tool_by_name
+from openpype.pipeline.create import CreateContext

 import hou

@@ -160,8 +173,6 @@ def validate_fps():

     if current_fps != fps:

-        from openpype.widgets import popup
-
         # Find main window
         parent = hou.ui.mainQtWindow()
         if parent is None:

@@ -321,52 +332,61 @@ def imprint(node, data, update=False):
         return

     current_parms = {p.name(): p for p in node.spareParms()}
-    update_parms = []
-    templates = []
+    update_parm_templates = []
+    new_parm_templates = []

     for key, value in data.items():
         if value is None:
             continue

-        parm = get_template_from_value(key, value)
+        parm_template = get_template_from_value(key, value)

         if key in current_parms:
-            if node.evalParm(key) == data[key]:
+            if node.evalParm(key) == value:
                 continue
             if not update:
                 log.debug(f"{key} already exists on {node}")
             else:
                 log.debug(f"replacing {key}")
-                update_parms.append(parm)
+                update_parm_templates.append(parm_template)
             continue

-        templates.append(parm)
+        new_parm_templates.append(parm_template)

-    parm_group = node.parmTemplateGroup()
-    parm_folder = parm_group.findFolder("Extra")
-
-    # if folder doesn't exist yet, create one and append to it,
-    # else append to existing one
-    if not parm_folder:
-        parm_folder = hou.FolderParmTemplate("folder", "Extra")
-        parm_folder.setParmTemplates(templates)
-        parm_group.append(parm_folder)
-    else:
-        for template in templates:
-            parm_group.appendToFolder(parm_folder, template)
-            # this is needed because the pointer to folder
-            # is for some reason lost every call to `appendToFolder()`
-            parm_folder = parm_group.findFolder("Extra")
-
-    node.setParmTemplateGroup(parm_group)
-
-    # TODO: Updating is done here, by calling probably deprecated functions.
-    # This needs to be addressed in the future.
-    if not update_parms:
+    if not new_parm_templates and not update_parm_templates:
         return

-    for parm in update_parms:
-        node.replaceSpareParmTuple(parm.name(), parm)
+    parm_group = node.parmTemplateGroup()
+
+    # Add new parm templates
+    if new_parm_templates:
+        parm_folder = parm_group.findFolder("Extra")
+
+        # if the folder doesn't exist yet, create one and append to it,
+        # else append to the existing one
+        if not parm_folder:
+            parm_folder = hou.FolderParmTemplate("folder", "Extra")
+            parm_folder.setParmTemplates(new_parm_templates)
+            parm_group.append(parm_folder)
+        else:
+            # Add to the parm template folder instance, then replace it with
+            # the updated one in the parm template group
+            for template in new_parm_templates:
+                parm_folder.addParmTemplate(template)
+            parm_group.replace(parm_folder.name(), parm_folder)
+
+    # Update existing parm templates
+    for parm_template in update_parm_templates:
+        parm_group.replace(parm_template.name(), parm_template)
+
+        # When replacing a parm with a parm of the same name it preserves its
+        # value if before the replacement the parm was not at the default,
+        # because it has a value override set. Since we're trying to update
+        # the parm by using the new value as `default` we enforce the parm is
+        # at its default state
+        node.parm(parm_template.name()).revertToDefaults()
+
+    node.setParmTemplateGroup(parm_group)


 def lsattr(attr, value=None, root="/"):

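The updated `imprint` leans on Houdini's parm-template workflow: replace the template in the group so its default carries the new value, then revert the parm so that default becomes visible. A condensed sketch of that pattern for a Houdini Python shell; the node path and parm name are illustrative, not part of the codebase:

```python
import hou

node = hou.node("/obj/geo1")  # illustrative node path

# Build a string parm template whose default carries the new value
parm_template = hou.StringParmTemplate(
    "id", "id", 1, default_value=("my_instance_id",))

group = node.parmTemplateGroup()
if group.find(parm_template.name()):
    # Replace the existing template, then revert the parm so the
    # new default becomes the visible value
    group.replace(parm_template.name(), parm_template)
    node.setParmTemplateGroup(group)
    node.parm(parm_template.name()).revertToDefaults()
else:
    group.append(parm_template)
    node.setParmTemplateGroup(group)
```
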
@@ -747,3 +767,193 @@ def get_camera_from_container(container):

     assert len(cameras) == 1, "Camera instance must have only one camera"
     return cameras[0]
+
+
+def get_context_var_changes():
+    """Get context var changes."""
+
+    houdini_vars_to_update = {}
+
+    project_settings = get_current_project_settings()
+    houdini_vars_settings = \
+        project_settings["houdini"]["general"]["update_houdini_var_context"]
+
+    if not houdini_vars_settings["enabled"]:
+        return houdini_vars_to_update
+
+    houdini_vars = houdini_vars_settings["houdini_vars"]
+
+    # No vars specified - nothing to do
+    if not houdini_vars:
+        return houdini_vars_to_update
+
+    # Get template data
+    template_data = get_current_context_template_data()
+
+    # Set Houdini vars
+    for item in houdini_vars:
+        # For consistency reasons we always force all vars to be uppercase
+        # Also remove any leading and trailing whitespace.
+        var = item["var"].strip().upper()
+
+        # get and resolve template in value
+        item_value = StringTemplate.format_template(
+            item["value"],
+            template_data
+        )
+
+        if var == "JOB" and item_value == "":
+            # sync $JOB to $HIP if $JOB is empty
+            item_value = os.environ["HIP"]
+
+        if item["is_directory"]:
+            item_value = item_value.replace("\\", "/")
+
+        current_value = hou.hscript("echo -n `${}`".format(var))[0]
+
+        if current_value != item_value:
+            houdini_vars_to_update[var] = (
+                current_value, item_value, item["is_directory"]
+            )
+
+    return houdini_vars_to_update
+
+
+def update_houdini_vars_context():
+    """Update asset context variables."""
+
+    for var, (_old, new, is_directory) in get_context_var_changes().items():
+        if is_directory:
+            try:
+                os.makedirs(new)
+            except OSError as e:
+                if e.errno != errno.EEXIST:
+                    print(
+                        "Failed to create ${} dir. Maybe due to "
+                        "insufficient permissions.".format(var)
+                    )
+
+        hou.hscript("set {}={}".format(var, new))
+        os.environ[var] = new
+        print("Updated ${} to {}".format(var, new))
+
+
+def update_houdini_vars_context_dialog():
+    """Show a pop-up to update asset context variables."""
+    update_vars = get_context_var_changes()
+    if not update_vars:
+        # Nothing to change
+        print("Nothing to change, Houdini vars are already up to date.")
+        return
+
+    message = "\n".join(
+        "${}: {} -> {}".format(var, old or "None", new or "None")
+        for var, (old, new, _is_directory) in update_vars.items()
+    )
+
+    # TODO: Use better UI!
+    parent = hou.ui.mainQtWindow()
+    dialog = popup.Popup(parent=parent)
+    dialog.setModal(True)
+    dialog.setWindowTitle("Houdini scene has outdated asset variables")
+    dialog.setMessage(message)
+    dialog.setButtonText("Fix")
+
+    # on_clicked is the Fix button callback
+    dialog.on_clicked.connect(update_houdini_vars_context)
+
+    dialog.show()
+
+
+def publisher_show_and_publish(comment=None):
+    """Open the publisher window and trigger the publishing action.
+
+    Args:
+        comment (Optional[str]): Comment to set in the publisher window.
+    """
+
+    main_window = get_main_window()
+    publisher_window = get_tool_by_name(
+        tool_name="publisher",
+        parent=main_window,
+    )
+    publisher_window.show_and_publish(comment)
+
+
+def find_rop_input_dependencies(input_tuple):
+    """Recursively collect ROP input dependencies of a ROP node.
+
+    Args:
+        input_tuple: A `hou.RopNode.inputDependencies()` result, which can
+            be nested tuples representing the input dependencies of the ROP
+            node, consisting of ROPs and the frames that need to be
+            rendered prior to rendering the ROP.
+
+    Returns:
+        list: The `RopNode.path()` values found inside the input tuple.
+    """
+
+    out_list = []
+    if isinstance(input_tuple[0], hou.RopNode):
+        return input_tuple[0].path()
+
+    if isinstance(input_tuple[0], tuple):
+        for item in input_tuple:
+            out_list.append(find_rop_input_dependencies(item))
+
+    return out_list
+
+
+def self_publish():
+    """Self publish from ROP nodes.
+
+    Firstly, it gets the node and its dependencies.
+    Then, it deactivates all other ROPs.
+    And finally, it triggers the publishing action.
+    """
+
+    result, comment = hou.ui.readInput(
+        "Add Publish Comment",
+        buttons=("Publish", "Cancel"),
+        title="Publish comment",
+        close_choice=1
+    )
+
+    if result:
+        return
+
+    current_node = hou.node(".")
+    inputs_paths = find_rop_input_dependencies(
+        current_node.inputDependencies()
+    )
+    inputs_paths.append(current_node.path())
+
+    host = registered_host()
+    context = CreateContext(host, reset=True)
+
+    for instance in context.instances:
+        node_path = instance.data.get("instance_node")
+        instance["active"] = node_path and node_path in inputs_paths
+
+    context.save_changes()
+
+    publisher_show_and_publish(comment)
+
+
+def add_self_publish_button(node):
+    """Add a self publish button to the ROP node."""
+
+    label = os.environ.get("AVALON_LABEL") or "OpenPype"
+
+    button_parm = hou.ButtonParmTemplate(
+        "ayon_self_publish",
+        "{} Publish".format(label),
+        script_callback="from openpype.hosts.houdini.api.lib import "
+                        "self_publish; self_publish()",
+        script_callback_language=hou.scriptLanguage.Python,
+        join_with_next=True
+    )
+
+    template = node.parmTemplateGroup()
+    template.insertBefore((0,), button_parm)
+    node.setParmTemplateGroup(template)

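Because `find_rop_input_dependencies` is plain recursion over nested tuples, its behaviour can be checked outside Houdini with a stand-in node class. Everything below is a hypothetical stand-in for illustration, not part of the codebase; note that the original returns a bare path for a leaf tuple but a list otherwise:

```python
class FakeRop:
    """Stand-in for hou.RopNode in this illustration."""
    def __init__(self, path):
        self._path = path

    def path(self):
        return self._path


def find_rop_input_dependencies(input_tuple):
    # Same shape as the helper above: a leaf tuple starts with a
    # node, a branch is a tuple of tuples.
    out_list = []
    if isinstance(input_tuple[0], FakeRop):
        return input_tuple[0].path()
    if isinstance(input_tuple[0], tuple):
        for item in input_tuple:
            out_list.append(find_rop_input_dependencies(item))
    return out_list


# Two upstream ROPs, each a (node, frame_range) leaf tuple:
deps = (
    (FakeRop("/out/geo1"), (1, 10)),
    (FakeRop("/out/geo2"), (1, 10)),
)
print(find_rop_input_dependencies(deps))  # ['/out/geo1', '/out/geo2']
```
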
@@ -300,6 +300,9 @@ def on_save():

     log.info("Running callback on save..")

+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     nodes = lib.get_id_required_nodes()
     for node, new_id in lib.generate_ids(nodes):
         lib.set_id(node, new_id, overwrite=False)

@@ -335,6 +338,9 @@ def on_open():

     log.info("Running callback on open..")

+    # update houdini vars
+    lib.update_houdini_vars_context_dialog()
+
     # Validate FPS after update_task_from_path to
     # ensure it is using the correct FPS for the asset
     lib.validate_fps()

@@ -399,6 +405,7 @@ def _set_context_settings():
     """

     lib.reset_framerange()
+    lib.update_houdini_vars_context()


 def on_pyblish_instance_toggled(instance, new_value, old_value):

@@ -13,7 +13,7 @@ from openpype.pipeline import (
     CreatedInstance
 )
 from openpype.lib import BoolDef
-from .lib import imprint, read, lsattr
+from .lib import imprint, read, lsattr, add_self_publish_button


 class OpenPypeCreatorError(CreatorError):

@@ -168,6 +168,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
     """Base class for most of the Houdini creator plugins."""
     selected_nodes = []
     settings_name = None
+    add_publish_button = False

     def create(self, subset_name, instance_data, pre_create_data):
         try:

@@ -187,13 +188,18 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
             self.customize_node_look(instance_node)

             instance_data["instance_node"] = instance_node.path()
+            instance_data["instance_id"] = instance_node.path()
             instance = CreatedInstance(
                 self.family,
                 subset_name,
                 instance_data,
                 self)
             self._add_instance_to_context(instance)
-            imprint(instance_node, instance.data_to_store())
+            self.imprint(instance_node, instance.data_to_store())
+
+            if self.add_publish_button:
+                add_self_publish_button(instance_node)

             return instance

         except hou.Error as er:

@@ -222,25 +228,42 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
         self.cache_subsets(self.collection_shared_data)
         for instance in self.collection_shared_data[
                 "houdini_cached_subsets"].get(self.identifier, []):
+
+            node_data = read(instance)
+
+            # Node paths are always the full node path since that is unique
+            # Because it's the node's path it's not written into attributes
+            # but explicitly collected
+            node_path = instance.path()
+            node_data["instance_id"] = node_path
+            node_data["instance_node"] = node_path
+
             created_instance = CreatedInstance.from_existing(
-                read(instance), self
+                node_data, self
             )
             self._add_instance_to_context(created_instance)

     def update_instances(self, update_list):
         for created_inst, changes in update_list:
             instance_node = hou.node(created_inst.get("instance_node"))
+
             new_values = {
                 key: changes[key].new_value
                 for key in changes.changed_keys
             }
-            imprint(
+            # Update parm templates and values
+            self.imprint(
                 instance_node,
                 new_values,
                 update=True
             )

+    def imprint(self, node, values, update=False):
+        # Never store instance node and instance id since that data comes
+        # from the node's path
+        values.pop("instance_node", None)
+        values.pop("instance_id", None)
+        imprint(node, values, update=update)
+
     def remove_instances(self, instances):
         """Remove specified instance from the scene.

@@ -299,6 +322,12 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
     def apply_settings(self, project_settings):
         """Method called on initialization of plugin to apply settings."""

+        # Apply general settings
+        houdini_general_settings = project_settings["houdini"]["general"]
+        self.add_publish_button = houdini_general_settings.get(
+            "add_self_publish_button", False)
+
+        # Apply creator settings
         settings_name = self.settings_name
         if settings_name is None:
             settings_name = self.__class__.__name__

@@ -6,6 +6,9 @@ import platform
 from openpype.settings import get_project_settings
 from openpype.pipeline import get_current_project_name

+from openpype.lib import StringTemplate
+from openpype.pipeline.context_tools import get_current_context_template_data
+
 import hou

 log = logging.getLogger("openpype.hosts.houdini.shelves")

@@ -26,10 +29,16 @@ def generate_shelves():
         log.debug("No custom shelves found in project settings.")
         return

+    # Get template data
+    template_data = get_current_context_template_data()
+
     for shelf_set_config in shelves_set_config:
         shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
         shelf_set_os_filepath = shelf_set_filepath[current_os]
         if shelf_set_os_filepath:
+            shelf_set_os_filepath = get_path_using_template_data(
+                shelf_set_os_filepath, template_data
+            )
             if not os.path.isfile(shelf_set_os_filepath):
                 log.error("Shelf path doesn't exist - "
                           "{}".format(shelf_set_os_filepath))

@@ -81,7 +90,9 @@ def generate_shelves():
                               "script path of the tool.")
                 continue

-            tool = get_or_create_tool(tool_definition, shelf)
+            tool = get_or_create_tool(
+                tool_definition, shelf, template_data
+            )

             if not tool:
                 continue

@@ -144,7 +155,7 @@ def get_or_create_shelf(shelf_label):
     return new_shelf


-def get_or_create_tool(tool_definition, shelf):
+def get_or_create_tool(tool_definition, shelf, template_data):
     """Verify that the tool exists and update it. If not, create
     a new one.

@@ -162,10 +173,16 @@ def get_or_create_tool(tool_definition, shelf):
         return

     script_path = tool_definition["script"]
+    script_path = get_path_using_template_data(script_path, template_data)
     if not script_path or not os.path.exists(script_path):
         log.warning("This path doesn't exist - {}".format(script_path))
         return

+    icon_path = tool_definition["icon"]
+    if icon_path:
+        icon_path = get_path_using_template_data(icon_path, template_data)
+        tool_definition["icon"] = icon_path
+
     existing_tools = shelf.tools()
     existing_tool = next(
         (tool for tool in existing_tools if tool.label() == tool_label),

@@ -184,3 +201,10 @@ def get_or_create_tool(tool_definition, shelf):

     tool_name = re.sub(r"[^\w\d]+", "_", tool_label).lower()
     return hou.shelves.newTool(name=tool_name, **tool_definition)
+
+
+def get_path_using_template_data(path, template_data):
+    path = StringTemplate.format_template(path, template_data)
+    path = path.replace("\\", "/")
+
+    return path

@@ -1,4 +1,5 @@
 import os
+import platform
 import subprocess

 from openpype.lib.vendor_bin_utils import find_executable

@@ -8,17 +9,31 @@ from openpype.pipeline import load
 class ShowInUsdview(load.LoaderPlugin):
     """Open USD file in usdview"""

-    families = ["colorbleed.usd"]
     label = "Show in usdview"
-    representations = ["usd", "usda", "usdlc", "usdnc"]
-    order = 10
+    representations = ["*"]
+    families = ["*"]
+    extensions = {"usd", "usda", "usdlc", "usdnc", "abc"}
+    order = 15

     icon = "code-fork"
     color = "white"

     def load(self, context, name=None, namespace=None, data=None):
         from pathlib import Path

-        usdview = find_executable("usdview")
+        if platform.system() == "Windows":
+            executable = "usdview.bat"
+        else:
+            executable = "usdview"
+
+        usdview = find_executable(executable)
         if not usdview:
             raise RuntimeError("Unable to find usdview")

+        # For some reason Windows can return the path like:
+        # C:/PROGRA~1/SIDEEF~1/HOUDIN~1.435/bin/usdview
+        # convert to a resolved path so `subprocess` can take it
+        usdview = str(Path(usdview).resolve().as_posix())
+
         filepath = self.filepath_from_context(context)
         filepath = os.path.normpath(filepath)

@@ -30,14 +45,4 @@ class ShowInUsdview(load.LoaderPlugin):

         self.log.info("Start houdini variant of usdview...")

-        # For now avoid some pipeline environment variables that initialize
-        # Avalon in Houdini as it is redundant for usdview and slows boot time
-        env = os.environ.copy()
-        env.pop("PYTHONPATH", None)
-        env.pop("HOUDINI_SCRIPT_PATH", None)
-        env.pop("HOUDINI_MENU_PATH", None)
-
-        # Force string to avoid unicode issues
-        env = {str(key): str(value) for key, value in env.items()}
-
-        subprocess.Popen([usdview, filepath, "--renderer", "GL"], env=env)
+        subprocess.Popen([usdview, filepath, "--renderer", "GL"])

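The loader now picks a platform-specific executable name and resolves Windows 8.3 short paths before launching. A rough standalone sketch of the same lookup-and-launch flow, using `shutil.which` as a stand-in for OpenPype's `find_executable` and an illustrative scene path:

```python
import platform
import shutil
import subprocess
from pathlib import Path

executable = "usdview.bat" if platform.system() == "Windows" else "usdview"
usdview = shutil.which(executable)
if not usdview:
    raise RuntimeError("Unable to find usdview")

# Windows may return an 8.3 short path (e.g. C:/PROGRA~1/...);
# resolving gives subprocess a stable long path.
usdview = str(Path(usdview).resolve().as_posix())

subprocess.Popen([usdview, "scene.usd", "--renderer", "GL"])
```
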
@@ -86,6 +86,14 @@ openpype.hosts.houdini.api.lib.reset_framerange()
 ]]></scriptCode>
   </scriptItem>

+  <scriptItem id="update_context_vars">
+    <label>Update Houdini Vars</label>
+    <scriptCode><![CDATA[
+import openpype.hosts.houdini.api.lib
+openpype.hosts.houdini.api.lib.update_houdini_vars_context_dialog()
+]]></scriptCode>
+  </scriptItem>
+
   <separatorItem/>
   <scriptItem id="experimental_tools">
     <label>Experimental tools...</label>

@@ -1,15 +1,35 @@
 # -*- coding: utf-8 -*-
 """Library of functions useful for 3dsmax pipeline."""
 import contextlib
 import logging
 import json
 from typing import Any, Dict, Union

 import six
 from openpype.pipeline import get_current_project_name, colorspace
 from openpype.settings import get_project_settings
 from openpype.pipeline.context_tools import (
     get_current_project, get_current_project_asset)
 from openpype.style import load_stylesheet
 from pymxs import runtime as rt


 JSON_PREFIX = "JSON::"
+log = logging.getLogger("openpype.hosts.max")
+
+
+def get_main_window():
+    """Acquire Max's main window."""
+    from qtpy import QtWidgets
+    top_widgets = QtWidgets.QApplication.topLevelWidgets()
+    name = "QmaxApplicationWindow"
+    for widget in top_widgets:
+        if (
+            widget.inherits("QMainWindow")
+            and widget.metaObject().className() == name
+        ):
+            return widget
+    raise RuntimeError('Could not find the 3dsMax main window.')


 def imprint(node_name: str, data: dict) -> bool:

@@ -277,6 +297,7 @@ def set_context_setting():
     """
     reset_scene_resolution()
     reset_frame_range()
+    reset_colorspace()


 def get_max_version():

@@ -292,6 +313,14 @@ def get_max_version():
     return max_info[7]


+def is_headless():
+    """Check if 3dsMax runs in batch mode.
+
+    If it returns True, it runs in 3dsbatch.exe.
+    If it returns False, it runs in 3dsmax.exe.
+    """
+    return rt.maxops.isInNonInteractiveMode()
+
+
 @contextlib.contextmanager
 def viewport_camera(camera):
     original = rt.viewport.getCamera()

@@ -314,6 +343,51 @@ def set_timeline(frameStart, frameEnd):
     return rt.animationRange


+def reset_colorspace():
+    """OCIO configuration.
+
+    Supported in 3dsMax 2024+.
+    """
+    if int(get_max_version()) < 2024:
+        return
+    project_name = get_current_project_name()
+    colorspace_mgr = rt.ColorPipelineMgr
+    project_settings = get_project_settings(project_name)
+
+    max_config_data = colorspace.get_imageio_config(
+        project_name, "max", project_settings)
+    if max_config_data:
+        ocio_config_path = max_config_data["path"]
+        colorspace_mgr = rt.ColorPipelineMgr
+        colorspace_mgr.Mode = rt.Name("OCIO_Custom")
+        colorspace_mgr.OCIOConfigPath = ocio_config_path
+
+
+def check_colorspace():
+    parent = get_main_window()
+    if parent is None:
+        log.info("Skipping outdated pop-up "
+                 "because Max main window can't be found.")
+    if int(get_max_version()) >= 2024:
+        color_mgr = rt.ColorPipelineMgr
+        project_name = get_current_project_name()
+        project_settings = get_project_settings(project_name)
+        max_config_data = colorspace.get_imageio_config(
+            project_name, "max", project_settings)
+        if max_config_data and color_mgr.Mode != rt.Name("OCIO_Custom"):
+            if not is_headless():
+                from openpype.widgets import popup
+                dialog = popup.Popup(parent=parent)
+                dialog.setWindowTitle("Warning: Wrong OCIO Mode")
+                dialog.setMessage("This scene has the wrong OCIO "
+                                  "Mode setting.")
+                dialog.setButtonText("Fix")
+                dialog.setStyleSheet(load_stylesheet())
+                dialog.on_clicked.connect(reset_colorspace)
+                dialog.show()
+
+
 def unique_namespace(namespace, format="%02d",
                      prefix="", suffix="", con_suffix="CON"):
     """Return unique namespace

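The colorspace helpers above gate everything on 3dsMax 2024+ and on interactive mode. A compressed sketch of the same checks for the 3dsMax Python console; the config path is illustrative:

```python
from pymxs import runtime as rt

# Guard UI prompts when running under 3dsbatch.exe
headless = rt.maxops.isInNonInteractiveMode()

color_mgr = rt.ColorPipelineMgr
if color_mgr.Mode != rt.Name("OCIO_Custom"):
    # Switch to a custom OCIO config (path is illustrative)
    color_mgr.Mode = rt.Name("OCIO_Custom")
    color_mgr.OCIOConfigPath = "C:/configs/aces_1.2/config.ocio"
    if not headless:
        print("OCIO mode fixed")
```
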
@@ -119,6 +119,10 @@ class OpenPypeMenu(object):
         frame_action.triggered.connect(self.frame_range_callback)
         openpype_menu.addAction(frame_action)

+        colorspace_action = QtWidgets.QAction("Set Colorspace", openpype_menu)
+        colorspace_action.triggered.connect(self.colorspace_callback)
+        openpype_menu.addAction(colorspace_action)
+
         return openpype_menu

     def load_callback(self):

@@ -148,3 +152,7 @@ class OpenPypeMenu(object):
     def frame_range_callback(self):
         """Callback to reset frame range"""
         return lib.reset_frame_range()
+
+    def colorspace_callback(self):
+        """Callback to reset colorspace"""
+        return lib.reset_colorspace()

@@ -57,6 +57,9 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         rt.callbacks.addScript(rt.Name('systemPostNew'),
                                context_setting)

+        rt.callbacks.addScript(rt.Name('filePostOpen'),
+                               lib.check_colorspace)
+
     def has_unsaved_changes(self):
         # TODO: how to get it from 3dsmax?
         return True

@@ -65,12 +65,12 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"

         on button_add pressed do
         (
-            current_selection = selectByName title:"Select Objects to add to
+            current_sel = selectByName title:"Select Objects to add to
 the Container" buttontext:"Add" filter:nodes_to_add
-            if current_selection == undefined then return False
+            if current_sel == undefined then return False
             temp_arr = #()
             i_node_arr = #()
-            for c in current_selection do
+            for c in current_sel do
             (
                 handle_name = node_to_name c
                 node_ref = NodeTransformMonitor node:c

@@ -89,15 +89,18 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData"

         on button_del pressed do
         (
-            current_selection = selectByName title:"Select Objects to remove
+            current_sel = selectByName title:"Select Objects to remove
 from the Container" buttontext:"Remove" filter: nodes_to_rmv
-            if current_selection == undefined then return False
+            if current_sel == undefined or current_sel.count == 0 then
+            (
+                return False
+            )
             temp_arr = #()
             i_node_arr = #()
             new_i_node_arr = #()
             new_temp_arr = #()

-            for c in current_selection do
+            for c in current_sel do
             (
                 node_ref = NodeTransformMonitor node:c as string
                 handle_name = node_to_name c

@@ -34,6 +34,12 @@ class CollectRender(pyblish.api.InstancePlugin):
             files_by_aov.update(aovs)

         camera = rt.viewport.GetCamera()
+        if instance.data.get("members"):
+            camera_list = [member for member in instance.data["members"]
+                           if rt.ClassOf(member) == rt.Camera.Classes]
+            if camera_list:
+                camera = camera_list[-1]
+
         instance.data["cameras"] = [camera.name] if camera else None  # noqa

         if instance.data.get("multiCamera"):

@@ -86,6 +92,17 @@ class CollectRender(pyblish.api.InstancePlugin):
         instance.data["colorspaceConfig"] = ""
         instance.data["colorspaceDisplay"] = "sRGB"
         instance.data["colorspaceView"] = "ACES 1.0 SDR-video"

+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         instance.data["renderProducts"] = colorspace.ARenderProduct()
         instance.data["publishJobState"] = "Suspended"
         instance.data["attachTo"] = []

@@ -4,6 +4,7 @@ import pyblish.api

 from pymxs import runtime as rt
 from openpype.lib import BoolDef
+from openpype.hosts.max.api.lib import get_max_version
 from openpype.pipeline.publish import OpenPypePyblishPluginMixin

@@ -43,6 +44,17 @@ class CollectReview(pyblish.api.InstancePlugin,
             "dspSafeFrame": attr_values.get("dspSafeFrame"),
             "dspFrameNums": attr_values.get("dspFrameNums")
         }

+        if int(get_max_version()) >= 2024:
+            colorspace_mgr = rt.ColorPipelineMgr  # noqa
+            display = next(
+                (display for display in colorspace_mgr.GetDisplayList()))
+            view_transform = next(
+                (view for view in colorspace_mgr.GetViewList(display)))
+            instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath
+            instance.data["colorspaceDisplay"] = display
+            instance.data["colorspaceView"] = view_transform
+
         # Enable ftrack functionality
         instance.data.setdefault("families", []).append('ftrack')

@@ -54,7 +66,6 @@ class CollectReview(pyblish.api.InstancePlugin,

     @classmethod
     def get_attribute_defs(cls):
-
         return [
             BoolDef("dspGeometry",
                     label="Geometry",

@@ -36,6 +36,7 @@ class ExtractPointCloud(publish.Extractor):
     label = "Extract Point Cloud"
     hosts = ["max"]
     families = ["pointcloud"]
+    settings = []

     def process(self, instance):
         self.settings = self.get_setting(instance)

@@ -1,21 +1,24 @@
 # -*- coding: utf-8 -*-
 import pyblish.api
 from openpype.pipeline import PublishValidationError
 from pymxs import runtime as rt


-class ValidateMaxContents(pyblish.api.InstancePlugin):
-    """Validates Max contents.
+class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
+    """Validates that the instance has members.

-    Check if MaxScene container includes any contents underneath.
+    Check if the containers include any contents underneath.
     """

     order = pyblish.api.ValidatorOrder
     families = ["camera",
                 "model",
                 "maxScene",
-                "review"]
+                "review",
+                "pointcache",
+                "pointcloud",
+                "redshiftproxy"]
     hosts = ["max"]
-    label = "Max Scene Contents"
+    label = "Container Contents"

     def process(self, instance):
         if not instance.data["members"]:

@@ -100,8 +100,8 @@ class ValidatePointCloud(pyblish.api.InstancePlugin):

         selection_list = instance.data["members"]

-        project_setting = instance.data["project_setting"]
-        attr_settings = project_setting["max"]["PointCloud"]["attribute"]
+        project_settings = instance.context.data["project_settings"]
+        attr_settings = project_settings["max"]["PointCloud"]["attribute"]
         for sel in selection_list:
             obj = sel.baseobject
             anim_names = rt.GetSubAnimNames(obj)

@@ -6,6 +6,7 @@ from pyblish.api import Instance

 from maya import cmds  # noqa
 import maya.mel as mel  # noqa
+from openpype.hosts.maya.api.lib import maintained_selection


 class FBXExtractor:

@@ -53,7 +54,6 @@ class FBXExtractor:
             "bakeComplexEnd": int,
             "bakeComplexStep": int,
             "bakeResampleAnimation": bool,
-            "animationOnly": bool,
             "useSceneName": bool,
             "quaternion": str,  # "euler"
             "shapes": bool,

@@ -63,7 +63,10 @@ class FBXExtractor:
             "embeddedTextures": bool,
             "inputConnections": bool,
             "upAxis": str,  # x, y or z,
-            "triangulate": bool
+            "triangulate": bool,
+            "fileVersion": str,
+            "skeletonDefinitions": bool,
+            "referencedAssetsContent": bool
         }

     @property

@@ -94,7 +97,6 @@ class FBXExtractor:
             "bakeComplexEnd": end_frame,
             "bakeComplexStep": 1,
             "bakeResampleAnimation": True,
-            "animationOnly": False,
             "useSceneName": False,
             "quaternion": "euler",
             "shapes": True,

@@ -104,7 +106,10 @@ class FBXExtractor:
             "embeddedTextures": False,
             "inputConnections": True,
             "upAxis": "y",
-            "triangulate": False
+            "triangulate": False,
+            "fileVersion": "FBX202000",
+            "skeletonDefinitions": False,
+            "referencedAssetsContent": False
         }

     def __init__(self, log=None):

@@ -198,5 +203,9 @@ class FBXExtractor:
         path (str): Path to use for export.

         """
-        cmds.select(members, r=True, noExpand=True)
-        mel.eval('FBXExport -f "{}" -s'.format(path))
+        # The export requires forward slashes because we need
+        # to format it into a string in a mel expression
+        path = path.replace("\\", "/")
+        with maintained_selection():
+            cmds.select(members, r=True, noExpand=True)
+            mel.eval('FBXExport -f "{}" -s'.format(path))

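The extractor now normalizes the export path to forward slashes before splicing it into the MEL command, since backslashes in a Windows path would be read as escape sequences by MEL. A quick plain-Python illustration of why:

```python
path = "C:\\renders\\test.fbx"

# Splicing the raw Windows path into the MEL command leaves literal
# backslashes that MEL would treat as escape sequences ("\t", "\r", ...):
print('FBXExport -f "{}" -s'.format(path))
# FBXExport -f "C:\renders\test.fbx" -s

# Normalizing first yields a command MEL parses as intended:
print('FBXExport -f "{}" -s'.format(path.replace("\\", "/")))
# FBXExport -f "C:/renders/test.fbx" -s
```
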
@@ -146,6 +146,10 @@ def suspended_refresh(suspend=True):

     cmds.ogs(pause=True) is a toggle so we can't pass False.
     """
+    if IS_HEADLESS:
+        yield
+        return
+
     original_state = cmds.ogs(query=True, pause=True)
     try:
         if suspend and not original_state:

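The headless guard relies on a generator-based context manager yielding exactly once on every path. A minimal runnable sketch of that early-yield pattern (the `HEADLESS` flag stands in for the module-level `IS_HEADLESS`):

```python
import contextlib

HEADLESS = True  # stand-in for the module-level IS_HEADLESS flag


@contextlib.contextmanager
def suspended_refresh():
    # In headless mode there is no viewport to pause, but the manager
    # must still yield exactly once before returning.
    if HEADLESS:
        yield
        return
    print("pause viewport")
    try:
        yield
    finally:
        print("resume viewport")


with suspended_refresh():
    print("do work")
```
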
@@ -183,6 +187,51 @@ def maintained_selection():
         cmds.select(clear=True)


+def get_namespace(node):
+    """Return the namespace of the given node."""
+    node_name = node.rsplit("|", 1)[-1]
+    if ":" in node_name:
+        return node_name.rsplit(":", 1)[0]
+    else:
+        return ""
+
+
+def strip_namespace(node, namespace):
+    """Strip the given namespace from the node path.
+
+    The namespace is only stripped from names
+    if it starts with that namespace. If the namespace
+    occurs within another namespace it's not removed.
+
+    Examples:
+        >>> strip_namespace("namespace:node", namespace="namespace:")
+        "node"
+        >>> strip_namespace("hello:world:node", namespace="hello:world")
+        "node"
+        >>> strip_namespace("hello:world:node", namespace="hello")
+        "world:node"
+        >>> strip_namespace("hello:world:node", namespace="world")
+        "hello:world:node"
+        >>> strip_namespace("ns:group|ns:node", namespace="ns")
+        "group|node"
+
+    Returns:
+        str: Node name without the given starting namespace.
+
+    """
+
+    # Ensure namespace ends with `:`
+    if not namespace.endswith(":"):
+        namespace = "{}:".format(namespace)
+
+    # The long path for a node can also have the namespace
+    # in its parents so we need to remove it from each
+    return "|".join(
+        name[len(namespace):] if name.startswith(namespace) else name
+        for name in node.split("|")
+    )
+
+
 def get_custom_namespace(custom_namespace):
     """Return unique namespace.

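Both helpers are pure string manipulation, so the doctest-style examples can be verified with plain Python outside Maya; the functions are repeated here in condensed form purely so the check is self-contained:

```python
def get_namespace(node):
    node_name = node.rsplit("|", 1)[-1]
    return node_name.rsplit(":", 1)[0] if ":" in node_name else ""


def strip_namespace(node, namespace):
    if not namespace.endswith(":"):
        namespace = "{}:".format(namespace)
    return "|".join(
        name[len(namespace):] if name.startswith(namespace) else name
        for name in node.split("|")
    )


assert get_namespace("|grp|char:body") == "char"
assert get_namespace("|grp|body") == ""
assert strip_namespace("ns:group|ns:node", "ns") == "group|node"
assert strip_namespace("hello:world:node", "world") == "hello:world:node"
```
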
@@ -922,7 +971,7 @@ def no_display_layers(nodes):


 @contextlib.contextmanager
-def namespaced(namespace, new=True):
+def namespaced(namespace, new=True, relative_names=None):
     """Work inside namespace during context

     Args:

@@ -934,15 +983,19 @@ def namespaced(namespace, new=True):

     """
     original = cmds.namespaceInfo(cur=True, absoluteName=True)
+    original_relative_names = cmds.namespace(query=True, relativeNames=True)
     if new:
         namespace = unique_namespace(namespace)
         cmds.namespace(add=namespace)
+    if relative_names is not None:
+        cmds.namespace(relativeNames=relative_names)
     try:
         cmds.namespace(set=namespace)
         yield namespace
     finally:
         cmds.namespace(set=original)
+        if relative_names is not None:
+            cmds.namespace(relativeNames=original_relative_names)


 @contextlib.contextmanager

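A hedged usage sketch for the extended `namespaced` context manager, meant to run inside Maya with the helper imported from `openpype.hosts.maya.api.lib`; the namespace and node names are illustrative:

```python
from maya import cmds
from openpype.hosts.maya.api.lib import namespaced

with namespaced("ASSET", new=True, relative_names=True) as ns:
    # Nodes created here land in the (uniquified) namespace, and name
    # resolution is relative while the block is active.
    cmds.polyCube(name="box_GEO")
    print("working in", ns)

# On exit both the previous current namespace and the previous
# relativeNames state are restored.
```
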
@@ -2571,7 +2624,7 @@ def bake_to_world_space(nodes,
         new_name = "{0}_baked".format(short_name)
         new_node = cmds.duplicate(node,
                                   name=new_name,
-                                  renameChildren=True)[0]
+                                  renameChildren=True)[0]  # noqa

         # Connect all attributes on the node except for transform
         # attributes

@@ -4100,14 +4153,19 @@ def create_rig_animation_instance(
     """
     if options is None:
         options = {}
+
+    name = context["representation"]["name"]
     output = next((node for node in nodes if
                    node.endswith("out_SET")), None)
     controls = next((node for node in nodes if
                      node.endswith("controls_SET")), None)
+    if name != "fbx":
+        assert output, "No out_SET in rig, this is a bug."
+        assert controls, "No controls_SET in rig, this is a bug."

-    assert output, "No out_SET in rig, this is a bug."
-    assert controls, "No controls_SET in rig, this is a bug."
+    anim_skeleton = next((node for node in nodes if
+                          node.endswith("skeletonAnim_SET")), None)
+    skeleton_mesh = next((node for node in nodes if
+                          node.endswith("skeletonMesh_SET")), None)

     # Find the roots amongst the loaded nodes
     roots = (

@@ -4119,9 +4177,7 @@ def create_rig_animation_instance(
     custom_subset = options.get("animationSubsetName")
     if custom_subset:
         formatting_data = {
-            # TODO remove 'asset_type' and replace 'asset_name' with 'asset'
-            "asset_name": context['asset']['name'],
-            "asset_type": context['asset']['type'],
+            "asset": context["asset"],
             "subset": context['subset']['name'],
             "family": (
                 context['subset']['data'].get('family') or

@@ -4142,10 +4198,12 @@ def create_rig_animation_instance(

     host = registered_host()
     create_context = CreateContext(host)

     # Create the animation instance
+    rig_sets = [output, controls, anim_skeleton, skeleton_mesh]
+    # Remove sets that this particular rig does not have
+    rig_sets = [s for s in rig_sets if s is not None]
     with maintained_selection():
-        cmds.select([output, controls] + roots, noExpand=True)
+        cmds.select(rig_sets + roots, noExpand=True)
         create_context.create(
             creator_identifier=creator_identifier,
             variant=namespace,

@@ -1,14 +1,13 @@
 import os
 import logging
+from functools import partial

 from qtpy import QtWidgets, QtGui

 import maya.utils
 import maya.cmds as cmds

-from openpype.settings import get_project_settings
 from openpype.pipeline import (
     get_current_project_name,
     get_current_asset_name,
     get_current_task_name
 )

@@ -46,12 +45,12 @@ def get_context_label():
     )


-def install():
+def install(project_settings):
     if cmds.about(batch=True):
         log.info("Skipping openpype.menu initialization in batch mode..")
         return

-    def deferred():
+    def add_menu():
         pyblish_icon = host_tools.get_pyblish_icon()
         parent_widget = get_main_window()
         cmds.menu(

@@ -191,7 +190,7 @@ def install(project_settings):

         cmds.setParent(MENU_NAME, menu=True)

-    def add_scripts_menu():
+    def add_scripts_menu(project_settings):
         try:
             import scriptsmenu.launchformaya as launchformaya
         except ImportError:

@@ -201,9 +200,6 @@ def install(project_settings):
             )
             return

-        # load configuration of custom menu
-        project_name = get_current_project_name()
-        project_settings = get_project_settings(project_name)
         config = project_settings["maya"]["scriptsmenu"]["definition"]
         _menu = project_settings["maya"]["scriptsmenu"]["name"]

@@ -225,8 +221,9 @@ def install(project_settings):
     # so that it only gets called after Maya UI has initialized too.
     # This is crucial with Maya 2020+ which initializes without UI
     # first as a QCoreApplication
-    maya.utils.executeDeferred(deferred)
-    cmds.evalDeferred(add_scripts_menu, lowestPriority=True)
+    maya.utils.executeDeferred(add_menu)
+    cmds.evalDeferred(partial(add_scripts_menu, project_settings),
+                      lowestPriority=True)


 def uninstall():

@@ -28,8 +28,6 @@ from openpype.lib import (
 from openpype.pipeline import (
     legacy_io,
     get_current_project_name,
-    get_current_asset_name,
-    get_current_task_name,
     register_loader_plugin_path,
     register_inventory_action_path,
     register_creator_plugin_path,

@@ -97,6 +95,8 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         self.log.info("Installing callbacks ... ")
         register_event_callback("init", on_init)

+        _set_project()
+
         if lib.IS_HEADLESS:
             self.log.info((
                 "Running in headless mode, skipping Maya save/open/new"

@@ -105,10 +105,9 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

             return

-        _set_project()
         self._register_callbacks()

-        menu.install()
+        menu.install(project_settings)

         register_event_callback("save", on_save)
         register_event_callback("open", on_open)

@@ -151,6 +151,7 @@ class MayaCreatorBase(object):
         # We never store the instance_node as value on the node since
         # it's the node name itself
         data.pop("instance_node", None)
+        data.pop("instance_id", None)

         # Don't store `families` since it's up to the creator itself
         # to define the initial publish families - not a stored attribute of

@@ -227,6 +228,7 @@ class MayaCreatorBase(object):

             # Explicitly re-parse the node name
             node_data["instance_node"] = node
+            node_data["instance_id"] = node

             # If the creator plug-in specifies
             families = self.get_publish_families()

@@ -601,6 +603,13 @@ class RenderlayerCreator(NewCreator, MayaCreatorBase):
 class Loader(LoaderPlugin):
     hosts = ["maya"]

+    load_settings = {}  # defined in settings
+
+    @classmethod
+    def apply_settings(cls, project_settings, system_settings):
+        super(Loader, cls).apply_settings(project_settings, system_settings)
+        cls.load_settings = project_settings['maya']['load']
+
     def get_custom_namespace_and_group(self, context, options, loader_key):
         """Queries Settings to get custom template for namespace and group.

@@ -613,12 +622,9 @@ class Loader(LoaderPlugin):
             loader_key (str): key to get separate configuration from Settings
                 ('reference_loader'|'import_loader')
         """
-        options["attach_to_root"] = True
-
-        asset = context['asset']
-        subset = context['subset']
-        settings = get_project_settings(context['project']['name'])
-        custom_naming = settings['maya']['load'][loader_key]
+        options["attach_to_root"] = True
+        custom_naming = self.load_settings[loader_key]

         if not custom_naming['namespace']:
             raise LoadError("No namespace specified in "

@@ -627,6 +633,8 @@ class Loader(LoaderPlugin):
             self.log.debug("No custom group_name, no group will be created.")
             options["attach_to_root"] = False

+        asset = context['asset']
+        subset = context['subset']
         formatting_data = {
             "asset_name": asset['name'],
             "asset_type": asset['type'],

@@ -7,7 +7,7 @@ class PreCopyMel(PreLaunchHook):

     Hook `GlobalHostDataHook` must be executed before this hook.
     """
-    app_groups = {"maya"}
+    app_groups = {"maya", "mayapy"}
    launch_types = {LaunchTypes.local}

     def execute(self):

openpype/hosts/maya/plugins/create/create_matchmove.py (new file, 32 lines)

@@ -0,0 +1,32 @@
from openpype.hosts.maya.api import (
    lib,
    plugin
)
from openpype.lib import BoolDef


class CreateMatchmove(plugin.MayaCreator):
    """Instance for more complex setup of cameras.

    Might contain multiple cameras, geometries etc.

    It is expected to be extracted into .abc or .ma
    """

    identifier = "io.openpype.creators.maya.matchmove"
    label = "Matchmove"
    family = "matchmove"
    icon = "video-camera"

    def get_instance_attr_defs(self):

        defs = lib.collect_animation_defs()

        defs.extend([
            BoolDef("bakeToWorldSpace",
                    label="Bake Cameras to World-Space",
                    tooltip="Bake Cameras to World-Space",
                    default=True),
        ])

        return defs

openpype/hosts/maya/plugins/create/create_multishot_layout.py (new file, 211 lines)

@@ -0,0 +1,211 @@
from ayon_api import (
    get_folder_by_name,
    get_folder_by_path,
    get_folders,
)
from maya import cmds  # noqa: F401

from openpype import AYON_SERVER_ENABLED
from openpype.client import get_assets
from openpype.hosts.maya.api import plugin
from openpype.lib import BoolDef, EnumDef, TextDef
from openpype.pipeline import (
    Creator,
    get_current_asset_name,
    get_current_project_name,
)
from openpype.pipeline.create import CreatorError


class CreateMultishotLayout(plugin.MayaCreator):
    """Create a multi-shot layout in the Maya scene.

    This creator will create a Camera Sequencer in the Maya scene based on
    the shots found under the specified folder. The shots will be added to
    the sequencer in the order of their clipIn and clipOut values. For each
    shot a Layout will be created.

    """
    identifier = "io.openpype.creators.maya.multishotlayout"
    label = "Multi-shot Layout"
    family = "layout"
    icon = "project-diagram"

    def get_pre_create_attr_defs(self):
        # Present the artist with a list of parents of the current context
        # to choose from. This will be used to get the shots under the
        # selected folder to create the Camera Sequencer.

        """
        Todo: `get_folder_by_name` should be switched to `get_folder_by_path`
            once the fork to pure AYON is done.

        Warning: this will not work for projects where the asset name
            is not unique across the project until the switch mentioned
            above is done.
        """

        current_folder = get_folder_by_name(
            project_name=get_current_project_name(),
            folder_name=get_current_asset_name(),
        )

        current_path_parts = current_folder["path"].split("/")

        # populate the list with parents of the current folder
        # this will create menu items like:
        # [
        #     {
        #         "value": "",
        #         "label": "project (shots directly under the project)"
        #     }, {
        #         "value": "shots/shot_01", "label": "shot_01 (current)"
        #     }, {
        #         "value": "shots", "label": "shots"
        #     }
        # ]

        # add the project as the first item
        items_with_label = [
            {
                "label": f"{self.project_name} "
                         "(shots directly under the project)",
                "value": ""
            }
        ]

        # go through the current folder path and add each part to the list,
        # but mark the current folder.
        for part_idx, part in enumerate(current_path_parts):
            label = part
            if label == current_folder["name"]:
                label = f"{label} (current)"

            value = "/".join(current_path_parts[:part_idx + 1])

            items_with_label.append({"label": label, "value": value})

        return [
            EnumDef("shotParent",
                    default=current_folder["name"],
                    label="Shot Parent Folder",
                    items=items_with_label,
                    ),
            BoolDef("groupLoadedAssets",
                    label="Group Loaded Assets",
                    tooltip="Enable this when you want to publish groups "
                            "of loaded assets",
                    default=False),
            TextDef("taskName",
                    label="Associated Task Name",
                    tooltip=("Task name to be associated "
                             "with the created Layout"),
                    default="layout"),
        ]

    def create(self, subset_name, instance_data, pre_create_data):
        shots = list(
            self.get_related_shots(folder_path=pre_create_data["shotParent"])
        )
        if not shots:
            # There are no shot folders under the specified folder.
            # We are raising an error here, but in the future we might
            # want to create new shot folders by publishing the layouts
            # and shots defined in the sequencer. A sort of editorial
            # publish inside of Maya.
            raise CreatorError((
                "No shots found under the specified "
                f"folder: {pre_create_data['shotParent']}."))

        # Get the layout creator
        layout_creator_id = "io.openpype.creators.maya.layout"
        layout_creator: Creator = self.create_context.creators.get(
            layout_creator_id)
        if not layout_creator:
            raise CreatorError(
                f"Creator {layout_creator_id} not found.")

        # Get OpenPype style asset documents for the shots
        op_asset_docs = get_assets(
            self.project_name, [s["id"] for s in shots])
        asset_docs_by_id = {doc["_id"]: doc for doc in op_asset_docs}
        for shot in shots:
            # we are setting the shot name displayed in the sequencer to
            # `shot name (shot label)` if the label is set, otherwise just
            # `shot name`. So far, labels are used only when the name is set
            # with characters that are not allowed in the shot name.
            if not shot["active"]:
                continue

            # get task for shot
            asset_doc = asset_docs_by_id[shot["id"]]

            tasks = asset_doc.get("data").get("tasks").keys()
            layout_task = None
            if pre_create_data["taskName"] in tasks:
                layout_task = pre_create_data["taskName"]

            shot_name = f"{shot['name']}%s" % (
                f" ({shot['label']})" if shot["label"] else "")
            cmds.shot(sequenceStartTime=shot["attrib"]["clipIn"],
                      sequenceEndTime=shot["attrib"]["clipOut"],
                      shotName=shot_name)

            # Create the layout instance with the layout creator

            instance_data = {
                "asset": shot["name"],
                "variant": layout_creator.get_default_variant()
            }
            if layout_task:
                instance_data["task"] = layout_task

            layout_creator.create(
                subset_name=layout_creator.get_subset_name(
                    layout_creator.get_default_variant(),
                    self.create_context.get_current_task_name(),
                    asset_doc,
                    self.project_name),
                instance_data=instance_data,
                pre_create_data={
                    "groupLoadedAssets": pre_create_data["groupLoadedAssets"]
                }
            )

    def get_related_shots(self, folder_path: str):
        """Get all shots related to the current asset.

        Get all folders of type Shot under the specified folder.

        Args:
            folder_path (str): Path of the folder.

        Returns:
            list: List of dicts with folder data.

        """
        # if folder_path is None, the project is selected as a root
        # and its name is used as a parent id
        parent_id = self.project_name
        if folder_path:
            current_folder = get_folder_by_path(
                project_name=self.project_name,
                folder_path=folder_path,
            )
            parent_id = current_folder["id"]

        # get all child folders of the current one
        return get_folders(
            project_name=self.project_name,
            parent_ids=[parent_id],
            fields=[
                "attrib.clipIn", "attrib.clipOut",
                "attrib.frameStart", "attrib.frameEnd",
                "name", "label", "path", "folderType", "id"
            ]
        )


# remove this creator if the AYON server is not enabled
if not AYON_SERVER_ENABLED:
    del CreateMultishotLayout

@@ -20,6 +20,13 @@ class CreateRig(plugin.MayaCreator):
         instance_node = instance.get("instance_node")

         self.log.info("Creating Rig instance set up ...")
+        # TODO: change name (_controls_SET -> _rigs_SET)
         controls = cmds.sets(name=subset_name + "_controls_SET", empty=True)
+        # TODO: change name (_out_SET -> _geo_SET)
         pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True)
-        cmds.sets([controls, pointcache], forceElement=instance_node)
+        skeleton = cmds.sets(
+            name=subset_name + "_skeletonAnim_SET", empty=True)
+        skeleton_mesh = cmds.sets(
+            name=subset_name + "_skeletonMesh_SET", empty=True)
+        cmds.sets([controls, pointcache,
+                   skeleton, skeleton_mesh], forceElement=instance_node)

@@ -1,4 +1,46 @@
 import openpype.hosts.maya.api.plugin
+import maya.cmds as cmds
+
+
+def _process_reference(file_url, name, namespace, options):
+    """Load files by referencing the scene in Maya.
+
+    Args:
+        file_url (str): File path of the objects to be loaded.
+        name (str): Subset name.
+        namespace (str): Namespace.
+        options (dict): Dict storing the parameters.
+
+    Returns:
+        list: List of object nodes.
+    """
+    from openpype.hosts.maya.api.lib import unique_namespace
+    # Get name from asset being loaded
+    # Assuming name is subset name from the animation, we split the number
+    # suffix from the name to ensure the namespace is unique
+    name = name.split("_")[0]
+    ext = file_url.split(".")[-1]
+    namespace = unique_namespace(
+        "{}_".format(name),
+        format="%03d",
+        suffix="_{}".format(ext)
+    )
+
+    attach_to_root = options.get("attach_to_root", True)
+    group_name = options["group_name"]
+
+    # no group shall be created
+    if not attach_to_root:
+        group_name = namespace
+
+    nodes = cmds.file(file_url,
+                      namespace=namespace,
+                      sharedReferenceFile=False,
+                      groupReference=attach_to_root,
+                      groupName=group_name,
+                      reference=True,
+                      returnNewNodes=True)
+    return nodes


 class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

@@ -16,44 +58,42 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

     def process_reference(self, context, name, namespace, options):

-        import maya.cmds as cmds
-        from openpype.hosts.maya.api.lib import unique_namespace
-
         cmds.loadPlugin("AbcImport.mll", quiet=True)
         # Prevent identical alembic nodes from being shared
         # Create unique namespace for the cameras

-        # Get name from asset being loaded
-        # Assuming name is subset name from the animation, we split the number
-        # suffix from the name to ensure the namespace is unique
-        name = name.split("_")[0]
-        namespace = unique_namespace(
-            "{}_".format(name),
-            format="%03d",
-            suffix="_abc"
-        )
-
-        attach_to_root = options.get("attach_to_root", True)
-        group_name = options["group_name"]
-
-        # no group shall be created
-        if not attach_to_root:
-            group_name = namespace
-
         # hero_001 (abc)
         # asset_counter{optional}
         path = self.filepath_from_context(context)
         file_url = self.prepare_root_value(path,
                                            context["project"]["name"])
-        nodes = cmds.file(file_url,
-                          namespace=namespace,
-                          sharedReferenceFile=False,
-                          groupReference=attach_to_root,
-                          groupName=group_name,
-                          reference=True,
-                          returnNewNodes=True)
-
+        nodes = _process_reference(file_url, name, namespace, options)
         # load colorbleed ID attribute
         self[:] = nodes

         return nodes


+class FbxLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
+    """Loader to reference an Fbx files"""
+
+    families = ["animation",
+                "camera"]
+    representations = ["fbx"]
+
+    label = "Reference animation"
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+
+    def process_reference(self, context, name, namespace, options):
+
+        cmds.loadPlugin("fbx4maya.mll", quiet=True)
+
+        path = self.filepath_from_context(context)
+        file_url = self.prepare_root_value(path,
+                                           context["project"]["name"])
+
+        nodes = _process_reference(file_url, name, namespace, options)
+
+        self[:] = nodes
+
+        return nodes
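The `unique_namespace` call above yields names following a `{name}_{counter}{suffix}` pattern (e.g. `hero_001_abc`). A rough sketch of that naming scheme outside Maya, with the `existing` set as a hypothetical stand-in for namespaces already present in the scene:

```python
def next_unique_namespace(name, ext, existing):
    # Emulates unique_namespace("{}_".format(name), format="%03d",
    # suffix="_{}".format(ext)): bump the counter until a name is free.
    counter = 1
    while True:
        candidate = "{}_{:03d}_{}".format(name, counter, ext)
        if candidate not in existing:
            return candidate
        counter += 1


existing = {"hero_001_abc"}
print(next_unique_namespace("hero", "abc", existing))  # hero_002_abc
```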
@@ -1,12 +1,6 @@
 from maya import cmds, mel

-from openpype.client import (
-    get_asset_by_id,
-    get_subset_by_id,
-    get_version_by_id,
-)
 from openpype.pipeline import (
-    get_current_project_name,
     load,
     get_representation_path,
 )

@@ -18,7 +12,7 @@ class AudioLoader(load.LoaderPlugin):
     """Specific loader of audio."""

     families = ["audio"]
-    label = "Import audio"
+    label = "Load audio"
     representations = ["wav"]
     icon = "volume-up"
     color = "orange"

@@ -27,10 +21,10 @@ class AudioLoader(load.LoaderPlugin):

         start_frame = cmds.playbackOptions(query=True, min=True)
         sound_node = cmds.sound(
-            file=context["representation"]["data"]["path"], offset=start_frame
+            file=self.filepath_from_context(context), offset=start_frame
         )
         cmds.timeControl(
-            mel.eval("$tmpVar=$gPlayBackSlider"),
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
             edit=True,
             sound=sound_node,
             displaySound=True
@@ -59,32 +53,50 @@ class AudioLoader(load.LoaderPlugin):
         assert audio_nodes is not None, "Audio node not found."
         audio_node = audio_nodes[0]

+        current_sound = cmds.timeControl(
+            mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
+            query=True,
+            sound=True
+        )
+        activate_sound = current_sound == audio_node
+
         path = get_representation_path(representation)
-        cmds.setAttr("{}.filename".format(audio_node), path, type="string")
+        cmds.sound(
+            audio_node,
+            edit=True,
+            file=path
+        )
+
+        # The source start + end does not automatically update itself to the
+        # length of the new audio file, even though maya does do that when
+        # creating a new audio node. So to update we compute it manually.
+        # This would however override any source start and source end a user
+        # might have done on the original audio node after load.
+        audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node))
+        audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node))
+        duration_in_seconds = audio_frame_count / audio_sample_rate
+        fps = mel.eval('currentTimeUnitToFPS()')  # workfile FPS
+        source_start = 0
+        source_end = (duration_in_seconds * fps)
+        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
+        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
+
+        if activate_sound:
+            # maya by default deactivates it from timeline on file change
+            cmds.timeControl(
+                mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
+                edit=True,
+                sound=audio_node,
+                displaySound=True
+            )

         cmds.setAttr(
             container["objectName"] + ".representation",
             str(representation["_id"]),
             type="string"
         )

-        # Set frame range.
-        project_name = get_current_project_name()
-        version = get_version_by_id(
-            project_name, representation["parent"], fields=["parent"]
-        )
-        subset = get_subset_by_id(
-            project_name, version["parent"], fields=["parent"]
-        )
-        asset = get_asset_by_id(
-            project_name, subset["parent"], fields=["parent"]
-        )
-
-        source_start = 1 - asset["data"]["frameStart"]
-        source_end = asset["data"]["frameEnd"]
-
-        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
-        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
-
     def switch(self, container, representation):
         self.update(container, representation)
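The source range computation above is plain sample arithmetic: the duration in seconds comes from the audio node's frame (sample) count divided by the sample rate, and multiplying by the workfile FPS converts it to timeline frames. Worked through with hypothetical numbers:

```python
# Hypothetical values standing in for the Maya attribute queries above.
audio_frame_count = 132300.0   # samples in the wav file
audio_sample_rate = 44100.0    # samples per second
fps = 25.0                     # workfile FPS from currentTimeUnitToFPS()

duration_in_seconds = audio_frame_count / audio_sample_rate  # 3.0 s
source_end = duration_in_seconds * fps                       # 75.0 frames
assert source_end == 75.0
```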
@@ -101,7 +101,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                 "camerarig",
                 "staticMesh",
                 "skeletalMesh",
-                "mvLook"]
+                "mvLook",
+                "matchmove"]

     representations = ["ma", "abc", "fbx", "mb"]


36 openpype/hosts/maya/plugins/publish/collect_fbx_animation.py Normal file
@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
from maya import cmds  # noqa
import pyblish.api
from openpype.pipeline import OptionalPyblishPluginMixin


class CollectFbxAnimation(pyblish.api.InstancePlugin,
                          OptionalPyblishPluginMixin):
    """Collect Animated Rig Data for FBX Extractor."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Fbx Animation"
    hosts = ["maya"]
    families = ["animation"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        skeleton_sets = [
            i for i in instance
            if i.endswith("skeletonAnim_SET")
        ]
        if not skeleton_sets:
            return

        instance.data["families"].append("animation.fbx")
        instance.data["animated_skeleton"] = []
        for skeleton_set in skeleton_sets:
            skeleton_content = cmds.sets(skeleton_set, query=True)
            self.log.debug(
                "Collected animated skeleton data: {}".format(
                    skeleton_content
                ))
            if skeleton_content:
                instance.data["animated_skeleton"] = skeleton_content
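The collector above keys purely off the object-set name suffix. A minimal sketch of that filter with hypothetical set names:

```python
def find_skeleton_sets(instance_nodes):
    # Same test the collector uses: keep members whose name ends
    # with the skeleton animation set suffix.
    return [n for n in instance_nodes if n.endswith("skeletonAnim_SET")]


nodes = ["rigMain_controls_SET", "rigMain_skeletonAnim_SET", "pCube1"]
assert find_skeleton_sets(nodes) == ["rigMain_skeletonAnim_SET"]
```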
@@ -22,7 +22,8 @@ class CollectRigSets(pyblish.api.InstancePlugin):
     def process(self, instance):

         # Find required sets by suffix
-        searching = {"controls_SET", "out_SET"}
+        searching = {"controls_SET", "out_SET",
+                     "skeletonAnim_SET", "skeletonMesh_SET"}
         found = {}
         for node in cmds.ls(instance, exactType="objectSet"):
             for suffix in searching:


44 openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py Normal file
@@ -0,0 +1,44 @@
# -*- coding: utf-8 -*-
from maya import cmds  # noqa
import pyblish.api


class CollectSkeletonMesh(pyblish.api.InstancePlugin):
    """Collect Static Rig Data for FBX Extractor."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Skeleton Mesh"
    hosts = ["maya"]
    families = ["rig"]

    def process(self, instance):
        skeleton_mesh_set = instance.data["rig_sets"].get(
            "skeletonMesh_SET")
        if not skeleton_mesh_set:
            self.log.debug(
                "No skeletonMesh_SET found. "
                "Skipping collecting of skeleton mesh..."
            )
            return

        # Store current frame to ensure single frame export
        frame = cmds.currentTime(query=True)
        instance.data["frameStart"] = frame
        instance.data["frameEnd"] = frame

        instance.data["skeleton_mesh"] = []

        skeleton_mesh_content = cmds.sets(
            skeleton_mesh_set, query=True) or []
        if not skeleton_mesh_content:
            self.log.debug(
                "No object nodes in skeletonMesh_SET. "
                "Skipping collecting of skeleton mesh..."
            )
            return
        instance.data["families"] += ["rig.fbx"]
        instance.data["skeleton_mesh"] = skeleton_mesh_content
        self.log.debug(
            "Collected skeletonMesh_SET members: {}".format(
                skeleton_mesh_content
            ))
@@ -6,17 +6,21 @@ from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib


-class ExtractCameraAlembic(publish.Extractor):
+class ExtractCameraAlembic(publish.Extractor,
+                           publish.OptionalPyblishPluginMixin):
     """Extract a Camera as Alembic.

-    The cameras gets baked to world space by default. Only when the instance's
+    The camera gets baked to world space by default. Only when the instance's
     `bakeToWorldSpace` is set to False it will include its full hierarchy.

+    'camera' family expects only single camera, if multiple cameras are needed,
+    'matchmove' is better choice.
+
     """

-    label = "Camera (Alembic)"
+    label = "Extract Camera (Alembic)"
     hosts = ["maya"]
-    families = ["camera"]
+    families = ["camera", "matchmove"]
     bake_attributes = []

     def process(self, instance):

@@ -35,10 +39,11 @@ class ExtractCameraAlembic(publish.Extractor):

         # validate required settings
         assert isinstance(step, float), "Step must be a float value"
-        camera = cameras[0]

         # Define extract output file path
         dir_path = self.staging_dir(instance)
+        if not os.path.exists(dir_path):
+            os.makedirs(dir_path)
         filename = "{0}.abc".format(instance.name)
         path = os.path.join(dir_path, filename)

@@ -64,9 +69,10 @@ class ExtractCameraAlembic(publish.Extractor):

         # if baked, drop the camera hierarchy to maintain
         # clean output and backwards compatibility
-        camera_root = cmds.listRelatives(
-            camera, parent=True, fullPath=True)[0]
-        job_str += ' -root {0}'.format(camera_root)
+        camera_roots = cmds.listRelatives(
+            cameras, parent=True, fullPath=True)
+        for camera_root in camera_roots:
+            job_str += ' -root {0}'.format(camera_root)

         for member in members:
             descendants = cmds.listRelatives(member,
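The change above appends one `-root` flag per camera transform instead of a single hard-coded root, so the AbcExport job string covers every camera in the instance. A sketch of the resulting flags with hypothetical DAG paths:

```python
def build_root_flags(camera_roots):
    # Mirrors the loop above: one -root flag per camera parent transform.
    job_str = ""
    for camera_root in camera_roots:
        job_str += ' -root {0}'.format(camera_root)
    return job_str


roots = ["|shotCam_grp|shotCam", "|witnessCam_grp|witnessCam"]
print(build_root_flags(roots))
#  -root |shotCam_grp|shotCam -root |witnessCam_grp|witnessCam
```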
@@ -2,11 +2,15 @@
 """Extract camera as Maya Scene."""
 import os
 import itertools
+import contextlib

 from maya import cmds

 from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib
+from openpype.lib import (
+    BoolDef
+)


 def massage_ma_file(path):

@@ -78,7 +82,8 @@ def unlock(plug):
     cmds.disconnectAttr(source, destination)


-class ExtractCameraMayaScene(publish.Extractor):
+class ExtractCameraMayaScene(publish.Extractor,
+                             publish.OptionalPyblishPluginMixin):
     """Extract a Camera as Maya Scene.

     This will create a duplicate of the camera that will be baked *with*

@@ -88,17 +93,22 @@ class ExtractCameraMayaScene(publish.Extractor):
     The cameras gets baked to world space by default. Only when the instance's
     `bakeToWorldSpace` is set to False it will include its full hierarchy.

+    'camera' family expects only single camera, if multiple cameras are needed,
+    'matchmove' is better choice.
+
     Note:
         The extracted Maya ascii file gets "massaged" removing the uuid values
         so they are valid for older versions of Fusion (e.g. 6.4)

     """

-    label = "Camera (Maya Scene)"
+    label = "Extract Camera (Maya Scene)"
     hosts = ["maya"]
-    families = ["camera"]
+    families = ["camera", "matchmove"]
     scene_type = "ma"

+    keep_image_planes = True
+
     def process(self, instance):
         """Plugin entry point."""
         # get settings
@@ -131,15 +141,15 @@ class ExtractCameraMayaScene(publish.Extractor):
                      "bake to world space is ignored...")

         # get cameras
-        members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True,
-                          long=True, dag=True)
-        cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
-                          dag=True, type="camera")
+        members = set(cmds.ls(instance.data['setMembers'], leaf=True,
+                              shapes=True, long=True, dag=True))
+        cameras = set(cmds.ls(members, leaf=True, shapes=True, long=True,
+                              dag=True, type="camera"))

         # validate required settings
         assert isinstance(step, float), "Step must be a float value"
-        camera = cameras[0]
-        transform = cmds.listRelatives(camera, parent=True, fullPath=True)
+        transforms = cmds.listRelatives(list(cameras),
+                                        parent=True, fullPath=True)

         # Define extract output file path
         dir_path = self.staging_dir(instance)

@@ -151,23 +161,21 @@ class ExtractCameraMayaScene(publish.Extractor):
         with lib.evaluation("off"):
             with lib.suspended_refresh():
                 if bake_to_worldspace:
                     self.log.debug(
                         "Performing camera bakes: {}".format(transform))
                     baked = lib.bake_to_world_space(
-                        transform,
+                        transforms,
                         frame_range=[start, end],
                         step=step
                     )
-                    baked_camera_shapes = cmds.ls(baked,
-                                                  type="camera",
-                                                  dag=True,
-                                                  shapes=True,
-                                                  long=True)
+                    baked_camera_shapes = set(cmds.ls(baked,
+                                                      type="camera",
+                                                      dag=True,
+                                                      shapes=True,
+                                                      long=True))

-                    members = members + baked_camera_shapes
-                    members.remove(camera)
+                    members.update(baked_camera_shapes)
+                    members.difference_update(cameras)
                 else:
-                    baked_camera_shapes = cmds.ls(cameras,
+                    baked_camera_shapes = cmds.ls(list(cameras),
                                                   type="camera",
                                                   dag=True,
                                                   shapes=True,
@@ -186,19 +194,28 @@ class ExtractCameraMayaScene(publish.Extractor):
                     unlock(plug)
                     cmds.setAttr(plug, value)

-            self.log.debug("Performing extraction..")
-            cmds.select(cmds.ls(members, dag=True,
-                                shapes=True, long=True), noExpand=True)
-            cmds.file(path,
-                      force=True,
-                      typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
-                      exportSelected=True,
-                      preserveReferences=False,
-                      constructionHistory=False,
-                      channels=True,  # allow animation
-                      constraints=False,
-                      shader=False,
-                      expressions=False)
+            attr_values = self.get_attr_values_from_data(
+                instance.data)
+            keep_image_planes = attr_values.get("keep_image_planes")
+
+            with transfer_image_planes(sorted(cameras),
+                                       sorted(baked_camera_shapes),
+                                       keep_image_planes):
+
+                self.log.info("Performing extraction..")
+                cmds.select(cmds.ls(list(members), dag=True,
+                                    shapes=True, long=True),
+                            noExpand=True)
+                cmds.file(path,
+                          force=True,
+                          typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501
+                          exportSelected=True,
+                          preserveReferences=False,
+                          constructionHistory=False,
+                          channels=True,  # allow animation
+                          constraints=False,
+                          shader=False,
+                          expressions=False)

             # Delete the baked hierarchy
             if bake_to_worldspace:
@@ -219,3 +236,62 @@ class ExtractCameraMayaScene(publish.Extractor):

         self.log.debug("Extracted instance '{0}' to: {1}".format(
             instance.name, path))

+    @classmethod
+    def get_attribute_defs(cls):
+        defs = super(ExtractCameraMayaScene, cls).get_attribute_defs()
+
+        defs.extend([
+            BoolDef("keep_image_planes",
+                    label="Keep Image Planes",
+                    tooltip="Preserving connected image planes on camera",
+                    default=cls.keep_image_planes),
+
+        ])
+
+        return defs
+
+
+@contextlib.contextmanager
+def transfer_image_planes(source_cameras, target_cameras,
+                          keep_input_connections):
+    """Reattaches image planes to baked or original cameras.
+
+    Baked cameras are duplicates of original ones.
+    This attaches it to duplicated camera properly and after
+    export it reattaches it back to original to keep image plane in workfile.
+    """
+    originals = {}
+    try:
+        for source_camera, target_camera in zip(source_cameras,
+                                                target_cameras):
+            image_planes = cmds.listConnections(source_camera,
+                                                type="imagePlane") or []
+
+            # Split of the parent path they are attached - we want
+            # the image plane node name.
+            # TODO: Does this still mean the image plane name is unique?
+            image_planes = [x.split("->", 1)[1] for x in image_planes]
+
+            if not image_planes:
+                continue
+
+            originals[source_camera] = []
+            for image_plane in image_planes:
+                if keep_input_connections:
+                    if source_camera == target_camera:
+                        continue
+                    _attach_image_plane(target_camera, image_plane)
+                else:  # explicitly detaching image planes
+                    cmds.imagePlane(image_plane, edit=True, detach=True)
+                originals[source_camera].append(image_plane)
+        yield
+    finally:
+        for camera, image_planes in originals.items():
+            for image_plane in image_planes:
+                _attach_image_plane(camera, image_plane)
+
+
+def _attach_image_plane(camera, image_plane):
+    cmds.imagePlane(image_plane, edit=True, detach=True)
+    cmds.imagePlane(image_plane, edit=True, camera=camera)
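The context manager above follows a common remember-and-restore pattern: record the original camera/image-plane pairs, hand them over for the export, then put everything back in `finally`. A self-contained sketch of that pattern (names hypothetical, no Maya required):

```python
import contextlib


@contextlib.contextmanager
def restore_after(attachments):
    # Sketch of the bookkeeping above: remember the original owners,
    # let the caller reassign items for the duration, then restore.
    originals = dict(attachments)
    try:
        yield
    finally:
        attachments.clear()
        attachments.update(originals)


planes = {"renderCamShape": "imagePlane1"}
with restore_after(planes):
    planes["renderCamShape"] = "imagePlane_baked"  # used only during export
assert planes == {"renderCamShape": "imagePlane1"}
```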
65 openpype/hosts/maya/plugins/publish/extract_fbx_animation.py Normal file

@@ -0,0 +1,65 @@
# -*- coding: utf-8 -*-
import os

from maya import cmds  # noqa
import pyblish.api

from openpype.pipeline import publish
from openpype.hosts.maya.api import fbx
from openpype.hosts.maya.api.lib import (
    namespaced, get_namespace, strip_namespace
)


class ExtractFBXAnimation(publish.Extractor):
    """Extract Rig in FBX format from Maya.

    This extracts the rig in fbx with the constraints
    and referenced asset content included.
    This also optionally extract animated rig in fbx with
    geometries included.

    """
    order = pyblish.api.ExtractorOrder
    label = "Extract Animation (FBX)"
    hosts = ["maya"]
    families = ["animation.fbx"]

    def process(self, instance):
        # Define output path
        staging_dir = self.staging_dir(instance)
        filename = "{0}.fbx".format(instance.name)
        path = os.path.join(staging_dir, filename)
        path = path.replace("\\", "/")

        fbx_exporter = fbx.FBXExtractor(log=self.log)
        out_members = instance.data.get("animated_skeleton", [])
        # Export
        instance.data["constraints"] = True
        instance.data["skeletonDefinitions"] = True
        instance.data["referencedAssetsContent"] = True
        fbx_exporter.set_options_from_instance(instance)
        # Export from the rig's namespace so that the exported
        # FBX does not include the namespace but preserves the node
        # names as existing in the rig workfile
        namespace = get_namespace(out_members[0])
        relative_out_members = [
            strip_namespace(node, namespace) for node in out_members
        ]
        with namespaced(
            ":" + namespace,
            new=False,
            relative_names=True
        ) as namespace:
            fbx_exporter.export(relative_out_members, path)

        representations = instance.data.setdefault("representations", [])
        representations.append({
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": staging_dir
        })

        self.log.debug(
            "Extracted FBX animation to: {0}".format(path))
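The namespace handling above means a node such as `rigMain_01:root_JNT` is exported as `root_JNT`. A minimal sketch of the stripping step, assuming simple colon-delimited namespaces (the helper below is hypothetical, not openpype's actual `strip_namespace`):

```python
def strip_namespace_sketch(node, namespace):
    # Drop the leading "namespace:" prefix from each DAG path segment.
    prefix = namespace + ":"
    return "|".join(
        part[len(prefix):] if part.startswith(prefix) else part
        for part in node.split("|")
    )


assert strip_namespace_sketch("rigMain_01:root_JNT", "rigMain_01") == "root_JNT"
assert strip_namespace_sketch(
    "|rigMain_01:rig_GRP|rigMain_01:root_JNT", "rigMain_01"
) == "|rig_GRP|root_JNT"
```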
@@ -1,5 +1,6 @@
 # -*- coding: utf-8 -*-
 """Maya look extractor."""
+import sys
 from abc import ABCMeta, abstractmethod
 from collections import OrderedDict
 import contextlib

@@ -176,6 +177,24 @@ class MakeRSTexBin(TextureProcessor):
             source
         ]

+        # if color management is enabled we pass color space information
+        if color_management["enabled"]:
+            config_path = color_management["config"]
+            if not os.path.exists(config_path):
+                raise RuntimeError("OCIO config not found at: "
+                                   "{}".format(config_path))
+
+            if not os.getenv("OCIO"):
+                self.log.debug(
+                    "OCIO environment variable not set. "
+                    "Setting it with OCIO config from Maya."
+                )
+                os.environ["OCIO"] = config_path
+
+            self.log.debug("converting colorspace {0} to redshift render "
+                           "colorspace".format(colorspace))
+            subprocess_args.extend(["-cs", colorspace])
+
         hash_args = ["rstex"]
         texture_hash = source_hash(source, *hash_args)

@@ -186,11 +205,11 @@ class MakeRSTexBin(TextureProcessor):

         self.log.debug(" ".join(subprocess_args))
         try:
-            run_subprocess(subprocess_args)
+            run_subprocess(subprocess_args, logger=self.log)
         except Exception:
             self.log.error("Texture .rstexbin conversion failed",
                            exc_info=True)
-            raise
+            six.reraise(*sys.exc_info())

         return TextureResult(
             path=destination,
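For context, `six.reraise(*sys.exc_info())` re-raises the active exception with its original traceback, which a bare `raise` also does on Python 3 — the swap mainly matters for Python 2 workers. A minimal sketch, assuming nothing beyond the standard library and an installed `six`:

```python
import sys

import six


def risky():
    raise ValueError("conversion failed")


try:
    risky()
except Exception:
    # Log here, then re-raise with the original traceback intact.
    try:
        six.reraise(*sys.exc_info())
    except ValueError as exc:
        assert str(exc) == "conversion failed"
```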
@@ -472,7 +491,7 @@ class ExtractLook(publish.Extractor):
             "rstex": MakeRSTexBin
         }.items():
             if instance.data.get(key, False):
-                processor = Processor()
+                processor = Processor(log=self.log)
                 processor.apply_settings(context.data["system_settings"],
                                          context.data["project_settings"])
                 processors.append(processor)


54 openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py Normal file
@@ -0,0 +1,54 @@
# -*- coding: utf-8 -*-
import os

from maya import cmds  # noqa
import pyblish.api

from openpype.pipeline import publish
from openpype.pipeline.publish import OptionalPyblishPluginMixin
from openpype.hosts.maya.api import fbx


class ExtractSkeletonMesh(publish.Extractor,
                          OptionalPyblishPluginMixin):
    """Extract Rig in FBX format from Maya.

    This extracts the rig in fbx with the constraints
    and referenced asset content included.
    This also optionally extract animated rig in fbx with
    geometries included.

    """
    order = pyblish.api.ExtractorOrder
    label = "Extract Skeleton Mesh"
    hosts = ["maya"]
    families = ["rig.fbx"]

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        # Define output path
        staging_dir = self.staging_dir(instance)
        filename = "{0}.fbx".format(instance.name)
        path = os.path.join(staging_dir, filename)

        fbx_exporter = fbx.FBXExtractor(log=self.log)
        out_set = instance.data.get("skeleton_mesh", [])

        instance.data["constraints"] = True
        instance.data["skeletonDefinitions"] = True

        fbx_exporter.set_options_from_instance(instance)

        # Export
        fbx_exporter.export(out_set, path)

        representations = instance.data.setdefault("representations", [])
        representations.append({
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": staging_dir
        })

        self.log.debug("Extract FBX to: {0}".format(path))
@@ -0,0 +1,66 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
    PublishValidationError,
    ValidateContentsOrder
)
from maya import cmds


class ValidateAnimatedReferenceRig(pyblish.api.InstancePlugin):
    """Validate all nodes in skeletonAnim_SET are referenced"""

    order = ValidateContentsOrder
    hosts = ["maya"]
    families = ["animation.fbx"]
    label = "Animated Reference Rig"
    accepted_controllers = ["transform", "locator"]
    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]

    def process(self, instance):
        animated_sets = instance.data.get("animated_skeleton", [])
        if not animated_sets:
            self.log.debug(
                "No nodes found in skeletonAnim_SET. "
                "Skipping validation of animated reference rig..."
            )
            return

        for animated_reference in animated_sets:
            is_referenced = cmds.referenceQuery(
                animated_reference, isNodeReferenced=True)
            if not bool(is_referenced):
                raise PublishValidationError(
                    "All the content in skeletonAnim_SET"
                    " should be referenced nodes"
                )
        invalid_controls = self.validate_controls(animated_sets)
        if invalid_controls:
            raise PublishValidationError(
                "All the content in skeletonAnim_SET"
                " should be transforms"
            )

    @classmethod
    def validate_controls(cls, set_members):
        """Check if the controller set contains only accepted node types.

        Checks if all its set members are within the hierarchy of the root
        Checks if the node types of the set members valid

        Args:
            set_members: list of nodes of the skeleton_anim_set
            hierarchy: list of nodes which reside under the root node

        Returns:
            errors (list)
        """
        # Validate control types
        invalid = []
        set_members = cmds.ls(set_members, long=True)
        for node in set_members:
            if cmds.nodeType(node) not in cls.accepted_controllers:
                invalid.append(node)

        return invalid
@@ -30,18 +30,21 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):
         invalid = list()

-        file_attr = cls.attribute
-        if not file_attr:
+        file_attrs = cls.attribute
+        if not file_attrs:
             return invalid

         # Consider only valid node types to avoid "Unknown object type" warning
         all_node_types = set(cmds.allNodeTypes())
-        node_types = [key for key in file_attr.keys() if key in all_node_types]
+        node_types = [
+            key for key in file_attrs.keys()
+            if key in all_node_types
+        ]

         for node, node_type in pairwise(cmds.ls(type=node_types,
                                                 showType=True)):
             # get the filepath
-            file_attr = "{}.{}".format(node, file_attr[node_type])
+            file_attr = "{}.{}".format(node, file_attrs[node_type])
             filepath = cmds.getAttr(file_attr)

             if filepath and not os.path.exists(filepath):


117 openpype/hosts/maya/plugins/publish/validate_resolution.py Normal file
@@ -0,0 +1,117 @@
import pyblish.api
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from maya import cmds
from openpype.pipeline.publish import RepairAction
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.lib import reset_scene_resolution


class ValidateResolution(pyblish.api.InstancePlugin,
                         OptionalPyblishPluginMixin):
    """Validate the render resolution setting aligned with DB"""

    order = pyblish.api.ValidatorOrder
    families = ["renderlayer"]
    hosts = ["maya"]
    label = "Validate Resolution"
    actions = [RepairAction]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        invalid = self.get_invalid_resolution(instance)
        if invalid:
            raise PublishValidationError(
                "Render resolution is invalid. See log for details.",
                description=(
                    "Wrong render resolution setting. "
                    "Please use repair button to fix it.\n\n"
                    "If current renderer is V-Ray, "
                    "make sure vraySettings node has been created."
                )
            )

    @classmethod
    def get_invalid_resolution(cls, instance):
        width, height, pixelAspect = cls.get_db_resolution(instance)
        current_renderer = instance.data["renderer"]
        layer = instance.data["renderlayer"]
        invalid = False
        if current_renderer == "vray":
            vray_node = "vraySettings"
            if cmds.objExists(vray_node):
                current_width = lib.get_attr_in_layer(
                    "{}.width".format(vray_node), layer=layer)
                current_height = lib.get_attr_in_layer(
                    "{}.height".format(vray_node), layer=layer)
                current_pixelAspect = lib.get_attr_in_layer(
                    "{}.pixelAspect".format(vray_node), layer=layer
                )
            else:
                cls.log.error(
                    "Can't detect VRay resolution because there is no node "
                    "named: `{}`".format(vray_node)
                )
                return True
        else:
            current_width = lib.get_attr_in_layer(
                "defaultResolution.width", layer=layer)
            current_height = lib.get_attr_in_layer(
                "defaultResolution.height", layer=layer)
            current_pixelAspect = lib.get_attr_in_layer(
                "defaultResolution.pixelAspect", layer=layer
            )
        if current_width != width or current_height != height:
            cls.log.error(
                "Render resolution {}x{} does not match "
                "asset resolution {}x{}".format(
                    current_width, current_height,
                    width, height
                ))
            invalid = True
        if current_pixelAspect != pixelAspect:
            cls.log.error(
                "Render pixel aspect {} does not match "
                "asset pixel aspect {}".format(
                    current_pixelAspect, pixelAspect
                ))
            invalid = True
        return invalid

    @classmethod
    def get_db_resolution(cls, instance):
        asset_doc = instance.data["assetEntity"]
        project_doc = instance.context.data["projectEntity"]
        for data in [asset_doc["data"], project_doc["data"]]:
            if (
                "resolutionWidth" in data and
                "resolutionHeight" in data and
                "pixelAspect" in data
            ):
                width = data["resolutionWidth"]
                height = data["resolutionHeight"]
                pixelAspect = data["pixelAspect"]
                return int(width), int(height), float(pixelAspect)

        # Defaults if not found in asset document or project document
        return 1920, 1080, 1.0

    @classmethod
    def repair(cls, instance):
        # Usually without renderlayer overrides the renderlayers
        # all share the same resolution value - so fixing the first
        # will have fixed all the others too. It's much faster to
        # check whether it's invalid first instead of switching
        # into all layers individually
        if not cls.get_invalid_resolution(instance):
            cls.log.debug(
                "Nothing to repair on instance: {}".format(instance)
            )
            return
        layer_node = instance.data['setMembers']
        with lib.renderlayer(layer_node):
            reset_scene_resolution()
@@ -1,6 +1,6 @@
 import pyblish.api
+from maya import cmds

 import openpype.hosts.maya.api.action
 from openpype.pipeline.publish import (
     PublishValidationError,
     ValidateContentsOrder
@@ -20,33 +20,27 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
     label = "Rig Contents"
     hosts = ["maya"]
     families = ["rig"]
     action = [openpype.hosts.maya.api.action.SelectInvalidAction]

     accepted_output = ["mesh", "transform"]
     accepted_controllers = ["transform"]

     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
             raise PublishValidationError(
                 "Invalid rig content. See log for details.")

     @classmethod
     def get_invalid(cls, instance):

         # Find required sets by suffix
-        required = ["controls_SET", "out_SET"]
-        missing = [
-            key for key in required if key not in instance.data["rig_sets"]
-        ]
-        if missing:
-            raise PublishValidationError(
-                "%s is missing sets: %s" % (instance, ", ".join(missing))
-            )
-
-        controls_set = instance.data["rig_sets"]["controls_SET"]
-        out_set = instance.data["rig_sets"]["out_SET"]
-
-        # Ensure there are at least some transforms or dag nodes
-        # in the rig instance
-        set_members = instance.data['setMembers']
-        if not cmds.ls(set_members, type="dagNode", long=True):
-            raise PublishValidationError(
-                "No dag nodes in the pointcache instance. "
-                "(Empty instance?)"
-            )
+        required, rig_sets = cls.get_nodes(instance)
+
+        cls.validate_missing_objectsets(instance, required, rig_sets)
+
+        controls_set = rig_sets["controls_SET"]
+        out_set = rig_sets["out_SET"]

         # Ensure contents in sets and retrieve long path for all objects
         output_content = cmds.sets(out_set, query=True) or []
@@ -61,49 +55,92 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
         )
         controls_content = cmds.ls(controls_content, long=True)

-        # Validate members are inside the hierarchy from root node
-        root_nodes = cmds.ls(set_members, assemblies=True, long=True)
-        hierarchy = cmds.listRelatives(root_nodes, allDescendents=True,
-                                       fullPath=True) + root_nodes
-        hierarchy = set(hierarchy)
-
-        invalid_hierarchy = []
-        for node in output_content:
-            if node not in hierarchy:
-                invalid_hierarchy.append(node)
-        for node in controls_content:
-            if node not in hierarchy:
-                invalid_hierarchy.append(node)
+        rig_content = output_content + controls_content
+        invalid_hierarchy = cls.invalid_hierarchy(instance, rig_content)

         # Additional validations
-        invalid_geometry = self.validate_geometry(output_content)
-        invalid_controls = self.validate_controls(controls_content)
+        invalid_geometry = cls.validate_geometry(output_content)
+        invalid_controls = cls.validate_controls(controls_content)

         error = False
         if invalid_hierarchy:
-            self.log.error("Found nodes which reside outside of root group "
-                           "while they are set up for publishing."
-                           "\n%s" % invalid_hierarchy)
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
             error = True

         if invalid_controls:
-            self.log.error("Only transforms can be part of the controls_SET."
-                           "\n%s" % invalid_controls)
+            cls.log.error("Only transforms can be part of the controls_SET."
+                          "\n%s" % invalid_controls)
             error = True

         if invalid_geometry:
-            self.log.error("Only meshes can be part of the out_SET\n%s"
-                           % invalid_geometry)
+            cls.log.error("Only meshes can be part of the out_SET\n%s"
+                          % invalid_geometry)
             error = True

         if error:
             return invalid_hierarchy + invalid_controls + invalid_geometry

+    @classmethod
+    def validate_missing_objectsets(cls, instance,
+                                    required_objsets, rig_sets):
+        """Validate missing objectsets in rig sets
+
+        Args:
+            instance (str): instance
+            required_objsets (list): list of objectset names
+            rig_sets (list): list of rig sets
+
+        Raises:
+            PublishValidationError: When the error is raised, it will show
+                which instance has the missing object sets
+        """
+        missing = [
+            key for key in required_objsets if key not in rig_sets
+        ]
+        if missing:
+            raise PublishValidationError(
+                "%s is missing sets: %s" % (instance, ", ".join(missing))
+            )

-    def validate_geometry(self, set_members):
-        """Check if the out set passes the validations
+    @classmethod
+    def invalid_hierarchy(cls, instance, content):
+        """
+        Check if all rig set members are within the hierarchy of the rig root
+
+        Args:
+            instance (str): instance
+            content (list): list of content from rig sets
+
+        Raises:
+            PublishValidationError: It means no dag nodes in
+                the rig instance
+
+        Returns:
+            list: invalid hierarchy
+        """
+        # Ensure there are at least some transforms or dag nodes
+        # in the rig instance
+        set_members = instance.data['setMembers']
+        if not cmds.ls(set_members, type="dagNode", long=True):
+            raise PublishValidationError(
+                "No dag nodes in the rig instance. "
+                "(Empty instance?)"
+            )
+        # Validate members are inside the hierarchy from root node
+        root_nodes = cmds.ls(set_members, assemblies=True, long=True)
+        hierarchy = cmds.listRelatives(root_nodes, allDescendents=True,
+                                       fullPath=True) + root_nodes
+        hierarchy = set(hierarchy)
+        invalid_hierarchy = []
+        for node in content:
+            if node not in hierarchy:
+                invalid_hierarchy.append(node)
+        return invalid_hierarchy
+
+    @classmethod
+    def validate_geometry(cls, set_members):
+        """
+        Checks if the node types of the set members valid

         Args:
@@ -122,15 +159,13 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
                                      fullPath=True) or []
         all_shapes = cmds.ls(set_members + shapes, long=True, shapes=True)
         for shape in all_shapes:
-            if cmds.nodeType(shape) not in self.accepted_output:
+            if cmds.nodeType(shape) not in cls.accepted_output:
                 invalid.append(shape)

         return invalid

-    def validate_controls(self, set_members):
-        """Check if the controller set passes the validations
-
-        Checks if all its set members are within the hierarchy of the root
+    @classmethod
+    def validate_controls(cls, set_members):
+        """
+        Checks if the control set members are allowed node types.
         Checks if the node types of the set members valid

         Args:
@@ -144,7 +179,80 @@ class ValidateRigContents(pyblish.api.InstancePlugin):
         # Validate control types
         invalid = []
         for node in set_members:
-            if cmds.nodeType(node) not in self.accepted_controllers:
+            if cmds.nodeType(node) not in cls.accepted_controllers:
                 invalid.append(node)

         return invalid

+    @classmethod
+    def get_nodes(cls, instance):
+        """Get the target objectsets and rig sets nodes
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            tuple: 2-tuple of list of objectsets,
+                list of rig sets nodes
+        """
+        objectsets = ["controls_SET", "out_SET"]
+        rig_sets_nodes = instance.data.get("rig_sets", [])
+        return objectsets, rig_sets_nodes
+
+
+class ValidateSkeletonRigContents(ValidateRigContents):
+    """Ensure skeleton rigs contains pipeline-critical content
+
+    The rigs optionally contain at least two object sets:
+        "skeletonMesh_SET" - Set of the skinned meshes
+                             with bone hierarchies
+
+    """
+
+    order = ValidateContentsOrder
+    label = "Skeleton Rig Contents"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    @classmethod
+    def get_invalid(cls, instance):
+        objectsets, skeleton_mesh_nodes = cls.get_nodes(instance)
+        cls.validate_missing_objectsets(
+            instance, objectsets, instance.data["rig_sets"])
+
+        # Ensure contents in sets and retrieve long path for all objects
+        output_content = instance.data.get("skeleton_mesh", [])
+        output_content = cmds.ls(skeleton_mesh_nodes, long=True)
+
+        invalid_hierarchy = cls.invalid_hierarchy(
+            instance, output_content)
+        invalid_geometry = cls.validate_geometry(output_content)
+
+        error = False
+        if invalid_hierarchy:
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
+            error = True
+        if invalid_geometry:
+            cls.log.error("Found nodes which reside outside of root group "
+                          "while they are set up for publishing."
+                          "\n%s" % invalid_hierarchy)
+            error = True
+        if error:
+            return invalid_hierarchy + invalid_geometry
+
+    @classmethod
+    def get_nodes(cls, instance):
+        """Get the target objectsets and rig sets nodes
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            tuple: 2-tuple of list of objectsets,
+                list of rig sets nodes
+        """
+        objectsets = ["skeletonMesh_SET"]
+        skeleton_mesh_nodes = instance.data.get("skeleton_mesh", [])
+        return objectsets, skeleton_mesh_nodes
@@ -59,7 +59,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
     @classmethod
     def get_invalid(cls, instance):

-        controls_set = instance.data["rig_sets"].get("controls_SET")
+        controls_set = cls.get_node(instance)
         if not controls_set:
             cls.log.error(
                 "Must have 'controls_SET' in rig instance"

@@ -189,7 +189,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
     @classmethod
     def repair(cls, instance):

-        controls_set = instance.data["rig_sets"].get("controls_SET")
+        controls_set = cls.get_node(instance)
         if not controls_set:
             cls.log.error(
                 "Unable to repair because no 'controls_SET' found in rig "
@@ -228,3 +228,64 @@ class ValidateRigControllers(pyblish.api.InstancePlugin):
             default = cls.CONTROLLER_DEFAULTS[attr]
             cls.log.info("Setting %s to %s" % (plug, default))
             cmds.setAttr(plug, default)

+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from controls_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from controls_SET
+        """
+        return instance.data["rig_sets"].get("controls_SET")
+
+
+class ValidateSkeletonRigControllers(ValidateRigControllers):
+    """Validate rig controller for skeletonAnim_SET
+
+    Controls must have the transformation attributes on their default
+    values of translate zero, rotate zero and scale one when they are
+    unlocked attributes.
+
+    Unlocked keyable attributes may not have any incoming connections. If
+    these connections are required for the rig then lock the attributes.
+
+    The visibility attribute must be locked.
+
+    Note that `repair` will:
+        - Lock all visibility attributes
+        - Reset all default values for translate, rotate, scale
+        - Break all incoming connections to keyable attributes
+
+    """
+    order = ValidateContentsOrder + 0.05
+    label = "Skeleton Rig Controllers"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    # Default controller values
+    CONTROLLER_DEFAULTS = {
+        "translateX": 0,
+        "translateY": 0,
+        "translateZ": 0,
+        "rotateX": 0,
+        "rotateY": 0,
+        "rotateZ": 0,
+        "scaleX": 1,
+        "scaleY": 1,
+        "scaleZ": 1
+    }
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get("skeletonMesh_SET")
@@ -46,7 +46,7 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):
         """Get all nodes which do not match the criteria"""

-        out_set = instance.data["rig_sets"].get("out_SET")
+        out_set = cls.get_node(instance)
         if not out_set:
             return []
@@ -85,3 +85,45 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin):
                 continue

             lib.set_id(node, sibling_id, overwrite=True)

+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from out_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from out_SET
+        """
+        return instance.data["rig_sets"].get("out_SET")
+
+
+class ValidateSkeletonRigOutSetNodeIds(ValidateRigOutSetNodeIds):
+    """Validate if deformed shapes have related IDs to the original shapes
+    from skeleton set.
+
+    When a deformer is applied in the scene on a referenced mesh that already
+    had deformers then Maya will create a new shape node for the mesh that
+    does not have the original id. This validator checks whether the ids are
+    valid on all the shape nodes in the instance.
+
+    """
+
+    order = ValidateContentsOrder
+    families = ["rig.fbx"]
+    hosts = ['maya']
+    label = 'Skeleton Rig Out Set Node Ids'
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get(
+            "skeletonMesh_SET")
@@ -47,7 +47,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
         invalid = {}

         if compute:
-            out_set = instance.data["rig_sets"].get("out_SET")
+            out_set = cls.get_node(instance)
             if not out_set:
                 instance.data["mismatched_output_ids"] = invalid
                 return invalid
@@ -115,3 +115,40 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
                 "Multiple matched ids found. Please repair manually: "
                 "{}".format(multiple_ids_match)
             )

+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from out_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from out_SET
+        """
+        return instance.data["rig_sets"].get("out_SET")
+
+
+class ValidateSkeletonRigOutputIds(ValidateRigOutputIds):
+    """Validate rig output ids from the skeleton sets.
+
+    Ids must share the same id as similarly named nodes in the scene. This is
+    to ensure the id from the model is preserved through animation.
+
+    """
+    order = ValidateContentsOrder + 0.05
+    label = "Skeleton Rig Output Ids"
+    hosts = ["maya"]
+    families = ["rig.fbx"]
+
+    @classmethod
+    def get_node(cls, instance):
+        """Get target object nodes from skeletonMesh_SET
+
+        Args:
+            instance (str): instance
+
+        Returns:
+            list: list of object nodes from skeletonMesh_SET
+        """
+        return instance.data["rig_sets"].get("skeletonMesh_SET")
@@ -0,0 +1,40 @@
# -*- coding: utf-8 -*-
"""Plugin for validating naming conventions."""
from maya import cmds

import pyblish.api

from openpype.pipeline.publish import (
    ValidateContentsOrder,
    OptionalPyblishPluginMixin,
    PublishValidationError
)


class ValidateSkeletonTopGroupHierarchy(pyblish.api.InstancePlugin,
                                        OptionalPyblishPluginMixin):
    """Validates top group hierarchy in the SETs
    Make sure the object inside the SETs are always top
    group of the hierarchy

    """
    order = ValidateContentsOrder + 0.05
    label = "Skeleton Rig Top Group Hierarchy"
    families = ["rig.fbx"]

    def process(self, instance):
        invalid = []
        skeleton_mesh_data = instance.data("skeleton_mesh", [])
        if skeleton_mesh_data:
            invalid = self.get_top_hierarchy(skeleton_mesh_data)
            if invalid:
                raise PublishValidationError(
                    "The skeletonMesh_SET includes the object which "
                    "is not at the top hierarchy: {}".format(invalid))

    def get_top_hierarchy(self, targets):
        targets = cmds.ls(targets, long=True)  # ensure long names
        non_top_hierarchy_list = [
            target for target in targets if target.count("|") > 2
        ]
        return non_top_hierarchy_list
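The `count("|") > 2` test above works on Maya long names, where each `|` marks a hierarchy level: a node sitting directly under the top group has exactly two. A quick sketch with hypothetical long names:

```python
def non_top_level(long_names):
    # Same heuristic as above: more than two pipes means the node is
    # nested below the top of the hierarchy.
    return [name for name in long_names if name.count("|") > 2]


names = ["|skeleton_GRP|root_JNT", "|skeleton_GRP|root_JNT|spine_JNT"]
assert non_top_level(names) == ["|skeleton_GRP|root_JNT|spine_JNT"]
```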
@@ -69,11 +69,8 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin,

         invalid = []

-        project_settings = get_project_settings(
-            legacy_io.Session["AVALON_PROJECT"]
-        )
         collision_prefixes = (
-            project_settings
+            instance.context.data["project_settings"]
             ["maya"]
             ["create"]
             ["CreateUnrealStaticMesh"]
@@ -50,6 +50,11 @@ from .utils import (
     get_colorspace_list
 )

+from .actions import (
+    SelectInvalidAction,
+    SelectInstanceNodeAction
+)
+
 __all__ = (
     "file_extensions",
     "has_unsaved_changes",

@@ -92,5 +97,8 @@ __all__ = (
     "create_write_node",

     "colorspace_exists_on_node",
-    "get_colorspace_list"
+    "get_colorspace_list",
+
+    "SelectInvalidAction",
+    "SelectInstanceNodeAction"
 )
@@ -20,33 +20,58 @@ class SelectInvalidAction(pyblish.api.Action):

     def process(self, context, plugin):

         try:
             import nuke
         except ImportError:
             raise ImportError("Current host is not Nuke")

         errored_instances = get_errored_instances_from_context(context,
                                                                plugin=plugin)

         # Get the invalid nodes for the plug-ins
         self.log.info("Finding invalid nodes..")
-        invalid = list()
+        invalid = set()
         for instance in errored_instances:
             invalid_nodes = plugin.get_invalid(instance)

             if invalid_nodes:
                 if isinstance(invalid_nodes, (list, tuple)):
-                    invalid.append(invalid_nodes[0])
+                    invalid.update(invalid_nodes)
                 else:
                     self.log.warning("Plug-in returned to be invalid, "
                                      "but has no selectable nodes.")

-        # Ensure unique (process each node only once)
-        invalid = list(set(invalid))
-
         if invalid:
             self.log.info("Selecting invalid nodes: {}".format(invalid))
             reset_selection()
             select_nodes(invalid)
         else:
             self.log.info("No invalid nodes found.")

+
+class SelectInstanceNodeAction(pyblish.api.Action):
+    """Select instance node for failed plugin."""
+    label = "Select instance node"
+    on = "failed"  # This action is only available on a failed plug-in
+    icon = "mdi.cursor-default-click"
+
+    def process(self, context, plugin):
+
+        # Get the errored instances for the plug-in
+        errored_instances = get_errored_instances_from_context(
+            context, plugin)
+
+        # Get the invalid nodes for the plug-ins
+        self.log.info("Finding instance nodes..")
+        nodes = set()
+        for instance in errored_instances:
+            instance_node = instance.data.get("transientData", {}).get("node")
+            if not instance_node:
+                raise RuntimeError(
+                    "No transientData['node'] found on instance: {}".format(
+                        instance
+                    )
+                )
+            nodes.add(instance_node)
+
+        if nodes:
+            self.log.info("Selecting instance nodes: {}".format(nodes))
+            reset_selection()
+            select_nodes(nodes)
+        else:
+            self.log.info("No instance nodes found.")
@@ -48,20 +48,15 @@ from openpype.pipeline import (
     get_current_asset_name,
 )
 from openpype.pipeline.context_tools import (
     get_current_project_asset,
     get_custom_workfile_template_from_session
 )
-from openpype.pipeline.colorspace import (
-    get_imageio_config
-)
+from openpype.pipeline.colorspace import get_imageio_config
 from openpype.pipeline.workfile import BuildWorkfile
 from . import gizmo_menu
 from .constants import ASSIST

-from .workio import (
-    save_file,
-    open_file
-)
+from .workio import save_file
+from .utils import get_node_outputs

 log = Logger.get_logger(__name__)

@@ -2222,7 +2217,6 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies.
     """
     # replace path with env var if possible
     ocio_path = self._replace_ocio_path_with_env_var(config_data)
-    ocio_path = ocio_path.replace("\\", "/")

     log.info("Setting OCIO config path to: `{}`".format(
         ocio_path))
@@ -2802,16 +2796,28 @@ def find_free_space_to_paste_nodes(


 @contextlib.contextmanager
-def maintained_selection():
+def maintained_selection(exclude_nodes=None):
     """Maintain selection during context

     Maintain selection during context and unselect
     all nodes after context is done.

+    Arguments:
+        exclude_nodes (list[nuke.Node]): list of nodes to be unselected
+            before context is done
+
     Example:
         >>> with maintained_selection():
         ...     node["selected"].setValue(True)
         >>> print(node["selected"].value())
         False
     """
+    if exclude_nodes:
+        for node in exclude_nodes:
+            node["selected"].setValue(False)
+
     previous_selection = nuke.selectedNodes()

     try:
         yield
     finally:
@@ -2823,6 +2829,51 @@ def maintained_selection():
         select_nodes(previous_selection)


+@contextlib.contextmanager
+def swap_node_with_dependency(old_node, new_node):
+    """Swap node with dependency
+
+    Swap node with dependency and reconnect all inputs and outputs.
+    It removes old node.
+
+    Arguments:
+        old_node (nuke.Node): node to be replaced
+        new_node (nuke.Node): node to replace with
+
+    Example:
+        >>> old_node_name = old_node["name"].value()
+        >>> print(old_node_name)
+        old_node_name_01
+        >>> with swap_node_with_dependency(old_node, new_node) as node_name:
+        ...     new_node["name"].setValue(node_name)
+        >>> print(new_node["name"].value())
+        old_node_name_01
+    """
+    # preserve position
+    xpos, ypos = old_node.xpos(), old_node.ypos()
+    # preserve selection after all is done
+    outputs = get_node_outputs(old_node)
+    inputs = old_node.dependencies()
+    node_name = old_node["name"].value()
+
+    try:
+        nuke.delete(old_node)
+
+        yield node_name
+    finally:
+
+        # Reconnect inputs
+        for i, node in enumerate(inputs):
+            new_node.setInput(i, node)
+        # Reconnect outputs
+        if outputs:
+            for n, pipes in outputs.items():
+                for i in pipes:
+                    n.setInput(i, new_node)
+        # return to original position
+        new_node.setXYpos(xpos, ypos)
+
+
 def reset_selection():
     """Deselect all selected nodes"""
     for node in nuke.selectedNodes():
@ -2833,9 +2884,10 @@ def select_nodes(nodes):
|
|||
"""Selects all inputted nodes
|
||||
|
||||
Arguments:
|
||||
nodes (list): nuke nodes to be selected
|
||||
nodes (Union[list, tuple, set]): nuke nodes to be selected
|
||||
"""
|
||||
assert isinstance(nodes, (list, tuple)), "nodes has to be list or tuple"
|
||||
assert isinstance(nodes, (list, tuple, set)), \
|
||||
"nodes has to be list, tuple or set"
|
||||
|
||||
for node in nodes:
|
||||
node["selected"].setValue(True)
|
||||
|
|
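
The two helpers above are designed to compose: `exclude_nodes` drops nodes from the selection before it is captured, which is what an update needs when the excluded node is about to be deleted. A minimal usage sketch under that assumption (the nodes here are made up; this is not code from the diff):

```python
# Hedged sketch: assumes a running Nuke session and that the helper
# above is importable from openpype.hosts.nuke.api.lib.
import nuke

from openpype.hosts.nuke.api.lib import maintained_selection

node_a = nuke.createNode("NoOp")
node_b = nuke.createNode("NoOp")
node_b["selected"].setValue(True)

with maintained_selection(exclude_nodes=[node_b]):
    # node_b was deselected before the previous selection was captured,
    # so it will not be re-selected when the context exits
    node_a["selected"].setValue(True)

print(node_b["selected"].value())  # -> False
```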
@@ -2919,13 +2971,13 @@ def process_workfile_builder():
        "workfile_builder", {})

    # get settings
    createfv_on = workfile_builder.get("create_first_version") or None
    create_fv_on = workfile_builder.get("create_first_version") or None
    builder_on = workfile_builder.get("builder_on_start") or None

    last_workfile_path = os.environ.get("AVALON_LAST_WORKFILE")

    # generate first version in file not existing and feature is enabled
    if createfv_on and not os.path.exists(last_workfile_path):
    if create_fv_on and not os.path.exists(last_workfile_path):
        # get custom template path if any
        custom_template_path = get_custom_workfile_template_from_session(
            project_settings=project_settings

@@ -3423,3 +3475,27 @@ def create_viewer_profile_string(viewer, display=None, path_like=False):
    if path_like:
        return "{}/{}".format(display, viewer)
    return "{} ({})".format(viewer, display)


def get_filenames_without_hash(filename, frame_start, frame_end):
    """Get filenames without frame hash
    i.e. "renderCompositingMain.baking.0001.exr"

    Args:
        filename (str): filename with frame hash
        frame_start (str): start of the frame
        frame_end (str): end of the frame

    Returns:
        list: filename per frame of the sequence
    """
    filenames = []
    for frame in range(int(frame_start), (int(frame_end) + 1)):
        if "#" in filename:
            # use regex to convert #### to {:0>4}
            def replace(match):
                return "{{:0>{}}}".format(len(match.group()))

            filename_without_hashes = re.sub("#+", replace, filename)
            new_filename = filename_without_hashes.format(frame)
            filenames.append(new_filename)
    return filenames
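
To make the padding expansion concrete, here is what the new helper returns for a hashed sequence name (the filename and frame range are illustrative values, not taken from the diff):

```python
# "####" is converted to "{:0>4}" and formatted once per frame.
filenames = get_filenames_without_hash(
    "renderCompositingMain.baking.####.exr", 998, 1001)
print(filenames)
# ['renderCompositingMain.baking.0998.exr',
#  'renderCompositingMain.baking.0999.exr',
#  'renderCompositingMain.baking.1000.exr',
#  'renderCompositingMain.baking.1001.exr']
```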
@@ -21,6 +21,9 @@ from openpype.pipeline import (
    CreatedInstance,
    get_current_task_name
)
from openpype.lib.transcoding import (
    VIDEO_EXTENSIONS
)
from .lib import (
    INSTANCE_DATA_KNOB,
    Knobby,

@@ -35,7 +38,8 @@ from .lib import (
    get_node_data,
    get_view_process_node,
    get_viewer_config_from_string,
    deprecated
    deprecated,
    get_filenames_without_hash
)
from .pipeline import (
    list_instances,

@@ -634,6 +638,10 @@ class ExporterReview(object):
            "frameStart": self.first_frame,
            "frameEnd": self.last_frame,
        })
        if ".{}".format(self.ext) not in VIDEO_EXTENSIONS:
            filenames = get_filenames_without_hash(
                self.file, self.first_frame, self.last_frame)
            repre["files"] = filenames

        if self.multiple_presets:
            repre["outputName"] = self.name

@@ -807,7 +815,20 @@ class ExporterReviewMov(ExporterReview):

        self.log.info("File info was set...")

        self.file = self.fhead + self.name + ".{}".format(self.ext)
        if ".{}".format(self.ext) in VIDEO_EXTENSIONS:
            self.file = "{}{}.{}".format(
                self.fhead, self.name, self.ext)
        else:
            # Output is image (or image sequence)
            # When the file is an image it's possible it
            # has extra information after the `fhead` that
            # we want to preserve, e.g. like frame numbers
            # or frames hashes like `####`
            filename_no_ext = os.path.splitext(
                os.path.basename(self.path_in))[0]
            after_head = filename_no_ext[len(self.fhead):]
            self.file = "{}{}.{}.{}".format(
                self.fhead, self.name, after_head, self.ext)
        self.path = os.path.join(
            self.staging_dir, self.file).replace("\\", "/")

@@ -933,7 +954,6 @@ class ExporterReviewMov(ExporterReview):
        self.log.debug("Path: {}".format(self.path))
        write_node["file"].setValue(str(self.path))
        write_node["file_type"].setValue(str(self.ext))

        # Knobs `meta_codec` and `mov64_codec` are not available on centos.
        # TODO shouldn't this come from settings on outputs?
        try:
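
The `after_head` branch above preserves whatever the source filename carries between the file head and the extension. A small standalone trace of that slicing, with made-up values (the variable names only mimic the `ExporterReviewMov` attributes; nothing here is executed by the plugin itself):

```python
# Illustrative values; fhead/name/ext stand in for instance attributes.
import os

path_in = "/tmp/renderCompositingMain.baking.####.exr"
fhead = "renderCompositingMain."
name = "baking_h264"
ext = "exr"

filename_no_ext = os.path.splitext(os.path.basename(path_in))[0]
# -> "renderCompositingMain.baking.####"
after_head = filename_no_ext[len(fhead):]
# -> "baking.####"
file = "{}{}.{}.{}".format(fhead, name, after_head, ext)
# -> "renderCompositingMain.baking_h264.baking.####.exr"
```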
@@ -17,7 +17,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
        "yeticache",
        "pointcache"]
    representations = ["*"]
    extension = {"*"}
    extensions = {"*"}

    label = "Set frame range"
    order = 11

@@ -27,7 +27,7 @@ class LoadBackdropNodes(load.LoaderPlugin):

    families = ["workfile", "nukenodes"]
    representations = ["*"]
    extension = {"nk"}
    extensions = {"nk"}

    label = "Import Nuke Nodes"
    order = 0

@@ -26,7 +26,7 @@ class AlembicCameraLoader(load.LoaderPlugin):

    families = ["camera"]
    representations = ["*"]
    extension = {"abc"}
    extensions = {"abc"}

    label = "Load Alembic Camera"
    icon = "camera"

@@ -24,7 +24,7 @@ class LoadEffects(load.LoaderPlugin):

    families = ["effect"]
    representations = ["*"]
    extension = {"json"}
    extensions = {"json"}

    label = "Load Effects - nodes"
    order = 0

@@ -25,7 +25,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):

    families = ["effect"]
    representations = ["*"]
    extension = {"json"}
    extensions = {"json"}

    label = "Load Effects - Input Process"
    order = 0

@@ -12,7 +12,8 @@ from openpype.pipeline import (
from openpype.hosts.nuke.api.lib import (
    maintained_selection,
    get_avalon_knob_data,
    set_avalon_knob_data
    set_avalon_knob_data,
    swap_node_with_dependency,
)
from openpype.hosts.nuke.api import (
    containerise,

@@ -26,7 +27,7 @@ class LoadGizmo(load.LoaderPlugin):

    families = ["gizmo"]
    representations = ["*"]
    extension = {"gizmo"}
    extensions = {"nk"}

    label = "Load Gizmo"
    order = 0

@@ -45,7 +46,7 @@ class LoadGizmo(load.LoaderPlugin):
            data (dict): compulsory attribute > not used

        Returns:
            nuke node: containerised nuke node object
            nuke node: containerized nuke node object
        """

        # get main variables

@@ -83,12 +84,12 @@ class LoadGizmo(load.LoaderPlugin):
            # add group from nk
            nuke.nodePaste(file)

            GN = nuke.selectedNode()
            group_node = nuke.selectedNode()

            GN["name"].setValue(object_name)
            group_node["name"].setValue(object_name)

            return containerise(
                node=GN,
                node=group_node,
                name=name,
                namespace=namespace,
                context=context,

@@ -110,7 +111,7 @@ class LoadGizmo(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        GN = nuke.toNode(container['objectName'])
        group_node = nuke.toNode(container['objectName'])

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']

@@ -135,22 +136,24 @@ class LoadGizmo(load.LoaderPlugin):
        for k in add_keys:
            data_imprint.update({k: version_data[k]})

        # capture pipeline metadata
        avalon_data = get_avalon_knob_data(group_node)

        # adding nodes to node graph
        # just in case we are in group lets jump out of it
        nuke.endGroup()

        with maintained_selection():
            xpos = GN.xpos()
            ypos = GN.ypos()
            avalon_data = get_avalon_knob_data(GN)
            nuke.delete(GN)
            # add group from nk
        with maintained_selection([group_node]):
            # insert nuke script to the script
            nuke.nodePaste(file)

            GN = nuke.selectedNode()
            set_avalon_knob_data(GN, avalon_data)
            GN.setXYpos(xpos, ypos)
            GN["name"].setValue(object_name)
            # convert imported to selected node
            new_group_node = nuke.selectedNode()
            # swap nodes with maintained connections
            with swap_node_with_dependency(
                    group_node, new_group_node) as node_name:
                new_group_node["name"].setValue(node_name)
                # set updated pipeline metadata
                set_avalon_knob_data(new_group_node, avalon_data)

        last_version_doc = get_last_version_by_subset_id(
            project_name, version_doc["parent"], fields=["_id"]

@@ -161,11 +164,12 @@ class LoadGizmo(load.LoaderPlugin):
            color_value = self.node_color
        else:
            color_value = "0xd88467ff"
        GN["tile_color"].setValue(int(color_value, 16))

        new_group_node["tile_color"].setValue(int(color_value, 16))

        self.log.info("updated to version: {}".format(version_doc.get("name")))

        return update_container(GN, data_imprint)
        return update_container(new_group_node, data_imprint)

    def switch(self, container, representation):
        self.update(container, representation)
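
Condensed, the rewritten `update()` flow is: capture the pipeline metadata, paste the new version with the old group excluded from the maintained selection, swap the nodes, then re-imprint the metadata. A sketch of that sequence under the same assumptions as the diff (helper imports as at the top of the file; this is a paraphrase, not a verbatim excerpt):

```python
# Hedged sketch of the LoadGizmo.update() flow shown above.
import nuke

from openpype.hosts.nuke.api.lib import (
    maintained_selection,
    swap_node_with_dependency,
    get_avalon_knob_data,
    set_avalon_knob_data,
)


def update_group_from_file(group_node, file_path):
    # capture pipeline metadata before the old node is deleted
    avalon_data = get_avalon_knob_data(group_node)

    # just in case we are in a group, jump out of it
    nuke.endGroup()

    with maintained_selection([group_node]):
        nuke.nodePaste(file_path)
        new_group_node = nuke.selectedNode()
        with swap_node_with_dependency(
                group_node, new_group_node) as node_name:
            new_group_node["name"].setValue(node_name)
            set_avalon_knob_data(new_group_node, avalon_data)
    return new_group_node
```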
@@ -14,7 +14,8 @@ from openpype.hosts.nuke.api.lib import (
    maintained_selection,
    create_backdrop,
    get_avalon_knob_data,
    set_avalon_knob_data
    set_avalon_knob_data,
    swap_node_with_dependency,
)
from openpype.hosts.nuke.api import (
    containerise,

@@ -28,7 +29,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):

    families = ["gizmo"]
    representations = ["*"]
    extension = {"gizmo"}
    extensions = {"nk"}

    label = "Load Gizmo - Input Process"
    order = 0

@@ -47,7 +48,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
            data (dict): compulsory attribute > not used

        Returns:
            nuke node: containerised nuke node object
            nuke node: containerized nuke node object
        """

        # get main variables

@@ -85,17 +86,17 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
            # add group from nk
            nuke.nodePaste(file)

            GN = nuke.selectedNode()
            group_node = nuke.selectedNode()

            GN["name"].setValue(object_name)
            group_node["name"].setValue(object_name)

            # try to place it under Viewer1
            if not self.connect_active_viewer(GN):
                nuke.delete(GN)
            if not self.connect_active_viewer(group_node):
                nuke.delete(group_node)
                return

            return containerise(
                node=GN,
                node=group_node,
                name=name,
                namespace=namespace,
                context=context,

@@ -117,7 +118,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        GN = nuke.toNode(container['objectName'])
        group_node = nuke.toNode(container['objectName'])

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']

@@ -142,22 +143,24 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        for k in add_keys:
            data_imprint.update({k: version_data[k]})

        # capture pipeline metadata
        avalon_data = get_avalon_knob_data(group_node)

        # adding nodes to node graph
        # just in case we are in group lets jump out of it
        nuke.endGroup()

        with maintained_selection():
            xpos = GN.xpos()
            ypos = GN.ypos()
            avalon_data = get_avalon_knob_data(GN)
            nuke.delete(GN)
            # add group from nk
        with maintained_selection([group_node]):
            # insert nuke script to the script
            nuke.nodePaste(file)

            GN = nuke.selectedNode()
            set_avalon_knob_data(GN, avalon_data)
            GN.setXYpos(xpos, ypos)
            GN["name"].setValue(object_name)
            # convert imported to selected node
            new_group_node = nuke.selectedNode()
            # swap nodes with maintained connections
            with swap_node_with_dependency(
                    group_node, new_group_node) as node_name:
                new_group_node["name"].setValue(node_name)
                # set updated pipeline metadata
                set_avalon_knob_data(new_group_node, avalon_data)

        last_version_doc = get_last_version_by_subset_id(
            project_name, version_doc["parent"], fields=["_id"]

@@ -168,11 +171,11 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
            color_value = self.node_color
        else:
            color_value = "0xd88467ff"
        GN["tile_color"].setValue(int(color_value, 16))
        new_group_node["tile_color"].setValue(int(color_value, 16))

        self.log.info("updated to version: {}".format(version_doc.get("name")))

        return update_container(GN, data_imprint)
        return update_container(new_group_node, data_imprint)

    def connect_active_viewer(self, group_node):
        """

@@ -9,7 +9,7 @@ class MatchmoveLoader(load.LoaderPlugin):

    families = ["matchmove"]
    representations = ["*"]
    extension = {"py"}
    extensions = {"py"}

    defaults = ["Camera", "Object"]


@@ -24,7 +24,7 @@ class AlembicModelLoader(load.LoaderPlugin):

    families = ["model", "pointcache", "animation"]
    representations = ["*"]
    extension = {"abc"}
    extensions = {"abc"}

    label = "Load Alembic"
    icon = "cube"

@@ -22,7 +22,7 @@ class LinkAsGroup(load.LoaderPlugin):

    families = ["workfile", "nukenodes"]
    representations = ["*"]
    extension = {"nk"}
    extensions = {"nk"}

    label = "Load Precomp"
    order = 0
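
All of the loader tweaks above follow one convention change: the misspelled `extension` class attribute becomes `extensions`, a set of file extensions, while `representations = ["*"]` stays as a wildcard on representation names. A minimal custom loader written against that convention might look like this (the class itself is illustrative and not part of the diff):

```python
from openpype.pipeline import load


class MyJsonLoader(load.LoaderPlugin):
    """Illustrative loader using the renamed attributes."""

    families = ["effect"]
    representations = ["*"]  # wildcard on representation names
    extensions = {"json"}    # filtering now happens on file extensions

    label = "Load My JSON"
    order = 0

    def load(self, context, name, namespace, data):
        # actual loading logic would go here
        pass
```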
@@ -57,4 +57,4 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
        if version:
            instance.data['version'] = version

        self.log.info("Backdrop instance collected: `{}`".format(instance))
        self.log.debug("Backdrop instance collected: `{}`".format(instance))

@@ -64,4 +64,4 @@ class CollectContextData(pyblish.api.ContextPlugin):
        context.data["scriptData"] = script_data
        context.data.update(script_data)

        self.log.info('Context from Nuke script collected')
        self.log.debug('Context from Nuke script collected')

@@ -43,4 +43,4 @@ class CollectGizmo(pyblish.api.InstancePlugin):
            "frameStart": first_frame,
            "frameEnd": last_frame
        })
        self.log.info("Gizmo instance collected: `{}`".format(instance))
        self.log.debug("Gizmo instance collected: `{}`".format(instance))

@@ -43,4 +43,4 @@ class CollectModel(pyblish.api.InstancePlugin):
            "frameStart": first_frame,
            "frameEnd": last_frame
        })
        self.log.info("Model instance collected: `{}`".format(instance))
        self.log.debug("Model instance collected: `{}`".format(instance))

@@ -2,7 +2,7 @@ import nuke
import pyblish.api


class CollectNukeInstanceData(pyblish.api.InstancePlugin):
class CollectInstanceData(pyblish.api.InstancePlugin):
    """Collect Nuke instance data

    """

@@ -39,7 +39,7 @@ class CollectSlate(pyblish.api.InstancePlugin):
            instance.data["slateNode"] = slate_node
            instance.data["slate"] = True
            instance.data["families"].append("slate")
            self.log.info(
            self.log.debug(
                "Slate node is in node graph: `{}`".format(slate.name()))
            self.log.debug(
                "__ instance.data: `{}`".format(instance.data))

@@ -37,4 +37,6 @@ class CollectWorkfile(pyblish.api.InstancePlugin):
        # adding basic script data
        instance.data.update(script_data)

        self.log.info("Collect script version")
        self.log.debug(
            "Collected current script version: {}".format(current_file)
        )

@@ -56,8 +56,6 @@ class ExtractBackdropNode(publish.Extractor):
            # connect output node
            for n, output in connections_out.items():
                opn = nuke.createNode("Output")
                self.log.info(n.name())
                self.log.info(output.name())
                output.setInput(
                    next((i for i, d in enumerate(output.dependencies())
                          if d.name() in n.name()), 0), opn)

@@ -102,5 +100,5 @@ class ExtractBackdropNode(publish.Extractor):
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '{}' to: {}".format(
        self.log.debug("Extracted instance '{}' to: {}".format(
            instance.name, path))

@@ -36,11 +36,8 @@ class ExtractCamera(publish.Extractor):
        step = 1
        output_range = str(nuke.FrameRange(first_frame, last_frame, step))

        self.log.info("instance.data: `{}`".format(
            pformat(instance.data)))

        rm_nodes = []
        self.log.info("Crating additional nodes")
        self.log.debug("Creating additional nodes for 3D Camera Extractor")
        subset = instance.data["subset"]
        staging_dir = self.staging_dir(instance)


@@ -84,8 +81,6 @@ class ExtractCamera(publish.Extractor):
        for n in rm_nodes:
            nuke.delete(n)

        self.log.info(file_path)

        # create representation data
        if "representations" not in instance.data:
            instance.data["representations"] = []

@@ -112,7 +107,7 @@ class ExtractCamera(publish.Extractor):
            "frameEndHandle": last_frame,
        })

        self.log.info("Extracted instance '{0}' to: {1}".format(
        self.log.debug("Extracted instance '{0}' to: {1}".format(
            instance.name, file_path))


@@ -85,8 +85,5 @@ class ExtractGizmo(publish.Extractor):
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '{}' to: {}".format(
        self.log.debug("Extracted instance '{}' to: {}".format(
            instance.name, path))

        self.log.info("Data {}".format(
            instance.data))

@@ -33,13 +33,13 @@ class ExtractModel(publish.Extractor):
        first_frame = int(nuke.root()["first_frame"].getValue())
        last_frame = int(nuke.root()["last_frame"].getValue())

        self.log.info("instance.data: `{}`".format(
        self.log.debug("instance.data: `{}`".format(
            pformat(instance.data)))

        rm_nodes = []
        model_node = instance.data["transientData"]["node"]

        self.log.info("Crating additional nodes")
        self.log.debug("Creating additional nodes for Extract Model")
        subset = instance.data["subset"]
        staging_dir = self.staging_dir(instance)


@@ -76,7 +76,7 @@ class ExtractModel(publish.Extractor):
        for n in rm_nodes:
            nuke.delete(n)

        self.log.info(file_path)
        self.log.debug("Filepath: {}".format(file_path))

        # create representation data
        if "representations" not in instance.data:

@@ -104,5 +104,5 @@ class ExtractModel(publish.Extractor):
            "frameEndHandle": last_frame,
        })

        self.log.info("Extracted instance '{0}' to: {1}".format(
        self.log.debug("Extracted instance '{0}' to: {1}".format(
            instance.name, file_path))

@@ -27,7 +27,7 @@ class CreateOutputNode(pyblish.api.ContextPlugin):

        if active_node:
            active_node = active_node.pop()
            self.log.info(active_node)
            self.log.debug("Active node: {}".format(active_node))
            active_node['selected'].setValue(True)

        # select only instance render node

@@ -119,7 +119,7 @@ class NukeRenderLocal(publish.Extractor,

        instance.data["representations"].append(repre)

        self.log.info("Extracted instance '{0}' to: {1}".format(
        self.log.debug("Extracted instance '{0}' to: {1}".format(
            instance.name,
            out_dir
        ))

@@ -143,7 +143,7 @@ class NukeRenderLocal(publish.Extractor,
        instance.data["families"] = families

        collections, remainder = clique.assemble(filenames)
        self.log.info('collections: {}'.format(str(collections)))
        self.log.debug('collections: {}'.format(str(collections)))

        if collections:
            collection = collections[0]
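
`clique.assemble` is what groups the rendered filenames into frame collections before the collector logs them. A standalone example of the output shape it returns (filenames are made up and the printed reprs are approximate):

```python
import clique

filenames = [
    "render.0001.exr",
    "render.0002.exr",
    "render.0003.exr",
    "notes.txt",
]
collections, remainder = clique.assemble(filenames)
print(collections)  # roughly: [<Collection "render.%04d.exr [1-3]">]
print(remainder)    # ['notes.txt']
```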
@@ -20,7 +20,7 @@ class ExtractReviewDataLut(publish.Extractor):
    hosts = ["nuke"]

    def process(self, instance):
        self.log.info("Creating staging dir...")
        self.log.debug("Creating staging dir...")
        if "representations" in instance.data:
            staging_dir = instance.data[
                "representations"][0]["stagingDir"].replace("\\", "/")

@@ -33,7 +33,7 @@ class ExtractReviewDataLut(publish.Extractor):
            staging_dir = os.path.normpath(os.path.dirname(render_path))
            instance.data["stagingDir"] = staging_dir

        self.log.info(
        self.log.debug(
            "StagingDir `{0}`...".format(instance.data["stagingDir"]))

        # generate data

@@ -8,15 +8,16 @@ from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import maintained_selection


class ExtractReviewDataMov(publish.Extractor):
    """Extracts movie and thumbnail with baked in luts
class ExtractReviewIntermediates(publish.Extractor):
    """Extracting intermediate videos or sequences with
    thumbnail for transcoding.

    must be run after extract_render_local.py

    """

    order = pyblish.api.ExtractorOrder + 0.01
    label = "Extract Review Data Mov"
    label = "Extract Review Intermediates"

    families = ["review"]
    hosts = ["nuke"]

@@ -25,6 +26,24 @@ class ExtractReviewDataMov(publish.Extractor):
    viewer_lut_raw = None
    outputs = {}

    @classmethod
    def apply_settings(cls, project_settings):
        """Apply the settings from the deprecated
        ExtractReviewDataMov plugin for backwards compatibility
        """
        nuke_publish = project_settings["nuke"]["publish"]
        deprecated_setting = nuke_publish["ExtractReviewDataMov"]
        current_setting = nuke_publish.get("ExtractReviewIntermediates")
        if deprecated_setting["enabled"]:
            # Use deprecated settings if they are still enabled
            cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"]
            cls.outputs = deprecated_setting["outputs"]
        elif current_setting is None:
            pass
        elif current_setting["enabled"]:
            cls.viewer_lut_raw = current_setting["viewer_lut_raw"]
            cls.outputs = current_setting["outputs"]

    def process(self, instance):
        families = set(instance.data["families"])


@@ -33,7 +52,7 @@ class ExtractReviewDataMov(publish.Extractor):

        task_type = instance.context.data["taskType"]
        subset = instance.data["subset"]
        self.log.info("Creating staging dir...")
        self.log.debug("Creating staging dir...")

        if "representations" not in instance.data:
            instance.data["representations"] = []

@@ -43,10 +62,10 @@ class ExtractReviewDataMov(publish.Extractor):

        instance.data["stagingDir"] = staging_dir

        self.log.info(
        self.log.debug(
            "StagingDir `{0}`...".format(instance.data["stagingDir"]))

        self.log.info(self.outputs)
        self.log.debug("Outputs: {}".format(self.outputs))

        # generate data
        with maintained_selection():

@@ -85,9 +104,10 @@ class ExtractReviewDataMov(publish.Extractor):
                    re.search(s, subset) for s in f_subsets):
                continue

            self.log.info(
            self.log.debug(
                "Baking output `{}` with settings: {}".format(
                    o_name, o_data))
                    o_name, o_data)
            )

            # check if settings have more than one preset
            # so we don't need to add outputName to representation

@@ -136,10 +156,10 @@ class ExtractReviewDataMov(publish.Extractor):
                instance.data["useSequenceForReview"] = False
            else:
                instance.data["families"].remove("review")
                self.log.info((
                self.log.debug(
                    "Removing `review` from families. "
                    "Not available baking profile."
                ))
                )
                self.log.debug(instance.data["families"])

        self.log.debug(
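
The `apply_settings` override above prefers the deprecated `ExtractReviewDataMov` settings block for as long as it is enabled, and falls back to `ExtractReviewIntermediates` otherwise. A sketch of the settings shape it expects (the keys mirror the diff; the values are placeholders, not real project settings):

```python
# Hypothetical project settings excerpt consumed by apply_settings().
project_settings = {
    "nuke": {
        "publish": {
            "ExtractReviewDataMov": {
                "enabled": True,  # while enabled, this block wins
                "viewer_lut_raw": False,
                "outputs": {"baking": {}},
            },
            "ExtractReviewIntermediates": {
                "enabled": True,
                "viewer_lut_raw": False,
                "outputs": {"baking_h264": {}},
            },
        }
    }
}

ExtractReviewIntermediates.apply_settings(project_settings)
assert ExtractReviewIntermediates.outputs == {"baking": {}}
```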
@@ -3,13 +3,12 @@ import pyblish.api


class ExtractScriptSave(pyblish.api.Extractor):
    """
    """
    """Save current Nuke workfile script"""
    label = 'Script Save'
    order = pyblish.api.Extractor.order - 0.1
    hosts = ['nuke']

    def process(self, instance):

        self.log.info('saving script')
        self.log.debug('Saving current script')
        nuke.scriptSave()

@@ -48,7 +48,7 @@ class ExtractSlateFrame(publish.Extractor):

        if instance.data.get("bakePresets"):
            for o_name, o_data in instance.data["bakePresets"].items():
                self.log.info("_ o_name: {}, o_data: {}".format(
                self.log.debug("_ o_name: {}, o_data: {}".format(
                    o_name, pformat(o_data)))
                self.render_slate(
                    instance,

@@ -65,14 +65,14 @@ class ExtractSlateFrame(publish.Extractor):

    def _create_staging_dir(self, instance):

        self.log.info("Creating staging dir...")
        self.log.debug("Creating staging dir...")

        staging_dir = os.path.normpath(
            os.path.dirname(instance.data["path"]))

        instance.data["stagingDir"] = staging_dir

        self.log.info(
        self.log.debug(
            "StagingDir `{0}`...".format(instance.data["stagingDir"]))

    def _check_frames_exists(self, instance):

@@ -275,10 +275,10 @@ class ExtractSlateFrame(publish.Extractor):
                break

        if not matching_repre:
            self.log.info((
                "Matching reresentaion was not found."
            self.log.info(
                "Matching representation was not found."
                " Representation files were not filled with slate."
            ))
            )
            return

        # Add frame to matching representation files

@@ -345,7 +345,7 @@ class ExtractSlateFrame(publish.Extractor):

        try:
            node[key].setValue(value)
            self.log.info("Change key \"{}\" to value \"{}\"".format(
            self.log.debug("Change key \"{}\" to value \"{}\"".format(
                key, value
            ))
        except NameError:

@@ -8,6 +8,7 @@ from openpype.hosts.nuke import api as napi
from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings


# Python 2/3 compatibility
if sys.version_info[0] >= 3:
    unicode = str


@@ -45,11 +46,12 @@ class ExtractThumbnail(publish.Extractor):
            for o_name, o_data in instance.data["bakePresets"].items():
                self.render_thumbnail(instance, o_name, **o_data)
        else:
            viewer_process_swithes = {
            viewer_process_switches = {
                "bake_viewer_process": True,
                "bake_viewer_input_process": True
            }
            self.render_thumbnail(instance, None, **viewer_process_swithes)
            self.render_thumbnail(
                instance, None, **viewer_process_switches)

    def render_thumbnail(self, instance, output_name=None, **kwargs):
        first_frame = instance.data["frameStartHandle"]

@@ -61,15 +63,13 @@ class ExtractThumbnail(publish.Extractor):

        # solve output name if any is set
        output_name = output_name or ""
        if output_name:
            output_name = "_" + output_name

        bake_viewer_process = kwargs["bake_viewer_process"]
        bake_viewer_input_process_node = kwargs[
            "bake_viewer_input_process"]

        node = instance.data["transientData"]["node"]  # group node
        self.log.info("Creating staging dir...")
        self.log.debug("Creating staging dir...")

        if "representations" not in instance.data:
            instance.data["representations"] = []

@@ -79,7 +79,7 @@ class ExtractThumbnail(publish.Extractor):

        instance.data["stagingDir"] = staging_dir

        self.log.info(
        self.log.debug(
            "StagingDir `{0}`...".format(instance.data["stagingDir"]))

        temporary_nodes = []

@@ -166,26 +166,42 @@ class ExtractThumbnail(publish.Extractor):
            previous_node = dag_node
            temporary_nodes.append(dag_node)

        thumb_name = "thumbnail"
        # only add output name and
        # if there are more than one bake preset
        if (
            output_name
            and len(instance.data.get("bakePresets", {}).keys()) > 1
        ):
            thumb_name = "{}_{}".format(output_name, thumb_name)

        # create write node
        write_node = nuke.createNode("Write")
        file = fhead[:-1] + output_name + ".jpg"
        name = "thumbnail"
        path = os.path.join(staging_dir, file).replace("\\", "/")
        instance.data["thumbnail"] = path
        write_node["file"].setValue(path)
        file = fhead[:-1] + thumb_name + ".jpg"
        thumb_path = os.path.join(staging_dir, file).replace("\\", "/")

        # add thumbnail to cleanup
        instance.context.data["cleanupFullPaths"].append(thumb_path)

        # make sure only one thumbnail path is set
        # and it is existing file
        instance_thumb_path = instance.data.get("thumbnailPath")
        if not instance_thumb_path or not os.path.isfile(instance_thumb_path):
            instance.data["thumbnailPath"] = thumb_path

        write_node["file"].setValue(thumb_path)
        write_node["file_type"].setValue("jpg")
        write_node["raw"].setValue(1)
        write_node.setInput(0, previous_node)
        temporary_nodes.append(write_node)
        tags = ["thumbnail", "publish_on_farm"]

        repre = {
            'name': name,
            'name': thumb_name,
            'ext': "jpg",
            "outputName": "thumb",
            "outputName": thumb_name,
            'files': file,
            "stagingDir": staging_dir,
            "tags": tags
            "tags": ["thumbnail", "publish_on_farm", "delete"]
        }
        instance.data["representations"].append(repre)
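
The thumbnail bookkeeping above only assigns `instance.data["thumbnailPath"]` when no previously set thumbnail still exists on disk. The same guard, restated as a standalone helper (illustrative only, not code from the diff):

```python
import os


def set_thumbnail_path(instance_data, thumb_path):
    # keep an earlier thumbnail only if its file is still on disk
    current = instance_data.get("thumbnailPath")
    if not current or not os.path.isfile(current):
        instance_data["thumbnailPath"] = thumb_path
```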
Some files were not shown because too many files have changed in this diff.