Merge branch 'develop' into feature/validation_exceptions_nuke

Jakub Jezek 2022-07-25 08:46:48 +02:00
commit 7cb68975d3
756 changed files with 106335 additions and 13100 deletions


@ -309,7 +309,18 @@
"contributions": [
"code"
]
},
{
"login": "Tilix4",
"name": "Félix David",
"avatar_url": "https://avatars.githubusercontent.com/u/22875539?v=4",
"profile": "http://felixdavid.com/",
"contributions": [
"code",
"doc"
]
}
],
"contributorsPerLine": 7
}
"contributorsPerLine": 7,
"skipCi": true
}

.gitignore (vendored): 2 lines changed

@ -102,3 +102,5 @@ website/.docusaurus
.poetry/
.python-version
tools/run_eventserver.*

.gitmodules (vendored, new file): 7 lines

@ -0,0 +1,7 @@
[submodule "tools/modules/powershell/BurntToast"]
path = tools/modules/powershell/BurntToast
url = https://github.com/Windos/BurntToast.git
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git


@ -1,142 +1,154 @@
# Changelog
## [3.10.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.12.2-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.8...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...HEAD)
**🆕 New features**
### 📖 Documentation
- General: OpenPype modules publish plugins are registered in host [\#3180](https://github.com/pypeclub/OpenPype/pull/3180)
- General: Creator plugins from addons can be registered [\#3179](https://github.com/pypeclub/OpenPype/pull/3179)
- Ftrack: Single image reviewable [\#3157](https://github.com/pypeclub/OpenPype/pull/3157)
- Nuke: Expose write attributes to settings [\#3123](https://github.com/pypeclub/OpenPype/pull/3123)
- Hiero: Initial frame publish support [\#3106](https://github.com/pypeclub/OpenPype/pull/3106)
- Update website with more studios [\#3554](https://github.com/pypeclub/OpenPype/pull/3554)
- Documentation: Update publishing dev docs [\#3549](https://github.com/pypeclub/OpenPype/pull/3549)
**🚀 Enhancements**
- Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
- Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
- Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
- Maya: added clean\_import option to Import loader [\#3181](https://github.com/pypeclub/OpenPype/pull/3181)
- Maya: add maya 2023 to default applications [\#3167](https://github.com/pypeclub/OpenPype/pull/3167)
- Compressed bgeo publishing in SAP and Houdini loader [\#3153](https://github.com/pypeclub/OpenPype/pull/3153)
- General: Add 'dataclasses' to required python modules [\#3149](https://github.com/pypeclub/OpenPype/pull/3149)
- Hooks: Tweak logging grammar [\#3147](https://github.com/pypeclub/OpenPype/pull/3147)
- Nuke: settings for reformat node in CreateWriteRender node [\#3143](https://github.com/pypeclub/OpenPype/pull/3143)
- Houdini: Add loader for alembic through Alembic Archive node [\#3140](https://github.com/pypeclub/OpenPype/pull/3140)
- Publisher: UI Modifications and fixes [\#3139](https://github.com/pypeclub/OpenPype/pull/3139)
- General: Simplified OP modules/addons import [\#3137](https://github.com/pypeclub/OpenPype/pull/3137)
- Terminal: Tweak coloring of TrayModuleManager logging enabled states [\#3133](https://github.com/pypeclub/OpenPype/pull/3133)
- General: Cleanup some Loader docstrings [\#3131](https://github.com/pypeclub/OpenPype/pull/3131)
- Nuke: render instance with subset name filtered overrides [\#3117](https://github.com/pypeclub/OpenPype/pull/3117)
- Unreal: Layout and Camera update and remove functions reimplemented and improvements [\#3116](https://github.com/pypeclub/OpenPype/pull/3116)
- Settings: Remove environment groups from settings [\#3115](https://github.com/pypeclub/OpenPype/pull/3115)
- TVPaint: Match renderlayer key with other hosts [\#3110](https://github.com/pypeclub/OpenPype/pull/3110)
- Tray publisher: Simple families from settings [\#3105](https://github.com/pypeclub/OpenPype/pull/3105)
- Maya: add additional validators to Settings [\#3540](https://github.com/pypeclub/OpenPype/pull/3540)
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Ftrack: Trigger custom ftrack topic of project structure creation [\#3506](https://github.com/pypeclub/OpenPype/pull/3506)
- Settings UI: Add extract to file action on project view [\#3505](https://github.com/pypeclub/OpenPype/pull/3505)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- General: Event system [\#3499](https://github.com/pypeclub/OpenPype/pull/3499)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
- Migrate basic families to the new Tray Publisher [\#3469](https://github.com/pypeclub/OpenPype/pull/3469)
**🐛 Bug fixes**
- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
- Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
- Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
- Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
- Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)
- Photoshop: skip collector when automatic testing [\#3202](https://github.com/pypeclub/OpenPype/pull/3202)
- Nuke: render/workfile version sync doesn't work on farm [\#3185](https://github.com/pypeclub/OpenPype/pull/3185)
- Ftrack: Review image only if there are no mp4 reviews [\#3183](https://github.com/pypeclub/OpenPype/pull/3183)
- General: Avoid creating multiple thumbnails [\#3176](https://github.com/pypeclub/OpenPype/pull/3176)
- General/Hiero: better clip duration calculation [\#3169](https://github.com/pypeclub/OpenPype/pull/3169)
- General: Oiio conversion for ffmpeg checks for invalid characters [\#3166](https://github.com/pypeclub/OpenPype/pull/3166)
- Fix for attaching render to subset [\#3164](https://github.com/pypeclub/OpenPype/pull/3164)
- Harmony: fixed missing task name in render instance [\#3163](https://github.com/pypeclub/OpenPype/pull/3163)
- Ftrack: Action delete old versions formatting works [\#3152](https://github.com/pypeclub/OpenPype/pull/3152)
- Deadline: fix the output directory [\#3144](https://github.com/pypeclub/OpenPype/pull/3144)
- General: New Session schema [\#3141](https://github.com/pypeclub/OpenPype/pull/3141)
- General: Missing version on headless mode crash properly [\#3136](https://github.com/pypeclub/OpenPype/pull/3136)
- TVPaint: Composite layers in reversed order [\#3135](https://github.com/pypeclub/OpenPype/pull/3135)
- Nuke: fixing default settings for workfile builder loaders [\#3120](https://github.com/pypeclub/OpenPype/pull/3120)
- Nuke: fix anatomy imageio regex default [\#3119](https://github.com/pypeclub/OpenPype/pull/3119)
- Remove invalid submodules from `/vendor` [\#3557](https://github.com/pypeclub/OpenPype/pull/3557)
- General: Remove hosts filter on integrator plugins [\#3556](https://github.com/pypeclub/OpenPype/pull/3556)
- Settings: Clean default values of environments [\#3550](https://github.com/pypeclub/OpenPype/pull/3550)
- Module interfaces: Fix import error [\#3547](https://github.com/pypeclub/OpenPype/pull/3547)
- Workfiles tool: Show of tool and its flags [\#3539](https://github.com/pypeclub/OpenPype/pull/3539)
- General: Create workfile documents works again [\#3538](https://github.com/pypeclub/OpenPype/pull/3538)
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- Nuke: double slate [\#3521](https://github.com/pypeclub/OpenPype/pull/3521)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
**🔀 Refactored code**
- Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)
- General: Remove remaining imports from avalon [\#3130](https://github.com/pypeclub/OpenPype/pull/3130)
- Refactor Integrate Asset [\#3530](https://github.com/pypeclub/OpenPype/pull/3530)
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- General: Get current context document functions [\#3522](https://github.com/pypeclub/OpenPype/pull/3522)
- Kitsu: Use query function from client [\#3496](https://github.com/pypeclub/OpenPype/pull/3496)
- TimersManager: Use query functions [\#3495](https://github.com/pypeclub/OpenPype/pull/3495)
- Deadline: Use query functions [\#3466](https://github.com/pypeclub/OpenPype/pull/3466)
**Merged pull requests:**
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
- Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
- Webpublisher: replace space by underscore in subset names [\#3160](https://github.com/pypeclub/OpenPype/pull/3160)
- StandalonePublisher: removed Extract Background plugins [\#3093](https://github.com/pypeclub/OpenPype/pull/3093)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
**🐛 Bug fixes**
- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed a few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
**🚀 Enhancements**
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
**🐛 Bug fixes**
- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420)
- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418)
**🔀 Refactored code**
- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421)
- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)
## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)
**🚀 Enhancements**
- nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
- Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
- Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)
**🐛 Bug fixes**
- Standalone Publisher: Always create new representation for thumbnail [\#3204](https://github.com/pypeclub/OpenPype/pull/3204)
- Nuke: render/workfile version sync doesn't work on farm [\#3184](https://github.com/pypeclub/OpenPype/pull/3184)
- Ftrack: Review image only if there are no mp4 reviews [\#3182](https://github.com/pypeclub/OpenPype/pull/3182)
- Ftrack: Locations deepcopy issue [\#3177](https://github.com/pypeclub/OpenPype/pull/3177)
- Ftrack: Locations deepcopy issue [\#3175](https://github.com/pypeclub/OpenPype/pull/3175)
- General: Avoid creating multiple thumbnails [\#3174](https://github.com/pypeclub/OpenPype/pull/3174)
- General: TemplateResult can be copied [\#3170](https://github.com/pypeclub/OpenPype/pull/3170)
**Merged pull requests:**
- hiero: otio p3 compatibility issue - metadata on effect use update [\#3194](https://github.com/pypeclub/OpenPype/pull/3194)
## [3.9.7](https://github.com/pypeclub/OpenPype/tree/3.9.7) (2022-05-11)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)
**🆕 New features**
- Ftrack: Single image reviewable [\#3158](https://github.com/pypeclub/OpenPype/pull/3158)
**🚀 Enhancements**
- Deadline output dir issue to 3.9x [\#3155](https://github.com/pypeclub/OpenPype/pull/3155)
- nuke: removing redundant code from startup [\#3142](https://github.com/pypeclub/OpenPype/pull/3142)
**🐛 Bug fixes**
- Ftrack: Action delete old versions formatting works [\#3154](https://github.com/pypeclub/OpenPype/pull/3154)
- nuke: adding extract thumbnail settings [\#3148](https://github.com/pypeclub/OpenPype/pull/3148)
**Merged pull requests:**
- Webpublisher: replace space by underscore in subset names [\#3159](https://github.com/pypeclub/OpenPype/pull/3159)
## [3.9.6](https://github.com/pypeclub/OpenPype/tree/3.9.6) (2022-05-03)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.5...3.9.6)
**🆕 New features**
- Nuke: render instance with subset name filtered overrides \(3.9.x\) [\#3125](https://github.com/pypeclub/OpenPype/pull/3125)
**🚀 Enhancements**
- TVPaint: Match renderlayer key with other hosts [\#3109](https://github.com/pypeclub/OpenPype/pull/3109)
**🐛 Bug fixes**
- TVPaint: Composite layers in reversed order [\#3134](https://github.com/pypeclub/OpenPype/pull/3134)
- General: Python 3 compatibility in queries [\#3111](https://github.com/pypeclub/OpenPype/pull/3111)
**Merged pull requests:**
- Ftrack: AssetVersion status on publish [\#3114](https://github.com/pypeclub/OpenPype/pull/3114)
- renderman support for 3.9.x [\#3107](https://github.com/pypeclub/OpenPype/pull/3107)
## [3.9.5](https://github.com/pypeclub/OpenPype/tree/3.9.5) (2022-04-25)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.2...3.9.5)


@ -1,6 +1,6 @@
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-26-orange.svg?style=flat-square)](#contributors-)
[![All Contributors](https://img.shields.io/badge/all_contributors-27-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
OpenPype
====
@ -328,6 +328,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<td align="center"><a href="https://github.com/Malthaldar"><img src="https://avatars.githubusercontent.com/u/33671694?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Malthaldar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Malthaldar" title="Code">💻</a></td>
<td align="center"><a href="http://www.svenneve.com/"><img src="https://avatars.githubusercontent.com/u/2472863?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Sven Neve</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=svenneve" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/zafrs"><img src="https://avatars.githubusercontent.com/u/26890002?v=4?s=100" width="100px;" alt=""/><br /><sub><b>zafrs</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=zafrs" title="Code">💻</a></td>
<td align="center"><a href="http://felixdavid.com/"><img src="https://avatars.githubusercontent.com/u/22875539?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Félix David</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Documentation">📖</a></td>
</tr>
</table>


@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
Compression=lzma2
SolidCompression=yes
WizardStyle=modern
@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[InstallDelete]
; clean everything in previous installation folder
Type: filesandordirs; Name: "{app}\*"
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files


@ -15,7 +15,6 @@ from .lib import (
run_subprocess,
version_up,
get_asset,
get_hierarchy,
get_workdir_data,
get_version_from_path,
get_last_version_from_path,
@ -101,7 +100,6 @@ __all__ = [
# get contextual data
"version_up",
"get_asset",
"get_hierarchy",
"get_workdir_data",
"get_version_from_path",
"get_last_version_from_path",


@ -2,7 +2,7 @@
"""Package for handling pype command line arguments."""
import os
import sys
import code
import click
# import sys
@ -424,3 +424,22 @@ def pack_project(project, dirpath):
def unpack_project(zipfile, root):
"""Create a package of project with all files and database dump."""
PypeCommands().unpack_project(zipfile, root)
@main.command()
def interactive():
"""Interative (Python like) console.
Helpfull command not only for development to directly work with python
interpreter.
Warning:
Executable 'openpype_gui' on windows won't work.
"""
from openpype.version import __version__
banner = "OpenPype {}\nPython {} on {}".format(
__version__, sys.version, sys.platform
)
code.interact(banner)


@ -0,0 +1,81 @@
from .entities import (
get_projects,
get_project,
get_whole_project,
get_asset_by_id,
get_asset_by_name,
get_assets,
get_archived_assets,
get_asset_ids_with_subsets,
get_subset_by_id,
get_subset_by_name,
get_subsets,
get_subset_families,
get_version_by_id,
get_version_by_name,
get_versions,
get_hero_version_by_id,
get_hero_version_by_subset_id,
get_hero_versions,
get_last_versions,
get_last_version_by_subset_id,
get_last_version_by_subset_name,
get_output_link_versions,
get_representation_by_id,
get_representation_by_name,
get_representations,
get_representation_parents,
get_representations_parents,
get_archived_representations,
get_thumbnail,
get_thumbnails,
get_thumbnail_id_from_source,
get_workfile_info,
)
__all__ = (
"get_projects",
"get_project",
"get_whole_project",
"get_asset_by_id",
"get_asset_by_name",
"get_assets",
"get_archived_assets",
"get_asset_ids_with_subsets",
"get_subset_by_id",
"get_subset_by_name",
"get_subsets",
"get_subset_families",
"get_version_by_id",
"get_version_by_name",
"get_versions",
"get_hero_version_by_id",
"get_hero_version_by_subset_id",
"get_hero_versions",
"get_last_versions",
"get_last_version_by_subset_id",
"get_last_version_by_subset_name",
"get_output_link_versions",
"get_representation_by_id",
"get_representation_by_name",
"get_representations",
"get_representation_parents",
"get_representations_parents",
"get_archived_representations",
"get_thumbnail",
"get_thumbnails",
"get_thumbnail_id_from_source",
"get_workfile_info",
)
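These query helpers replace direct 'legacy_io.find' calls used across the codebase. A usage sketch with made-up project and asset names, using only call signatures that appear elsewhere in this commit:

```python
from openpype.client import (
    get_asset_by_name,
    get_subsets,
    get_last_versions,
    get_representations,
)

project_name = "demo_project"  # illustrative

# Single asset document by name (may be None if it does not exist).
asset_doc = get_asset_by_name(project_name, "sh010")

# All subsets of the asset, fetching only the '_id' field.
subset_docs = get_subsets(
    project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
subset_ids = [subset_doc["_id"] for subset_doc in subset_docs]

# Last version document per subset id.
last_versions_by_subset_id = get_last_versions(
    project_name, subset_ids, fields=["_id", "parent"]
)

# Representations of those versions, filtered by representation name.
version_ids = [doc["_id"] for doc in last_versions_by_subset_id.values()]
repre_docs = get_representations(
    project_name, version_ids=version_ids, representation_names=["exr"]
)
```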

openpype/client/entities.py (new file): 1371 lines

File diff suppressed because it is too large.


@ -1,11 +1,10 @@
from openpype.api import Anatomy
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
prepare_app_environments,
prepare_context_environments
)
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
class GlobalHostDataHook(PreLaunchHook):

openpype/host/__init__.py (new file): 13 lines

@ -0,0 +1,13 @@
from .host import (
HostBase,
IWorkfileHost,
ILoadHost,
INewPublisher,
)
__all__ = (
"HostBase",
"IWorkfileHost",
"ILoadHost",
"INewPublisher",
)

openpype/host/host.py (new file): 524 lines

@ -0,0 +1,524 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod
import six
# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import
class MissingMethodsError(ValueError):
"""Exception when host miss some required methods for specific workflow.
Args:
host (HostBase): Host implementation where are missing methods.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
message = (
"Host \"{}\" miss methods {}".format(host.name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
"""Base of host implementation class.
Host is pipeline implementation of DCC application. This class should help
to identify what must/should/can be implemented for specific functionality.
Compared to 'avalon' concept:
What was before considered as functions in host implementation folder. The
host implementation should primarily care about adding ability of creation
(mark subsets to be published) and optionaly about referencing published
representations as containers.
Host may need extend some functionality like working with workfiles
or loading. Not all host implementations may allow that for those purposes
can be logic extended with implementing functions for the purpose. There
are prepared interfaces to be able identify what must be implemented to
be able use that functionality.
- current statement is that it is not required to inherit from interfaces
but all of the methods are validated (only their existence!)
# Installation of host before (avalon concept):
```python
from openpype.pipeline import install_host
import openpype.hosts.maya.api as host
install_host(host)
```
# Installation of host now:
```python
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
host = MayaHost()
install_host(host)
```
Todo:
- move content of 'install_host' as method of this class
- register host object
- install legacy_io
- install global plugin paths
- store registered plugin paths to this object
- handle current context (project, asset, task)
- this must be done in many separated steps
- have its own object of host tools instead of using globals
This implementation will probably change over time as more functionality
and responsibility are added.
"""
_log = None
def __init__(self):
"""Initialization of host.
Register DCC callbacks, host-specific plugin paths, targets etc.
(Part of what 'install' did in the 'avalon' concept.)
Note:
At this moment the global "installation" must happen before host
installation. Because of this limitation it is recommended to
implement an 'install' method which is triggered after the global
'install'.
"""
pass
@property
def log(self):
if self._log is None:
self._log = logging.getLogger(self.__class__.__name__)
return self._log
@abstractproperty
def name(self):
"""Host name."""
pass
def get_current_context(self):
"""Get current context information.
This method should be used to get the current context of the host. It
can be crucial for host implementations in DCCs where multiple
workfiles can be opened at the same time and a change of context can't
be caught properly.
Default implementation returns values from 'legacy_io.Session'.
Returns:
dict: Context with 3 keys 'project_name', 'asset_name' and
'task_name'. All of them can be 'None'.
"""
from openpype.pipeline import legacy_io
if not legacy_io.is_installed():
legacy_io.install()
return {
"project_name": legacy_io.Session["AVALON_PROJECT"],
"asset_name": legacy_io.Session["AVALON_ASSET"],
"task_name": legacy_io.Session["AVALON_TASK"]
}
def get_context_title(self):
"""Context title shown for UI purposes.
Should return current context title if possible.
Note:
This method is used only for UI purposes so it is possible to
return some logical title for contextless cases.
Is not meant for "Context menu" label.
Returns:
str: Context title.
None: Default title is used based on UI implementation.
"""
# Use current context to fill the context title
current_context = self.get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
task_name = current_context["task_name"]
items = []
if project_name:
items.append(project_name)
if asset_name:
items.append(asset_name)
if task_name:
items.append(task_name)
if items:
return "/".join(items)
return None
@contextlib.contextmanager
def maintained_selection(self):
"""Some functionlity will happen but selection should stay same.
This is DCC specific. Some may not allow to implement this ability
that is reason why default implementation is empty context manager.
Yields:
None: Yield when is ready to restore selected at the end.
"""
try:
yield
finally:
pass
class ILoadHost:
"""Implementation requirements to be able use reference of representations.
The load plugins can do referencing even without implementation of methods
here, but switch and removement of containers would not be possible.
Questions:
- Is list container dependency of host or load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host object where to look for
required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
host (Union[ModuleType, HostBase]): Host object to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
This can be implemented in hosts where referencing can be used.
Todo:
Rename function to something more self explanatory.
Suggestion: 'get_containers'
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able use workfile utils and tool."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host object where to look for
required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
host (Union[ModuleType, HostBase]): Host object to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used as save.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Currently opened scene is saved.
Not all hosts can know if current scene is saved because the API of
DCC does not support it.
Returns:
bool: True if scene is saved and False if has unsaved
modifications.
None: Can't tell if workfiles has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification with more sofisticated way because
this can't be called out of DCC so opening of last workfile
(calculated before DCC is launched) is complicated. Also breaking
defined work template is not a good idea.
Only place where it's really used and can make sense is Maya. There
workspace.mel can modify subfolders where to look for maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile(dst_path)
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
class INewPublisher:
"""Functions related to new creation system in new publisher.
New publisher is not storing information only about each created instance
but also some global data. At this moment are data related only to context
publish plugins but that can extend in future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host module where to look for
required methods.
Returns:
list[str]: Missing method implementations for the new publisher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
host (Union[ModuleType, HostBase]): Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from workfile.
These data are not related to any created instance but to the whole
publishing context. If they are not saved and returned, each publishing
reset will revert all values to their defaults.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by the artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to workfile.
Called when some values in the context data have changed.
Unless the values are stored in a way that 'get_context_data' can return
them, each publishing reset will lose the values the artist filled in.
Best practice is to store the values into the workfile, if possible.
Args:
data (dict): The new data in full.
changes (dict): Only the data that changed. Each value is a tuple
with '(<old>, <new>)' values.
"""
pass
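Taken together, a minimal new-style host could look like the sketch below. The class name and method bodies are illustrative only; it simply wires the interfaces defined above together:

```python
from openpype.host import HostBase, ILoadHost, IWorkfileHost


class ExampleHost(HostBase, IWorkfileHost, ILoadHost):
    name = "example"  # satisfies the abstract 'name' property

    def get_workfile_extensions(self):
        return [".example"]

    def open_workfile(self, filepath):
        pass  # open 'filepath' through the DCC's own API

    def save_workfile(self, dst_path=None):
        pass  # save the current scene, to 'dst_path' when given

    def get_current_workfile(self):
        return None  # path of the opened scene, or None

    def get_containers(self):
        return []  # loaded container metadata from the scene
```

Such an object would then be installed the same way the 'HostBase' docstring shows: `install_host(ExampleHost())`.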


@ -65,14 +65,14 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
instance[0].Visible = new_value
def get_asset_settings():
def get_asset_settings(asset_doc):
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
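A brief caller sketch for the new signature; the project and asset names are illustrative, and the validator later in this commit passes `instance.data["assetEntity"]` instead:

```python
from openpype.client import get_asset_by_name

# Any previously queried asset document works here.
asset_doc = get_asset_by_name("demo_project", "sh010")
expected_settings = get_asset_settings(asset_doc)
```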


@ -17,11 +17,8 @@ class RenderCreator(Creator):
create_allow_context_change = True
def __init__(
self, create_context, system_settings, project_settings, headless=False
):
super(RenderCreator, self).__init__(create_context, system_settings,
project_settings, headless)
def __init__(self, project_settings, *args, **kwargs):
super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
self._default_variants = (project_settings["aftereffects"]
["create"]
["RenderCreator"]


@ -1,4 +1,5 @@
import openpype.hosts.aftereffects.api as api
from openpype.client import get_asset_by_name
from openpype.pipeline import (
AutoCreator,
CreatedInstance,
@ -41,10 +42,7 @@ class AEWorkfileCreator(AutoCreator):
host_name = legacy_io.Session["AVALON_APP"]
if existing_instance is None:
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
@ -69,10 +67,7 @@ class AEWorkfileCreator(AutoCreator):
existing_instance["asset"] != asset_name
or existing_instance["task"] != task_name
):
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)


@ -6,8 +6,8 @@ import attr
import pyblish.api
from openpype.settings import get_project_settings
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
from openpype.hosts.aftereffects.api import get_stub
@ -21,11 +21,11 @@ class AERenderInstance(RenderInstance):
projectEntity = attr.ib(default=None)
stagingDir = attr.ib(default=None)
app_version = attr.ib(default=None)
publish_attributes = attr.ib(default=None)
publish_attributes = attr.ib(default={})
file_name = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
class CollectAERender(publish.AbstractCollectRender):
order = pyblish.api.CollectorOrder + 0.405
label = "Collect After Effects Render Layers"
@ -90,7 +90,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
subset_name = inst.data["subset"]
instance = AERenderInstance(
family=family,
family="render",
families=inst.data.get("families", []),
version=version,
time="",
@ -116,7 +116,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
toBeRenderedOn='deadline',
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes"),
publish_attributes=inst.data.get("publish_attributes", {}),
file_name=render_q.file_name
)
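One caveat in the hunk above: `attr.ib(default={})` evaluates the default once, so every instance would share the same dict object. A small sketch of the usual attrs alternative, with an illustrative class name:

```python
import attr


@attr.s
class ExampleInstance(object):
    # 'factory=dict' builds a fresh dict per instance instead of
    # sharing one mutable default across all instances.
    publish_attributes = attr.ib(factory=dict)
```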


@ -1,5 +1,9 @@
# -*- coding: utf-8 -*-
"""Validate scene settings."""
"""Validate scene settings.
Requires:
instance -> assetEntity
instance -> anatomyData
"""
import os
import re
@ -54,7 +58,7 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin,
order = pyblish.api.ValidatorOrder
label = "Validate Scene Settings"
families = ["render.farm", "render"]
families = ["render.farm", "render.local", "render"]
hosts = ["aftereffects"]
optional = True
@ -67,7 +71,8 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin,
if not self.is_active(instance.data):
return
expected_settings = get_asset_settings()
asset_doc = instance.data["assetEntity"]
expected_settings = get_asset_settings(asset_doc)
self.log.info("config from DB::{}".format(expected_settings))
task_name = instance.data["anatomyData"]["task"]["name"]


@ -10,6 +10,7 @@ from . import ops
import pyblish.api
from openpype.client import get_asset_by_name
from openpype.pipeline import (
schema,
legacy_io,
@ -83,18 +84,16 @@ def uninstall():
def set_start_end_frames():
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
scene = bpy.context.scene
# Default scene settings
frameStart = scene.frame_start
frameEnd = scene.frame_end
fps = scene.render.fps
fps = scene.render.fps / scene.render.fps_base
resolution_x = scene.render.resolution_x
resolution_y = scene.render.resolution_y
@ -117,7 +116,8 @@ def set_start_end_frames():
scene.frame_start = frameStart
scene.frame_end = frameEnd
scene.render.fps = fps
scene.render.fps = round(fps)
scene.render.fps_base = round(fps) / fps
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
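The fps change mirrors how Blender represents fractional frame rates: an integer `fps` divided by a float `fps_base`. A worked example with an illustrative 23.976 fps rate:

```python
# Source rate (e.g. read from the asset document).
fps = 24000 / 1001               # 23.976023...

fps_int = round(fps)             # 24 -> scene.render.fps
fps_base = fps_int / fps         # ~1.001 -> scene.render.fps_base

# Blender's effective playback rate is fps / fps_base.
assert abs(fps_int / fps_base - fps) < 1e-9
```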


@ -1,6 +1,7 @@
import os
import re
import subprocess
from platform import system
from openpype.lib import PreLaunchHook
@ -13,12 +14,9 @@ class InstallPySideToBlender(PreLaunchHook):
The pipeline implementation requires a Qt binding to be installed in
blender's python packages.
Prelaunch hook can work only on Windows right now.
"""
app_groups = ["blender"]
platforms = ["windows"]
def execute(self):
# Prelaunch hook is not crucial
@ -34,25 +32,28 @@ class InstallPySideToBlender(PreLaunchHook):
# Get blender's python directory
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
platform = system().lower()
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":
expected_executable = "blender"
if platform == "windows":
expected_executable += ".exe"
if os.path.basename(executable).lower() != expected_executable:
self.log.info((
"Executable does not lead to blender.exe file. Can't determine"
" blender's python to check/install PySide2."
f"Executable does not lead to {expected_executable} file."
"Can't determine blender's python to check/install PySide2."
))
return
executable_dir = os.path.dirname(executable)
versions_dir = os.path.dirname(executable)
if platform == "darwin":
versions_dir = os.path.join(
os.path.dirname(versions_dir), "Resources"
)
version_subfolders = []
for name in os.listdir(executable_dir):
fullpath = os.path.join(name, executable_dir)
if not os.path.isdir(fullpath):
continue
if not version_regex.match(name):
continue
version_subfolders.append(name)
for dir_entry in os.scandir(versions_dir):
if dir_entry.is_dir() and version_regex.match(dir_entry.name):
version_subfolders.append(dir_entry.name)
if not version_subfolders:
self.log.info(
@ -72,16 +73,21 @@ class InstallPySideToBlender(PreLaunchHook):
version_subfolder = version_subfolders[0]
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
"python"
)
python_dir = os.path.join(versions_dir, version_subfolder, "python")
python_lib = os.path.join(python_dir, "lib")
python_version = "python"
if platform != "windows":
for dir_entry in os.scandir(python_lib):
if dir_entry.is_dir() and dir_entry.name.startswith("python"):
python_lib = dir_entry.path
python_version = dir_entry.name
break
# Change PYTHONPATH to contain blender's packages as first
python_paths = [
os.path.join(pythond_dir, "lib"),
os.path.join(pythond_dir, "lib", "site-packages"),
python_lib,
os.path.join(python_lib, "site-packages"),
]
python_path = self.launch_context.env.get("PYTHONPATH") or ""
for path in python_path.split(os.pathsep):
@ -91,7 +97,15 @@ class InstallPySideToBlender(PreLaunchHook):
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
# Get blender's python executable
python_executable = os.path.join(pythond_dir, "bin", "python.exe")
python_bin = os.path.join(python_dir, "bin")
if platform == "windows":
python_executable = os.path.join(python_bin, "python.exe")
else:
python_executable = os.path.join(python_bin, python_version)
# Check for python with enabled 'pymalloc'
if not os.path.exists(python_executable):
python_executable += "m"
if not os.path.exists(python_executable):
self.log.warning(
"Couldn't find python executable for blender. {}".format(
@ -106,7 +120,15 @@ class InstallPySideToBlender(PreLaunchHook):
return
# Install PySide2 in blender's python
self.install_pyside_windows(python_executable)
if platform == "windows":
result = self.install_pyside_windows(python_executable)
else:
result = self.install_pyside(python_executable)
if result:
self.log.info("Successfully installed PySide2 module to blender.")
else:
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside_windows(self, python_executable):
"""Install PySide2 python module to blender's python.
@ -144,19 +166,41 @@ class InstallPySideToBlender(PreLaunchHook):
lpDirectory=os.path.dirname(python_executable)
)
process_handle = process_info["hProcess"]
obj = win32event.WaitForSingleObject(
process_handle, win32event.INFINITE
)
win32event.WaitForSingleObject(process_handle, win32event.INFINITE)
returncode = win32process.GetExitCodeProcess(process_handle)
if returncode == 0:
self.log.info(
"Successfully installed PySide2 module to blender."
)
return
return returncode == 0
except pywintypes.error:
pass
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside(self, python_executable):
"""Install PySide2 python module to blender's python."""
try:
# Parameters
# - use "-m pip" as module pip to install PySide2 and argument
# "--ignore-installed" is to force install module to blender's
# site-packages and make sure it is binary compatible
args = [
python_executable,
"-m",
"pip",
"install",
"--ignore-installed",
"PySide2",
]
process = subprocess.Popen(
args, stdout=subprocess.PIPE, universal_newlines=True
)
process.communicate()
return process.returncode == 0
except PermissionError:
self.log.warning(
"Permission denied with command:"
"\"{}\".".format(" ".join(args))
)
except OSError as error:
self.log.warning(f"OS error has occurred: \"{error}\".")
except subprocess.SubprocessError:
pass
def is_pyside_installed(self, python_executable):
"""Check if PySide2 module is in blender's pip list.
@ -169,7 +213,7 @@ class InstallPySideToBlender(PreLaunchHook):
args = [python_executable, "-m", "pip", "list"]
process = subprocess.Popen(args, stdout=subprocess.PIPE)
stdout, _ = process.communicate()
lines = stdout.decode().split("\r\n")
lines = stdout.decode().split(os.linesep)
# The second line contains dashes that define the maximum length of the
# module name. The second column of dashes defines the maximum length of
# the module version.
package_dashes, *_ = lines[1].split(" ")


@ -1,13 +1,11 @@
import os
import json
from bson.objectid import ObjectId
import bpy
import bpy_extras
import bpy_extras.anim_utils
from openpype.pipeline import legacy_io
from openpype.client import get_representation_by_name
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
import openpype.api
@ -131,43 +129,32 @@ class ExtractLayout(openpype.api.Extractor):
fbx_count = 0
project_name = instance.context.data["projectEntity"]["name"]
for asset in asset_group.children:
metadata = asset.get(AVALON_PROPERTY)
parent = metadata["parent"]
version_id = metadata["parent"]
family = metadata["family"]
self.log.debug("Parent: {}".format(parent))
self.log.debug("Parent: {}".format(version_id))
# Get blend reference
blend = legacy_io.find_one(
{
"type": "representation",
"parent": ObjectId(parent),
"name": "blend"
},
projection={"_id": True})
blend = get_representation_by_name(
project_name, "blend", version_id, fields=["_id"]
)
blend_id = None
if blend:
blend_id = blend["_id"]
# Get fbx reference
fbx = legacy_io.find_one(
{
"type": "representation",
"parent": ObjectId(parent),
"name": "fbx"
},
projection={"_id": True})
fbx = get_representation_by_name(
project_name, "fbx", version_id, fields=["_id"]
)
fbx_id = None
if fbx:
fbx_id = fbx["_id"]
# Get abc reference
abc = legacy_io.find_one(
{
"type": "representation",
"parent": ObjectId(parent),
"name": "abc"
},
projection={"_id": True})
abc = get_representation_by_name(
project_name, "abc", version_id, fields=["_id"]
)
abc_id = None
if abc:
abc_id = abc["_id"]


@ -4,6 +4,11 @@ from pprint import pformat
import pyblish.api
from openpype.client import (
get_subsets,
get_last_versions,
get_representations
)
from openpype.pipeline import legacy_io
@ -60,10 +65,10 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
"""
# Query all subsets for asset
subset_docs = legacy_io.find({
"type": "subset",
"parent": asset_doc["_id"]
})
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
# Collect all subset ids
subset_ids = [
subset_doc["_id"]
@ -76,37 +81,19 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
"Try this for start `r'.*'`: asset: `{}`"
).format(asset_doc["name"])
# Last version aggregation
pipeline = [
# Find all versions of those subsets
{"$match": {
"type": "version",
"parent": {"$in": subset_ids}
}},
# Sorting versions all together
{"$sort": {"name": 1}},
# Group them by "parent", but only take the last
{"$group": {
"_id": "$parent",
"_version_id": {"$last": "$_id"},
"name": {"$last": "$name"}
}}
]
last_versions_by_subset_id = dict()
for doc in legacy_io.aggregate(pipeline):
doc["parent"] = doc["_id"]
doc["_id"] = doc.pop("_version_id")
last_versions_by_subset_id[doc["parent"]] = doc
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids, fields=["_id", "parent"]
)
version_docs_by_id = {}
for version_doc in last_versions_by_subset_id.values():
version_docs_by_id[version_doc["_id"]] = version_doc
repre_docs = legacy_io.find({
"type": "representation",
"parent": {"$in": list(version_docs_by_id.keys())},
"name": {"$in": representations}
})
repre_docs = get_representations(
project_name,
version_ids=version_docs_by_id.keys(),
representation_names=representations
)
repre_docs_by_version_id = collections.defaultdict(list)
for repre_doc in repre_docs:
version_id = repre_doc["parent"]


@ -3,6 +3,7 @@ import os
import re
import json
import pickle
import clique
import tempfile
import itertools
import contextlib
@ -560,7 +561,7 @@ def get_segment_attributes(segment):
if not hasattr(segment, attr_name):
continue
attr = getattr(segment, attr_name)
segment_attrs_data[attr] = str(attr).replace("+", ":")
segment_attrs_data[attr_name] = str(attr).replace("+", ":")
if attr_name in ["record_in", "record_out"]:
clip_data[attr_name] = attr.relative_frame
@ -762,6 +763,7 @@ class MediaInfoFile(object):
_start_frame = None
_fps = None
_drop_mode = None
_file_pattern = None
def __init__(self, path, **kwargs):
@ -773,17 +775,28 @@ class MediaInfoFile(object):
self._validate_media_script_path()
# derivate other feed variables
self.feed_basename = os.path.basename(path)
self.feed_dir = os.path.dirname(path)
self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
feed_basename = os.path.basename(path)
feed_dir = os.path.dirname(path)
feed_ext = os.path.splitext(feed_basename)[1][1:].lower()
with maintained_temp_file_path(".clip") as tmp_path:
self.log.info("Temp File: {}".format(tmp_path))
self._generate_media_info_file(tmp_path)
self._generate_media_info_file(tmp_path, feed_ext, feed_dir)
# get collection containing feed_basename from path
self.file_pattern = self._get_collection(
feed_basename, feed_dir, feed_ext)
if (
not self.file_pattern
and os.path.exists(os.path.join(feed_dir, feed_basename))
):
self.file_pattern = feed_basename
# get clip data and make them single if there is multiple
# clips data
xml_data = self._make_single_clip_media_info(tmp_path)
xml_data = self._make_single_clip_media_info(
tmp_path, feed_basename, self.file_pattern)
self.log.debug("xml_data: {}".format(xml_data))
self.log.debug("type: {}".format(type(xml_data)))
@ -794,6 +807,123 @@ class MediaInfoFile(object):
self.log.debug("drop frame: {}".format(self.drop_mode))
self.clip_data = xml_data
def _get_collection(self, feed_basename, feed_dir, feed_ext):
""" Get collection string
Args:
feed_basename (str): file base name
feed_dir (str): file's directory
feed_ext (str): file extension
Raises:
AttributeError: feed_ext is not matching feed_basename
Returns:
str: collection basename with range of sequence
"""
partialname = self._separate_file_head(feed_basename, feed_ext)
self.log.debug("__ partialname: {}".format(partialname))
# make sure the partial input basename has the correct extension
if not partialname:
raise AttributeError(
"Wrong input attributes. Basename - {}, Ext - {}".format(
feed_basename, feed_ext
)
)
# get all related files
files = [
f for f in os.listdir(feed_dir)
if partialname == self._separate_file_head(f, feed_ext)
]
# ignore remainders as we don't need them
collections = clique.assemble(files)[0]
# in case no collection found return None
# it is probably just a single file
if not collections:
return
# we expect only one collection
collection = collections[0]
self.log.debug("__ collection: {}".format(collection))
if collection.is_contiguous():
return self._format_collection(collection)
# add `[` in front to make sure it won't capture
# a shot name with the same number
number_from_path = self._separate_number(feed_basename, feed_ext)
search_number_pattern = "[" + number_from_path
# convert to multiple collections
_continues_colls = collection.separate()
for _coll in _continues_colls:
coll_to_text = self._format_collection(
_coll, len(number_from_path))
self.log.debug("__ coll_to_text: {}".format(coll_to_text))
if search_number_pattern in coll_to_text:
return coll_to_text
@staticmethod
def _format_collection(collection, padding=None):
padding = padding or collection.padding
# if no holes then return collection
head = collection.format("{head}")
tail = collection.format("{tail}")
range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(
padding)
ranges = range_template.format(
min(collection.indexes),
max(collection.indexes)
)
# if no holes then return collection
return "{}{}{}".format(head, ranges, tail)
def _separate_file_head(self, basename, extension):
""" Get only head with out sequence and extension
Args:
basename (str): file base name
extension (str): file extension
Returns:
str: file head
"""
# in case sequence file
found = re.findall(
r"(.*)[._][\d]*(?=.{})".format(extension),
basename,
)
if found:
return found.pop()
# in case single file
name, ext = os.path.splitext(basename)
if extension == ext[1:]:
return name
def _separate_number(self, basename, extension):
""" Get only sequence number as string
Args:
basename (str): file base name
extension (str): file extension
Returns:
str: number with padding
"""
# in case sequence file
found = re.findall(
r"[._]([\d]*)(?=.{})".format(extension),
basename,
)
if found:
return found.pop()
@property
def clip_data(self):
"""Clip's xml clip data
@ -846,18 +976,41 @@ class MediaInfoFile(object):
def drop_mode(self, text):
self._drop_mode = str(text)
@property
def file_pattern(self):
"""Clips file patter
Returns:
str: file pattern. ex. file.[1-2].exr
"""
return self._file_pattern
@file_pattern.setter
def file_pattern(self, fpattern):
self._file_pattern = fpattern
def _validate_media_script_path(self):
if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
raise IOError("Media Scirpt does not exist: `{}`".format(
self.MEDIA_SCRIPT_PATH))
def _generate_media_info_file(self, fpath):
def _generate_media_info_file(self, fpath, feed_ext, feed_dir):
""" Generate media info xml .clip file
Args:
fpath (str): .clip file path
feed_ext (str): file extension to be filtered
feed_dir (str): look up directory
Raises:
TypeError: Type error if it fails
"""
# Create cmd arguments for getting xml media info file
cmd_args = [
self.MEDIA_SCRIPT_PATH,
"-e", self.feed_ext,
"-e", feed_ext,
"-o", fpath,
self.feed_dir
feed_dir
]
try:
@ -867,7 +1020,20 @@ class MediaInfoFile(object):
raise TypeError(
"Error creating `{}` due: {}".format(fpath, error))
def _make_single_clip_media_info(self, fpath):
def _make_single_clip_media_info(self, fpath, feed_basename, path_pattern):
""" Separate only relative clip object form .clip file
Args:
fpath (str): clip file path
feed_basename (str): search basename
path_pattern (str): search file pattern (file.[1-2].exr)
Raises:
ET.ParseError: if nothing found
Returns:
ET.Element: xml element data of matching clip
"""
with open(fpath) as f:
lines = f.readlines()
_added_root = itertools.chain(
@ -878,14 +1044,30 @@ class MediaInfoFile(object):
xml_clips = new_root.findall("clip")
matching_clip = None
for xml_clip in xml_clips:
if xml_clip.find("name").text in self.feed_basename:
matching_clip = xml_clip
clip_name = xml_clip.find("name").text
self.log.debug("__ clip_name: `{}`".format(clip_name))
if clip_name not in feed_basename:
continue
# test path pattern
for out_track in xml_clip.iter("track"):
for out_feed in out_track.iter("feed"):
for span in out_feed.iter("span"):
# start frame
span_path = span.find("path")
self.log.debug(
"__ span_path.text: {}, path_pattern: {}".format(
span_path.text, path_pattern
)
)
if path_pattern in span_path.text:
matching_clip = xml_clip
if matching_clip is None:
# raise an error if no matching clip was found
raise ET.ParseError(
"Missing clip in `{}`. Available clips {}".format(
self.feed_basename, [
feed_basename, [
xml_clip.find("name").text
for xml_clip in xml_clips
]
@ -894,6 +1076,11 @@ class MediaInfoFile(object):
return matching_clip
def _get_time_info_from_origin(self, xml_data):
"""Set time info to class attributes
Args:
xml_data (ET.Element): clip data
"""
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
@ -912,8 +1099,6 @@ class MediaInfoFile(object):
'startTimecode/dropMode')
self.drop_mode = out_feed_drop_mode_obj.text
break
else:
continue
except Exception as msg:
self.log.warning(msg)

View file

@ -360,6 +360,7 @@ class PublishableClip:
driving_layer_default = ""
index_from_segment_default = False
use_shot_name_default = False
include_handles_default = False
def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"]
@ -493,6 +494,8 @@ class PublishableClip:
"reviewTrack", {}).get("value") or self.review_track_default
self.audio = self.ui_inputs.get(
"audio", {}).get("value") or False
self.include_handles = self.ui_inputs.get(
"includeHandles", {}).get("value") or self.include_handles_default
# build subset name from layer name
if self.subset_name == "[ track name ]":

View file

@ -1,5 +1,8 @@
import os
from xml.etree import ElementTree as ET
from openpype.api import Logger
log = Logger.get_logger(__name__)
def export_clip(export_path, clip, preset_path, **kwargs):
@ -143,10 +146,40 @@ def modify_preset_file(xml_path, staging_dir, data):
# change xml following data keys
with open(xml_path, "r") as datafile:
tree = ET.parse(datafile)
_root = ET.parse(datafile)
for key, value in data.items():
for element in tree.findall(".//{}".format(key)):
element.text = str(value)
tree.write(temp_path)
try:
if "/" in key:
if not key.startswith("./"):
key = ".//" + key
split_key_path = key.split("/")
element_key = split_key_path[-1]
parent_obj_path = "/".join(split_key_path[:-1])
parent_obj = _root.find(parent_obj_path)
element_obj = parent_obj.find(element_key)
# NOTE: `find` returns None for a missing child; an empty
# element is falsy, so compare with None explicitly
if element_obj is None:
append_element(parent_obj, element_key, value)
else:
finds = _root.findall(".//{}".format(key))
if not finds:
raise AttributeError
for element in finds:
element.text = str(value)
except AttributeError:
log.warning(
"Cannot create attribute: {}: {}. Skipping".format(
key, value
))
_root.write(temp_path)
return temp_path
def append_element(root_element_obj, key, value):
new_element_obj = ET.Element(key)
log.debug("__ new_element_obj: {}".format(new_element_obj))
new_element_obj.text = str(value)
root_element_obj.insert(0, new_element_obj)
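A hedged usage sketch of `modify_preset_file` (paths and keys are hypothetical): plain keys update every matching element, while keys containing `/` are resolved as a parent/child path and the missing child is created via `append_element`:
data = {
    "nbHandles": 10,            # updates every existing <nbHandles>
    "video/posterFrame": True,  # ensures <posterFrame> under <video>
}
preset_path = modify_preset_file(
    "/presets/file_sequence.xml", "/tmp/staging", data)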

View file

@ -94,83 +94,30 @@ def create_otio_time_range(start_frame, frame_duration, fps):
def _get_metadata(item):
if hasattr(item, 'metadata'):
if not item.metadata:
return {}
return {key: value for key, value in dict(item.metadata)}
return dict(item.metadata) if item.metadata else {}
return {}
def create_time_effects(otio_clip, item):
# todo #2426: add retiming effects to export
# get all subtrack items
# subTrackItems = flatten(track_item.parent().subTrackItems())
# speed = track_item.playbackSpeed()
def create_time_effects(otio_clip, speed):
otio_effect = None
# otio_effect = None
# # retime on track item
# if speed != 1.:
# # make effect
# otio_effect = otio.schema.LinearTimeWarp()
# otio_effect.name = "Speed"
# otio_effect.time_scalar = speed
# otio_effect.metadata = {}
# retime on track item
if speed != 1.:
# make effect
otio_effect = otio.schema.LinearTimeWarp()
otio_effect.name = "Speed"
otio_effect.time_scalar = speed
otio_effect.metadata = {}
# # freeze frame effect
# if speed == 0.:
# otio_effect = otio.schema.FreezeFrame()
# otio_effect.name = "FreezeFrame"
# otio_effect.metadata = {}
# freeze frame effect
if speed == 0.:
otio_effect = otio.schema.FreezeFrame()
otio_effect.name = "FreezeFrame"
otio_effect.metadata = {}
# if otio_effect:
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
# # loop through and get all Timewarps
# for effect in subTrackItems:
# if ((track_item not in effect.linkedItems())
# and (len(effect.linkedItems()) > 0)):
# continue
# # avoid all effect which are not TimeWarp and disabled
# if "TimeWarp" not in effect.name():
# continue
# if not effect.isEnabled():
# continue
# node = effect.node()
# name = node["name"].value()
# # solve effect class as effect name
# _name = effect.name()
# if "_" in _name:
# effect_name = re.sub(r"(?:_)[_0-9]+", "", _name) # more numbers
# else:
# effect_name = re.sub(r"\d+", "", _name) # one number
# metadata = {}
# # add knob to metadata
# for knob in ["lookup", "length"]:
# value = node[knob].value()
# animated = node[knob].isAnimated()
# if animated:
# value = [
# ((node[knob].getValueAt(i)) - i)
# for i in range(
# track_item.timelineIn(),
# track_item.timelineOut() + 1)
# ]
# metadata[knob] = value
# # make effect
# otio_effect = otio.schema.TimeEffect()
# otio_effect.name = name
# otio_effect.effect_name = effect_name
# otio_effect.metadata = metadata
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
pass
if otio_effect:
# add otio effect to clip effects
otio_clip.effects.append(otio_effect)
def _get_marker_color(flame_colour):
@ -260,6 +207,7 @@ def create_otio_markers(otio_item, item):
def create_otio_reference(clip_data, fps=None):
metadata = _get_metadata(clip_data)
duration = int(clip_data["source_duration"])
# get file info for path and start frame
frame_start = 0
@ -273,7 +221,6 @@ def create_otio_reference(clip_data, fps=None):
# get padding and other file infos
log.debug("_ path: {}".format(path))
frame_duration = clip_data["source_duration"]
otio_ex_ref_item = None
is_sequence = frame_number = utils.get_frame_from_filename(file_name)
@ -300,7 +247,7 @@ def create_otio_reference(clip_data, fps=None):
rate=fps,
available_range=create_otio_time_range(
frame_start,
frame_duration,
duration,
fps
)
)
@ -316,7 +263,7 @@ def create_otio_reference(clip_data, fps=None):
target_url=reformated_path,
available_range=create_otio_time_range(
frame_start,
frame_duration,
duration,
fps
)
)
@ -333,23 +280,50 @@ def create_otio_clip(clip_data):
segment = clip_data["PySegment"]
# calculate source in
media_info = MediaInfoFile(clip_data["fpath"])
media_info = MediaInfoFile(clip_data["fpath"], logger=log)
media_timecode_start = media_info.start_frame
media_fps = media_info.fps
# create media reference
media_reference = create_otio_reference(clip_data, media_fps)
# define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0
source_in = int(clip_data["source_in"]) - int(first_frame)
_clip_source_in = int(clip_data["source_in"])
_clip_source_out = int(clip_data["source_out"])
_clip_record_duration = int(clip_data["record_duration"])
# first solve if the reverse timing
speed = 1
if clip_data["source_in"] > clip_data["source_out"]:
source_in = _clip_source_out - int(first_frame)
source_out = _clip_source_in - int(first_frame)
speed = -1
else:
source_in = _clip_source_in - int(first_frame)
source_out = _clip_source_out - int(first_frame)
source_duration = (source_out - source_in + 1)
# secondly check if any change of speed
if source_duration != _clip_record_duration:
retime_speed = float(source_duration) / float(_clip_record_duration)
log.debug("_ retime_speed: {}".format(retime_speed))
speed *= retime_speed
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
log.debug("_ speed: {}".format(speed))
log.debug("_ source_duration: {}".format(source_duration))
log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))
# create media reference
media_reference = create_otio_reference(
clip_data, media_fps)
# create source range
source_range = create_otio_time_range(
source_in,
clip_data["record_duration"],
_clip_record_duration,
CTX.get_fps()
)
@ -363,6 +337,9 @@ def create_otio_clip(clip_data):
if MARKERS_INCLUDE:
create_otio_markers(otio_clip, segment)
if speed != 1:
create_time_effects(otio_clip, speed)
return otio_clip
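A worked example of the retime math above, using hypothetical frame values:
first_frame = 0
# reversed clip: original source_in=120 > source_out=100, so values swap
source_in, source_out = 100, 120
speed = -1
source_duration = source_out - source_in + 1   # 21 frames of media
record_duration = 10                           # 10 frames on the timeline
if source_duration != record_duration:
    speed *= float(source_duration) / record_duration
print(speed)  # -2.1 -> reversed and retimed to 2.1x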

View file

@ -268,6 +268,14 @@ class CreateShotClip(opfapi.Creator):
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2
},
"includeHandles": {
"value": False,
"type": "QCheckBox",
"label": "Include handles",
"target": "tag",
"toolTip": "By default handles are excluded", # noqa
"order": 3
}
}
}

View file

@ -2,7 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClip(opfapi.ClipLoader):
"""Load a subset to timeline as clip
@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
# settings
reel_group_name = "OpenPype_Reels"
reel_name = "Loaded"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@ -36,8 +36,8 @@ class LoadClip(opfapi.ClipLoader):
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping

View file

@ -2,6 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClipBatch(opfapi.ClipLoader):
@ -21,7 +22,7 @@ class LoadClipBatch(opfapi.ClipLoader):
# settings
reel_name = "OP_LoadedReel"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@ -39,8 +40,8 @@ class LoadClipBatch(opfapi.ClipLoader):
if not context["representation"]["context"].get("output"):
self.clip_name_template.replace("output", "representation")
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping

View file

@ -1,8 +1,12 @@
import re
import pyblish
import openpype
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
from openpype.pipeline.editorial import (
is_overlapping_otio_ranges,
get_media_range_with_retimes
)
# # developer reload modules
from pprint import pformat
@ -36,6 +40,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
for segment in selected_segments:
# get openpype tag data
marker_data = opfapi.get_segment_data_marker(segment)
self.log.debug("__ marker_data: {}".format(
pformat(marker_data)))
@ -58,24 +63,50 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
clip_name = clip_data["segment_name"]
self.log.debug("clip_name: {}".format(clip_name))
# get otio clip data
otio_data = self._get_otio_clip_instance_data(clip_data) or {}
self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
# get file path
file_path = clip_data["fpath"]
first_frame = opfapi.get_frame_from_filename(file_path) or 0
head, tail = self._get_head_tail(clip_data, first_frame)
head, tail = self._get_head_tail(
clip_data,
otio_data["otioClip"],
marker_data["handleStart"],
marker_data["handleEnd"]
)
# make sure 0 is used rather than None
# NOTE: `types.NoneType` is not importable before Python 3.10,
# so compare with `is None` instead
if head is None:
head = 0
if tail is None:
tail = 0
# make sure value is absolute
if head != 0:
head = abs(head)
if tail != 0:
tail = abs(tail)
# solve handles length
marker_data["handleStart"] = min(
marker_data["handleStart"], abs(head))
marker_data["handleStart"], head)
marker_data["handleEnd"] = min(
marker_data["handleEnd"], abs(tail))
marker_data["handleEnd"], tail)
workfile_start = self._set_workfile_start(marker_data)
with_audio = bool(marker_data.pop("audio"))
# add marker data to instance data
inst_data = dict(marker_data.items())
# add ocio_data to instance data
inst_data.update(otio_data)
asset = marker_data["asset"]
subset = marker_data["subset"]
@ -98,20 +129,15 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"families": families,
"publish": marker_data["publish"],
"fps": self.fps,
"workfileFrameStart": workfile_start,
"sourceFirstFrame": int(first_frame),
"path": file_path,
"flameAddTasks": self.add_tasks,
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks}
for task in self.add_tasks},
"representations": []
})
# get otio clip data
otio_data = self._get_otio_clip_instance_data(clip_data) or {}
self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
# add to instance data
inst_data.update(otio_data)
self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
# add resolution
@ -145,6 +171,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
if marker_data.get("reviewTrack") is not None:
instance.data["reviewAudio"] = True
@staticmethod
def _set_workfile_start(data):
include_handles = data.get("includeHandles")
workfile_start = data["workfileFrameStart"]
handle_start = data["handleStart"]
if include_handles:
workfile_start += handle_start
return workfile_start
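A worked example with hypothetical values, assuming the plugin class is importable: `workfileFrameStart` 1001 and `handleStart` 10 with "Include handles" enabled shift the cut-in to 1011, leaving 1001-1010 for the head handle:
data = {
    "includeHandles": True,
    "workfileFrameStart": 1001,
    "handleStart": 10,
}
assert CollectTimelineInstances._set_workfile_start(data) == 1011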
def _get_comment_attributes(self, segment):
comment = segment.comment.get_value()
@ -236,20 +273,24 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
return split_comments
def _get_head_tail(self, clip_data, first_frame):
def _get_head_tail(self, clip_data, otio_clip, handle_start, handle_end):
# calculate head and tail with forward compatibility
head = clip_data.get("segment_head")
tail = clip_data.get("segment_tail")
self.log.debug("__ head: `{}`".format(head))
self.log.debug("__ tail: `{}`".format(tail))
# HACK: it is here to serve versions below 2021.1
if not head:
head = int(clip_data["source_in"]) - int(first_frame)
if not tail:
tail = int(
clip_data["source_duration"] - (
head + clip_data["record_duration"]
)
)
if not any([head, tail]):
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
# retimed head and tail
head = int(retimed_attributes["handleStart"])
tail = int(retimed_attributes["handleEnd"])
return head, tail
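A minimal sketch of the head/tail normalization performed in `process` above, with hypothetical values:
head, tail = -8, None              # raw values coming from the segment
head = abs(head) if head else 0    # -> 8
tail = abs(tail) if tail else 0    # -> 0
handle_start = min(10, head)       # -> 8, clamped to available media
handle_end = min(10, tail)         # -> 0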
def _get_resolution_to_data(self, data, context):
@ -340,7 +381,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
continue
if otio_clip.name not in segment.name.get_value():
continue
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata

View file

@ -39,7 +39,8 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
"name": subset_name,
"asset": asset_doc["name"],
"subset": subset_name,
"family": "workfile"
"family": "workfile",
"families": []
}
# create instance with workfile

View file

@ -1,11 +1,13 @@
import os
import re
import tempfile
from pprint import pformat
from copy import deepcopy
import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
import flame
@ -21,6 +23,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
hosts = ["flame"]
# plugin defaults
keep_original_representation = False
default_presets = {
"thumbnail": {
"active": True,
@ -33,24 +37,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
"representation_add_range": False,
"representation_tags": ["thumbnail"],
"path_regex": ".*"
},
"ftrackpreview": {
"active": True,
"ext": "mov",
"xml_preset_file": "Apple iPad (1920x1080).xml",
"xml_preset_dir": "",
"export_type": "Movie",
"parsed_comment_attrs": False,
"colorspace_out": "Output - Rec.709",
"representation_add_range": True,
"representation_tags": [
"review",
"delete"
],
"path_regex": ".*"
}
}
keep_original_representation = False
# hide publisher during exporting
hide_ui_on_process = True
@ -59,11 +47,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if (
self.keep_original_representation
and "representations" not in instance.data
or not self.keep_original_representation
):
if not self.keep_original_representation:
# remove previous representations if not needed
instance.data["representations"] = []
# flame objects
@ -91,7 +77,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
handles = max(handle_start, handle_end)
# get media source range with handles
source_end_handles = instance.data["sourceEndH"]
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
@ -101,34 +86,18 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add default preset type for thumbnail and reviewable video
# update them with settings and override them in case the
# same names are found in there
export_presets = deepcopy(self.default_presets)
_preset_keys = [k.split('_')[0] for k in self.export_presets_mapping]
export_presets = {
k: v for k, v in deepcopy(self.default_presets).items()
if k not in _preset_keys
}
export_presets.update(self.export_presets_mapping)
# loop all preset names and
for unique_name, preset_config in export_presets.items():
modify_xml_data = {}
# get activating attributes
activated_preset = preset_config["active"]
filter_path_regex = preset_config.get("filter_path_regex")
self.log.info(
"Preset `{}` is active `{}` with filter `{}`".format(
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if the preset is not activated
if not activated_preset:
continue
# exclude by regex filter if any
if (
filter_path_regex
and not re.search(filter_path_regex, clip_path)
):
if self._should_skip(preset_config, clip_path, unique_name):
continue
# get all presets attributes
@ -146,20 +115,12 @@ class ExtractSubsetResources(openpype.api.Extractor):
)
)
# get attributes related to loading in integrate_batch_group
load_to_batch_group = preset_config.get(
"load_to_batch_group")
batch_group_loader_name = preset_config.get(
"batch_group_loader_name")
# convert to None if empty string
if batch_group_loader_name == "":
batch_group_loader_name = None
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles) + 1
source_end_handles - source_start_handles)
# define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1
@ -180,15 +141,15 @@ class ExtractSubsetResources(openpype.api.Extractor):
name_patern_xml = (
"<segment name>_<shot name>_{}.").format(
unique_name)
# change in/out marks to timeline in/out
in_mark = clip_in
out_mark = clip_out
else:
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))
# change in/out marks to timeline in/out
in_mark = clip_in
out_mark = clip_out
# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
@ -201,10 +162,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add any xml overrides collected from segment.comment
modify_xml_data.update(instance.data["xml_overrides"])
self.log.debug("__ modify_xml_data: {}".format(pformat(
modify_xml_data
)))
export_kwargs = {}
# validate xml preset file is filled
if preset_file == "":
@ -231,19 +188,34 @@ class ExtractSubsetResources(openpype.api.Extractor):
preset_dir, preset_file
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
# define kwargs based on preset type
if "thumbnail" in unique_name:
export_kwargs["thumb_frame_number"] = int(in_mark + (
modify_xml_data.update({
"video/posterFrame": True,
"video/useFrameAsPoster": 1,
"namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))
self.log.debug("__ in_mark: {}".format(in_mark))
self.log.debug("__ thumb_frame_number: {}".format(
thumb_frame_number
))
export_kwargs["thumb_frame_number"] = thumb_frame_number
else:
export_kwargs.update({
"in_mark": in_mark,
"out_mark": out_mark
})
self.log.debug("__ modify_xml_data: {}".format(
pformat(modify_xml_data)
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
# get and make export dir paths
export_dir_path = str(os.path.join(
staging_dir, unique_name
@ -254,18 +226,29 @@ class ExtractSubsetResources(openpype.api.Extractor):
opfapi.export_clip(
export_dir_path, exporting_clip, preset_path, **export_kwargs)
repr_name = unique_name
# make sure only the first name token is used if underscore is in name
# HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
if (
"thumbnail" in unique_name
or "ftrackreview" in unique_name
):
repr_name = unique_name.split("_")[0]
# create representation data
representation_data = {
"name": unique_name,
"outputName": unique_name,
"name": repr_name,
"outputName": repr_name,
"ext": extension,
"stagingDir": export_dir_path,
"tags": repre_tags,
"data": {
"colorspace": color_out
},
"load_to_batch_group": load_to_batch_group,
"batch_group_loader_name": batch_group_loader_name
"load_to_batch_group": preset_config.get(
"load_to_batch_group"),
"batch_group_loader_name": preset_config.get(
"batch_group_loader_name") or None
}
# collect all available content of export dir
@ -289,7 +272,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
if os.path.splitext(f)[-1] == ".mov"
]
# then try if thumbnail is not in unique name
or unique_name == "thumbnail"
or repr_name == "thumbnail"
):
representation_data["files"] = files.pop()
else:
@ -320,6 +303,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
self.log.debug("All representations: {}".format(
pformat(instance.data["representations"])))
def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
activated_preset = preset_config["active"]
filter_path_regex = preset_config.get("filter_path_regex")
self.log.info(
"Preset `{}` is active `{}` with filter `{}`".format(
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if the preset is not activated
if not activated_preset:
return True
# exclude by regex filter if any
if (
filter_path_regex
and not re.search(filter_path_regex, clip_path)
):
return True
def _unfolds_nested_folders(self, stage_dir, files_list, ext):
"""Unfolds nested folders
@ -408,8 +415,17 @@ class ExtractSubsetResources(openpype.api.Extractor):
"""
Import clip from path
"""
clips = flame.import_clips(path)
dir_path = os.path.dirname(path)
media_info = MediaInfoFile(path, logger=self.log)
file_pattern = media_info.file_pattern
self.log.debug("__ file_pattern: {}".format(file_pattern))
# rejoin the pattern to dir path
new_path = os.path.join(dir_path, file_pattern)
clips = flame.import_clips(new_path)
self.log.info("Clips [{}] imported from `{}`".format(clips, path))
if not clips:
self.log.warning("Path `{}` is not having any clips".format(path))
return None
@ -418,3 +434,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
"Path `{}` is containing more that one clip".format(path)
)
return clips[0]
def staging_dir(self, instance):
"""Provide a temporary directory in which to store extracted files
Upon calling this method the staging directory is stored inside
the instance.data['stagingDir']
"""
staging_dir = instance.data.get('stagingDir', None)
openpype_temp_dir = os.getenv("OPENPYPE_TEMP_DIR")
if not staging_dir:
if openpype_temp_dir and os.path.exists(openpype_temp_dir):
staging_dir = os.path.normpath(
tempfile.mkdtemp(
prefix="pyblish_tmp_",
dir=openpype_temp_dir
)
)
else:
staging_dir = os.path.normpath(
tempfile.mkdtemp(prefix="pyblish_tmp_")
)
instance.data['stagingDir'] = staging_dir
instance.context.data["cleanupFullPaths"].append(staging_dir)
return staging_dir
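A hedged behaviour sketch, assuming `OPENPYPE_TEMP_DIR` points to an existing location; the resulting directory name is hypothetical:
import os

os.environ["OPENPYPE_TEMP_DIR"] = "/mnt/pipeline/tmp"
# staging_dir(instance) would then create something like
# "/mnt/pipeline/tmp/pyblish_tmp_ab12cd" and register it in
# instance.context.data["cleanupFullPaths"] for later cleanup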

View file

@ -323,6 +323,8 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
def _get_shot_task_dir_path(self, instance, task_data):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame")
project_doc, asset_entity, task_data["name"], "flame", anatomy
)

View file

@ -3,9 +3,16 @@ import sys
import re
import contextlib
from bson.objectid import ObjectId
from Qt import QtGui
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_id,
get_representation_by_name,
get_representation_parents,
)
from openpype.pipeline import (
switch_container,
legacy_io,
@ -93,13 +100,16 @@ def switch_item(container,
raise ValueError("Must have at least one change provided to switch.")
# Collect any of current asset, subset and representation if not provided
# so we can use the original name from those.
project_name = legacy_io.active_project()
if any(not x for x in [asset_name, subset_name, representation_name]):
_id = ObjectId(container["representation"])
representation = legacy_io.find_one({
"type": "representation", "_id": _id
})
version, subset, asset, project = legacy_io.parenthood(representation)
repre_id = container["representation"]
representation = get_representation_by_id(project_name, repre_id)
repre_parent_docs = get_representation_parents(representation)
if repre_parent_docs:
version, subset, asset, _ = repre_parent_docs
else:
version = subset = asset = None
if asset_name is None:
asset_name = asset["name"]
@ -111,39 +121,26 @@ def switch_item(container,
representation_name = representation["name"]
# Find the new one
asset = legacy_io.find_one({
"name": asset_name,
"type": "asset"
})
asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset, ("Could not find asset in the database with the name "
"'%s'" % asset_name)
subset = legacy_io.find_one({
"name": subset_name,
"type": "subset",
"parent": asset["_id"]
})
subset = get_subset_by_name(
project_name, subset_name, asset["_id"], fields=["_id"]
)
assert subset, ("Could not find subset in the database with the name "
"'%s'" % subset_name)
version = legacy_io.find_one(
{
"type": "version",
"parent": subset["_id"]
},
sort=[('name', -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
assert version, "Could not find a version for {}.{}".format(
asset_name, subset_name
)
representation = legacy_io.find_one({
"name": representation_name,
"type": "representation",
"parent": version["_id"]}
representation = get_representation_by_name(
project_name, representation_name, version["_id"]
)
assert representation, ("Could not find representation in the database "
"with the name '%s'" % representation_name)

View file

@ -1,6 +1,7 @@
import os
import contextlib
from openpype.client import get_version_by_id
from openpype.pipeline import (
load,
legacy_io,
@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):
class FusionLoadSequence(load.LoaderPlugin):
"""Load image sequence into Fusion"""
families = ["imagesequence", "review", "render"]
families = ["imagesequence", "review", "render", "plate"]
representations = ["*"]
label = "Load sequence"
@ -211,10 +212,8 @@ class FusionLoadSequence(load.LoaderPlugin):
path = self._get_first_image(root)
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"

View file

@ -4,6 +4,11 @@ import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
@ -164,9 +169,9 @@ def update_frame_range(comp, representations):
"""
version_ids = [r["parent"] for r in representations]
versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
versions = list(versions)
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True):
# Assert asset name exists
# It is better to do this here then to wait till switch_shot does it
asset = legacy_io.find_one({"type": "asset", "name": asset_name})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = legacy_io.find_one({"type": "project"})
self._project = get_project(project_name)
# Go to comp
if not filepath:

View file

@ -7,6 +7,7 @@ from Qt import QtWidgets, QtCore
import qtawesome as qta
from openpype.client import get_assets
from openpype import style
from openpype.pipeline import (
install_host,
@ -142,7 +143,7 @@ class App(QtWidgets.QWidget):
# Clear any existing items
self._assets.clear()
asset_names = [a["name"] for a in self.collect_assets()]
asset_names = self.collect_asset_names()
completer = QtWidgets.QCompleter(asset_names)
self._assets.setCompleter(completer)
@ -165,8 +166,14 @@ class App(QtWidgets.QWidget):
items = glob.glob("{}/*.comp".format(directory))
return items
def collect_assets(self):
return list(legacy_io.find({"type": "asset"}, {"name": True}))
def collect_asset_names(self):
project_name = legacy_io.active_project()
asset_docs = get_assets(project_name, fields=["name"])
asset_names = {
asset_doc["name"]
for asset_doc in asset_docs
}
return list(asset_names)
def populate_comp_box(self, files):
"""Ensure we display the filename only but the path is stored as well

View file

@ -610,7 +610,8 @@ class ImageSequenceLoader(load.LoaderPlugin):
def update(self, container, representation):
node = container.pop("node")
version = legacy_io.find_one({"_id": representation["parent"]})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
files = []
for f in version["data"]["files"]:
files.append(

View file

@ -35,7 +35,11 @@ function Client() {
self.pack = function(num) {
var ascii='';
for (var i = 3; i >= 0; i--) {
ascii += String.fromCharCode((num >> (8 * i)) & 255);
var hex = ((num >> (8 * i)) & 255).toString(16);
if (hex.length < 2){
ascii += "0";
}
ascii += hex;
}
return ascii;
};
@ -279,19 +283,22 @@ function Client() {
};
self._send = function(message) {
var data = new QByteArray();
var outstr = new QDataStream(data, QIODevice.WriteOnly);
outstr.writeInt(0);
data.append('UTF-8');
outstr.device().seek(0);
outstr.writeInt(data.size() - 4);
var codec = QTextCodec.codecForUtfText(data);
var msg = codec.fromUnicode(message);
var l = msg.size();
var coded = new QByteArray('AH').append(self.pack(l));
coded = coded.append(msg);
self.socket.write(new QByteArray(coded));
self.logDebug('Sent.');
/** Harmony 21.1 doesn't have QDataStream anymore.
This means we aren't able to write bytes into QByteArray, so we had
to modify how content length is sent to the server.
Content length is sent as a string of 8 chars convertible into an
integer (instead of 0x00000001 [4 bytes] > "00000001" [8 chars]) */
var codec_name = new QByteArray().append("UTF-8");
var codec = QTextCodec.codecForName(codec_name);
var msg = codec.fromUnicode(message);
var l = msg.size();
var header = new QByteArray().append('AH').append(self.pack(l));
var coded = msg.prepend(header);
self.socket.write(coded);
self.logDebug('Sent.');
};
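A minimal Python round-trip sketch of the new header format, mirroring `Client.pack()` above and the server-side parsing shown in a later hunk (values hypothetical):
def pack_length(num):
    # 4 bytes rendered as 8 hex characters, highest byte first
    return "".join(
        "{:02x}".format((num >> (8 * i)) & 255)
        for i in range(3, -1, -1)
    )

header = b"AH" + pack_length(1234).encode()  # b'AH000004d2'
length = int(header[2:].decode(), 16)        # 1234, as the server reads it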
self.waitForLock = function() {
@ -351,7 +358,14 @@ function start() {
app.avalonClient = new Client();
app.avalonClient.socket.connectToHost(host, port);
}
var menuBar = QApplication.activeWindow().menuBar();
var mainWindow = null;
var widgets = QApplication.topLevelWidgets();
for (var i = 0 ; i < widgets.length; i++) {
if (widgets[i] instanceof QMainWindow){
mainWindow = widgets[i];
}
}
var menuBar = mainWindow.menuBar();
var actions = menuBar.actions();
app.avalonMenu = null;

View file

@ -2,10 +2,10 @@ import os
from pathlib import Path
import logging
from bson.objectid import ObjectId
import pyblish.api
from openpype import lib
from openpype.client import get_representation_by_id
from openpype.lib import register_event_callback
from openpype.pipeline import (
legacy_io,
@ -15,6 +15,7 @@ from openpype.pipeline import (
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.context_tools import get_current_project_asset
import openpype.hosts.harmony
import openpype.hosts.harmony.api as harmony
@ -50,7 +51,9 @@ def get_asset_settings():
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset()
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
@ -104,22 +107,20 @@ def check_inventory():
If it does it will colorize outdated nodes and display warning message
in Harmony.
"""
if not lib.any_outdated():
return
project_name = legacy_io.active_project()
outdated_containers = []
for container in ls():
representation = container['representation']
representation_doc = legacy_io.find_one(
{
"_id": ObjectId(representation),
"type": "representation"
},
projection={"parent": True}
representation_id = container['representation']
representation_doc = get_representation_by_id(
project_name, representation_id, fields=["parent"]
)
if representation_doc and not lib.is_latest(representation_doc):
outdated_containers.append(container)
if not outdated_containers:
return
# Colour nodes.
outdated_nodes = []
for container in outdated_containers:

View file

@ -88,21 +88,25 @@ class Server(threading.Thread):
"""
current_time = time.time()
while True:
self.log.info("wait ttt")
# Receive the data in small chunks and retransmit it
request = None
header = self.connection.recv(6)
header = self.connection.recv(10)
if len(header) == 0:
# null data received, socket is closing.
self.log.info(f"[{self.timestamp()}] Connection closing.")
break
if header[0:2] != b"AH":
self.log.error("INVALID HEADER")
length = struct.unpack(">I", header[2:])[0]
content_length_str = header[2:].decode()
length = int(content_length_str, 16)
data = self.connection.recv(length)
while (len(data) < length):
# we didn't receive everything on the first try, let's wait
# for all data.
self.log.info("Waiting for the rest of the data ...")
time.sleep(0.1)
if self.connection is None:
self.log.error(f"[{self.timestamp()}] "
@ -113,7 +117,7 @@ class Server(threading.Thread):
break
data += self.connection.recv(length - len(data))
self.log.debug("data:: {} {}".format(data, type(data)))
self.received += data.decode("utf-8")
pretty = self._pretty(self.received)
self.log.debug(

View file

@ -4,11 +4,10 @@ from pathlib import Path
import attr
import openpype.lib
import openpype.lib.abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.lib import get_formatted_current_time
from openpype.pipeline import legacy_io
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
import openpype.hosts.harmony.api as harmony
@ -20,8 +19,7 @@ class HarmonyRenderInstance(RenderInstance):
leadingZeros = attr.ib(default=3)
class CollectFarmRender(openpype.lib.abstract_collect_render.
AbstractCollectRender):
class CollectFarmRender(publish.AbstractCollectRender):
"""Gather all publishable renders."""
# https://docs.toonboom.com/help/harmony-17/premium/reference/node/output/write-node-image-formats.html

View file

@ -47,6 +47,6 @@ class ValidateAudio(pyblish.api.InstancePlugin):
formatting_data = {
"audio_url": audio_path
}
if os.path.isfile(audio_path):
if not os.path.isfile(audio_path):
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)

View file

@ -55,6 +55,10 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
def process(self, instance):
"""Plugin entry point."""
# TODO 'get_asset_settings' could expect asset document as argument
# which is available on 'context.data["assetEntity"]'
# - the same approach can be used in 'ValidateSceneSettingsRepair'
expected_settings = harmony.get_asset_settings()
self.log.info("scene settings from DB:".format(expected_settings))

View file

@ -27,7 +27,9 @@ from .lib import (
get_track_items,
get_current_project,
get_current_sequence,
get_timeline_selection,
get_current_track,
get_track_item_tags,
get_track_item_pype_tag,
set_track_item_pype_tag,
get_track_item_pype_data,
@ -80,7 +82,9 @@ __all__ = [
"get_track_items",
"get_current_project",
"get_current_sequence",
"get_timeline_selection",
"get_current_track",
"get_track_item_tags",
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",

View file

@ -109,8 +109,9 @@ def register_hiero_events():
# hiero.core.events.registerInterest("kShutdown", shutDown)
# hiero.core.events.registerInterest("kStartup", startupCompleted)
hiero.core.events.registerInterest(
("kSelectionChanged", "kTimeline"), selection_changed_timeline)
# INFO: was disabled because it was slowing down timeline operations
# hiero.core.events.registerInterest(
# ("kSelectionChanged", "kTimeline"), selection_changed_timeline)
# workfiles
try:

View file

@ -0,0 +1,85 @@
import logging
from scriptsmenu import scriptsmenu
from Qt import QtWidgets
log = logging.getLogger(__name__)
def _hiero_main_window():
"""Return Hiero's main window"""
for obj in QtWidgets.QApplication.topLevelWidgets():
if (obj.inherits('QMainWindow') and
obj.metaObject().className() == 'Foundry::UI::DockMainWindow'):
return obj
raise RuntimeError('Could not find HieroWindow instance')
def _hiero_main_menubar():
"""Retrieve the main menubar of the Hiero window"""
hiero_window = _hiero_main_window()
menubar = [i for i in hiero_window.children() if isinstance(
i,
QtWidgets.QMenuBar
)]
assert len(menubar) == 1, "Error, could not find menu bar!"
return menubar[0]
def find_scripts_menu(title, parent):
"""
Check if the menu exists with the given title in the parent
Args:
title (str): the title name of the scripts menu
parent (QtWidgets.QMenuBar): the menubar to check
Returns:
QtWidgets.QMenu or None
"""
menu = None
search = [i for i in parent.children() if
isinstance(i, scriptsmenu.ScriptsMenu)
and i.title() == title]
if search:
assert len(search) < 2, ("Multiple instances of menu '{}' "
"in menu bar".format(title))
menu = search[0]
return menu
def main(title="Scripts", parent=None, objectName=None):
"""Build the main scripts menu in Hiero
Args:
title (str): name of the menu in the application
parent (QtWidgets.QtMenuBar): the parent object for the menu
objectName (str): custom objectName for scripts menu
Returns:
scriptsmenu.ScriptsMenu instance
"""
hieromainbar = parent or _hiero_main_menubar()
try:
# check menu already exists
menu = find_scripts_menu(title, hieromainbar)
if not menu:
log.info("Attempting to build menu ...")
object_name = objectName or title.lower()
menu = scriptsmenu.ScriptsMenu(title=title,
parent=hieromainbar,
objectName=object_name)
except Exception as e:
log.error(e)
return
return menu

View file

@ -1,6 +1,8 @@
"""
Host specific functions where host api is connected
"""
from copy import deepcopy
import os
import re
import sys
@ -10,10 +12,16 @@ import shutil
import hiero
from Qt import QtWidgets
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from openpype.client import (
get_project,
get_versions,
get_last_versions,
get_representations,
)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from . import tags
try:
@ -89,13 +97,19 @@ def get_current_sequence(name=None, new=False):
if not sequence:
# if nothing found create new with input name
sequence = get_current_sequence(name, True)
elif not name and not new:
else:
# if name is none and new is False then return current open sequence
sequence = hiero.ui.activeSequence()
return sequence
def get_timeline_selection():
active_sequence = hiero.ui.activeSequence()
timeline_editor = hiero.ui.getTimelineEditor(active_sequence)
return list(timeline_editor.selection())
def get_current_track(sequence, name, audio=False):
"""
Get current track in context of active project.
@ -118,7 +132,7 @@ def get_current_track(sequence, name, audio=False):
# get track by name
track = None
for _track in tracks:
if _track.name() in name:
if _track.name() == name:
track = _track
if not track:
@ -126,13 +140,14 @@ def get_current_track(sequence, name, audio=False):
track = hiero.core.VideoTrack(name)
else:
track = hiero.core.AudioTrack(name)
sequence.addTrack(track)
return track
def get_track_items(
selected=False,
selection=False,
sequence_name=None,
track_item_name=None,
track_name=None,
@ -143,7 +158,7 @@ def get_track_items(
"""Get all available current timeline track items.
Attribute:
selected (bool)[optional]: return only selected items on timeline
selection (list)[optional]: list of selected track items
sequence_name (str)[optional]: return only clips from input sequence
track_item_name (str)[optional]: return only item with input name
track_name (str)[optional]: return only items from track name
@ -155,32 +170,34 @@ def get_track_items(
Return:
list or hiero.core.TrackItem: list of track items or single track item
"""
return_list = list()
track_items = list()
track_type = track_type or "video"
selection = selection or []
return_list = []
# get selected track items or all in active sequence
if selected:
if selection:
try:
selected_items = list(hiero.selection)
for item in selected_items:
if track_name and track_name in item.parent().name():
# filter only items fitting input track name
track_items.append(item)
elif not track_name:
# or add all if no track_name was defined
track_items.append(item)
for track_item in selection:
log.info("___ track_item: {}".format(track_item))
# make sure only trackitems are selected
if not isinstance(track_item, hiero.core.TrackItem):
continue
if _validate_all_attributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
log.info("___ valid trackitem: {}".format(track_item))
return_list.append(track_item)
except AttributeError:
pass
# check if any collected track items are
# `core.Hiero.Python.TrackItem` instance
if track_items:
any_track_item = track_items[0]
if not isinstance(any_track_item, hiero.core.TrackItem):
selected_items = []
# collect all available active sequence track items
if not track_items:
if not return_list:
sequence = get_current_sequence(name=sequence_name)
# get all available tracks from sequence
tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@ -191,42 +208,101 @@ def get_track_items(
if check_enabled and not track.isEnabled():
continue
# and all items in track
for item in track.items():
if check_tagged and not item.tags():
for track_item in track.items():
# make sure no subtrackitems are mistaken for track items
if not isinstance(track_item, hiero.core.TrackItem):
continue
# check if track item is enabled
if check_enabled:
if not item.isEnabled():
continue
if track_item_name:
if track_item_name in item.name():
return item
# make sure only track items with correct track names are added
if track_name and track_name in track.name():
# filter out only defined track_name items
track_items.append(item)
elif not track_name:
# or add all if no track_name is defined
track_items.append(item)
if _validate_all_attributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
return_list.append(track_item)
# filter out only track items with defined track_type
for track_item in track_items:
if track_type and track_type == "video" and isinstance(
return return_list
def _validate_all_attributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
def _validate_correct_name_track_item():
if track_item_name and track_item_name in track_item.name():
return True
elif not track_item_name:
return True
def _validate_tagged_track_item():
if check_tagged and track_item.tags():
return True
elif not check_tagged:
return True
def _validate_enabled_track_item():
if check_enabled and track_item.isEnabled():
return True
elif not check_enabled:
return True
def _validate_parent_track_item():
if track_name and track_name in track_item.parent().name():
# filter only items fitting input track name
return True
elif not track_name:
# or add all if no track_name was defined
return True
def _validate_type_track_item():
if track_type == "video" and isinstance(
track_item.parent(), hiero.core.VideoTrack):
# only video track items are allowed
return_list.append(track_item)
elif track_type and track_type == "audio" and isinstance(
return True
elif track_type == "audio" and isinstance(
track_item.parent(), hiero.core.AudioTrack):
# only audio track items are allowed
return_list.append(track_item)
elif not track_type:
# add all if no track_type is defined
return_list.append(track_item)
return True
# return output list but make sure all items are TrackItems
return [_i for _i in return_list
if type(_i) == hiero.core.TrackItem]
# run all validations and keep the track item only if all pass
return all([
_validate_enabled_track_item(),
_validate_type_track_item(),
_validate_tagged_track_item(),
_validate_parent_track_item(),
_validate_correct_name_track_item()
])
def get_track_item_tags(track_item):
"""
Get track item tags excluding openpype tag
Attributes:
trackItem (hiero.core.TrackItem): hiero object
Returns:
list: hiero.core.Tag objects except the openpype tag
"""
returning_tag_data = []
# get all tags from track item
_tags = track_item.tags()
if not _tags:
return []
# collect all tags which are not openpype tag
returning_tag_data.extend(
tag for tag in _tags
if tag.name() != self.pype_tag_name
)
return returning_tag_data
def get_track_item_pype_tag(track_item):
@ -245,7 +321,7 @@ def get_track_item_pype_tag(track_item):
return None
for tag in _tags:
# return only correct tag defined by global name
if tag.name() in self.pype_tag_name:
if tag.name() == self.pype_tag_name:
return tag
@ -266,7 +342,7 @@ def set_track_item_pype_tag(track_item, data=None):
"editable": "0",
"note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": {k: v for k, v in data.items()}
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_item_pype_tag(track_item)
@ -301,9 +377,9 @@ def get_track_item_pype_data(track_item):
return None
# get tag metadata attribute
tag_data = tag.metadata()
tag_data = deepcopy(dict(tag.metadata()))
# convert tag metadata to normal keys names and values to correct types
for k, v in dict(tag_data).items():
for k, v in tag_data.items():
key = k.replace("tag.", "")
try:
@ -324,7 +400,7 @@ def get_track_item_pype_data(track_item):
log.warning(msg)
value = v
data.update({key: value})
data[key] = value
return data
@ -407,7 +483,7 @@ def sync_avalon_data_to_workfile():
project.setProjectRoot(active_project_root)
# get project data from avalon db
project_doc = legacy_io.find_one({"type": "project"})
project_doc = get_project(project_name)
project_data = project_doc["data"]
log.debug("project_data: {}".format(project_data))
@ -497,7 +573,7 @@ class PyblishSubmission(hiero.exporters.FnSubmission.Submission):
from . import publish
# Add submission to Hiero module for retrieval in plugins.
hiero.submission = self
publish()
publish(hiero.ui.mainWindow())
def add_submission():
@ -527,7 +603,7 @@ class PublishAction(QtWidgets.QAction):
# from getting picked up when not using the "Export" dialog.
if hasattr(hiero, "submission"):
del hiero.submission
publish()
publish(hiero.ui.mainWindow())
def eventHandler(self, event):
# Add the Menu to the right-click menu
@ -893,32 +969,33 @@ def apply_colorspace_clips():
def is_overlapping(ti_test, ti_original, strict=False):
covering_exp = bool(
covering_exp = (
(ti_test.timelineIn() <= ti_original.timelineIn())
and (ti_test.timelineOut() >= ti_original.timelineOut())
)
inside_exp = bool(
if strict:
return covering_exp
inside_exp = (
(ti_test.timelineIn() >= ti_original.timelineIn())
and (ti_test.timelineOut() <= ti_original.timelineOut())
)
overlaying_right_exp = bool(
overlaying_right_exp = (
(ti_test.timelineIn() < ti_original.timelineOut())
and (ti_test.timelineOut() >= ti_original.timelineOut())
)
overlaying_left_exp = bool(
overlaying_left_exp = (
(ti_test.timelineOut() > ti_original.timelineIn())
and (ti_test.timelineIn() <= ti_original.timelineIn())
)
if not strict:
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))
else:
return covering_exp
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))
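Worked examples of the predicate above, assuming a hypothetical original item spanning 100-200 on the timeline:
# test 90-210  -> covering: True with strict=True and strict=False
# test 120-180 -> inside: True only with strict=False
# test 150-250 -> right overlap: True only with strict=False
# test 250-300 -> no overlap: False in both modes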
def get_sequence_pattern_and_padding(file):
@ -936,17 +1013,13 @@ def get_sequence_pattern_and_padding(file):
"""
foundall = re.findall(
r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
if foundall:
found = sorted(list(set(foundall[0])))[-1]
if "%" in found:
padding = int(re.findall(r"\d+", found)[-1])
else:
padding = len(found)
return found, padding
else:
if not foundall:
return None, None
found = sorted(list(set(foundall[0])))[-1]
padding = int(
re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
return found, padding
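Hedged examples of what the helper above returns for a few hypothetical file names:
# "plate.%04d.exr" -> ("%04d", 4)   printf style, padding from the digits
# "plate.####.exr" -> ("####", 4)   hash style, padding from the length
# "plate.1001.exr" -> ("1001", 4)   plain frame number before extension
# "plate.exr"      -> (None, None)  no sequence pattern found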
def sync_clip_name_to_data_asset(track_items_list):
@ -982,7 +1055,7 @@ def sync_clip_name_to_data_asset(track_items_list):
print("asset was changed in clip: {}".format(ti_name))
def check_inventory_versions():
def check_inventory_versions(track_items=None):
"""
Actual version color identifier of loaded containers
@ -993,40 +1066,68 @@ def check_inventory_versions():
"""
from . import parse_container
track_items = track_items or get_track_items()
# presets
clip_color_last = "green"
clip_color = "red"
# get all track items from current timeline
for track_item in get_track_items():
item_with_repre_id = []
repre_ids = set()
# Find all containers and collect it's node and representation ids
for track_item in track_items:
container = parse_container(track_item)
if container:
# get representation from io
representation = legacy_io.find_one({
"type": "representation",
"_id": ObjectId(container["representation"])
})
repre_id = container["representation"]
repre_ids.add(repre_id)
item_with_repre_id.append((track_item, repre_id))
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
# Skip if nothing was found
if not repre_ids:
return
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
project_name = legacy_io.active_project()
# Find representations based on found containers
repre_docs = get_representations(
project_name,
repre_ids=repre_ids,
fields=["_id", "parent"]
)
# Store representations by id and collect version ids
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
# Use stringed representation id to match value in containers
repre_id = str(repre_doc["_id"])
repre_docs_by_id[repre_id] = repre_doc
version_ids.add(repre_doc["parent"])
max_version = max(versions)
version_docs = get_versions(
project_name, version_ids, fields=["_id", "name", "parent"]
)
# Store versions by id and collect subset ids
version_docs_by_id = {}
subset_ids = set()
for version_doc in version_docs:
version_docs_by_id[version_doc["_id"]] = version_doc
subset_ids.add(version_doc["parent"])
# set clip colour
if version.get("name") == max_version:
track_item.source().binItem().setColor(clip_color_last)
else:
track_item.source().binItem().setColor(clip_color)
# Query last versions based on subset ids
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids=subset_ids, fields=["_id", "parent"]
)
for item in item_with_repre_id:
# Some python versions of nuke can't unpack a tuple in a for loop
track_item, repre_id = item
repre_doc = repre_docs_by_id[repre_id]
version_doc = version_docs_by_id[repre_doc["parent"]]
last_version_doc = last_versions_by_subset_id[version_doc["parent"]]
# Check if last version is same as current version
if version_doc["_id"] == last_version_doc["_id"]:
track_item.source().binItem().setColor(clip_color_last)
else:
track_item.source().binItem().setColor(clip_color)
def selection_changed_timeline(event):
@ -1038,29 +1139,31 @@ def selection_changed_timeline(event):
timeline_editor = event.sender
selection = timeline_editor.selection()
selection = [ti for ti in selection
if isinstance(ti, hiero.core.TrackItem)]
track_items = get_track_items(
selection=selection,
track_type="video",
check_enabled=True,
check_locked=True,
check_tagged=True
)
# run checking function
sync_clip_name_to_data_asset(selection)
# also mark old versions of loaded containers
check_inventory_versions()
sync_clip_name_to_data_asset(track_items)
def before_project_save(event):
track_items = get_track_items(
selected=False,
track_type="video",
check_enabled=True,
check_locked=True,
check_tagged=True)
check_tagged=True
)
# run checking function
sync_clip_name_to_data_asset(track_items)
# also mark old versions of loaded containers
check_inventory_versions()
check_inventory_versions(track_items)
def get_main_window():

View file

@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from . import tags
from openpype.settings import get_project_settings
log = Logger.get_logger(__name__)
@ -41,6 +42,7 @@ def menu_install():
Installing menu into Hiero
"""
from Qt import QtGui
from . import (
publish, launch_workfiles_app, reload_config,
@ -138,3 +140,30 @@ def menu_install():
exeprimental_action.triggered.connect(
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
)
def add_scripts_menu():
try:
from . import launchforhiero
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["hiero"]["scriptsmenu"]["definition"]
_menu = project_settings["hiero"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Hiero menu
studio_menu = launchforhiero.main(title=_menu.title())
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
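For reference, a hypothetical `scriptsmenu` definition this would consume; the keys follow the scriptsmenu configuration format and the values are assumptions:
config = [
    {
        "type": "action",
        "sourcetype": "python",
        "title": "Open Project Folder",
        "command": "import webbrowser;webbrowser.open('file:///tmp')",
        "tooltip": "Open the project folder in a browser",
    }
]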

View file

@ -132,7 +132,7 @@ def create_time_effects(otio_clip, track_item):
otio_effect = otio.schema.TimeEffect()
otio_effect.name = name
otio_effect.effect_name = effect_name
otio_effect.metadata = metadata
otio_effect.metadata.update(metadata)
# add otio effect to clip effects
otio_clip.effects.append(otio_effect)
@ -151,7 +151,7 @@ def create_otio_reference(clip):
padding = media_source.filenamePadding()
file_head = media_source.filenameHead()
is_sequence = not media_source.singleFile()
frame_duration = media_source.duration() - 1
frame_duration = media_source.duration()
fps = utils.get_rate(clip) or self.project_fps
extension = os.path.splitext(path)[-1]

View file

@ -48,6 +48,7 @@ def install():
# install menu
menu.menu_install()
menu.add_scripts_menu()
# register hiero events
events.register_hiero_events()
@ -143,6 +144,11 @@ def parse_container(track_item, validate=True):
"""
# convert tag metadata to normal keys names
data = lib.get_track_item_pype_data(track_item)
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return
if validate and data and data.get("schema"):
schema.validate(data)

View file

@ -1,4 +1,5 @@
import os
from pprint import pformat
import re
from copy import deepcopy
@ -9,6 +10,7 @@ import qargparse
import openpype.api as openpype
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
from . import lib
log = openpype.Logger().get_logger(__name__)
@ -400,7 +402,8 @@ class ClipLoader:
# inject asset data to representation dict
self._get_asset_data()
log.debug("__init__ self.data: `{}`".format(self.data))
log.info("__init__ self.data: `{}`".format(pformat(self.data)))
log.info("__init__ options: `{}`".format(pformat(options)))
# add active components to class
if self.new_sequence:
@ -482,7 +485,9 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
self.data["assetData"] = openpype.get_asset(asset_name)["data"]
asset_doc = get_current_project_asset(asset_name)
log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
self.data["assetData"] = asset_doc["data"]
def _make_track_item(self, source_bin_item, audio=False):
""" Create track item with """
@ -500,7 +505,7 @@ class ClipLoader:
track_item.setSource(clip)
track_item.setSourceIn(self.handle_start)
track_item.setTimelineIn(self.timeline_in)
track_item.setSourceOut(self.media_duration - self.handle_end)
track_item.setSourceOut((self.media_duration) - self.handle_end)
track_item.setTimelineOut(self.timeline_out)
track_item.setPlaybackSpeed(1)
self.active_track.addTrackItem(track_item)
@ -520,14 +525,18 @@ class ClipLoader:
self.handle_start = self.data["versionData"].get("handleStart")
self.handle_end = self.data["versionData"].get("handleEnd")
if self.handle_start is None:
self.handle_start = int(self.data["assetData"]["handleStart"])
self.handle_start = self.data["assetData"]["handleStart"]
if self.handle_end is None:
self.handle_end = int(self.data["assetData"]["handleEnd"])
self.handle_end = self.data["assetData"]["handleEnd"]
self.handle_start = int(self.handle_start)
self.handle_end = int(self.handle_end)
if self.sequencial_load:
last_track_item = lib.get_track_items(
sequence_name=self.active_sequence.name(),
track_name=self.active_track.name())
track_name=self.active_track.name()
)
if len(last_track_item) == 0:
last_timeline_out = 0
else:
@ -541,17 +550,12 @@ class ClipLoader:
self.timeline_in = int(self.data["assetData"]["clipIn"])
self.timeline_out = int(self.data["assetData"]["clipOut"])
log.debug("__ self.timeline_in: {}".format(self.timeline_in))
log.debug("__ self.timeline_out: {}".format(self.timeline_out))
# check if slate is included
# either in version data families or by calculating frame diff
slate_on = next(
# check iterate if slate is in families
(f for f in self.context["version"]["data"]["families"]
if "slate" in f),
# if nothing was found then use default None
# so other bool could be used
None) or bool(int(
(self.timeline_out - self.timeline_in + 1)
+ self.handle_start + self.handle_end) < self.media_duration)
slate_on = "slate" in self.context["version"]["data"]["families"]
log.debug("__ slate_on: {}".format(slate_on))
# if slate is on then remove the slate frame from beginning
if slate_on:
@ -572,7 +576,7 @@ class ClipLoader:
# there were some cases where Hiero was not creating it
source_bin_item = None
for item in self.active_bin.items():
if self.data["clip_name"] in item.name():
if self.data["clip_name"] == item.name():
source_bin_item = item
if not source_bin_item:
log.warning("Problem with created Source clip: `{}`".format(
@ -599,8 +603,8 @@ class Creator(LegacyCreator):
rename_index = None
def __init__(self, *args, **kwargs):
import openpype.hosts.hiero.api as phiero
super(Creator, self).__init__(*args, **kwargs)
import openpype.hosts.hiero.api as phiero
self.presets = openpype.get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
@ -609,7 +613,10 @@ class Creator(LegacyCreator):
self.sequence = phiero.get_current_sequence()
if (self.options or {}).get("useSelection"):
self.selected = phiero.get_track_items(selected=True)
timeline_selection = phiero.get_timeline_selection()
self.selected = phiero.get_track_items(
selection=timeline_selection
)
else:
self.selected = phiero.get_track_items()
@ -716,6 +723,10 @@ class PublishClip:
else:
self.tag_data.update({"reviewTrack": None})
log.debug("___ self.tag_data: {}".format(
pformat(self.tag_data)
))
# create pype tag on track_item and add data
lib.imprint(self.track_item, self.tag_data)

View file

@ -2,6 +2,7 @@ import re
import os
import hiero
from openpype.client import get_project, get_assets
from openpype.api import Logger
from openpype.pipeline import legacy_io
@ -86,7 +87,7 @@ def update_tag(tag, data):
# due to a Hiero bug we have to make sure keys which are not present in
# data are cleared to `None`
for _mk in mtd.keys():
for _mk in mtd.dict().keys():
if _mk.replace("tag.", "") not in data_mtd.keys():
mtd.setValue(_mk, str(None))
@ -141,7 +142,9 @@ def add_tags_to_workfile():
nks_pres_tags = tag_data()
# Get project task types.
tasks = legacy_io.find_one({"type": "project"})["config"]["tasks"]
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
tasks = project_doc["config"]["tasks"]
nks_pres_tags["[Tasks]"] = {}
log.debug("__ tasks: {}".format(tasks))
for task_type in tasks.keys():
@ -159,7 +162,9 @@ def add_tags_to_workfile():
# asset builds and shots.
if int(os.getenv("TAG_ASSETBUILD_STARTUP", 0)) == 1:
nks_pres_tags["[AssetBuilds]"] = {}
for asset in legacy_io.find({"type": "asset"}):
for asset in get_assets(
project_name, fields=["name", "data.entityType"]
):
if asset["data"]["entityType"] == "AssetBuild":
nks_pres_tags["[AssetBuilds]"][asset["name"]] = {
"editable": "1",

View file

@ -1,12 +1,12 @@
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id
)
from openpype.pipeline import (
legacy_io,
get_representation_path,
)
import openpype.hosts.hiero.api as phiero
# from openpype.hosts.hiero.api import plugin, lib
# reload(lib)
# reload(plugin)
# reload(phiero)
class LoadClip(phiero.SequenceLoader):
@ -106,13 +106,13 @@ class LoadClip(phiero.SequenceLoader):
name = container['name']
namespace = container['namespace']
track_item = phiero.get_track_items(
track_item_name=namespace)
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
version_data = version.get("data", {})
version_name = version.get("name", None)
track_item_name=namespace).pop()
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
version_data = version_doc.get("data", {})
version_name = version_doc.get("name", None)
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
file = get_representation_path(representation).replace("\\", "/")
@ -147,7 +147,7 @@ class LoadClip(phiero.SequenceLoader):
})
# update color of clip regarding the version order
self.set_item_color(track_item, version)
self.set_item_color(track_item, version_doc)
return phiero.update_container(track_item, data_imprint)
@ -157,7 +157,7 @@ class LoadClip(phiero.SequenceLoader):
# load clip to timeline and get main variables
namespace = container['namespace']
track_item = phiero.get_track_items(
track_item_name=namespace)
track_item_name=namespace).pop()
track = track_item.parent()
# remove track item from track
@ -170,21 +170,14 @@ class LoadClip(phiero.SequenceLoader):
cls.sequence = cls.track.parent()
@classmethod
def set_item_color(cls, track_item, version):
def set_item_color(cls, track_item, version_doc):
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
clip = track_item.source()
# define version name
version_name = version.get("name", None)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
# set clip colour
if version_name == max_version:
if version_doc["_id"] == last_version_doc["_id"]:
clip.binItem().setColor(cls.clip_color_last)
else:
clip.binItem().setColor(cls.clip_color)

View file

@ -4,16 +4,16 @@ from pyblish import api
class CollectClipTagTasks(api.InstancePlugin):
"""Collect Tags from selected track items."""
order = api.CollectorOrder
order = api.CollectorOrder - 0.077
label = "Collect Tag Tasks"
hosts = ["hiero"]
families = ['clip']
families = ["shot"]
def process(self, instance):
# gets tags
tags = instance.data["tags"]
tasks = dict()
tasks = {}
for tag in tags:
t_metadata = dict(tag.metadata())
t_family = t_metadata.get("tag.family", "")

View file

@ -1,5 +1,5 @@
import pyblish
import openpype
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from openpype.hosts.hiero import api as phiero
from openpype.hosts.hiero.api.otio import hiero_export
import hiero
@ -19,9 +19,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
def process(self, context):
self.otio_timeline = context.data["otioTimeline"]
timeline_selection = phiero.get_timeline_selection()
selected_timeline_items = phiero.get_track_items(
selected=True, check_tagged=True, check_enabled=True)
selection=timeline_selection,
check_tagged=True,
check_enabled=True
)
# only return enabled track items
if not selected_timeline_items:
@ -103,7 +106,10 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
# clip's effect
"clipEffectItems": subtracks,
"clipAnnotations": annotations
"clipAnnotations": annotations,
# add all additional tags
"tags": phiero.get_track_item_tags(track_item)
})
# otio clip data
@ -269,7 +275,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
parent_range = otio_audio.range_in_parent()
# if any overlapping clip is found then return True
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=False):
return True
@ -292,13 +298,13 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
for otio_clip in self.otio_timeline.each_clip():
track_name = otio_clip.parent().name
parent_range = otio_clip.range_in_parent()
if ti_track_name not in track_name:
if ti_track_name != track_name:
continue
if otio_clip.name not in track_item.name():
if otio_clip.name != track_item.name():
continue
self.log.debug("__ parent_range: {}".format(parent_range))
self.log.debug("__ timeline_range: {}".format(timeline_range))
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata
@ -314,7 +320,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int(track_item.sourceDuration() / speed)
frame_duration = int((track_item.duration() - 1) / speed)
fps = timeline.framerate().toFloat()
return hiero_export.create_otio_time_range(

View file

@ -16,7 +16,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
label = "Precollect Workfile"
order = pyblish.api.CollectorOrder - 0.5
order = pyblish.api.CollectorOrder - 0.491
def process(self, context):
@ -84,6 +84,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"colorspace": self.get_colorspace(project),
"fps": fps
}
self.log.debug("__ context_data: {}".format(pformat(context_data)))
context.data.update(context_data)
self.log.info("Creating instance: {}".format(instance))

View file

@ -1,4 +1,5 @@
from pyblish import api
from openpype.client import get_assets
from openpype.pipeline import legacy_io
@ -17,8 +18,9 @@ class CollectAssetBuilds(api.ContextPlugin):
hosts = ["hiero"]
def process(self, context):
project_name = legacy_io.active_project()
asset_builds = {}
for asset in legacy_io.find({"type": "asset"}):
for asset in get_assets(project_name):
if asset["data"]["entityType"] == "AssetBuild":
self.log.debug("Found \"{}\" in database.".format(asset))
asset_builds[asset["name"]] = asset

View file

@ -4,8 +4,9 @@ from contextlib import contextmanager
import six
from openpype.api import get_asset
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io
from openpype.pipeline.context_tools import get_current_project_asset
import hou
@ -15,7 +16,7 @@ log = logging.getLogger(__name__)
def get_asset_fps():
"""Return current asset fps."""
return get_asset()["data"].get("fps")
return get_current_project_asset()["data"].get("fps")
def set_id(node, unique_id, overwrite=False):
@ -74,16 +75,13 @@ def generate_ids(nodes, asset_id=None):
"""
if asset_id is None:
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
# Get the asset ID from the database for the asset of current context
asset_data = legacy_io.find_one(
{
"type": "asset",
"name": legacy_io.Session["AVALON_ASSET"]
},
projection={"_id": True}
)
assert asset_data, "No current asset found in Session"
asset_id = asset_data['_id']
asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset_doc, "No current asset found in Session"
asset_id = asset_doc['_id']
node_ids = []
for node in nodes:
@ -130,6 +128,8 @@ def get_output_parameter(node):
elif node_type == "arnold":
if node.evalParm("ar_ass_export_enable"):
return node.parm("ar_ass_file")
elif node_type == "Redshift_Proxy_Output":
return node.parm("RS_archive_file")
raise TypeError("Node type '%s' not supported" % node_type)
@ -428,26 +428,29 @@ def maintained_selection():
def reset_framerange():
"""Set frame range to current asset"""
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset = legacy_io.find_one({"name": asset_name, "type": "asset"})
# Get the asset document for the asset of the current context
asset_doc = get_asset_by_name(project_name, asset_name)
asset_data = asset_doc["data"]
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
frame_start = asset_data.get("edit_in")
frame_end = asset_data.get("edit_out")
if frame_start is None or frame_end is None:
log.warning("No edit information found for %s" % asset_name)
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
handles = asset_data.get("handles") or 0
handle_start = asset_data.get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset["data"].get("handleEnd")
handle_end = asset_data.get("handleEnd")
if handle_end is None:
handle_end = handles

View file

@ -6,6 +6,7 @@ import logging
from Qt import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io
from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget
@ -46,10 +47,8 @@ class SelectAssetDialog(QtWidgets.QWidget):
select_id = None
name = self._parm.eval()
if name:
db_asset = legacy_io.find_one(
{"name": name, "type": "asset"},
{"_id": True}
)
project_name = legacy_io.active_project()
db_asset = get_asset_by_name(project_name, name, fields=["_id"])
if db_asset:
select_id = db_asset["_id"]

View file

@ -1,6 +1,10 @@
# -*- coding: utf-8 -*-
import hou
from openpype.client import (
get_asset_by_name,
get_subsets,
)
from openpype.pipeline import legacy_io
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import plugin
@ -23,20 +27,16 @@ class CreateHDA(plugin.Creator):
# type: (str) -> bool
"""Check if existing subset name versions already exists."""
# Get all subsets of the current asset
asset_id = legacy_io.find_one(
{"name": self.data["asset"], "type": "asset"},
projection={"_id": True}
)['_id']
subset_docs = legacy_io.find(
{
"type": "subset",
"parent": asset_id
},
{"name": 1}
project_name = legacy_io.active_project()
asset_doc = get_asset_by_name(
project_name, self.data["asset"], fields=["_id"]
)
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["name"]
)
existing_subset_names = set(subset_docs.distinct("name"))
existing_subset_names_low = {
_name.lower() for _name in existing_subset_names
subset_doc["name"].lower()
for subset_doc in subset_docs
}
return subset_name.lower() in existing_subset_names_low

View file

@ -0,0 +1,48 @@
from openpype.hosts.houdini.api import plugin
class CreateRedshiftProxy(plugin.Creator):
"""Redshift Proxy"""
label = "Redshift Proxy"
family = "redshiftproxy"
icon = "magic"
def __init__(self, *args, **kwargs):
super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
# Remove "active"; we are checking the bypass flag of the nodes instead
self.data.pop("active", None)
# Redshift provides a `Redshift_Proxy_Output` node type which shows
# a limited set of parameters by default and is set to extract a
# Redshift Proxy. However, when "imprinting" the extra parameters
# needed for OpenPype it starts showing all its parameters again.
# It's unclear why this happens.
# TODO: Somehow enforce so that it only shows the original limited
# attributes of the Redshift_Proxy_Output node type
self.data.update({"node_type": "Redshift_Proxy_Output"})
def _process(self, instance):
"""Creator main entry point.
Args:
instance (hou.Node): Created Houdini instance.
"""
parms = {
"RS_archive_file": '$HIP/pyblish/`chs("subset")`.$F4.rs',
}
if self.nodes:
node = self.nodes[0]
path = node.path()
parms["RS_archive_sopPath"] = path
instance.setParms(parms)
# Lock some Avalon attributes
to_lock = ["family", "id"]
for name in to_lock:
parm = instance.parm(name)
parm.lock(True)
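The `RS_archive_file` expression above resolves per frame at render time; a quick check from a Houdini Python shell might look like this (the node path is a made-up example):

import hou

rop = hou.node("/out/Redshift_Proxy_Output1")  # illustrative path
# evalParm() resolves $HIP, $F4 and the `chs("subset")` reference
print(rop.evalParm("RS_archive_file"))
# e.g. /projects/shot/houdini/pyblish/pointcacheMain.0001.rs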

View file

@ -44,7 +44,8 @@ class BgeoLoader(load.LoaderPlugin):
# Explicitly create a file node
file_node = container.createNode("file", node_name=node_name)
file_node.setParms({"file": self.format_path(self.fname, is_sequence)})
file_node.setParms(
{"file": self.format_path(self.fname, context["representation"])})
# Set display on last node
file_node.setDisplayFlag(True)
@ -62,15 +63,15 @@ class BgeoLoader(load.LoaderPlugin):
)
@staticmethod
def format_path(path, is_sequence):
def format_path(path, representation):
"""Format file path correctly for single bgeo or bgeo sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
if not is_sequence:
filename = path
print("single")
else:
filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
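To illustrate the substitution above (the path is made up):

import re

path = "/publish/cache/pointsMain.0101.bgeo.sc"
print(re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path))
# -> /publish/cache/pointsMain.$F4.bgeo.sc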
@ -94,9 +95,9 @@ class BgeoLoader(load.LoaderPlugin):
# Update the file path
file_path = get_representation_path(representation)
file_path = self.format_path(file_path)
file_path = self.format_path(file_path, representation)
file_node.setParms({"fileName": file_path})
file_node.setParms({"file": file_path})
# Update attribute
node.setParms({"representation": str(representation["_id"])})

View file

@ -40,7 +40,8 @@ class VdbLoader(load.LoaderPlugin):
# Explicitly create a file node
file_node = container.createNode("file", node_name=node_name)
file_node.setParms({"file": self.format_path(self.fname)})
file_node.setParms(
{"file": self.format_path(self.fname, context["representation"])})
# Set display on last node
file_node.setDisplayFlag(True)
@ -57,30 +58,20 @@ class VdbLoader(load.LoaderPlugin):
suffix="",
)
def format_path(self, path):
@staticmethod
def format_path(path, representation):
"""Format file path correctly for single vdb or vdb sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
is_single_file = os.path.isfile(path)
if is_single_file:
if not is_sequence:
filename = path
else:
# The path points to the publish .vdb sequence folder so we
# find the first file in there that ends with .vdb
files = sorted(os.listdir(path))
first = next((x for x in files if x.endswith(".vdb")), None)
if first is None:
raise RuntimeError(
"Couldn't find first .vdb file of "
"sequence in: %s" % path
)
filename = re.sub(r"(.*)\.(\d+)\.vdb$", "\\1.$F4.vdb", path)
# Set <frame>.vdb to $F.vdb
first = re.sub(r"\.(\d+)\.vdb$", ".$F.vdb", first)
filename = os.path.join(path, first)
filename = os.path.join(path, filename)
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
@ -100,9 +91,9 @@ class VdbLoader(load.LoaderPlugin):
# Update the file path
file_path = get_representation_path(representation)
file_path = self.format_path(file_path)
file_path = self.format_path(file_path, representation)
file_node.setParms({"fileName": file_path})
file_node.setParms({"file": file_path})
# Update attribute
node.setParms({"representation": str(representation["_id"])})

View file

@ -20,7 +20,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
order = pyblish.api.CollectorOrder
label = "Collect Frames"
families = ["vdbcache", "imagesequence", "ass"]
families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]
def process(self, instance):

View file

@ -12,6 +12,7 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
"imagesequence",
"usd",
"usdrender",
"redshiftproxy"
]
hosts = ["houdini"]
@ -54,6 +55,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
else:
out_node = node.parm("loppath").evalAsNode()
elif node_type == "Redshift_Proxy_Output":
out_node = node.parm("RS_archive_sopPath").evalAsNode()
else:
raise ValueError(
"ROP node type '%s' is" " not supported." % node_type

View file

@ -1,5 +1,6 @@
import pyblish.api
from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.pipeline import legacy_io
import openpype.lib.usdlib as usdlib
@ -50,10 +51,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
self.log.debug("Add bootstrap for: %s" % bootstrap)
asset = legacy_io.find_one({
"name": instance.data["asset"],
"type": "asset"
})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, instance.data["asset"])
assert asset, "Asset must exist: %s" % asset
# Check which are not about to be created and don't exist yet
@ -70,7 +69,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
self.log.debug("Checking required bootstrap: %s" % required)
for subset in required:
if self._subset_exists(instance, subset, asset):
if self._subset_exists(project_name, instance, subset, asset):
continue
self.log.debug(
@ -93,7 +92,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
for key in ["asset"]:
new.data[key] = instance.data[key]
def _subset_exists(self, instance, subset, asset):
def _subset_exists(self, project_name, instance, subset, asset):
"""Return whether subset exists in current context or in database."""
# Allow it to be created during this publish session
context = instance.context
@ -106,9 +105,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
# Or, if they already exist in the database we can
# skip them too.
return bool(
legacy_io.find_one(
{"name": subset, "type": "subset", "parent": asset["_id"]},
{"_id": True}
)
)
if get_subset_by_name(
project_name, subset, asset["_id"], fields=["_id"]
):
return True
return False

View file

@ -0,0 +1,48 @@
import os
import pyblish.api
import openpype.api
from openpype.hosts.houdini.api.lib import render_rop
class ExtractRedshiftProxy(openpype.api.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract Redshift Proxy"
families = ["redshiftproxy"]
hosts = ["houdini"]
def process(self, instance):
ropnode = instance[0]
# Get the filename from the filename parameter
# `.evalParm(parameter)` will make sure all tokens are resolved
output = ropnode.evalParm("RS_archive_file")
staging_dir = os.path.normpath(os.path.dirname(output))
instance.data["stagingDir"] = staging_dir
file_name = os.path.basename(output)
self.log.info("Writing Redshift Proxy '%s' to '%s'" % (file_name,
staging_dir))
render_rop(ropnode)
output = instance.data["frames"]
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
"name": "rs",
"ext": "rs",
"files": output,
"stagingDir": staging_dir,
}
# A single frame may also be rendered without start/end frame.
if "frameStart" in instance.data and "frameEnd" in instance.data:
representation["frameStart"] = instance.data["frameStart"]
representation["frameEnd"] = instance.data["frameEnd"]
instance.data["representations"].append(representation)

View file

@ -7,6 +7,12 @@ from collections import deque
import pyblish.api
import openpype.api
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_name,
)
from openpype.pipeline import (
get_representation_path,
legacy_io,
@ -244,11 +250,14 @@ class ExtractUSDLayered(openpype.api.Extractor):
# Set up the dependency for publish if they have new content
# compared to previous publishes
project_name = legacy_io.active_project()
for dependency in active_dependencies:
dependency_fname = dependency.data["usdFilename"]
filepath = os.path.join(staging_dir, dependency_fname)
similar = self._compare_with_latest_publish(dependency, filepath)
similar = self._compare_with_latest_publish(
project_name, dependency, filepath
)
if similar:
# Deactivate this dependency
self.log.debug(
@ -268,7 +277,7 @@ class ExtractUSDLayered(openpype.api.Extractor):
instance.data["files"] = []
instance.data["files"].append(fname)
def _compare_with_latest_publish(self, dependency, new_file):
def _compare_with_latest_publish(self, project_name, dependency, new_file):
import filecmp
_, ext = os.path.splitext(new_file)
@ -276,35 +285,29 @@ class ExtractUSDLayered(openpype.api.Extractor):
# Compare this dependency with the latest published version
# to detect whether we should make this into a new publish
# version. If not, skip it.
asset = legacy_io.find_one(
{"name": dependency.data["asset"], "type": "asset"}
asset = get_asset_by_name(
project_name, dependency.data["asset"], fields=["_id"]
)
subset = legacy_io.find_one(
{
"name": dependency.data["subset"],
"type": "subset",
"parent": asset["_id"],
}
subset = get_subset_by_name(
project_name,
dependency.data["subset"],
asset["_id"],
fields=["_id"]
)
if not subset:
# Subset doesn't exist yet. Definitely new file
self.log.debug("No existing subset..")
return False
version = legacy_io.find_one(
{"type": "version", "parent": subset["_id"], },
sort=[("name", -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
if not version:
self.log.debug("No existing version..")
return False
representation = legacy_io.find_one(
{
"name": ext.lstrip("."),
"type": "representation",
"parent": version["_id"],
}
representation = get_representation_by_name(
project_name, ext.lstrip("."), version["_id"]
)
if not representation:
self.log.debug("No existing representation..")

View file

@ -2,6 +2,7 @@ import re
import pyblish.api
from openpype.client import get_subset_by_name
import openpype.api
from openpype.pipeline import legacy_io
@ -15,31 +16,23 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin):
label = "USD Shade model exists"
def process(self, instance):
asset = instance.data["asset"]
project_name = legacy_io.active_project()
asset_name = instance.data["asset"]
subset = instance.data["subset"]
# Assume shading variation starts after a dot separator
shade_subset = subset.split(".", 1)[0]
model_subset = re.sub("^usdShade", "usdModel", shade_subset)
asset_doc = legacy_io.find_one(
{"name": asset, "type": "asset"},
{"_id": True}
)
asset_doc = instance.data.get("assetEntity")
if not asset_doc:
raise RuntimeError("Asset does not exist: %s" % asset)
raise RuntimeError("Asset document is not filled on instance.")
subset_doc = legacy_io.find_one(
{
"name": model_subset,
"type": "subset",
"parent": asset_doc["_id"],
},
{"_id": True}
subset_doc = get_subset_by_name(
project_name, model_subset, asset_doc["_id"], fields=["_id"]
)
if not subset_doc:
raise RuntimeError(
"USD Model subset not found: "
"%s (%s)" % (model_subset, asset)
"%s (%s)" % (model_subset, asset_name)
)

View file

@ -4,19 +4,8 @@ import husdoutputprocessors.base as base
import colorbleed.usdlib as usdlib
from openpype.pipeline import (
legacy_io,
registered_root,
)
def _get_project_publish_template():
"""Return publish template from database for current project"""
project = legacy_io.find_one(
{"type": "project"},
projection={"config.template.publish": True}
)
return project["config"]["template"]["publish"]
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io, Anatomy
class AvalonURIOutputProcessor(base.OutputProcessorBase):
@ -35,7 +24,6 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
ever created in a Houdini session. Therefore be very careful
about what data gets put in this object.
"""
self._template = None
self._use_publish_paths = False
self._cache = dict()
@ -60,14 +48,11 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
return self._parameters
def beginSave(self, config_node, t):
self._template = _get_project_publish_template()
parm = self._parms["use_publish_paths"]
self._use_publish_paths = config_node.parm(parm).evalAtTime(t)
self._cache.clear()
def endSave(self):
self._template = None
self._use_publish_paths = None
self._cache.clear()
@ -138,22 +123,19 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
"""
PROJECT = legacy_io.Session["AVALON_PROJECT"]
asset_doc = legacy_io.find_one({
"name": asset,
"type": "asset"
})
anatomy = Anatomy(PROJECT)
asset_doc = get_asset_by_name(PROJECT, asset)
if not asset_doc:
raise RuntimeError("Invalid asset name: '%s'" % asset)
root = registered_root()
path = self._template.format(**{
"root": root,
formatted_anatomy = anatomy.format({
"project": PROJECT,
"asset": asset_doc["name"],
"subset": subset,
"representation": ext,
"version": 0 # stub version zero
})
path = formatted_anatomy["publish"]["path"]
# Remove the version folder
subset_folder = os.path.dirname(os.path.dirname(path))

View file

@ -5,11 +5,11 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .pipeline import (
install,
uninstall,
ls,
containerise,
MayaHost,
)
from .plugin import (
Creator,
@ -40,11 +40,11 @@ from .lib import (
__all__ = [
"install",
"uninstall",
"ls",
"containerise",
"MayaHost",
"Creator",
"Loader",

View file

@ -3,6 +3,7 @@ from __future__ import absolute_import
import pyblish.api
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io
from openpype.api import get_errored_instances_from_context
@ -74,12 +75,21 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action):
from . import lib
asset = instance.data['asset']
asset_id = legacy_io.find_one(
{"name": asset, "type": "asset"},
projection={"_id": True}
)['_id']
for node, _id in lib.generate_ids(nodes, asset_id=asset_id):
# This is expected to be called from validators, in which case
# 'assetEntity' should always be available, but a query by name is
# kept as a fallback.
asset_doc = instance.data.get("assetEntity")
if not asset_doc:
asset_name = instance.data["asset"]
project_name = legacy_io.active_project()
self.log.info((
"Asset is not stored on instance."
" Querying by name \"{}\" from project \"{}\""
).format(asset_name, project_name))
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["_id"]
)
for node, _id in lib.generate_ids(nodes, asset_id=asset_doc["_id"]):
lib.set_id(node, _id, overwrite=True)

View file

@ -2,6 +2,7 @@
"""OpenPype script commands to be used directly in Maya."""
from maya import cmds
from openpype.client import get_asset_by_name, get_project
from openpype.pipeline import legacy_io
@ -79,8 +80,9 @@ def reset_frame_range():
cmds.currentUnit(time=fps)
# Set frame start/end
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset = legacy_io.find_one({"name": asset_name, "type": "asset"})
asset = get_asset_by_name(project_name, asset_name)
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
@ -145,8 +147,9 @@ def reset_resolution():
resolution_height = 1080
# Get resolution from asset
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset_doc = legacy_io.find_one({"name": asset_name, "type": "asset"})
asset_doc = get_asset_by_name(project_name, asset_name)
resolution = _resolution_from_document(asset_doc)
# Try get resolution from project
if resolution is None:
@ -155,7 +158,7 @@ def reset_resolution():
"Asset \"{}\" does not have set resolution."
" Trying to get resolution from project"
).format(asset_name))
project_doc = legacy_io.find_one({"type": "project"})
project_doc = get_project(project_name)
resolution = _resolution_from_document(project_doc)
if resolution is None:

View file

@ -12,12 +12,17 @@ import contextlib
from collections import OrderedDict, defaultdict
from math import ceil
from six import string_types
import bson
from maya import cmds, mel
import maya.api.OpenMaya as om
from openpype import lib
from openpype.client import (
get_project,
get_asset_by_name,
get_subsets,
get_last_versions,
get_representation_by_name
)
from openpype.api import get_anatomy_settings
from openpype.pipeline import (
legacy_io,
@ -27,6 +32,7 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.context_tools import get_current_project_asset
from .commands import reset_frame_range
@ -1387,15 +1393,11 @@ def generate_ids(nodes, asset_id=None):
if asset_id is None:
# Get the asset ID from the database for the asset of current context
asset_data = legacy_io.find_one(
{
"type": "asset",
"name": legacy_io.Session["AVALON_ASSET"]
},
projection={"_id": True}
)
assert asset_data, "No current asset found in Session"
asset_id = asset_data['_id']
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset_doc, "No current asset found in Session"
asset_id = asset_doc['_id']
node_ids = []
for node in nodes:
@ -1548,13 +1550,15 @@ def list_looks(asset_id):
# # get all subsets with look leading in
# the name associated with the asset
subset = legacy_io.find({
"parent": bson.ObjectId(asset_id),
"type": "subset",
"name": {"$regex": "look*"}
})
return list(subset)
# TODO this should probably look for family 'look' instead of checking
# subset name which may not start with the family name
project_name = legacy_io.active_project()
subset_docs = get_subsets(project_name, asset_ids=[asset_id])
return [
subset_doc
for subset_doc in subset_docs
if subset_doc["name"].startswith("look")
]
def assign_look_by_version(nodes, version_id):
@ -1570,18 +1574,15 @@ def assign_look_by_version(nodes, version_id):
None
"""
# Get representations of shader file and relationships
look_representation = legacy_io.find_one({
"type": "representation",
"parent": version_id,
"name": "ma"
})
project_name = legacy_io.active_project()
json_representation = legacy_io.find_one({
"type": "representation",
"parent": version_id,
"name": "json"
})
# Get representations of shader file and relationships
look_representation = get_representation_by_name(
project_name, "ma", version_id
)
json_representation = get_representation_by_name(
project_name, "json", version_id
)
# See if representation is already loaded, if so reuse it.
host = registered_host()
@ -1639,42 +1640,54 @@ def assign_look(nodes, subset="lookDefault"):
parts = pype_id.split(":", 1)
grouped[parts[0]].append(node)
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, subset_names=[subset], asset_ids=grouped.keys()
)
subset_docs_by_asset_id = {
str(subset_doc["parent"]): subset_doc
for subset_doc in subset_docs
}
subset_ids = {
subset_doc["_id"]
for subset_doc in subset_docs_by_asset_id.values()
}
last_version_docs = get_last_versions(
project_name,
subset_ids=subset_ids,
fields=["_id", "name", "data.families"]
)
last_version_docs_by_subset_id = {
last_version_doc["parent"]: last_version_doc
for last_version_doc in last_version_docs
}
for asset_id, asset_nodes in grouped.items():
# create objectId for database
try:
asset_id = bson.ObjectId(asset_id)
except bson.errors.InvalidId:
log.warning("Asset ID is not compatible with bson")
continue
subset_data = legacy_io.find_one({
"type": "subset",
"name": subset,
"parent": asset_id
})
if not subset_data:
subset_doc = subset_docs_by_asset_id.get(str(asset_id))
if not subset_doc:
log.warning("No subset '{}' found for {}".format(subset, asset_id))
continue
# get last version
# with backwards compatibility
version = legacy_io.find_one(
{
"parent": subset_data['_id'],
"type": "version",
"data.families": {"$in": ["look"]}
},
sort=[("name", -1)],
projection={
"_id": True,
"name": True
}
)
last_version = last_version_docs_by_subset_id.get(subset_doc["_id"])
if not last_version:
log.warning((
"Not found last version for subset '{}' on asset with id {}"
).format(subset, asset_id))
continue
log.debug("Assigning look '{}' <v{:03d}>".format(subset,
version["name"]))
families = last_version.get("data", {}).get("families") or []
if "look" not in families:
log.warning((
"Last version for subset '{}' on asset with id {}"
" does not have look family"
).format(subset, asset_id))
continue
assign_look_by_version(asset_nodes, version['_id'])
log.debug("Assigning look '{}' <v{:03d}>".format(
subset, last_version["name"]))
assign_look_by_version(asset_nodes, last_version["_id"])
def apply_shaders(relationships, shadernodes, nodes):
@ -1737,8 +1750,11 @@ def apply_shaders(relationships, shadernodes, nodes):
log.warning("No nodes found for shading engine "
"'{0}'".format(id_shading_engines[0]))
continue
try:
cmds.sets(filtered_nodes, forceElement=id_shading_engines[0])
except RuntimeError as rte:
log.error("Error during shader assignment: {}".format(rte))
cmds.sets(filtered_nodes, forceElement=id_shading_engines[0])
# endregion
apply_attributes(attributes, nodes_by_id)
@ -1892,7 +1908,7 @@ def iter_parents(node):
"""
while True:
split = node.rsplit("|", 1)
if len(split) == 1:
if len(split) == 1 or not split[0]:
return
node = split[0]
@ -2123,9 +2139,11 @@ def set_scene_resolution(width, height, pixelAspect):
control_node = "defaultResolution"
current_renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer")
aspect_ratio_attr = "deviceAspectRatio"
# Give VRay a helping hand as it is slightly different from the rest
if current_renderer == "vray":
aspect_ratio_attr = "aspectRatio"
vray_node = "vraySettings"
if cmds.objExists(vray_node):
control_node = vray_node
@ -2138,7 +2156,8 @@ def set_scene_resolution(width, height, pixelAspect):
cmds.setAttr("%s.height" % control_node, height)
deviceAspectRatio = ((float(width) / float(height)) * float(pixelAspect))
cmds.setAttr("%s.deviceAspectRatio" % control_node, deviceAspectRatio)
cmds.setAttr(
"{}.{}".format(control_node, aspect_ratio_attr), deviceAspectRatio)
cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
@ -2152,9 +2171,10 @@ def reset_scene_resolution():
None
"""
project_doc = legacy_io.find_one({"type": "project"})
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
project_data = project_doc["data"]
asset_data = lib.get_asset()["data"]
asset_data = get_current_project_asset()["data"]
# Set project resolution
width_key = "resolutionWidth"
@ -2185,9 +2205,11 @@ def set_context_settings():
"""
# Todo (Wijnand): apply renderer and resolution of project
project_doc = legacy_io.find_one({"type": "project"})
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
project_data = project_doc["data"]
asset_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset(fields=["data.fps"])
asset_data = asset_doc.get("data", {})
# Set project fps
fps = asset_data.get("fps", project_data.get("fps", 25))
@ -2212,7 +2234,7 @@ def validate_fps():
"""
fps = lib.get_asset()["data"]["fps"]
fps = get_current_project_asset(fields=["data.fps"])["data"]["fps"]
# TODO(antirotor): This is hack as for framerates having multiple
# decimal places. FTrack is ceiling decimal values on
# fps to two decimal places but Maya 2019+ is reporting those fps
@ -2501,12 +2523,30 @@ def load_capture_preset(data=None):
temp_options2['multiSampleEnable'] = False
temp_options2['multiSampleCount'] = preset[id][key]
if key == 'renderDepthOfField':
temp_options2['renderDepthOfField'] = preset[id][key]
if key == 'ssaoEnable':
if preset[id][key] is True:
temp_options2['ssaoEnable'] = True
else:
temp_options2['ssaoEnable'] = False
if key == 'ssaoSamples':
temp_options2['ssaoSamples'] = preset[id][key]
if key == 'ssaoAmount':
temp_options2['ssaoAmount'] = preset[id][key]
if key == 'ssaoRadius':
temp_options2['ssaoRadius'] = preset[id][key]
if key == 'hwFogDensity':
temp_options2['hwFogDensity'] = preset[id][key]
if key == 'ssaoFilterRadius':
temp_options2['ssaoFilterRadius'] = preset[id][key]
if key == 'alphaCut':
temp_options2['transparencyAlgorithm'] = 5
temp_options2['transparencyQuality'] = 1
@ -2514,6 +2554,48 @@ def load_capture_preset(data=None):
if key == 'headsUpDisplay':
temp_options['headsUpDisplay'] = True
if key == 'fogging':
temp_options['fogging'] = preset[id][key] or False
if key == 'hwFogStart':
temp_options2['hwFogStart'] = preset[id][key]
if key == 'hwFogEnd':
temp_options2['hwFogEnd'] = preset[id][key]
if key == 'hwFogAlpha':
temp_options2['hwFogAlpha'] = preset[id][key]
if key == 'hwFogFalloff':
temp_options2['hwFogFalloff'] = int(preset[id][key])
if key == 'hwFogColorR':
temp_options2['hwFogColorR'] = preset[id][key]
if key == 'hwFogColorG':
temp_options2['hwFogColorG'] = preset[id][key]
if key == 'hwFogColorB':
temp_options2['hwFogColorB'] = preset[id][key]
if key == 'motionBlurEnable':
if preset[id][key] is True:
temp_options2['motionBlurEnable'] = True
else:
temp_options2['motionBlurEnable'] = False
if key == 'motionBlurSampleCount':
temp_options2['motionBlurSampleCount'] = preset[id][key]
if key == 'motionBlurShutterOpenFraction':
temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]
if key == 'lineAAEnable':
if preset[id][key] is True:
temp_options2['lineAAEnable'] = True
else:
temp_options2['lineAAEnable'] = False
else:
temp_options[str(key)] = preset[id][key]
@ -2523,7 +2605,24 @@ def load_capture_preset(data=None):
'gpuCacheDisplayFilter',
'multiSample',
'ssaoEnable',
'textureMaxResolution'
'ssaoSamples',
'ssaoAmount',
'ssaoFilterRadius',
'ssaoRadius',
'hwFogStart',
'hwFogEnd',
'hwFogAlpha',
'hwFogFalloff',
'hwFogColorR',
'hwFogColorG',
'hwFogColorB',
'hwFogDensity',
'textureMaxResolution',
'motionBlurEnable',
'motionBlurSampleCount',
'motionBlurShutterOpenFraction',
'lineAAEnable',
'renderDepthOfField'
]:
temp_options.pop(key, None)
@ -2953,8 +3052,9 @@ def update_content_on_context_change():
This will update scene content to match new asset on context change
"""
scene_sets = cmds.listSets(allSets=True)
new_asset = legacy_io.Session["AVALON_ASSET"]
new_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset()
new_asset = asset_doc["name"]
new_data = asset_doc["data"]
for s in scene_sets:
try:
if cmds.getAttr("{}.id".format(s)) == "pyblish.avalon.instance":
@ -3192,3 +3292,209 @@ def parent_nodes(nodes, parent=None):
node[0].setParent(node[1])
if delete_parent:
pm.delete(parent_node)
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)
def iter_visible_nodes_in_range(nodes, start, end):
"""Yield nodes that are visible in start-end frame range.
- Ignores intermediateObjects completely.
- Considers animated visibility attributes + upstream visibilities.
This is optimized for large scenes where some nodes in the parent
hierarchy might have some input connections to the visibilities,
e.g. keys, driven keys, connections to other attributes, etc.
This only does a single time step to `start` if the current frame is
not inside the frame range, on the assumption that changing the frame
isn't so slow that it beats querying all visibility plugs through
MDGContext on another frame.
Args:
nodes (list): List of node names to consider.
start (int, float): Start frame.
end (int, float): End frame.
Yields:
str: Node names visible at least once inside the frame range. These
are long (full path) names, so they may differ from the input names.
"""
# States we consider per node
VISIBLE = 1 # always visible
INVISIBLE = 0 # always invisible
ANIMATED = -1 # animated visibility
# Ensure integers
start = int(start)
end = int(end)
# Consider only non-intermediate dag nodes and use the "long" names.
nodes = cmds.ls(nodes, long=True, noIntermediate=True, type="dagNode")
if not nodes:
return
with maintained_time():
# Go to first frame of the range if the current time is outside
# the queried range so can directly query all visible nodes on
# that frame.
current_time = cmds.currentTime(query=True)
if not (start <= current_time <= end):
cmds.currentTime(start)
visible = cmds.ls(nodes, long=True, visible=True)
for node in visible:
yield node
if len(visible) == len(nodes) or start == end:
# All are visible on the start frame, so they are at least visible
# once inside the frame range.
return
# For the invisible ones check whether its visibility and/or
# any of its parents visibility attributes are animated. If so, it might
# get visible on other frames in the range.
def memodict(f):
"""Memoization decorator for a function taking a single argument.
See: http://code.activestate.com/recipes/
578231-probably-the-fastest-memoization-decorator-in-the-/
"""
class memodict(dict):
def __missing__(self, key):
ret = self[key] = f(key)
return ret
return memodict().__getitem__
@memodict
def get_state(node):
plug = node + ".visibility"
connections = cmds.listConnections(plug,
source=True,
destination=False)
if connections:
return ANIMATED
else:
return VISIBLE if cmds.getAttr(plug) else INVISIBLE
visible = set(visible)
invisible = [node for node in nodes if node not in visible]
always_invisible = set()
# Iterate over the nodes from short to long names so the highest
# nodes in the hierarchy are processed first; the collected data can
# then be reused from the cache for parent queries in later iterations.
node_dependencies = dict()
for node in sorted(invisible, key=len):
state = get_state(node)
if state == INVISIBLE:
always_invisible.add(node)
continue
# If not always invisible by itself we should go through and check
# the parents to see if any of them are always invisible. Parents
# that are "ANIMATED" make this node dependent on their visibility
# attribute, so we store them as dependencies.
dependencies = set()
if state == ANIMATED:
dependencies.add(node)
traversed_parents = list()
for parent in iter_parents(node):
if parent in always_invisible or get_state(parent) == INVISIBLE:
# When parent is always invisible then consider this parent,
# this node we started from and any of the parents we
# have traversed in-between to be *always invisible*
always_invisible.add(parent)
always_invisible.add(node)
always_invisible.update(traversed_parents)
break
# If we have traversed the parent before and its visibility
# was dependent on animated visibilities then we can just extend
# this node's dependencies with the parent's and stop iterating
# upwards.
parent_dependencies = node_dependencies.get(parent, None)
if parent_dependencies is not None:
dependencies.update(parent_dependencies)
break
state = get_state(parent)
if state == ANIMATED:
dependencies.add(parent)
traversed_parents.append(parent)
if node not in always_invisible and dependencies:
node_dependencies[node] = dependencies
if not node_dependencies:
return
# Now we only have to check the visibilities for nodes that have animated
# visibility dependencies upstream. The fastest way to check these
# visibility attributes across different frames is with Python api 2.0
# so we do that.
@memodict
def get_visibility_mplug(node):
"""Return api 2.0 MPlug with cached memoize decorator"""
sel = om.MSelectionList()
sel.add(node)
dag = sel.getDagPath(0)
return om.MFnDagNode(dag).findPlug("visibility", True)
@contextlib.contextmanager
def dgcontext(mtime):
"""MDGContext context manager"""
context = om.MDGContext(mtime)
try:
previous = context.makeCurrent()
yield context
finally:
previous.makeCurrent()
# We skip the first frame as we already used that frame to check for
# overall visibilities. The range ends at end + 1 to include the end frame.
scene_units = om.MTime.uiUnit()
for frame in range(start + 1, end + 1):
mtime = om.MTime(frame, unit=scene_units)
# Build a little cache so we don't query the same MPlug's value
# again if it was checked on this frame and also is a dependency
# for another node
frame_visibilities = {}
with dgcontext(mtime) as context:
for node, dependencies in list(node_dependencies.items()):
for dependency in dependencies:
dependency_visible = frame_visibilities.get(dependency,
None)
if dependency_visible is None:
mplug = get_visibility_mplug(dependency)
dependency_visible = mplug.asBool(context)
frame_visibilities[dependency] = dependency_visible
if not dependency_visible:
# One dependency is not visible, thus the
# node is not visible.
break
else:
# All dependencies are visible.
yield node
# Remove node with dependencies for next frame iterations
# because it was visible at least once.
node_dependencies.pop(node)
# If there are no more nodes to process, stop iterating frames.
if not node_dependencies:
break
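A minimal usage sketch for the generator above, assuming it runs inside Maya with a scene open (the node pattern is illustrative):

from maya import cmds

nodes = cmds.ls("char_*", long=True)  # illustrative selection
for node in iter_visible_nodes_in_range(nodes, start=1001, end=1100):
    # Nodes visible on at least one frame of the range
    print(node)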

View file

@ -1087,7 +1087,7 @@ class RenderProductsRenderman(ARenderProducts):
"d_tiff": "tif"
}
displays = get_displays()["displays"]
displays = get_displays(override_dst="render")["displays"]
for name, display in displays.items():
enabled = display["params"]["enable"]["value"]
if not enabled:
@ -1106,9 +1106,33 @@ class RenderProductsRenderman(ARenderProducts):
display["driverNode"]["type"], "exr")
for camera in cameras:
product = RenderProduct(productName=aov_name,
ext=extensions,
camera=camera)
# Create render product and set it as multipart only on
# display types supporting it. In all other cases, Renderman
# will create a separate output per channel.
if display["driverNode"]["type"] in ["d_openexr", "d_deepexr", "d_tiff"]: # noqa
product = RenderProduct(
productName=aov_name,
ext=extensions,
camera=camera,
multipart=True
)
else:
# this code should handle the case where no multipart
# capable format is selected. But since it involves
# shady logic to determine which channel becomes what,
# let's not do that, as all productions will use exr anyway.
"""
for channel in display['params']['displayChannels']['value']: # noqa
product = RenderProduct(
productName="{}_{}".format(aov_name, channel),
ext=extensions,
camera=camera,
multipart=False
)
"""
raise UnsupportedImageFormatException(
"Only exr, deep exr and tiff formats are supported.")
products.append(product)
return products
@ -1201,3 +1225,7 @@ class UnsupportedRendererException(Exception):
Raised when requesting data from unsupported renderer.
"""
class UnsupportedImageFormatException(Exception):
"""Custom exception to report unsupported output image format."""

View file

@ -1,13 +1,15 @@
import os
import sys
import errno
import logging
import contextlib
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
from openpype.settings import get_project_settings
from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import (
@ -28,6 +30,14 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("openpype.hosts.maya")
@ -40,50 +50,121 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
self._events = {}
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
name = "maya"
def install():
from openpype.settings import get_project_settings
def __init__(self):
super(MayaHost, self).__init__()
self._op_events = {}
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
def install(self):
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
log.info(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
self.log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
# Callbacks below are not required for headless mode; the `init` one,
# however, is important to load referenced Alembics correctly at rendertime.
if lib.IS_HEADLESS:
log.info(("Running in headless mode, skipping Maya "
"save/open/new callback installation.."))
return
if lib.IS_HEADLESS:
self.log.info((
"Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
_set_project()
_register_callbacks()
return
menu.install()
_set_project()
self._register_callbacks()
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
menu.install()
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
def _set_project():
@@ -107,44 +188,6 @@ def _set_project():
cmds.workspace(workdir, openWorkspace=True)
def _register_callbacks():
for handler, event in self._events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._events[handler] = None
except RuntimeError as e:
log.info(e)
self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save
)
self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized
)
self._events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
log.info("Installed event handler _on_scene_save..")
log.info("Installed event handler _before_scene_save..")
log.info("Installed event handler _on_scene_new..")
log.info("Installed event handler _on_maya_initialized..")
log.info("Installed event handler _on_scene_open..")
def _on_maya_initialized(*args):
emit_event("init")
@@ -476,7 +519,6 @@ def on_task_changed():
workdir = legacy_io.Session["AVALON_WORKDIR"]
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
_set_project()
# Set Maya fileDialog's start-dir to /scenes
View file
@@ -9,8 +9,9 @@ from openpype.pipeline import (
LoaderPlugin,
get_representation_path,
AVALON_CONTAINER_ID,
Anatomy,
)
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib
@@ -230,6 +231,10 @@ class ReferenceLoader(Loader):
self.log.debug("No alembic nodes found in {}".format(members))
try:
path = self.prepare_root_value(path,
representation["context"]
["project"]
["code"])
content = cmds.file(path,
loadReference=reference_node,
type=file_type,
@@ -319,6 +324,29 @@ class ReferenceLoader(Loader):
except RuntimeError:
pass
def prepare_root_value(self, file_url, project_name):
"""Replace root value with env var placeholder.
Use ${OPENPYPE_ROOT_WORK} (or any other root) instead of proper root
value when storing referenced url into a workfile.
Useful for remote workflows with SiteSync.
Args:
file_url (str)
project_name (str)
Returns:
(str)
"""
settings = get_project_settings(project_name)
use_env_var_as_root = (settings["maya"]
["maya-dirmap"]
["use_env_var_as_root"])
if use_env_var_as_root:
anatomy = Anatomy(project_name)
file_url = anatomy.replace_root_with_env_key(file_url, '${{{}}}')
return file_url
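# Substitution sketch (paths illustrative): with a work root of "P:/projects",
#     "P:/projects/foo/scenes/hero.ma"
# is stored as
#     "${OPENPYPE_ROOT_WORK}/foo/scenes/hero.ma"
# The template trick: '${{{}}}'.format("OPENPYPE_ROOT_WORK") evaluates to
# '${OPENPYPE_ROOT_WORK}', so each site can resolve the root from its own
# environment when the workfile is opened elsewhere.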
@staticmethod
def _organize_containers(nodes, container):
# type: (list, str) -> None
View file
@@ -6,10 +6,16 @@ import contextlib
import copy
import six
from bson.objectid import ObjectId
from maya import cmds
from openpype.client import (
get_version_by_name,
get_last_version_by_subset_id,
get_representation_by_id,
get_representation_by_name,
get_representation_parents,
)
from openpype.pipeline import (
schema,
legacy_io,
@@ -283,36 +289,32 @@ def update_package_version(container, version):
"""
# Versioning (from `core.maya.pipeline`)
current_representation = legacy_io.find_one({
"_id": ObjectId(container["representation"])
})
project_name = legacy_io.active_project()
current_representation = get_representation_by_id(
project_name, container["representation"]
)
assert current_representation is not None, "This is a bug"
version_, subset, asset, project = legacy_io.parenthood(
current_representation
version_doc, subset_doc, asset_doc, project_doc = (
get_representation_parents(project_name, current_representation)
)
if version == -1:
new_version = legacy_io.find_one({
"type": "version",
"parent": subset["_id"]
}, sort=[("name", -1)])
new_version = get_last_version_by_subset_id(
project_name, subset_doc["_id"]
)
else:
new_version = legacy_io.find_one({
"type": "version",
"parent": subset["_id"],
"name": version,
})
new_version = get_version_by_name(
project_name, version, subset_doc["_id"]
)
assert new_version is not None, "This is a bug"
# Get the new representation (new file)
new_representation = legacy_io.find_one({
"type": "representation",
"parent": new_version["_id"],
"name": current_representation["name"]
})
new_representation = get_representation_by_name(
project_name, current_representation["name"], new_version["_id"]
)
update_package(container, new_representation)
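# Usage sketch: -1 resolves to the latest version, any other value is looked
# up by exact version name:
#     update_package_version(container, -1)  # update to the latest version
#     update_package_version(container, 12)  # pin to version 12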
@@ -330,10 +332,10 @@ def update_package(set_container, representation):
"""
# Load the original package data
current_representation = legacy_io.find_one({
"_id": ObjectId(set_container['representation']),
"type": "representation"
})
project_name = legacy_io.active_project()
current_representation = get_representation_by_id(
project_name, set_container["representation"]
)
current_file = get_representation_path(current_representation)
assert current_file.endswith(".json")
@@ -380,6 +382,7 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms
set_namespace = set_container['namespace']
project_name = legacy_io.active_project()
# Update the setdress hierarchy alembic
set_root = get_container_transforms(set_container, root=True)
@@ -481,12 +484,12 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
# Check whether the conversion can be done by the Loader.
# They *must* use the same asset, subset and Loader for
# `update_container` to make sense.
old = legacy_io.find_one({
"_id": ObjectId(representation_current)
})
new = legacy_io.find_one({
"_id": ObjectId(representation_new)
})
old = get_representation_by_id(
project_name, representation_current
)
new = get_representation_by_id(
project_name, representation_new
)
is_valid = compare_representations(old=old, new=new)
if not is_valid:
log.error("Skipping: %s. See log for details.",
View file
@@ -38,3 +38,7 @@ class CreateAnimation(plugin.Creator):
# Default to exporting world-space
self.data["worldSpace"] = True
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
View file
@@ -0,0 +1,15 @@
from openpype.hosts.maya.api import plugin
class CreateMultiverseLook(plugin.Creator):
"""Create Multiverse Look"""
name = "mvLook"
label = "Multiverse Look"
family = "mvLook"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseLook, self).__init__(*args, **kwargs)
self.data["fileFormat"] = ["usda", "usd"]
self.data["publishMipMap"] = True
View file
@@ -2,11 +2,11 @@ from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsd(plugin.Creator):
"""Multiverse USD data"""
"""Create Multiverse USD Asset"""
name = "usdMain"
label = "Multiverse USD"
family = "usd"
name = "mvUsdMain"
label = "Multiverse USD Asset"
family = "mvUsd"
icon = "cubes"
def __init__(self, *args, **kwargs):
@@ -15,7 +15,8 @@ class CreateMultiverseUsd(plugin.Creator):
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
self.data["stripNamespaces"] = False
self.data["fileFormat"] = ["usd", "usda", "usdz"]
self.data["stripNamespaces"] = True
self.data["mergeTransformAndShape"] = False
self.data["writeAncestors"] = True
self.data["flattenParentXforms"] = False
@@ -36,15 +37,16 @@
self.data["writeUVs"] = True
self.data["writeColorSets"] = False
self.data["writeTangents"] = False
self.data["writeRefPositions"] = False
self.data["writeRefPositions"] = True
self.data["writeBlendShapes"] = False
self.data["writeDisplayColor"] = False
self.data["writeDisplayColor"] = True
self.data["writeSkinWeights"] = False
self.data["writeMaterialAssignment"] = False
self.data["writeHardwareShader"] = False
self.data["writeShadingNetworks"] = False
self.data["writeTransformMatrix"] = True
self.data["writeUsdAttributes"] = False
self.data["writeUsdAttributes"] = True
self.data["writeInstancesAsReferences"] = False
self.data["timeVaryingTopology"] = False
self.data["customMaterialNamespace"] = ''
self.data["numTimeSamples"] = 1
View file
@@ -4,9 +4,9 @@ from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsdComp(plugin.Creator):
"""Create Multiverse USD Composition"""
name = "usdCompositionMain"
name = "mvUsdCompositionMain"
label = "Multiverse USD Composition"
family = "usdComposition"
family = "mvUsdComposition"
icon = "cubes"
def __init__(self, *args, **kwargs):
@@ -15,9 +15,12 @@ class CreateMultiverseUsdComp(plugin.Creator):
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
# Order of `fileFormat` must match extract_multiverse_usd_comp.py
self.data["fileFormat"] = ["usda", "usd"]
self.data["stripNamespaces"] = False
self.data["mergeTransformAndShape"] = False
self.data["flattenContent"] = False
self.data["writeAsCompoundLayers"] = False
self.data["writePendingOverrides"] = False
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0
View file
@@ -2,11 +2,11 @@ from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsdOver(plugin.Creator):
"""Multiverse USD data"""
"""Create Multiverse USD Override"""
name = "usdOverrideMain"
name = "mvUsdOverrideMain"
label = "Multiverse USD Override"
family = "usdOverride"
family = "mvUsdOverride"
icon = "cubes"
def __init__(self, *args, **kwargs):
@@ -15,6 +15,8 @@ class CreateMultiverseUsdOver(plugin.Creator):
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
# Order of `fileFormat` must match extract_multiverse_usd_over.py
self.data["fileFormat"] = ["usda", "usd"]
self.data["writeAll"] = False
self.data["writeTransforms"] = True
self.data["writeVisibility"] = True
View file
@@ -28,3 +28,7 @@ class CreatePointCache(plugin.Creator):
# Add options for custom attributes
self.data["attr"] = ""
self.data["attrPrefix"] = ""
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
View file
@@ -15,13 +15,13 @@ from openpype.hosts.maya.api import (
from openpype.lib import requests_get
from openpype.api import (
get_system_settings,
get_project_settings,
get_asset)
get_project_settings)
from openpype.modules import ModulesManager
from openpype.pipeline import (
CreatorError,
legacy_io,
)
from openpype.pipeline.context_tools import get_current_project_asset
class CreateRender(plugin.Creator):
@@ -413,7 +413,7 @@ class CreateRender(plugin.Creator):
prefix,
type="string")
asset = get_asset()
asset = get_current_project_asset()
if renderer == "arnold":
# set format to exr
View file
@@ -15,6 +15,8 @@ class CreateReview(plugin.Creator):
keepImages = False
isolate = False
imagePlane = True
Width = 0
Height = 0
transparency = [
"preset",
"simple",
@@ -33,6 +35,8 @@
for key, value in animation_data.items():
data[key] = value
data["review_width"] = self.Width
data["review_height"] = self.Height
data["isolate"] = self.isolate
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane
View file
@@ -22,7 +22,8 @@ class CreateYetiCache(plugin.Creator):
# Add animation data without step and handles
anim_data = lib.collect_animation_data()
anim_data.pop("step")
anim_data.pop("handles")
anim_data.pop("handleStart")
anim_data.pop("handleEnd")
self.data.update(anim_data)
# Add samples
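# Data sketch: collect_animation_data() is assumed to return a mapping like
#     {"frameStart": 1001, "frameEnd": 1100,
#      "handleStart": 10, "handleEnd": 10, "step": 1.0}
# so after the three pops only the plain frame range reaches self.data.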
View file
@@ -1,6 +1,10 @@
import re
import json
from bson.objectid import ObjectId
from openpype.client import (
get_representation_by_id,
get_representations
)
from openpype.pipeline import (
InventoryAction,
get_representation_context,
@@ -31,6 +35,7 @@ class ImportModelRender(InventoryAction):
def process(self, containers):
from maya import cmds
project_name = legacy_io.active_project()
for container in containers:
con_name = container["objectName"]
nodes = []
@@ -40,9 +45,9 @@
else:
nodes.append(n)
repr_doc = legacy_io.find_one({
"_id": ObjectId(container["representation"]),
})
repr_doc = get_representation_by_id(
project_name, container["representation"], fields=["parent"]
)
version_id = repr_doc["parent"]
print("Importing render sets for model %r" % con_name)
@@ -63,26 +68,38 @@
from maya import cmds
project_name = legacy_io.active_project()
repre_docs = get_representations(
project_name, version_ids=[version_id], fields=["_id", "name"]
)
# Get representations of shader file and relationships
look_repr = legacy_io.find_one({
"type": "representation",
"parent": version_id,
"name": {"$regex": self.scene_type_regex},
})
if not look_repr:
json_repre = None
look_repres = []
scene_type_regex = re.compile(self.scene_type_regex)
for repre_doc in repre_docs:
repre_name = repre_doc["name"]
if repre_name == self.look_data_type:
json_repre = repre_doc
continue
if scene_type_regex.fullmatch(repre_name):
look_repres.append(repre_doc)
# QUESTION should we care if there is more than one look
# representation? (since it's based on regex match)
look_repre = None
if look_repres:
look_repre = look_repres[0]
# QUESTION shouldn't the json representation be validated too?
if not look_repre:
print("No model render sets for this model version..")
return
json_repr = legacy_io.find_one({
"type": "representation",
"parent": version_id,
"name": self.look_data_type,
})
context = get_representation_context(look_repr["_id"])
context = get_representation_context(look_repre["_id"])
maya_file = self.filepath_from_context(context)
context = get_representation_context(json_repr["_id"])
context = get_representation_context(json_repre["_id"])
json_file = self.filepath_from_context(context)
# Import the look file
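# Matching sketch: assuming self.scene_type_regex is a pattern such as
# r"ma|mb", representation names are tested with fullmatch:
#     scene_type_regex = re.compile(r"ma|mb")
#     scene_type_regex.fullmatch("ma")    # match -> treated as look scene
#     scene_type_regex.fullmatch("json")  # None  -> the relationships file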
View file
@@ -35,8 +35,9 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# hero_001 (abc)
# asset_counter{optional}
nodes = cmds.file(self.fname,
file_url = self.prepare_root_value(self.fname,
context["project"]["code"])
nodes = cmds.file(file_url,
namespace=namespace,
sharedReferenceFile=False,
groupReference=True,
View file
@@ -64,9 +64,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
path = os.path.join(publish_folder, filename)
proxyPath = proxyPath_base + ".ma"
nodes = cmds.file(proxyPath,
file_url = self.prepare_root_value(proxyPath,
context["project"]["code"])
nodes = cmds.file(file_url,
namespace=namespace,
reference=True,
returnNewNodes=True,
@@ -123,7 +125,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
assert os.path.exists(proxyPath), "%s does not exist." % proxyPath
try:
content = cmds.file(proxyPath,
file_url = self.prepare_root_value(proxyPath,
representation["context"]
["project"]
["code"])
content = cmds.file(file_url,
loadReference=reference_node,
type="mayaAscii",
returnNewNodes=True)
View file
@@ -1,5 +1,10 @@
from maya import cmds, mel
from openpype.client import (
get_asset_by_id,
get_subset_by_id,
get_version_by_id,
)
from openpype.pipeline import (
legacy_io,
load,
@@ -65,9 +70,16 @@
)
# Set frame range.
version = legacy_io.find_one({"_id": representation["parent"]})
subset = legacy_io.find_one({"_id": version["parent"]})
asset = legacy_io.find_one({"_id": subset["parent"]})
project_name = legacy_io.active_project()
version = get_version_by_id(
project_name, representation["parent"], fields=["parent"]
)
subset = get_subset_by_id(
project_name, version["parent"], fields=["parent"]
)
asset = get_asset_by_id(
project_name, subset["parent"], fields=["parent"]
)
audio_node.sourceStart.set(1 - asset["data"]["frameStart"])
audio_node.sourceEnd.set(asset["data"]["frameEnd"])
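# Frame math sketch: for an asset with frameStart=1001 and frameEnd=1100,
# sourceStart becomes 1 - 1001 = -1000 and sourceEnd becomes 1100, which
# offsets the clip so it plays back in sync with the asset's frame range.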
View file
@@ -10,7 +10,7 @@ from openpype.api import get_project_settings
class GpuCacheLoader(load.LoaderPlugin):
"""Load Alembic as gpuCache"""
families = ["model"]
families = ["model", "animation", "pointcache"]
representations = ["abc"]
label = "Import Gpu Cache"
View file
@@ -1,5 +1,10 @@
from Qt import QtWidgets, QtCore
from openpype.client import (
get_asset_by_id,
get_subset_by_id,
get_version_by_id,
)
from openpype.pipeline import (
legacy_io,
load,
@@ -216,9 +221,16 @@
)
# Set frame range.
version = legacy_io.find_one({"_id": representation["parent"]})
subset = legacy_io.find_one({"_id": version["parent"]})
asset = legacy_io.find_one({"_id": subset["parent"]})
project_name = legacy_io.active_project()
version = get_version_by_id(
project_name, representation["parent"], fields=["parent"]
)
subset = get_subset_by_id(
project_name, version["parent"], fields=["parent"]
)
asset = get_asset_by_id(
project_name, subset["parent"], fields=["parent"]
)
start_frame = asset["data"]["frameStart"]
end_frame = asset["data"]["frameEnd"]
image_plane_shape.frameOffset.set(1 - start_frame)
View file
@@ -5,6 +5,7 @@ from collections import defaultdict
from Qt import QtWidgets
from openpype.client import get_representation_by_name
from openpype.pipeline import (
legacy_io,
get_representation_path,
@@ -31,7 +32,9 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
import maya.cmds as cmds
with lib.maintained_selection():
nodes = cmds.file(self.fname,
file_url = self.prepare_root_value(self.fname,
context["project"]["code"])
nodes = cmds.file(file_url,
namespace=namespace,
reference=True,
returnNewNodes=True)
@@ -73,11 +76,10 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
shader_nodes = cmds.ls(members, type='shadingEngine')
nodes = set(self._get_nodes_with_shader(shader_nodes))
json_representation = legacy_io.find_one({
"type": "representation",
"parent": representation['parent'],
"name": "json"
})
project_name = legacy_io.active_project()
json_representation = get_representation_by_name(
project_name, "json", representation["parent"]
)
# Load relationships
shader_relation = get_representation_path(json_representation)
View file
@@ -14,13 +14,13 @@ from openpype.hosts.maya.api.pipeline import containerise
class MultiverseUsdLoader(load.LoaderPlugin):
"""Load the USD by Multiverse"""
"""Read USD data in a Multiverse Compound"""
families = ["model", "usd", "usdComposition", "usdOverride",
families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
"pointcache", "animation"]
representations = ["usd", "usda", "usdc", "usdz", "abc"]
label = "Read USD by Multiverse"
label = "Load USD to Multiverse"
order = -10
icon = "code-fork"
color = "orange"
View file
@@ -51,7 +51,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
with maintained_selection():
cmds.loadPlugin("AbcImport.mll", quiet=True)
nodes = cmds.file(self.fname,
file_url = self.prepare_root_value(self.fname,
context["project"]["code"])
nodes = cmds.file(file_url,
namespace=namespace,
sharedReferenceFile=False,
reference=True,
View file
@@ -0,0 +1,139 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
)
# TODO aiVolume doesn't automatically set velocity fps correctly, set manual?
class LoadVDBtoArnold(load.LoaderPlugin):
"""Load OpenVDB for Arnold in aiVolume"""
families = ["vdbcache"]
representations = ["vdb"]
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from maya import cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
except KeyError:
family = "vdbcache"
# Check if the Arnold plugin is available on this machine
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255)
)
# Create aiVolume
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
self._set_path(grid_node,
path=self.fname,
representation=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the aiVolume node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the aiVolume path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")
View file
@@ -1,11 +1,21 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import load
from openpype.pipeline import (
load,
get_representation_path
)
class LoadVDBtoRedShift(load.LoaderPlugin):
"""Load OpenVDB in a Redshift Volume Shape"""
"""Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
families = ["vdbcache"]
representations = ["vdb"]
@@ -55,7 +65,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
root = cmds.createNode("transform", name=label)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
@@ -74,9 +84,9 @@
name="{}RVSShape".format(label),
parent=root)
cmds.setAttr("{}.fileName".format(volume_node),
self.fname,
type="string")
self._set_path(volume_node,
path=self.fname,
representation=context["representation"])
nodes = [root, volume_node]
self[:] = nodes
@@ -87,3 +97,56 @@
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the RedshiftVolumeShape
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the RedshiftVolumeShape
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, representation):
self.update(container, representation)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")
View file
@@ -7,10 +7,9 @@ loader will use them instead of native vray vrmesh format.
"""
import os
from bson.objectid import ObjectId
import maya.cmds as cmds
from openpype.client import get_representation_by_name
from openpype.api import get_project_settings
from openpype.pipeline import (
legacy_io,
@@ -185,12 +184,8 @@ class VRayProxyLoader(load.LoaderPlugin):
"""
self.log.debug(
"Looking for abc in published representations of this version.")
abc_rep = legacy_io.find_one({
"type": "representation",
"parent": ObjectId(version_id),
"name": "abc"
})
project_name = legacy_io.active_project()
abc_rep = get_representation_by_name(project_name, "abc", version_id)
if abc_rep:
self.log.debug("Found, we'll link alembic to vray proxy.")
file_name = get_representation_path(abc_rep)
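# Fallback sketch: get_representation_by_name returns None when the version
# was published without an "abc" representation:
#     abc_rep = get_representation_by_name(project_name, "abc", version_id)
#     if abc_rep is None:
#         ...  # fall back to the native vrmesh file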
Some files were not shown because too many files have changed in this diff.