Merge branch 'develop' into bugfix/OP-2913-Nuke-Slate-no-timecode
commit 9867f00f20
266 changed files with 8543 additions and 4528 deletions
4 .github/workflows/prerelease.yml (vendored)
@@ -69,16 +69,14 @@ jobs:
        run: |
          git config user.email ${{ secrets.CI_EMAIL }}
          git config user.name ${{ secrets.CI_USER }}
          cd repos/avalon-core
          git checkout main
          git pull
          cd ../..
          git add .
          git commit -m "[Automated] Bump version"
          tag_name="CI/${{ steps.version.outputs.next_tag }}"
          echo $tag_name
          git tag -a $tag_name -m "nightly build"

      - name: Push to protected main branch
        uses: CasperWA/push-protected@v2.10.0
        with:
271 CHANGELOG.md

@@ -1,166 +1,167 @@
# Changelog

## [3.10.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.10.1-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.4...HEAD)

### 📖 Documentation

- Docs: add all-contributors config and initial list [\#3094](https://github.com/pypeclub/OpenPype/pull/3094)
- Nuke docs with videos [\#3052](https://github.com/pypeclub/OpenPype/pull/3052)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.10.0...HEAD)

**🚀 Enhancements**

- Standalone publisher: add support for bgeo and vdb [\#3080](https://github.com/pypeclub/OpenPype/pull/3080)
- Update collect\_render.py [\#3055](https://github.com/pypeclub/OpenPype/pull/3055)
- SiteSync: Added compute\_resource\_sync\_sites to sync\_server\_module [\#2983](https://github.com/pypeclub/OpenPype/pull/2983)
- TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
- Support for Unreal 5 [\#3122](https://github.com/pypeclub/OpenPype/pull/3122)

**🐛 Bug fixes**

- RoyalRender Control Submission - AVALON\_APP\_NAME default [\#3091](https://github.com/pypeclub/OpenPype/pull/3091)
- Ftrack: Update Create Folders action [\#3089](https://github.com/pypeclub/OpenPype/pull/3089)
- Project Manager: Avoid unnecessary updates of asset documents [\#3083](https://github.com/pypeclub/OpenPype/pull/3083)
- Standalone publisher: Fix plugins install [\#3077](https://github.com/pypeclub/OpenPype/pull/3077)
- General: Extract review sequence is not converted with same names [\#3076](https://github.com/pypeclub/OpenPype/pull/3076)
- Webpublisher: Use variant value [\#3068](https://github.com/pypeclub/OpenPype/pull/3068)
- Nuke: Add aov matching even for remainder and prerender [\#3060](https://github.com/pypeclub/OpenPype/pull/3060)
- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)
- Unreal: Fixed Camera loading in UE5 [\#3238](https://github.com/pypeclub/OpenPype/pull/3238)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)

**🆕 New features**

- General: OpenPype modules publish plugins are registered in host [\#3180](https://github.com/pypeclub/OpenPype/pull/3180)
- General: Creator plugins from addons can be registered [\#3179](https://github.com/pypeclub/OpenPype/pull/3179)
- Ftrack: Single image reviewable [\#3157](https://github.com/pypeclub/OpenPype/pull/3157)
- Nuke: Expose write attributes to settings [\#3123](https://github.com/pypeclub/OpenPype/pull/3123)

**🚀 Enhancements**

- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
- Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
- Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
- Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
- Looks: add basic support for Renderman [\#3190](https://github.com/pypeclub/OpenPype/pull/3190)
- Maya: added clean\_import option to Import loader [\#3181](https://github.com/pypeclub/OpenPype/pull/3181)
- Add the scripts menu definition to nuke [\#3168](https://github.com/pypeclub/OpenPype/pull/3168)
- Maya: add maya 2023 to default applications [\#3167](https://github.com/pypeclub/OpenPype/pull/3167)
- Compressed bgeo publishing in SAP and Houdini loader [\#3153](https://github.com/pypeclub/OpenPype/pull/3153)
- General: Add 'dataclasses' to required python modules [\#3149](https://github.com/pypeclub/OpenPype/pull/3149)
- Hooks: Tweak logging grammar [\#3147](https://github.com/pypeclub/OpenPype/pull/3147)
- Nuke: settings for reformat node in CreateWriteRender node [\#3143](https://github.com/pypeclub/OpenPype/pull/3143)
- Houdini: Add loader for alembic through Alembic Archive node [\#3140](https://github.com/pypeclub/OpenPype/pull/3140)
- Publisher: UI Modifications and fixes [\#3139](https://github.com/pypeclub/OpenPype/pull/3139)
- General: Simplified OP modules/addons import [\#3137](https://github.com/pypeclub/OpenPype/pull/3137)
- Terminal: Tweak coloring of TrayModuleManager logging enabled states [\#3133](https://github.com/pypeclub/OpenPype/pull/3133)
- General: Cleanup some Loader docstrings [\#3131](https://github.com/pypeclub/OpenPype/pull/3131)

**🐛 Bug fixes**

- nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254)
- Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244)
- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
- Hiero: debugging frame range and other 3.10 [\#3222](https://github.com/pypeclub/OpenPype/pull/3222)
- Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
- Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
- Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
- Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)
- Photoshop: skip collector when automatic testing [\#3202](https://github.com/pypeclub/OpenPype/pull/3202)
- Nuke: render/workfile version sync doesn't work on farm [\#3185](https://github.com/pypeclub/OpenPype/pull/3185)
- Ftrack: Review image only if there are no mp4 reviews [\#3183](https://github.com/pypeclub/OpenPype/pull/3183)
- Ftrack: Locations deepcopy issue [\#3177](https://github.com/pypeclub/OpenPype/pull/3177)
- General: Avoid creating multiple thumbnails [\#3176](https://github.com/pypeclub/OpenPype/pull/3176)
- General/Hiero: better clip duration calculation [\#3169](https://github.com/pypeclub/OpenPype/pull/3169)
- General: Oiio conversion for ffmpeg checks for invalid characters [\#3166](https://github.com/pypeclub/OpenPype/pull/3166)
- Fix for attaching render to subset [\#3164](https://github.com/pypeclub/OpenPype/pull/3164)
- Harmony: fixed missing task name in render instance [\#3163](https://github.com/pypeclub/OpenPype/pull/3163)
- Ftrack: Action delete old versions formatting works [\#3152](https://github.com/pypeclub/OpenPype/pull/3152)
- Deadline: fix the output directory [\#3144](https://github.com/pypeclub/OpenPype/pull/3144)
- General: New Session schema [\#3141](https://github.com/pypeclub/OpenPype/pull/3141)
- General: Missing version on headless mode crash properly [\#3136](https://github.com/pypeclub/OpenPype/pull/3136)
- TVPaint: Composite layers in reversed order [\#3135](https://github.com/pypeclub/OpenPype/pull/3135)
- Nuke: fix anatomy imageio regex default [\#3119](https://github.com/pypeclub/OpenPype/pull/3119)

**🔀 Refactored code**

- General: Move host install [\#3009](https://github.com/pypeclub/OpenPype/pull/3009)
- Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)
- General: Remove remaining imports from avalon [\#3130](https://github.com/pypeclub/OpenPype/pull/3130)

**Merged pull requests:**

- Nuke: added suspend\_publish knob [\#3078](https://github.com/pypeclub/OpenPype/pull/3078)
- Bump async from 2.6.3 to 2.6.4 in /website [\#3065](https://github.com/pypeclub/OpenPype/pull/3065)
- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
- Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
- Webpublisher: replace space by underscore in subset names [\#3160](https://github.com/pypeclub/OpenPype/pull/3160)
## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)

**🚀 Enhancements**

- nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
- Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
- Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)

**🐛 Bug fixes**

- Standalone Publisher: Always create new representation for thumbnail [\#3204](https://github.com/pypeclub/OpenPype/pull/3204)
- Nuke: render/workfile version sync doesn't work on farm [\#3184](https://github.com/pypeclub/OpenPype/pull/3184)
- Ftrack: Review image only if there are no mp4 reviews [\#3182](https://github.com/pypeclub/OpenPype/pull/3182)
- Ftrack: Locations deepcopy issue [\#3175](https://github.com/pypeclub/OpenPype/pull/3175)
- General: Avoid creating multiple thumbnails [\#3174](https://github.com/pypeclub/OpenPype/pull/3174)
- General: TemplateResult can be copied [\#3170](https://github.com/pypeclub/OpenPype/pull/3170)

**Merged pull requests:**

- hiero: otio p3 compatibility issue - metadata on effect use update [\#3194](https://github.com/pypeclub/OpenPype/pull/3194)
## [3.9.7](https://github.com/pypeclub/OpenPype/tree/3.9.7) (2022-05-11)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)

**🆕 New features**

- Ftrack: Single image reviewable [\#3158](https://github.com/pypeclub/OpenPype/pull/3158)

**🚀 Enhancements**

- Deadline output dir issue to 3.9x [\#3155](https://github.com/pypeclub/OpenPype/pull/3155)
- nuke: removing redundant code from startup [\#3142](https://github.com/pypeclub/OpenPype/pull/3142)

**🐛 Bug fixes**

- Ftrack: Action delete old versions formatting works [\#3154](https://github.com/pypeclub/OpenPype/pull/3154)
- nuke: adding extract thumbnail settings [\#3148](https://github.com/pypeclub/OpenPype/pull/3148)

**Merged pull requests:**

- Webpublisher: replace space by underscore in subset names [\#3159](https://github.com/pypeclub/OpenPype/pull/3159)
## [3.9.6](https://github.com/pypeclub/OpenPype/tree/3.9.6) (2022-05-03)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.5...3.9.6)

**🆕 New features**

- Nuke: render instance with subset name filtered overrides \(3.9.x\) [\#3125](https://github.com/pypeclub/OpenPype/pull/3125)

**🐛 Bug fixes**

- TVPaint: Composite layers in reversed order [\#3134](https://github.com/pypeclub/OpenPype/pull/3134)
## [3.9.5](https://github.com/pypeclub/OpenPype/tree/3.9.5) (2022-04-25)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.2...3.9.5)

## [3.9.4](https://github.com/pypeclub/OpenPype/tree/3.9.4) (2022-04-15)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.4-nightly.2...3.9.4)
### 📖 Documentation

- Documentation: more info about Tasks [\#3062](https://github.com/pypeclub/OpenPype/pull/3062)
- Documentation: Python requirements to 3.7.9 [\#3035](https://github.com/pypeclub/OpenPype/pull/3035)
- Website Docs: Remove unused pages [\#2974](https://github.com/pypeclub/OpenPype/pull/2974)

**🆕 New features**

- General: Local overrides for environment variables [\#3045](https://github.com/pypeclub/OpenPype/pull/3045)

**🚀 Enhancements**

- TVPaint: Added init file for worker to triggers missing sound file dialog [\#3053](https://github.com/pypeclub/OpenPype/pull/3053)
- Ftrack: Custom attributes can be filled in slate values [\#3036](https://github.com/pypeclub/OpenPype/pull/3036)
- Resolve environment variable in google drive credential path [\#3008](https://github.com/pypeclub/OpenPype/pull/3008)

**🐛 Bug fixes**

- GitHub: Updated push-protected action in github workflow [\#3064](https://github.com/pypeclub/OpenPype/pull/3064)
- Nuke: Typos in imports from Nuke implementation [\#3061](https://github.com/pypeclub/OpenPype/pull/3061)
- Hotfix: fixing deadline job publishing [\#3059](https://github.com/pypeclub/OpenPype/pull/3059)
- General: Extract Review handle invalid characters for ffmpeg [\#3050](https://github.com/pypeclub/OpenPype/pull/3050)
- Slate Review: Support to keep format on slate concatenation [\#3049](https://github.com/pypeclub/OpenPype/pull/3049)
- Webpublisher: fix processing of workfile [\#3048](https://github.com/pypeclub/OpenPype/pull/3048)
- Ftrack: Integrate ftrack api fix [\#3044](https://github.com/pypeclub/OpenPype/pull/3044)
- Webpublisher - removed wrong hardcoded family [\#3043](https://github.com/pypeclub/OpenPype/pull/3043)
- LibraryLoader: Use current project for asset query in families filter [\#3042](https://github.com/pypeclub/OpenPype/pull/3042)
- SiteSync: Providers ignore that site is disabled [\#3041](https://github.com/pypeclub/OpenPype/pull/3041)
- Unreal: Creator import fixes [\#3040](https://github.com/pypeclub/OpenPype/pull/3040)
- Settings UI: Version column can be extended so version are visible [\#3032](https://github.com/pypeclub/OpenPype/pull/3032)
- SiteSync: fix transitive alternate sites, fix dropdown in Local Settings [\#3018](https://github.com/pypeclub/OpenPype/pull/3018)

**Merged pull requests:**

- Deadline: reworked pools assignment [\#3051](https://github.com/pypeclub/OpenPype/pull/3051)
- Houdini: Avoid ImportError on `hdefereval` when Houdini runs without UI [\#2987](https://github.com/pypeclub/OpenPype/pull/2987)
## [3.9.3](https://github.com/pypeclub/OpenPype/tree/3.9.3) (2022-04-07)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.3-nightly.2...3.9.3)

### 📖 Documentation

- Website Docs: Manager Ftrack fix broken links [\#2979](https://github.com/pypeclub/OpenPype/pull/2979)

**🆕 New features**

- Ftrack: Add description integrator [\#3027](https://github.com/pypeclub/OpenPype/pull/3027)
- Publishing textures for Unreal [\#2988](https://github.com/pypeclub/OpenPype/pull/2988)

**🚀 Enhancements**

- Ftrack: Add more options for note text of integrate ftrack note [\#3025](https://github.com/pypeclub/OpenPype/pull/3025)
- Console Interpreter: Changed how console splitter size are reused on show [\#3016](https://github.com/pypeclub/OpenPype/pull/3016)
- Deadline: Use more suitable name for sequence review logic [\#3015](https://github.com/pypeclub/OpenPype/pull/3015)
- General: default workfile subset name for workfile [\#3011](https://github.com/pypeclub/OpenPype/pull/3011)
- Deadline: priority configurable in Maya jobs [\#2995](https://github.com/pypeclub/OpenPype/pull/2995)

**🐛 Bug fixes**

- Deadline: Fixed default value of use sequence for review [\#3033](https://github.com/pypeclub/OpenPype/pull/3033)
- General: Fix validate asset docs plug-in filename and class name [\#3029](https://github.com/pypeclub/OpenPype/pull/3029)
- General: Fix import after movements [\#3028](https://github.com/pypeclub/OpenPype/pull/3028)
- Harmony: Added creating subset name for workfile from template [\#3024](https://github.com/pypeclub/OpenPype/pull/3024)
- AfterEffects: Added creating subset name for workfile from template [\#3023](https://github.com/pypeclub/OpenPype/pull/3023)
- General: Add example addons to ignored [\#3022](https://github.com/pypeclub/OpenPype/pull/3022)
- Maya: Remove missing import [\#3017](https://github.com/pypeclub/OpenPype/pull/3017)
- Ftrack: multiple reviewable componets [\#3012](https://github.com/pypeclub/OpenPype/pull/3012)
- Tray publisher: Fixes after code movement [\#3010](https://github.com/pypeclub/OpenPype/pull/3010)
- Nuke: fixing unicode type detection in effect loaders [\#3002](https://github.com/pypeclub/OpenPype/pull/3002)
- Nuke: removing redundant Ftrack asset when farm publishing [\#2996](https://github.com/pypeclub/OpenPype/pull/2996)

**Merged pull requests:**

- Maya: Allow to select invalid camera contents if no cameras found [\#3030](https://github.com/pypeclub/OpenPype/pull/3030)
- General: adding limitations for pyright [\#2994](https://github.com/pypeclub/OpenPype/pull/2994)
## [3.9.2](https://github.com/pypeclub/OpenPype/tree/3.9.2) (2022-04-04)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.2-nightly.4...3.9.2)

### 📖 Documentation

- Documentation: Added mention of adding My Drive as a root [\#2999](https://github.com/pypeclub/OpenPype/pull/2999)
- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)

**🆕 New features**

- nuke: bypass baking [\#2992](https://github.com/pypeclub/OpenPype/pull/2992)
- Maya to Unreal: Static and Skeletal Meshes [\#2978](https://github.com/pypeclub/OpenPype/pull/2978)

**🚀 Enhancements**

- Nuke: add concurrency attr to deadline job [\#3005](https://github.com/pypeclub/OpenPype/pull/3005)
- Photoshop: create image without instance [\#3001](https://github.com/pypeclub/OpenPype/pull/3001)
- TVPaint: Render scene family [\#3000](https://github.com/pypeclub/OpenPype/pull/3000)
- Nuke: ReviewDataMov Read RAW attribute [\#2985](https://github.com/pypeclub/OpenPype/pull/2985)
- General: `METADATA\_KEYS` constant as `frozenset` for optimal immutable lookup [\#2980](https://github.com/pypeclub/OpenPype/pull/2980)
- General: Tools with host filters [\#2975](https://github.com/pypeclub/OpenPype/pull/2975)
- Hero versions: Use custom templates [\#2967](https://github.com/pypeclub/OpenPype/pull/2967)

**🐛 Bug fixes**

- Hosts: Remove path existence checks in 'add\_implementation\_envs' [\#3004](https://github.com/pypeclub/OpenPype/pull/3004)
- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
- PS: fix renaming subset incorrectly in PS [\#2991](https://github.com/pypeclub/OpenPype/pull/2991)
- Fix: Disable setuptools auto discovery [\#2990](https://github.com/pypeclub/OpenPype/pull/2990)
- AEL: fix opening existing workfile if no scene opened [\#2989](https://github.com/pypeclub/OpenPype/pull/2989)
- Maya: Don't do hardlinks on windows for look publishing [\#2986](https://github.com/pypeclub/OpenPype/pull/2986)
- Settings UI: Fix version completer on linux [\#2981](https://github.com/pypeclub/OpenPype/pull/2981)
- Photoshop: Fix creation of subset names in PS review and workfile [\#2969](https://github.com/pypeclub/OpenPype/pull/2969)
- Slack: Added default for review\_upload\_limit for Slack [\#2965](https://github.com/pypeclub/OpenPype/pull/2965)
- General: OIIO conversion for ffmeg can handle sequences [\#2958](https://github.com/pypeclub/OpenPype/pull/2958)
- Settings: Conditional dictionary avoid invalid logs [\#2956](https://github.com/pypeclub/OpenPype/pull/2956)
- General: Smaller fixes and typos [\#2950](https://github.com/pypeclub/OpenPype/pull/2950)

**Merged pull requests:**

- Bump paramiko from 2.9.2 to 2.10.1 [\#2973](https://github.com/pypeclub/OpenPype/pull/2973)
- Bump minimist from 1.2.5 to 1.2.6 in /website [\#2954](https://github.com/pypeclub/OpenPype/pull/2954)
- Bump node-forge from 1.2.1 to 1.3.0 in /website [\#2953](https://github.com/pypeclub/OpenPype/pull/2953)
- Maya - added transparency into review creator [\#2952](https://github.com/pypeclub/OpenPype/pull/2952)
## [3.9.1](https://github.com/pypeclub/OpenPype/tree/3.9.1) (2022-03-18)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.1-nightly.3...3.9.1)
@@ -3,7 +3,6 @@ from .settings import (
    get_project_settings,
    get_current_project_settings,
    get_anatomy_settings,
    get_environments,

    SystemSettings,
    ProjectSettings
@@ -23,7 +22,6 @@ from .lib import (
    get_app_environments_for_context,
    source_hash,
    get_latest_version,
    get_global_environments,
    get_local_site_id,
    change_openpype_mongo_url,
    create_project_folders,
@@ -69,10 +67,10 @@ __all__ = [
    "get_project_settings",
    "get_current_project_settings",
    "get_anatomy_settings",
    "get_environments",
    "get_project_basic_paths",

    "SystemSettings",
    "ProjectSettings",

    "PypeLogger",
    "Logger",
@@ -102,8 +100,9 @@ __all__ = [

    # get contextual data
    "version_up",
    "get_hierarchy",
    "get_asset",
    "get_hierarchy",
    "get_workdir_data",
    "get_version_from_path",
    "get_last_version_from_path",
    "get_app_environments_for_context",
@@ -111,7 +110,6 @@ __all__ = [

    "run_subprocess",
    "get_latest_version",
    "get_global_environments",

    "get_local_site_id",
    "change_openpype_mongo_url",
@@ -266,7 +266,7 @@ class AssetLoader(LoaderPlugin):
        # Only containerise if it's not already a collection from a .blend file.
        # representation = context["representation"]["name"]
        # if representation != "blend":
        #     from avalon.blender.pipeline import containerise
        #     from openpype.hosts.blender.api.pipeline import containerise
        #     return containerise(
        #         name=name,
        #         namespace=namespace,
@@ -3,6 +3,7 @@ import os
import re
import json
import pickle
import clique
import tempfile
import itertools
import contextlib
@@ -560,7 +561,7 @@ def get_segment_attributes(segment):
        if not hasattr(segment, attr_name):
            continue
        attr = getattr(segment, attr_name)
        segment_attrs_data[attr] = str(attr).replace("+", ":")
        segment_attrs_data[attr_name] = str(attr).replace("+", ":")

        if attr_name in ["record_in", "record_out"]:
            clip_data[attr_name] = attr.relative_frame
@@ -762,6 +763,7 @@ class MediaInfoFile(object):
    _start_frame = None
    _fps = None
    _drop_mode = None
    _file_pattern = None

    def __init__(self, path, **kwargs):
@@ -773,17 +775,28 @@ class MediaInfoFile(object):
        self._validate_media_script_path()

        # derive other feed variables
        self.feed_basename = os.path.basename(path)
        self.feed_dir = os.path.dirname(path)
        self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
        feed_basename = os.path.basename(path)
        feed_dir = os.path.dirname(path)
        feed_ext = os.path.splitext(feed_basename)[1][1:].lower()

        with maintained_temp_file_path(".clip") as tmp_path:
            self.log.info("Temp File: {}".format(tmp_path))
            self._generate_media_info_file(tmp_path)
            self._generate_media_info_file(tmp_path, feed_ext, feed_dir)

            # get collection containing feed_basename from path
            self.file_pattern = self._get_collection(
                feed_basename, feed_dir, feed_ext)

            if (
                not self.file_pattern
                and os.path.exists(os.path.join(feed_dir, feed_basename))
            ):
                self.file_pattern = feed_basename

            # get clip data and make it single if there are multiple
            # clips data
            xml_data = self._make_single_clip_media_info(tmp_path)
            xml_data = self._make_single_clip_media_info(
                tmp_path, feed_basename, self.file_pattern)
            self.log.debug("xml_data: {}".format(xml_data))
            self.log.debug("type: {}".format(type(xml_data)))
@@ -794,6 +807,123 @@ class MediaInfoFile(object):
        self.log.debug("drop frame: {}".format(self.drop_mode))
        self.clip_data = xml_data

    def _get_collection(self, feed_basename, feed_dir, feed_ext):
        """ Get collection string

        Args:
            feed_basename (str): file base name
            feed_dir (str): file's directory
            feed_ext (str): file extension

        Raises:
            AttributeError: feed_ext is not matching feed_basename

        Returns:
            str: collection basename with range of sequence
        """
        partialname = self._separate_file_head(feed_basename, feed_ext)
        self.log.debug("__ partialname: {}".format(partialname))

        # make sure partial input basename has the correct extension
        if not partialname:
            raise AttributeError(
                "Wrong input attributes. Basename - {}, Ext - {}".format(
                    feed_basename, feed_ext
                )
            )

        # get all related files
        files = [
            f for f in os.listdir(feed_dir)
            if partialname == self._separate_file_head(f, feed_ext)
        ]

        # ignore remainders as we don't need them
        collections = clique.assemble(files)[0]

        # in case no collection is found return None
        # it is probably just a single file
        if not collections:
            return

        # we expect only one collection
        collection = collections[0]

        self.log.debug("__ collection: {}".format(collection))

        if collection.is_contiguous():
            return self._format_collection(collection)

        # add `[` in front to make sure it won't capture
        # a shot name with the same number
        number_from_path = self._separate_number(feed_basename, feed_ext)
        search_number_pattern = "[" + number_from_path
        # convert to multiple collections
        _continues_colls = collection.separate()
        for _coll in _continues_colls:
            coll_to_text = self._format_collection(
                _coll, len(number_from_path))
            self.log.debug("__ coll_to_text: {}".format(coll_to_text))
            if search_number_pattern in coll_to_text:
                return coll_to_text

    @staticmethod
    def _format_collection(collection, padding=None):
        padding = padding or collection.padding
        # if no holes then return collection
        head = collection.format("{head}")
        tail = collection.format("{tail}")
        range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(
            padding)
        ranges = range_template.format(
            min(collection.indexes),
            max(collection.indexes)
        )
        # if no holes then return collection
        return "{}{}{}".format(head, ranges, tail)
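A minimal sketch (hypothetical file names, assuming the `clique` package the diff imports) of how `_get_collection` and `_format_collection` above turn a frame sequence into the `file.[start-end].ext` pattern:

```python
import clique

# hypothetical frame sequence on disk
files = ["shot010.0001.exr", "shot010.0002.exr", "shot010.0003.exr"]

# clique groups numbered files into collections; remainders are ignored
collections, _remainder = clique.assemble(files)
collection = collections[0]

# mirror _format_collection: head + "[min-max]" + tail
head = collection.format("{head}")  # "shot010."
tail = collection.format("{tail}")  # ".exr"
range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(collection.padding)
ranges = range_template.format(min(collection.indexes), max(collection.indexes))

print("{}{}{}".format(head, ranges, tail))  # shot010.[0001-0003].exr
```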
    def _separate_file_head(self, basename, extension):
        """ Get only head without sequence and extension

        Args:
            basename (str): file base name
            extension (str): file extension

        Returns:
            str: file head
        """
        # in case of a sequence file
        found = re.findall(
            r"(.*)[._][\d]*(?=.{})".format(extension),
            basename,
        )
        if found:
            return found.pop()

        # in case of a single file
        name, ext = os.path.splitext(basename)

        if extension == ext[1:]:
            return name

    def _separate_number(self, basename, extension):
        """ Get only sequence number as string

        Args:
            basename (str): file base name
            extension (str): file extension

        Returns:
            str: number with padding
        """
        # in case of a sequence file
        found = re.findall(
            r"[._]([\d]*)(?=.{})".format(extension),
            basename,
        )
        if found:
            return found.pop()
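An illustrative check (hypothetical name) of the two regexes in `_separate_file_head` and `_separate_number` above:

```python
import re

basename, extension = "shot010_0001.exr", "exr"

# head without the sequence number and extension
head = re.findall(r"(.*)[._][\d]*(?=.{})".format(extension), basename)
print(head)  # ['shot010']

# sequence number kept as a padded string
number = re.findall(r"[._]([\d]*)(?=.{})".format(extension), basename)
print(number)  # ['0001']
```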
    @property
    def clip_data(self):
        """Clip's xml clip data
@@ -846,18 +976,41 @@ class MediaInfoFile(object):
    def drop_mode(self, text):
        self._drop_mode = str(text)

    @property
    def file_pattern(self):
        """Clip's file pattern

        Returns:
            str: file pattern. ex. file.[1-2].exr
        """
        return self._file_pattern

    @file_pattern.setter
    def file_pattern(self, fpattern):
        self._file_pattern = fpattern

    def _validate_media_script_path(self):
        if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
            raise IOError("Media Script does not exist: `{}`".format(
                self.MEDIA_SCRIPT_PATH))

    def _generate_media_info_file(self, fpath):
    def _generate_media_info_file(self, fpath, feed_ext, feed_dir):
        """ Generate media info xml .clip file

        Args:
            fpath (str): .clip file path
            feed_ext (str): file extension to be filtered
            feed_dir (str): look up directory

        Raises:
            TypeError: Type error if it fails
        """
        # Create cmd arguments for getting xml media info file
        cmd_args = [
            self.MEDIA_SCRIPT_PATH,
            "-e", self.feed_ext,
            "-e", feed_ext,
            "-o", fpath,
            self.feed_dir
            feed_dir
        ]

        try:
@@ -867,7 +1020,20 @@ class MediaInfoFile(object):
            raise TypeError(
                "Error creating `{}` due: {}".format(fpath, error))

    def _make_single_clip_media_info(self, fpath):
    def _make_single_clip_media_info(self, fpath, feed_basename, path_pattern):
        """ Separate only relative clip object from .clip file

        Args:
            fpath (str): clip file path
            feed_basename (str): search basename
            path_pattern (str): search file pattern (file.[1-2].exr)

        Raises:
            ET.ParseError: if nothing found

        Returns:
            ET.Element: xml element data of matching clip
        """
        with open(fpath) as f:
            lines = f.readlines()
            _added_root = itertools.chain(
@@ -878,14 +1044,30 @@ class MediaInfoFile(object):
        xml_clips = new_root.findall("clip")
        matching_clip = None
        for xml_clip in xml_clips:
            if xml_clip.find("name").text in self.feed_basename:
                matching_clip = xml_clip
            clip_name = xml_clip.find("name").text
            self.log.debug("__ clip_name: `{}`".format(clip_name))
            if clip_name not in feed_basename:
                continue

            # test path pattern
            for out_track in xml_clip.iter("track"):
                for out_feed in out_track.iter("feed"):
                    for span in out_feed.iter("span"):
                        # start frame
                        span_path = span.find("path")
                        self.log.debug(
                            "__ span_path.text: {}, path_pattern: {}".format(
                                span_path.text, path_pattern
                            )
                        )
                        if path_pattern in span_path.text:
                            matching_clip = xml_clip

        if matching_clip is None:
            # raise if no matching clip was found
            raise ET.ParseError(
                "Missing clip in `{}`. Available clips {}".format(
                    self.feed_basename, [
                    feed_basename, [
                        xml_clip.find("name").text
                        for xml_clip in xml_clips
                    ]
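A hedged sketch (hypothetical `.clip` XML, hierarchy taken from the hunk above) of walking the clip/track/feed/span tree to match a span path against the file pattern:

```python
from xml.etree import ElementTree as ET

# hypothetical minimal .clip content
xml = """<root><clip><name>shot010</name><track><feed>
<span><path>/mnt/footage/shot010.[0001-0003].exr</path></span>
</feed></track></clip></root>"""

root = ET.fromstring(xml)
path_pattern = "shot010.[0001-0003].exr"

matching_clip = None
for xml_clip in root.findall("clip"):
    # iter() walks all nested spans regardless of depth
    for span in xml_clip.iter("span"):
        if path_pattern in span.find("path").text:
            matching_clip = xml_clip

print(matching_clip.find("name").text)  # shot010
```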
@@ -894,6 +1076,11 @@ class MediaInfoFile(object):
        return matching_clip

    def _get_time_info_from_origin(self, xml_data):
        """Set time info to class attributes

        Args:
            xml_data (ET.Element): clip data
        """
        try:
            for out_track in xml_data.iter('track'):
                for out_feed in out_track.iter('feed'):
@@ -912,8 +1099,6 @@ class MediaInfoFile(object):
                        'startTimecode/dropMode')
                    self.drop_mode = out_feed_drop_mode_obj.text
                    break
                else:
                    continue
        except Exception as msg:
            self.log.warning(msg)
@@ -360,6 +360,7 @@ class PublishableClip:
    driving_layer_default = ""
    index_from_segment_default = False
    use_shot_name_default = False
    include_handles_default = False

    def __init__(self, segment, **kwargs):
        self.rename_index = kwargs["rename_index"]
@@ -493,6 +494,8 @@ class PublishableClip:
            "reviewTrack", {}).get("value") or self.review_track_default
        self.audio = self.ui_inputs.get(
            "audio", {}).get("value") or False
        self.include_handles = self.ui_inputs.get(
            "includeHandles", {}).get("value") or self.include_handles_default

        # build subset name from layer name
        if self.subset_name == "[ track name ]":
@@ -873,6 +876,5 @@ class OpenClipSolver(flib.MediaInfoFile):
        if feed_clr_obj is not None:
            feed_clr_obj = ET.Element(
                "colourSpace", {"type": "string"})
            feed_clr_obj.text = profile_name
            feed_storage_obj.append(feed_clr_obj)

        feed_clr_obj.text = profile_name
@@ -1,5 +1,8 @@
import os
from xml.etree import ElementTree as ET
from openpype.api import Logger

log = Logger.get_logger(__name__)


def export_clip(export_path, clip, preset_path, **kwargs):
@@ -143,10 +146,40 @@ def modify_preset_file(xml_path, staging_dir, data):

    # change xml following data keys
    with open(xml_path, "r") as datafile:
        tree = ET.parse(datafile)
        _root = ET.parse(datafile)

    for key, value in data.items():
        for element in tree.findall(".//{}".format(key)):
            element.text = str(value)
    tree.write(temp_path)
        try:
            if "/" in key:
                if not key.startswith("./"):
                    key = ".//" + key

                split_key_path = key.split("/")
                element_key = split_key_path[-1]
                parent_obj_path = "/".join(split_key_path[:-1])

                parent_obj = _root.find(parent_obj_path)
                element_obj = parent_obj.find(element_key)
                if not element_obj:
                    append_element(parent_obj, element_key, value)
            else:
                finds = _root.findall(".//{}".format(key))
                if not finds:
                    raise AttributeError
                for element in finds:
                    element.text = str(value)
        except AttributeError:
            log.warning(
                "Cannot create attribute: {}: {}. Skipping".format(
                    key, value
                ))
    _root.write(temp_path)

    return temp_path


def append_element(root_element_obj, key, value):
    new_element_obj = ET.Element(key)
    log.debug("__ new_element_obj: {}".format(new_element_obj))
    new_element_obj.text = str(value)
    root_element_obj.insert(0, new_element_obj)
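A minimal sketch (hypothetical preset XML) of the key-path logic `modify_preset_file` applies above: a key like `video/posterFrame` is split into a parent path and a leaf element, and the leaf is created when it does not exist yet:

```python
from xml.etree import ElementTree as ET

root = ET.fromstring("<preset><video><fps>24</fps></video></preset>")
key, value = "video/posterFrame", True

# split "video/posterFrame" into parent path and leaf element name
parent_path, _, leaf = key.rpartition("/")
parent_obj = root.find(parent_path)

element_obj = parent_obj.find(leaf)
if element_obj is None:
    # mirror append_element: create the missing node
    element_obj = ET.Element(leaf)
    parent_obj.insert(0, element_obj)
element_obj.text = str(value)

print(ET.tostring(root).decode())
# <preset><video><posterFrame>True</posterFrame><fps>24</fps></video></preset>
```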
@@ -94,83 +94,30 @@ def create_otio_time_range(start_frame, frame_duration, fps):

def _get_metadata(item):
    if hasattr(item, 'metadata'):
        if not item.metadata:
            return {}
        return {key: value for key, value in dict(item.metadata)}
        return dict(item.metadata) if item.metadata else {}
    return {}


def create_time_effects(otio_clip, item):
    # todo #2426: add retiming effects to export
    # get all subtrack items
    # subTrackItems = flatten(track_item.parent().subTrackItems())
    # speed = track_item.playbackSpeed()
def create_time_effects(otio_clip, speed):
    otio_effect = None

    # otio_effect = None
    # # retime on track item
    # if speed != 1.:
    #     # make effect
    #     otio_effect = otio.schema.LinearTimeWarp()
    #     otio_effect.name = "Speed"
    #     otio_effect.time_scalar = speed
    #     otio_effect.metadata = {}
    # retime on track item
    if speed != 1.:
        # make effect
        otio_effect = otio.schema.LinearTimeWarp()
        otio_effect.name = "Speed"
        otio_effect.time_scalar = speed
        otio_effect.metadata = {}

    # # freeze frame effect
    # if speed == 0.:
    #     otio_effect = otio.schema.FreezeFrame()
    #     otio_effect.name = "FreezeFrame"
    #     otio_effect.metadata = {}
    # freeze frame effect
    if speed == 0.:
        otio_effect = otio.schema.FreezeFrame()
        otio_effect.name = "FreezeFrame"
        otio_effect.metadata = {}

    # if otio_effect:
    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)

    # # loop through and get all Timewarps
    # for effect in subTrackItems:
    #     if ((track_item not in effect.linkedItems())
    #             and (len(effect.linkedItems()) > 0)):
    #         continue
    #     # avoid all effect which are not TimeWarp and disabled
    #     if "TimeWarp" not in effect.name():
    #         continue

    #     if not effect.isEnabled():
    #         continue

    #     node = effect.node()
    #     name = node["name"].value()

    #     # solve effect class as effect name
    #     _name = effect.name()
    #     if "_" in _name:
    #         effect_name = re.sub(r"(?:_)[_0-9]+", "", _name)  # more numbers
    #     else:
    #         effect_name = re.sub(r"\d+", "", _name)  # one number

    #     metadata = {}
    #     # add knob to metadata
    #     for knob in ["lookup", "length"]:
    #         value = node[knob].value()
    #         animated = node[knob].isAnimated()
    #         if animated:
    #             value = [
    #                 ((node[knob].getValueAt(i)) - i)
    #                 for i in range(
    #                     track_item.timelineIn(),
    #                     track_item.timelineOut() + 1)
    #             ]

    #         metadata[knob] = value

    #     # make effect
    #     otio_effect = otio.schema.TimeEffect()
    #     otio_effect.name = name
    #     otio_effect.effect_name = effect_name
    #     otio_effect.metadata = metadata

    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)
    pass
    if otio_effect:
        # add otio effect to clip effects
        otio_clip.effects.append(otio_effect)
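A hedged sketch (assuming the `opentimelineio` package, imported as `otio` in this module) of the effect objects `create_time_effects` now attaches:

```python
import opentimelineio as otio

otio_clip = otio.schema.Clip(name="shot010")
speed = -1.0  # e.g. reversed playback detected from source_in > source_out

otio_effect = None
if speed != 1.0:
    # LinearTimeWarp scales playback by time_scalar
    otio_effect = otio.schema.LinearTimeWarp(name="Speed", time_scalar=speed)
if speed == 0.0:
    # a zero speed is represented as a freeze frame
    otio_effect = otio.schema.FreezeFrame(name="FreezeFrame")

if otio_effect:
    otio_clip.effects.append(otio_effect)

print([effect.name for effect in otio_clip.effects])  # ['Speed']
```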

def _get_marker_color(flame_colour):
@@ -260,6 +207,7 @@ def create_otio_markers(otio_item, item):

def create_otio_reference(clip_data, fps=None):
    metadata = _get_metadata(clip_data)
    duration = int(clip_data["source_duration"])

    # get file info for path and start frame
    frame_start = 0
@@ -273,7 +221,6 @@ def create_otio_reference(clip_data, fps=None):
    # get padding and other file infos
    log.debug("_ path: {}".format(path))

    frame_duration = clip_data["source_duration"]
    otio_ex_ref_item = None

    is_sequence = frame_number = utils.get_frame_from_filename(file_name)
@@ -300,7 +247,7 @@ def create_otio_reference(clip_data, fps=None):
            rate=fps,
            available_range=create_otio_time_range(
                frame_start,
                frame_duration,
                duration,
                fps
            )
        )
@@ -316,7 +263,7 @@ def create_otio_reference(clip_data, fps=None):
        target_url=reformated_path,
        available_range=create_otio_time_range(
            frame_start,
            frame_duration,
            duration,
            fps
        )
    )
@@ -333,23 +280,50 @@ def create_otio_clip(clip_data):
    segment = clip_data["PySegment"]

    # calculate source in
    media_info = MediaInfoFile(clip_data["fpath"])
    media_info = MediaInfoFile(clip_data["fpath"], logger=log)
    media_timecode_start = media_info.start_frame
    media_fps = media_info.fps

    # create media reference
    media_reference = create_otio_reference(clip_data, media_fps)

    # define first frame
    first_frame = media_timecode_start or utils.get_frame_from_filename(
        clip_data["fpath"]) or 0

    source_in = int(clip_data["source_in"]) - int(first_frame)
    _clip_source_in = int(clip_data["source_in"])
    _clip_source_out = int(clip_data["source_out"])
    _clip_record_duration = int(clip_data["record_duration"])

    # first solve whether the timing is reversed
    speed = 1
    if clip_data["source_in"] > clip_data["source_out"]:
        source_in = _clip_source_out - int(first_frame)
        source_out = _clip_source_in - int(first_frame)
        speed = -1
    else:
        source_in = _clip_source_in - int(first_frame)
        source_out = _clip_source_out - int(first_frame)

    source_duration = (source_out - source_in + 1)

    # secondly check if any change of speed
    if source_duration != _clip_record_duration:
        retime_speed = float(source_duration) / float(_clip_record_duration)
        log.debug("_ retime_speed: {}".format(retime_speed))
        speed *= retime_speed

    log.debug("_ source_in: {}".format(source_in))
    log.debug("_ source_out: {}".format(source_out))
    log.debug("_ speed: {}".format(speed))
    log.debug("_ source_duration: {}".format(source_duration))
    log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))

    # create media reference
    media_reference = create_otio_reference(
        clip_data, media_fps)

    # create source range
    source_range = create_otio_time_range(
        source_in,
        clip_data["record_duration"],
        _clip_record_duration,
        CTX.get_fps()
    )
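A worked example (illustrative numbers) of the retime math in the hunk above: a reversed clip swaps the in/out points and sets speed to -1, then any mismatch between source and record duration scales it:

```python
# illustrative clip values
first_frame = 1001
_clip_source_in = 1060
_clip_source_out = 1010   # out < in -> reversed playback
_clip_record_duration = 25

speed = 1
if _clip_source_in > _clip_source_out:
    source_in = _clip_source_out - first_frame   # 9
    source_out = _clip_source_in - first_frame   # 59
    speed = -1
else:
    source_in = _clip_source_in - first_frame
    source_out = _clip_source_out - first_frame

source_duration = source_out - source_in + 1     # 51

if source_duration != _clip_record_duration:
    speed *= float(source_duration) / float(_clip_record_duration)

print(speed)  # -2.04 -> reversed and retimed to roughly 2x
```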
@@ -363,6 +337,9 @@ def create_otio_clip(clip_data):
    if MARKERS_INCLUDE:
        create_otio_markers(otio_clip, segment)

    if speed != 1:
        create_time_effects(otio_clip, speed)

    return otio_clip
@@ -268,6 +268,14 @@ class CreateShotClip(opfapi.Creator):
            "target": "tag",
            "toolTip": "Handle at end of clip",  # noqa
            "order": 2
        },
        "includeHandles": {
            "value": False,
            "type": "QCheckBox",
            "label": "Include handles",
            "target": "tag",
            "toolTip": "By default handles are excluded",  # noqa
            "order": 3
        }
    }
}
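A sketch of how the `includeHandles` creator attribute above is consumed — it mirrors the `PublishableClip` hunk earlier in this commit (the `ui_inputs` dict here is hypothetical sample data):

```python
# hypothetical UI inputs as the creator dialog would produce them
ui_inputs = {"includeHandles": {"value": True, "type": "QCheckBox"}}
include_handles_default = False

# fall back to the class default when the attribute is absent or falsy
include_handles = ui_inputs.get(
    "includeHandles", {}).get("value") or include_handles_default
print(include_handles)  # True
```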
@@ -1,8 +1,8 @@
import re
import pyblish
import openpype
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
import openpype.lib as oplib

# # developer reload modules
from pprint import pformat
@@ -26,18 +26,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
    add_tasks = []

    def process(self, context):
        project = context.data["flameProject"]
        selected_segments = context.data["flameSelectedSegments"]
        self.log.debug("__ selected_segments: {}".format(selected_segments))

        self.otio_timeline = context.data["otioTimeline"]
        self.clips_in_reels = opfapi.get_clips_in_reels(project)
        self.fps = context.data["fps"]

        # process all selected
        for segment in selected_segments:
            # get openpype tag data
            marker_data = opfapi.get_segment_data_marker(segment)

            self.log.debug("__ marker_data: {}".format(
                pformat(marker_data)))
@@ -60,27 +59,44 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
            clip_name = clip_data["segment_name"]
            self.log.debug("clip_name: {}".format(clip_name))

            # get otio clip data
            otio_data = self._get_otio_clip_instance_data(clip_data) or {}
            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

            # get file path
            file_path = clip_data["fpath"]

            # get source clip
            source_clip = self._get_reel_clip(file_path)

            first_frame = opfapi.get_frame_from_filename(file_path) or 0

            head, tail = self._get_head_tail(clip_data, first_frame)
            head, tail = self._get_head_tail(
                clip_data,
                otio_data["otioClip"],
                marker_data["handleStart"],
                marker_data["handleEnd"]
            )

            # make sure value is absolute
            if head != 0:
                head = abs(head)
            if tail != 0:
                tail = abs(tail)

            # solve handles length
            marker_data["handleStart"] = min(
                marker_data["handleStart"], abs(head))
                marker_data["handleStart"], head)
            marker_data["handleEnd"] = min(
                marker_data["handleEnd"], abs(tail))
                marker_data["handleEnd"], tail)

            workfile_start = self._set_workfile_start(marker_data)

            with_audio = bool(marker_data.pop("audio"))

            # add marker data to instance data
            inst_data = dict(marker_data.items())

            # add otio_data to instance data
            inst_data.update(otio_data)

            asset = marker_data["asset"]
            subset = marker_data["subset"]
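A worked example (illustrative numbers) of the handle clamping above: the requested handles are limited by the absolute head/tail media actually available around the cut:

```python
head, tail = -5, 30          # frames of media before/after the cut
handle_start_req = 10
handle_end_req = 10

# make sure values are absolute
head, tail = abs(head), abs(tail)

handle_start = min(handle_start_req, head)  # 5  (only 5 frames available)
handle_end = min(handle_end_req, tail)      # 10
print(handle_start, handle_end)
```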
@@ -103,7 +119,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                "families": families,
                "publish": marker_data["publish"],
                "fps": self.fps,
                "flameSourceClip": source_clip,
                "workfileFrameStart": workfile_start,
                "sourceFirstFrame": int(first_frame),
                "path": file_path,
                "flameAddTasks": self.add_tasks,
@@ -111,13 +127,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                    task["name"]: {"type": task["type"]}
                    for task in self.add_tasks}
            })

            # get otio clip data
            otio_data = self._get_otio_clip_instance_data(clip_data) or {}
            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

            # add to instance data
            inst_data.update(otio_data)
            self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

            # add resolution
@@ -151,6 +160,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
        if marker_data.get("reviewTrack") is not None:
            instance.data["reviewAudio"] = True

    @staticmethod
    def _set_workfile_start(data):
        include_handles = data.get("includeHandles")
        workfile_start = data["workfileFrameStart"]
        handle_start = data["handleStart"]

        if include_handles:
            workfile_start += handle_start

        return workfile_start

    def _get_comment_attributes(self, segment):
        comment = segment.comment.get_value()
@@ -242,29 +262,25 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):

        return split_comments

    def _get_head_tail(self, clip_data, first_frame):
    def _get_head_tail(self, clip_data, otio_clip, handle_start, handle_end):
        # calculate head and tail with forward compatibility
        head = clip_data.get("segment_head")
        tail = clip_data.get("segment_tail")
        self.log.debug("__ head: `{}`".format(head))
        self.log.debug("__ tail: `{}`".format(tail))

        # HACK: it is here to serve for versions below 2021.1
        if not head:
            head = int(clip_data["source_in"]) - int(first_frame)
        if not tail:
            tail = int(
                clip_data["source_duration"] - (
                    head + clip_data["record_duration"]
                )
            )
        return head, tail
        if not any([head, tail]):
            retimed_attributes = oplib.get_media_range_with_retimes(
                otio_clip, handle_start, handle_end)
            self.log.debug(
                ">> retimed_attributes: {}".format(retimed_attributes))

    def _get_reel_clip(self, path):
        match_reel_clip = [
            clip for clip in self.clips_in_reels
            if clip["fpath"] == path
        ]
        if match_reel_clip:
            return match_reel_clip.pop()
            # retimed head and tail
            head = int(retimed_attributes["handleStart"])
            tail = int(retimed_attributes["handleEnd"])

        return head, tail

    def _get_resolution_to_data(self, data, context):
        assert data.get("otioClip"), "Missing `otioClip` data"
@@ -354,7 +370,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                continue
            if otio_clip.name not in segment.name.get_value():
                continue
            if openpype.lib.is_overlapping_otio_ranges(
            if oplib.is_overlapping_otio_ranges(
                    parent_range, timeline_range, strict=True):

                # add pypedata marker to otio_clip metadata
@@ -39,7 +39,8 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
            "name": subset_name,
            "asset": asset_doc["name"],
            "subset": subset_name,
            "family": "workfile"
            "family": "workfile",
            "families": []
        }

        # create instance with workfile
@@ -1,10 +1,14 @@
import os
import re
from pprint import pformat
from copy import deepcopy

import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile

import flame


class ExtractSubsetResources(openpype.api.Extractor):
@@ -20,30 +24,18 @@ class ExtractSubsetResources(openpype.api.Extractor):
    # plugin defaults
    default_presets = {
        "thumbnail": {
            "active": True,
            "ext": "jpg",
            "xml_preset_file": "Jpeg (8-bit).xml",
            "xml_preset_dir": "",
            "export_type": "File Sequence",
            "ignore_comment_attrs": True,
            "parsed_comment_attrs": False,
            "colorspace_out": "Output - sRGB",
            "representation_add_range": False,
            "representation_tags": ["thumbnail"]
        },
        "ftrackpreview": {
            "ext": "mov",
            "xml_preset_file": "Apple iPad (1920x1080).xml",
            "xml_preset_dir": "",
            "export_type": "Movie",
            "ignore_comment_attrs": True,
            "colorspace_out": "Output - Rec.709",
            "representation_add_range": True,
            "representation_tags": [
                "review",
                "delete"
            ]
            "representation_tags": ["thumbnail"],
            "path_regex": ".*"
        }
    }
    keep_original_representation = False

    # hide publisher during exporting
    hide_ui_on_process = True
@@ -52,22 +44,15 @@ class ExtractSubsetResources(openpype.api.Extractor):
    export_presets_mapping = {}

    def process(self, instance):
        if (
            self.keep_original_representation
            and "representations" not in instance.data
            or not self.keep_original_representation
        ):
        if "representations" not in instance.data:
            instance.data["representations"] = []

        # flame objects
        segment = instance.data["item"]
        asset_name = instance.data["asset"]
        segment_name = segment.name.get_value()
        clip_path = instance.data["path"]
        sequence_clip = instance.context.data["flameSequence"]
        clip_data = instance.data["flameSourceClip"]

        reel_clip = None
        if clip_data:
            reel_clip = clip_data["PyClip"]

        # segment's parent track name
        s_track_name = segment.parent.name.get_value()
@@ -87,7 +72,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
        handles = max(handle_start, handle_end)

        # get media source range with handles
        source_end_handles = instance.data["sourceEndH"]
        source_start_handles = instance.data["sourceStartH"]
        source_end_handles = instance.data["sourceEndH"]
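For context on the hunk below, a worked example (illustrative numbers) of the duration/in-out mark arithmetic it changes: the `+ 1` is dropped from the duration-with-handles calculation, which shifts the out mark down by one frame:

```python
source_start_handles = 1001   # first frame with handles
source_end_handles = 1049     # last frame with handles
source_first_frame = 1001

# new form: duration without the extra "+ 1"
source_duration_handles = source_end_handles - source_start_handles  # 48

in_mark = (source_start_handles - source_first_frame) + 1  # 1
out_mark = in_mark + source_duration_handles               # 49
print(in_mark, out_mark)
```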
@ -104,192 +88,231 @@ class ExtractSubsetResources(openpype.api.Extractor):
|
|||
for unique_name, preset_config in export_presets.items():
modify_xml_data = {}

if self._should_skip(preset_config, clip_path, unique_name):
continue

# get all presets attributes
extension = preset_config["ext"]
preset_file = preset_config["xml_preset_file"]
preset_dir = preset_config["xml_preset_dir"]
export_type = preset_config["export_type"]
repre_tags = preset_config["representation_tags"]
ignore_comment_attrs = preset_config["ignore_comment_attrs"]
parsed_comment_attrs = preset_config["parsed_comment_attrs"]
color_out = preset_config["colorspace_out"]

# get attributes related to loading in integrate_batch_group
load_to_batch_group = preset_config.get(
"load_to_batch_group")
batch_group_loader_name = preset_config.get(
"batch_group_loader_name")

# convert to None if empty string
if batch_group_loader_name == "":
batch_group_loader_name = None
self.log.info(
"Processing `{}` as `{}` to `{}` type...".format(
preset_file, export_type, extension
)
)

# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start

# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles) + 1
source_end_handles - source_start_handles)

# define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles

# make test for type of preset and available reel_clip
if (
not reel_clip
and export_type != "Sequence Publish"
):
self.log.warning((
"Skipping preset {}. Not available "
"reel clip for {}").format(
preset_file, segment_name
))
continue

# by default export source clips
exporting_clip = reel_clip

exporting_clip = None
name_patern_xml = "<name>_{}.".format(
unique_name)
if export_type == "Sequence Publish":
# change export clip to sequence
exporting_clip = sequence_clip
exporting_clip = flame.duplicate(sequence_clip)

# only keep visible layer where instance segment is child
self.hide_others(
exporting_clip, segment_name, s_track_name)

# change name pattern
name_patern_xml = (
"<segment name>_<shot name>_{}.").format(
unique_name)

# change in/out marks to timeline in/out
in_mark = clip_in
out_mark = clip_out
else:
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))

# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles,
"startFrame": frame_start
})
# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles,
"startFrame": frame_start,
"namePattern": name_patern_xml
})

if not ignore_comment_attrs:
# add any xml overrides collected from segment.comment
modify_xml_data.update(instance.data["xml_overrides"])
if parsed_comment_attrs:
# add any xml overrides collected from segment.comment
modify_xml_data.update(instance.data["xml_overrides"])

self.log.debug("__ modify_xml_data: {}".format(pformat(
modify_xml_data
)))
export_kwargs = {}
# validate xml preset file is filled
if preset_file == "":
raise ValueError(
("Check Settings for {} preset: "
"`XML preset file` is not filled").format(
unique_name)
)

# with maintained duplication loop all presets
with opfapi.maintained_object_duplication(
exporting_clip) as duplclip:
kwargs = {}
# resolve xml preset dir if not filled
if preset_dir == "":
preset_dir = opfapi.get_preset_path_by_xml_name(
preset_file)

if export_type == "Sequence Publish":
# only keep visible layer where instance segment is child
self.hide_others(duplclip, segment_name, s_track_name)

# validate xml preset file is filled
if preset_file == "":
if not preset_dir:
raise ValueError(
("Check Settings for {} preset: "
"`XML preset file` is not filled").format(
unique_name)
"`XML preset file` {} is not found").format(
unique_name, preset_file)
)

# resolve xml preset dir if not filled
if preset_dir == "":
preset_dir = opfapi.get_preset_path_by_xml_name(
preset_file)
# create preset path
preset_orig_xml_path = str(os.path.join(
preset_dir, preset_file
))

if not preset_dir:
raise ValueError(
("Check Settings for {} preset: "
"`XML preset file` {} is not found").format(
unique_name, preset_file)
)
# define kwargs based on preset type
if "thumbnail" in unique_name:
modify_xml_data.update({
"video/posterFrame": True,
"video/useFrameAsPoster": 1,
"namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))

# create preset path
preset_orig_xml_path = str(os.path.join(
preset_dir, preset_file
self.log.debug("__ in_mark: {}".format(in_mark))
self.log.debug("__ thumb_frame_number: {}".format(
thumb_frame_number
))

preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
export_kwargs["thumb_frame_number"] = thumb_frame_number
else:
export_kwargs.update({
"in_mark": in_mark,
"out_mark": out_mark
})

# define kwargs based on preset type
if "thumbnail" in unique_name:
kwargs["thumb_frame_number"] = in_mark + (
source_duration_handles / 2)
else:
kwargs.update({
"in_mark": in_mark,
"out_mark": out_mark
})
self.log.debug("__ modify_xml_data: {}".format(
pformat(modify_xml_data)
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)

# get and make export dir paths
export_dir_path = str(os.path.join(
staging_dir, unique_name
))
os.makedirs(export_dir_path)
# get and make export dir paths
export_dir_path = str(os.path.join(
staging_dir, unique_name
))
os.makedirs(export_dir_path)

# export
opfapi.export_clip(
export_dir_path, duplclip, preset_path, **kwargs)
# export
opfapi.export_clip(
export_dir_path, exporting_clip, preset_path, **export_kwargs)

extension = preset_config["ext"]
# make sure only first segment is used if underscore in name
# HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
repr_name = unique_name.split("_")[0]

# create representation data
representation_data = {
"name": unique_name,
"outputName": unique_name,
"ext": extension,
"stagingDir": export_dir_path,
"tags": repre_tags,
"data": {
"colorspace": color_out
},
"load_to_batch_group": load_to_batch_group,
"batch_group_loader_name": batch_group_loader_name
}
# create representation data
representation_data = {
"name": repr_name,
"outputName": repr_name,
"ext": extension,
"stagingDir": export_dir_path,
"tags": repre_tags,
"data": {
"colorspace": color_out
},
"load_to_batch_group": preset_config.get(
"load_to_batch_group"),
"batch_group_loader_name": preset_config.get(
"batch_group_loader_name") or None
}

# collect all available content of export dir
files = os.listdir(export_dir_path)
# collect all available content of export dir
files = os.listdir(export_dir_path)

# make sure no nested folders inside
n_stage_dir, n_files = self._unfolds_nested_folders(
export_dir_path, files, extension)
# make sure no nested folders inside
n_stage_dir, n_files = self._unfolds_nested_folders(
export_dir_path, files, extension)

# fix representation in case of nested folders
if n_stage_dir:
representation_data["stagingDir"] = n_stage_dir
files = n_files
# fix representation in case of nested folders
if n_stage_dir:
representation_data["stagingDir"] = n_stage_dir
files = n_files

# add files to representation but add
# imagesequence as list
if (
# first check if path in files is not mov extension
[
f for f in files
if os.path.splitext(f)[-1] == ".mov"
]
# then try if thumbnail is not in unique name
or unique_name == "thumbnail"
):
representation_data["files"] = files.pop()
else:
representation_data["files"] = files
# add files to representation but add
# imagesequence as list
if (
# first check if path in files is not mov extension
[
f for f in files
if os.path.splitext(f)[-1] == ".mov"
]
# then try if thumbnail is not in unique name
or unique_name == "thumbnail"
):
representation_data["files"] = files.pop()
else:
representation_data["files"] = files

# add frame range
if preset_config["representation_add_range"]:
representation_data.update({
"frameStart": frame_start_handle,
"frameEnd": (
frame_start_handle + source_duration_handles),
"fps": instance.data["fps"]
})
# add frame range
if preset_config["representation_add_range"]:
representation_data.update({
"frameStart": frame_start_handle,
"frameEnd": (
frame_start_handle + source_duration_handles),
"fps": instance.data["fps"]
})

instance.data["representations"].append(representation_data)
instance.data["representations"].append(representation_data)

# add review family if found in tags
if "review" in repre_tags:
instance.data["families"].append("review")
# add review family if found in tags
if "review" in repre_tags:
instance.data["families"].append("review")

self.log.info("Added representation: {}".format(
representation_data))
self.log.info("Added representation: {}".format(
representation_data))

if export_type == "Sequence Publish":
# at the end remove the duplicated clip
flame.delete(exporting_clip)

self.log.debug("All representations: {}".format(
pformat(instance.data["representations"])))

def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
activated_preset = preset_config["active"]
filter_path_regex = preset_config.get("filter_path_regex")

self.log.info(
"Preset `{}` is active `{}` with filter `{}`".format(
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))

# skip if not activated preset
if not activated_preset:
return True

# exclude by regex filter if any
if (
filter_path_regex
and not re.search(filter_path_regex, clip_path)
):
return True

def _unfolds_nested_folders(self, stage_dir, files_list, ext):
"""Unfolds nested folders
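A minimal sketch (not part of this commit) of how the in/out mark arithmetic above works out numerically; the frame values are invented for illustration.

# Worked example of the mark math in the hunk above (illustrative values).
source_first_frame = 1001      # first frame of the media source
source_start_handles = 1009    # cut-in minus handles
source_end_handles = 1057      # cut-out plus handles

# duration without the former "+ 1" (the change drops the inclusive offset)
source_duration_handles = source_end_handles - source_start_handles  # 48

# marks are 1-based relative to the media source start
in_mark = (source_start_handles - source_first_frame) + 1            # 9
out_mark = in_mark + source_duration_handles                         # 57

# a thumbnail preset samples the middle frame of the marked range
thumb_frame_number = int(in_mark + (source_duration_handles / 2))    # 33

print(in_mark, out_mark, thumb_frame_number)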
@@ -373,3 +396,27 @@ class ExtractSubsetResources(openpype.api.Extractor):
for segment in track.segments:
if segment.name.get_value() != segment_name:
segment.hidden = True

def import_clip(self, path):
"""
Import clip from path
"""
dir_path = os.path.dirname(path)
media_info = MediaInfoFile(path, logger=self.log)
file_pattern = media_info.file_pattern
self.log.debug("__ file_pattern: {}".format(file_pattern))

# rejoin the pattern to dir path
new_path = os.path.join(dir_path, file_pattern)

clips = flame.import_clips(new_path)
self.log.info("Clips [{}] imported from `{}`".format(clips, path))

if not clips:
self.log.warning("Path `{}` is not having any clips".format(path))
return None
elif len(clips) > 1:
self.log.warning(
"Path `{}` is containing more that one clip".format(path)
)
return clips[0]
@@ -1,26 +0,0 @@
import pyblish


@pyblish.api.log
class ValidateSourceClip(pyblish.api.InstancePlugin):
"""Validate instance is not having empty `flameSourceClip`"""

order = pyblish.api.ValidatorOrder
label = "Validate Source Clip"
hosts = ["flame"]
families = ["clip"]
optional = True
active = False

def process(self, instance):
flame_source_clip = instance.data["flameSourceClip"]

self.log.debug("_ flame_source_clip: {}".format(flame_source_clip))

if flame_source_clip is None:
raise AttributeError((
"Timeline segment `{}` is not having "
"relative clip in reels. Please make sure "
"you push `Save Sources` button in Conform Tab").format(
instance.data["asset"]
))
@@ -45,7 +45,8 @@ def install():
This is where you install menus and register families, data
and loaders into fusion.

It is called automatically when installing via `api.install(avalon.fusion)`
It is called automatically when installing via
`openpype.pipeline.install_host(openpype.hosts.fusion.api)`

See the Maya equivalent for inspiration on how to implement this.
@@ -6,7 +6,7 @@ from openpype.pipeline import load


class FusionSetFrameRangeLoader(load.LoaderPlugin):
"""Specific loader of Alembic for the avalon.animation family"""
"""Set frame range excluding pre- and post-handles"""

families = ["animation",
"camera",

@@ -40,7 +40,7 @@ class FusionSetFrameRangeLoader(load.LoaderPlugin):


class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
"""Specific loader of Alembic for the avalon.animation family"""
"""Set frame range including pre- and post-handles"""

families = ["animation",
"camera",
@@ -35,7 +35,11 @@ function Client() {
self.pack = function(num) {
var ascii='';
for (var i = 3; i >= 0; i--) {
ascii += String.fromCharCode((num >> (8 * i)) & 255);
var hex = ((num >> (8 * i)) & 255).toString(16);
if (hex.length < 2){
ascii += "0";
}
ascii += hex;
}
return ascii;
};
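A quick Python sketch (not part of the commit) of what the patched pack() now produces: each of the four bytes of the length becomes two zero-padded hex characters, so the old 4-byte binary header turns into an 8-character ASCII string.

# Mirrors the JavaScript loop above, byte for byte.
def pack(num):
    chunks = []
    for i in range(3, -1, -1):
        chunks.append("{:02x}".format((num >> (8 * i)) & 255))
    return "".join(chunks)

assert pack(76) == "0000004c"       # same result as "{:08x}".format(76)
assert int(pack(76), 16) == 76      # the server reverses it with int(..., 16)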
@@ -279,19 +283,22 @@ function Client() {
};

self._send = function(message) {
var data = new QByteArray();
var outstr = new QDataStream(data, QIODevice.WriteOnly);
outstr.writeInt(0);
data.append('UTF-8');
outstr.device().seek(0);
outstr.writeInt(data.size() - 4);
var codec = QTextCodec.codecForUtfText(data);
var msg = codec.fromUnicode(message);
var l = msg.size();
var coded = new QByteArray('AH').append(self.pack(l));
coded = coded.append(msg);
self.socket.write(new QByteArray(coded));
self.logDebug('Sent.');
/** Harmony 21.1 doesn't have QDataStream anymore.

This means we aren't able to write bytes into QByteArray, so we had to
modify how the content length is sent to the server.
The content length is sent as a string of 8 chars convertible into an integer
(instead of 0x00000001 [4 bytes] > "00000001" [8 chars]) */
var codec_name = new QByteArray().append("UTF-8");

var codec = QTextCodec.codecForName(codec_name);
var msg = codec.fromUnicode(message);
var l = msg.size();
var header = new QByteArray().append('AH').append(self.pack(l));
var coded = msg.prepend(header);

self.socket.write(coded);
self.logDebug('Sent.');
};

self.waitForLock = function() {
@@ -351,7 +358,14 @@ function start() {
app.avalonClient = new Client();
app.avalonClient.socket.connectToHost(host, port);
}
var menuBar = QApplication.activeWindow().menuBar();
var mainWindow = null;
var widgets = QApplication.topLevelWidgets();
for (var i = 0 ; i < widgets.length; i++) {
if (widgets[i] instanceof QMainWindow){
mainWindow = widgets[i];
}
}
var menuBar = mainWindow.menuBar();
var actions = menuBar.actions();
app.avalonMenu = null;
@@ -463,7 +463,7 @@ def imprint(node_id, data, remove=False):
remove (bool): Removes the data from the scene.

Example:
>>> from avalon.harmony import lib
>>> from openpype.hosts.harmony.api import lib
>>> node = "Top/Display"
>>> data = {"str": "someting", "int": 1, "float": 0.32, "bool": True}
>>> lib.imprint(layer, data)
@@ -88,21 +88,25 @@ class Server(threading.Thread):
"""
current_time = time.time()
while True:

self.log.info("wait ttt")
# Receive the data in small chunks and retransmit it
request = None
header = self.connection.recv(6)
header = self.connection.recv(10)
if len(header) == 0:
# null data received, socket is closing.
self.log.info(f"[{self.timestamp()}] Connection closing.")
break

if header[0:2] != b"AH":
self.log.error("INVALID HEADER")
length = struct.unpack(">I", header[2:])[0]
content_length_str = header[2:].decode()

length = int(content_length_str, 16)
data = self.connection.recv(length)
while (len(data) < length):
# we didn't receive everything in the first try, let's wait for
# all data.
self.log.info("loop")
time.sleep(0.1)
if self.connection is None:
self.log.error(f"[{self.timestamp()}] "

@@ -113,7 +117,7 @@
break

data += self.connection.recv(length - len(data))

self.log.debug("data:: {} {}".format(data, type(data)))
self.received += data.decode("utf-8")
pretty = self._pretty(self.received)
self.log.debug(
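A self-contained sketch (not from the diff) of the wire format the Harmony client and this server now agree on: a 2-byte b"AH" magic, an 8-character hex length, then the UTF-8 payload.

import socket

def encode_message(message):
    # Frame a message the way the patched Harmony client does.
    payload = message.encode("utf-8")
    header = b"AH" + "{:08x}".format(len(payload)).encode()  # 10 bytes total
    return header + payload

def read_message(conn):
    # Read one framed message, mirroring Server.run() above; conn is a socket.
    header = conn.recv(10)
    if len(header) == 0:
        return None                        # socket closing
    if header[0:2] != b"AH":
        raise ValueError("INVALID HEADER")
    length = int(header[2:].decode(), 16)  # hex string replaces struct.unpack(">I", ...)
    data = b""
    while len(data) < length:
        data += conn.recv(length - len(data))
    return data.decode("utf-8")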
@@ -144,6 +144,7 @@ class CollectFarmRender(openpype.lib.abstract_collect_render.
label=node.split("/")[1],
subset=subset_name,
asset=legacy_io.Session["AVALON_ASSET"],
task=task_name,
attachTo=False,
setMembers=[node],
publish=info[4],
@@ -27,6 +27,7 @@ from .lib import (
get_track_items,
get_current_project,
get_current_sequence,
get_timeline_selection,
get_current_track,
get_track_item_pype_tag,
set_track_item_pype_tag,

@@ -80,6 +81,7 @@ __all__ = [
"get_track_items",
"get_current_project",
"get_current_sequence",
"get_timeline_selection",
"get_current_track",
"get_track_item_pype_tag",
"set_track_item_pype_tag",
@@ -109,8 +109,9 @@ def register_hiero_events():
# hiero.core.events.registerInterest("kShutdown", shutDown)
# hiero.core.events.registerInterest("kStartup", startupCompleted)

hiero.core.events.registerInterest(
("kSelectionChanged", "kTimeline"), selection_changed_timeline)
# INFO: was disabled because it was slowing down timeline operations
# hiero.core.events.registerInterest(
# ("kSelectionChanged", "kTimeline"), selection_changed_timeline)

# workfiles
try:
@@ -1,6 +1,8 @@
"""
Host specific functions where host api is connected
"""

from copy import deepcopy
import os
import re
import sys
@@ -89,13 +91,19 @@ def get_current_sequence(name=None, new=False):
if not sequence:
# if nothing found create new with input name
sequence = get_current_sequence(name, True)
elif not name and not new:
else:
# if name is none and new is False then return current open sequence
sequence = hiero.ui.activeSequence()

return sequence


def get_timeline_selection():
active_sequence = hiero.ui.activeSequence()
timeline_editor = hiero.ui.getTimelineEditor(active_sequence)
return list(timeline_editor.selection())


def get_current_track(sequence, name, audio=False):
"""
Get current track in context of active project.
@@ -118,7 +126,7 @@ def get_current_track(sequence, name, audio=False):
# get track by name
track = None
for _track in tracks:
if _track.name() in name:
if _track.name() == name:
track = _track

if not track:
@@ -126,13 +134,14 @@ def get_current_track(sequence, name, audio=False):
track = hiero.core.VideoTrack(name)
else:
track = hiero.core.AudioTrack(name)

sequence.addTrack(track)

return track


def get_track_items(
selected=False,
selection=False,
sequence_name=None,
track_item_name=None,
track_name=None,
@@ -143,7 +152,7 @@ def get_track_items(
"""Get all available current timeline track items.

Attribute:
selected (bool)[optional]: return only selected items on timeline
selection (list)[optional]: list of selected track items
sequence_name (str)[optional]: return only clips from input sequence
track_item_name (str)[optional]: return only item with input name
track_name (str)[optional]: return only items from track name
@@ -155,32 +164,34 @@ def get_track_items(
Return:
list or hiero.core.TrackItem: list of track items or single track item
"""
return_list = list()
track_items = list()
track_type = track_type or "video"
selection = selection or []
return_list = []

# get selected track items or all in active sequence
if selected:
if selection:
try:
selected_items = list(hiero.selection)
for item in selected_items:
if track_name and track_name in item.parent().name():
# filter only items fitting input track name
track_items.append(item)
elif not track_name:
# or add all if no track_name was defined
track_items.append(item)
for track_item in selection:
log.info("___ track_item: {}".format(track_item))
# make sure only trackitems are selected
if not isinstance(track_item, hiero.core.TrackItem):
continue

if _validate_all_atrributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
log.info("___ valid trackitem: {}".format(track_item))
return_list.append(track_item)
except AttributeError:
pass

# check if any collected track items are
# `core.Hiero.Python.TrackItem` instance
if track_items:
any_track_item = track_items[0]
if not isinstance(any_track_item, hiero.core.TrackItem):
selected_items = []

# collect all available active sequence track items
if not track_items:
if not return_list:
sequence = get_current_sequence(name=sequence_name)
# get all available tracks from sequence
tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@@ -191,42 +202,76 @@
if check_enabled and not track.isEnabled():
continue
# and all items in track
for item in track.items():
if check_tagged and not item.tags():
for track_item in track.items():
# make sure no subtrackitem is also track items
if not isinstance(track_item, hiero.core.TrackItem):
continue

# check if track item is enabled
if check_enabled:
if not item.isEnabled():
continue
if track_item_name:
if track_item_name in item.name():
return item
# make sure only track items with correct track names are added
if track_name and track_name in track.name():
# filter out only defined track_name items
track_items.append(item)
elif not track_name:
# or add all if no track_name is defined
track_items.append(item)
if _validate_all_atrributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
return_list.append(track_item)

# filter out only track items with defined track_type
for track_item in track_items:
if track_type and track_type == "video" and isinstance(
return return_list


def _validate_all_atrributes(
track_item,
track_item_name,
track_name,
track_type,
check_enabled,
check_tagged
):
def _validate_correct_name_track_item():
if track_item_name and track_item_name in track_item.name():
return True
elif not track_item_name:
return True

def _validate_tagged_track_item():
if check_tagged and track_item.tags():
return True
elif not check_tagged:
return True

def _validate_enabled_track_item():
if check_enabled and track_item.isEnabled():
return True
elif not check_enabled:
return True

def _validate_parent_track_item():
if track_name and track_name in track_item.parent().name():
# filter only items fitting input track name
return True
elif not track_name:
# or add all if no track_name was defined
return True

def _validate_type_track_item():
if track_type == "video" and isinstance(
track_item.parent(), hiero.core.VideoTrack):
# only video track items are allowed
return_list.append(track_item)
elif track_type and track_type == "audio" and isinstance(
return True
elif track_type == "audio" and isinstance(
track_item.parent(), hiero.core.AudioTrack):
# only audio track items are allowed
return_list.append(track_item)
elif not track_type:
# add all if no track_type is defined
return_list.append(track_item)
return True

# return output list but make sure all items are TrackItems
return [_i for _i in return_list
if type(_i) == hiero.core.TrackItem]
# check if track item is enabled
return all([
_validate_enabled_track_item(),
_validate_type_track_item(),
_validate_tagged_track_item(),
_validate_parent_track_item(),
_validate_correct_name_track_item()
])


def get_track_item_pype_tag(track_item):
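A compact sketch (not from the commit) of the validator-closure pattern `_validate_all_atrributes` introduces, with generic made-up names: each optional filter is a tiny predicate that passes when its filter is unset, and `all([...])` keeps the filters independent.

def make_item_filter(name=None, enabled_only=False):
    def _name_ok(item):
        # unset filter always passes
        return name is None or name in item["name"]

    def _enabled_ok(item):
        return not enabled_only or item["enabled"]

    def item_passes(item):
        # every predicate must agree, mirroring the all([...]) above
        return all([_name_ok(item), _enabled_ok(item)])

    return item_passes

items = [
    {"name": "plateMain", "enabled": True},
    {"name": "audioMain", "enabled": False},
]
passes = make_item_filter(name="Main", enabled_only=True)
print([i["name"] for i in items if passes(i)])  # ['plateMain']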
@@ -245,7 +290,7 @@ def get_track_item_pype_tag(track_item):
return None
for tag in _tags:
# return only correct tag defined by global name
if tag.name() in self.pype_tag_name:
if tag.name() == self.pype_tag_name:
return tag
@@ -266,7 +311,7 @@ def set_track_item_pype_tag(track_item, data=None):
"editable": "0",
"note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": {k: v for k, v in data.items()}
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_item_pype_tag(track_item)
@@ -301,9 +346,9 @@ def get_track_item_pype_data(track_item):
return None

# get tag metadata attribute
tag_data = tag.metadata()
tag_data = deepcopy(dict(tag.metadata()))
# convert tag metadata to normal keys names and values to correct types
for k, v in dict(tag_data).items():
for k, v in tag_data.items():
key = k.replace("tag.", "")

try:
@@ -324,7 +369,7 @@ def get_track_item_pype_data(track_item):
log.warning(msg)
value = v

data.update({key: value})
data[key] = value

return data
@@ -497,7 +542,7 @@ class PyblishSubmission(hiero.exporters.FnSubmission.Submission):
from . import publish
# Add submission to Hiero module for retrieval in plugins.
hiero.submission = self
publish()
publish(hiero.ui.mainWindow())


def add_submission():
@@ -527,7 +572,7 @@ class PublishAction(QtWidgets.QAction):
# from getting picked up when not using the "Export" dialog.
if hasattr(hiero, "submission"):
del hiero.submission
publish()
publish(hiero.ui.mainWindow())

def eventHandler(self, event):
# Add the Menu to the right-click menu
@@ -553,10 +598,10 @@ class PublishAction(QtWidgets.QAction):
#
# '''
# import hiero.core
# from avalon.nuke import imprint
# from pype.hosts.nuke import (
# lib as nklib
# )
# from openpype.hosts.nuke.api.lib import (
# BuildWorkfile,
# imprint
# )
#
# # check if the file exists if does then Raise "File exists!"
# if os.path.exists(filepath):
@@ -583,8 +628,7 @@ class PublishAction(QtWidgets.QAction):
#
# nuke_script.addNode(root_node)
#
# # here to call pype.hosts.nuke.lib.BuildWorkfile
# script_builder = nklib.BuildWorkfile(
# script_builder = BuildWorkfile(
# root_node=root_node,
# root_path=root_path,
# nodes=nuke_script.getNodes(),
@@ -894,32 +938,33 @@ def apply_colorspace_clips():


def is_overlapping(ti_test, ti_original, strict=False):
covering_exp = bool(
covering_exp = (
(ti_test.timelineIn() <= ti_original.timelineIn())
and (ti_test.timelineOut() >= ti_original.timelineOut())
)
inside_exp = bool(

if strict:
return covering_exp

inside_exp = (
(ti_test.timelineIn() >= ti_original.timelineIn())
and (ti_test.timelineOut() <= ti_original.timelineOut())
)
overlaying_right_exp = bool(
overlaying_right_exp = (
(ti_test.timelineIn() < ti_original.timelineOut())
and (ti_test.timelineOut() >= ti_original.timelineOut())
)
overlaying_left_exp = bool(
overlaying_left_exp = (
(ti_test.timelineOut() > ti_original.timelineIn())
and (ti_test.timelineIn() <= ti_original.timelineIn())
)

if not strict:
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))
else:
return covering_exp
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))


def get_sequence_pattern_and_padding(file):
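An illustrative sketch (not in the commit) of the overlap cases `is_overlapping` distinguishes, using plain in/out numbers; a stub stands in for hiero.core.TrackItem so it runs anywhere.

class StubItem:
    def __init__(self, t_in, t_out):
        self._in, self._out = t_in, t_out
    def timelineIn(self):
        return self._in
    def timelineOut(self):
        return self._out

def is_overlapping(ti_test, ti_original, strict=False):
    # same logic as the rewritten function above
    covering = (ti_test.timelineIn() <= ti_original.timelineIn()
                and ti_test.timelineOut() >= ti_original.timelineOut())
    if strict:
        return covering
    inside = (ti_test.timelineIn() >= ti_original.timelineIn()
              and ti_test.timelineOut() <= ti_original.timelineOut())
    right = (ti_test.timelineIn() < ti_original.timelineOut()
             and ti_test.timelineOut() >= ti_original.timelineOut())
    left = (ti_test.timelineOut() > ti_original.timelineIn()
            and ti_test.timelineIn() <= ti_original.timelineIn())
    return any((covering, inside, right, left))

original = StubItem(100, 200)
print(is_overlapping(StubItem(90, 210), original, strict=True))   # True: covers whole range
print(is_overlapping(StubItem(120, 180), original, strict=True))  # False: only inside
print(is_overlapping(StubItem(120, 180), original))               # True: inside counts when not strict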
@@ -937,17 +982,13 @@ def get_sequence_pattern_and_padding(file):
"""
foundall = re.findall(
r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
if foundall:
found = sorted(list(set(foundall[0])))[-1]

if "%" in found:
padding = int(re.findall(r"\d+", found)[-1])
else:
padding = len(found)

return found, padding
else:
if not foundall:
return None, None
found = sorted(list(set(foundall[0])))[-1]

padding = int(
re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
return found, padding


def sync_clip_name_to_data_asset(track_items_list):
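A quick check (not part of the diff) of what the padding regex above returns for typical file names.

import re

def get_sequence_pattern_and_padding(file):
    # same regex and early-return shape as the refactored function above
    foundall = re.findall(
        r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
    if not foundall:
        return None, None
    found = sorted(list(set(foundall[0])))[-1]
    padding = int(
        re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
    return found, padding

print(get_sequence_pattern_and_padding("plate.####.exr"))   # ('####', 4)
print(get_sequence_pattern_and_padding("plate.%05d.exr"))   # ('%05d', 5)
print(get_sequence_pattern_and_padding("plate_1001.exr"))   # ('1001', 4)
print(get_sequence_pattern_and_padding("plate.mov"))        # (None, None)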
@@ -983,7 +1024,7 @@ def sync_clip_name_to_data_asset(track_items_list):
print("asset was changed in clip: {}".format(ti_name))


def check_inventory_versions():
def check_inventory_versions(track_items=None):
"""
Actual version color identifier of Loaded containers
@@ -994,14 +1035,14 @@
"""
from . import parse_container

track_item = track_items or get_track_items()
# presets
clip_color_last = "green"
clip_color = "red"

# get all track items from current timeline
for track_item in get_track_items():
for track_item in track_item:
container = parse_container(track_item)

if container:
# get representation from io
representation = legacy_io.find_one({
@@ -1039,29 +1080,31 @@ def selection_changed_timeline(event):
timeline_editor = event.sender
selection = timeline_editor.selection()

selection = [ti for ti in selection
if isinstance(ti, hiero.core.TrackItem)]
track_items = get_track_items(
selection=selection,
track_type="video",
check_enabled=True,
check_locked=True,
check_tagged=True
)

# run checking function
sync_clip_name_to_data_asset(selection)

# also mark old versions of loaded containers
check_inventory_versions()
sync_clip_name_to_data_asset(track_items)


def before_project_save(event):
track_items = get_track_items(
selected=False,
track_type="video",
check_enabled=True,
check_locked=True,
check_tagged=True)
check_tagged=True
)

# run checking function
sync_clip_name_to_data_asset(track_items)

# also mark old versions of loaded containers
check_inventory_versions()
check_inventory_versions(track_items)


def get_main_window():
@@ -143,6 +143,11 @@ def parse_container(track_item, validate=True):
"""
# convert tag metadata to normal keys names
data = lib.get_track_item_pype_data(track_item)
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return

if validate and data and data.get("schema"):
schema.validate(data)
@@ -1,4 +1,5 @@
import os
from pprint import pformat
import re
from copy import deepcopy
@@ -400,7 +401,8 @@ class ClipLoader:

# inject asset data to representation dict
self._get_asset_data()
log.debug("__init__ self.data: `{}`".format(self.data))
log.info("__init__ self.data: `{}`".format(pformat(self.data)))
log.info("__init__ options: `{}`".format(pformat(options)))

# add active components to class
if self.new_sequence:
@@ -482,7 +484,9 @@ class ClipLoader:

"""
asset_name = self.context["representation"]["context"]["asset"]
self.data["assetData"] = openpype.get_asset(asset_name)["data"]
asset_doc = openpype.get_asset(asset_name)
log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
self.data["assetData"] = asset_doc["data"]

def _make_track_item(self, source_bin_item, audio=False):
""" Create track item with """
@@ -500,7 +504,7 @@ class ClipLoader:
track_item.setSource(clip)
track_item.setSourceIn(self.handle_start)
track_item.setTimelineIn(self.timeline_in)
track_item.setSourceOut(self.media_duration - self.handle_end)
track_item.setSourceOut((self.media_duration) - self.handle_end)
track_item.setTimelineOut(self.timeline_out)
track_item.setPlaybackSpeed(1)
self.active_track.addTrackItem(track_item)
@@ -520,14 +524,18 @@ class ClipLoader:
self.handle_start = self.data["versionData"].get("handleStart")
self.handle_end = self.data["versionData"].get("handleEnd")
if self.handle_start is None:
self.handle_start = int(self.data["assetData"]["handleStart"])
self.handle_start = self.data["assetData"]["handleStart"]
if self.handle_end is None:
self.handle_end = int(self.data["assetData"]["handleEnd"])
self.handle_end = self.data["assetData"]["handleEnd"]

self.handle_start = int(self.handle_start)
self.handle_end = int(self.handle_end)

if self.sequencial_load:
last_track_item = lib.get_track_items(
sequence_name=self.active_sequence.name(),
track_name=self.active_track.name())
track_name=self.active_track.name()
)
if len(last_track_item) == 0:
last_timeline_out = 0
else:
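A minimal illustration (not in the diff) of the handle-resolution order the hunk above establishes: version data wins, asset data is the fallback, and the cast to int happens once at the end. The dict values are invented.

version_data = {"handleStart": None, "handleEnd": "12"}   # illustrative values
asset_data = {"handleStart": "10", "handleEnd": "10"}

handle_start = version_data.get("handleStart")
handle_end = version_data.get("handleEnd")
if handle_start is None:
    handle_start = asset_data["handleStart"]
if handle_end is None:
    handle_end = asset_data["handleEnd"]

# a single cast at the end covers both sources
handle_start = int(handle_start)
handle_end = int(handle_end)
print(handle_start, handle_end)  # 10 12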
@@ -541,17 +549,12 @@ class ClipLoader:
self.timeline_in = int(self.data["assetData"]["clipIn"])
self.timeline_out = int(self.data["assetData"]["clipOut"])

log.debug("__ self.timeline_in: {}".format(self.timeline_in))
log.debug("__ self.timeline_out: {}".format(self.timeline_out))

# check if slate is included
# either in version data families or by calculating frame diff
slate_on = next(
# check iterate if slate is in families
(f for f in self.context["version"]["data"]["families"]
if "slate" in f),
# if nothing was found then use default None
# so other bool could be used
None) or bool(int(
(self.timeline_out - self.timeline_in + 1)
+ self.handle_start + self.handle_end) < self.media_duration)
slate_on = "slate" in self.context["version"]["data"]["families"]
log.debug("__ slate_on: {}".format(slate_on))

# if slate is on then remove the slate frame from beginning
if slate_on:
@@ -572,7 +575,7 @@ class ClipLoader:
# there were some cases where hiero was not creating it
source_bin_item = None
for item in self.active_bin.items():
if self.data["clip_name"] in item.name():
if self.data["clip_name"] == item.name():
source_bin_item = item
if not source_bin_item:
log.warning("Problem with created Source clip: `{}`".format(
@@ -599,8 +602,8 @@ class Creator(LegacyCreator):
rename_index = None

def __init__(self, *args, **kwargs):
import openpype.hosts.hiero.api as phiero
super(Creator, self).__init__(*args, **kwargs)
import openpype.hosts.hiero.api as phiero
self.presets = openpype.get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
@@ -609,7 +612,10 @@ class Creator(LegacyCreator):
self.sequence = phiero.get_current_sequence()

if (self.options or {}).get("useSelection"):
self.selected = phiero.get_track_items(selected=True)
timeline_selection = phiero.get_timeline_selection()
self.selected = phiero.get_track_items(
selection=timeline_selection
)
else:
self.selected = phiero.get_track_items()
@@ -716,6 +722,10 @@ class PublishClip:
else:
self.tag_data.update({"reviewTrack": None})

log.debug("___ self.tag_data: {}".format(
pformat(self.tag_data)
))

# create pype tag on track_item and add data
lib.imprint(self.track_item, self.tag_data)
@@ -10,16 +10,6 @@ log = Logger.get_logger(__name__)

def tag_data():
return {
# "Retiming": {
# "editable": "1",
# "note": "Clip has retime or TimeWarp effects (or multiple effects stacked on the clip)", # noqa
# "icon": "retiming.png",
# "metadata": {
# "family": "retiming",
# "marginIn": 1,
# "marginOut": 1
# }
# },
"[Lenses]": {
"Set lense here": {
"editable": "1",
@@ -48,6 +38,16 @@ def tag_data():
"family": "comment",
"subset": "main"
}
},
"FrameMain": {
"editable": "1",
"note": "Publishing a frame subset.",
"icon": "z_layer_main.png",
"metadata": {
"family": "frame",
"subset": "main",
"format": "png"
}
}
}
@@ -86,7 +86,7 @@ def update_tag(tag, data):

# due to hiero bug we have to make sure keys which are not existent in
# data are cleared of value by `None`
for _mk in mtd.keys():
for _mk in mtd.dict().keys():
if _mk.replace("tag.", "") not in data_mtd.keys():
mtd.setValue(_mk, str(None))
@@ -3,10 +3,6 @@ from openpype.pipeline import (
get_representation_path,
)
import openpype.hosts.hiero.api as phiero
# from openpype.hosts.hiero.api import plugin, lib
# reload(lib)
# reload(plugin)
# reload(phiero)


class LoadClip(phiero.SequenceLoader):
@@ -106,7 +102,7 @@ class LoadClip(phiero.SequenceLoader):
name = container['name']
namespace = container['namespace']
track_item = phiero.get_track_items(
track_item_name=namespace)
track_item_name=namespace).pop()
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
@@ -157,7 +153,7 @@ class LoadClip(phiero.SequenceLoader):
# load clip to timeline and get main variables
namespace = container['namespace']
track_item = phiero.get_track_items(
track_item_name=namespace)
track_item_name=namespace).pop()
track = track_item.parent()

# remove track item from track
@@ -0,0 +1,142 @@
from pprint import pformat
import re
import ast
import json

import pyblish.api


class CollectFrameTagInstances(pyblish.api.ContextPlugin):
"""Collect frames from tags.

Tag is expected to have metadata:
{
"family": "frame"
"subset": "main"
}
"""

order = pyblish.api.CollectorOrder
label = "Collect Frames"
hosts = ["hiero"]

def process(self, context):
self._context = context

# collect all sequence tags
subset_data = self._create_frame_subset_data_sequence(context)

self.log.debug("__ subset_data: {}".format(
pformat(subset_data)
))

# create instances
self._create_instances(subset_data)

def _get_tag_data(self, tag):
data = {}

# get tag metadata attribute
tag_data = tag.metadata()

# convert tag metadata to normal keys names and values to correct types
for k, v in dict(tag_data).items():
key = k.replace("tag.", "")

try:
# capture exceptions which are related to strings only
if re.match(r"^[\d]+$", v):
value = int(v)
elif re.match(r"^True$", v):
value = True
elif re.match(r"^False$", v):
value = False
elif re.match(r"^None$", v):
value = None
elif re.match(r"^[\w\d_]+$", v):
value = v
else:
value = ast.literal_eval(v)
except (ValueError, SyntaxError):
value = v

data[key] = value

return data

def _create_frame_subset_data_sequence(self, context):

sequence_tags = []
sequence = context.data["activeTimeline"]

# get all publishable sequence frames
publish_frames = range(int(sequence.duration() + 1))

self.log.debug("__ publish_frames: {}".format(
pformat(publish_frames)
))

# get all sequence tags
for tag in sequence.tags():
tag_data = self._get_tag_data(tag)
self.log.debug("__ tag_data: {}".format(
pformat(tag_data)
))
if not tag_data:
continue

if "family" not in tag_data:
continue

if tag_data["family"] != "frame":
continue

sequence_tags.append(tag_data)

self.log.debug("__ sequence_tags: {}".format(
pformat(sequence_tags)
))

# first collect all available subset tag frames
subset_data = {}
for tag_data in sequence_tags:
frame = int(tag_data["start"])

if frame not in publish_frames:
continue

subset = tag_data["subset"]

if subset in subset_data:
# update existing subset key
subset_data[subset]["frames"].append(frame)
else:
# create new subset key
subset_data[subset] = {
"frames": [frame],
"format": tag_data["format"],
"asset": context.data["assetEntity"]["name"]
}
return subset_data

def _create_instances(self, subset_data):
# create instance per subset
for subset_name, subset_data in subset_data.items():
name = "frame" + subset_name.title()
data = {
"name": name,
"label": "{} {}".format(name, subset_data["frames"]),
"family": "image",
"families": ["frame"],
"asset": subset_data["asset"],
"subset": name,
"format": subset_data["format"],
"frames": subset_data["frames"]
}
self._context.create_instance(**data)

self.log.info(
"Created instance: {}".format(
json.dumps(data, sort_keys=True, indent=4)
)
)
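A quick illustration (not part of the new plugin) of the metadata coercion `_get_tag_data` performs, with a made-up metadata dict standing in for Hiero's tag object.

import ast
import re

def coerce(v):
    # mirrors the branch order in _get_tag_data above
    try:
        if re.match(r"^[\d]+$", v):
            return int(v)
        elif re.match(r"^True$", v):
            return True
        elif re.match(r"^False$", v):
            return False
        elif re.match(r"^None$", v):
            return None
        elif re.match(r"^[\w\d_]+$", v):
            return v
        return ast.literal_eval(v)
    except (ValueError, SyntaxError):
        return v

raw = {"tag.family": "frame", "tag.start": "12", "tag.applieswhole": "True"}
data = {k.replace("tag.", ""): coerce(v) for k, v in raw.items()}
print(data)  # {'family': 'frame', 'start': 12, 'applieswhole': True}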
82
openpype/hosts/hiero/plugins/publish/extract_frames.py
Normal file

@@ -0,0 +1,82 @@
import os
import pyblish.api
import openpype


class ExtractFrames(openpype.api.Extractor):
"""Extracts frames"""

order = pyblish.api.ExtractorOrder
label = "Extract Frames"
hosts = ["hiero"]
families = ["frame"]
movie_extensions = ["mov", "mp4"]

def process(self, instance):
oiio_tool_path = openpype.lib.get_oiio_tools_path()
staging_dir = self.staging_dir(instance)
output_template = os.path.join(staging_dir, instance.data["name"])
sequence = instance.context.data["activeTimeline"]

files = []
for frame in instance.data["frames"]:
track_item = sequence.trackItemAt(frame)
media_source = track_item.source().mediaSource()
input_path = media_source.fileinfos()[0].filename()
input_frame = (
track_item.mapTimelineToSource(frame) +
track_item.source().mediaSource().startTime()
)
output_ext = instance.data["format"]
output_path = output_template
output_path += ".{:04d}.{}".format(int(frame), output_ext)

args = [oiio_tool_path]

ext = os.path.splitext(input_path)[1][1:]
if ext in self.movie_extensions:
args.extend(["--subimage", str(int(input_frame))])
else:
args.extend(["--frames", str(int(input_frame))])

if ext == "exr":
args.extend(["--powc", "0.45,0.45,0.45,1.0"])

args.extend([input_path, "-o", output_path])
output = openpype.api.run_subprocess(args)

failed_output = "oiiotool produced no output."
if failed_output in output:
raise ValueError(
"oiiotool processing failed. Args: {}".format(args)
)

files.append(output_path)

# Feedback to user because "oiiotool" can make the publishing
# appear unresponsive.
self.log.info(
"Processed {} of {} frames".format(
instance.data["frames"].index(frame) + 1,
len(instance.data["frames"])
)
)

if len(files) == 1:
instance.data["representations"] = [
{
"name": output_ext,
"ext": output_ext,
"files": os.path.basename(files[0]),
"stagingDir": staging_dir
}
]
else:
instance.data["representations"] = [
{
"name": output_ext,
"ext": output_ext,
"files": [os.path.basename(x) for x in files],
"stagingDir": staging_dir
}
]
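Roughly what the argument lists assembled above look like for one frame (not from the commit; the paths and frame numbers are invented for illustration).

oiio_tool_path = "/opt/oiio/bin/oiiotool"  # hypothetical install location

# EXR sequence input: select the source frame and apply the gamma lift
args_exr = [
    oiio_tool_path,
    "--frames", "1013",
    "--powc", "0.45,0.45,0.45,1.0",
    "/proj/plates/plate.%04d.exr",
    "-o", "/tmp/staging/frameMain.0012.png",
]

# QuickTime input: movie frames are addressed as subimages instead
args_mov = [
    oiio_tool_path,
    "--subimage", "1013",
    "/proj/plates/plate.mov",
    "-o", "/tmp/staging/frameMain.0012.png",
]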
@@ -19,9 +19,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):

def process(self, context):
self.otio_timeline = context.data["otioTimeline"]

timeline_selection = phiero.get_timeline_selection()
selected_timeline_items = phiero.get_track_items(
selected=True, check_tagged=True, check_enabled=True)
selection=timeline_selection,
check_tagged=True,
check_enabled=True
)

# only return enabled track items
if not selected_timeline_items:
@@ -292,10 +295,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
for otio_clip in self.otio_timeline.each_clip():
track_name = otio_clip.parent().name
parent_range = otio_clip.range_in_parent()
if ti_track_name not in track_name:
if ti_track_name != track_name:
continue
if otio_clip.name not in track_item.name():
if otio_clip.name != track_item.name():
continue
self.log.debug("__ parent_range: {}".format(parent_range))
self.log.debug("__ timeline_range: {}".format(timeline_range))
if openpype.lib.is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
@@ -312,7 +317,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int(track_item.sourceDuration() / speed)
frame_duration = int((track_item.duration() - 1) / speed)
fps = timeline.framerate().toFloat()

return hiero_export.create_otio_time_range(
@@ -16,7 +16,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""

label = "Precollect Workfile"
order = pyblish.api.CollectorOrder - 0.5
order = pyblish.api.CollectorOrder - 0.491

def process(self, context):
@@ -68,6 +68,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile",
"families": [],
"representations": [workfile_representation, thumb_representation]
}
@@ -77,11 +78,13 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
# update context with main project attributes
context_data = {
"activeProject": project,
"activeTimeline": active_timeline,
"otioTimeline": otio_timeline,
"currentFile": curent_file,
"colorspace": self.get_colorspace(project),
"fps": fps
}
self.log.debug("__ context_data: {}".format(pformat(context_data)))
context.data.update(context_data)

self.log.info("Creating instance: {}".format(instance))
@@ -1,38 +0,0 @@
import pyblish.api


class CollectClipResolution(pyblish.api.InstancePlugin):
"""Collect clip geometry resolution"""

order = pyblish.api.CollectorOrder - 0.1
label = "Collect Clip Resolution"
hosts = ["hiero"]
families = ["clip"]

def process(self, instance):
sequence = instance.context.data['activeSequence']
item = instance.data["item"]
source_resolution = instance.data.get("sourceResolution", None)

resolution_width = int(sequence.format().width())
resolution_height = int(sequence.format().height())
pixel_aspect = sequence.format().pixelAspect()

# source exception
if source_resolution:
resolution_width = int(item.source().mediaSource().width())
resolution_height = int(item.source().mediaSource().height())
pixel_aspect = item.source().mediaSource().pixelAspect()

resolution_data = {
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"pixelAspect": pixel_aspect
}
# add to instacne data
instance.data.update(resolution_data)

self.log.info("Resolution of instance '{}' is: {}".format(
instance,
resolution_data
))
|
@ -1,15 +0,0 @@
|
|||
import pyblish.api
|
||||
|
||||
|
||||
class CollectHostVersion(pyblish.api.ContextPlugin):
|
||||
"""Inject the hosts version into context"""
|
||||
|
||||
label = "Collect Host and HostVersion"
|
||||
order = pyblish.api.CollectorOrder - 0.5
|
||||
|
||||
def process(self, context):
|
||||
import nuke
|
||||
import pyblish.api
|
||||
|
||||
context.set_data("host", pyblish.api.current_host())
|
||||
context.set_data('hostVersion', value=nuke.NUKE_VERSION_STRING)
|
||||
|
|
@@ -1,32 +0,0 @@
from pyblish import api


class CollectTagRetime(api.InstancePlugin):
"""Collect Retiming from Tags of selected track items."""

order = api.CollectorOrder + 0.014
label = "Collect Retiming Tag"
hosts = ["hiero"]
families = ['clip']

def process(self, instance):
# gets tags
tags = instance.data["tags"]

for t in tags:
t_metadata = dict(t["metadata"])
t_family = t_metadata.get("tag.family", "")

# gets only task family tags and collect labels
if "retiming" in t_family:
margin_in = t_metadata.get("tag.marginIn", "")
margin_out = t_metadata.get("tag.marginOut", "")

instance.data["retimeMarginIn"] = int(margin_in)
instance.data["retimeMarginOut"] = int(margin_out)
instance.data["retime"] = True

self.log.info("retimeMarginIn: `{}`".format(margin_in))
self.log.info("retimeMarginOut: `{}`".format(margin_out))

instance.data["families"] += ["retime"]
@ -1,223 +0,0 @@
|
|||
from compiler.ast import flatten
|
||||
from pyblish import api
|
||||
from openpype.hosts.hiero import api as phiero
|
||||
import hiero
|
||||
# from openpype.hosts.hiero.api import lib
|
||||
# reload(lib)
|
||||
# reload(phiero)
|
||||
|
||||
|
||||
class PreCollectInstances(api.ContextPlugin):
|
||||
"""Collect all Track items selection."""
|
||||
|
||||
order = api.CollectorOrder - 0.509
|
||||
label = "Pre-collect Instances"
|
||||
hosts = ["hiero"]
|
||||
|
||||
def process(self, context):
|
||||
track_items = phiero.get_track_items(
|
||||
selected=True, check_tagged=True, check_enabled=True)
|
||||
# only return enabled track items
|
||||
if not track_items:
|
||||
track_items = phiero.get_track_items(
|
||||
check_enabled=True, check_tagged=True)
|
||||
# get sequence and video tracks
|
||||
sequence = context.data["activeSequence"]
|
||||
tracks = sequence.videoTracks()
|
||||
|
||||
# add collection to context
|
||||
tracks_effect_items = self.collect_sub_track_items(tracks)
|
||||
|
||||
        context.data["tracksEffectItems"] = tracks_effect_items

        self.log.info(
            "Processing enabled track items: {}".format(len(track_items)))

        for _ti in track_items:
            data = {}
            clip = _ti.source()

            # get clip's subtracks and annotations
            annotations = self.clip_annotations(clip)
            subtracks = self.clip_subtrack(_ti)
            self.log.debug("Annotations: {}".format(annotations))
            self.log.debug(">> Subtracks: {}".format(subtracks))

            # get pype tag data
            tag_parsed_data = phiero.get_track_item_pype_data(_ti)
            # self.log.debug(pformat(tag_parsed_data))

            if not tag_parsed_data:
                continue

            if tag_parsed_data.get("id") != "pyblish.avalon.instance":
                continue

            # add tag data to instance data
            data.update({
                k: v for k, v in tag_parsed_data.items()
                if k not in ("id", "applieswhole", "label")
            })

            asset = tag_parsed_data["asset"]
            subset = tag_parsed_data["subset"]
            review_track = tag_parsed_data.get("reviewTrack")
            hero_track = tag_parsed_data.get("heroTrack")
            audio = tag_parsed_data.get("audio")

            # remove audio attribute from data
            data.pop("audio")

            # insert family into families
            family = tag_parsed_data["family"]
            families = [str(f) for f in tag_parsed_data["families"]]
            families.insert(0, str(family))

            track = _ti.parent()
            media_source = _ti.source().mediaSource()
            source_path = media_source.firstpath()
            file_head = media_source.filenameHead()
            file_info = media_source.fileinfos().pop()
            source_first_frame = int(file_info.startFrame())

            # apply only for review and hero track instance
            if review_track and hero_track:
                families += ["review", "ftrack"]

            data.update({
                "name": "{} {} {}".format(asset, subset, families),
                "asset": asset,
                "item": _ti,
                "families": families,

                # tags
                "tags": _ti.tags(),

                # track item attributes
                "track": track.name(),
                "trackItem": track,
                "reviewTrack": review_track,

                # version data
                "versionData": {
                    "colorspace": _ti.sourceMediaColourTransform()
                },

                # source attributes
                "source": source_path,
                "sourceMedia": media_source,
                "sourcePath": source_path,
                "sourceFileHead": file_head,
                "sourceFirst": source_first_frame,

                # clip's effects
                "clipEffectItems": subtracks
            })

            instance = context.create_instance(**data)

            self.log.info("Creating instance.data: {}".format(instance.data))

            if audio:
                a_data = dict()

                # add tag data to instance data
                a_data.update({
                    k: v for k, v in tag_parsed_data.items()
                    if k not in ("id", "applieswhole", "label")
                })

                # create main attributes
                subset = "audioMain"
                family = "audio"
                families = ["clip", "ftrack"]
                families.insert(0, str(family))

                name = "{} {} {}".format(asset, subset, families)

                a_data.update({
                    "name": name,
                    "subset": subset,
                    "asset": asset,
                    "family": family,
                    "families": families,
                    "item": _ti,

                    # tags
                    "tags": _ti.tags(),
                })

                a_instance = context.create_instance(**a_data)
                self.log.info("Creating audio instance: {}".format(a_instance))

    @staticmethod
    def clip_annotations(clip):
        """Return a list of the clip's hiero.core.Annotation items."""
        annotations = []
        subTrackItems = flatten(clip.subTrackItems())
        annotations += [item for item in subTrackItems if isinstance(
            item, hiero.core.Annotation)]
        return annotations

    @staticmethod
    def clip_subtrack(clip):
        """Return a list of the clip's hiero.core.SubTrackItem items."""
        subtracks = []
        subTrackItems = flatten(clip.parent().subTrackItems())
        for item in subTrackItems:
            # skip all annotations
            if isinstance(item, hiero.core.Annotation):
                continue
            # skip all disabled items
            if not item.isEnabled():
                continue
            subtracks.append(item)
        return subtracks

    @staticmethod
    def collect_sub_track_items(tracks):
        """Return dictionary with track index as key and list of subtracks."""
        # collect all subtrack items
        sub_track_items = dict()
        for track in tracks:
            items = track.items()

            # skip if there are clips on the track > we only want tracks
            # holding effects
            if items:
                continue

            # skip all disabled tracks
            if not track.isEnabled():
                continue

            track_index = track.trackIndex()
            _sub_track_items = flatten(track.subTrackItems())

            # continue only if any subtrack items were collected
            if len(_sub_track_items) < 1:
                continue

            enabled_sti = list()
            # loop all found subtrack items and check if they are enabled
            for _sti in _sub_track_items:
                # skip if not enabled
                if not _sti.isEnabled():
                    continue
                if isinstance(_sti, hiero.core.Annotation):
                    continue
                # collect the subtrack item
                enabled_sti.append(_sti)

            # continue only if any enabled subtrack items were collected
            if len(enabled_sti) < 1:
                continue

            # add collection of subtrack items to dict
            sub_track_items[track_index] = enabled_sti

        return sub_track_items
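
# The collectors above assume a `flatten` helper that linearizes Hiero's
# nested subTrackItems() tuples into one list (so len() and list filtering
# work). A minimal sketch of the assumed behavior; the real helper lives
# elsewhere in the Hiero host module:


def flatten(items):
    """Return a flat list of leaf items from nested lists/tuples."""
    result = []
    for item in items:
        if isinstance(item, (list, tuple)):
            result.extend(flatten(item))
        else:
            result.append(item)
    return result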

@@ -1,74 +0,0 @@
import os
import pyblish.api
from openpype.hosts.hiero import api as phiero
from openpype.pipeline import legacy_io


class PreCollectWorkfile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    label = "Pre-collect Workfile"
    order = pyblish.api.CollectorOrder - 0.51

    def process(self, context):
        asset = legacy_io.Session["AVALON_ASSET"]
        subset = "workfile"

        project = phiero.get_current_project()
        active_sequence = phiero.get_current_sequence()
        video_tracks = active_sequence.videoTracks()
        audio_tracks = active_sequence.audioTracks()
        current_file = project.path()
        staging_dir = os.path.dirname(current_file)
        base_name = os.path.basename(current_file)

        # get workfile's colorspace properties
        _clrs = {}
        _clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride()  # noqa
        _clrs["lutSetting16Bit"] = project.lutSetting16Bit()
        _clrs["lutSetting8Bit"] = project.lutSetting8Bit()
        _clrs["lutSettingFloat"] = project.lutSettingFloat()
        _clrs["lutSettingLog"] = project.lutSettingLog()
        _clrs["lutSettingViewer"] = project.lutSettingViewer()
        _clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
        _clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
        _clrs["ocioConfigName"] = project.ocioConfigName()
        _clrs["ocioConfigPath"] = project.ocioConfigPath()

        # set main project attributes to context
        context.data["activeProject"] = project
        context.data["activeSequence"] = active_sequence
        context.data["videoTracks"] = video_tracks
        context.data["audioTracks"] = audio_tracks
        context.data["currentFile"] = current_file
        context.data["colorspace"] = _clrs

        self.log.info("currentFile: {}".format(current_file))

        # creating workfile representation
        representation = {
            'name': 'hrox',
            'ext': 'hrox',
            'files': base_name,
            "stagingDir": staging_dir,
        }

        instance_data = {
            "name": "{}_{}".format(asset, subset),
            "asset": asset,
            "subset": "{}{}".format(asset, subset.capitalize()),
            "item": project,
            "family": "workfile",

            # version data
            "versionData": {
                "colorspace": _clrs
            },

            # source attribute
            "sourcePath": current_file,
            "representations": [representation]
        }

        instance = context.create_instance(**instance_data)
        self.log.info("Creating instance: {}".format(instance))
self.log.info("Creating instance: {}".format(instance))
|
||||
|
|
@ -6,7 +6,7 @@ from openpype.pipeline import load
|
|||
|
||||
|
||||
class SetFrameRangeLoader(load.LoaderPlugin):
-    """Set Houdini frame range"""
+    """Set frame range excluding pre- and post-handles"""

    families = [
        "animation",

@@ -44,7 +44,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Set Maya frame range including pre- and post-handles"""
+    """Set frame range including pre- and post-handles"""

    families = [
        "animation",

@@ -7,7 +7,7 @@ from openpype.hosts.houdini.api import pipeline
class AbcLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load Alembic"""

    families = ["model", "animation", "pointcache", "gpuCache"]
    label = "Load Alembic"

openpype/hosts/houdini/plugins/load/load_alembic_archive.py (new file, 75 lines)
@@ -0,0 +1,75 @@
|
|||
import os
|
||||
from openpype.pipeline import (
|
||||
load,
|
||||
get_representation_path,
|
||||
)
|
||||
from openpype.hosts.houdini.api import pipeline
|
||||
|
||||
|
||||
class AbcArchiveLoader(load.LoaderPlugin):
|
||||
"""Load Alembic as full geometry network hierarchy """
|
||||
|
||||
families = ["model", "animation", "pointcache", "gpuCache"]
|
||||
label = "Load Alembic as Archive"
|
||||
representations = ["abc"]
|
||||
order = -5
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
|
||||
import hou
|
||||
|
||||
# Format file name, Houdini only wants forward slashes
|
||||
file_path = os.path.normpath(self.fname)
|
||||
file_path = file_path.replace("\\", "/")
|
||||
|
||||
# Get the root node
|
||||
obj = hou.node("/obj")
|
||||
|
||||
# Define node name
|
||||
namespace = namespace if namespace else context["asset"]["name"]
|
||||
node_name = "{}_{}".format(namespace, name) if namespace else name
|
||||
|
||||
# Create an Alembic archive node
|
||||
node = obj.createNode("alembicarchive", node_name=node_name)
|
||||
node.moveToGoodPosition()
|
||||
|
||||
# TODO: add FPS of project / asset
|
||||
node.setParms({"fileName": file_path,
|
||||
"channelRef": True})
|
||||
|
||||
# Apply some magic
|
||||
node.parm("buildHierarchy").pressButton()
|
||||
node.moveToGoodPosition()
|
||||
|
||||
nodes = [node]
|
||||
|
||||
self[:] = nodes
|
||||
|
||||
return pipeline.containerise(node_name,
|
||||
namespace,
|
||||
nodes,
|
||||
context,
|
||||
self.__class__.__name__,
|
||||
suffix="")
|
||||
|
||||
def update(self, container, representation):
|
||||
|
||||
node = container["node"]
|
||||
|
||||
# Update the file path
|
||||
file_path = get_representation_path(representation)
|
||||
file_path = file_path.replace("\\", "/")
|
||||
|
||||
# Update attributes
|
||||
node.setParms({"fileName": file_path,
|
||||
"representation": str(representation["_id"])})
|
||||
|
||||
# Rebuild
|
||||
node.parm("buildHierarchy").pressButton()
|
||||
|
||||
def remove(self, container):
|
||||
|
||||
node = container["node"]
|
||||
node.destroy()

openpype/hosts/houdini/plugins/load/load_bgeo.py (new file, 107 lines)
@@ -0,0 +1,107 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
import os
|
||||
import re
|
||||
|
||||
from openpype.pipeline import (
|
||||
load,
|
||||
get_representation_path,
|
||||
)
|
||||
from openpype.hosts.houdini.api import pipeline
|
||||
|
||||
|
||||
class BgeoLoader(load.LoaderPlugin):
|
||||
"""Load bgeo files to Houdini."""
|
||||
|
||||
label = "Load bgeo"
|
||||
families = ["model", "pointcache", "bgeo"]
|
||||
representations = [
|
||||
"bgeo", "bgeosc", "bgeogz",
|
||||
"bgeo.sc", "bgeo.gz", "bgeo.lzma", "bgeo.bz2"]
|
||||
order = -10
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
|
||||
import hou
|
||||
|
||||
# Get the root node
|
||||
obj = hou.node("/obj")
|
||||
|
||||
# Define node name
|
||||
namespace = namespace if namespace else context["asset"]["name"]
|
||||
node_name = "{}_{}".format(namespace, name) if namespace else name
|
||||
|
||||
# Create a new geo node
|
||||
container = obj.createNode("geo", node_name=node_name)
|
||||
is_sequence = bool(context["representation"]["context"].get("frame"))
|
||||
|
||||
# Remove the file node, it only loads static meshes
|
||||
# Houdini 17 has removed the file node from the geo node
|
||||
file_node = container.node("file1")
|
||||
if file_node:
|
||||
file_node.destroy()
|
||||
|
||||
# Explicitly create a file node
|
||||
file_node = container.createNode("file", node_name=node_name)
|
||||
file_node.setParms({"file": self.format_path(self.fname, is_sequence)})
|
||||
|
||||
# Set display on last node
|
||||
file_node.setDisplayFlag(True)
|
||||
|
||||
nodes = [container, file_node]
|
||||
self[:] = nodes
|
||||
|
||||
return pipeline.containerise(
|
||||
node_name,
|
||||
namespace,
|
||||
nodes,
|
||||
context,
|
||||
self.__class__.__name__,
|
||||
suffix="",
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def format_path(path, is_sequence):
|
||||
"""Format file path correctly for single bgeo or bgeo sequence."""
|
||||
if not os.path.exists(path):
|
||||
raise RuntimeError("Path does not exist: %s" % path)
|
||||
|
||||
# The path is either a single file or sequence in a folder.
|
||||
if not is_sequence:
|
||||
filename = path
|
||||
print("single")
|
||||
else:
|
||||
filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
|
||||
|
||||
filename = os.path.join(path, filename)
|
||||
|
||||
filename = os.path.normpath(filename)
|
||||
filename = filename.replace("\\", "/")
|
||||
|
||||
return filename
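
    # Example (hypothetical paths) of how format_path() rewrites an explicit
    # frame number into Houdini's $F4 token:
    #
    #   format_path("/cache/pig.0001.bgeo.sc", is_sequence=True)
    #       -> ".../pig.$F4.bgeo.sc"
    #   format_path("/cache/pig.bgeo", is_sequence=False)
    #       -> "/cache/pig.bgeo"  (single files pass through unchanged)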

    def update(self, container, representation):
        node = container["node"]
        try:
            file_node = next(
                n for n in node.children() if n.type().name() == "file"
            )
        except StopIteration:
            self.log.error("Could not find node of type `file`")
            return

        # Update the file path
        file_path = get_representation_path(representation)
        is_sequence = bool(representation["context"].get("frame"))
        file_path = self.format_path(file_path, is_sequence)

        file_node.setParms({"file": file_path})

        # Update attribute
        node.setParms({"representation": str(representation["_id"])})

    def remove(self, container):
        node = container["node"]
        node.destroy()

@@ -78,7 +78,7 @@ def transfer_non_default_values(src, dest, ignore=None):
class CameraLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load camera from an Alembic file"""

    families = ["camera"]
    label = "Load Camera (abc)"

@@ -42,9 +42,9 @@ def get_image_avalon_container():
class ImageLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load images into COP2"""

-    families = ["colorbleed.imagesequence"]
+    families = ["imagesequence"]
    label = "Load Image (COP2)"
    representations = ["*"]
    order = -10

@@ -9,7 +9,7 @@ from openpype.hosts.houdini.api import pipeline
class VdbLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load VDB"""

    families = ["vdbcache"]
    label = "Load VDB"

@@ -1,3 +1,7 @@
+import os
+import subprocess
+
+from openpype.lib.vendor_bin_utils import find_executable
from openpype.pipeline import load

@@ -14,12 +18,7 @@ class ShowInUsdview(load.LoaderPlugin):
    def load(self, context, name=None, namespace=None, data=None):

-        import os
-        import subprocess
-
-        import avalon.lib as lib
-
-        usdview = lib.which("usdview")
+        usdview = find_executable("usdview")

        filepath = os.path.normpath(self.fname)
        filepath = filepath.replace("\\", "/")

@@ -77,6 +77,7 @@ IMAGE_PREFIXES = {
    "arnold": "defaultRenderGlobals.imageFilePrefix",
    "renderman": "rmanGlobals.imageFileFormat",
    "redshift": "defaultRenderGlobals.imageFilePrefix",
+    "mayahardware2": "defaultRenderGlobals.imageFilePrefix"
}

RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"

@@ -155,7 +156,8 @@ def get(layer, render_instance=None):
        "arnold": RenderProductsArnold,
        "vray": RenderProductsVray,
        "redshift": RenderProductsRedshift,
-        "renderman": RenderProductsRenderman
+        "renderman": RenderProductsRenderman,
+        "mayahardware2": RenderProductsMayaHardware
    }.get(renderer_name.lower(), None)
    if renderer is None:
        raise UnsupportedRendererException(

@@ -1091,6 +1093,11 @@ class RenderProductsRenderman(ARenderProducts):
            if not enabled:
                continue

+            # Skip display types not producing any file output.
+            # Is there a better way to do it?
+            if not display_types.get(display["driverNode"]["type"]):
+                continue

            aov_name = name
            if aov_name == "rmanDefaultDisplay":
                aov_name = "beauty"

@@ -1124,6 +1131,67 @@ class RenderProductsRenderman(ARenderProducts):
        return new_files


class RenderProductsMayaHardware(ARenderProducts):
    """Expected files for MayaHardware renderer."""

    renderer = "mayahardware2"

    extensions = [
        {"label": "JPEG", "index": 8, "extension": "jpg"},
        {"label": "PNG", "index": 32, "extension": "png"},
        {"label": "EXR(exr)", "index": 40, "extension": "exr"}
    ]

    def _get_extension(self, value):
        result = None
        if isinstance(value, int):
            extensions = {
                extension["index"]: extension["extension"]
                for extension in self.extensions
            }
            try:
                result = extensions[value]
            except KeyError:
                raise NotImplementedError(
                    "Could not find extension for {}".format(value)
                )

        if isinstance(value, six.string_types):
            extensions = {
                extension["label"]: extension["extension"]
                for extension in self.extensions
            }
            try:
                result = extensions[value]
            except KeyError:
                raise NotImplementedError(
                    "Could not find extension for {}".format(value)
                )

        if not result:
            raise NotImplementedError(
                "Could not find extension for {}".format(value)
            )

        return result

    def get_render_products(self):
        """Get all AOVs.

        See Also:
            :func:`ARenderProducts.get_render_products()`

        """
        ext = self._get_extension(
            self._get_attr("defaultRenderGlobals.imageFormat")
        )

        products = []
        for cam in self.get_renderable_cameras():
            product = RenderProduct(productName="beauty", ext=ext, camera=cam)
            products.append(product)

        return products
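
    # Example of the extension lookup above (index values follow Maya's
    # defaultRenderGlobals.imageFormat enum, as listed in `extensions`;
    # the calls are illustrative):
    #
    #   self._get_extension(32)     -> "png"   (matched via "index")
    #   self._get_extension("JPEG") -> "jpg"   (matched via "label")
    #   self._get_extension(99)     -> raises NotImplementedError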


class AOVError(Exception):
    """Custom exception for determining AOVs."""

@@ -77,7 +77,8 @@ class CreateRender(plugin.Creator):
        'vray': 'vraySettings.fileNamePrefix',
        'arnold': 'defaultRenderGlobals.imageFilePrefix',
        'renderman': 'rmanGlobals.imageFileFormat',
-        'redshift': 'defaultRenderGlobals.imageFilePrefix'
+        'redshift': 'defaultRenderGlobals.imageFilePrefix',
+        'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
    }

    _image_prefixes = {

@@ -87,7 +88,8 @@ class CreateRender(plugin.Creator):
        # this needs `imageOutputDir`
        # (<ws>/renders/maya/<scene>) set separately
        'renderman': '<layer>_<aov>.<f4>.<ext>',
-        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>'  # noqa
+        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
+        'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
    }

    _aov_chars = {

@@ -2,7 +2,7 @@ import openpype.hosts.maya.api.plugin
class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Loader to reference an Alembic file"""

    families = ["animation",
                "camera",

@@ -1,7 +1,7 @@
"""A module containing generic loader actions that will display in the Loader.

"""

import qargparse
from openpype.pipeline import load
from openpype.hosts.maya.api.lib import (
    maintained_selection,

@@ -10,7 +10,7 @@ from openpype.hosts.maya.api.lib import (
class SetFrameRangeLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range excluding pre- and post-handles"""

    families = ["animation",
                "camera",

@@ -44,7 +44,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range including pre- and post-handles"""

    families = ["animation",
                "camera",

@@ -98,6 +98,15 @@ class ImportMayaLoader(load.LoaderPlugin):
    icon = "arrow-circle-down"
    color = "#775555"

+    options = [
+        qargparse.Boolean(
+            "clean_import",
+            label="Clean import",
+            default=False,
+            help="Should all occurrences of cbId be purged?"
+        )
+    ]

    def load(self, context, name=None, namespace=None, data=None):
        import maya.cmds as cmds

@@ -114,13 +123,22 @@ class ImportMayaLoader(load.LoaderPlugin):
        )

        with maintained_selection():
-            cmds.file(self.fname,
-                      i=True,
-                      preserveReferences=True,
-                      namespace=namespace,
-                      returnNewNodes=True,
-                      groupReference=True,
-                      groupName="{}:{}".format(namespace, name))
+            nodes = cmds.file(self.fname,
+                              i=True,
+                              preserveReferences=True,
+                              namespace=namespace,
+                              returnNewNodes=True,
+                              groupReference=True,
+                              groupName="{}:{}".format(namespace, name))

+        if data.get("clean_import", False):
+            remove_attributes = ["cbId"]
+            for node in nodes:
+                for attr in remove_attributes:
+                    if cmds.attributeQuery(attr, node=node, exists=True):
+                        full_attr = "{}.{}".format(node, attr)
+                        print("Removing {}".format(full_attr))
+                        cmds.deleteAttr(full_attr)

        # We do not containerize imported content, it remains unmanaged
        return

@@ -16,7 +16,7 @@ from openpype.hosts.maya.api.pipeline import containerise
class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Load the Proxy"""
+    """Load Arnold Proxy as reference"""

    families = ["ass"]
    representations = ["ass"]

@@ -8,7 +8,7 @@ from openpype.api import get_project_settings
class GpuCacheLoader(load.LoaderPlugin):
-    """Load model Alembic as gpuCache"""
+    """Load Alembic as gpuCache"""

    families = ["model"]
    representations = ["abc"]

@@ -83,7 +83,7 @@ class ImagePlaneLoader(load.LoaderPlugin):
    families = ["image", "plate", "render"]
    label = "Load imagePlane"
-    representations = ["mov", "exr", "preview", "png"]
+    representations = ["mov", "exr", "preview", "png", "jpg"]
    icon = "image"
    color = "orange"

@@ -12,7 +12,7 @@ from openpype.hosts.maya.api.lib import maintained_selection
class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Load the model"""
+    """Reference file"""

    families = ["model",
                "pointcache",

@@ -74,6 +74,7 @@ def _fix_duplicate_vvg_callbacks():
class LoadVDBtoVRay(load.LoaderPlugin):
    """Load OpenVDB in a V-Ray Volume Grid"""

    families = ["vdbcache"]
    representations = ["vdb"]

openpype/hosts/maya/plugins/publish/collect_fbx_camera.py (new file, 20 lines)
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
from maya import cmds  # noqa
import pyblish.api


class CollectFbxCamera(pyblish.api.InstancePlugin):
    """Collect Camera for FBX export."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Camera for FBX export"
    families = ["camera"]

    def process(self, instance):
        if not instance.data.get("families"):
            instance.data["families"] = []

        if "fbx" not in instance.data["families"]:
            instance.data["families"].append("fbx")

        instance.data["cameras"] = True

@@ -22,10 +22,46 @@ RENDERER_NODE_TYPES = [
    # redshift
    "RedshiftMeshParameters"
]

SHAPE_ATTRS = set(SHAPE_ATTRS)


def get_pxr_multitexture_file_attrs(node):
    attrs = []
    for i in range(9):
        if cmds.attributeQuery("filename{}".format(i), node=node, ex=True):
            file = cmds.getAttr("{}.filename{}".format(node, i))
            if file:
                attrs.append("filename{}".format(i))
    return attrs


FILE_NODES = {
    "file": "fileTextureName",

    "aiImage": "filename",

    "RedshiftNormalMap": "tex0",

    "PxrBump": "filename",
    "PxrNormalMap": "filename",
    "PxrMultiTexture": get_pxr_multitexture_file_attrs,
    "PxrPtexture": "filename",
    "PxrTexture": "filename"
}


def get_attributes(dictionary, attr, node=None):
    # type: (dict, str, str) -> list
    if callable(dictionary[attr]):
        val = dictionary[attr](node)
    else:
        val = dictionary.get(attr, [])

    if not isinstance(val, list):
        return [val]
    return val
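
# Example of the dispatch above: FILE_NODES maps a node type either to a
# plain attribute name or to a callable that inspects the node; both cases
# normalize to a list (the PxrMultiTexture result is illustrative):
#
#   get_attributes(FILE_NODES, "file")                  -> ["fileTextureName"]
#   get_attributes(FILE_NODES, "PxrMultiTexture", node) -> e.g. ["filename0"]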


def get_look_attrs(node):
    """Returns attributes of a node that are important for the look.

@@ -51,15 +87,14 @@ def get_look_attrs(node):
    if cmds.objectType(node, isAType="shape"):
        attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
        for attr in attrs:
-            if attr in SHAPE_ATTRS:
-                result.append(attr)
-            elif attr.startswith('ai'):
+            if attr in SHAPE_ATTRS or \
+                    attr not in SHAPE_ATTRS and attr.startswith('ai'):
                result.append(attr)

    return result


-def node_uses_image_sequence(node):
+def node_uses_image_sequence(node, node_path):
    # type: (str) -> bool
    """Return whether file node uses an image sequence or single image.

    Determine if a node uses an image sequence or just a single image,

@@ -74,12 +109,15 @@ def node_uses_image_sequence(node):
    """

    # useFrameExtension indicates an explicit image sequence
-    node_path = get_file_node_path(node).lower()

    # The following tokens imply a sequence
-    patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
+    patterns = ["<udim>", "<tile>", "<uvtile>",
+                "u<u>_v<v>", "<frame0", "<f4>"]
+    try:
+        use_frame_extension = cmds.getAttr('%s.useFrameExtension' % node)
+    except ValueError:
+        use_frame_extension = False

-    return (cmds.getAttr('%s.useFrameExtension' % node) or
+    return (use_frame_extension or
            any(pattern in node_path for pattern in patterns))
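
    # Illustrative paths the token scan above classifies as sequences even
    # when useFrameExtension is off:
    #
    #   "tex/diffuse.<udim>.tx" -> True
    #   "tex/plate.<f4>.exr"    -> True
    #   "tex/logo.png"          -> False (single image)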

@@ -137,14 +175,15 @@ def seq_to_glob(path):
    return path


-def get_file_node_path(node):
+def get_file_node_paths(node):
+    # type: (str) -> list
    """Get the file path used by a Maya file node.

    Args:
        node (str): Name of the Maya file node

    Returns:
-        str: the file path in use
+        list: the file paths in use

    """
    # if the path appears to be sequence, use computedFileTextureNamePattern,

@@ -163,15 +202,20 @@ def get_file_node_path(node):
                "<uvtile>"]
    lower = texture_pattern.lower()
    if any(pattern in lower for pattern in patterns):
-        return texture_pattern
+        return [texture_pattern]

-    if cmds.nodeType(node) == 'aiImage':
-        return cmds.getAttr('{0}.filename'.format(node))
-    if cmds.nodeType(node) == 'RedshiftNormalMap':
-        return cmds.getAttr('{}.tex0'.format(node))
+    try:
+        file_attributes = get_attributes(
+            FILE_NODES, cmds.nodeType(node), node)
+    except (AttributeError, KeyError):
+        file_attributes = ["fileTextureName"]

-    # otherwise use fileTextureName
-    return cmds.getAttr('{0}.fileTextureName'.format(node))
+    files = []
+    for file_attr in file_attributes:
+        if cmds.attributeQuery(file_attr, node=node, exists=True):
+            files.append(cmds.getAttr("{}.{}".format(node, file_attr)))
+
+    return files


def get_file_node_files(node):

@@ -185,16 +229,21 @@ def get_file_node_files(node):
        list: List of full file paths.

    """
+    paths = get_file_node_paths(node)
+    sequences = []
+    replaces = []
+    for index, path in enumerate(paths):
+        if node_uses_image_sequence(node, path):
+            glob_pattern = seq_to_glob(path)
+            sequences.extend(glob.glob(glob_pattern))
+            replaces.append(index)

-    path = get_file_node_path(node)
-    path = cmds.workspace(expandName=path)
-    if node_uses_image_sequence(node):
-        glob_pattern = seq_to_glob(path)
-        return glob.glob(glob_pattern)
-    elif os.path.exists(path):
-        return [path]
-    else:
-        return []
+    # pop from the end so the remaining indexes stay valid
+    for index in reversed(replaces):
+        paths.pop(index)
+
+    paths.extend(sequences)
+
+    return [p for p in paths if os.path.exists(p)]
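
# Illustrative result (made-up paths): a look with one UDIM texture and one
# plain texture resolves as
#
#   paths     = ["tex/wood.<udim>.tx", "tex/mask.png"]
#   sequences = glob.glob(seq_to_glob("tex/wood.<udim>.tx"))  # files on disk
#   result    = existing sequence files + ["tex/mask.png"] (if it exists)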


class CollectLook(pyblish.api.InstancePlugin):

@@ -238,13 +287,13 @@ class CollectLook(pyblish.api.InstancePlugin):
                      "for %s" % instance.data['name'])

        # Discover related object sets
-        self.log.info("Gathering sets..")
+        self.log.info("Gathering sets ...")
        sets = self.collect_sets(instance)

        # Lookup set (optimization)
        instance_lookup = set(cmds.ls(instance, long=True))

-        self.log.info("Gathering set relations..")
+        self.log.info("Gathering set relations ...")
        # Ensure iteration happens in a list so we can remove keys from the
        # dict within the loop

@@ -326,7 +375,10 @@ class CollectLook(pyblish.api.InstancePlugin):
                     "volumeShader",
                     "displacementShader",
                     "aiSurfaceShader",
-                     "aiVolumeShader"]
+                     "aiVolumeShader",
+                     "rman__surface",
+                     "rman__displacement"
+                     ]
        if look_sets:
            materials = []

@@ -374,15 +426,17 @@ class CollectLook(pyblish.api.InstancePlugin):
            or []
        )

-        files = cmds.ls(history, type="file", long=True)
-        files.extend(cmds.ls(history, type="aiImage", long=True))
-        files.extend(cmds.ls(history, type="RedshiftNormalMap", long=True))
+        all_supported_nodes = FILE_NODES.keys()
+        files = []
+        for node_type in all_supported_nodes:
+            files.extend(cmds.ls(history, type=node_type, long=True))

+        self.log.info("Collected file nodes:\n{}".format(files))
        # Collect textures if any file nodes are found
        instance.data["resources"] = []
        for n in files:
-            instance.data["resources"].append(self.collect_resource(n))
+            for res in self.collect_resources(n):
+                instance.data["resources"].append(res)

        self.log.info("Collected resources: {}".format(instance.data["resources"]))

@@ -502,7 +556,7 @@ class CollectLook(pyblish.api.InstancePlugin):
        return attributes

-    def collect_resource(self, node):
+    def collect_resources(self, node):
        """Collect the link to the file(s) used (resource)
        Args:
            node (str): name of the node

@@ -510,68 +564,69 @@ class CollectLook(pyblish.api.InstancePlugin):
        Returns:
            dict
        """
        self.log.debug("processing: {}".format(node))
-        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
+        all_supported_nodes = FILE_NODES.keys()
+        if cmds.nodeType(node) not in all_supported_nodes:
            self.log.error(
                "Unsupported file node: {}".format(cmds.nodeType(node)))
            raise AssertionError("Unsupported file node")

-        if cmds.nodeType(node) == 'file':
-            self.log.debug(" - file node")
-            attribute = "{}.fileTextureName".format(node)
-            computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        elif cmds.nodeType(node) == 'aiImage':
-            self.log.debug("aiImage node")
-            attribute = "{}.filename".format(node)
-            computed_attribute = attribute
-        elif cmds.nodeType(node) == 'RedshiftNormalMap':
-            self.log.debug("RedshiftNormalMap node")
-            attribute = "{}.tex0".format(node)
-            computed_attribute = attribute
+        self.log.debug(" - got {}".format(cmds.nodeType(node)))

-        source = cmds.getAttr(attribute)
-        self.log.info(" - file source: {}".format(source))
-        color_space_attr = "{}.colorSpace".format(node)
-        try:
-            color_space = cmds.getAttr(color_space_attr)
-        except ValueError:
-            # node doesn't have colorspace attribute
-            color_space = "Raw"
-        # Compare with the computed file path, e.g. the one with the <UDIM>
-        # pattern in it, to generate some logging information about this
-        # difference
-        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        computed_source = cmds.getAttr(computed_attribute)
-        if source != computed_source:
-            self.log.debug("Detected computed file pattern difference "
-                           "from original pattern: {0} "
-                           "({1} -> {2})".format(node,
-                                                 source,
-                                                 computed_source))
+        attributes = get_attributes(FILE_NODES, cmds.nodeType(node), node)
+        for attribute in attributes:
+            source = cmds.getAttr("{}.{}".format(
+                node,
+                attribute
+            ))
+            computed_attribute = "{}.{}".format(node, attribute)
+            if attribute == "fileTextureName":
+                computed_attribute = node + ".computedFileTextureNamePattern"

+            # We replace backslashes with forward slashes because V-Ray
+            # can't handle the UDIM files with the backslashes in the
+            # paths as the computed patterns
+            source = source.replace("\\", "/")
+            self.log.info(" - file source: {}".format(source))
+            color_space_attr = "{}.colorSpace".format(node)
+            try:
+                color_space = cmds.getAttr(color_space_attr)
+            except ValueError:
+                # node doesn't have a colorspace attribute
+                color_space = "Raw"
+            # Compare with the computed file path, e.g. the one with
+            # the <UDIM> pattern in it, to generate some logging information
+            # about this difference
+            computed_source = cmds.getAttr(computed_attribute)
+            if source != computed_source:
+                self.log.debug("Detected computed file pattern difference "
+                               "from original pattern: {0} "
+                               "({1} -> {2})".format(node,
+                                                     source,
+                                                     computed_source))

-        files = get_file_node_files(node)
-        if len(files) == 0:
-            self.log.error("No valid files found from node `%s`" % node)
-        # We replace backslashes with forward slashes because V-Ray
-        # can't handle the UDIM files with the backslashes in the
-        # paths as the computed patterns
-        source = source.replace("\\", "/")
-
-        self.log.info("collection of resource done:")
-        self.log.info(" - node: {}".format(node))
-        self.log.info(" - attribute: {}".format(attribute))
-        self.log.info(" - source: {}".format(source))
-        self.log.info(" - file: {}".format(files))
-        self.log.info(" - color space: {}".format(color_space))
+            files = get_file_node_files(node)
+            if len(files) == 0:
+                self.log.error("No valid files found from node `%s`" % node)

-        # Define the resource
-        return {"node": node,
-                "attribute": attribute,
+            self.log.info("collection of resource done:")
+            self.log.info(" - node: {}".format(node))
+            self.log.info(" - attribute: {}".format(attribute))
+            self.log.info(" - source: {}".format(source))
+            self.log.info(" - file: {}".format(files))
+            self.log.info(" - color space: {}".format(color_space))

+            # Define the resource
+            yield {
+                "node": node,
+                # here we pass the attribute together with the node again;
+                # this should be simplified in the extractor
+                "attribute": "{}.{}".format(node, attribute),
                "source": source,  # required for resources
                "files": files,
-                "color_space": color_space}  # required for resources
+                "color_space": color_space
+            }  # required for resources


class CollectModelRenderSets(CollectLook):

@@ -326,8 +326,8 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                "byFrameStep": int(
                    self.get_render_attribute("byFrameStep",
                                              layer=layer_name)),
-                "renderer": self.get_render_attribute("currentRenderer",
-                                                      layer=layer_name),
+                "renderer": self.get_render_attribute(
+                    "currentRenderer", layer=layer_name).lower(),
                # instance subset
                "family": "renderlayer",
                "families": ["renderlayer"],

@@ -339,9 +339,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                "source": filepath,
                "expectedFiles": full_exp_files,
                "publishRenderMetadataFolder": common_publish_meta_path,
-                "resolutionWidth": cmds.getAttr("defaultResolution.width"),
-                "resolutionHeight": cmds.getAttr("defaultResolution.height"),
-                "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
+                "resolutionWidth": lib.get_attr_in_layer(
+                    "defaultResolution.width", layer=layer_name
+                ),
+                "resolutionHeight": lib.get_attr_in_layer(
+                    "defaultResolution.height", layer=layer_name
+                ),
+                "pixelAspect": lib.get_attr_in_layer(
+                    "defaultResolution.pixelAspect", layer=layer_name
+                ),
                "tileRendering": render_instance.data.get("tileRendering") or False,  # noqa: E501
                "tilesX": render_instance.data.get("tilesX") or 2,
                "tilesY": render_instance.data.get("tilesY") or 2,

@@ -124,9 +124,15 @@ class CollectVrayScene(pyblish.api.InstancePlugin):
            # Add source to allow tracing back to the scene from
            # which it was submitted originally
            "source": context.data["currentFile"].replace("\\", "/"),
-            "resolutionWidth": cmds.getAttr("defaultResolution.width"),
-            "resolutionHeight": cmds.getAttr("defaultResolution.height"),
-            "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
+            "resolutionWidth": lib.get_attr_in_layer(
+                "defaultResolution.width", layer=layer_name
+            ),
+            "resolutionHeight": lib.get_attr_in_layer(
+                "defaultResolution.height", layer=layer_name
+            ),
+            "pixelAspect": lib.get_attr_in_layer(
+                "defaultResolution.pixelAspect", layer=layer_name
+            ),
            "priority": instance.data.get("priority"),
            "useMultipleSceneFiles": instance.data.get(
                "vraySceneMultipleFiles")

@@ -372,10 +372,12 @@ class ExtractLook(openpype.api.Extractor):
            if mode == COPY:
                transfers.append((source, destination))
-                self.log.info('copying')
+                self.log.info('file will be copied {} -> {}'.format(
+                    source, destination))
            elif mode == HARDLINK:
                hardlinks.append((source, destination))
-                self.log.info('hardlinking')
+                self.log.info('file will be hardlinked {} -> {}'.format(
+                    source, destination))

            # Store the hashes from hash to destination to include in the
            # database

@@ -12,7 +12,8 @@ ImagePrefixes = {
    'vray': 'vraySettings.fileNamePrefix',
    'arnold': 'defaultRenderGlobals.imageFilePrefix',
    'renderman': 'defaultRenderGlobals.imageFilePrefix',
-    'redshift': 'defaultRenderGlobals.imageFilePrefix'
+    'redshift': 'defaultRenderGlobals.imageFilePrefix',
+    'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
}

@@ -50,15 +50,17 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
        'vray': 'vraySettings.fileNamePrefix',
        'arnold': 'defaultRenderGlobals.imageFilePrefix',
        'renderman': 'rmanGlobals.imageFileFormat',
-        'redshift': 'defaultRenderGlobals.imageFilePrefix'
+        'redshift': 'defaultRenderGlobals.imageFilePrefix',
+        'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
    }

    ImagePrefixTokens = {
-        'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
+        'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
+        'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
        'vray': 'maya/<Scene>/<Layer>/<Layer>',
-        'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>'  # noqa
+        'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>',
+        'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
    }

    _aov_chars = {

@@ -234,7 +236,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
        # load validation definitions from settings
        validation_settings = (
            instance.context.data["project_settings"]["maya"]["publish"]["ValidateRenderSettings"].get(  # noqa: E501
-                "{}_render_attributes".format(renderer))
+                "{}_render_attributes".format(renderer)) or []
        )

        # go through definitions and test if such node.attribute exists.

@@ -1,4 +1,5 @@
import os
+from pprint import pformat
import re
import six
import platform

@@ -193,7 +194,7 @@ def imprint(node, data, tab=None):
    Examples:
        ```
        import nuke
-        from avalon.nuke import lib
+        from openpype.hosts.nuke.api import lib

        node = nuke.createNode("NoOp")
        data = {

@@ -364,17 +365,15 @@ def fix_data_for_node_create(data):
    return data


-def add_write_node(name, **kwarg):
+def add_write_node_legacy(name, **kwarg):
    """Adding nuke write node

    Arguments:
        name (str): nuke node name
        kwarg (attrs): data for nuke knobs

    Returns:
        node (obj): nuke write node
    """
-    frame_range = kwarg.get("frame_range", None)
    use_range_limit = kwarg.get("use_range_limit", None)

    w = nuke.createNode(
        "Write",

@@ -392,15 +391,44 @@ def add_write_node(name, **kwarg):
            log.debug(e)
            continue

-    if frame_range:
-        if use_range_limit:
-            w["use_limit"].setValue(True)
-            w["first"].setValue(frame_range[0])
-            w["last"].setValue(frame_range[1])
+    if use_range_limit:
+        w["use_limit"].setValue(True)
+        w["first"].setValue(kwarg["frame_range"][0])
+        w["last"].setValue(kwarg["frame_range"][1])

    return w


+def add_write_node(name, file_path, knobs, **kwarg):
+    """Adding nuke write node
+
+    Arguments:
+        name (str): nuke node name
+        file_path (str): path for the write node `file` knob
+        knobs (list): knob settings from imageio presets
+        kwarg (attrs): data for nuke knobs
+
+    Returns:
+        node (obj): nuke write node
+    """
+    use_range_limit = kwarg.get("use_range_limit", None)
+
+    w = nuke.createNode(
+        "Write",
+        "name {}".format(name))
+
+    w["file"].setValue(file_path)
+
+    # finally add knob overrides
+    set_node_knobs_from_settings(w, knobs, **kwarg)
+
+    if use_range_limit:
+        w["use_limit"].setValue(True)
+        w["first"].setValue(kwarg["frame_range"][0])
+        w["last"].setValue(kwarg["frame_range"][1])
+
+    return w
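
# Hypothetical usage of the refactored signature: the knob list comes from
# imageio settings and the frame range only applies with use_range_limit.
#
#   write = add_write_node(
#       "inside_renderMain",
#       "/work/renders/renderMain.%04d.exr",
#       knobs=[{"type": "text", "name": "file_type", "value": "exr"}],
#       use_range_limit=True,
#       frame_range=(1001, 1099),
#   )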


-def read(node):
+def read_avalon_data(node):
    """Return user-defined knobs from given `node`

    Args:

@@ -415,8 +443,6 @@ def read(node):
            return knob_name[len("avalon:"):]
        elif knob_name.startswith("ak:"):
            return knob_name[len("ak:"):]
-        else:
-            return knob_name

    data = dict()

@@ -445,7 +471,8 @@ def read(node):
                (knob_type == 26 and value)
            ):
                key = compat_prefixed(knob_name)
-                data[key] = value
+                if key is not None:
+                    data[key] = value

            if knob_name == first_user_knob:
                break

@@ -501,30 +528,171 @@ def get_nuke_imageio_settings():
    return get_anatomy_settings(Context.project_name)["imageio"]["nuke"]


-def get_created_node_imageio_setting(**kwarg):
+def get_created_node_imageio_setting_legacy(nodeclass, creator, subset):
    ''' Get preset data for dataflow (fileType, compression, bitDepth)
    '''
-    log.debug(kwarg)
-    nodeclass = kwarg.get("nodeclass", None)
-    creator = kwarg.get("creator", None)
-
-    assert any([creator, nodeclass]), nuke.message(
-        "`{}`: Missing mandatory kwargs `host`, `cls`".format(__file__))
-
-    imageio_nodes = get_nuke_imageio_settings()["nodes"]["requiredNodes"]
+    imageio_nodes = get_nuke_imageio_settings()["nodes"]
+    required_nodes = imageio_nodes["requiredNodes"]
+    override_nodes = imageio_nodes["overrideNodes"]

    imageio_node = None
-    for node in imageio_nodes:
+    for node in required_nodes:
        log.info(node)
-        if (nodeclass in node["nukeNodeClass"]) and (
-                creator in node["plugins"]):
+        if (
+            nodeclass in node["nukeNodeClass"]
+            and creator in node["plugins"]
+        ):
            imageio_node = node
            break

    log.debug("__ imageio_node: {}".format(imageio_node))

    # find matching override node
    override_imageio_node = None
    for onode in override_nodes:
        log.info(onode)
        if nodeclass not in onode["nukeNodeClass"]:
            continue

        if creator not in onode["plugins"]:
            continue

        if (
            onode["subsets"]
            and not any(re.search(s, subset) for s in onode["subsets"])
        ):
            continue

        override_imageio_node = onode
        break

    log.debug("__ override_imageio_node: {}".format(override_imageio_node))
    # add overrides to imageio_node
    if override_imageio_node:
        # get all knob names in imageio_node
        knob_names = [k["name"] for k in imageio_node["knobs"]]

        for oknob in override_imageio_node["knobs"]:
            for knob in imageio_node["knobs"]:
                # override matching knob name
                if oknob["name"] == knob["name"]:
                    log.debug(
                        "_ overriding knob: `{}` > `{}`".format(
                            knob, oknob
                        ))
                    if not oknob["value"]:
                        # remove original knob if no value found in oknob
                        imageio_node["knobs"].remove(knob)
                    else:
                        # override knob value with oknob's
                        knob["value"] = oknob["value"]

            # add missing knobs into imageio_node
            if oknob["name"] not in knob_names:
                log.debug(
                    "_ adding knob: `{}`".format(oknob))
                imageio_node["knobs"].append(oknob)
                knob_names.append(oknob["name"])

    log.info("ImageIO node: {}".format(imageio_node))
    return imageio_node


def get_imageio_node_setting(node_class, plugin_name, subset):
    ''' Get preset data for dataflow (fileType, compression, bitDepth)
    '''
    imageio_nodes = get_nuke_imageio_settings()["nodes"]
    required_nodes = imageio_nodes["requiredNodes"]

    imageio_node = None
    for node in required_nodes:
        log.info(node)
        if (
            node_class in node["nukeNodeClass"]
            and plugin_name in node["plugins"]
        ):
            imageio_node = node
            break

    log.debug("__ imageio_node: {}".format(imageio_node))

    if not imageio_node:
        return

    # find overrides and update knobs with them
    get_imageio_node_override_setting(
        node_class,
        plugin_name,
        subset,
        imageio_node["knobs"]
    )

    log.info("ImageIO node: {}".format(imageio_node))
    return imageio_node


def get_imageio_node_override_setting(
    node_class, plugin_name, subset, knobs_settings
):
    ''' Get imageio node overrides from settings
    '''
    imageio_nodes = get_nuke_imageio_settings()["nodes"]
    override_nodes = imageio_nodes["overrideNodes"]

    # find matching override node
    override_imageio_node = None
    for onode in override_nodes:
        log.info(onode)
        if node_class not in onode["nukeNodeClass"]:
            continue

        if plugin_name not in onode["plugins"]:
            continue

        if (
            onode["subsets"]
            and not any(re.search(s, subset) for s in onode["subsets"])
        ):
            continue

        override_imageio_node = onode
        break

    log.debug("__ override_imageio_node: {}".format(override_imageio_node))
    # add overrides to imageio_node
    if override_imageio_node:
        # get all knob names in imageio_node
        knob_names = [k["name"] for k in knobs_settings]

        for oknob in override_imageio_node["knobs"]:
            for knob in knobs_settings:
                # override matching knob name
                if oknob["name"] == knob["name"]:
                    log.debug(
                        "_ overriding knob: `{}` > `{}`".format(
                            knob, oknob
                        ))
                    if not oknob["value"]:
                        # remove original knob if no value found in oknob
                        knobs_settings.remove(knob)
                    else:
                        # override knob value with oknob's
                        knob["value"] = oknob["value"]

            # add missing knobs into imageio_node
            if oknob["name"] not in knob_names:
                log.debug(
                    "_ adding knob: `{}`".format(oknob))
                knobs_settings.append(oknob)
                knob_names.append(oknob["name"])

    return knobs_settings
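
# Illustrative merge semantics of the override pass above (knob payloads
# are made up): a matching override with a value replaces the knob, an
# override with an empty value removes it, and unknown names get appended.
#
#   knobs_settings = [{"name": "file_type", "value": "exr"},
#                     {"name": "channels", "value": "rgb"}]
#   overrides      = [{"name": "channels", "value": ""},
#                     {"name": "datatype", "value": "16 bit half"}]
#   result         = [{"name": "file_type", "value": "exr"},
#                     {"name": "datatype", "value": "16 bit half"}]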


def get_imageio_input_colorspace(filename):
    ''' Get input file colorspace based on regex in settings.
    '''

@@ -542,7 +710,7 @@ def get_imageio_input_colorspace(filename):
def on_script_load():
    ''' Callback for ffmpeg support
    '''
-    if nuke.env['LINUX']:
+    if nuke.env["LINUX"]:
        nuke.tcl('load ffmpegReader')
        nuke.tcl('load ffmpegWriter')
    else:

@@ -567,7 +735,7 @@ def check_inventory_versions():
        if container:
            node = nuke.toNode(container["objectName"])
-            avalon_knob_data = read(node)
+            avalon_knob_data = read_avalon_data(node)

            # get representation from io
            representation = legacy_io.find_one({

@@ -593,7 +761,7 @@ def check_inventory_versions():
            versions = legacy_io.find({
                "type": "version",
                "parent": version["parent"]
-            }).distinct('name')
+            }).distinct("name")

            max_version = max(versions)

@@ -623,20 +791,20 @@ def writes_version_sync():
        if _NODE_TAB_NAME not in each.knobs():
            continue

-        avalon_knob_data = read(each)
+        avalon_knob_data = read_avalon_data(each)

        try:
-            if avalon_knob_data['families'] not in ["render"]:
-                log.debug(avalon_knob_data['families'])
+            if avalon_knob_data["families"] not in ["render"]:
+                log.debug(avalon_knob_data["families"])
                continue

-            node_file = each['file'].value()
+            node_file = each["file"].value()

            node_version = "v" + get_version_from_path(node_file)
            log.debug("node_version: {}".format(node_version))

            node_new_file = node_file.replace(node_version, new_version)
-            each['file'].setValue(node_new_file)
+            each["file"].setValue(node_new_file)
            if not os.path.isdir(os.path.dirname(node_new_file)):
                log.warning("Path does not exist! I am creating it.")
                os.makedirs(os.path.dirname(node_new_file))
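
        # Example (made-up path): with get_version_from_path() returning
        # "001", a write node file knob of
        #   .../sh010_compMain_v001/compMain.%04d.exr
        # gets retargeted to
        #   .../sh010_compMain_v002/compMain.%04d.exr
        # when the workfile is saved as the next version.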

@@ -665,21 +833,21 @@ def check_subsetname_exists(nodes, subset_name):
        bool: True or False
    """
    return next((True for n in nodes
-                 if subset_name in read(n).get("subset", "")),
+                 if subset_name in read_avalon_data(n).get("subset", "")),
                False)


def get_render_path(node):
    ''' Generate render path from presets regarding avalon knob data
    '''
-    data = {'avalon': read(node)}
-    data_preset = {
-        "nodeclass": data['avalon']['family'],
-        "families": [data['avalon']['families']],
-        "creator": data['avalon']['creator']
-    }
+    avalon_knob_data = read_avalon_data(node)
+    data = {'avalon': avalon_knob_data}

-    nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)
+    nuke_imageio_writes = get_imageio_node_setting(
+        node_class=avalon_knob_data["family"],
+        plugin_name=avalon_knob_data["creator"],
+        subset=avalon_knob_data["subset"]
+    )
    host_name = os.environ.get("AVALON_APP")

    data.update({

@@ -749,7 +917,7 @@ def format_anatomy(data):
def script_name():
    ''' Returns nuke script path
    '''
-    return nuke.root().knob('name').value()
+    return nuke.root().knob("name").value()


def add_button_write_to_read(node):

@@ -771,8 +939,282 @@ def add_button_clear_rendered(node, path):
    node.addKnob(knob)


-def create_write_node(name, data, input=None, prenodes=None,
-                      review=True, linked_knobs=None, farm=True):
+def create_prenodes(
+    prev_node,
+    nodes_setting,
+    plugin_name=None,
+    subset=None,
+    **kwargs
+):
    last_node = None
    for_dependency = {}
    for name, node in nodes_setting.items():
        # get attributes
        nodeclass = node["nodeclass"]
        knobs = node["knobs"]

        # create node
        now_node = nuke.createNode(
            nodeclass, "name {}".format(name))
        now_node.hideControlPanel()

        # add for dependency linking
        for_dependency[name] = {
            "node": now_node,
            "dependent": node["dependent"]
        }

        if all([plugin_name, subset]):
            # find imageio overrides
            get_imageio_node_override_setting(
                now_node.Class(),
                plugin_name,
                subset,
                knobs
            )

        # add data to knobs
        set_node_knobs_from_settings(now_node, knobs, **kwargs)

        # switch actual node to previous
        last_node = now_node

    for _node_name, node_prop in for_dependency.items():
        if not node_prop["dependent"]:
            node_prop["node"].setInput(
                0, prev_node)
        elif node_prop["dependent"] in for_dependency:
            _prev_node = for_dependency[node_prop["dependent"]]["node"]
            node_prop["node"].setInput(
                0, _prev_node)
        else:
            log.warning("Dependency has wrong name of node: {}".format(
                node_prop
            ))

    return last_node


def create_write_node(
    name,
    data,
    input=None,
    prenodes=None,
    review=True,
    farm=True,
    linked_knobs=None,
    **kwargs
):
    ''' Creating write node which is group node

    Arguments:
        name (str): name of node
        data (dict): creator write instance data
        input (node)[optional]: selected node to connect to
        prenodes (dict)[optional]:
            nodes to be created before write with dependency
        review (bool)[optional]: adding review knob
        farm (bool)[optional]: rendering workflow target
        kwargs (dict)[optional]: additional key arguments for formatting

    Example:
        prenodes = {
            "nodeName": {
                "nodeclass": "Reformat",
                "dependent": [
                    following_node_01,
                    ...
                ],
                "knobs": [
                    {
                        "type": "text",
                        "name": "knobname",
                        "value": "knob value"
                    },
                    ...
                ]
            },
            ...
        }

    Return:
        node (obj): group node with avalon data as knobs
    '''
    prenodes = prenodes or {}

    # group node knob overrides
    knob_overrides = data.pop("knobs", [])

    # filtering variables
    plugin_name = data["creator"]
    subset = data["subset"]

    # get knob settings for write node
    imageio_writes = get_imageio_node_setting(
        node_class=data["nodeclass"],
        plugin_name=plugin_name,
        subset=subset
    )

    for knob in imageio_writes["knobs"]:
        if knob["name"] == "file_type":
            representation = knob["value"]

    host_name = os.environ.get("AVALON_APP")
    try:
        data.update({
            "app": host_name,
            "imageio_writes": imageio_writes,
            "representation": representation,
        })
        anatomy_filled = format_anatomy(data)

    except Exception as e:
        msg = "problem with resolving anatomy template: {}".format(e)
        log.error(msg)
        nuke.message(msg)

    # build file path to workfiles
    fdir = str(anatomy_filled["work"]["folder"]).replace("\\", "/")
    fpath = data["fpath_template"].format(
        work=fdir,
        version=data["version"],
        subset=data["subset"],
        frame=data["frame"],
        ext=representation
    )

    # create directory
    if not os.path.isdir(os.path.dirname(fpath)):
        log.warning("Path does not exist! I am creating it.")
        os.makedirs(os.path.dirname(fpath))

    GN = nuke.createNode("Group", "name {}".format(name))

    prev_node = None
    with GN:
        if input:
            input_name = str(input.name()).replace(" ", "")
            # if connected input node was defined
            prev_node = nuke.createNode(
                "Input", "name {}".format(input_name))
        else:
            # generic input node connected to nothing
            prev_node = nuke.createNode(
                "Input", "name {}".format("rgba"))
        prev_node.hideControlPanel()

        # creating pre-write nodes `prenodes`
        last_prenode = create_prenodes(
            prev_node,
            prenodes,
            plugin_name,
            subset,
            **kwargs
        )
        if last_prenode:
            prev_node = last_prenode

        # creating write node
        write_node = now_node = add_write_node(
            "inside_{}".format(name),
            fpath,
            imageio_writes["knobs"],
            **data
        )
        write_node.hideControlPanel()
        # connect to previous node
        now_node.setInput(0, prev_node)

        # switch actual node to previous
        prev_node = now_node

        now_node = nuke.createNode("Output", "name Output1")
        now_node.hideControlPanel()

        # connect to previous node
        now_node.setInput(0, prev_node)

    # imprinting group node
    set_avalon_knob_data(GN, data["avalon"])
    add_publish_knob(GN)
    add_rendering_knobs(GN, farm)

    if review:
        add_review_knob(GN)

    # add divider
    GN.addKnob(nuke.Text_Knob('', 'Rendering'))

    # Add linked knobs.
    linked_knob_names = []

    # add input linked knobs and create group only if any input
    if linked_knobs:
        linked_knob_names.append("_grp-start_")
        linked_knob_names.extend(linked_knobs)
        linked_knob_names.append("_grp-end_")

    linked_knob_names.append("Render")

    for _k_name in linked_knob_names:
        if "_grp-start_" in _k_name:
            knob = nuke.Tab_Knob(
                "rnd_attr", "Rendering attributes", nuke.TABBEGINCLOSEDGROUP)
            GN.addKnob(knob)
        elif "_grp-end_" in _k_name:
            knob = nuke.Tab_Knob(
                "rnd_attr_end", "Rendering attributes", nuke.TABENDGROUP)
            GN.addKnob(knob)
        else:
            if "___" in _k_name:
                # add divider
                GN.addKnob(nuke.Text_Knob(""))
            else:
                # add linked knob by _k_name
                link = nuke.Link_Knob("")
                link.makeLink(write_node.name(), _k_name)
                link.setName(_k_name)

                # make render
                if "Render" in _k_name:
                    link.setLabel("Render Local")
                    link.setFlag(0x1000)
                GN.addKnob(link)

    # adding write-to-read button
    add_button_write_to_read(GN)

    # adding clear-rendered button
    add_button_clear_rendered(GN, os.path.dirname(fpath))

    # Deadline tab.
    add_deadline_tab(GN)

    # open our Tab as default
    GN[_NODE_TAB_NAME].setFlag(0)

    # set tile color
    tile_color = next(
        iter(
            k["value"] for k in imageio_writes["knobs"]
            if "tile_color" in k["name"]
        ), [255, 0, 0, 255]
    )
    GN["tile_color"].setValue(
        color_gui_to_int(tile_color))

    # finally add knob overrides
    set_node_knobs_from_settings(GN, knob_overrides, **kwargs)

    return GN
|
||||
|
||||
|
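# --- Editor's illustrative sketch (not part of the commit) -----------------
# Rough usage of `create_write_node` as a creator plugin would call it. The
# `instance_data` values below are hypothetical; real values come from the
# creator classes shown further down in this diff (AbstractWriteRender).
#
# instance_data = {
#     "nodeclass": "Write",
#     "creator": "CreateWriteRender",
#     "subset": "renderMain",
#     "avalon": {...},                  # avalon knob data from the creator
#     "fpath_template": "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}",
#     "version": 1,
#     "frame": "####",
# }
# group_node = create_write_node(
#     "renderMain",
#     instance_data,
#     input=nuke.selectedNode(),
#     prenodes={},                      # see the docstring example above
#     review=True,
#     farm=True
# )
# ----------------------------------------------------------------------------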


def create_write_node_legacy(
    name, data, input=None, prenodes=None,
    review=True, linked_knobs=None, farm=True
):
    ''' Creating write node which is group node

    Arguments:
@@ -804,8 +1246,14 @@ def create_write_node(name, data, input=None, prenodes=None,
     Return:
         node (obj): group node with avalon data as Knobs
     '''
+    knob_overrides = data.get("knobs", [])
+    nodeclass = data["nodeclass"]
+    creator = data["creator"]
+    subset = data["subset"]

-    imageio_writes = get_created_node_imageio_setting(**data)
+    imageio_writes = get_created_node_imageio_setting_legacy(
+        nodeclass, creator, subset
+    )
     for knob in imageio_writes["knobs"]:
         if knob["name"] == "file_type":
             representation = knob["value"]
@@ -844,7 +1292,7 @@ def create_write_node(name, data, input=None, prenodes=None,
     # adding dataflow template
     log.debug("imageio_writes: `{}`".format(imageio_writes))
     for knob in imageio_writes["knobs"]:
-        _data.update({knob["name"]: knob["value"]})
+        _data[knob["name"]] = knob["value"]

     _data = fix_data_for_node_create(_data)

@@ -927,7 +1375,8 @@ def create_write_node(name, data, input=None, prenodes=None,
         prev_node = now_node

         # creating write node
-        write_node = now_node = add_write_node(
+        write_node = now_node = add_write_node_legacy(
             "inside_{}".format(name),
             **_data
         )
@@ -1007,9 +1456,106 @@ def create_write_node(name, data, input=None, prenodes=None,
     tile_color = _data.get("tile_color", "0xff0000ff")
     GN["tile_color"].setValue(tile_color)

+    # override knob values from settings
+    for knob in knob_overrides:
+        knob_type = knob["type"]
+        knob_name = knob["name"]
+        knob_value = knob["value"]
+        if knob_name not in GN.knobs():
+            continue
+        if not knob_value:
+            continue
+
+        # set knob types correctly
+        if knob_type == "string":
+            knob_value = str(knob_value)
+        if knob_type == "number":
+            knob_value = int(knob_value)
+        if knob_type == "decimal_number":
+            knob_value = float(knob_value)
+        if knob_type == "bool":
+            knob_value = bool(knob_value)
+        if knob_type in ["2d_vector", "3d_vector"]:
+            knob_value = list(knob_value)
+
+        GN[knob_name].setValue(knob_value)
+
     return GN


+def set_node_knobs_from_settings(node, knob_settings, **kwargs):
+    """ Overriding knob values from settings
+
+    Using `schema_nuke_knob_inputs` for knob type definitions.
+
+    Args:
+        node (nuke.Node): nuke node
+        knob_settings (list): list of dict. Keys are `type`, `name`, `value`
+        kwargs (dict)[optional]: keys for formatable knob settings
+    """
+    for knob in knob_settings:
+        log.debug("__ knob: {}".format(pformat(knob)))
+        knob_type = knob["type"]
+        knob_name = knob["name"]
+
+        if knob_name not in node.knobs():
+            continue
+
+        # first deal with formatable knob settings
+        if knob_type == "formatable":
+            template = knob["template"]
+            to_type = knob["to_type"]
+            try:
+                _knob_value = template.format(
+                    **kwargs
+                )
+                log.debug("__ knob_value0: {}".format(_knob_value))
+            except KeyError as msg:
+                log.warning("__ msg: {}".format(msg))
+                raise KeyError(msg)
+
+            # convert value to correct type
+            if to_type == "2d_vector":
+                knob_value = _knob_value.split(";")
+            else:
+                knob_value = _knob_value
+
+            knob_type = to_type
+
+        else:
+            knob_value = knob["value"]
+
+        if not knob_value:
+            continue
+
+        # first convert string types to string
+        # just to ditch unicode
+        if isinstance(knob_value, six.text_type):
+            knob_value = str(knob_value)
+
+        # set knob types correctly
+        if knob_type == "bool":
+            knob_value = bool(knob_value)
+        elif knob_type == "decimal_number":
+            knob_value = float(knob_value)
+        elif knob_type == "number":
+            knob_value = int(knob_value)
+        elif knob_type == "text":
+            knob_value = knob_value
+        elif knob_type == "color_gui":
+            knob_value = color_gui_to_int(knob_value)
+        elif knob_type in ["2d_vector", "3d_vector", "color"]:
+            knob_value = [float(v) for v in knob_value]
+
+        node[knob_name].setValue(knob_value)
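# --- Editor's illustrative sketch (not part of the commit) -----------------
# Example `knob_settings` list for `set_node_knobs_from_settings`, including
# a "formatable" knob resolved from **kwargs. The concrete values below are
# hypothetical; the "formatable" schema mirrors the CreateWriteStill prenodes
# shown later in this diff.
#
# knob_settings = [
#     {"type": "text", "name": "file_type", "value": "exr"},
#     {"type": "bool", "name": "create_directories", "value": True},
#     {"type": "color_gui", "name": "tile_color", "value": [186, 35, 35, 255]},
#     {
#         "type": "formatable",
#         "name": "first_frame",
#         "template": "{frame}",
#         "to_type": "number"
#     },
# ]
# set_node_knobs_from_settings(write_node, knob_settings, frame=1001)
# ----------------------------------------------------------------------------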
+
+
+def color_gui_to_int(color_gui):
+    hex_value = (
+        "0x{0:0>2x}{1:0>2x}{2:0>2x}{3:0>2x}").format(*color_gui)
+    return int(hex_value, 16)
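# Editor's note -- worked example for `color_gui_to_int`:
# color_gui_to_int([255, 0, 0, 255]) builds the string "0xff0000ff" and
# returns int("0xff0000ff", 16) == 4278190335, the packed RGBA integer that
# Nuke expects for knobs such as "tile_color".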


 def add_rendering_knobs(node, farm=True):
     ''' Adds additional rendering knobs to given node

@@ -1193,15 +1739,19 @@ class WorkfileSettings(object):

         erased_viewers = []
         for v in nuke.allNodes(filter="Viewer"):
-            v['viewerProcess'].setValue(str(viewer_dict["viewerProcess"]))
+            # set viewerProcess to preset from settings
+            v["viewerProcess"].setValue(
+                str(viewer_dict["viewerProcess"])
+            )

             if str(viewer_dict["viewerProcess"]) \
-                    not in v['viewerProcess'].value():
+                    not in v["viewerProcess"].value():
                 copy_inputs = v.dependencies()
                 copy_knobs = {k: v[k].value() for k in v.knobs()
                               if k not in filter_knobs}

                 # delete viewer with wrong settings
-                erased_viewers.append(v['name'].value())
+                erased_viewers.append(v["name"].value())
                 nuke.delete(v)

                 # create new viewer
@@ -1217,7 +1767,7 @@ class WorkfileSettings(object):
             nv[k].setValue(v)

         # set viewerProcess
-        nv['viewerProcess'].setValue(str(viewer_dict["viewerProcess"]))
+        nv["viewerProcess"].setValue(str(viewer_dict["viewerProcess"]))

         if erased_viewers:
             log.warning(
@@ -1293,12 +1843,12 @@ class WorkfileSettings(object):
         for node in nuke.allNodes(filter="Group"):

             # get data from avalon knob
-            avalon_knob_data = read(node)
+            avalon_knob_data = read_avalon_data(node)

-            if not avalon_knob_data:
+            if avalon_knob_data.get("id") != "pyblish.avalon.instance":
                 continue

-            if avalon_knob_data["id"] != "pyblish.avalon.instance":
+            if "creator" not in avalon_knob_data:
                 continue

             # establish families
@@ -1306,14 +1856,11 @@ class WorkfileSettings(object):
             if avalon_knob_data.get("families"):
                 families.append(avalon_knob_data.get("families"))

-            data_preset = {
-                "nodeclass": avalon_knob_data["family"],
-                "families": families,
-                "creator": avalon_knob_data['creator']
-            }
-
-            nuke_imageio_writes = get_created_node_imageio_setting(
-                **data_preset)
+            nuke_imageio_writes = get_imageio_node_setting(
+                node_class=avalon_knob_data["family"],
+                plugin_name=avalon_knob_data["creator"],
+                subset=avalon_knob_data["subset"]
+            )

             log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes))

@@ -1342,7 +1889,6 @@ class WorkfileSettings(object):

             write_node[knob["name"]].setValue(value)

-
     def set_reads_colorspace(self, read_clrs_inputs):
         """ Setting colorspace to Read nodes

@@ -1368,17 +1914,16 @@ class WorkfileSettings(object):
             current = n["colorspace"].value()
             future = str(preset_clrsp)
             if current != future:
-                changes.update({
-                    n.name(): {
-                        "from": current,
-                        "to": future
-                    }
-                })
+                changes[n.name()] = {
+                    "from": current,
+                    "to": future
+                }

         log.debug(changes)
         if changes:
             msg = "Read nodes are not set to correct colospace:\n\n"
             for nname, knobs in changes.items():
-                msg += str(
+                msg += (
                     "  - node: '{0}' is now '{1}' but should be '{2}'\n"
                 ).format(nname, knobs["from"], knobs["to"])

@@ -1610,17 +2155,17 @@ def get_hierarchical_attr(entity, attr, default=None):
         if not value:
             break

-    if value or entity['type'].lower() == 'project':
+    if value or entity["type"].lower() == "project":
         return value

-    parent_id = entity['parent']
+    parent_id = entity["parent"]
     if (
-        entity['type'].lower() == 'asset'
-        and entity.get('data', {}).get('visualParent')
+        entity["type"].lower() == "asset"
+        and entity.get("data", {}).get("visualParent")
     ):
-        parent_id = entity['data']['visualParent']
+        parent_id = entity["data"]["visualParent"]

-    parent = legacy_io.find_one({'_id': parent_id})
+    parent = legacy_io.find_one({"_id": parent_id})

     return get_hierarchical_attr(parent, attr)

@@ -1630,26 +2175,24 @@ def get_write_node_template_attr(node):

     '''
     # get avalon data from node
-    data = dict()
-    data['avalon'] = read(node)
-    data_preset = {
-        "nodeclass": data['avalon']['family'],
-        "families": [data['avalon']['families']],
-        "creator": data['avalon']['creator']
-    }
-
+    avalon_knob_data = read_avalon_data(node)
     # get template data
-    nuke_imageio_writes = get_created_node_imageio_setting(**data_preset)
+    nuke_imageio_writes = get_imageio_node_setting(
+        node_class=avalon_knob_data["family"],
+        plugin_name=avalon_knob_data["creator"],
+        subset=avalon_knob_data["subset"]
+    )

     # collecting correct data
     correct_data = OrderedDict({
         "file": get_render_path(node)
     })

-    # adding imageio template
-    {correct_data.update({k: v})
-     for k, v in nuke_imageio_writes.items()
-     if k not in ["_id", "_previous"]}
+    # adding imageio knob presets
+    for k, v in nuke_imageio_writes.items():
+        if k in ["_id", "_previous"]:
+            continue
+        correct_data[k] = v

     # fix badly encoded data
     return fix_data_for_node_create(correct_data)
@@ -1765,8 +2308,8 @@ def maintained_selection():

     Example:
         >>> with maintained_selection():
-        ...     node['selected'].setValue(True)
-        >>> print(node['selected'].value())
+        ...     node["selected"].setValue(True)
+        >>> print(node["selected"].value())
         False
     """
     previous_selection = nuke.selectedNodes()
@@ -1774,11 +2317,11 @@ def maintained_selection():
         yield
     finally:
         # unselect all selection in case there is some
-        current_seletion = nuke.selectedNodes()
-        [n['selected'].setValue(False) for n in current_seletion]
+        reset_selection()

         # and select all previously selected nodes
         if previous_selection:
-            [n['selected'].setValue(True) for n in previous_selection]
+            select_nodes(previous_selection)


 def reset_selection():
@@ -32,7 +32,7 @@ from .lib import (
     launch_workfiles_app,
     check_inventory_versions,
     set_avalon_knob_data,
-    read,
+    read_avalon_data,
     Context
 )

@@ -359,7 +359,7 @@ def parse_container(node):
         dict: The container schema data for this container node.

     """
-    data = read(node)
+    data = read_avalon_data(node)

     # (TODO) Remove key validation when `ls` has re-implemented.
     #
@@ -17,7 +17,9 @@ from .lib import (
     reset_selection,
     maintained_selection,
     set_avalon_knob_data,
-    add_publish_knob
+    add_publish_knob,
+    get_nuke_imageio_settings,
+    set_node_knobs_from_settings
 )

@@ -27,9 +29,6 @@ class OpenPypeCreator(LegacyCreator):

     def __init__(self, *args, **kwargs):
         super(OpenPypeCreator, self).__init__(*args, **kwargs)
-        self.presets = get_current_project_settings()["nuke"]["create"].get(
-            self.__class__.__name__, {}
-        )
         if check_subsetname_exists(
                 nuke.allNodes(),
                 self.data["subset"]):
@@ -260,8 +259,6 @@ class ExporterReview(object):
         return nuke_imageio["viewer"]["viewerProcess"]


-
-
 class ExporterReviewLut(ExporterReview):
     """
     Generator object for review lut from Nuke
@@ -501,16 +498,7 @@ class ExporterReviewMov(ExporterReview):
             add_tags.append("reformated")

             rf_node = nuke.createNode("Reformat")
-            for kn_conf in reformat_node_config:
-                _type = kn_conf["type"]
-                k_name = str(kn_conf["name"])
-                k_value = kn_conf["value"]
-
-                # to remove unicode as nuke doesn't like it
-                if _type == "string":
-                    k_value = str(kn_conf["value"])
-
-                rf_node[k_name].setValue(k_value)
+            set_node_knobs_from_settings(rf_node, reformat_node_config)

             # connect
             rf_node.setInput(0, self.previous_node)
@@ -607,6 +595,8 @@ class AbstractWriteRender(OpenPypeCreator):
     family = "render"
     icon = "sign-out"
     defaults = ["Main", "Mask"]
+    knobs = []
+    prenodes = {}

     def __init__(self, *args, **kwargs):
         super(AbstractWriteRender, self).__init__(*args, **kwargs)
@@ -673,7 +663,9 @@ class AbstractWriteRender(OpenPypeCreator):
         write_data = {
             "nodeclass": self.n_class,
             "families": [self.family],
-            "avalon": self.data
+            "avalon": self.data,
+            "subset": self.data["subset"],
+            "knobs": self.knobs
         }

         # add creator data
@@ -681,21 +673,12 @@ class AbstractWriteRender(OpenPypeCreator):
         self.data.update(creator_data)
         write_data.update(creator_data)

-        if self.presets.get('fpath_template'):
-            self.log.info("Adding template path from preset")
-            write_data.update(
-                {"fpath_template": self.presets["fpath_template"]}
-            )
-        else:
-            self.log.info("Adding template path from plugin")
-            write_data.update({
-                "fpath_template":
-                    ("{work}/" + self.family + "s/nuke/{subset}"
-                     "/{subset}.{frame}.{ext}")})
-
-        write_node = self._create_write_node(selected_node,
-                                             inputs, outputs,
-                                             write_data)
+        write_node = self._create_write_node(
+            selected_node,
+            inputs,
+            outputs,
+            write_data
+        )

         # relinking to collected connections
         for i, input in enumerate(inputs):
@@ -710,6 +693,28 @@ class AbstractWriteRender(OpenPypeCreator):

         return write_node

+    def is_legacy(self):
+        """Check if it needs to run legacy code
+
+        In case where `type` key is missing in a single
+        knob it is legacy project anatomy.
+
+        Returns:
+            bool: True if legacy
+        """
+        imageio_nodes = get_nuke_imageio_settings()["nodes"]
+        node = imageio_nodes["requiredNodes"][0]
+        if "type" not in node["knobs"][0]:
+            # if type is not yet in project anatomy
+            return True
+        elif next(iter(
+            _k for _k in node["knobs"]
+            if _k.get("type") == "__legacy__"
+        ), None):
+            # in case someone re-saved anatomy
+            # with old configuration
+            return True
+
     @abstractmethod
     def _create_write_node(self, selected_node, inputs, outputs, write_data):
         """Family dependent implementation of Write node creation

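# --- Editor's illustrative sketch (not part of the commit) -----------------
# The two imageio knob shapes that `AbstractWriteRender.is_legacy()`
# distinguishes (hypothetical values):
#
# legacy_knob = {"name": "file_type", "value": "exr"}        # no "type" key
# new_knob = {"type": "text", "name": "file_type", "value": "exr"}
# resaved_legacy_knob = {"type": "__legacy__", "name": "file_type"}
# ----------------------------------------------------------------------------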
@@ -1,7 +1,8 @@
 import nuke

 from openpype.hosts.nuke.api import plugin
-from openpype.hosts.nuke.api.lib import create_write_node
+from openpype.hosts.nuke.api.lib import (
+    create_write_node, create_write_node_legacy)


 class CreateWritePrerender(plugin.AbstractWriteRender):
@@ -12,22 +13,41 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
     n_class = "Write"
     family = "prerender"
     icon = "sign-out"
+
+    # settings
+    fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
     defaults = ["Key01", "Bg01", "Fg01", "Branch01", "Part01"]
+    reviewable = False
+    use_range_limit = True

     def __init__(self, *args, **kwargs):
         super(CreateWritePrerender, self).__init__(*args, **kwargs)

     def _create_write_node(self, selected_node, inputs, outputs, write_data):
-        reviewable = self.presets.get("reviewable")
-        write_node = create_write_node(
-            self.data["subset"],
-            write_data,
-            input=selected_node,
-            prenodes=[],
-            review=reviewable,
-            linked_knobs=["channels", "___", "first", "last", "use_limit"])
+        # add fpath_template
+        write_data["fpath_template"] = self.fpath_template
+        write_data["use_range_limit"] = self.use_range_limit
+        write_data["frame_range"] = (
+            nuke.root()["first_frame"].value(),
+            nuke.root()["last_frame"].value()
+        )

-        return write_node
+        if not self.is_legacy():
+            return create_write_node(
+                self.data["subset"],
+                write_data,
+                input=selected_node,
+                review=self.reviewable,
+                linked_knobs=["channels", "___", "first", "last", "use_limit"]
+            )
+        else:
+            return create_write_node_legacy(
+                self.data["subset"],
+                write_data,
+                input=selected_node,
+                review=self.reviewable,
+                linked_knobs=["channels", "___", "first", "last", "use_limit"]
+            )

     def _modify_write_node(self, write_node):
         # open group node
@@ -38,7 +58,7 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
                 w_node = n
         write_node.end()

-        if self.presets.get("use_range_limit"):
+        if self.use_range_limit:
             w_node["use_limit"].setValue(True)
             w_node["first"].setValue(nuke.root()["first_frame"].value())
             w_node["last"].setValue(nuke.root()["last_frame"].value())
@@ -1,7 +1,8 @@
 import nuke

 from openpype.hosts.nuke.api import plugin
-from openpype.hosts.nuke.api.lib import create_write_node
+from openpype.hosts.nuke.api.lib import (
+    create_write_node, create_write_node_legacy)


 class CreateWriteRender(plugin.AbstractWriteRender):
@@ -12,12 +13,36 @@ class CreateWriteRender(plugin.AbstractWriteRender):
     n_class = "Write"
     family = "render"
     icon = "sign-out"
+
+    # settings
+    fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
     defaults = ["Main", "Mask"]
+    prenodes = {
+        "Reformat01": {
+            "nodeclass": "Reformat",
+            "dependent": None,
+            "knobs": [
+                {
+                    "type": "text",
+                    "name": "resize",
+                    "value": "none"
+                },
+                {
+                    "type": "bool",
+                    "name": "black_outside",
+                    "value": True
+                }
+            ]
+        }
+    }

     def __init__(self, *args, **kwargs):
         super(CreateWriteRender, self).__init__(*args, **kwargs)

     def _create_write_node(self, selected_node, inputs, outputs, write_data):
+        # add fpath_template
+        write_data["fpath_template"] = self.fpath_template

         # add reformat node to cut off all outside of format bounding box
         # get width and height
         try:
@@ -26,25 +51,36 @@ class CreateWriteRender(plugin.AbstractWriteRender):
             actual_format = nuke.root().knob('format').value()
             width, height = (actual_format.width(), actual_format.height())

-        _prenodes = [
-            {
-                "name": "Reformat01",
-                "class": "Reformat",
-                "knobs": [
-                    ("resize", 0),
-                    ("black_outside", 1),
-                ],
-                "dependent": None
-            }
-        ]
+        if not self.is_legacy():
+            return create_write_node(
+                self.data["subset"],
+                write_data,
+                input=selected_node,
+                prenodes=self.prenodes,
+                **{
+                    "width": width,
+                    "height": height
+                }
+            )
+        else:
+            _prenodes = [
+                {
+                    "name": "Reformat01",
+                    "class": "Reformat",
+                    "knobs": [
+                        ("resize", 0),
+                        ("black_outside", 1),
+                    ],
+                    "dependent": None
+                }
+            ]

-        write_node = create_write_node(
-            self.data["subset"],
-            write_data,
-            input=selected_node,
-            prenodes=_prenodes)
-
-        return write_node
+            return create_write_node_legacy(
+                self.data["subset"],
+                write_data,
+                input=selected_node,
+                prenodes=_prenodes
+            )

     def _modify_write_node(self, write_node):
         return write_node
@@ -1,7 +1,8 @@
 import nuke

 from openpype.hosts.nuke.api import plugin
-from openpype.hosts.nuke.api.lib import create_write_node
+from openpype.hosts.nuke.api.lib import (
+    create_write_node, create_write_node_legacy)


 class CreateWriteStill(plugin.AbstractWriteRender):
@@ -12,42 +13,69 @@ class CreateWriteStill(plugin.AbstractWriteRender):
     n_class = "Write"
     family = "still"
     icon = "image"
+
+    # settings
+    fpath_template = "{work}/render/nuke/{subset}/{subset}.{ext}"
     defaults = [
-        "ImageFrame{:0>4}".format(nuke.frame()),
-        "MPFrame{:0>4}".format(nuke.frame()),
-        "LayoutFrame{:0>4}".format(nuke.frame())
+        "ImageFrame",
+        "MPFrame",
+        "LayoutFrame"
     ]
+    prenodes = {
+        "FrameHold01": {
+            "nodeclass": "FrameHold",
+            "dependent": None,
+            "knobs": [
+                {
+                    "type": "formatable",
+                    "name": "first_frame",
+                    "template": "{frame}",
+                    "to_type": "number"
+                }
+            ]
+        }
+    }

     def __init__(self, *args, **kwargs):
         super(CreateWriteStill, self).__init__(*args, **kwargs)

     def _create_write_node(self, selected_node, inputs, outputs, write_data):
-        # explicitly reset template to 'renders', not same as other 2 writes
-        write_data.update({
-            "fpath_template": (
-                "{work}/renders/nuke/{subset}/{subset}.{ext}")})
+        # add fpath_template
+        write_data["fpath_template"] = self.fpath_template

-        _prenodes = [
-            {
-                "name": "FrameHold01",
-                "class": "FrameHold",
-                "knobs": [
-                    ("first_frame", nuke.frame())
-                ],
-                "dependent": None
-            }
-        ]
-
-        write_node = create_write_node(
-            self.name,
-            write_data,
-            input=selected_node,
-            review=False,
-            prenodes=_prenodes,
-            farm=False,
-            linked_knobs=["channels", "___", "first", "last", "use_limit"])
-
-        return write_node
+        if not self.is_legacy():
+            return create_write_node(
+                self.name,
+                write_data,
+                input=selected_node,
+                review=False,
+                prenodes=self.prenodes,
+                farm=False,
+                linked_knobs=["channels", "___", "first", "last", "use_limit"],
+                **{
+                    "frame": nuke.frame()
+                }
+            )
+        else:
+            _prenodes = [
+                {
+                    "name": "FrameHold01",
+                    "class": "FrameHold",
+                    "knobs": [
+                        ("first_frame", nuke.frame())
+                    ],
+                    "dependent": None
+                }
+            ]
+            return create_write_node_legacy(
+                self.name,
+                write_data,
+                input=selected_node,
+                review=False,
+                prenodes=_prenodes,
+                farm=False,
+                linked_knobs=["channels", "___", "first", "last", "use_limit"]
+            )

     def _modify_write_node(self, write_node):
         write_node.begin()
@@ -9,7 +9,7 @@ log = Logger().get_logger(__name__)


 class SetFrameRangeLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range excluding pre- and post-handles"""

     families = ["animation",
                 "camera",
@@ -43,7 +43,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):


 class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range including pre- and post-handles"""

     families = ["animation",
                 "camera",
@@ -15,13 +15,13 @@ from openpype.hosts.nuke.api import (

 class AlembicModelLoader(load.LoaderPlugin):
     """
-    This will load alembic model into script.
+    This will load alembic model or anim into script.
     """

-    families = ["model"]
+    families = ["model", "pointcache", "animation"]
     representations = ["abc"]

-    label = "Load Alembic Model"
+    label = "Load Alembic"
     icon = "cube"
     color = "orange"
     node_color = "0x4ecd91ff"
@@ -52,7 +52,7 @@ class ExtractReviewDataMov(openpype.api.Extractor):
         for o_name, o_data in self.outputs.items():
             f_families = o_data["filter"]["families"]
             f_task_types = o_data["filter"]["task_types"]
-            f_subsets = o_data["filter"]["sebsets"]
+            f_subsets = o_data["filter"]["subsets"]

             self.log.debug(
                 "f_families `{}` > families: {}".format(
@@ -1,4 +1,5 @@
 import nuke
+import os

 from openpype.api import Logger
 from openpype.pipeline import install_host
@@ -9,6 +10,7 @@ from openpype.hosts.nuke.api.lib import (
     WorkfileSettings,
     dirmap_file_name_filter
 )
+from openpype.settings import get_project_settings

 log = Logger.get_logger(__name__)

@@ -28,3 +30,32 @@ nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
 nuke.addFilenameFilter(dirmap_file_name_filter)

 log.info('Automatic syncing of write file knob to script version')
+
+
+def add_scripts_menu():
+    try:
+        from scriptsmenu import launchfornuke
+    except ImportError:
+        log.warning(
+            "Skipping studio.menu install, because "
+            "'scriptsmenu' module seems unavailable."
+        )
+        return
+
+    # load configuration of custom menu
+    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
+    config = project_settings["nuke"]["scriptsmenu"]["definition"]
+    _menu = project_settings["nuke"]["scriptsmenu"]["name"]
+
+    if not config:
+        log.warning("Skipping studio menu, no definition found.")
+        return
+
+    # run the launcher for the Nuke menu
+    studio_menu = launchfornuke.main(title=_menu.title())
+
+    # apply configuration
+    studio_menu.build_from_configuration(studio_menu, config)
+
+
+add_scripts_menu()
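# --- Editor's illustrative sketch (not part of the commit) -----------------
# Rough shape of the `project_settings["nuke"]["scriptsmenu"]` data consumed
# by `add_scripts_menu()` above; the entry keys and values are hypothetical
# and depend on the scriptsmenu configuration schema.
#
# scriptsmenu_settings = {
#     "name": "Studio",
#     "definition": [
#         {
#             "type": "action",
#             "title": "Open pipeline docs",
#             "command": "import webbrowser; webbrowser.open('https://example.com')"
#         }
#     ]
# }
# ----------------------------------------------------------------------------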
@@ -29,6 +29,16 @@ class PSItem(object):
     color_code = attr.ib(default=None)  # color code of layer
     instance_id = attr.ib(default=None)

+    @property
+    def clean_name(self):
+        """Returns layer name without publish icon highlight
+
+        Returns:
+            (str)
+        """
+        return (self.name.replace(PhotoshopServerStub.PUBLISH_ICON, '')
+                    .replace(PhotoshopServerStub.LOADED_ICON, ''))
+

 class PhotoshopServerStub:
     """
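# --- Editor's illustrative sketch (not part of the commit) -----------------
# `PSItem.clean_name` strips the publish/loaded icon prefixes from a layer
# name; the actual PUBLISH_ICON/LOADED_ICON glyph values are defined on
# PhotoshopServerStub and are not shown in this diff.
#
# item.name                    # e.g. PUBLISH_ICON + "imageMain"
# item.clean_name              # -> "imageMain"
# ----------------------------------------------------------------------------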
@@ -39,6 +39,9 @@ class CollectBatchData(pyblish.api.ContextPlugin):
     def process(self, context):
         self.log.info("CollectBatchData")
         batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
+        if os.environ.get("IS_TEST"):
+            self.log.debug("Automatic testing, no batch data, skipping")
+            return

         assert batch_dir, (
             "Missing `OPENPYPE_PUBLISH_DATA`")
@@ -5,6 +5,7 @@ import pyblish.api

 from openpype.lib import prepare_template_data
 from openpype.hosts.photoshop import api as photoshop
+from openpype.settings import get_project_settings


 class CollectColorCodedInstances(pyblish.api.ContextPlugin):
@@ -49,6 +50,12 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
         asset_name = context.data["asset"]
         task_name = context.data["task"]
         variant = context.data["variant"]
+        project_name = context.data["projectEntity"]["name"]
+
+        naming_conventions = get_project_settings(project_name).get(
+            "photoshop", {}).get(
+                "publish", {}).get(
+                    "ValidateNaming", {})

         stub = photoshop.stub()
         layers = stub.get_layers()
@@ -77,12 +84,15 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
                 "variant": variant,
                 "family": resolved_family,
                 "task": task_name,
-                "layer": layer.name
+                "layer": layer.clean_name
             }

             subset = resolved_subset_template.format(
                 **prepare_template_data(fill_pairs))

+            subset = self._clean_subset_name(stub, naming_conventions,
+                                             subset, layer)
+
             if subset in existing_subset_names:
                 self.log.info(
                     "Subset {} already created, skipping.".format(subset))
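# --- Editor's illustrative sketch (not part of the commit) -----------------
# How `fill_pairs` feeds the subset template above; the template string is
# hypothetical (the real one is `resolved_subset_template` from settings).
#
# fill_pairs = {"family": "image", "layer": "BG_01", "variant": "Main"}
# template = "{family}{Layer}"
# subset = template.format(**prepare_template_data(fill_pairs))
# # prepare_template_data also exposes capitalized key variants, so the
# # "{Layer}" placeholder resolves from the "layer" value.
# ----------------------------------------------------------------------------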
@@ -141,6 +151,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
         instance.data["task"] = task_name
         instance.data["subset"] = subset
+        instance.data["layer"] = layer
         instance.data["families"] = []

         return instance

@@ -186,3 +197,21 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
         self.log.debug("resolved_subset_template {}".format(
             resolved_subset_template))
         return family, resolved_subset_template
+
+    def _clean_subset_name(self, stub, naming_conventions, subset, layer):
+        """Cleans invalid characters from subset name and layer name."""
+        if re.search(naming_conventions["invalid_chars"], subset):
+            subset = re.sub(
+                naming_conventions["invalid_chars"],
+                naming_conventions["replace_char"],
+                subset
+            )
+            layer_name = re.sub(
+                naming_conventions["invalid_chars"],
+                naming_conventions["replace_char"],
+                layer.clean_name
+            )
+            layer.name = layer_name
+            stub.rename_layer(layer.id, layer_name)
+
+        return subset
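# --- Editor's illustrative sketch (not part of the commit) -----------------
# Effect of `_clean_subset_name` with hypothetical naming conventions:
#
# naming_conventions = {"invalid_chars": "[ \\.]", "replace_char": "_"}
# re.sub("[ \\.]", "_", "image BG.final")   # -> "image_BG_final"
# ----------------------------------------------------------------------------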
@@ -42,7 +42,8 @@ class ValidateNamingRepair(pyblish.api.Action):

             layer_name = re.sub(invalid_chars,
                                 replace_char,
-                                current_layer_state.name)
+                                current_layer_state.clean_name)
+            layer_name = stub.PUBLISH_ICON + layer_name

             stub.rename_layer(current_layer_state.id, layer_name)

@@ -73,13 +74,17 @@ class ValidateNaming(pyblish.api.InstancePlugin):

     def process(self, instance):
         help_msg = ' Use Repair action (A) in Pyblish to fix it.'
-        msg = "Name \"{}\" is not allowed.{}".format(instance.data["name"],
-                                                     help_msg)
-
-        formatting_data = {"msg": msg}
-        if re.search(self.invalid_chars, instance.data["name"]):
-            raise PublishXmlValidationError(self, msg,
-                                            formatting_data=formatting_data)
+        layer = instance.data.get("layer")
+        if layer:
+            msg = "Name \"{}\" is not allowed.{}".format(layer.clean_name,
+                                                         help_msg)
+
+            formatting_data = {"msg": msg}
+            if re.search(self.invalid_chars, layer.clean_name):
+                raise PublishXmlValidationError(self, msg,
+                                                formatting_data=formatting_data
+                                                )

         msg = "Subset \"{}\" is not allowed.{}".format(instance.data["subset"],
                                                        help_msg)
@@ -1,70 +0,0 @@
-import copy
-import pyblish.api
-from pprint import pformat
-
-
-class CollectBatchInstances(pyblish.api.InstancePlugin):
-    """Collect all available instances for batch publish."""
-
-    label = "Collect Batch Instances"
-    order = pyblish.api.CollectorOrder + 0.489
-    hosts = ["standalonepublisher"]
-    families = ["background_batch"]
-
-    # presets
-    default_subset_task = {
-        "background_batch": "background"
-    }
-    subsets = {
-        "background_batch": {
-            "backgroundLayout": {
-                "task": "background",
-                "family": "backgroundLayout"
-            },
-            "backgroundComp": {
-                "task": "background",
-                "family": "backgroundComp"
-            },
-            "workfileBackground": {
-                "task": "background",
-                "family": "workfile"
-            }
-        }
-    }
-    unchecked_by_default = []
-
-    def process(self, instance):
-        context = instance.context
-        asset_name = instance.data["asset"]
-        family = instance.data["family"]
-
-        default_task_name = self.default_subset_task.get(family)
-        for subset_name, subset_data in self.subsets[family].items():
-            instance_name = f"{asset_name}_{subset_name}"
-            task_name = subset_data.get("task") or default_task_name
-
-            # create new instance
-            new_instance = context.create_instance(instance_name)
-
-            # add original instance data except name key
-            for key, value in instance.data.items():
-                if key not in ["name"]:
-                    # Make sure value is copy since value may be object which
-                    # can be shared across all new created objects
-                    new_instance.data[key] = copy.deepcopy(value)
-
-            # add subset data from preset
-            new_instance.data.update(subset_data)
-
-            new_instance.data["label"] = instance_name
-            new_instance.data["subset"] = subset_name
-            new_instance.data["task"] = task_name
-
-            if subset_name in self.unchecked_by_default:
-                new_instance.data["publish"] = False
-
-            self.log.info(f"Created new instance: {instance_name}")
-            self.log.debug(f"_ inst_data: {pformat(new_instance.data)}")
-
-        # delete original instance
-        context.remove(instance)
@@ -1,243 +0,0 @@
-import os
-import json
-import copy
-
-import openpype.api
-from openpype.pipeline import legacy_io
-
-PSDImage = None
-
-
-class ExtractBGForComp(openpype.api.Extractor):
-    label = "Extract Background for Compositing"
-    families = ["backgroundComp"]
-    hosts = ["standalonepublisher"]
-
-    new_instance_family = "background"
-
-    # Presetable
-    allowed_group_names = [
-        "OL", "BG", "MG", "FG", "SB", "UL", "SKY", "Field Guide", "Field_Guide",
-        "ANIM"
-    ]
-
-    def process(self, instance):
-        # Check if python module `psd_tools` is installed
-        try:
-            global PSDImage
-            from psd_tools import PSDImage
-        except Exception:
-            raise AssertionError(
-                "BUG: Python module `psd-tools` is not installed!"
-            )
-
-        self.allowed_group_names = [
-            name.lower()
-            for name in self.allowed_group_names
-        ]
-
-        self.redo_global_plugins(instance)
-
-        repres = instance.data.get("representations")
-        if not repres:
-            self.log.info("There are no representations on instance.")
-            return
-
-        if not instance.data.get("transfers"):
-            instance.data["transfers"] = []
-
-        # Prepare staging dir
-        staging_dir = self.staging_dir(instance)
-        if not os.path.exists(staging_dir):
-            os.makedirs(staging_dir)
-
-        for repre in tuple(repres):
-            # Skip all files without .psd extension
-            repre_ext = repre["ext"].lower()
-            if repre_ext.startswith("."):
-                repre_ext = repre_ext[1:]
-
-            if repre_ext != "psd":
-                continue
-
-            # Prepare publish dir for transfers
-            publish_dir = instance.data["publishDir"]
-
-            # Prepare json filepath where extracted metadata are stored
-            json_filename = "{}.json".format(instance.name)
-            json_full_path = os.path.join(staging_dir, json_filename)
-
-            self.log.debug(f"`staging_dir` is \"{staging_dir}\"")
-
-            # Prepare new repre data
-            new_repre = {
-                "name": "json",
-                "ext": "json",
-                "files": json_filename,
-                "stagingDir": staging_dir
-            }
-
-            # TODO add check of list
-            psd_filename = repre["files"]
-            psd_folder_path = repre["stagingDir"]
-            psd_filepath = os.path.join(psd_folder_path, psd_filename)
-            self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
-            psd_object = PSDImage.open(psd_filepath)
-
-            json_data, transfers = self.export_compositing_images(
-                psd_object, staging_dir, publish_dir
-            )
-            self.log.info("Json file path: {}".format(json_full_path))
-            with open(json_full_path, "w") as json_filestream:
-                json.dump(json_data, json_filestream, indent=4)
-
-            instance.data["transfers"].extend(transfers)
-            instance.data["representations"].remove(repre)
-            instance.data["representations"].append(new_repre)
-
-    def export_compositing_images(self, psd_object, output_dir, publish_dir):
-        json_data = {
-            "__schema_version__": 1,
-            "children": []
-        }
-        transfers = []
-        for main_idx, main_layer in enumerate(psd_object):
-            if (
-                not main_layer.is_visible()
-                or main_layer.name.lower() not in self.allowed_group_names
-                or not main_layer.is_group
-            ):
-                continue
-
-            export_layers = []
-            layers_idx = 0
-            for layer in main_layer:
-                # TODO this way may be added also layers next to "ADJ"
-                if layer.name.lower() == "adj":
-                    for _layer in layer:
-                        export_layers.append((layers_idx, _layer))
-                        layers_idx += 1
-
-                else:
-                    export_layers.append((layers_idx, layer))
-                    layers_idx += 1
-
-            if not export_layers:
-                continue
-
-            main_layer_data = {
-                "index": main_idx,
-                "name": main_layer.name,
-                "children": []
-            }
-
-            for layer_idx, layer in export_layers:
-                has_size = layer.width > 0 and layer.height > 0
-                if not has_size:
-                    self.log.debug((
-                        "Skipping layer \"{}\" because does "
-                        "not have any content."
-                    ).format(layer.name))
-                    continue
-
-                main_layer_name = main_layer.name.replace(" ", "_")
-                layer_name = layer.name.replace(" ", "_")
-
-                filename = "{:0>2}_{}_{:0>2}_{}.png".format(
-                    main_idx + 1, main_layer_name, layer_idx + 1, layer_name
-                )
-                layer_data = {
-                    "index": layer_idx,
-                    "name": layer.name,
-                    "filename": filename
-                }
-                output_filepath = os.path.join(output_dir, filename)
-                dst_filepath = os.path.join(publish_dir, filename)
-                transfers.append((output_filepath, dst_filepath))
-
-                pil_object = layer.composite(viewport=psd_object.viewbox)
-                pil_object.save(output_filepath, "PNG")
-
-                main_layer_data["children"].append(layer_data)
-
-            if main_layer_data["children"]:
-                json_data["children"].append(main_layer_data)
-
-        return json_data, transfers
-
-    def redo_global_plugins(self, instance):
-        # TODO do this in collection phase
-        # Copy `families` and check if `family` is not in current families
-        families = instance.data.get("families") or list()
-        if families:
-            families = list(set(families))
-
-        if self.new_instance_family in families:
-            families.remove(self.new_instance_family)
-
-        self.log.debug(
-            "Setting new instance families {}".format(str(families))
-        )
-        instance.data["families"] = families
-
-        # Override instance data with new information
-        instance.data["family"] = self.new_instance_family
-
-        subset_name = instance.data["anatomyData"]["subset"]
-        asset_doc = instance.data["assetEntity"]
-        latest_version = self.find_last_version(subset_name, asset_doc)
-        version_number = 1
-        if latest_version is not None:
-            version_number += latest_version
-
-        instance.data["latestVersion"] = latest_version
-        instance.data["version"] = version_number
-
-        # Same data apply to anatomy data
-        instance.data["anatomyData"].update({
-            "family": self.new_instance_family,
-            "version": version_number
-        })
-
-        # Redo publish and resources dir
-        anatomy = instance.context.data["anatomy"]
-        template_data = copy.deepcopy(instance.data["anatomyData"])
-        template_data.update({
-            "frame": "FRAME_TEMP",
-            "representation": "TEMP"
-        })
-        anatomy_filled = anatomy.format(template_data)
-        if "folder" in anatomy.templates["publish"]:
-            publish_folder = anatomy_filled["publish"]["folder"]
-        else:
-            publish_folder = os.path.dirname(anatomy_filled["publish"]["path"])
-
-        publish_folder = os.path.normpath(publish_folder)
-        resources_folder = os.path.join(publish_folder, "resources")
-
-        instance.data["publishDir"] = publish_folder
-        instance.data["resourcesDir"] = resources_folder
-
-        self.log.debug("publishDir: \"{}\"".format(publish_folder))
-        self.log.debug("resourcesDir: \"{}\"".format(resources_folder))
-
-    def find_last_version(self, subset_name, asset_doc):
-        subset_doc = legacy_io.find_one({
-            "type": "subset",
-            "name": subset_name,
-            "parent": asset_doc["_id"]
-        })
-
-        if subset_doc is None:
-            self.log.debug("Subset entity does not exist yet.")
-        else:
-            version_doc = legacy_io.find_one(
-                {
-                    "type": "version",
-                    "parent": subset_doc["_id"]
-                },
-                sort=[("name", -1)]
-            )
-            if version_doc:
-                return int(version_doc["name"])
-        return None
@@ -1,248 +0,0 @@
-import os
-import copy
-import json
-
-import pyblish.api
-
-import openpype.api
-from openpype.pipeline import legacy_io
-
-PSDImage = None
-
-
-class ExtractBGMainGroups(openpype.api.Extractor):
-    label = "Extract Background Layout"
-    order = pyblish.api.ExtractorOrder + 0.02
-    families = ["backgroundLayout"]
-    hosts = ["standalonepublisher"]
-
-    new_instance_family = "background"
-
-    # Presetable
-    allowed_group_names = [
-        "OL", "BG", "MG", "FG", "UL", "SB", "SKY", "Field Guide", "Field_Guide",
-        "ANIM"
-    ]
-
-    def process(self, instance):
-        # Check if python module `psd_tools` is installed
-        try:
-            global PSDImage
-            from psd_tools import PSDImage
-        except Exception:
-            raise AssertionError(
-                "BUG: Python module `psd-tools` is not installed!"
-            )
-
-        self.allowed_group_names = [
-            name.lower()
-            for name in self.allowed_group_names
-        ]
-        repres = instance.data.get("representations")
-        if not repres:
-            self.log.info("There are no representations on instance.")
-            return
-
-        self.redo_global_plugins(instance)
-
-        repres = instance.data.get("representations")
-        if not repres:
-            self.log.info("There are no representations on instance.")
-            return
-
-        if not instance.data.get("transfers"):
-            instance.data["transfers"] = []
-
-        # Prepare staging dir
-        staging_dir = self.staging_dir(instance)
-        if not os.path.exists(staging_dir):
-            os.makedirs(staging_dir)
-
-        # Prepare publish dir for transfers
-        publish_dir = instance.data["publishDir"]
-
-        for repre in tuple(repres):
-            # Skip all files without .psd extension
-            repre_ext = repre["ext"].lower()
-            if repre_ext.startswith("."):
-                repre_ext = repre_ext[1:]
-
-            if repre_ext != "psd":
-                continue
-
-            # Prepare json filepath where extracted metadata are stored
-            json_filename = "{}.json".format(instance.name)
-            json_full_path = os.path.join(staging_dir, json_filename)
-
-            self.log.debug(f"`staging_dir` is \"{staging_dir}\"")
-
-            # Prepare new repre data
-            new_repre = {
-                "name": "json",
-                "ext": "json",
-                "files": json_filename,
-                "stagingDir": staging_dir
-            }
-
-            # TODO add check of list
-            psd_filename = repre["files"]
-            psd_folder_path = repre["stagingDir"]
-            psd_filepath = os.path.join(psd_folder_path, psd_filename)
-            self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
-            psd_object = PSDImage.open(psd_filepath)
-
-            json_data, transfers = self.export_compositing_images(
-                psd_object, staging_dir, publish_dir
-            )
-            self.log.info("Json file path: {}".format(json_full_path))
-            with open(json_full_path, "w") as json_filestream:
-                json.dump(json_data, json_filestream, indent=4)
-
-            instance.data["transfers"].extend(transfers)
-            instance.data["representations"].remove(repre)
-            instance.data["representations"].append(new_repre)
-
-    def export_compositing_images(self, psd_object, output_dir, publish_dir):
-        json_data = {
-            "__schema_version__": 1,
-            "children": []
-        }
-        output_ext = ".png"
-
-        to_export = []
-        for layer_idx, layer in enumerate(psd_object):
-            layer_name = layer.name.replace(" ", "_")
-            if (
-                not layer.is_visible()
-                or layer_name.lower() not in self.allowed_group_names
-            ):
-                continue
-
-            has_size = layer.width > 0 and layer.height > 0
-            if not has_size:
-                self.log.debug((
-                    "Skipping layer \"{}\" because does not have any content."
-                ).format(layer.name))
-                continue
-
-            filebase = "{:0>2}_{}".format(layer_idx, layer_name)
-            if layer_name.lower() == "anim":
-                if not layer.is_group:
-                    self.log.warning("ANIM layer is not a group layer.")
-                    continue
-
-                children = []
-                for anim_idx, anim_layer in enumerate(layer):
-                    anim_layer_name = anim_layer.name.replace(" ", "_")
-                    filename = "{}_{:0>2}_{}{}".format(
-                        filebase, anim_idx, anim_layer_name, output_ext
-                    )
-                    children.append({
-                        "index": anim_idx,
-                        "name": anim_layer.name,
-                        "filename": filename
-                    })
-                    to_export.append((anim_layer, filename))
-
-                json_data["children"].append({
-                    "index": layer_idx,
-                    "name": layer.name,
-                    "children": children
-                })
-                continue
-
-            filename = filebase + output_ext
-            json_data["children"].append({
-                "index": layer_idx,
-                "name": layer.name,
-                "filename": filename
-            })
-            to_export.append((layer, filename))
-
-        transfers = []
-        for layer, filename in to_export:
-            output_filepath = os.path.join(output_dir, filename)
-            dst_filepath = os.path.join(publish_dir, filename)
-            transfers.append((output_filepath, dst_filepath))
-
-            pil_object = layer.composite(viewport=psd_object.viewbox)
-            pil_object.save(output_filepath, "PNG")
-
-        return json_data, transfers
-
-    def redo_global_plugins(self, instance):
-        # TODO do this in collection phase
-        # Copy `families` and check if `family` is not in current families
-        families = instance.data.get("families") or list()
-        if families:
-            families = list(set(families))
-
-        if self.new_instance_family in families:
-            families.remove(self.new_instance_family)
-
-        self.log.debug(
-            "Setting new instance families {}".format(str(families))
-        )
-        instance.data["families"] = families
-
-        # Override instance data with new information
-        instance.data["family"] = self.new_instance_family
-
-        subset_name = instance.data["anatomyData"]["subset"]
-        asset_doc = instance.data["assetEntity"]
-        latest_version = self.find_last_version(subset_name, asset_doc)
-        version_number = 1
-        if latest_version is not None:
-            version_number += latest_version
-
-        instance.data["latestVersion"] = latest_version
-        instance.data["version"] = version_number
-
-        # Same data apply to anatomy data
-        instance.data["anatomyData"].update({
-            "family": self.new_instance_family,
-            "version": version_number
-        })
-
-        # Redo publish and resources dir
-        anatomy = instance.context.data["anatomy"]
-        template_data = copy.deepcopy(instance.data["anatomyData"])
-        template_data.update({
-            "frame": "FRAME_TEMP",
-            "representation": "TEMP"
-        })
-        anatomy_filled = anatomy.format(template_data)
-        if "folder" in anatomy.templates["publish"]:
-            publish_folder = anatomy_filled["publish"]["folder"]
-        else:
-            publish_folder = os.path.dirname(anatomy_filled["publish"]["path"])
-
-        publish_folder = os.path.normpath(publish_folder)
-        resources_folder = os.path.join(publish_folder, "resources")
-
-        instance.data["publishDir"] = publish_folder
-        instance.data["resourcesDir"] = resources_folder
-
-        self.log.debug("publishDir: \"{}\"".format(publish_folder))
-        self.log.debug("resourcesDir: \"{}\"".format(resources_folder))
-
-    def find_last_version(self, subset_name, asset_doc):
-        subset_doc = legacy_io.find_one({
-            "type": "subset",
-            "name": subset_name,
-            "parent": asset_doc["_id"]
-        })
-
-        if subset_doc is None:
-            self.log.debug("Subset entity does not exist yet.")
-        else:
-            version_doc = legacy_io.find_one(
-                {
-                    "type": "version",
-                    "parent": subset_doc["_id"]
-                },
-                sort=[("name", -1)]
-            )
-            if version_doc:
-                return int(version_doc["name"])
-        return None
@ -1,171 +0,0 @@
|
|||
import os
|
||||
import copy
|
||||
import pyblish.api
|
||||
|
||||
import openpype.api
|
||||
from openpype.pipeline import legacy_io
|
||||
|
||||
PSDImage = None
|
||||
|
||||
|
||||
class ExtractImagesFromPSD(openpype.api.Extractor):
|
||||
# PLUGIN is not currently enabled because was decided to use different
|
||||
# approach
|
||||
enabled = False
|
||||
active = False
|
||||
label = "Extract Images from PSD"
|
||||
order = pyblish.api.ExtractorOrder + 0.02
|
||||
families = ["backgroundLayout"]
|
||||
hosts = ["standalonepublisher"]
|
||||
|
||||
new_instance_family = "image"
|
||||
ignored_instance_data_keys = ("name", "label", "stagingDir", "version")
|
||||
# Presetable
|
||||
allowed_group_names = [
|
||||
"OL", "BG", "MG", "FG", "UL", "SKY", "Field Guide", "Field_Guide",
|
||||
"ANIM"
|
||||
]
|
||||
|
||||
def process(self, instance):
|
||||
# Check if python module `psd_tools` is installed
|
||||
try:
|
||||
global PSDImage
|
||||
from psd_tools import PSDImage
|
||||
except Exception:
|
||||
raise AssertionError(
|
||||
"BUG: Python module `psd-tools` is not installed!"
|
||||
)
|
||||
|
||||
self.allowed_group_names = [
|
||||
name.lower()
|
||||
for name in self.allowed_group_names
|
||||
]
|
||||
        repres = instance.data.get("representations")
        if not repres:
            self.log.info("There are no representations on instance.")
            return

        for repre in tuple(repres):
            # Skip all files without .psd extension
            repre_ext = repre["ext"].lower()
            if repre_ext.startswith("."):
                repre_ext = repre_ext[1:]

            if repre_ext != "psd":
                continue

            # TODO add check of list of "files" value
            psd_filename = repre["files"]
            psd_folder_path = repre["stagingDir"]
            psd_filepath = os.path.join(psd_folder_path, psd_filename)
            self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
            psd_object = PSDImage.open(psd_filepath)

            self.create_new_instances(instance, psd_object)

        # Remove the instance from context
        instance.context.remove(instance)

    def create_new_instances(self, instance, psd_object):
        asset_doc = instance.data["assetEntity"]
        for layer in psd_object:
            if (
                not layer.is_visible()
                or layer.name.lower() not in self.allowed_group_names
            ):
                continue

            has_size = layer.width > 0 and layer.height > 0
            if not has_size:
                self.log.debug((
                    "Skipping layer \"{}\" because it does "
                    "not have any content."
                ).format(layer.name))
                continue

            layer_name = layer.name.replace(" ", "_")
            instance_name = subset_name = f"image{layer_name}"
            self.log.info(
                f"Creating new instance with name \"{instance_name}\""
            )
            new_instance = instance.context.create_instance(instance_name)
            for key, value in instance.data.items():
                if key not in self.ignored_instance_data_keys:
                    new_instance.data[key] = copy.deepcopy(value)

            new_instance.data["label"] = " ".join(
                (new_instance.data["asset"], instance_name)
            )

            # Find latest version
            latest_version = self.find_last_version(subset_name, asset_doc)
            version_number = 1
            if latest_version is not None:
                version_number += latest_version

            self.log.info(
                "Next version of instance \"{}\" will be {}".format(
                    instance_name, version_number
                )
            )

            # Set family and subset
            new_instance.data["family"] = self.new_instance_family
            new_instance.data["subset"] = subset_name
            new_instance.data["version"] = version_number
            new_instance.data["latestVersion"] = latest_version

            new_instance.data["anatomyData"].update({
                "subset": subset_name,
                "family": self.new_instance_family,
                "version": version_number
            })

            # Copy `families` and check if `family` is not in current families
            families = new_instance.data.get("families") or list()
            if families:
                families = list(set(families))

            if self.new_instance_family in families:
                families.remove(self.new_instance_family)
            new_instance.data["families"] = families

            # Prepare staging dir for new instance
            staging_dir = self.staging_dir(new_instance)

            output_filename = "{}.png".format(layer_name)
            output_filepath = os.path.join(staging_dir, output_filename)
            pil_object = layer.composite(viewport=psd_object.viewbox)
            pil_object.save(output_filepath, "PNG")

            new_repre = {
                "name": "png",
                "ext": "png",
                "files": output_filename,
                "stagingDir": staging_dir
            }
            self.log.debug(
                "Creating new representation: {}".format(new_repre)
            )
            new_instance.data["representations"] = [new_repre]

    def find_last_version(self, subset_name, asset_doc):
        subset_doc = legacy_io.find_one({
            "type": "subset",
            "name": subset_name,
            "parent": asset_doc["_id"]
        })

        if subset_doc is None:
            self.log.debug("Subset entity does not exist yet.")
        else:
            version_doc = legacy_io.find_one(
                {
                    "type": "version",
                    "parent": subset_doc["_id"]
                },
                sort=[("name", -1)]
            )
            if version_doc:
                return int(version_doc["name"])
        return None
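The layer loop above leans on psd-tools compositing to bake each visible group into a full-canvas image. A minimal standalone sketch of those calls, assuming psd-tools and Pillow are installed; the input filename is hypothetical:

from psd_tools import PSDImage

psd_object = PSDImage.open("layers.psd")  # hypothetical input file
for layer in psd_object:
    if not layer.is_visible():
        continue
    # Composite against the full canvas so every exported PNG shares
    # the document's resolution and layer placement.
    pil_object = layer.composite(viewport=psd_object.viewbox)
    pil_object.save("{}.png".format(layer.name.replace(" ", "_")), "PNG")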
@ -2,7 +2,11 @@ import os
import tempfile
import pyblish.api
import openpype.api
import openpype.lib
from openpype.lib import (
    get_ffmpeg_tool_path,
    get_ffprobe_streams,
    path_to_subprocess_arg,
)


class ExtractThumbnailSP(pyblish.api.InstancePlugin):
@ -34,85 +38,78 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
        if not thumbnail_repre:
            return

        thumbnail_repre.pop("thumbnail")
        files = thumbnail_repre.get("files")
        if not files:
            return

        if isinstance(files, list):
            files_len = len(files)
            file = str(files[0])
            first_filename = str(files[0])
        else:
            files_len = 1
            file = files
            first_filename = files

        staging_dir = None
        is_jpeg = False
        if file.endswith(".jpeg") or file.endswith(".jpg"):
            is_jpeg = True

        if is_jpeg and files_len == 1:
            # skip if already is single jpeg file
            return
        # Convert to jpeg if not yet
        full_input_path = os.path.join(
            thumbnail_repre["stagingDir"], first_filename
        )
        self.log.info("input {}".format(full_input_path))
        with tempfile.NamedTemporaryFile(suffix=".jpg") as tmp:
            full_thumbnail_path = tmp.name

        elif is_jpeg:
            # use first frame as thumbnail if is sequence of jpegs
            full_thumbnail_path = os.path.join(
                thumbnail_repre["stagingDir"], file
            )
            self.log.info(
                "For thumbnail is used file: {}".format(full_thumbnail_path)
            )
        self.log.info("output {}".format(full_thumbnail_path))

        else:
            # Convert to jpeg if not yet
            full_input_path = os.path.join(thumbnail_repre["stagingDir"], file)
            self.log.info("input {}".format(full_input_path))
        instance.context.data["cleanupFullPaths"].append(full_thumbnail_path)

            full_thumbnail_path = tempfile.mkstemp(suffix=".jpg")[1]
            self.log.info("output {}".format(full_thumbnail_path))
        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")

            ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
        ffmpeg_args = self.ffmpeg_args or {}

            ffmpeg_args = self.ffmpeg_args or {}
        jpeg_items = [
            path_to_subprocess_arg(ffmpeg_path),
            # override file if already exists
            "-y"
        ]

            jpeg_items = [
                "\"{}\"".format(ffmpeg_path),
                # override file if already exists
                "-y"
            ]

        # add input filters from presets
        jpeg_items.extend(ffmpeg_args.get("input") or [])
        # input file
        jpeg_items.append("-i \"{}\"".format(full_input_path))
        # add input filters from presets
        jpeg_items.extend(ffmpeg_args.get("input") or [])
        # input file
        jpeg_items.extend([
            "-i", path_to_subprocess_arg(full_input_path),
            # extract only single file
            jpeg_items.append("-frames:v 1")
            "-frames:v", "1",
            # Add black background for transparent images
            jpeg_items.append((
                "-filter_complex"
                " \"color=black,format=rgb24[c]"
            "-filter_complex", (
                "\"color=black,format=rgb24[c]"
                ";[c][0]scale2ref[c][i]"
                ";[c][i]overlay=format=auto:shortest=1,setsar=1\""
            ))
            ),
        ])

        jpeg_items.extend(ffmpeg_args.get("output") or [])
        jpeg_items.extend(ffmpeg_args.get("output") or [])

        # output file
        jpeg_items.append("\"{}\"".format(full_thumbnail_path))
        # output file
        jpeg_items.append(path_to_subprocess_arg(full_thumbnail_path))

        subprocess_jpeg = " ".join(jpeg_items)
        subprocess_jpeg = " ".join(jpeg_items)

        # run subprocess
        self.log.debug("Executing: {}".format(subprocess_jpeg))
        openpype.api.run_subprocess(
            subprocess_jpeg, shell=True, logger=self.log
        )
        # run subprocess
        self.log.debug("Executing: {}".format(subprocess_jpeg))
        openpype.api.run_subprocess(
            subprocess_jpeg, shell=True, logger=self.log
        )

        # remove thumbnail key from origin repre
        thumbnail_repre.pop("thumbnail")
        streams = get_ffprobe_streams(full_thumbnail_path)
        width = height = None
        for stream in streams:
            if "width" in stream and "height" in stream:
                width = stream["width"]
                height = stream["height"]
                break

        filename = os.path.basename(full_thumbnail_path)
        staging_dir = staging_dir or os.path.dirname(full_thumbnail_path)
        staging_dir, filename = os.path.split(full_thumbnail_path)

        # create new thumbnail representation
        representation = {

@ -120,12 +117,11 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
            'ext': 'jpg',
            'files': filename,
            "stagingDir": staging_dir,
            "tags": ["thumbnail"],
            "tags": ["thumbnail", "delete"],
        }

        # # add Delete tag when temp file was rendered
        if not is_jpeg:
            representation["tags"].append("delete")
        if width and height:
            representation["width"] = width
            representation["height"] = height

        self.log.info(f"New representation {representation}")
        instance.data["representations"].append(representation)
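Joined into one string, the jpeg_items above boil down to a single ffmpeg call roughly like the following sketch. Paths are illustrative; the real plugin also splices in the configured preset arguments and runs through openpype.api.run_subprocess:

import subprocess

cmd = (
    'ffmpeg -y -i "input.0001.png" -frames:v 1'
    ' -filter_complex "color=black,format=rgb24[c]'
    ';[c][0]scale2ref[c][i]'
    ';[c][i]overlay=format=auto:shortest=1,setsar=1"'
    ' "thumbnail.jpg"'
)
# shell=True mirrors the plugin, which joins all arguments into one string.
subprocess.run(cmd, shell=True, check=True)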
97
openpype/hosts/traypublisher/api/plugin.py
Normal file
@ -0,0 +1,97 @@
from openpype.pipeline import (
    Creator,
    CreatedInstance
)
from openpype.lib import FileDef

from .pipeline import (
    list_instances,
    update_instances,
    remove_instances,
    HostContext,
)


class TrayPublishCreator(Creator):
    create_allow_context_change = True
    host_name = "traypublisher"

    def collect_instances(self):
        for instance_data in list_instances():
            creator_id = instance_data.get("creator_identifier")
            if creator_id == self.identifier:
                instance = CreatedInstance.from_existing(
                    instance_data, self
                )
                self._add_instance_to_context(instance)

    def update_instances(self, update_list):
        update_instances(update_list)

    def remove_instances(self, instances):
        remove_instances(instances)
        for instance in instances:
            self._remove_instance_from_context(instance)

    def get_pre_create_attr_defs(self):
        # Use same attributes as for instance attributes
        return self.get_instance_attr_defs()


class SettingsCreator(TrayPublishCreator):
    create_allow_context_change = True

    extensions = []

    def collect_instances(self):
        for instance_data in list_instances():
            creator_id = instance_data.get("creator_identifier")
            if creator_id == self.identifier:
                instance = CreatedInstance.from_existing(
                    instance_data, self
                )
                self._add_instance_to_context(instance)

    def create(self, subset_name, data, pre_create_data):
        # Pass precreate data to creator attributes
        data["creator_attributes"] = pre_create_data
        data["settings_creator"] = True
        # Create new instance
        new_instance = CreatedInstance(self.family, subset_name, data, self)
        # Host implementation of storing metadata about instance
        HostContext.add_instance(new_instance.data_to_store())
        # Add instance to current context
        self._add_instance_to_context(new_instance)

    def get_instance_attr_defs(self):
        return [
            FileDef(
                "filepath",
                folders=False,
                extensions=self.extensions,
                allow_sequences=self.allow_sequences,
                label="Filepath",
            )
        ]

    @classmethod
    def from_settings(cls, item_data):
        identifier = item_data["identifier"]
        family = item_data["family"]
        if not identifier:
            identifier = "settings_{}".format(family)
        return type(
            "{}{}".format(cls.__name__, identifier),
            (cls, ),
            {
                "family": family,
                "identifier": identifier,
                "label": item_data["label"].strip(),
                "icon": item_data["icon"],
                "description": item_data["description"],
                "detailed_description": item_data["detailed_description"],
                "extensions": item_data["extensions"],
                "allow_sequences": item_data["allow_sequences"],
                "default_variants": item_data["default_variants"]
            }
        )
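from_settings relies on plain three-argument type() subclassing to mint one Creator class per settings item. A minimal standalone sketch of the technique, with made-up class and attribute values:

class Base:
    family = None
    identifier = None

# type(name, bases, namespace) builds a new subclass at runtime.
ImageCreator = type(
    "SettingsCreatorsettings_image",
    (Base, ),
    {"family": "image", "identifier": "settings_image"}
)
print(ImageCreator.identifier)  # settings_image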
@ -0,0 +1,20 @@
import os

from openpype.api import get_project_settings


def initialize():
    from openpype.hosts.traypublisher.api.plugin import SettingsCreator

    project_name = os.environ["AVALON_PROJECT"]
    project_settings = get_project_settings(project_name)

    simple_creators = project_settings["traypublisher"]["simple_creators"]

    global_variables = globals()
    for item in simple_creators:
        dynamic_plugin = SettingsCreator.from_settings(item)
        global_variables[dynamic_plugin.__name__] = dynamic_plugin


initialize()
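For context, a hypothetical `simple_creators` settings item shaped the way from_settings reads it; the keys come from the code above, every value here is illustrative only:

item_data = {
    "identifier": "",  # empty, so it is derived as "settings_image"
    "family": "image",
    "label": " Image ",  # stripped by from_settings
    "icon": "fa.image",
    "description": "Publish image files",
    "detailed_description": "Publish image files from disk.",
    "extensions": [".png", ".jpg", ".exr"],
    "allow_sequences": True,
    "default_variants": ["Main"]
}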
@ -1,97 +0,0 @@
from openpype.hosts.traypublisher.api import pipeline
from openpype.lib import FileDef
from openpype.pipeline import (
    Creator,
    CreatedInstance
)


class WorkfileCreator(Creator):
    identifier = "workfile"
    label = "Workfile"
    family = "workfile"
    description = "Publish backup of workfile"

    create_allow_context_change = True

    extensions = [
        # Maya
        ".ma", ".mb",
        # Nuke
        ".nk",
        # Hiero
        ".hrox",
        # Houdini
        ".hip", ".hiplc", ".hipnc",
        # Blender
        ".blend",
        # Celaction
        ".scn",
        # TVPaint
        ".tvpp",
        # Fusion
        ".comp",
        # Harmony
        ".zip",
        # Premiere
        ".prproj",
        # Resolve
        ".drp",
        # Photoshop
        ".psd", ".psb",
        # Aftereffects
        ".aep"
    ]

    def get_icon(self):
        return "fa.file"

    def collect_instances(self):
        for instance_data in pipeline.list_instances():
            creator_id = instance_data.get("creator_identifier")
            if creator_id == self.identifier:
                instance = CreatedInstance.from_existing(
                    instance_data, self
                )
                self._add_instance_to_context(instance)

    def update_instances(self, update_list):
        pipeline.update_instances(update_list)

    def remove_instances(self, instances):
        pipeline.remove_instances(instances)
        for instance in instances:
            self._remove_instance_from_context(instance)

    def create(self, subset_name, data, pre_create_data):
        # Pass precreate data to creator attributes
        data["creator_attributes"] = pre_create_data
        # Create new instance
        new_instance = CreatedInstance(self.family, subset_name, data, self)
        # Host implementation of storing metadata about instance
        pipeline.HostContext.add_instance(new_instance.data_to_store())
        # Add instance to current context
        self._add_instance_to_context(new_instance)

    def get_default_variants(self):
        return [
            "Main"
        ]

    def get_instance_attr_defs(self):
        output = [
            FileDef(
                "filepath",
                folders=False,
                extensions=self.extensions,
                label="Filepath"
            )
        ]
        return output

    def get_pre_create_attr_defs(self):
        # Use same attributes as for instance attributes
        return self.get_instance_attr_defs()

    def get_detail_description(self):
        return """# Publish workfile backup"""
@ -0,0 +1,31 @@
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin


class CollectReviewFamily(
    pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
    """Add review family."""

    label = "Collect Review Family"
    order = pyblish.api.CollectorOrder - 0.49

    hosts = ["traypublisher"]
    families = [
        "image",
        "render",
        "plate",
        "review"
    ]

    def process(self, instance):
        values = self.get_attr_values_from_data(instance.data)
        if values.get("add_review_family"):
            instance.data["families"].append("review")

    @classmethod
    def get_attribute_defs(cls):
        return [
            BoolDef("add_review_family", label="Review", default=True)
        ]
@ -0,0 +1,50 @@
import os
import pyblish.api


class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
    """Collect data for instances created by settings creators."""

    label = "Collect Settings Simple Instances"
    order = pyblish.api.CollectorOrder - 0.49

    hosts = ["traypublisher"]

    def process(self, instance):
        if not instance.data.get("settings_creator"):
            return

        if "families" not in instance.data:
            instance.data["families"] = []

        if "representations" not in instance.data:
            instance.data["representations"] = []
        repres = instance.data["representations"]

        creator_attributes = instance.data["creator_attributes"]
        filepath_item = creator_attributes["filepath"]
        self.log.info(filepath_item)
        filepaths = [
            os.path.join(filepath_item["directory"], filename)
            for filename in filepath_item["filenames"]
        ]

        instance.data["sourceFilepaths"] = filepaths
        instance.data["stagingDir"] = filepath_item["directory"]

        filenames = filepath_item["filenames"]
        _, ext = os.path.splitext(filenames[0])
        ext = ext[1:]
        if len(filenames) == 1:
            filenames = filenames[0]

        repres.append({
            "ext": ext,
            "name": ext,
            "stagingDir": filepath_item["directory"],
            "files": filenames
        })

        self.log.debug("Created Simple Settings instance {}".format(
            instance.data
        ))
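The collector assumes the FileDef value is a dict carrying a directory and a list of filenames. A sketch with illustrative values showing the path join it performs:

import os

# Illustrative shape of the "filepath" creator attribute value;
# the real dict is filled by the publisher UI.
filepath_item = {
    "directory": "/projects/demo/input",
    "filenames": ["render.0001.png", "render.0002.png"]
}
filepaths = [
    os.path.join(filepath_item["directory"], filename)
    for filename in filepath_item["filenames"]
]
print(filepaths[0])  # /projects/demo/input/render.0001.png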
@ -1,31 +0,0 @@
import os
import pyblish.api


class CollectWorkfile(pyblish.api.InstancePlugin):
    """Collect representation of workfile instances."""

    label = "Collect Workfile"
    order = pyblish.api.CollectorOrder - 0.49
    families = ["workfile"]
    hosts = ["traypublisher"]

    def process(self, instance):
        if "representations" not in instance.data:
            instance.data["representations"] = []
        repres = instance.data["representations"]

        creator_attributes = instance.data["creator_attributes"]
        filepath = creator_attributes["filepath"]
        instance.data["sourceFilepath"] = filepath

        staging_dir = os.path.dirname(filepath)
        filename = os.path.basename(filepath)
        ext = os.path.splitext(filename)[-1]

        repres.append({
            "ext": ext,
            "name": ext,
            "stagingDir": staging_dir,
            "files": filename
        })
@ -0,0 +1,47 @@
import os
import pyblish.api
from openpype.pipeline import PublishValidationError


class ValidateWorkfilePath(pyblish.api.InstancePlugin):
    """Validate that the instance's source filepaths exist."""

    label = "Validate Workfile"
    order = pyblish.api.ValidatorOrder - 0.49

    hosts = ["traypublisher"]

    def process(self, instance):
        if "sourceFilepaths" not in instance.data:
            self.log.info((
                "Can't validate source filepaths existence."
                " Instance does not have collected 'sourceFilepaths'"
            ))
            return

        filepaths = instance.data.get("sourceFilepaths")

        not_found_files = [
            filepath
            for filepath in filepaths
            if not os.path.exists(filepath)
        ]
        if not_found_files:
            joined_paths = "\n".join([
                "- {}".format(filepath)
                for filepath in not_found_files
            ])
            raise PublishValidationError(
                (
                    "Filepath of '{}' instance \"{}\" does not exist:\n{}"
                ).format(
                    instance.data["family"],
                    instance.data["name"],
                    joined_paths
                ),
                "File not found",
                (
                    "## Files were not found\nFiles\n{}"
                    "\n\nCheck if the path is still available."
                ).format(joined_paths)
            )
@ -1,35 +0,0 @@
import os
import pyblish.api
from openpype.pipeline import PublishValidationError


class ValidateWorkfilePath(pyblish.api.InstancePlugin):
    """Validate that the workfile instance's source file exists."""

    label = "Validate Workfile"
    order = pyblish.api.ValidatorOrder - 0.49
    families = ["workfile"]
    hosts = ["traypublisher"]

    def process(self, instance):
        filepath = instance.data["sourceFilepath"]
        if not filepath:
            raise PublishValidationError(
                (
                    "Filepath of 'workfile' instance \"{}\" is not set"
                ).format(instance.data["name"]),
                "File not filled",
                "## Missing file\nYou are supposed to fill the path."
            )

        if not os.path.exists(filepath):
            raise PublishValidationError(
                (
                    "Filepath of 'workfile' instance \"{}\" does not exist: {}"
                ).format(instance.data["name"], filepath),
                "File not found",
                (
                    "## File was not found\nFile \"{}\" was not found."
                    " Check if the path is still available."
                ).format(filepath)
            )
@ -165,12 +165,12 @@ def parse_group_data(data):
        if not group_raw:
            continue

        parts = group_raw.split(" ")
        parts = group_raw.split("|")
        # Check for length and concatenate 2 last items until lengths match
        # - this happens if the name contains spaces
        while len(parts) > 6:
            last_item = parts.pop(-1)
            parts[-1] = " ".join([parts[-1], last_item])
            parts[-1] = "|".join([parts[-1], last_item])
        clip_id, group_id, red, green, blue, name = parts

        group = {
@ -201,11 +201,16 @@ def get_groups_data(communicator=None):
    george_script_lines = (
        # Variable containing full path to output file
        "output_path = \"{}\"".format(output_filepath),
        "loop = 1",
        "FOR idx = 1 TO 12",
        "empty = 0",
        # Loop over 100 groups
        "FOR idx = 1 TO 100",
        # Receive information about groups
        "tv_layercolor \"getcolor\" 0 idx",
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result",
        "END"
        "PARSE result clip_id group_index c_red c_green c_blue group_name",
        # Create and add line to output file
        "line = clip_id'|'group_index'|'c_red'|'c_green'|'c_blue'|'group_name",
        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line",
        "END",
    )
    george_script = "\n".join(george_script_lines)
    execute_george_through_file(george_script, communicator)
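The switch from space-separated to pipe-separated lines keeps group names containing spaces intact when parse_group_data splits them back. A small demo with an illustrative line in the new format:

group_raw = "0|2|255|128|0|My group name"
parts = group_raw.split("|")
# Only names containing the separator itself would trigger this re-join.
while len(parts) > 6:
    last_item = parts.pop(-1)
    parts[-1] = "|".join([parts[-1], last_item])
clip_id, group_id, red, green, blue, name = parts
print(name)  # My group name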
@ -573,7 +573,7 @@ def composite_rendered_layers(
        layer_ids_by_position[layer_position] = layer["layer_id"]

    # Sort layer positions
    sorted_positions = tuple(sorted(layer_ids_by_position.keys()))
    sorted_positions = tuple(reversed(sorted(layer_ids_by_position.keys())))
    # Prepare variable where filepaths without any rendered content
    # - transparent files - will be created
    transparent_filepaths = set()
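The one-line change only flips the iteration order of layer positions from ascending to descending; a tiny demo with made-up positions:

layer_ids_by_position = {1: "layer_a", 3: "layer_c", 2: "layer_b"}
sorted_positions = tuple(reversed(sorted(layer_ids_by_position.keys())))
print(sorted_positions)  # (3, 2, 1)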
@ -24,7 +24,9 @@ class CreateRenderlayer(plugin.Creator):
        " {clip_id} {group_id} {r} {g} {b} \"{name}\""
    )

    dynamic_subset_keys = ["render_pass", "render_layer", "group"]
    dynamic_subset_keys = [
        "renderpass", "renderlayer", "render_pass", "render_layer", "group"
    ]

    @classmethod
    def get_dynamic_data(

@ -34,12 +36,17 @@ class CreateRenderlayer(plugin.Creator):
            variant, task_name, asset_id, project_name, host_name
        )
        # Use render pass name from creator's plugin
        dynamic_data["render_pass"] = cls.render_pass
        dynamic_data["renderpass"] = cls.render_pass
        # Add variant to render layer
        dynamic_data["render_layer"] = variant
        dynamic_data["renderlayer"] = variant
        # Change family for subset name fill
        dynamic_data["family"] = "render"

        # TODO remove - Backwards compatibility for old subset name templates
        # - added 2022/04/28
        dynamic_data["render_pass"] = dynamic_data["renderpass"]
        dynamic_data["render_layer"] = dynamic_data["renderlayer"]

        return dynamic_data

    @classmethod
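Filling both spellings keeps subset name templates written against the old keys working next to the new ones. A sketch with an illustrative template and values:

dynamic_data = {"renderlayer": "L010", "renderpass": "beauty"}
# Mirror the new keys into the legacy placeholders.
dynamic_data["render_layer"] = dynamic_data["renderlayer"]
dynamic_data["render_pass"] = dynamic_data["renderpass"]

old_template = "render{render_layer}_{render_pass}"
new_template = "render{renderlayer}_{renderpass}"
assert (
    old_template.format(**dynamic_data)
    == new_template.format(**dynamic_data)
    == "renderL010_beauty"
)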
@ -20,7 +20,9 @@ class CreateRenderPass(plugin.Creator):
    icon = "cube"
    defaults = ["Main"]

    dynamic_subset_keys = ["render_pass", "render_layer"]
    dynamic_subset_keys = [
        "renderpass", "renderlayer", "render_pass", "render_layer"
    ]

    @classmethod
    def get_dynamic_data(

@ -29,9 +31,13 @@ class CreateRenderPass(plugin.Creator):
        dynamic_data = super(CreateRenderPass, cls).get_dynamic_data(
            variant, task_name, asset_id, project_name, host_name
        )
        dynamic_data["render_pass"] = variant
        dynamic_data["renderpass"] = variant
        dynamic_data["family"] = "render"

        # TODO remove - Backwards compatibility for old subset name templates
        # - added 2022/04/28
        dynamic_data["render_pass"] = dynamic_data["renderpass"]

        return dynamic_data

    @classmethod

@ -115,6 +121,7 @@ class CreateRenderPass(plugin.Creator):
        else:
            render_layer = beauty_instance["variant"]

        subset_name_fill_data["renderlayer"] = render_layer
        subset_name_fill_data["render_layer"] = render_layer

        # Format dynamic keys in subset name

@ -129,7 +136,7 @@ class CreateRenderPass(plugin.Creator):

        self.data["group_id"] = group_id
        self.data["pass"] = variant
        self.data["render_layer"] = render_layer
        self.data["renderlayer"] = render_layer

        # Collect selected layer ids to be stored into instance
        layer_names = [layer["name"] for layer in selected_layers]
@ -45,6 +45,21 @@ class CollectInstances(pyblish.api.ContextPlugin):
        for instance_data in filtered_instance_data:
            instance_data["fps"] = context.data["sceneFps"]

            # Conversion from older instances
            # - change 'render_layer' to 'renderlayer'
            render_layer = instance_data.get("instance_data")
            if not render_layer:
                # Render Layer has only variant
                if instance_data["family"] == "renderLayer":
                    render_layer = instance_data.get("variant")

                # Backwards compatibility for renderPasses
                elif "render_layer" in instance_data:
                    render_layer = instance_data["render_layer"]

            if render_layer:
                instance_data["renderlayer"] = render_layer

            # Store workfile instance data to instance data
            instance_data["originData"] = copy.deepcopy(instance_data)
            # Global instance data modifications

@ -191,7 +206,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
                "Creating render pass instance. \"{}\"".format(pass_name)
            )
            # Change label
            render_layer = instance_data["render_layer"]
            render_layer = instance_data["renderlayer"]

            # Backwards compatibility
            # - subset names were not stored as final subset names during creation
@ -69,9 +69,13 @@ class CollectRenderScene(pyblish.api.ContextPlugin):
        # Variant is using render pass name
        variant = self.render_layer
        dynamic_data = {
            "render_layer": self.render_layer,
            "render_pass": self.render_pass
            "renderlayer": self.render_layer,
            "renderpass": self.render_pass,
        }
        # TODO remove - Backwards compatibility for old subset name templates
        # - added 2022/04/28
        dynamic_data["render_layer"] = dynamic_data["renderlayer"]
        dynamic_data["render_pass"] = dynamic_data["renderpass"]

        task_name = workfile_context["task"]
        subset_name = get_subset_name_with_asset_doc(

@ -100,7 +104,9 @@ class CollectRenderScene(pyblish.api.ContextPlugin):
            "representations": [],
            "layers": copy.deepcopy(context.data["layersData"]),
            "asset": asset_name,
            "task": task_name
            "task": task_name,
            # Add render layer to instance data
            "renderlayer": self.render_layer
        }

        instance = context.create_instance(**instance_data)
Binary file not shown.
@ -1,15 +1,19 @@
import os
import openpype.hosts
from openpype.lib.applications import Application


def add_implementation_envs(env, _app):
def add_implementation_envs(env: dict, _app: Application) -> None:
    """Modify environments to contain all required for implementation."""
    # Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation

    ue_plugin = "UE_5.0" if _app.name[:1] == "5" else "UE_4.7"
    unreal_plugin_path = os.path.join(
        os.path.dirname(os.path.abspath(openpype.hosts.__file__)),
        "unreal", "integration"
        "unreal", "integration", ue_plugin
    )
    env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path
    if not env.get("OPENPYPE_UNREAL_PLUGIN"):
        env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path

    # Set default environments if are not set via settings
    defaults = {
@ -47,6 +47,7 @@ def install():
    print("installing OpenPype for Unreal ...")
    print("-=" * 40)
    logger.info("installing OpenPype for Unreal")
    pyblish.api.register_host("unreal")
    pyblish.api.register_plugin_path(str(PUBLISH_PATH))
    register_loader_plugin_path(str(LOAD_PATH))
    register_creator_plugin_path(str(CREATE_PATH))

@ -392,3 +393,24 @@ def cast_map_to_str_dict(umap) -> dict:

    """
    return {str(key): str(value) for (key, value) in umap.items()}


def get_subsequences(sequence: unreal.LevelSequence):
    """Get list of subsequences from sequence.

    Args:
        sequence (unreal.LevelSequence): Sequence

    Returns:
        list(unreal.LevelSequence): List of subsequences

    """
    tracks = sequence.get_master_tracks()
    subscene_track = None
    for t in tracks:
        if t.get_class() == unreal.MovieSceneSubTrack.static_class():
            subscene_track = t
            break
    if subscene_track is not None and subscene_track.get_sections():
        return subscene_track.get_sections()
    return []
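Note that get_subsequences returns the sub-track's sections rather than the sequences themselves. A hypothetical walk over a master sequence, mirroring how rendering.py consumes the result (runs only inside the Unreal editor's Python):

# "master_sequence" stands in for any loaded unreal.LevelSequence.
for section in get_subsequences(master_sequence):
    sub_sequence = section.get_sequence()
    print(
        sub_sequence.get_name(),
        section.get_start_frame(),
        section.get_end_frame()
    )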
137
openpype/hosts/unreal/api/rendering.py
Normal file
@ -0,0 +1,137 @@
import os

import unreal

from openpype.api import Anatomy
from openpype.hosts.unreal.api import pipeline


queue = None
executor = None


def _queue_finish_callback(exec, success):
    unreal.log("Render completed. Success: " + str(success))

    # Delete our reference so we don't keep it alive.
    global executor
    global queue
    del executor
    del queue


def _job_finish_callback(job, success):
    # You can make any edits you want to the editor world here, and the world
    # will be duplicated when the next render happens. Make sure you undo your
    # edits in OnQueueFinishedCallback if you don't want to leak state changes
    # into the editor world.
    unreal.log("Individual job completed.")


def start_rendering():
    """
    Start the rendering process.
    """
    print("Starting rendering...")

    # Get selected sequences
    assets = unreal.EditorUtilityLibrary.get_selected_assets()

    # instances = pipeline.ls_inst()
    instances = [
        a for a in assets
        if a.get_class().get_name() == "OpenPypePublishInstance"]

    inst_data = []

    for i in instances:
        data = pipeline.parse_container(i.get_path_name())
        if data["family"] == "render":
            inst_data.append(data)

    try:
        project = os.environ.get("AVALON_PROJECT")
        anatomy = Anatomy(project)
        root = anatomy.roots['renders']
    except Exception:
        raise Exception("Could not find render root in anatomy settings.")

    render_dir = f"{root}/{project}"

    # subsystem = unreal.get_editor_subsystem(
    #     unreal.MoviePipelineQueueSubsystem)
    # queue = subsystem.get_queue()
    global queue
    queue = unreal.MoviePipelineQueue()

    ar = unreal.AssetRegistryHelpers.get_asset_registry()

    for i in inst_data:
        sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset()

        sequences = [{
            "sequence": sequence,
            "output": f"{i['output']}",
            "frame_range": (
                int(float(i["frameStart"])),
                int(float(i["frameEnd"])) + 1)
        }]
        render_list = []

        # Get all the sequences to render. If there are subsequences,
        # add them and their frame ranges to the render list. We also
        # use the names for the output paths.
        for s in sequences:
            subscenes = pipeline.get_subsequences(s.get('sequence'))

            if subscenes:
                for ss in subscenes:
                    sequences.append({
                        "sequence": ss.get_sequence(),
                        "output": (f"{s.get('output')}/"
                                   f"{ss.get_sequence().get_name()}"),
                        "frame_range": (
                            ss.get_start_frame(), ss.get_end_frame())
                    })
            else:
                # Avoid rendering camera sequences
                if "_camera" not in s.get('sequence').get_name():
                    render_list.append(s)

        # Create the rendering jobs and add them to the queue.
        for r in render_list:
            job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
            job.sequence = unreal.SoftObjectPath(i["master_sequence"])
            job.map = unreal.SoftObjectPath(i["master_level"])
            job.author = "OpenPype"

            # User data could be used to pass data to the job, that can be
            # read in the job's OnJobFinished callback. We could,
            # for instance, pass the AvalonPublishInstance's path to the job.
            # job.user_data = ""

            settings = job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineOutputSetting)
            settings.output_resolution = unreal.IntPoint(1920, 1080)
            settings.custom_start_frame = r.get("frame_range")[0]
            settings.custom_end_frame = r.get("frame_range")[1]
            settings.use_custom_playback_range = True
            settings.file_name_format = "{sequence_name}.{frame_number}"
            settings.output_directory.path = f"{render_dir}/{r.get('output')}"

            renderPass = job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineDeferredPassBase)
            renderPass.disable_multisample_effects = True

            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_PNG)

    # If there are jobs in the queue, start the rendering process.
    if queue.get_jobs():
        global executor
        executor = unreal.MoviePipelinePIEExecutor()
        executor.on_executor_finished_delegate.add_callable_unique(
            _queue_finish_callback)
        executor.on_individual_job_finished_delegate.add_callable_unique(
            _job_finish_callback)  # Only available on PIE Executor
        executor.execute(queue)
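Stripped of the OpenPype instance handling, the queueing pattern above reduces to a few Movie Render Queue calls; a sketch with illustrative asset paths, runnable only in the Unreal editor's Python:

import unreal

queue = unreal.MoviePipelineQueue()
job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/sequences/MainSeq.MainSeq")
job.map = unreal.SoftObjectPath("/Game/maps/MainLevel.MainLevel")

# The PIE executor renders inside the running editor session; keep a
# reference to it alive until its callbacks fire, as the module above
# does with globals.
executor = unreal.MoviePipelinePIEExecutor()
executor.execute(queue)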
@ -7,6 +7,7 @@ from openpype import (
)
from openpype.tools.utils import host_tools
from openpype.tools.utils.lib import qt_app_context
from openpype.hosts.unreal.api import rendering


class ToolsBtnsWidget(QtWidgets.QWidget):

@ -20,6 +21,7 @@ class ToolsBtnsWidget(QtWidgets.QWidget):
        load_btn = QtWidgets.QPushButton("Load...", self)
        publish_btn = QtWidgets.QPushButton("Publish...", self)
        manage_btn = QtWidgets.QPushButton("Manage...", self)
        render_btn = QtWidgets.QPushButton("Render...", self)
        experimental_tools_btn = QtWidgets.QPushButton(
            "Experimental tools...", self
        )

@ -30,6 +32,7 @@ class ToolsBtnsWidget(QtWidgets.QWidget):
        layout.addWidget(load_btn, 0)
        layout.addWidget(publish_btn, 0)
        layout.addWidget(manage_btn, 0)
        layout.addWidget(render_btn, 0)
        layout.addWidget(experimental_tools_btn, 0)
        layout.addStretch(1)

@ -37,6 +40,7 @@ class ToolsBtnsWidget(QtWidgets.QWidget):
        load_btn.clicked.connect(self._on_load)
        publish_btn.clicked.connect(self._on_publish)
        manage_btn.clicked.connect(self._on_manage)
        render_btn.clicked.connect(self._on_render)
        experimental_tools_btn.clicked.connect(self._on_experimental)

    def _on_create(self):

@ -51,6 +55,9 @@ class ToolsBtnsWidget(QtWidgets.QWidget):
    def _on_manage(self):
        self.tool_required.emit("sceneinventory")

    def _on_render(self):
        rendering.start_rendering()

    def _on_experimental(self):
        self.tool_required.emit("experimental_tools")
@ -25,7 +25,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.signature = "( {} )".format(self.__class__.__name__)
        self.signature = f"( {self.__class__.__name__} )"

    def _get_work_filename(self):
        # Use last workfile if was found

@ -71,7 +71,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
            if int(engine_version.split(".")[0]) < 4 and \
                    int(engine_version.split(".")[1]) < 26:
                raise ApplicationLaunchFailed((
                    f"{self.signature} Old unsupported version of UE4 "
                    f"{self.signature} Old unsupported version of UE "
                    f"detected - {engine_version}"))
        except ValueError:
            # there can be string in minor version and in that case

@ -99,18 +99,19 @@ class UnrealPrelaunchHook(PreLaunchHook):
                f"character ({unreal_project_name}). Appending 'P'"
            ))
            unreal_project_name = f"P{unreal_project_name}"
            unreal_project_filename = f'{unreal_project_name}.uproject'

        project_path = Path(os.path.join(workdir, unreal_project_name))

        self.log.info((
            f"{self.signature} requested UE4 version: "
            f"{self.signature} requested UE version: "
            f"[ {engine_version} ]"
        ))

        detected = unreal_lib.get_engine_versions(self.launch_context.env)
        detected_str = ', '.join(detected.keys()) or 'none'
        self.log.info((
            f"{self.signature} detected UE4 versions: "
            f"{self.signature} detected UE versions: "
            f"[ {detected_str} ]"
        ))
        if not detected:

@ -123,10 +124,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
            f"detected [ {engine_version} ]"
        ))

        ue4_path = unreal_lib.get_editor_executable_path(
            Path(detected[engine_version]))
        ue_path = unreal_lib.get_editor_executable_path(
            Path(detected[engine_version]), engine_version)

        self.launch_context.launch_args = [ue4_path.as_posix()]
        self.launch_context.launch_args = [ue_path.as_posix()]
        project_path.mkdir(parents=True, exist_ok=True)

        project_file = project_path / unreal_project_filename

@ -138,6 +139,11 @@ class UnrealPrelaunchHook(PreLaunchHook):
        ))
        # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for
        # execution of `create_unreal_project`
        if self.launch_context.env.get("OPENPYPE_UNREAL_PLUGIN"):
            self.log.info((
                f"{self.signature} using OpenPype plugin from "
                f"{self.launch_context.env.get('OPENPYPE_UNREAL_PLUGIN')}"
            ))
        env_key = "OPENPYPE_UNREAL_PLUGIN"
        if self.launch_context.env.get(env_key):
            os.environ[env_key] = self.launch_context.env[env_key]
Some files were not shown because too many files have changed in this diff