Merge remote-tracking branch 'origin/develop' into feature/powershell-enhancements

Ondrej Samohel 2022-06-07 15:47:21 +02:00
commit 7e3faea37d
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
337 changed files with 12133 additions and 4747 deletions


@@ -309,7 +309,18 @@
       "contributions": [
         "code"
       ]
+    },
+    {
+      "login": "Tilix4",
+      "name": "Félix David",
+      "avatar_url": "https://avatars.githubusercontent.com/u/22875539?v=4",
+      "profile": "http://felixdavid.com/",
+      "contributions": [
+        "code",
+        "doc"
+      ]
     }
   ],
-  "contributorsPerLine": 7
-}
+  "contributorsPerLine": 7,
+  "skipCi": true
+}
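Since the hunk above both appends a contributor entry and adds a trailing `skipCi` key (which is why `"contributorsPerLine": 7` gains a comma), the quickest sanity check is to parse the resulting JSON. The snippet below uses a trimmed-down stand-in for the real `.all-contributorsrc`, keeping only the keys the hunk touches:

```python
import json

# Trimmed stand-in for the resulting .all-contributorsrc content;
# only the keys touched by the hunk are shown, not the full file.
config_text = """
{
  "contributors": [
    {
      "login": "Tilix4",
      "name": "F\\u00e9lix David",
      "avatar_url": "https://avatars.githubusercontent.com/u/22875539?v=4",
      "profile": "http://felixdavid.com/",
      "contributions": ["code", "doc"]
    }
  ],
  "contributorsPerLine": 7,
  "skipCi": true
}
"""

config = json.loads(config_text)
# the comma added after "contributorsPerLine": 7 is what keeps this valid JSON
print(config["skipCi"], config["contributors"][-1]["login"])
```

`skipCi: true` tells the all-contributors bot to add `[skip ci]` to its commits, so contributor-list updates do not trigger the CI workflows changed elsewhere in this merge.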


@@ -69,16 +69,14 @@ jobs:
        run: |
          git config user.email ${{ secrets.CI_EMAIL }}
          git config user.name ${{ secrets.CI_USER }}
-         cd repos/avalon-core
          git checkout main
          git pull
-         cd ../..
          git add .
          git commit -m "[Automated] Bump version"
          tag_name="CI/${{ steps.version.outputs.next_tag }}"
          echo $tag_name
          git tag -a $tag_name -m "nightly build"
      - name: Push to protected main branch
        uses: CasperWA/push-protected@v2.10.0
        with:


@@ -1,166 +1,162 @@
 # Changelog
-## [3.10.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.10.1-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.4...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.10.0...HEAD)
-### 📖 Documentation
-- Docs: add all-contributors config and initial list [\#3094](https://github.com/pypeclub/OpenPype/pull/3094)
-- Nuke docs with videos [\#3052](https://github.com/pypeclub/OpenPype/pull/3052)
 **🚀 Enhancements**
-- Standalone publisher: add support for bgeo and vdb [\#3080](https://github.com/pypeclub/OpenPype/pull/3080)
-- Update collect\_render.py [\#3055](https://github.com/pypeclub/OpenPype/pull/3055)
-- SiteSync: Added compute\_resource\_sync\_sites to sync\_server\_module [\#2983](https://github.com/pypeclub/OpenPype/pull/2983)
+- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
+- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
+- TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
+- Nuke: Change default icon path in settings [\#3247](https://github.com/pypeclub/OpenPype/pull/3247)
 **🐛 Bug fixes**
-- RoyalRender Control Submission - AVALON\_APP\_NAME default [\#3091](https://github.com/pypeclub/OpenPype/pull/3091)
-- Ftrack: Update Create Folders action [\#3089](https://github.com/pypeclub/OpenPype/pull/3089)
-- Project Manager: Avoid unnecessary updates of asset documents [\#3083](https://github.com/pypeclub/OpenPype/pull/3083)
-- Standalone publisher: Fix plugins install [\#3077](https://github.com/pypeclub/OpenPype/pull/3077)
-- General: Extract review sequence is not converted with same names [\#3076](https://github.com/pypeclub/OpenPype/pull/3076)
-- Webpublisher: Use variant value [\#3068](https://github.com/pypeclub/OpenPype/pull/3068)
-- Nuke: Add aov matching even for remainder and prerender [\#3060](https://github.com/pypeclub/OpenPype/pull/3060)
+- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
+- Nuke: bake reformat was failing on string type [\#3261](https://github.com/pypeclub/OpenPype/pull/3261)
+- Maya: hotfix Pxr multitexture in looks [\#3260](https://github.com/pypeclub/OpenPype/pull/3260)
+- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
+- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
+- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)
+- Unreal: Fixed Camera loading in UE5 [\#3238](https://github.com/pypeclub/OpenPype/pull/3238)
+- Flame: debugging [\#3224](https://github.com/pypeclub/OpenPype/pull/3224)
+- Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
-**🔀 Refactored code**
-- add silent audio to slate [\#3162](https://github.com/pypeclub/OpenPype/pull/3162)
-- General: Move host install [\#3009](https://github.com/pypeclub/OpenPype/pull/3009)
 **Merged pull requests:**
-- Nuke: added suspend\_publish knob [\#3078](https://github.com/pypeclub/OpenPype/pull/3078)
-- Bump async from 2.6.3 to 2.6.4 in /website [\#3065](https://github.com/pypeclub/OpenPype/pull/3065)
+- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
+- Nuke: add pointcache and animation to loader [\#3186](https://github.com/pypeclub/OpenPype/pull/3186)
+## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)
+**🆕 New features**
+- General: OpenPype modules publish plugins are registered in host [\#3180](https://github.com/pypeclub/OpenPype/pull/3180)
+- General: Creator plugins from addons can be registered [\#3179](https://github.com/pypeclub/OpenPype/pull/3179)
+- Ftrack: Single image reviewable [\#3157](https://github.com/pypeclub/OpenPype/pull/3157)
+**🚀 Enhancements**
+- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
+- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
+- Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
+- Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
+- Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
+- Looks: add basic support for Renderman [\#3190](https://github.com/pypeclub/OpenPype/pull/3190)
+- Maya: added clean\_import option to Import loader [\#3181](https://github.com/pypeclub/OpenPype/pull/3181)
+- Add the scripts menu definition to nuke [\#3168](https://github.com/pypeclub/OpenPype/pull/3168)
+- Maya: add maya 2023 to default applications [\#3167](https://github.com/pypeclub/OpenPype/pull/3167)
+- Compressed bgeo publishing in SAP and Houdini loader [\#3153](https://github.com/pypeclub/OpenPype/pull/3153)
+- General: Add 'dataclasses' to required python modules [\#3149](https://github.com/pypeclub/OpenPype/pull/3149)
+- Hooks: Tweak logging grammar [\#3147](https://github.com/pypeclub/OpenPype/pull/3147)
+- Nuke: settings for reformat node in CreateWriteRender node [\#3143](https://github.com/pypeclub/OpenPype/pull/3143)
+**🐛 Bug fixes**
+- nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254)
+- Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244)
+- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
+- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
+- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
+- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
+- Hiero: debugging frame range and other 3.10 [\#3222](https://github.com/pypeclub/OpenPype/pull/3222)
+- Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
+- Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
+- Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)
+- Photoshop: skip collector when automatic testing [\#3202](https://github.com/pypeclub/OpenPype/pull/3202)
+- Nuke: render/workfile version sync doesn't work on farm [\#3185](https://github.com/pypeclub/OpenPype/pull/3185)
+- Ftrack: Review image only if there are no mp4 reviews [\#3183](https://github.com/pypeclub/OpenPype/pull/3183)
+- Ftrack: Locations deepcopy issue [\#3177](https://github.com/pypeclub/OpenPype/pull/3177)
+- General: Avoid creating multiple thumbnails [\#3176](https://github.com/pypeclub/OpenPype/pull/3176)
+- General/Hiero: better clip duration calculation [\#3169](https://github.com/pypeclub/OpenPype/pull/3169)
+- General: Oiio conversion for ffmpeg checks for invalid characters [\#3166](https://github.com/pypeclub/OpenPype/pull/3166)
+- Fix for attaching render to subset [\#3164](https://github.com/pypeclub/OpenPype/pull/3164)
+- Harmony: fixed missing task name in render instance [\#3163](https://github.com/pypeclub/OpenPype/pull/3163)
+- Ftrack: Action delete old versions formatting works [\#3152](https://github.com/pypeclub/OpenPype/pull/3152)
+- Deadline: fix the output directory [\#3144](https://github.com/pypeclub/OpenPype/pull/3144)
+- General: New Session schema [\#3141](https://github.com/pypeclub/OpenPype/pull/3141)
+**🔀 Refactored code**
+- Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)
+**Merged pull requests:**
+- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
+- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
+- Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
+- Maya: added jpg to filter for Image Plane Loader [\#3221](https://github.com/pypeclub/OpenPype/pull/3221)
+- Webpublisher: replace space by underscore in subset names [\#3160](https://github.com/pypeclub/OpenPype/pull/3160)
+## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)
+**🚀 Enhancements**
+- nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
+- Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
+- Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)
+**🐛 Bug fixes**
+- Standalone Publisher: Always create new representation for thumbnail [\#3204](https://github.com/pypeclub/OpenPype/pull/3204)
+- Nuke: render/workfile version sync doesn't work on farm [\#3184](https://github.com/pypeclub/OpenPype/pull/3184)
+- Ftrack: Review image only if there are no mp4 reviews [\#3182](https://github.com/pypeclub/OpenPype/pull/3182)
+- Ftrack: Locations deepcopy issue [\#3175](https://github.com/pypeclub/OpenPype/pull/3175)
+- General: Avoid creating multiple thumbnails [\#3174](https://github.com/pypeclub/OpenPype/pull/3174)
+- General: TemplateResult can be copied [\#3170](https://github.com/pypeclub/OpenPype/pull/3170)
+**Merged pull requests:**
+- hiero: otio p3 compatibility issue - metadata on effect use update [\#3194](https://github.com/pypeclub/OpenPype/pull/3194)
+## [3.9.7](https://github.com/pypeclub/OpenPype/tree/3.9.7) (2022-05-11)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)
+**🆕 New features**
+- Ftrack: Single image reviewable [\#3158](https://github.com/pypeclub/OpenPype/pull/3158)
+**🚀 Enhancements**
+- Deadline output dir issue to 3.9x [\#3155](https://github.com/pypeclub/OpenPype/pull/3155)
+- nuke: removing redundant code from startup [\#3142](https://github.com/pypeclub/OpenPype/pull/3142)
+**🐛 Bug fixes**
+- Ftrack: Action delete old versions formatting works [\#3154](https://github.com/pypeclub/OpenPype/pull/3154)
+- nuke: adding extract thumbnail settings [\#3148](https://github.com/pypeclub/OpenPype/pull/3148)
+**Merged pull requests:**
+- Webpublisher: replace space by underscore in subset names [\#3159](https://github.com/pypeclub/OpenPype/pull/3159)
+## [3.9.6](https://github.com/pypeclub/OpenPype/tree/3.9.6) (2022-05-03)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.5...3.9.6)
+## [3.9.5](https://github.com/pypeclub/OpenPype/tree/3.9.5) (2022-04-25)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.2...3.9.5)
 ## [3.9.4](https://github.com/pypeclub/OpenPype/tree/3.9.4) (2022-04-15)
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.4-nightly.2...3.9.4)
 ### 📖 Documentation
 - Documentation: more info about Tasks [\#3062](https://github.com/pypeclub/OpenPype/pull/3062)
 - Documentation: Python requirements to 3.7.9 [\#3035](https://github.com/pypeclub/OpenPype/pull/3035)
 - Website Docs: Remove unused pages [\#2974](https://github.com/pypeclub/OpenPype/pull/2974)
 **🆕 New features**
 - General: Local overrides for environment variables [\#3045](https://github.com/pypeclub/OpenPype/pull/3045)
 **🚀 Enhancements**
 - TVPaint: Added init file for worker to triggers missing sound file dialog [\#3053](https://github.com/pypeclub/OpenPype/pull/3053)
 - Ftrack: Custom attributes can be filled in slate values [\#3036](https://github.com/pypeclub/OpenPype/pull/3036)
 - Resolve environment variable in google drive credential path [\#3008](https://github.com/pypeclub/OpenPype/pull/3008)
 **🐛 Bug fixes**
 - GitHub: Updated push-protected action in github workflow [\#3064](https://github.com/pypeclub/OpenPype/pull/3064)
 - Nuke: Typos in imports from Nuke implementation [\#3061](https://github.com/pypeclub/OpenPype/pull/3061)
 - Hotfix: fixing deadline job publishing [\#3059](https://github.com/pypeclub/OpenPype/pull/3059)
 - General: Extract Review handle invalid characters for ffmpeg [\#3050](https://github.com/pypeclub/OpenPype/pull/3050)
 - Slate Review: Support to keep format on slate concatenation [\#3049](https://github.com/pypeclub/OpenPype/pull/3049)
 - Webpublisher: fix processing of workfile [\#3048](https://github.com/pypeclub/OpenPype/pull/3048)
 - Ftrack: Integrate ftrack api fix [\#3044](https://github.com/pypeclub/OpenPype/pull/3044)
 - Webpublisher - removed wrong hardcoded family [\#3043](https://github.com/pypeclub/OpenPype/pull/3043)
 - LibraryLoader: Use current project for asset query in families filter [\#3042](https://github.com/pypeclub/OpenPype/pull/3042)
 - SiteSync: Providers ignore that site is disabled [\#3041](https://github.com/pypeclub/OpenPype/pull/3041)
 - Unreal: Creator import fixes [\#3040](https://github.com/pypeclub/OpenPype/pull/3040)
 - Settings UI: Version column can be extended so version are visible [\#3032](https://github.com/pypeclub/OpenPype/pull/3032)
 - SiteSync: fix transitive alternate sites, fix dropdown in Local Settings [\#3018](https://github.com/pypeclub/OpenPype/pull/3018)
 **Merged pull requests:**
 - Deadline: reworked pools assignment [\#3051](https://github.com/pypeclub/OpenPype/pull/3051)
 - Houdini: Avoid ImportError on `hdefereval` when Houdini runs without UI [\#2987](https://github.com/pypeclub/OpenPype/pull/2987)
 ## [3.9.3](https://github.com/pypeclub/OpenPype/tree/3.9.3) (2022-04-07)
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.3-nightly.2...3.9.3)
 ### 📖 Documentation
 - Website Docs: Manager Ftrack fix broken links [\#2979](https://github.com/pypeclub/OpenPype/pull/2979)
 **🆕 New features**
 - Ftrack: Add description integrator [\#3027](https://github.com/pypeclub/OpenPype/pull/3027)
 - Publishing textures for Unreal [\#2988](https://github.com/pypeclub/OpenPype/pull/2988)
 **🚀 Enhancements**
 - Ftrack: Add more options for note text of integrate ftrack note [\#3025](https://github.com/pypeclub/OpenPype/pull/3025)
 - Console Interpreter: Changed how console splitter size are reused on show [\#3016](https://github.com/pypeclub/OpenPype/pull/3016)
 - Deadline: Use more suitable name for sequence review logic [\#3015](https://github.com/pypeclub/OpenPype/pull/3015)
 - General: default workfile subset name for workfile [\#3011](https://github.com/pypeclub/OpenPype/pull/3011)
 - Deadline: priority configurable in Maya jobs [\#2995](https://github.com/pypeclub/OpenPype/pull/2995)
 **🐛 Bug fixes**
 - Deadline: Fixed default value of use sequence for review [\#3033](https://github.com/pypeclub/OpenPype/pull/3033)
 - General: Fix validate asset docs plug-in filename and class name [\#3029](https://github.com/pypeclub/OpenPype/pull/3029)
 - General: Fix import after movements [\#3028](https://github.com/pypeclub/OpenPype/pull/3028)
 - Harmony: Added creating subset name for workfile from template [\#3024](https://github.com/pypeclub/OpenPype/pull/3024)
 - AfterEffects: Added creating subset name for workfile from template [\#3023](https://github.com/pypeclub/OpenPype/pull/3023)
 - General: Add example addons to ignored [\#3022](https://github.com/pypeclub/OpenPype/pull/3022)
 - Maya: Remove missing import [\#3017](https://github.com/pypeclub/OpenPype/pull/3017)
 - Ftrack: multiple reviewable componets [\#3012](https://github.com/pypeclub/OpenPype/pull/3012)
 - Tray publisher: Fixes after code movement [\#3010](https://github.com/pypeclub/OpenPype/pull/3010)
 - Nuke: fixing unicode type detection in effect loaders [\#3002](https://github.com/pypeclub/OpenPype/pull/3002)
 - Nuke: removing redundant Ftrack asset when farm publishing [\#2996](https://github.com/pypeclub/OpenPype/pull/2996)
 **Merged pull requests:**
 - Maya: Allow to select invalid camera contents if no cameras found [\#3030](https://github.com/pypeclub/OpenPype/pull/3030)
 - General: adding limitations for pyright [\#2994](https://github.com/pypeclub/OpenPype/pull/2994)
 ## [3.9.2](https://github.com/pypeclub/OpenPype/tree/3.9.2) (2022-04-04)
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.2-nightly.4...3.9.2)
 ### 📖 Documentation
 - Documentation: Added mention of adding My Drive as a root [\#2999](https://github.com/pypeclub/OpenPype/pull/2999)
 - Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)
 **🆕 New features**
 - nuke: bypass baking [\#2992](https://github.com/pypeclub/OpenPype/pull/2992)
 - Maya to Unreal: Static and Skeletal Meshes [\#2978](https://github.com/pypeclub/OpenPype/pull/2978)
 **🚀 Enhancements**
 - Nuke: add concurrency attr to deadline job [\#3005](https://github.com/pypeclub/OpenPype/pull/3005)
 - Photoshop: create image without instance [\#3001](https://github.com/pypeclub/OpenPype/pull/3001)
 - TVPaint: Render scene family [\#3000](https://github.com/pypeclub/OpenPype/pull/3000)
 - Nuke: ReviewDataMov Read RAW attribute [\#2985](https://github.com/pypeclub/OpenPype/pull/2985)
 - General: `METADATA\_KEYS` constant as `frozenset` for optimal immutable lookup [\#2980](https://github.com/pypeclub/OpenPype/pull/2980)
 - General: Tools with host filters [\#2975](https://github.com/pypeclub/OpenPype/pull/2975)
 - Hero versions: Use custom templates [\#2967](https://github.com/pypeclub/OpenPype/pull/2967)
 **🐛 Bug fixes**
 - Hosts: Remove path existence checks in 'add\_implementation\_envs' [\#3004](https://github.com/pypeclub/OpenPype/pull/3004)
 - Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
 - PS: fix renaming subset incorrectly in PS [\#2991](https://github.com/pypeclub/OpenPype/pull/2991)
 - Fix: Disable setuptools auto discovery [\#2990](https://github.com/pypeclub/OpenPype/pull/2990)
 - AEL: fix opening existing workfile if no scene opened [\#2989](https://github.com/pypeclub/OpenPype/pull/2989)
 - Maya: Don't do hardlinks on windows for look publishing [\#2986](https://github.com/pypeclub/OpenPype/pull/2986)
 - Settings UI: Fix version completer on linux [\#2981](https://github.com/pypeclub/OpenPype/pull/2981)
 - Photoshop: Fix creation of subset names in PS review and workfile [\#2969](https://github.com/pypeclub/OpenPype/pull/2969)
 - Slack: Added default for review\_upload\_limit for Slack [\#2965](https://github.com/pypeclub/OpenPype/pull/2965)
 - General: OIIO conversion for ffmeg can handle sequences [\#2958](https://github.com/pypeclub/OpenPype/pull/2958)
 - Settings: Conditional dictionary avoid invalid logs [\#2956](https://github.com/pypeclub/OpenPype/pull/2956)
 - General: Smaller fixes and typos [\#2950](https://github.com/pypeclub/OpenPype/pull/2950)
 **Merged pull requests:**
 - Bump paramiko from 2.9.2 to 2.10.1 [\#2973](https://github.com/pypeclub/OpenPype/pull/2973)
 - Bump minimist from 1.2.5 to 1.2.6 in /website [\#2954](https://github.com/pypeclub/OpenPype/pull/2954)
 - Bump node-forge from 1.2.1 to 1.3.0 in /website [\#2953](https://github.com/pypeclub/OpenPype/pull/2953)
 - Maya - added transparency into review creator [\#2952](https://github.com/pypeclub/OpenPype/pull/2952)
 ## [3.9.1](https://github.com/pypeclub/OpenPype/tree/3.9.1) (2022-03-18)
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.1-nightly.3...3.9.1)


@@ -1,6 +1,6 @@
 <!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
-[![All Contributors](https://img.shields.io/badge/all_contributors-26-orange.svg?style=flat-square)](#contributors-)
+[![All Contributors](https://img.shields.io/badge/all_contributors-27-orange.svg?style=flat-square)](#contributors-)
 <!-- ALL-CONTRIBUTORS-BADGE:END -->
 OpenPype
 ====
@@ -328,6 +328,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
     <td align="center"><a href="https://github.com/Malthaldar"><img src="https://avatars.githubusercontent.com/u/33671694?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Malthaldar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Malthaldar" title="Code">💻</a></td>
     <td align="center"><a href="http://www.svenneve.com/"><img src="https://avatars.githubusercontent.com/u/2472863?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Sven Neve</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=svenneve" title="Code">💻</a></td>
     <td align="center"><a href="https://github.com/zafrs"><img src="https://avatars.githubusercontent.com/u/26890002?v=4?s=100" width="100px;" alt=""/><br /><sub><b>zafrs</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=zafrs" title="Code">💻</a></td>
+    <td align="center"><a href="http://felixdavid.com/"><img src="https://avatars.githubusercontent.com/u/22875539?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Félix David</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Documentation">📖</a></td>
   </tr>
 </table>


@@ -3,7 +3,6 @@ from .settings import (
     get_project_settings,
     get_current_project_settings,
     get_anatomy_settings,
-    get_environments,
     SystemSettings,
     ProjectSettings
@@ -23,7 +22,6 @@ from .lib import (
     get_app_environments_for_context,
     source_hash,
     get_latest_version,
-    get_global_environments,
     get_local_site_id,
     change_openpype_mongo_url,
     create_project_folders,
@@ -46,6 +44,7 @@ from . import resources
 from .plugin import (
     Extractor,
+    Integrator,
     ValidatePipelineOrder,
     ValidateContentsOrder,
@@ -69,10 +68,10 @@ __all__ = [
     "get_project_settings",
     "get_current_project_settings",
     "get_anatomy_settings",
-    "get_environments",
     "get_project_basic_paths",
     "SystemSettings",
+    "ProjectSettings",
     "PypeLogger",
     "Logger",
@@ -88,6 +87,7 @@ __all__ = [
     # plugin classes
     "Extractor",
+    "Integrator",
     # ordering
     "ValidatePipelineOrder",
     "ValidateContentsOrder",
@@ -102,8 +102,9 @@ __all__ = [
     # get contextual data
     "version_up",
-    "get_hierarchy",
     "get_asset",
+    "get_hierarchy",
+    "get_workdir_data",
     "get_version_from_path",
     "get_last_version_from_path",
     "get_app_environments_for_context",
@@ -111,7 +112,6 @@ __all__ = [
     "run_subprocess",
     "get_latest_version",
-    "get_global_environments",
     "get_local_site_id",
     "change_openpype_mongo_url",


@@ -266,7 +266,7 @@ class AssetLoader(LoaderPlugin):
         # Only containerise if it's not already a collection from a .blend file.
         # representation = context["representation"]["name"]
         # if representation != "blend":
-        #     from avalon.blender.pipeline import containerise
+        #     from openpype.hosts.blender.api.pipeline import containerise
         #     return containerise(
         #         name=name,
         #         namespace=namespace,


@@ -3,6 +3,7 @@ import os
 import re
 import json
 import pickle
+import clique
 import tempfile
 import itertools
 import contextlib
@@ -560,7 +561,7 @@ def get_segment_attributes(segment):
         if not hasattr(segment, attr_name):
             continue
         attr = getattr(segment, attr_name)
-        segment_attrs_data[attr] = str(attr).replace("+", ":")
+        segment_attrs_data[attr_name] = str(attr).replace("+", ":")
         if attr_name in ["record_in", "record_out"]:
             clip_data[attr_name] = attr.relative_frame
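The one-line fix above changes the dictionary key from the attribute *value* to the attribute *name*, so timecodes land under predictable keys instead of under themselves. A minimal reproduction of the fixed behavior (the `Segment` stand-in below is illustrative, not the real Flame API object):

```python
class Segment(object):
    """Illustrative stand-in for a Flame segment; not the real API."""
    record_in = "00:00:10+01"
    record_out = "00:00:20+01"


def get_segment_attributes(segment, attr_names):
    segment_attrs_data = {}
    for attr_name in attr_names:
        if not hasattr(segment, attr_name):
            continue
        attr = getattr(segment, attr_name)
        # fixed line: key by the attribute *name*, not the value object,
        # and normalize Flame's "+" field separator to ":"
        segment_attrs_data[attr_name] = str(attr).replace("+", ":")
    return segment_attrs_data


print(get_segment_attributes(Segment(), ["record_in", "record_out", "missing"]))
# → {'record_in': '00:00:10:01', 'record_out': '00:00:20:01'}
```

With the old `segment_attrs_data[attr]` the data would have been keyed by the stringified attribute object, making `record_in`/`record_out` impossible to look up downstream.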
@@ -762,6 +763,7 @@ class MediaInfoFile(object):
     _start_frame = None
     _fps = None
     _drop_mode = None
+    _file_pattern = None
     def __init__(self, path, **kwargs):
@@ -773,17 +775,28 @@
         self._validate_media_script_path()
         # derive other feed variables
-        self.feed_basename = os.path.basename(path)
-        self.feed_dir = os.path.dirname(path)
-        self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
+        feed_basename = os.path.basename(path)
+        feed_dir = os.path.dirname(path)
+        feed_ext = os.path.splitext(feed_basename)[1][1:].lower()
         with maintained_temp_file_path(".clip") as tmp_path:
             self.log.info("Temp File: {}".format(tmp_path))
-            self._generate_media_info_file(tmp_path)
+            self._generate_media_info_file(tmp_path, feed_ext, feed_dir)
+            # get collection containing feed_basename from path
+            self.file_pattern = self._get_collection(
+                feed_basename, feed_dir, feed_ext)
+            if (
+                not self.file_pattern
+                and os.path.exists(os.path.join(feed_dir, feed_basename))
+            ):
+                self.file_pattern = feed_basename
             # get clip data and make it single if there are
             # multiple clips
-            xml_data = self._make_single_clip_media_info(tmp_path)
+            xml_data = self._make_single_clip_media_info(
+                tmp_path, feed_basename, self.file_pattern)
             self.log.debug("xml_data: {}".format(xml_data))
             self.log.debug("type: {}".format(type(xml_data)))
@@ -794,6 +807,123 @@
             self.log.debug("drop frame: {}".format(self.drop_mode))
             self.clip_data = xml_data
+    def _get_collection(self, feed_basename, feed_dir, feed_ext):
+        """Get collection string.
+        Args:
+            feed_basename (str): file base name
+            feed_dir (str): file's directory
+            feed_ext (str): file extension
+        Raises:
+            AttributeError: feed_ext is not matching feed_basename
+        Returns:
+            str: collection basename with range of sequence
+        """
+        partialname = self._separate_file_head(feed_basename, feed_ext)
+        self.log.debug("__ partialname: {}".format(partialname))
+        # make sure partial input basename has the correct extension
+        if not partialname:
+            raise AttributeError(
+                "Wrong input attributes. Basename - {}, Ext - {}".format(
+                    feed_basename, feed_ext
+                )
+            )
+        # get all related files
+        files = [
+            f for f in os.listdir(feed_dir)
+            if partialname == self._separate_file_head(f, feed_ext)
+        ]
+        # ignore remainders as we don't need them
+        collections = clique.assemble(files)[0]
+        # in case no collection is found return None,
+        # it is probably just a single file
+        if not collections:
+            return
+        # we expect only one collection
+        collection = collections[0]
+        self.log.debug("__ collection: {}".format(collection))
+        if collection.is_contiguous():
+            return self._format_collection(collection)
+        # add `[` in front to make sure it won't capture
+        # a shot name with the same number
+        number_from_path = self._separate_number(feed_basename, feed_ext)
+        search_number_pattern = "[" + number_from_path
+        # convert to multiple contiguous collections
+        _contiguous_colls = collection.separate()
+        for _coll in _contiguous_colls:
+            coll_to_text = self._format_collection(
+                _coll, len(number_from_path))
+            self.log.debug("__ coll_to_text: {}".format(coll_to_text))
+            if search_number_pattern in coll_to_text:
+                return coll_to_text
+    @staticmethod
+    def _format_collection(collection, padding=None):
+        padding = padding or collection.padding
+        head = collection.format("{head}")
+        tail = collection.format("{tail}")
+        range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(
+            padding)
+        ranges = range_template.format(
+            min(collection.indexes),
+            max(collection.indexes)
+        )
+        # collapse the whole index range into `head[start-end]tail`
+        return "{}{}{}".format(head, ranges, tail)
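`_get_collection` leans on the third-party `clique` library to spot frame sequences on disk. For readers without `clique` or Flame at hand, here is a stdlib-only sketch of the same idea; the `head[start-end]tail` output mirrors `_format_collection`, but the helper name and the simplified regex are ours, not OpenPype's, and gaps in the sequence are ignored for brevity:

```python
import re


def find_sequence_pattern(basename, filenames):
    """Collapse numbered siblings of ``basename`` into ``head[start-end]tail``.

    Stdlib-only sketch of what ``clique.assemble`` plus ``_format_collection``
    do together in the diff above.
    """
    sequence_re = r"(?P<head>.+[._])(?P<num>\d+)(?P<tail>\.[^.]+)$"
    match = re.match(sequence_re, basename)
    if not match:
        return None  # not a sequence-style name; caller keeps the basename
    head, tail = match.group("head"), match.group("tail")
    padding = len(match.group("num"))
    indexes = []
    for name in filenames:
        m = re.match(sequence_re, name)
        if m and m.group("head") == head and m.group("tail") == tail:
            indexes.append(int(m.group("num")))
    if len(indexes) < 2:
        return None  # single file, no collection to build
    template = "[{0:0{p}d}-{1:0{p}d}]"
    return head + template.format(min(indexes), max(indexes), p=padding) + tail


print(find_sequence_pattern(
    "shot.1001.exr",
    ["shot.1001.exr", "shot.1002.exr", "shot.1003.exr", "notes.txt"]))
# → shot.[1001-1003].exr
```

The real code additionally uses `collection.separate()` to split a gappy sequence into contiguous runs and picks the run containing the requested frame number.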
def _separate_file_head(self, basename, extension):
""" Get only head with out sequence and extension
Args:
basename (str): file base name
extension (str): file extension
Returns:
str: file head
"""
# in case sequence file
found = re.findall(
r"(.*)[._][\d]*(?=.{})".format(extension),
basename,
)
if found:
return found.pop()
# in case single file
name, ext = os.path.splitext(basename)
if extension == ext[1:]:
return name
def _separate_number(self, basename, extension):
""" Get only sequence number as string
Args:
basename (str): file base name
extension (str): file extension
Returns:
str: number with padding
"""
# in case sequence file
found = re.findall(
r"[._]([\d]*)(?=.{})".format(extension),
basename,
)
if found:
return found.pop()
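The two regex helpers above can be sketched standalone; the filename `shot010.0001.exr` is invented for illustration only, and this mirrors just the sequence-file branch (the single-file fallback via `os.path.splitext` is omitted):

```python
import re


def separate_file_head(basename, extension):
    # strip the trailing ".<frame>" before the extension
    # (mirrors _separate_file_head's sequence branch)
    found = re.findall(r"(.*)[._][\d]*(?=.{})".format(extension), basename)
    if found:
        return found.pop()


def separate_number(basename, extension):
    # capture only the zero-padded frame number (mirrors _separate_number)
    found = re.findall(r"[._]([\d]*)(?=.{})".format(extension), basename)
    if found:
        return found.pop()


print(separate_file_head("shot010.0001.exr", "exr"))  # shot010
print(separate_number("shot010.0001.exr", "exr"))     # 0001
```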
@property @property
def clip_data(self): def clip_data(self):
"""Clip's xml clip data """Clip's xml clip data
@ -846,18 +976,41 @@ class MediaInfoFile(object):
def drop_mode(self, text): def drop_mode(self, text):
self._drop_mode = str(text) self._drop_mode = str(text)
@property
def file_pattern(self):
"""Clip's file pattern
Returns:
str: file pattern. ex. file.[1-2].exr
"""
return self._file_pattern
@file_pattern.setter
def file_pattern(self, fpattern):
self._file_pattern = fpattern
def _validate_media_script_path(self): def _validate_media_script_path(self):
if not os.path.isfile(self.MEDIA_SCRIPT_PATH): if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
raise IOError("Media Script does not exist: `{}`".format( raise IOError("Media Script does not exist: `{}`".format(
self.MEDIA_SCRIPT_PATH)) self.MEDIA_SCRIPT_PATH))
def _generate_media_info_file(self, fpath): def _generate_media_info_file(self, fpath, feed_ext, feed_dir):
""" Generate media info xml .clip file
Args:
fpath (str): .clip file path
feed_ext (str): file extension to be filtered
feed_dir (str): look up directory
Raises:
TypeError: Type error if it fails
"""
# Create cmd arguments for getting xml file info file # Create cmd arguments for getting xml file info file
cmd_args = [ cmd_args = [
self.MEDIA_SCRIPT_PATH, self.MEDIA_SCRIPT_PATH,
"-e", self.feed_ext, "-e", feed_ext,
"-o", fpath, "-o", fpath,
self.feed_dir feed_dir
] ]
try: try:
@ -867,7 +1020,20 @@ class MediaInfoFile(object):
raise TypeError( raise TypeError(
"Error creating `{}` due: {}".format(fpath, error)) "Error creating `{}` due: {}".format(fpath, error))
def _make_single_clip_media_info(self, fpath): def _make_single_clip_media_info(self, fpath, feed_basename, path_pattern):
""" Separate only the relevant clip object from the .clip file
Args:
fpath (str): clip file path
feed_basename (str): search basename
path_pattern (str): search file pattern (file.[1-2].exr)
Raises:
ET.ParseError: if nothing found
Returns:
ET.Element: xml element data of matching clip
"""
with open(fpath) as f: with open(fpath) as f:
lines = f.readlines() lines = f.readlines()
_added_root = itertools.chain( _added_root = itertools.chain(
@ -878,14 +1044,30 @@ class MediaInfoFile(object):
xml_clips = new_root.findall("clip") xml_clips = new_root.findall("clip")
matching_clip = None matching_clip = None
for xml_clip in xml_clips: for xml_clip in xml_clips:
if xml_clip.find("name").text in self.feed_basename: clip_name = xml_clip.find("name").text
matching_clip = xml_clip self.log.debug("__ clip_name: `{}`".format(clip_name))
if clip_name not in feed_basename:
continue
# test path pattern
for out_track in xml_clip.iter("track"):
for out_feed in out_track.iter("feed"):
for span in out_feed.iter("span"):
# start frame
span_path = span.find("path")
self.log.debug(
"__ span_path.text: {}, path_pattern: {}".format(
span_path.text, path_pattern
)
)
if path_pattern in span_path.text:
matching_clip = xml_clip
if matching_clip is None: if matching_clip is None:
# return warning there is missing clip # return warning there is missing clip
raise ET.ParseError( raise ET.ParseError(
"Missing clip in `{}`. Available clips {}".format( "Missing clip in `{}`. Available clips {}".format(
self.feed_basename, [ feed_basename, [
xml_clip.find("name").text xml_clip.find("name").text
for xml_clip in xml_clips for xml_clip in xml_clips
] ]
@ -894,6 +1076,11 @@ class MediaInfoFile(object):
return matching_clip return matching_clip
def _get_time_info_from_origin(self, xml_data): def _get_time_info_from_origin(self, xml_data):
"""Set time info to class attributes
Args:
xml_data (ET.Element): clip data
"""
try: try:
for out_track in xml_data.iter('track'): for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'): for out_feed in out_track.iter('feed'):
@ -912,8 +1099,6 @@ class MediaInfoFile(object):
'startTimecode/dropMode') 'startTimecode/dropMode')
self.drop_mode = out_feed_drop_mode_obj.text self.drop_mode = out_feed_drop_mode_obj.text
break break
else:
continue
except Exception as msg: except Exception as msg:
self.log.warning(msg) self.log.warning(msg)
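The parsing trick used by `_make_single_clip_media_info` above — chaining a synthetic root around the raw `.clip` lines so that multiple top-level `<clip>` elements parse as one document — can be sketched with the stdlib only; the XML snippet here is made up for illustration:

```python
import itertools
from xml.etree import ElementTree as ET

# two sibling <clip> elements are not a valid XML document on their own
lines = [
    '<clip><name>shot010_plate</name></clip>',
    '<clip><name>shot020_plate</name></clip>',
]

# chain a synthetic root around the lines, then parse the whole iterable
wrapped = itertools.chain(["<root>"], lines, ["</root>"])
root = ET.fromstringlist(wrapped)

names = [clip.find("name").text for clip in root.findall("clip")]
print(names)  # ['shot010_plate', 'shot020_plate']
```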

View file

@ -360,6 +360,7 @@ class PublishableClip:
driving_layer_default = "" driving_layer_default = ""
index_from_segment_default = False index_from_segment_default = False
use_shot_name_default = False use_shot_name_default = False
include_handles_default = False
def __init__(self, segment, **kwargs): def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"] self.rename_index = kwargs["rename_index"]
@ -493,6 +494,8 @@ class PublishableClip:
"reviewTrack", {}).get("value") or self.review_track_default "reviewTrack", {}).get("value") or self.review_track_default
self.audio = self.ui_inputs.get( self.audio = self.ui_inputs.get(
"audio", {}).get("value") or False "audio", {}).get("value") or False
self.include_handles = self.ui_inputs.get(
"includeHandles", {}).get("value") or self.include_handles_default
# build subset name from layer name # build subset name from layer name
if self.subset_name == "[ track name ]": if self.subset_name == "[ track name ]":
@ -873,6 +876,5 @@ class OpenClipSolver(flib.MediaInfoFile):
if feed_clr_obj is not None: if feed_clr_obj is not None:
feed_clr_obj = ET.Element( feed_clr_obj = ET.Element(
"colourSpace", {"type": "string"}) "colourSpace", {"type": "string"})
feed_clr_obj.text = profile_name
feed_storage_obj.append(feed_clr_obj) feed_storage_obj.append(feed_clr_obj)
feed_clr_obj.text = profile_name

View file

@ -1,5 +1,8 @@
import os import os
from xml.etree import ElementTree as ET from xml.etree import ElementTree as ET
from openpype.api import Logger
log = Logger.get_logger(__name__)
def export_clip(export_path, clip, preset_path, **kwargs): def export_clip(export_path, clip, preset_path, **kwargs):
@ -143,10 +146,40 @@ def modify_preset_file(xml_path, staging_dir, data):
# change xml values according to data keys # change xml values according to data keys
with open(xml_path, "r") as datafile: with open(xml_path, "r") as datafile:
tree = ET.parse(datafile) _root = ET.parse(datafile)
for key, value in data.items(): for key, value in data.items():
for element in tree.findall(".//{}".format(key)): try:
element.text = str(value) if "/" in key:
tree.write(temp_path) if not key.startswith("./"):
key = ".//" + key
split_key_path = key.split("/")
element_key = split_key_path[-1]
parent_obj_path = "/".join(split_key_path[:-1])
parent_obj = _root.find(parent_obj_path)
element_obj = parent_obj.find(element_key)
if not element_obj:
append_element(parent_obj, element_key, value)
else:
finds = _root.findall(".//{}".format(key))
if not finds:
raise AttributeError
for element in finds:
element.text = str(value)
except AttributeError:
log.warning(
"Cannot create attribute: {}: {}. Skipping".format(
key, value
))
_root.write(temp_path)
return temp_path return temp_path
def append_element(root_element_obj, key, value):
new_element_obj = ET.Element(key)
log.debug("__ new_element_obj: {}".format(new_element_obj))
new_element_obj.text = str(value)
root_element_obj.insert(0, new_element_obj)
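A minimal sketch of the preset-modification behavior above: slash-separated keys address nested elements, and a missing leaf element is created under its parent. Names and XML content are illustrative, not the module's API; note the explicit `is None` check, since ElementTree elements with no children are falsy:

```python
from xml.etree import ElementTree as ET


def set_preset_value(root, key, value):
    # "a/b" keys address nested elements; missing leaves are created
    if "/" in key:
        if not key.startswith("./"):
            key = ".//" + key
        parent_path, _, leaf = key.rpartition("/")
        parent = root.find(parent_path)
        element = parent.find(leaf)
        if element is None:
            element = ET.SubElement(parent, leaf)
        element.text = str(value)
    else:
        for element in root.findall(".//{}".format(key)):
            element.text = str(value)


root = ET.fromstring(
    "<preset><video><nbHandles>0</nbHandles></video></preset>")
set_preset_value(root, "video/nbHandles", 10)
set_preset_value(root, "video/posterFrame", True)
print(root.find("video/nbHandles").text)    # 10
print(root.find("video/posterFrame").text)  # True
```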

View file

@ -94,83 +94,30 @@ def create_otio_time_range(start_frame, frame_duration, fps):
def _get_metadata(item): def _get_metadata(item):
if hasattr(item, 'metadata'): if hasattr(item, 'metadata'):
if not item.metadata: return dict(item.metadata) if item.metadata else {}
return {}
return {key: value for key, value in dict(item.metadata)}
return {} return {}
def create_time_effects(otio_clip, item): def create_time_effects(otio_clip, speed):
# todo #2426: add retiming effects to export otio_effect = None
# get all subtrack items
# subTrackItems = flatten(track_item.parent().subTrackItems())
# speed = track_item.playbackSpeed()
# otio_effect = None # retime on track item
# # retime on track item if speed != 1.:
# if speed != 1.: # make effect
# # make effect otio_effect = otio.schema.LinearTimeWarp()
# otio_effect = otio.schema.LinearTimeWarp() otio_effect.name = "Speed"
# otio_effect.name = "Speed" otio_effect.time_scalar = speed
# otio_effect.time_scalar = speed otio_effect.metadata = {}
# otio_effect.metadata = {}
# # freeze frame effect # freeze frame effect
# if speed == 0.: if speed == 0.:
# otio_effect = otio.schema.FreezeFrame() otio_effect = otio.schema.FreezeFrame()
# otio_effect.name = "FreezeFrame" otio_effect.name = "FreezeFrame"
# otio_effect.metadata = {} otio_effect.metadata = {}
# if otio_effect: if otio_effect:
# # add otio effect to clip effects # add otio effect to clip effects
# otio_clip.effects.append(otio_effect) otio_clip.effects.append(otio_effect)
# # loop through and get all Timewarps
# for effect in subTrackItems:
# if ((track_item not in effect.linkedItems())
# and (len(effect.linkedItems()) > 0)):
# continue
# # avoid all effect which are not TimeWarp and disabled
# if "TimeWarp" not in effect.name():
# continue
# if not effect.isEnabled():
# continue
# node = effect.node()
# name = node["name"].value()
# # solve effect class as effect name
# _name = effect.name()
# if "_" in _name:
# effect_name = re.sub(r"(?:_)[_0-9]+", "", _name) # more numbers
# else:
# effect_name = re.sub(r"\d+", "", _name) # one number
# metadata = {}
# # add knob to metadata
# for knob in ["lookup", "length"]:
# value = node[knob].value()
# animated = node[knob].isAnimated()
# if animated:
# value = [
# ((node[knob].getValueAt(i)) - i)
# for i in range(
# track_item.timelineIn(),
# track_item.timelineOut() + 1)
# ]
# metadata[knob] = value
# # make effect
# otio_effect = otio.schema.TimeEffect()
# otio_effect.name = name
# otio_effect.effect_name = effect_name
# otio_effect.metadata = metadata
# # add otio effect to clip effects
# otio_clip.effects.append(otio_effect)
pass
def _get_marker_color(flame_colour): def _get_marker_color(flame_colour):
@ -260,6 +207,7 @@ def create_otio_markers(otio_item, item):
def create_otio_reference(clip_data, fps=None): def create_otio_reference(clip_data, fps=None):
metadata = _get_metadata(clip_data) metadata = _get_metadata(clip_data)
duration = int(clip_data["source_duration"])
# get file info for path and start frame # get file info for path and start frame
frame_start = 0 frame_start = 0
@ -273,7 +221,6 @@ def create_otio_reference(clip_data, fps=None):
# get padding and other file infos # get padding and other file infos
log.debug("_ path: {}".format(path)) log.debug("_ path: {}".format(path))
frame_duration = clip_data["source_duration"]
otio_ex_ref_item = None otio_ex_ref_item = None
is_sequence = frame_number = utils.get_frame_from_filename(file_name) is_sequence = frame_number = utils.get_frame_from_filename(file_name)
@ -300,7 +247,7 @@ def create_otio_reference(clip_data, fps=None):
rate=fps, rate=fps,
available_range=create_otio_time_range( available_range=create_otio_time_range(
frame_start, frame_start,
frame_duration, duration,
fps fps
) )
) )
@ -316,7 +263,7 @@ def create_otio_reference(clip_data, fps=None):
target_url=reformated_path, target_url=reformated_path,
available_range=create_otio_time_range( available_range=create_otio_time_range(
frame_start, frame_start,
frame_duration, duration,
fps fps
) )
) )
@ -333,23 +280,50 @@ def create_otio_clip(clip_data):
segment = clip_data["PySegment"] segment = clip_data["PySegment"]
# calculate source in # calculate source in
media_info = MediaInfoFile(clip_data["fpath"]) media_info = MediaInfoFile(clip_data["fpath"], logger=log)
media_timecode_start = media_info.start_frame media_timecode_start = media_info.start_frame
media_fps = media_info.fps media_fps = media_info.fps
# create media reference
media_reference = create_otio_reference(clip_data, media_fps)
# define first frame # define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename( first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0 clip_data["fpath"]) or 0
source_in = int(clip_data["source_in"]) - int(first_frame) _clip_source_in = int(clip_data["source_in"])
_clip_source_out = int(clip_data["source_out"])
_clip_record_duration = int(clip_data["record_duration"])
# first solve if the reverse timing
speed = 1
if clip_data["source_in"] > clip_data["source_out"]:
source_in = _clip_source_out - int(first_frame)
source_out = _clip_source_in - int(first_frame)
speed = -1
else:
source_in = _clip_source_in - int(first_frame)
source_out = _clip_source_out - int(first_frame)
source_duration = (source_out - source_in + 1)
# secondly check if any change of speed
if source_duration != _clip_record_duration:
retime_speed = float(source_duration) / float(_clip_record_duration)
log.debug("_ retime_speed: {}".format(retime_speed))
speed *= retime_speed
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
log.debug("_ speed: {}".format(speed))
log.debug("_ source_duration: {}".format(source_duration))
log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))
# create media reference
media_reference = create_otio_reference(
clip_data, media_fps)
# create source range # create source range
source_range = create_otio_time_range( source_range = create_otio_time_range(
source_in, source_in,
clip_data["record_duration"], _clip_record_duration,
CTX.get_fps() CTX.get_fps()
) )
@ -363,6 +337,9 @@ def create_otio_clip(clip_data):
if MARKERS_INCLUDE: if MARKERS_INCLUDE:
create_otio_markers(otio_clip, segment) create_otio_markers(otio_clip, segment)
if speed != 1:
create_time_effects(otio_clip, speed)
return otio_clip return otio_clip
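The reverse-timing and retime logic added to `create_otio_clip` reduces to a small pure function. This standalone sketch uses illustrative names and frame values, not the module's API: a reversed in/out pair flips the sign, and any mismatch between source and record duration scales the speed:

```python
def resolve_speed(source_in, source_out, record_duration):
    # a negative speed means the clip plays in reverse
    speed = 1.0
    if source_in > source_out:
        source_in, source_out = source_out, source_in
        speed = -1.0
    # +1 because both boundary frames are included
    source_duration = source_out - source_in + 1
    # any mismatch against the record duration is a retime
    if source_duration != record_duration:
        speed *= float(source_duration) / float(record_duration)
    return speed


print(resolve_speed(1001, 1010, 10))  # 1.0  (no retime)
print(resolve_speed(1010, 1001, 5))   # -2.0 (reversed, double speed)
```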

View file

@ -268,6 +268,14 @@ class CreateShotClip(opfapi.Creator):
"target": "tag", "target": "tag",
"toolTip": "Handle at end of clip", # noqa "toolTip": "Handle at end of clip", # noqa
"order": 2 "order": 2
},
"includeHandles": {
"value": False,
"type": "QCheckBox",
"label": "Include handles",
"target": "tag",
"toolTip": "By default handles are excluded", # noqa
"order": 3
} }
} }
} }

View file

@ -1,8 +1,8 @@
import re import re
import pyblish import pyblish
import openpype
import openpype.hosts.flame.api as opfapi import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export from openpype.hosts.flame.otio import flame_export
import openpype.lib as oplib
# # developer reload modules # # developer reload modules
from pprint import pformat from pprint import pformat
@ -26,18 +26,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
add_tasks = [] add_tasks = []
def process(self, context): def process(self, context):
project = context.data["flameProject"]
selected_segments = context.data["flameSelectedSegments"] selected_segments = context.data["flameSelectedSegments"]
self.log.debug("__ selected_segments: {}".format(selected_segments)) self.log.debug("__ selected_segments: {}".format(selected_segments))
self.otio_timeline = context.data["otioTimeline"] self.otio_timeline = context.data["otioTimeline"]
self.clips_in_reels = opfapi.get_clips_in_reels(project)
self.fps = context.data["fps"] self.fps = context.data["fps"]
# process all selected # process all selected
for segment in selected_segments: for segment in selected_segments:
# get openpype tag data # get openpype tag data
marker_data = opfapi.get_segment_data_marker(segment) marker_data = opfapi.get_segment_data_marker(segment)
self.log.debug("__ marker_data: {}".format( self.log.debug("__ marker_data: {}".format(
pformat(marker_data))) pformat(marker_data)))
@ -60,27 +59,44 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
clip_name = clip_data["segment_name"] clip_name = clip_data["segment_name"]
self.log.debug("clip_name: {}".format(clip_name)) self.log.debug("clip_name: {}".format(clip_name))
# get otio clip data
otio_data = self._get_otio_clip_instance_data(clip_data) or {}
self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
# get file path # get file path
file_path = clip_data["fpath"] file_path = clip_data["fpath"]
# get source clip
source_clip = self._get_reel_clip(file_path)
first_frame = opfapi.get_frame_from_filename(file_path) or 0 first_frame = opfapi.get_frame_from_filename(file_path) or 0
head, tail = self._get_head_tail(clip_data, first_frame) head, tail = self._get_head_tail(
clip_data,
otio_data["otioClip"],
marker_data["handleStart"],
marker_data["handleEnd"]
)
# make sure value is absolute
if head != 0:
head = abs(head)
if tail != 0:
tail = abs(tail)
# solve handles length # solve handles length
marker_data["handleStart"] = min( marker_data["handleStart"] = min(
marker_data["handleStart"], abs(head)) marker_data["handleStart"], head)
marker_data["handleEnd"] = min( marker_data["handleEnd"] = min(
marker_data["handleEnd"], abs(tail)) marker_data["handleEnd"], tail)
workfile_start = self._set_workfile_start(marker_data)
with_audio = bool(marker_data.pop("audio")) with_audio = bool(marker_data.pop("audio"))
# add marker data to instance data # add marker data to instance data
inst_data = dict(marker_data.items()) inst_data = dict(marker_data.items())
# add ocio_data to instance data
inst_data.update(otio_data)
asset = marker_data["asset"] asset = marker_data["asset"]
subset = marker_data["subset"] subset = marker_data["subset"]
@ -103,7 +119,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"families": families, "families": families,
"publish": marker_data["publish"], "publish": marker_data["publish"],
"fps": self.fps, "fps": self.fps,
"flameSourceClip": source_clip, "workfileFrameStart": workfile_start,
"sourceFirstFrame": int(first_frame), "sourceFirstFrame": int(first_frame),
"path": file_path, "path": file_path,
"flameAddTasks": self.add_tasks, "flameAddTasks": self.add_tasks,
@ -111,13 +127,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
task["name"]: {"type": task["type"]} task["name"]: {"type": task["type"]}
for task in self.add_tasks} for task in self.add_tasks}
}) })
# get otio clip data
otio_data = self._get_otio_clip_instance_data(clip_data) or {}
self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
# add to instance data
inst_data.update(otio_data)
self.log.debug("__ inst_data: {}".format(pformat(inst_data))) self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
# add resolution # add resolution
@ -151,6 +160,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
if marker_data.get("reviewTrack") is not None: if marker_data.get("reviewTrack") is not None:
instance.data["reviewAudio"] = True instance.data["reviewAudio"] = True
@staticmethod
def _set_workfile_start(data):
include_handles = data.get("includeHandles")
workfile_start = data["workfileFrameStart"]
handle_start = data["handleStart"]
if include_handles:
workfile_start += handle_start
return workfile_start
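Taken together, the handle clamping and `_set_workfile_start` above behave like this minimal sketch; the frame values are invented for the example:

```python
def clamp_handles(marker_handle_start, marker_handle_end, head, tail):
    # real head/tail room in the media limits the requested handles
    return (
        min(marker_handle_start, abs(head)),
        min(marker_handle_end, abs(tail)),
    )


def workfile_start(frame_start, handle_start, include_handles):
    # when handles are included, the cut-in is offset by the handle
    # length so the head handle fits before workfileFrameStart
    if include_handles:
        return frame_start + handle_start
    return frame_start


print(clamp_handles(10, 10, 5, 25))     # (5, 10)
print(workfile_start(1001, 10, True))   # 1011
```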
def _get_comment_attributes(self, segment): def _get_comment_attributes(self, segment):
comment = segment.comment.get_value() comment = segment.comment.get_value()
@ -242,29 +262,25 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
return split_comments return split_comments
def _get_head_tail(self, clip_data, first_frame): def _get_head_tail(self, clip_data, otio_clip, handle_start, handle_end):
# calculate head and tail with forward compatibility # calculate head and tail with forward compatibility
head = clip_data.get("segment_head") head = clip_data.get("segment_head")
tail = clip_data.get("segment_tail") tail = clip_data.get("segment_tail")
self.log.debug("__ head: `{}`".format(head))
self.log.debug("__ tail: `{}`".format(tail))
# HACK: it is here to serve for versions below 2021.1 # HACK: it is here to serve for versions below 2021.1
if not head: if not any([head, tail]):
head = int(clip_data["source_in"]) - int(first_frame) retimed_attributes = oplib.get_media_range_with_retimes(
if not tail: otio_clip, handle_start, handle_end)
tail = int( self.log.debug(
clip_data["source_duration"] - ( ">> retimed_attributes: {}".format(retimed_attributes))
head + clip_data["record_duration"]
)
)
return head, tail
def _get_reel_clip(self, path): # retimed head and tail
match_reel_clip = [ head = int(retimed_attributes["handleStart"])
clip for clip in self.clips_in_reels tail = int(retimed_attributes["handleEnd"])
if clip["fpath"] == path
] return head, tail
if match_reel_clip:
return match_reel_clip.pop()
def _get_resolution_to_data(self, data, context): def _get_resolution_to_data(self, data, context):
assert data.get("otioClip"), "Missing `otioClip` data" assert data.get("otioClip"), "Missing `otioClip` data"
@ -354,7 +370,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
continue continue
if otio_clip.name not in segment.name.get_value(): if otio_clip.name not in segment.name.get_value():
continue continue
if openpype.lib.is_overlapping_otio_ranges( if oplib.is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True): parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata # add pypedata marker to otio_clip metadata

View file

@ -39,7 +39,8 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
"name": subset_name, "name": subset_name,
"asset": asset_doc["name"], "asset": asset_doc["name"],
"subset": subset_name, "subset": subset_name,
"family": "workfile" "family": "workfile",
"families": []
} }
# create instance with workfile # create instance with workfile

View file

@ -1,10 +1,14 @@
import os import os
import re
from pprint import pformat from pprint import pformat
from copy import deepcopy from copy import deepcopy
import pyblish.api import pyblish.api
import openpype.api import openpype.api
from openpype.hosts.flame import api as opfapi from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
import flame
class ExtractSubsetResources(openpype.api.Extractor): class ExtractSubsetResources(openpype.api.Extractor):
@ -20,30 +24,18 @@ class ExtractSubsetResources(openpype.api.Extractor):
# plugin defaults # plugin defaults
default_presets = { default_presets = {
"thumbnail": { "thumbnail": {
"active": True,
"ext": "jpg", "ext": "jpg",
"xml_preset_file": "Jpeg (8-bit).xml", "xml_preset_file": "Jpeg (8-bit).xml",
"xml_preset_dir": "", "xml_preset_dir": "",
"export_type": "File Sequence", "export_type": "File Sequence",
"ignore_comment_attrs": True, "parsed_comment_attrs": False,
"colorspace_out": "Output - sRGB", "colorspace_out": "Output - sRGB",
"representation_add_range": False, "representation_add_range": False,
"representation_tags": ["thumbnail"] "representation_tags": ["thumbnail"],
}, "path_regex": ".*"
"ftrackpreview": {
"ext": "mov",
"xml_preset_file": "Apple iPad (1920x1080).xml",
"xml_preset_dir": "",
"export_type": "Movie",
"ignore_comment_attrs": True,
"colorspace_out": "Output - Rec.709",
"representation_add_range": True,
"representation_tags": [
"review",
"delete"
]
} }
} }
keep_original_representation = False
# hide publisher during exporting # hide publisher during exporting
hide_ui_on_process = True hide_ui_on_process = True
@ -52,22 +44,15 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {} export_presets_mapping = {}
def process(self, instance): def process(self, instance):
if ( if "representations" not in instance.data:
self.keep_original_representation
and "representations" not in instance.data
or not self.keep_original_representation
):
instance.data["representations"] = [] instance.data["representations"] = []
# flame objects # flame objects
segment = instance.data["item"] segment = instance.data["item"]
asset_name = instance.data["asset"]
segment_name = segment.name.get_value() segment_name = segment.name.get_value()
clip_path = instance.data["path"]
sequence_clip = instance.context.data["flameSequence"] sequence_clip = instance.context.data["flameSequence"]
clip_data = instance.data["flameSourceClip"]
reel_clip = None
if clip_data:
reel_clip = clip_data["PyClip"]
# segment's parent track name # segment's parent track name
s_track_name = segment.parent.name.get_value() s_track_name = segment.parent.name.get_value()
@ -87,7 +72,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
handles = max(handle_start, handle_end) handles = max(handle_start, handle_end)
# get media source range with handles # get media source range with handles
source_end_handles = instance.data["sourceEndH"]
source_start_handles = instance.data["sourceStartH"] source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"] source_end_handles = instance.data["sourceEndH"]
@ -104,192 +88,231 @@ class ExtractSubsetResources(openpype.api.Extractor):
for unique_name, preset_config in export_presets.items(): for unique_name, preset_config in export_presets.items():
modify_xml_data = {} modify_xml_data = {}
if self._should_skip(preset_config, clip_path, unique_name):
continue
# get all preset attributes # get all preset attributes
extension = preset_config["ext"]
preset_file = preset_config["xml_preset_file"] preset_file = preset_config["xml_preset_file"]
preset_dir = preset_config["xml_preset_dir"] preset_dir = preset_config["xml_preset_dir"]
export_type = preset_config["export_type"] export_type = preset_config["export_type"]
repre_tags = preset_config["representation_tags"] repre_tags = preset_config["representation_tags"]
ignore_comment_attrs = preset_config["ignore_comment_attrs"] parsed_comment_attrs = preset_config["parsed_comment_attrs"]
color_out = preset_config["colorspace_out"] color_out = preset_config["colorspace_out"]
# get attribures related loading in integrate_batch_group self.log.info(
load_to_batch_group = preset_config.get( "Processing `{}` as `{}` to `{}` type...".format(
"load_to_batch_group") preset_file, export_type, extension
batch_group_loader_name = preset_config.get( )
"batch_group_loader_name") )
# convert to None if empty string
if batch_group_loader_name == "":
batch_group_loader_name = None
# get frame range with handles for representation range # get frame range with handles for representation range
frame_start_handle = frame_start - handle_start frame_start_handle = frame_start - handle_start
# calculate duration with handles
source_duration_handles = ( source_duration_handles = (
source_end_handles - source_start_handles) + 1 source_end_handles - source_start_handles)
# define in/out marks # define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1 in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles out_mark = in_mark + source_duration_handles
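The in/out mark arithmetic above, together with the mid-range thumbnail frame computed further down, can be sketched as a pure function; the frame values are invented for illustration and the names are not the plugin's API:

```python
def export_marks(source_start_handles, source_end_handles,
                 source_first_frame):
    # marks are 1-based offsets into the source media
    duration = source_end_handles - source_start_handles
    in_mark = (source_start_handles - source_first_frame) + 1
    out_mark = in_mark + duration
    # thumbnail frame sits in the middle of the exported range
    thumb = int(in_mark + duration / 2)
    return in_mark, out_mark, thumb


print(export_marks(1001, 1051, 1001))  # (1, 51, 26)
```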
# make test for type of preset and available reel_clip exporting_clip = None
if ( name_patern_xml = "<name>_{}.".format(
not reel_clip unique_name)
and export_type != "Sequence Publish"
):
self.log.warning((
"Skipping preset {}. Not available "
"reel clip for {}").format(
preset_file, segment_name
))
continue
# by default export source clips
exporting_clip = reel_clip
if export_type == "Sequence Publish": if export_type == "Sequence Publish":
# change export clip to sequence # change export clip to sequence
exporting_clip = sequence_clip exporting_clip = flame.duplicate(sequence_clip)
# only keep visible layer where instance segment is child
self.hide_others(
exporting_clip, segment_name, s_track_name)
# change name patern
name_patern_xml = (
"<segment name>_<shot name>_{}.").format(
unique_name)
# change in/out marks to timeline in/out # change in/out marks to timeline in/out
in_mark = clip_in in_mark = clip_in
out_mark = clip_out out_mark = clip_out
else:
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))
# add xml tags modifications # add xml tags modifications
modify_xml_data.update({ modify_xml_data.update({
"exportHandles": True, "exportHandles": True,
"nbHandles": handles, "nbHandles": handles,
"startFrame": frame_start "startFrame": frame_start,
}) "namePattern": name_patern_xml
})
if not ignore_comment_attrs: if parsed_comment_attrs:
# add any xml overrides collected from segment.comment # add any xml overrides collected from segment.comment
modify_xml_data.update(instance.data["xml_overrides"]) modify_xml_data.update(instance.data["xml_overrides"])
self.log.debug("__ modify_xml_data: {}".format(pformat( export_kwargs = {}
modify_xml_data # validate xml preset file is filled
))) if preset_file == "":
raise ValueError(
("Check Settings for {} preset: "
"`XML preset file` is not filled").format(
unique_name)
)
# with maintained duplication loop all presets # resolve xml preset dir if not filled
with opfapi.maintained_object_duplication( if preset_dir == "":
exporting_clip) as duplclip: preset_dir = opfapi.get_preset_path_by_xml_name(
kwargs = {} preset_file)
if export_type == "Sequence Publish": if not preset_dir:
# only keep visible layer where instance segment is child
self.hide_others(duplclip, segment_name, s_track_name)
# validate xml preset file is filled
if preset_file == "":
raise ValueError( raise ValueError(
("Check Settings for {} preset: " ("Check Settings for {} preset: "
"`XML preset file` is not filled").format( "`XML preset file` {} is not found").format(
unique_name) unique_name, preset_file)
) )
# resolve xml preset dir if not filled # create preset path
if preset_dir == "": preset_orig_xml_path = str(os.path.join(
preset_dir = opfapi.get_preset_path_by_xml_name( preset_dir, preset_file
preset_file) ))
if not preset_dir: # define kwargs based on preset type
raise ValueError( if "thumbnail" in unique_name:
("Check Settings for {} preset: " modify_xml_data.update({
"`XML preset file` {} is not found").format( "video/posterFrame": True,
unique_name, preset_file) "video/useFrameAsPoster": 1,
) "namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))
# create preset path self.log.debug("__ in_mark: {}".format(in_mark))
preset_orig_xml_path = str(os.path.join( self.log.debug("__ thumb_frame_number: {}".format(
preset_dir, preset_file thumb_frame_number
)) ))
preset_path = opfapi.modify_preset_file( export_kwargs["thumb_frame_number"] = thumb_frame_number
preset_orig_xml_path, staging_dir, modify_xml_data) else:
export_kwargs.update({
"in_mark": in_mark,
"out_mark": out_mark
})
# define kwargs based on preset type self.log.debug("__ modify_xml_data: {}".format(
if "thumbnail" in unique_name: pformat(modify_xml_data)
kwargs["thumb_frame_number"] = in_mark + ( ))
source_duration_handles / 2) preset_path = opfapi.modify_preset_file(
else: preset_orig_xml_path, staging_dir, modify_xml_data)
kwargs.update({
"in_mark": in_mark,
"out_mark": out_mark
})
# get and make export dir paths # get and make export dir paths
export_dir_path = str(os.path.join( export_dir_path = str(os.path.join(
staging_dir, unique_name staging_dir, unique_name
)) ))
os.makedirs(export_dir_path) os.makedirs(export_dir_path)
# export # export
opfapi.export_clip( opfapi.export_clip(
export_dir_path, duplclip, preset_path, **kwargs) export_dir_path, exporting_clip, preset_path, **export_kwargs)
-            extension = preset_config["ext"]
+            # make sure only first segment is used if underscore in name
+            # HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
+            repr_name = unique_name.split("_")[0]

            # create representation data
            representation_data = {
-                "name": unique_name,
-                "outputName": unique_name,
+                "name": repr_name,
+                "outputName": repr_name,
                "ext": extension,
                "stagingDir": export_dir_path,
                "tags": repre_tags,
                "data": {
                    "colorspace": color_out
                },
-                "load_to_batch_group": load_to_batch_group,
-                "batch_group_loader_name": batch_group_loader_name
+                "load_to_batch_group": preset_config.get(
+                    "load_to_batch_group"),
+                "batch_group_loader_name": preset_config.get(
+                    "batch_group_loader_name") or None
            }
            # collect all available content of export dir
            files = os.listdir(export_dir_path)

            # make sure no nested folders inside
            n_stage_dir, n_files = self._unfolds_nested_folders(
                export_dir_path, files, extension)

            # fix representation in case of nested folders
            if n_stage_dir:
                representation_data["stagingDir"] = n_stage_dir
                files = n_files

            # add files to representation, but add
            # imagesequence as list
            if (
                # first check if path in files is not mov extension
                [
                    f for f in files
                    if os.path.splitext(f)[-1] == ".mov"
                ]
                # then try if thumbnail is not in unique name
                or unique_name == "thumbnail"
            ):
                representation_data["files"] = files.pop()
            else:
                representation_data["files"] = files

            # add frame range
            if preset_config["representation_add_range"]:
                representation_data.update({
                    "frameStart": frame_start_handle,
                    "frameEnd": (
                        frame_start_handle + source_duration_handles),
                    "fps": instance.data["fps"]
                })

            instance.data["representations"].append(representation_data)

            # add review family if found in tags
            if "review" in repre_tags:
                instance.data["families"].append("review")

            self.log.info("Added representation: {}".format(
                representation_data))

+        if export_type == "Sequence Publish":
+            # at the end remove the duplicated clip
+            flame.delete(exporting_clip)

        self.log.debug("All representations: {}".format(
            pformat(instance.data["representations"])))
def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
activated_preset = preset_config["active"]
filter_path_regex = preset_config.get("filter_path_regex")
self.log.info(
"Preset `{}` is active `{}` with filter `{}`".format(
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if the preset is not activated
if not activated_preset:
return True
# exclude by regex filter if any
if (
filter_path_regex
and not re.search(filter_path_regex, clip_path)
):
return True
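The `_should_skip` logic added above boils down to two independent gates: a preset must be active, and, when a path filter is configured, the clip path must match it. A standalone sketch (the helper name and plain arguments are mine, not from the codebase):

```python
import re

def should_skip(active, filter_path_regex, clip_path):
    """Skip a preset when it is disabled, or when a path filter is set
    and the clip path does not match it."""
    if not active:
        return True
    if filter_path_regex and not re.search(filter_path_regex, clip_path):
        return True
    return False
```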
    def _unfolds_nested_folders(self, stage_dir, files_list, ext):
        """Unfolds nested folders

@@ -373,3 +396,27 @@ class ExtractSubsetResources(openpype.api.Extractor):
        for segment in track.segments:
            if segment.name.get_value() != segment_name:
                segment.hidden = True
def import_clip(self, path):
"""
Import clip from path
"""
dir_path = os.path.dirname(path)
media_info = MediaInfoFile(path, logger=self.log)
file_pattern = media_info.file_pattern
self.log.debug("__ file_pattern: {}".format(file_pattern))
# rejoin the pattern to dir path
new_path = os.path.join(dir_path, file_pattern)
clips = flame.import_clips(new_path)
self.log.info("Clips [{}] imported from `{}`".format(clips, path))
if not clips:
self.log.warning("Path `{}` does not contain any clips".format(path))
return None
elif len(clips) > 1:
self.log.warning(
    "Path `{}` contains more than one clip".format(path)
)
return clips[0]
@@ -1,26 +0,0 @@
import pyblish
@pyblish.api.log
class ValidateSourceClip(pyblish.api.InstancePlugin):
"""Validate instance is not having empty `flameSourceClip`"""
order = pyblish.api.ValidatorOrder
label = "Validate Source Clip"
hosts = ["flame"]
families = ["clip"]
optional = True
active = False
def process(self, instance):
flame_source_clip = instance.data["flameSourceClip"]
self.log.debug("_ flame_source_clip: {}".format(flame_source_clip))
if flame_source_clip is None:
raise AttributeError((
"Timeline segment `{}` is not having "
"relative clip in reels. Please make sure "
"you push `Save Sources` button in Conform Tab").format(
instance.data["asset"]
))
@@ -45,7 +45,8 @@ def install():
    This is where you install menus and register families, data
    and loaders into fusion.

-    It is called automatically when installing via `api.install(avalon.fusion)`
+    It is called automatically when installing via
+    `openpype.pipeline.install_host(openpype.hosts.fusion.api)`

    See the Maya equivalent for inspiration on how to implement this.
@@ -6,7 +6,7 @@ from openpype.pipeline import load

class FusionSetFrameRangeLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range excluding pre- and post-handles"""

    families = ["animation",
                "camera",
@@ -40,7 +40,7 @@ class FusionSetFrameRangeLoader(load.LoaderPlugin):

class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range including pre- and post-handles"""

    families = ["animation",
                "camera",
@@ -35,7 +35,11 @@ function Client() {
    self.pack = function(num) {
        var ascii='';
        for (var i = 3; i >= 0; i--) {
-            ascii += String.fromCharCode((num >> (8 * i)) & 255);
+            var hex = ((num >> (8 * i)) & 255).toString(16);
+            if (hex.length < 2){
+                ascii += "0";
+            }
+            ascii += hex;
        }
        return ascii;
    };
@@ -279,19 +283,22 @@ function Client() {
    };

    self._send = function(message) {
-        var data = new QByteArray();
-        var outstr = new QDataStream(data, QIODevice.WriteOnly);
-        outstr.writeInt(0);
-        data.append('UTF-8');
-        outstr.device().seek(0);
-        outstr.writeInt(data.size() - 4);
-        var codec = QTextCodec.codecForUtfText(data);
-        var msg = codec.fromUnicode(message);
-        var l = msg.size();
-        var coded = new QByteArray('AH').append(self.pack(l));
-        coded = coded.append(msg);
-        self.socket.write(new QByteArray(coded));
-        self.logDebug('Sent.');
+        /** Harmony 21.1 doesn't have QDataStream anymore.
+
+        This means we aren't able to write bytes into QByteArray, so we had
+        to modify how the content length is sent to the server.
+        The content length is sent as a string of 8 characters convertible
+        into an integer (instead of 4 raw bytes such as 0x00000001). */
+        var codec_name = new QByteArray().append("UTF-8");
+
+        var codec = QTextCodec.codecForName(codec_name);
+        var msg = codec.fromUnicode(message);
+        var l = msg.size();
+        var header = new QByteArray().append('AH').append(self.pack(l));
+        var coded = msg.prepend(header);
+        self.socket.write(coded);
+        self.logDebug('Sent.');
    };

    self.waitForLock = function() {
@@ -351,7 +358,14 @@ function start() {
        app.avalonClient = new Client();
        app.avalonClient.socket.connectToHost(host, port);
    }
-    var menuBar = QApplication.activeWindow().menuBar();
+    var mainWindow = null;
+    var widgets = QApplication.topLevelWidgets();
+    for (var i = 0 ; i < widgets.length; i++) {
+        if (widgets[i] instanceof QMainWindow){
+            mainWindow = widgets[i];
+        }
+    }
+    var menuBar = mainWindow.menuBar();
    var actions = menuBar.actions();
    app.avalonMenu = null;
@@ -463,7 +463,7 @@ def imprint(node_id, data, remove=False):
        remove (bool): Removes the data from the scene.

    Example:
-        >>> from avalon.harmony import lib
+        >>> from openpype.hosts.harmony.api import lib
        >>> node = "Top/Display"
        >>> data = {"str": "someting", "int": 1, "float": 0.32, "bool": True}
        >>> lib.imprint(layer, data)
@@ -88,21 +88,25 @@ class Server(threading.Thread):
        """
        current_time = time.time()
        while True:
+            self.log.info("wait ttt")
            # Receive the data in small chunks and retransmit it
            request = None
-            header = self.connection.recv(6)
+            header = self.connection.recv(10)
            if len(header) == 0:
                # null data received, socket is closing.
                self.log.info(f"[{self.timestamp()}] Connection closing.")
                break

            if header[0:2] != b"AH":
                self.log.error("INVALID HEADER")
-            length = struct.unpack(">I", header[2:])[0]
+            content_length_str = header[2:].decode()
+            length = int(content_length_str, 16)
            data = self.connection.recv(length)
            while (len(data) < length):
                # we didn't received everything in first try, lets wait for
                # all data.
+                self.log.info("loop")
                time.sleep(0.1)
                if self.connection is None:
                    self.log.error(f"[{self.timestamp()}] "
@@ -113,7 +117,7 @@ class Server(threading.Thread):
                    break
                data += self.connection.recv(length - len(data))

-        self.log.debug("data:: {} {}".format(data, type(data)))
        self.received += data.decode("utf-8")
        pretty = self._pretty(self.received)
        self.log.debug(
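Both sides of this change agree on a simple framing: a 10-byte header made of `b"AH"` plus the body length as 8 zero-padded hex characters, followed by the UTF-8 body. A hedged round-trip sketch of that framing (helper names are mine, not from the codebase):

```python
def pack_message(payload):
    """Frame a message as b"AH" + 8 hex chars of body length + body."""
    body = payload.encode("utf-8")
    # 26 -> "0000001a": same output as the client-side pack() above
    header = b"AH" + format(len(body), "08x").encode("ascii")
    return header + body

def unpack_message(blob):
    """Parse a framed message back into its payload."""
    header, body = blob[:10], blob[10:]
    if header[:2] != b"AH":
        raise ValueError("invalid header")
    length = int(header[2:].decode("ascii"), 16)
    return body[:length].decode("utf-8")
```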
@@ -144,6 +144,7 @@ class CollectFarmRender(openpype.lib.abstract_collect_render.
            label=node.split("/")[1],
            subset=subset_name,
            asset=legacy_io.Session["AVALON_ASSET"],
+            task=task_name,
            attachTo=False,
            setMembers=[node],
            publish=info[4],
@@ -27,7 +27,9 @@ from .lib import (
    get_track_items,
    get_current_project,
    get_current_sequence,
+    get_timeline_selection,
    get_current_track,
+    get_track_item_tags,
    get_track_item_pype_tag,
    set_track_item_pype_tag,
    get_track_item_pype_data,
@@ -80,7 +82,9 @@ __all__ = [
    "get_track_items",
    "get_current_project",
    "get_current_sequence",
+    "get_timeline_selection",
    "get_current_track",
+    "get_track_item_tags",
    "get_track_item_pype_tag",
    "set_track_item_pype_tag",
    "get_track_item_pype_data",
@@ -109,8 +109,9 @@ def register_hiero_events():
    # hiero.core.events.registerInterest("kShutdown", shutDown)
    # hiero.core.events.registerInterest("kStartup", startupCompleted)

-    hiero.core.events.registerInterest(
-        ("kSelectionChanged", "kTimeline"), selection_changed_timeline)
+    # INFO: was disabled because it was slowing down timeline operations
+    # hiero.core.events.registerInterest(
+    #     ("kSelectionChanged", "kTimeline"), selection_changed_timeline)

    # workfiles
    try:
@@ -1,6 +1,8 @@
"""
Host specific functions where host api is connected
"""
+from copy import deepcopy
+
import os
import re
import sys
@@ -89,13 +91,19 @@ def get_current_sequence(name=None, new=False):
    if not sequence:
        # if nothing found create new with input name
        sequence = get_current_sequence(name, True)
-    elif not name and not new:
+    else:
        # if name is none and new is False then return current open sequence
        sequence = hiero.ui.activeSequence()

    return sequence


+def get_timeline_selection():
+    active_sequence = hiero.ui.activeSequence()
+    timeline_editor = hiero.ui.getTimelineEditor(active_sequence)
+    return list(timeline_editor.selection())
+
+
def get_current_track(sequence, name, audio=False):
    """
    Get current track in context of active project.
@@ -118,7 +126,7 @@ def get_current_track(sequence, name, audio=False):
    # get track by name
    track = None
    for _track in tracks:
-        if _track.name() in name:
+        if _track.name() == name:
            track = _track

    if not track:
@@ -126,13 +134,14 @@ def get_current_track(sequence, name, audio=False):
            track = hiero.core.VideoTrack(name)
        else:
            track = hiero.core.AudioTrack(name)
+
        sequence.addTrack(track)

    return track


def get_track_items(
-    selected=False,
+    selection=False,
    sequence_name=None,
    track_item_name=None,
    track_name=None,
@@ -143,7 +152,7 @@ def get_track_items(
    """Get all available current timeline track items.

    Attribute:
-        selected (bool)[optional]: return only selected items on timeline
+        selection (list)[optional]: list of selected track items
        sequence_name (str)[optional]: return only clips from input sequence
        track_item_name (str)[optional]: return only item with input name
        track_name (str)[optional]: return only items from track name
@@ -155,32 +164,34 @@ def get_track_items(
    Return:
        list or hiero.core.TrackItem: list of track items or single track item
    """
-    return_list = list()
-    track_items = list()
+    track_type = track_type or "video"
+    selection = selection or []
+    return_list = []

    # get selected track items or all in active sequence
-    if selected:
+    if selection:
        try:
-            selected_items = list(hiero.selection)
-            for item in selected_items:
-                if track_name and track_name in item.parent().name():
-                    # filter only items fitting input track name
-                    track_items.append(item)
-                elif not track_name:
-                    # or add all if no track_name was defined
-                    track_items.append(item)
+            for track_item in selection:
+                log.info("___ track_item: {}".format(track_item))
+                # make sure only trackitems are selected
+                if not isinstance(track_item, hiero.core.TrackItem):
+                    continue
+
+                if _validate_all_atrributes(
+                    track_item,
+                    track_item_name,
+                    track_name,
+                    track_type,
+                    check_enabled,
+                    check_tagged
+                ):
+                    log.info("___ valid trackitem: {}".format(track_item))
+                    return_list.append(track_item)
        except AttributeError:
            pass

-    # check if any collected track items are
-    # `core.Hiero.Python.TrackItem` instance
-    if track_items:
-        any_track_item = track_items[0]
-        if not isinstance(any_track_item, hiero.core.TrackItem):
-            selected_items = []
-
    # collect all available active sequence track items
-    if not track_items:
+    if not return_list:
        sequence = get_current_sequence(name=sequence_name)
        # get all available tracks from sequence
        tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@@ -191,42 +202,101 @@ def get_track_items(
            if check_enabled and not track.isEnabled():
                continue
            # and all items in track
-            for item in track.items():
-                if check_tagged and not item.tags():
+            for track_item in track.items():
+                # make sure no subtrackitem is also track items
+                if not isinstance(track_item, hiero.core.TrackItem):
                    continue
-                # check if track item is enabled
-                if check_enabled:
-                    if not item.isEnabled():
-                        continue
-                if track_item_name:
-                    if track_item_name in item.name():
-                        return item
-                # make sure only track items with correct track names are added
-                if track_name and track_name in track.name():
-                    # filter out only defined track_name items
-                    track_items.append(item)
-                elif not track_name:
-                    # or add all if no track_name is defined
-                    track_items.append(item)
+
+                if _validate_all_atrributes(
+                    track_item,
+                    track_item_name,
+                    track_name,
+                    track_type,
+                    check_enabled,
+                    check_tagged
+                ):
+                    return_list.append(track_item)

-    # filter out only track items with defined track_type
-    for track_item in track_items:
-        if track_type and track_type == "video" and isinstance(
-                track_item.parent(), hiero.core.VideoTrack):
-            # only video track items are allowed
-            return_list.append(track_item)
-        elif track_type and track_type == "audio" and isinstance(
-                track_item.parent(), hiero.core.AudioTrack):
-            # only audio track items are allowed
-            return_list.append(track_item)
-        elif not track_type:
-            # add all if no track_type is defined
-            return_list.append(track_item)
-
-    # return output list but make sure all items are TrackItems
-    return [_i for _i in return_list
-            if type(_i) == hiero.core.TrackItem]
+    return return_list
+
+
+def _validate_all_atrributes(
+    track_item,
+    track_item_name,
+    track_name,
+    track_type,
+    check_enabled,
+    check_tagged
+):
+    def _validate_correct_name_track_item():
+        if track_item_name and track_item_name in track_item.name():
+            return True
+        elif not track_item_name:
+            return True
+
+    def _validate_tagged_track_item():
+        if check_tagged and track_item.tags():
+            return True
+        elif not check_tagged:
+            return True
+
+    def _validate_enabled_track_item():
+        if check_enabled and track_item.isEnabled():
+            return True
+        elif not check_enabled:
+            return True
+
+    def _validate_parent_track_item():
+        if track_name and track_name in track_item.parent().name():
+            # filter only items fitting input track name
+            return True
+        elif not track_name:
+            # or add all if no track_name was defined
+            return True
+
+    def _validate_type_track_item():
+        if track_type == "video" and isinstance(
+                track_item.parent(), hiero.core.VideoTrack):
+            # only video track items are allowed
+            return True
+        elif track_type == "audio" and isinstance(
+                track_item.parent(), hiero.core.AudioTrack):
+            # only audio track items are allowed
+            return True
+
+    # check if track item is enabled
+    return all([
+        _validate_enabled_track_item(),
+        _validate_type_track_item(),
+        _validate_tagged_track_item(),
+        _validate_parent_track_item(),
+        _validate_correct_name_track_item()
+    ])
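The refactor above replaces inline filtering with small per-criterion predicates combined through `all()`; each predicate passes when its filter is unset, or when the item satisfies it. A minimal standalone sketch of that pattern (plain dicts standing in for `hiero.core.TrackItem`, names are mine):

```python
def matches(item, name_filter=None, require_enabled=False):
    """Item passes when every active filter is satisfied."""
    def name_ok():
        # no filter set, or the filter substring is in the item name
        return name_filter is None or name_filter in item["name"]

    def enabled_ok():
        return not require_enabled or item["enabled"]

    # each predicate is evaluated independently, then combined
    return all([name_ok(), enabled_ok()])
```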
def get_track_item_tags(track_item):
"""
Get track item tags, excluding the openpype tag
Attributes:
trackItem (hiero.core.TrackItem): hiero object
Returns:
hiero.core.Tag: hierarchy, orig clip attributes
"""
returning_tag_data = []
# get all tags from track item
_tags = track_item.tags()
if not _tags:
return []
# collect all tags which are not openpype tag
returning_tag_data.extend(
tag for tag in _tags
if tag.name() != self.pype_tag_name
)
return returning_tag_data
def get_track_item_pype_tag(track_item): def get_track_item_pype_tag(track_item):
@@ -245,7 +315,7 @@ def get_track_item_pype_tag(track_item):
        return None
    for tag in _tags:
        # return only correct tag defined by global name
-        if tag.name() in self.pype_tag_name:
+        if tag.name() == self.pype_tag_name:
            return tag
@@ -266,7 +336,7 @@ def set_track_item_pype_tag(track_item, data=None):
        "editable": "0",
        "note": "OpenPype data container",
        "icon": "openpype_icon.png",
-        "metadata": {k: v for k, v in data.items()}
+        "metadata": dict(data.items())
    }
    # get available pype tag if any
    _tag = get_track_item_pype_tag(track_item)
@@ -301,9 +371,9 @@ def get_track_item_pype_data(track_item):
        return None

    # get tag metadata attribute
-    tag_data = tag.metadata()
+    tag_data = deepcopy(dict(tag.metadata()))
    # convert tag metadata to normal keys names and values to correct types
-    for k, v in dict(tag_data).items():
+    for k, v in tag_data.items():
        key = k.replace("tag.", "")

        try:
@@ -324,7 +394,7 @@ def get_track_item_pype_data(track_item):
            log.warning(msg)
            value = v

-        data.update({key: value})
+        data[key] = value

    return data
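The conversion above strips the `"tag."` prefix from each metadata key and coerces the string values back to native types. A hedged sketch of that normalisation step (the helper name is mine, and `ast.literal_eval` is a stand-in for the type coercion done in the committed code):

```python
import ast

def normalize_tag_metadata(tag_data):
    """Turn {"tag.key": "value"} pairs into {"key": native_value}."""
    data = {}
    for k, v in tag_data.items():
        key = k.replace("tag.", "")
        try:
            value = ast.literal_eval(v)  # "1001" -> 1001, "[1, 2]" -> [1, 2]
        except (ValueError, SyntaxError):
            value = v  # keep plain strings as-is
        data[key] = value
    return data
```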
@@ -497,7 +567,7 @@ class PyblishSubmission(hiero.exporters.FnSubmission.Submission):
        from . import publish
        # Add submission to Hiero module for retrieval in plugins.
        hiero.submission = self
-        publish()
+        publish(hiero.ui.mainWindow())


def add_submission():
@@ -527,7 +597,7 @@ class PublishAction(QtWidgets.QAction):
        # from getting picked up when not using the "Export" dialog.
        if hasattr(hiero, "submission"):
            del hiero.submission
-        publish()
+        publish(hiero.ui.mainWindow())

    def eventHandler(self, event):
        # Add the Menu to the right-click menu
@@ -553,10 +623,10 @@ class PublishAction(QtWidgets.QAction):
#
# '''
# import hiero.core
-# from avalon.nuke import imprint
-# from pype.hosts.nuke import (
-#     lib as nklib
-# )
+# from openpype.hosts.nuke.api.lib import (
+#     BuildWorkfile,
+#     imprint
+# )
#
# # check if the file exists if does then Raise "File exists!"
# if os.path.exists(filepath):
@@ -583,8 +653,7 @@ class PublishAction(QtWidgets.QAction):
#
#     nuke_script.addNode(root_node)
#
-#     # here to call pype.hosts.nuke.lib.BuildWorkfile
-#     script_builder = nklib.BuildWorkfile(
+#     script_builder = BuildWorkfile(
#         root_node=root_node,
#         root_path=root_path,
#         nodes=nuke_script.getNodes(),
@@ -894,32 +963,33 @@ def apply_colorspace_clips():


def is_overlapping(ti_test, ti_original, strict=False):
-    covering_exp = bool(
+    covering_exp = (
        (ti_test.timelineIn() <= ti_original.timelineIn())
        and (ti_test.timelineOut() >= ti_original.timelineOut())
    )
+
+    if strict:
+        return covering_exp
+
-    inside_exp = bool(
+    inside_exp = (
        (ti_test.timelineIn() >= ti_original.timelineIn())
        and (ti_test.timelineOut() <= ti_original.timelineOut())
    )
-    overlaying_right_exp = bool(
+    overlaying_right_exp = (
        (ti_test.timelineIn() < ti_original.timelineOut())
        and (ti_test.timelineOut() >= ti_original.timelineOut())
    )
-    overlaying_left_exp = bool(
+    overlaying_left_exp = (
        (ti_test.timelineOut() > ti_original.timelineIn())
        and (ti_test.timelineIn() <= ti_original.timelineIn())
    )

-    if not strict:
-        return any((
-            covering_exp,
-            inside_exp,
-            overlaying_right_exp,
-            overlaying_left_exp
-        ))
-    else:
-        return covering_exp
+    return any((
+        covering_exp,
+        inside_exp,
+        overlaying_right_exp,
+        overlaying_left_exp
+    ))
def get_sequence_pattern_and_padding(file):
@@ -937,17 +1007,13 @@ def get_sequence_pattern_and_padding(file):
    """
    foundall = re.findall(
        r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
-    if foundall:
-        found = sorted(list(set(foundall[0])))[-1]
-        if "%" in found:
-            padding = int(re.findall(r"\d+", found)[-1])
-        else:
-            padding = len(found)
-        return found, padding
-    else:
+    if not foundall:
        return None, None
+
+    found = sorted(list(set(foundall[0])))[-1]
+    padding = int(
+        re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
+    return found, padding
def sync_clip_name_to_data_asset(track_items_list):
@@ -983,7 +1049,7 @@ def sync_clip_name_to_data_asset(track_items_list):
            print("asset was changed in clip: {}".format(ti_name))


-def check_inventory_versions():
+def check_inventory_versions(track_items=None):
    """
    Actual version color identifier of Loaded containers
@@ -994,14 +1060,14 @@ def check_inventory_versions(track_items=None):
    """
    from . import parse_container

+    track_items = track_items or get_track_items()
+
    # presets
    clip_color_last = "green"
    clip_color = "red"

-    # get all track items from current timeline
-    for track_item in get_track_items():
+    for track_item in track_items:
        container = parse_container(track_item)
        if container:
            # get representation from io
            representation = legacy_io.find_one({
@@ -1039,29 +1105,31 @@ def selection_changed_timeline(event):
    timeline_editor = event.sender
    selection = timeline_editor.selection()

-    selection = [ti for ti in selection
-                 if isinstance(ti, hiero.core.TrackItem)]
+    track_items = get_track_items(
+        selection=selection,
+        track_type="video",
+        check_enabled=True,
+        check_locked=True,
+        check_tagged=True
+    )

    # run checking function
-    sync_clip_name_to_data_asset(selection)
-
-    # also mark old versions of loaded containers
-    check_inventory_versions()
+    sync_clip_name_to_data_asset(track_items)


def before_project_save(event):
    track_items = get_track_items(
-        selected=False,
        track_type="video",
        check_enabled=True,
        check_locked=True,
-        check_tagged=True)
+        check_tagged=True
+    )

    # run checking function
    sync_clip_name_to_data_asset(track_items)

    # also mark old versions of loaded containers
-    check_inventory_versions()
+    check_inventory_versions(track_items)


def get_main_window():
@@ -143,6 +143,11 @@ def parse_container(track_item, validate=True):
    """
    # convert tag metadata to normal keys names
    data = lib.get_track_item_pype_data(track_item)
+    if (
+        not data
+        or data.get("id") != "pyblish.avalon.container"
+    ):
+        return

    if validate and data and data.get("schema"):
        schema.validate(data)
@@ -1,4 +1,5 @@
import os
+from pprint import pformat
import re
from copy import deepcopy
@@ -400,7 +401,8 @@ class ClipLoader:
        # inject asset data to representation dict
        self._get_asset_data()
-        log.debug("__init__ self.data: `{}`".format(self.data))
+        log.info("__init__ self.data: `{}`".format(pformat(self.data)))
+        log.info("__init__ options: `{}`".format(pformat(options)))

        # add active components to class
        if self.new_sequence:
@@ -482,7 +484,9 @@ class ClipLoader:
        """
        asset_name = self.context["representation"]["context"]["asset"]
-        self.data["assetData"] = openpype.get_asset(asset_name)["data"]
+        asset_doc = openpype.get_asset(asset_name)
+        log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
+        self.data["assetData"] = asset_doc["data"]

    def _make_track_item(self, source_bin_item, audio=False):
        """ Create track item with """
@@ -500,7 +504,7 @@ class ClipLoader:
        track_item.setSource(clip)
        track_item.setSourceIn(self.handle_start)
        track_item.setTimelineIn(self.timeline_in)
-        track_item.setSourceOut(self.media_duration - self.handle_end)
+        track_item.setSourceOut((self.media_duration) - self.handle_end)
        track_item.setTimelineOut(self.timeline_out)
        track_item.setPlaybackSpeed(1)
        self.active_track.addTrackItem(track_item)
@@ -520,14 +524,18 @@ class ClipLoader:
        self.handle_start = self.data["versionData"].get("handleStart")
        self.handle_end = self.data["versionData"].get("handleEnd")
        if self.handle_start is None:
-            self.handle_start = int(self.data["assetData"]["handleStart"])
+            self.handle_start = self.data["assetData"]["handleStart"]
        if self.handle_end is None:
-            self.handle_end = int(self.data["assetData"]["handleEnd"])
+            self.handle_end = self.data["assetData"]["handleEnd"]
+
+        self.handle_start = int(self.handle_start)
+        self.handle_end = int(self.handle_end)

        if self.sequencial_load:
            last_track_item = lib.get_track_items(
                sequence_name=self.active_sequence.name(),
-                track_name=self.active_track.name())
+                track_name=self.active_track.name()
+            )
            if len(last_track_item) == 0:
                last_timeline_out = 0
            else:
@@ -541,17 +549,12 @@ class ClipLoader:
         self.timeline_in = int(self.data["assetData"]["clipIn"])
         self.timeline_out = int(self.data["assetData"]["clipOut"])
+        log.debug("__ self.timeline_in: {}".format(self.timeline_in))
+        log.debug("__ self.timeline_out: {}".format(self.timeline_out))
         # check if slate is included
-        # either in version data families or by calculating frame diff
-        slate_on = next(
-            # check iterate if slate is in families
-            (f for f in self.context["version"]["data"]["families"]
-             if "slate" in f),
-            # if nothing was found then use default None
-            # so other bool could be used
-            None) or bool(int(
-                (self.timeline_out - self.timeline_in + 1)
-                + self.handle_start + self.handle_end) < self.media_duration)
+        slate_on = "slate" in self.context["version"]["data"]["families"]
+        log.debug("__ slate_on: {}".format(slate_on))
         # if slate is on then remove the slate frame from beginning
         if slate_on:
@@ -572,7 +575,7 @@ class ClipLoader:
         # there were some cases were hiero was not creating it
         source_bin_item = None
         for item in self.active_bin.items():
-            if self.data["clip_name"] in item.name():
+            if self.data["clip_name"] == item.name():
                 source_bin_item = item
         if not source_bin_item:
             log.warning("Problem with created Source clip: `{}`".format(
@@ -599,8 +602,8 @@ class Creator(LegacyCreator):
     rename_index = None

    def __init__(self, *args, **kwargs):
-        import openpype.hosts.hiero.api as phiero
         super(Creator, self).__init__(*args, **kwargs)
+        import openpype.hosts.hiero.api as phiero
         self.presets = openpype.get_current_project_settings()[
             "hiero"]["create"].get(self.__class__.__name__, {})
@@ -609,7 +612,10 @@ class Creator(LegacyCreator):
         self.sequence = phiero.get_current_sequence()
         if (self.options or {}).get("useSelection"):
-            self.selected = phiero.get_track_items(selected=True)
+            timeline_selection = phiero.get_timeline_selection()
+            self.selected = phiero.get_track_items(
+                selection=timeline_selection
+            )
         else:
             self.selected = phiero.get_track_items()
@@ -716,6 +722,10 @@ class PublishClip:
         else:
             self.tag_data.update({"reviewTrack": None})
+        log.debug("___ self.tag_data: {}".format(
+            pformat(self.tag_data)
+        ))
         # create pype tag on track_item and add data
         lib.imprint(self.track_item, self.tag_data)
View file
@@ -10,16 +10,6 @@ log = Logger.get_logger(__name__)
 def tag_data():
     return {
-        # "Retiming": {
-        #     "editable": "1",
-        #     "note": "Clip has retime or TimeWarp effects (or multiple effects stacked on the clip)",  # noqa
-        #     "icon": "retiming.png",
-        #     "metadata": {
-        #         "family": "retiming",
-        #         "marginIn": 1,
-        #         "marginOut": 1
-        #     }
-        # },
         "[Lenses]": {
             "Set lense here": {
                 "editable": "1",
@@ -48,6 +38,16 @@ def tag_data():
                 "family": "comment",
                 "subset": "main"
             }
+        },
+        "FrameMain": {
+            "editable": "1",
+            "note": "Publishing a frame subset.",
+            "icon": "z_layer_main.png",
+            "metadata": {
+                "family": "frame",
+                "subset": "main",
+                "format": "png"
+            }
         }
     }
@@ -86,7 +86,7 @@ def update_tag(tag, data):
     # due to hiero bug we have to make sure keys which are not existent in
     # data are cleared of value by `None`
-    for _mk in mtd.keys():
+    for _mk in mtd.dict().keys():
         if _mk.replace("tag.", "") not in data_mtd.keys():
             mtd.setValue(_mk, str(None))
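The stale-key clearing in `update_tag` above (note the `mtd.dict().keys()` fix, which snapshots the key view before mutation) amounts to the following, sketched here with a plain dict standing in for Hiero's metadata object; the helper name is illustrative:

```python
def clear_stale_keys(metadata, new_data):
    """Overwrite metadata keys absent from new_data with the string "None".

    Hiero keeps old tag metadata around, so stale keys must be explicitly
    overwritten rather than deleted. Keys are snapshot with dict() first so
    mutating while iterating is safe.
    """
    for key in dict(metadata).keys():
        if key.replace("tag.", "") not in new_data:
            metadata[key] = str(None)
    return metadata
```

With the real metadata object, iterating its live key view while calling `setValue` is exactly the pattern the `dict()` snapshot avoids.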
View file
@@ -3,10 +3,6 @@ from openpype.pipeline import (
     get_representation_path,
 )
 import openpype.hosts.hiero.api as phiero
-# from openpype.hosts.hiero.api import plugin, lib
-# reload(lib)
-# reload(plugin)
-# reload(phiero)


 class LoadClip(phiero.SequenceLoader):
@@ -106,7 +102,7 @@ class LoadClip(phiero.SequenceLoader):
         name = container['name']
         namespace = container['namespace']
         track_item = phiero.get_track_items(
-            track_item_name=namespace)
+            track_item_name=namespace).pop()
         version = legacy_io.find_one({
             "type": "version",
             "_id": representation["parent"]
@@ -157,7 +153,7 @@ class LoadClip(phiero.SequenceLoader):
         # load clip to timeline and get main variables
         namespace = container['namespace']
         track_item = phiero.get_track_items(
-            track_item_name=namespace)
+            track_item_name=namespace).pop()
         track = track_item.parent()
         # remove track item from track
View file
@@ -0,0 +1,142 @@
from pprint import pformat
import re
import ast
import json
import pyblish.api
class CollectFrameTagInstances(pyblish.api.ContextPlugin):
"""Collect frames from tags.
Tag is expected to have metadata:
{
"family": "frame"
"subset": "main"
}
"""
order = pyblish.api.CollectorOrder
label = "Collect Frames"
hosts = ["hiero"]
def process(self, context):
self._context = context
# collect all sequence tags
subset_data = self._create_frame_subset_data_sequence(context)
self.log.debug("__ subset_data: {}".format(
pformat(subset_data)
))
# create instances
self._create_instances(subset_data)
def _get_tag_data(self, tag):
data = {}
# get tag metadata attribute
tag_data = tag.metadata()
# convert tag metadata to normal keys names and values to correct types
for k, v in dict(tag_data).items():
key = k.replace("tag.", "")
try:
# capture exceptions which are related to strings only
if re.match(r"^[\d]+$", v):
value = int(v)
elif re.match(r"^True$", v):
value = True
elif re.match(r"^False$", v):
value = False
elif re.match(r"^None$", v):
value = None
elif re.match(r"^[\w\d_]+$", v):
value = v
else:
value = ast.literal_eval(v)
except (ValueError, SyntaxError):
value = v
data[key] = value
return data
def _create_frame_subset_data_sequence(self, context):
sequence_tags = []
sequence = context.data["activeTimeline"]
# get all publishable sequence frames
publish_frames = range(int(sequence.duration() + 1))
self.log.debug("__ publish_frames: {}".format(
pformat(publish_frames)
))
# get all sequence tags
for tag in sequence.tags():
tag_data = self._get_tag_data(tag)
self.log.debug("__ tag_data: {}".format(
pformat(tag_data)
))
if not tag_data:
continue
if "family" not in tag_data:
continue
if tag_data["family"] != "frame":
continue
sequence_tags.append(tag_data)
self.log.debug("__ sequence_tags: {}".format(
pformat(sequence_tags)
))
# first collect all available subset tag frames
subset_data = {}
for tag_data in sequence_tags:
frame = int(tag_data["start"])
if frame not in publish_frames:
continue
subset = tag_data["subset"]
if subset in subset_data:
# update existing subset key
subset_data[subset]["frames"].append(frame)
else:
# create new subset key
subset_data[subset] = {
"frames": [frame],
"format": tag_data["format"],
"asset": context.data["assetEntity"]["name"]
}
return subset_data
def _create_instances(self, subset_data):
# create instance per subset
for subset_name, subset_data in subset_data.items():
name = "frame" + subset_name.title()
data = {
"name": name,
"label": "{} {}".format(name, subset_data["frames"]),
"family": "image",
"families": ["frame"],
"asset": subset_data["asset"],
"subset": name,
"format": subset_data["format"],
"frames": subset_data["frames"]
}
self._context.create_instance(**data)
self.log.info(
"Created instance: {}".format(
json.dumps(data, sort_keys=True, indent=4)
)
)
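The type-coercion cascade in `_get_tag_data` above can be lifted into a standalone helper (name illustrative): integers, booleans and `None` round-trip through Hiero's string-only tag metadata via regex checks, identifier-like strings stay strings, and everything else falls through to `ast.literal_eval`:

```python
import ast
import re


def coerce_tag_value(v):
    """Convert a Hiero tag metadata string back to a Python value,
    following the same cascade as _get_tag_data."""
    try:
        if re.match(r"^[\d]+$", v):
            return int(v)
        if re.match(r"^True$", v):
            return True
        if re.match(r"^False$", v):
            return False
        if re.match(r"^None$", v):
            return None
        if re.match(r"^[\w\d_]+$", v):
            # bare identifier-like strings stay strings
            return v
        # lists, dicts, floats, quoted strings
        return ast.literal_eval(v)
    except (ValueError, SyntaxError):
        # free-form notes that literal_eval cannot parse
        return v
```

Order matters: the digit check must run before the identifier check, since `\w` also matches digits.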
View file
@@ -4,16 +4,16 @@ from pyblish import api
 class CollectClipTagTasks(api.InstancePlugin):
     """Collect Tags from selected track items."""

-    order = api.CollectorOrder
+    order = api.CollectorOrder - 0.077
     label = "Collect Tag Tasks"
     hosts = ["hiero"]
-    families = ['clip']
+    families = ["shot"]

     def process(self, instance):
         # gets tags
         tags = instance.data["tags"]

-        tasks = dict()
+        tasks = {}
         for tag in tags:
             t_metadata = dict(tag.metadata())
             t_family = t_metadata.get("tag.family", "")
View file
@@ -0,0 +1,82 @@
import os
import pyblish.api
import openpype
class ExtractFrames(openpype.api.Extractor):
"""Extracts frames"""
order = pyblish.api.ExtractorOrder
label = "Extract Frames"
hosts = ["hiero"]
families = ["frame"]
movie_extensions = ["mov", "mp4"]
def process(self, instance):
oiio_tool_path = openpype.lib.get_oiio_tools_path()
staging_dir = self.staging_dir(instance)
output_template = os.path.join(staging_dir, instance.data["name"])
sequence = instance.context.data["activeTimeline"]
files = []
for frame in instance.data["frames"]:
track_item = sequence.trackItemAt(frame)
media_source = track_item.source().mediaSource()
input_path = media_source.fileinfos()[0].filename()
input_frame = (
track_item.mapTimelineToSource(frame) +
track_item.source().mediaSource().startTime()
)
output_ext = instance.data["format"]
output_path = output_template
output_path += ".{:04d}.{}".format(int(frame), output_ext)
args = [oiio_tool_path]
ext = os.path.splitext(input_path)[1][1:]
if ext in self.movie_extensions:
args.extend(["--subimage", str(int(input_frame))])
else:
args.extend(["--frames", str(int(input_frame))])
if ext == "exr":
args.extend(["--powc", "0.45,0.45,0.45,1.0"])
args.extend([input_path, "-o", output_path])
output = openpype.api.run_subprocess(args)
failed_output = "oiiotool produced no output."
if failed_output in output:
raise ValueError(
"oiiotool processing failed. Args: {}".format(args)
)
files.append(output_path)
# Feedback to user because "oiiotool" can make the publishing
# appear unresponsive.
self.log.info(
"Processed {} of {} frames".format(
instance.data["frames"].index(frame) + 1,
len(instance.data["frames"])
)
)
if len(files) == 1:
instance.data["representations"] = [
{
"name": output_ext,
"ext": output_ext,
"files": os.path.basename(files[0]),
"stagingDir": staging_dir
}
]
else:
instance.data["representations"] = [
{
"name": output_ext,
"ext": output_ext,
"files": [os.path.basename(x) for x in files],
"stagingDir": staging_dir
}
]
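The `oiiotool` argument assembly in `ExtractFrames` above can be isolated: movie containers address frames as subimages, image sequences by frame number, and EXR input gets an approximate display gamma via `--powc`. A sketch of just that step (the helper name and sample paths are illustrative):

```python
import os

MOVIE_EXTENSIONS = {"mov", "mp4"}


def build_oiiotool_args(oiio_tool_path, input_path, input_frame, output_path):
    """Build an oiiotool command line mirroring ExtractFrames."""
    args = [oiio_tool_path]
    ext = os.path.splitext(input_path)[1][1:]
    if ext in MOVIE_EXTENSIONS:
        # movies expose one subimage per frame
        args.extend(["--subimage", str(int(input_frame))])
    else:
        # image sequences select the frame number directly
        args.extend(["--frames", str(int(input_frame))])
    if ext == "exr":
        # rough linear -> display gamma for the preview frame
        args.extend(["--powc", "0.45,0.45,0.45,1.0"])
    args.extend([input_path, "-o", output_path])
    return args
```

Keeping the argument building pure makes it easy to verify the command line without actually invoking `oiiotool` through `run_subprocess`.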
View file
@@ -19,9 +19,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
     def process(self, context):
         self.otio_timeline = context.data["otioTimeline"]
+        timeline_selection = phiero.get_timeline_selection()
         selected_timeline_items = phiero.get_track_items(
-            selected=True, check_tagged=True, check_enabled=True)
+            selection=timeline_selection,
+            check_tagged=True,
+            check_enabled=True
+        )

         # only return enabled track items
         if not selected_timeline_items:
@@ -103,7 +106,10 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
             # clip's effect
             "clipEffectItems": subtracks,
-            "clipAnnotations": annotations
+            "clipAnnotations": annotations,
+            # add all additional tags
+            "tags": phiero.get_track_item_tags(track_item)
         })

         # otio clip data
@@ -292,10 +298,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
         for otio_clip in self.otio_timeline.each_clip():
             track_name = otio_clip.parent().name
             parent_range = otio_clip.range_in_parent()
-            if ti_track_name not in track_name:
+            if ti_track_name != track_name:
                 continue
-            if otio_clip.name not in track_item.name():
+            if otio_clip.name != track_item.name():
                 continue
+            self.log.debug("__ parent_range: {}".format(parent_range))
+            self.log.debug("__ timeline_range: {}".format(timeline_range))
             if openpype.lib.is_overlapping_otio_ranges(
                     parent_range, timeline_range, strict=True):
@@ -312,7 +320,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
         speed = track_item.playbackSpeed()
         timeline = phiero.get_current_sequence()
         frame_start = int(track_item.timelineIn())
-        frame_duration = int(track_item.sourceDuration() / speed)
+        frame_duration = int((track_item.duration() - 1) / speed)
         fps = timeline.framerate().toFloat()
         return hiero_export.create_otio_time_range(
View file
@@ -16,7 +16,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""

     label = "Precollect Workfile"
-    order = pyblish.api.CollectorOrder - 0.5
+    order = pyblish.api.CollectorOrder - 0.491

     def process(self, context):
@@ -68,6 +68,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
             "subset": "{}{}".format(asset, subset.capitalize()),
             "item": project,
             "family": "workfile",
+            "families": [],
             "representations": [workfile_representation, thumb_representation]
         }
@@ -77,11 +78,13 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
         # update context with main project attributes
         context_data = {
             "activeProject": project,
+            "activeTimeline": active_timeline,
             "otioTimeline": otio_timeline,
             "currentFile": curent_file,
             "colorspace": self.get_colorspace(project),
             "fps": fps
         }
+        self.log.debug("__ context_data: {}".format(pformat(context_data)))
         context.data.update(context_data)

         self.log.info("Creating instance: {}".format(instance))
View file
@@ -1,38 +0,0 @@
import pyblish.api
class CollectClipResolution(pyblish.api.InstancePlugin):
"""Collect clip geometry resolution"""
order = pyblish.api.CollectorOrder - 0.1
label = "Collect Clip Resolution"
hosts = ["hiero"]
families = ["clip"]
def process(self, instance):
sequence = instance.context.data['activeSequence']
item = instance.data["item"]
source_resolution = instance.data.get("sourceResolution", None)
resolution_width = int(sequence.format().width())
resolution_height = int(sequence.format().height())
pixel_aspect = sequence.format().pixelAspect()
# source exception
if source_resolution:
resolution_width = int(item.source().mediaSource().width())
resolution_height = int(item.source().mediaSource().height())
pixel_aspect = item.source().mediaSource().pixelAspect()
resolution_data = {
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"pixelAspect": pixel_aspect
}
# add to instance data
instance.data.update(resolution_data)
self.log.info("Resolution of instance '{}' is: {}".format(
instance,
resolution_data
))
View file
@@ -1,15 +0,0 @@
import pyblish.api
class CollectHostVersion(pyblish.api.ContextPlugin):
"""Inject the hosts version into context"""
label = "Collect Host and HostVersion"
order = pyblish.api.CollectorOrder - 0.5
def process(self, context):
import nuke
import pyblish.api
context.set_data("host", pyblish.api.current_host())
context.set_data('hostVersion', value=nuke.NUKE_VERSION_STRING)
View file
@@ -1,32 +0,0 @@
from pyblish import api
class CollectTagRetime(api.InstancePlugin):
"""Collect Retiming from Tags of selected track items."""
order = api.CollectorOrder + 0.014
label = "Collect Retiming Tag"
hosts = ["hiero"]
families = ['clip']
def process(self, instance):
# gets tags
tags = instance.data["tags"]
for t in tags:
t_metadata = dict(t["metadata"])
t_family = t_metadata.get("tag.family", "")
# gets only task family tags and collect labels
if "retiming" in t_family:
margin_in = t_metadata.get("tag.marginIn", "")
margin_out = t_metadata.get("tag.marginOut", "")
instance.data["retimeMarginIn"] = int(margin_in)
instance.data["retimeMarginOut"] = int(margin_out)
instance.data["retime"] = True
self.log.info("retimeMarginIn: `{}`".format(margin_in))
self.log.info("retimeMarginOut: `{}`".format(margin_out))
instance.data["families"] += ["retime"]
View file
@@ -1,223 +0,0 @@
from compiler.ast import flatten
from pyblish import api
from openpype.hosts.hiero import api as phiero
import hiero
# from openpype.hosts.hiero.api import lib
# reload(lib)
# reload(phiero)
class PreCollectInstances(api.ContextPlugin):
"""Collect all Track items selection."""
order = api.CollectorOrder - 0.509
label = "Pre-collect Instances"
hosts = ["hiero"]
def process(self, context):
track_items = phiero.get_track_items(
selected=True, check_tagged=True, check_enabled=True)
# only return enabled track items
if not track_items:
track_items = phiero.get_track_items(
check_enabled=True, check_tagged=True)
# get sequence and video tracks
sequence = context.data["activeSequence"]
tracks = sequence.videoTracks()
# add collection to context
tracks_effect_items = self.collect_sub_track_items(tracks)
context.data["tracksEffectItems"] = tracks_effect_items
self.log.info(
"Processing enabled track items: {}".format(len(track_items)))
for _ti in track_items:
data = {}
clip = _ti.source()
# get clip's subtracks and annotations
annotations = self.clip_annotations(clip)
subtracks = self.clip_subtrack(_ti)
self.log.debug("Annotations: {}".format(annotations))
self.log.debug(">> Subtracks: {}".format(subtracks))
# get pype tag data
tag_parsed_data = phiero.get_track_item_pype_data(_ti)
# self.log.debug(pformat(tag_parsed_data))
if not tag_parsed_data:
continue
if tag_parsed_data.get("id") != "pyblish.avalon.instance":
continue
# add tag data to instance data
data.update({
k: v for k, v in tag_parsed_data.items()
if k not in ("id", "applieswhole", "label")
})
asset = tag_parsed_data["asset"]
subset = tag_parsed_data["subset"]
review_track = tag_parsed_data.get("reviewTrack")
hiero_track = tag_parsed_data.get("heroTrack")
audio = tag_parsed_data.get("audio")
# remove audio attribute from data
data.pop("audio")
# insert family into families
family = tag_parsed_data["family"]
families = [str(f) for f in tag_parsed_data["families"]]
families.insert(0, str(family))
track = _ti.parent()
media_source = _ti.source().mediaSource()
source_path = media_source.firstpath()
file_head = media_source.filenameHead()
file_info = media_source.fileinfos().pop()
source_first_frame = int(file_info.startFrame())
# apply only for review and master track instance
if review_track and hiero_track:
families += ["review", "ftrack"]
data.update({
"name": "{} {} {}".format(asset, subset, families),
"asset": asset,
"item": _ti,
"families": families,
# tags
"tags": _ti.tags(),
# track item attributes
"track": track.name(),
"trackItem": track,
"reviewTrack": review_track,
# version data
"versionData": {
"colorspace": _ti.sourceMediaColourTransform()
},
# source attribute
"source": source_path,
"sourceMedia": media_source,
"sourcePath": source_path,
"sourceFileHead": file_head,
"sourceFirst": source_first_frame,
# clip's effect
"clipEffectItems": subtracks
})
instance = context.create_instance(**data)
self.log.info("Creating instance.data: {}".format(instance.data))
if audio:
a_data = dict()
# add tag data to instance data
a_data.update({
k: v for k, v in tag_parsed_data.items()
if k not in ("id", "applieswhole", "label")
})
# create main attributes
subset = "audioMain"
family = "audio"
families = ["clip", "ftrack"]
families.insert(0, str(family))
name = "{} {} {}".format(asset, subset, families)
a_data.update({
"name": name,
"subset": subset,
"asset": asset,
"family": family,
"families": families,
"item": _ti,
# tags
"tags": _ti.tags(),
})
a_instance = context.create_instance(**a_data)
self.log.info("Creating audio instance: {}".format(a_instance))
@staticmethod
def clip_annotations(clip):
"""
Returns list of Clip's hiero.core.Annotation
"""
annotations = []
subTrackItems = flatten(clip.subTrackItems())
annotations += [item for item in subTrackItems if isinstance(
item, hiero.core.Annotation)]
return annotations
@staticmethod
def clip_subtrack(clip):
"""
Returns list of Clip's hiero.core.SubTrackItem
"""
subtracks = []
subTrackItems = flatten(clip.parent().subTrackItems())
for item in subTrackItems:
# avoid all annotations
if isinstance(item, hiero.core.Annotation):
continue
# avoid all disabled items
if not item.isEnabled():
continue
subtracks.append(item)
return subtracks
@staticmethod
def collect_sub_track_items(tracks):
"""
Returns dictionary with track index as key and list of subtracks
"""
# collect all subtrack items
sub_track_items = dict()
for track in tracks:
items = track.items()
# skip if no clips on track > need track with effect only
if items:
continue
# skip all disabled tracks
if not track.isEnabled():
continue
track_index = track.trackIndex()
_sub_track_items = flatten(track.subTrackItems())
# continue only if any subtrack items are collected
if len(_sub_track_items) < 1:
continue
enabled_sti = list()
# loop all found subtrack items and check if they are enabled
for _sti in _sub_track_items:
# checking if not enabled
if not _sti.isEnabled():
continue
if isinstance(_sti, hiero.core.Annotation):
continue
# collect the subtrack item
enabled_sti.append(_sti)
# continue only if any subtrack items are collected
if len(enabled_sti) < 1:
continue
# add collection of subtrackitems to dict
sub_track_items[track_index] = enabled_sti
return sub_track_items
View file
@@ -1,74 +0,0 @@
import os
import pyblish.api
from openpype.hosts.hiero import api as phiero
from openpype.pipeline import legacy_io
class PreCollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
label = "Pre-collect Workfile"
order = pyblish.api.CollectorOrder - 0.51
def process(self, context):
asset = legacy_io.Session["AVALON_ASSET"]
subset = "workfile"
project = phiero.get_current_project()
active_sequence = phiero.get_current_sequence()
video_tracks = active_sequence.videoTracks()
audio_tracks = active_sequence.audioTracks()
current_file = project.path()
staging_dir = os.path.dirname(current_file)
base_name = os.path.basename(current_file)
# get workfile's colorspace properties
_clrs = {}
_clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride() # noqa
_clrs["lutSetting16Bit"] = project.lutSetting16Bit()
_clrs["lutSetting8Bit"] = project.lutSetting8Bit()
_clrs["lutSettingFloat"] = project.lutSettingFloat()
_clrs["lutSettingLog"] = project.lutSettingLog()
_clrs["lutSettingViewer"] = project.lutSettingViewer()
_clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
_clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
_clrs["ocioConfigName"] = project.ocioConfigName()
_clrs["ocioConfigPath"] = project.ocioConfigPath()
# set main project attributes to context
context.data["activeProject"] = project
context.data["activeSequence"] = active_sequence
context.data["videoTracks"] = video_tracks
context.data["audioTracks"] = audio_tracks
context.data["currentFile"] = current_file
context.data["colorspace"] = _clrs
self.log.info("currentFile: {}".format(current_file))
# creating workfile representation
representation = {
'name': 'hrox',
'ext': 'hrox',
'files': base_name,
"stagingDir": staging_dir,
}
instance_data = {
"name": "{}_{}".format(asset, subset),
"asset": asset,
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile",
# version data
"versionData": {
"colorspace": _clrs
},
# source attribute
"sourcePath": current_file,
"representations": [representation]
}
instance = context.create_instance(**instance_data)
self.log.info("Creating instance: {}".format(instance))
View file
@@ -6,7 +6,7 @@ from openpype.pipeline import load
 class SetFrameRangeLoader(load.LoaderPlugin):
-    """Set Houdini frame range"""
+    """Set frame range excluding pre- and post-handles"""

     families = [
         "animation",
@@ -44,7 +44,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
 class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Set Maya frame range including pre- and post-handles"""
+    """Set frame range including pre- and post-handles"""

     families = [
         "animation",
View file
@@ -7,7 +7,7 @@ from openpype.hosts.houdini.api import pipeline
 class AbcLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load Alembic"""

     families = ["model", "animation", "pointcache", "gpuCache"]
     label = "Load Alembic"
View file
@@ -0,0 +1,75 @@
import os
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.houdini.api import pipeline
class AbcArchiveLoader(load.LoaderPlugin):
"""Load Alembic as full geometry network hierarchy """
families = ["model", "animation", "pointcache", "gpuCache"]
label = "Load Alembic as Archive"
representations = ["abc"]
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
import hou
# Format file name, Houdini only wants forward slashes
file_path = os.path.normpath(self.fname)
file_path = file_path.replace("\\", "/")
# Get the root node
obj = hou.node("/obj")
# Define node name
namespace = namespace if namespace else context["asset"]["name"]
node_name = "{}_{}".format(namespace, name) if namespace else name
# Create an Alembic archive node
node = obj.createNode("alembicarchive", node_name=node_name)
node.moveToGoodPosition()
# TODO: add FPS of project / asset
node.setParms({"fileName": file_path,
"channelRef": True})
# Apply some magic
node.parm("buildHierarchy").pressButton()
node.moveToGoodPosition()
nodes = [node]
self[:] = nodes
return pipeline.containerise(node_name,
namespace,
nodes,
context,
self.__class__.__name__,
suffix="")
def update(self, container, representation):
node = container["node"]
# Update the file path
file_path = get_representation_path(representation)
file_path = file_path.replace("\\", "/")
# Update attributes
node.setParms({"fileName": file_path,
"representation": str(representation["_id"])})
# Rebuild
node.parm("buildHierarchy").pressButton()
def remove(self, container):
node = container["node"]
node.destroy()
View file
@@ -0,0 +1,107 @@
# -*- coding: utf-8 -*-
import os
import re
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.houdini.api import pipeline
class BgeoLoader(load.LoaderPlugin):
"""Load bgeo files to Houdini."""
label = "Load bgeo"
families = ["model", "pointcache", "bgeo"]
representations = [
"bgeo", "bgeosc", "bgeogz",
"bgeo.sc", "bgeo.gz", "bgeo.lzma", "bgeo.bz2"]
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
import hou
# Get the root node
obj = hou.node("/obj")
# Define node name
namespace = namespace if namespace else context["asset"]["name"]
node_name = "{}_{}".format(namespace, name) if namespace else name
# Create a new geo node
container = obj.createNode("geo", node_name=node_name)
is_sequence = bool(context["representation"]["context"].get("frame"))
# Remove the file node, it only loads static meshes
# Houdini 17 has removed the file node from the geo node
file_node = container.node("file1")
if file_node:
file_node.destroy()
# Explicitly create a file node
file_node = container.createNode("file", node_name=node_name)
file_node.setParms({"file": self.format_path(self.fname, is_sequence)})
# Set display on last node
file_node.setDisplayFlag(True)
nodes = [container, file_node]
self[:] = nodes
return pipeline.containerise(
node_name,
namespace,
nodes,
context,
self.__class__.__name__,
suffix="",
)
@staticmethod
def format_path(path, is_sequence):
"""Format file path correctly for single bgeo or bgeo sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
# The path is either a single file or sequence in a folder.
if not is_sequence:
filename = path
print("single")
else:
filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
filename = os.path.join(path, filename)
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
return filename
def update(self, container, representation):
node = container["node"]
try:
file_node = next(
n for n in node.children() if n.type().name() == "file"
)
except StopIteration:
self.log.error("Could not find node of type `file`")
return
# Update the file path
file_path = get_representation_path(representation)
file_path = self.format_path(file_path)
file_node.setParms({"fileName": file_path})
# Update attribute
node.setParms({"representation": str(representation["_id"])})
def remove(self, container):
node = container["node"]
node.destroy()
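The sequence handling in `format_path` above rewrites an explicit frame number into Houdini's `$F4` frame token so the file SOP can step through the cache. A standalone sketch without the existence check (helper name and sample paths are illustrative):

```python
import re


def format_bgeo_path(path, is_sequence):
    """Return the path as-is for a single file, or with the frame number
    replaced by Houdini's $F4 token for a sequence."""
    if not is_sequence:
        return path.replace("\\", "/")
    # "name.0001.bgeo.sc" -> "name.$F4.bgeo.sc"
    filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
    return filename.replace("\\", "/")
```

The regex keys off the `.bgeo*` suffix, so multi-part extensions such as `bgeo.sc` or `bgeo.gz` survive the substitution intact.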
View file
@@ -78,7 +78,7 @@ def transfer_non_default_values(src, dest, ignore=None):
 class CameraLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load camera from an Alembic file"""

     families = ["camera"]
     label = "Load Camera (abc)"
View file
@@ -42,9 +42,9 @@ def get_image_avalon_container():
 class ImageLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load images into COP2"""

-    families = ["colorbleed.imagesequence"]
+    families = ["imagesequence"]
     label = "Load Image (COP2)"
     representations = ["*"]
     order = -10
View file
@@ -9,7 +9,7 @@ from openpype.hosts.houdini.api import pipeline
 class VdbLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Load VDB"""

     families = ["vdbcache"]
     label = "Load VDB"
View file
@@ -1,3 +1,7 @@
+import os
+import subprocess
+
+from openpype.lib.vendor_bin_utils import find_executable
 from openpype.pipeline import load
@@ -14,12 +18,7 @@ class ShowInUsdview(load.LoaderPlugin):
     def load(self, context, name=None, namespace=None, data=None):
-        import os
-        import subprocess
-
-        import avalon.lib as lib
-
-        usdview = lib.which("usdview")
+        usdview = find_executable("usdview")

         filepath = os.path.normpath(self.fname)
         filepath = filepath.replace("\\", "/")
View file
@@ -77,8 +77,10 @@ IMAGE_PREFIXES = {
     "arnold": "defaultRenderGlobals.imageFilePrefix",
     "renderman": "rmanGlobals.imageFileFormat",
     "redshift": "defaultRenderGlobals.imageFilePrefix",
+    "mayahardware2": "defaultRenderGlobals.imageFilePrefix"
 }
+RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"
 @attr.s
 class LayerMetadata(object):
@ -154,7 +156,8 @@ def get(layer, render_instance=None):
"arnold": RenderProductsArnold, "arnold": RenderProductsArnold,
"vray": RenderProductsVray, "vray": RenderProductsVray,
"redshift": RenderProductsRedshift, "redshift": RenderProductsRedshift,
"renderman": RenderProductsRenderman "renderman": RenderProductsRenderman,
"mayahardware2": RenderProductsMayaHardware
}.get(renderer_name.lower(), None) }.get(renderer_name.lower(), None)
if renderer is None: if renderer is None:
raise UnsupportedRendererException( raise UnsupportedRendererException(
@ -1054,6 +1057,8 @@ class RenderProductsRenderman(ARenderProducts):
:func:`ARenderProducts.get_render_products()` :func:`ARenderProducts.get_render_products()`
""" """
from rfm2.api.displays import get_displays # noqa
cameras = [ cameras = [
self.sanitize_camera_name(c) self.sanitize_camera_name(c)
for c in self.get_renderable_cameras() for c in self.get_renderable_cameras()
@ -1066,47 +1071,127 @@ class RenderProductsRenderman(ARenderProducts):
] ]
products = [] products = []
default_ext = "exr" # NOTE: This is guessing extensions from renderman display types.
displays = cmds.listConnections("rmanGlobals.displays") # Some of them are just framebuffers, d_texture format can be
for aov in displays: # set in display setting. We set those now to None, but it
enabled = self._get_attr(aov, "enabled") # should be handled more gracefully.
display_types = {
"d_deepexr": "exr",
"d_it": None,
"d_null": None,
"d_openexr": "exr",
"d_png": "png",
"d_pointcloud": "ptc",
"d_targa": "tga",
"d_texture": None,
"d_tiff": "tif"
}
displays = get_displays()["displays"]
for name, display in displays.items():
enabled = display["params"]["enable"]["value"]
if not enabled: if not enabled:
continue continue
aov_name = str(aov) # Skip display types not producing any file output.
# Is there a better way to do it?
if not display_types.get(display["driverNode"]["type"]):
continue
aov_name = name
if aov_name == "rmanDefaultDisplay": if aov_name == "rmanDefaultDisplay":
aov_name = "beauty" aov_name = "beauty"
extensions = display_types.get(
display["driverNode"]["type"], "exr")
for camera in cameras: for camera in cameras:
product = RenderProduct(productName=aov_name, product = RenderProduct(productName=aov_name,
ext=default_ext, ext=extensions,
camera=camera) camera=camera)
products.append(product) products.append(product)
return products return products
def get_files(self, product, camera): def get_files(self, product):
"""Get expected files. """Get expected files.
In renderman we hack it with prepending path. This path would
normally be translated from `rmanGlobals.imageOutputDir`. We skip
this and hardcode prepend path we expect. There is no place for user
to mess around with this settings anyway and it is enforced in
render settings validator.
""" """
files = super(RenderProductsRenderman, self).get_files(product, camera) files = super(RenderProductsRenderman, self).get_files(product)
layer_data = self.layer_data layer_data = self.layer_data
new_files = [] new_files = []
resolved_image_dir = re.sub("<scene>", layer_data.sceneName, RENDERMAN_IMAGE_DIR, flags=re.IGNORECASE) # noqa: E501
resolved_image_dir = re.sub("<layer>", layer_data.layerName, resolved_image_dir, flags=re.IGNORECASE) # noqa: E501
for file in files: for file in files:
new_file = "{}/{}/{}".format( new_file = "{}/{}".format(resolved_image_dir, file)
layer_data["sceneName"], layer_data["layerName"], file
)
new_files.append(new_file) new_files.append(new_file)
return new_files return new_files
class RenderProductsMayaHardware(ARenderProducts):
"""Expected files for MayaHardware renderer."""
renderer = "mayahardware2"
extensions = [
{"label": "JPEG", "index": 8, "extension": "jpg"},
{"label": "PNG", "index": 32, "extension": "png"},
{"label": "EXR(exr)", "index": 40, "extension": "exr"}
]
def _get_extension(self, value):
result = None
if isinstance(value, int):
extensions = {
extension["index"]: extension["extension"]
for extension in self.extensions
}
try:
result = extensions[value]
except KeyError:
raise NotImplementedError(
"Could not find extension for {}".format(value)
)
if isinstance(value, six.string_types):
extensions = {
extension["label"]: extension["extension"]
for extension in self.extensions
}
try:
result = extensions[value]
except KeyError:
raise NotImplementedError(
"Could not find extension for {}".format(value)
)
if not result:
raise NotImplementedError(
"Could not find extension for {}".format(value)
)
return result
def get_render_products(self):
"""Get all AOVs.
See Also:
:func:`ARenderProducts.get_render_products()`
"""
ext = self._get_extension(
self._get_attr("defaultRenderGlobals.imageFormat")
)
products = []
for cam in self.get_renderable_cameras():
product = RenderProduct(productName="beauty", ext=ext, camera=cam)
products.append(product)
return products
class AOVError(Exception): class AOVError(Exception):
"""Custom exception for determining AOVs.""" """Custom exception for determining AOVs."""
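The new `RenderProductsMayaHardware._get_extension` above maps Maya's `imageFormat` setting, whether it arrives as a numeric index or a UI label, to a file extension. A minimal standalone sketch of that lookup (hypothetical function name, no Maya required):

```python
# Sketch of the index/label -> extension lookup performed by
# RenderProductsMayaHardware._get_extension; runs without Maya.
EXTENSIONS = [
    {"label": "JPEG", "index": 8, "extension": "jpg"},
    {"label": "PNG", "index": 32, "extension": "png"},
    {"label": "EXR(exr)", "index": 40, "extension": "exr"},
]


def get_extension(value):
    """Resolve an extension from the imageFormat index (int) or label (str)."""
    key = "index" if isinstance(value, int) else "label"
    lookup = {entry[key]: entry["extension"] for entry in EXTENSIONS}
    try:
        return lookup[value]
    except KeyError:
        raise NotImplementedError(
            "Could not find extension for {}".format(value))
```

Unknown indices raise `NotImplementedError`, mirroring the plugin's behavior.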


@@ -66,13 +66,23 @@ def install():
     log.info("Installing callbacks ... ")
     register_event_callback("init", on_init)

-    # Callbacks below are not required for headless mode, the `init` however
-    # is important to load referenced Alembics correctly at rendertime.
+    if os.environ.get("HEADLESS_PUBLISH"):
+        # Maya launched on farm, lib.IS_HEADLESS might be triggered locally too
+        # target "farm" == rendering on farm, expects OPENPYPE_PUBLISH_DATA
+        # target "remote" == remote execution
+        print("Registering pyblish target: remote")
+        pyblish.api.register_target("remote")
+        return
+
     if lib.IS_HEADLESS:
         log.info(("Running in headless mode, skipping Maya "
                   "save/open/new callback installation.."))
         return

+    print("Registering pyblish target: local")
+    pyblish.api.register_target("local")
+
     _set_project()
     _register_callbacks()


@@ -10,7 +10,8 @@ from openpype.pipeline import (
     get_representation_path,
     AVALON_CONTAINER_ID,
 )
-
+from openpype.api import Anatomy
+from openpype.settings import get_project_settings
 from .pipeline import containerise
 from . import lib
@@ -230,6 +231,10 @@ class ReferenceLoader(Loader):
                 self.log.debug("No alembic nodes found in {}".format(members))

         try:
+            path = self.prepare_root_value(path,
+                                           representation["context"]
+                                           ["project"]
+                                           ["code"])
             content = cmds.file(path,
                                 loadReference=reference_node,
                                 type=file_type,
@@ -319,6 +324,29 @@ class ReferenceLoader(Loader):
             except RuntimeError:
                 pass

+    def prepare_root_value(self, file_url, project_name):
+        """Replace root value with env var placeholder.
+
+        Use ${OPENPYPE_ROOT_WORK} (or any other root) instead of proper root
+        value when storing referenced url into a workfile.
+        Useful for remote workflows with SiteSync.
+
+        Args:
+            file_url (str)
+            project_name (dict)
+        Returns:
+            (str)
+        """
+        settings = get_project_settings(project_name)
+        use_env_var_as_root = (settings["maya"]
+                                       ["maya-dirmap"]
+                                       ["use_env_var_as_root"])
+        if use_env_var_as_root:
+            anatomy = Anatomy(project_name)
+            file_url = anatomy.replace_root_with_env_key(file_url, '${{{}}}')
+
+        return file_url
+
     @staticmethod
     def _organize_containers(nodes, container):
         # type: (list, str) -> None
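`prepare_root_value` above defers the actual substitution to `Anatomy.replace_root_with_env_key`. The intent can be sketched as a plain string prefix swap; the helper below and its root mapping are assumptions for illustration, not the Anatomy API:

```python
def replace_root_with_env_key(file_url, roots, template="${{{}}}"):
    """Swap a configured root prefix for an env-var placeholder.

    `roots` maps env-var keys to root paths, e.g. the assumed
    {"OPENPYPE_ROOT_WORK": "/mnt/work"}. Paths outside any known
    root are returned unchanged.
    """
    for key, root in roots.items():
        if file_url.startswith(root):
            # "${{{}}}".format(key) renders "${KEY}"
            return template.format(key) + file_url[len(root):]
    return file_url
```

Storing `${OPENPYPE_ROOT_WORK}/...` instead of the absolute path lets each SiteSync site expand the reference against its own root at open time.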


@@ -38,3 +38,7 @@ class CreateAnimation(plugin.Creator):

         # Default to exporting world-space
         self.data["worldSpace"] = True
+
+        # Default to not send to farm.
+        self.data["farm"] = False
+        self.data["priority"] = 50


@@ -28,3 +28,7 @@ class CreatePointCache(plugin.Creator):
         # Add options for custom attributes
         self.data["attr"] = ""
         self.data["attrPrefix"] = ""
+
+        # Default to not send to farm.
+        self.data["farm"] = False
+        self.data["priority"] = 50


@@ -76,16 +76,20 @@ class CreateRender(plugin.Creator):
         'mentalray': 'defaultRenderGlobals.imageFilePrefix',
         'vray': 'vraySettings.fileNamePrefix',
         'arnold': 'defaultRenderGlobals.imageFilePrefix',
-        'renderman': 'defaultRenderGlobals.imageFilePrefix',
-        'redshift': 'defaultRenderGlobals.imageFilePrefix'
+        'renderman': 'rmanGlobals.imageFileFormat',
+        'redshift': 'defaultRenderGlobals.imageFilePrefix',
+        'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
     }

     _image_prefixes = {
         'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
         'vray': 'maya/<scene>/<Layer>/<Layer>',
         'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
-        'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',  # noqa
-        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>'  # noqa
+        # this needs `imageOutputDir`
+        # (<ws>/renders/maya/<scene>) set separately
+        'renderman': '<layer>_<aov>.<f4>.<ext>',
+        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
+        'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',  # noqa
     }

     _aov_chars = {
@@ -440,6 +444,10 @@ class CreateRender(plugin.Creator):
             self._set_global_output_settings()

+        if renderer == "renderman":
+            cmds.setAttr("rmanGlobals.imageOutputDir",
+                         "maya/<scene>/<layer>", type="string")
+
     def _set_vray_settings(self, asset):
         # type: (dict) -> None
         """Sets important settings for Vray."""


@@ -2,7 +2,7 @@ import openpype.hosts.maya.api.plugin

 class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Loader to reference an Alembic file"""

     families = ["animation",
                 "camera",
@@ -35,8 +35,9 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         # hero_001 (abc)
         #   asset_counter{optional}

-        nodes = cmds.file(self.fname,
+        file_url = self.prepare_root_value(self.fname,
+                                           context["project"]["code"])
+        nodes = cmds.file(file_url,
                           namespace=namespace,
                           sharedReferenceFile=False,
                           groupReference=True,


@@ -1,7 +1,7 @@
 """A module containing generic loader actions that will display in the Loader.

 """
-
+import qargparse
 from openpype.pipeline import load
 from openpype.hosts.maya.api.lib import (
     maintained_selection,
@@ -10,7 +10,7 @@ from openpype.hosts.maya.api.lib import (

 class SetFrameRangeLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range excluding pre- and post-handles"""

     families = ["animation",
                 "camera",
@@ -44,7 +44,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):

 class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
-    """Specific loader of Alembic for the avalon.animation family"""
+    """Set frame range including pre- and post-handles"""

     families = ["animation",
                 "camera",
@@ -98,6 +98,15 @@ class ImportMayaLoader(load.LoaderPlugin):
     icon = "arrow-circle-down"
     color = "#775555"

+    options = [
+        qargparse.Boolean(
+            "clean_import",
+            label="Clean import",
+            default=False,
+            help="Should all occurences of cbId be purged?"
+        )
+    ]
+
     def load(self, context, name=None, namespace=None, data=None):
         import maya.cmds as cmds
@@ -114,13 +123,22 @@ class ImportMayaLoader(load.LoaderPlugin):
         )

         with maintained_selection():
-            cmds.file(self.fname,
-                      i=True,
-                      preserveReferences=True,
-                      namespace=namespace,
-                      returnNewNodes=True,
-                      groupReference=True,
-                      groupName="{}:{}".format(namespace, name))
+            nodes = cmds.file(self.fname,
+                              i=True,
+                              preserveReferences=True,
+                              namespace=namespace,
+                              returnNewNodes=True,
+                              groupReference=True,
+                              groupName="{}:{}".format(namespace, name))
+
+        if data.get("clean_import", False):
+            remove_attributes = ["cbId"]
+            for node in nodes:
+                for attr in remove_attributes:
+                    if cmds.attributeQuery(attr, node=node, exists=True):
+                        full_attr = "{}.{}".format(node, attr)
+                        print("Removing {}".format(full_attr))
+                        cmds.deleteAttr(full_attr)

         # We do not containerize imported content, it remains unmanaged
         return
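The `clean_import` option above walks every imported node and deletes matching custom attributes. The selection step can be isolated as a pure function (hypothetical helper, no Maya needed) that decides which `node.attr` paths would be deleted:

```python
def attrs_to_delete(node_attrs, remove_attributes=("cbId",)):
    """Given a mapping of node -> its existing attribute names, return
    the full `node.attr` paths the clean-import pass would delete."""
    full_attrs = []
    for node, attrs in node_attrs.items():
        for attr in attrs:
            if attr in remove_attributes:
                full_attrs.append("{}.{}".format(node, attr))
    return full_attrs
```

In the loader, `cmds.attributeQuery(..., exists=True)` plays the role of the membership test before `cmds.deleteAttr` is called.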


@@ -16,7 +16,7 @@ from openpype.hosts.maya.api.pipeline import containerise

 class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Load the Proxy"""
+    """Load Arnold Proxy as reference"""

     families = ["ass"]
     representations = ["ass"]
@@ -64,9 +64,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             path = os.path.join(publish_folder, filename)
             proxyPath = proxyPath_base + ".ma"
-            self.log.info

-            nodes = cmds.file(proxyPath,
+            file_url = self.prepare_root_value(proxyPath,
+                                               context["project"]["code"])
+
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               reference=True,
                               returnNewNodes=True,
@@ -123,7 +125,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         assert os.path.exists(proxyPath), "%s does not exist." % proxyPath

         try:
-            content = cmds.file(proxyPath,
+            file_url = self.prepare_root_value(proxyPath,
+                                               representation["context"]
+                                               ["project"]
+                                               ["code"])
+            content = cmds.file(file_url,
                                 loadReference=reference_node,
                                 type="mayaAscii",
                                 returnNewNodes=True)


@@ -8,7 +8,7 @@ from openpype.api import get_project_settings

 class GpuCacheLoader(load.LoaderPlugin):
-    """Load model Alembic as gpuCache"""
+    """Load Alembic as gpuCache"""

     families = ["model"]
     representations = ["abc"]


@@ -83,7 +83,7 @@ class ImagePlaneLoader(load.LoaderPlugin):
     families = ["image", "plate", "render"]
     label = "Load imagePlane"
-    representations = ["mov", "exr", "preview", "png"]
+    representations = ["mov", "exr", "preview", "png", "jpg"]
     icon = "image"
     color = "orange"


@@ -31,7 +31,9 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         import maya.cmds as cmds

         with lib.maintained_selection():
-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               reference=True,
                               returnNewNodes=True)


@@ -12,7 +12,7 @@ from openpype.hosts.maya.api.lib import maintained_selection

 class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
-    """Load the model"""
+    """Reference file"""

     families = ["model",
                 "pointcache",
@@ -51,7 +51,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         with maintained_selection():
             cmds.loadPlugin("AbcImport.mll", quiet=True)

-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               sharedReferenceFile=False,
                               reference=True,


@@ -74,6 +74,7 @@ def _fix_duplicate_vvg_callbacks():

 class LoadVDBtoVRay(load.LoaderPlugin):
+    """Load OpenVDB in a V-Ray Volume Grid"""

     families = ["vdbcache"]
     representations = ["vdb"]


@@ -53,7 +53,9 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

         # load rig
         with lib.maintained_selection():
-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               reference=True,
                               returnNewNodes=True,


@@ -55,3 +55,6 @@ class CollectAnimationOutputGeometry(pyblish.api.InstancePlugin):

         # Store data in the instance for the validator
         instance.data["out_hierarchy"] = hierarchy
+
+        if instance.data.get("farm"):
+            instance.data["families"].append("publish.farm")


@@ -0,0 +1,20 @@
+# -*- coding: utf-8 -*-
+from maya import cmds  # noqa
+import pyblish.api
+
+
+class CollectFbxCamera(pyblish.api.InstancePlugin):
+    """Collect Camera for FBX export."""
+
+    order = pyblish.api.CollectorOrder + 0.2
+    label = "Collect Camera for FBX export"
+    families = ["camera"]
+
+    def process(self, instance):
+        if not instance.data.get("families"):
+            instance.data["families"] = []
+
+        if "fbx" not in instance.data["families"]:
+            instance.data["families"].append("fbx")
+
+        instance.data["cameras"] = True


@@ -22,10 +22,46 @@ RENDERER_NODE_TYPES = [
     # redshift
     "RedshiftMeshParameters"
 ]

 SHAPE_ATTRS = set(SHAPE_ATTRS)


+def get_pxr_multitexture_file_attrs(node):
+    attrs = []
+    for i in range(9):
+        if cmds.attributeQuery("filename{}".format(i), node=node, ex=True):
+            file = cmds.getAttr("{}.filename{}".format(node, i))
+            if file:
+                attrs.append("filename{}".format(i))
+    return attrs
+
+
+FILE_NODES = {
+    "file": "fileTextureName",
+    "aiImage": "filename",
+    "RedshiftNormalMap": "tex0",
+    "PxrBump": "filename",
+    "PxrNormalMap": "filename",
+    "PxrMultiTexture": get_pxr_multitexture_file_attrs,
+    "PxrPtexture": "filename",
+    "PxrTexture": "filename"
+}
+
+
+def get_attributes(dictionary, attr, node=None):
+    # type: (dict, str, str) -> list
+    if callable(dictionary[attr]):
+        val = dictionary[attr](node)
+    else:
+        val = dictionary.get(attr, [])
+
+    if not isinstance(val, list):
+        return [val]
+    return val
+
+
 def get_look_attrs(node):
     """Returns attributes of a node that are important for the look.
@@ -51,15 +87,14 @@ def get_look_attrs(node):
     if cmds.objectType(node, isAType="shape"):
         attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
         for attr in attrs:
-            if attr in SHAPE_ATTRS:
-                result.append(attr)
-            elif attr.startswith('ai'):
+            if attr in SHAPE_ATTRS or \
+                    attr not in SHAPE_ATTRS and attr.startswith('ai'):
                 result.append(attr)
     return result


-def node_uses_image_sequence(node):
+def node_uses_image_sequence(node, node_path):
+    # type: (str, str) -> bool
     """Return whether file node uses an image sequence or single image.

     Determine if a node uses an image sequence or just a single image,
@@ -74,12 +109,15 @@ def node_uses_image_sequence(node):
     """

     # useFrameExtension indicates an explicit image sequence
-    node_path = get_file_node_path(node).lower()

     # The following tokens imply a sequence
-    patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
+    patterns = ["<udim>", "<tile>", "<uvtile>",
+                "u<u>_v<v>", "<frame0", "<f4>"]

-    return (cmds.getAttr('%s.useFrameExtension' % node) or
+    try:
+        use_frame_extension = cmds.getAttr('%s.useFrameExtension' % node)
+    except ValueError:
+        use_frame_extension = False
+
+    return (use_frame_extension or
             any(pattern in node_path for pattern in patterns))
@@ -137,14 +175,15 @@ def seq_to_glob(path):
         return path


-def get_file_node_path(node):
+def get_file_node_paths(node):
+    # type: (str) -> list
     """Get the file path used by a Maya file node.

     Args:
         node (str): Name of the Maya file node

     Returns:
-        str: the file path in use
+        list: the file paths in use

     """
     # if the path appears to be sequence, use computedFileTextureNamePattern,
@@ -163,15 +202,20 @@ def get_file_node_path(node):
                     "<uvtile>"]
         lower = texture_pattern.lower()
         if any(pattern in lower for pattern in patterns):
-            return texture_pattern
+            return [texture_pattern]

-    if cmds.nodeType(node) == 'aiImage':
-        return cmds.getAttr('{0}.filename'.format(node))
-    if cmds.nodeType(node) == 'RedshiftNormalMap':
-        return cmds.getAttr('{}.tex0'.format(node))
+    try:
+        file_attributes = get_attributes(
+            FILE_NODES, cmds.nodeType(node), node)
+    except AttributeError:
+        file_attributes = "fileTextureName"

-    # otherwise use fileTextureName
-    return cmds.getAttr('{0}.fileTextureName'.format(node))
+    files = []
+    for file_attr in file_attributes:
+        if cmds.attributeQuery(file_attr, node=node, exists=True):
+            files.append(cmds.getAttr("{}.{}".format(node, file_attr)))
+
+    return files


 def get_file_node_files(node):
@@ -185,16 +229,21 @@ def get_file_node_files(node):
         list: List of full file paths.

     """
+    paths = get_file_node_paths(node)
+
+    sequences = []
+    replaces = []
+    for index, path in enumerate(paths):
+        if node_uses_image_sequence(node, path):
+            glob_pattern = seq_to_glob(path)
+            sequences.extend(glob.glob(glob_pattern))
+            replaces.append(index)

-    path = get_file_node_path(node)
-    path = cmds.workspace(expandName=path)
-    if node_uses_image_sequence(node):
-        glob_pattern = seq_to_glob(path)
-        return glob.glob(glob_pattern)
-    elif os.path.exists(path):
-        return [path]
-    else:
-        return []
+    for index in replaces:
+        paths.pop(index)
+
+    paths.extend(sequences)
+
+    return [p for p in paths if os.path.exists(p)]


 class CollectLook(pyblish.api.InstancePlugin):
@@ -238,13 +287,13 @@ class CollectLook(pyblish.api.InstancePlugin):
                        "for %s" % instance.data['name'])

         # Discover related object sets
-        self.log.info("Gathering sets..")
+        self.log.info("Gathering sets ...")
         sets = self.collect_sets(instance)

         # Lookup set (optimization)
         instance_lookup = set(cmds.ls(instance, long=True))

-        self.log.info("Gathering set relations..")
+        self.log.info("Gathering set relations ...")
         # Ensure iteration happen in a list so we can remove keys from the
         # dict within the loop
@@ -326,7 +375,10 @@ class CollectLook(pyblish.api.InstancePlugin):
                      "volumeShader",
                      "displacementShader",
                      "aiSurfaceShader",
-                     "aiVolumeShader"]
+                     "aiVolumeShader",
+                     "rman__surface",
+                     "rman__displacement"
+                     ]
         if look_sets:
             materials = []
@@ -374,15 +426,17 @@ class CollectLook(pyblish.api.InstancePlugin):
                 or []
             )

-        files = cmds.ls(history, type="file", long=True)
-        files.extend(cmds.ls(history, type="aiImage", long=True))
-        files.extend(cmds.ls(history, type="RedshiftNormalMap", long=True))
+        all_supported_nodes = FILE_NODES.keys()
+        files = []
+        for node_type in all_supported_nodes:
+            files.extend(cmds.ls(history, type=node_type, long=True))

         self.log.info("Collected file nodes:\n{}".format(files))
         # Collect textures if any file nodes are found
         instance.data["resources"] = []
         for n in files:
-            instance.data["resources"].append(self.collect_resource(n))
+            for res in self.collect_resources(n):
+                instance.data["resources"].append(res)

         self.log.info("Collected resources: {}".format(instance.data["resources"]))
@@ -502,7 +556,7 @@ class CollectLook(pyblish.api.InstancePlugin):

         return attributes

-    def collect_resource(self, node):
+    def collect_resources(self, node):
         """Collect the link to the file(s) used (resource)
         Args:
             node (str): name of the node
@@ -510,68 +564,69 @@ class CollectLook(pyblish.api.InstancePlugin):
         Returns:
             dict
         """
         self.log.debug("processing: {}".format(node))
-        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
+        all_supported_nodes = FILE_NODES.keys()
+        if cmds.nodeType(node) not in all_supported_nodes:
             self.log.error(
                 "Unsupported file node: {}".format(cmds.nodeType(node)))
             raise AssertionError("Unsupported file node")

-        if cmds.nodeType(node) == 'file':
-            self.log.debug("  - file node")
-            attribute = "{}.fileTextureName".format(node)
-            computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        elif cmds.nodeType(node) == 'aiImage':
-            self.log.debug("aiImage node")
-            attribute = "{}.filename".format(node)
-            computed_attribute = attribute
-        elif cmds.nodeType(node) == 'RedshiftNormalMap':
-            self.log.debug("RedshiftNormalMap node")
-            attribute = "{}.tex0".format(node)
-            computed_attribute = attribute
+        self.log.debug("  - got {}".format(cmds.nodeType(node)))

-        source = cmds.getAttr(attribute)
-        self.log.info("  - file source: {}".format(source))
-        color_space_attr = "{}.colorSpace".format(node)
-        try:
-            color_space = cmds.getAttr(color_space_attr)
-        except ValueError:
-            # node doesn't have colorspace attribute
-            color_space = "Raw"
-        # Compare with the computed file path, e.g. the one with the <UDIM>
-        # pattern in it, to generate some logging information about this
-        # difference
-        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        computed_source = cmds.getAttr(computed_attribute)
-        if source != computed_source:
-            self.log.debug("Detected computed file pattern difference "
-                           "from original pattern: {0} "
-                           "({1} -> {2})".format(node,
-                                                 source,
-                                                 computed_source))
-
-        # We replace backslashes with forward slashes because V-Ray
-        # can't handle the UDIM files with the backslashes in the
-        # paths as the computed patterns
-        source = source.replace("\\", "/")
-
-        files = get_file_node_files(node)
-        if len(files) == 0:
-            self.log.error("No valid files found from node `%s`" % node)
-
-        self.log.info("collection of resource done:")
-        self.log.info("  - node: {}".format(node))
-        self.log.info("  - attribute: {}".format(attribute))
-        self.log.info("  - source: {}".format(source))
-        self.log.info("  - file: {}".format(files))
-        self.log.info("  - color space: {}".format(color_space))
-
-        # Define the resource
-        return {"node": node,
-                "attribute": attribute,
-                "source": source,  # required for resources
-                "files": files,
-                "color_space": color_space}  # required for resources
+        attributes = get_attributes(FILE_NODES, cmds.nodeType(node), node)
+        for attribute in attributes:
+            source = cmds.getAttr("{}.{}".format(
+                node,
+                attribute
+            ))
+            computed_attribute = "{}.{}".format(node, attribute)
+            if attribute == "fileTextureName":
+                computed_attribute = node + ".computedFileTextureNamePattern"
+
+            self.log.info("  - file source: {}".format(source))
+            color_space_attr = "{}.colorSpace".format(node)
+            try:
+                color_space = cmds.getAttr(color_space_attr)
+            except ValueError:
+                # node doesn't have colorspace attribute
+                color_space = "Raw"
+            # Compare with the computed file path, e.g. the one with
+            # the <UDIM> pattern in it, to generate some logging information
+            # about this difference
+            computed_source = cmds.getAttr(computed_attribute)
+            if source != computed_source:
+                self.log.debug("Detected computed file pattern difference "
+                               "from original pattern: {0} "
+                               "({1} -> {2})".format(node,
+                                                     source,
+                                                     computed_source))
+
+            # We replace backslashes with forward slashes because V-Ray
+            # can't handle the UDIM files with the backslashes in the
+            # paths as the computed patterns
+            source = source.replace("\\", "/")
+
+            files = get_file_node_files(node)
+            if len(files) == 0:
+                self.log.error("No valid files found from node `%s`" % node)
+
+            self.log.info("collection of resource done:")
+            self.log.info("  - node: {}".format(node))
+            self.log.info("  - attribute: {}".format(attribute))
+            self.log.info("  - source: {}".format(source))
+            self.log.info("  - file: {}".format(files))
+            self.log.info("  - color space: {}".format(color_space))
+
+            # Define the resource
+            yield {
+                "node": node,
+                # here we are passing not only attribute, but with node again
+                # this should be simplified and changed extractor.
+                "attribute": "{}.{}".format(node, attribute),
+                "source": source,  # required for resources
+                "files": files,
+                "color_space": color_space
+            }  # required for resources


 class CollectModelRenderSets(CollectLook):
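The `FILE_NODES`/`get_attributes` pair introduced above lets one table cover both simple file nodes (a single attribute name) and nodes like `PxrMultiTexture` whose attributes must be discovered per node (a callable). The dispatch can be exercised outside Maya by stubbing the callable:

```python
def get_pxr_multitexture_file_attrs(node):
    # Stand-in for the Maya-querying callable: here it returns a
    # hardcoded list instead of calling cmds.attributeQuery.
    return ["filename0", "filename1"]


FILE_NODES = {
    "file": "fileTextureName",
    "PxrMultiTexture": get_pxr_multitexture_file_attrs,
}


def get_attributes(dictionary, attr, node=None):
    # Entries are either a plain attribute name or a callable that
    # inspects the node; the result is always normalized to a list.
    if callable(dictionary[attr]):
        val = dictionary[attr](node)
    else:
        val = dictionary.get(attr, [])
    return val if isinstance(val, list) else [val]
```

Callers such as `get_file_node_paths` and `collect_resources` can then iterate the returned list uniformly, regardless of node type.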


@@ -0,0 +1,14 @@
+import pyblish.api
+
+
+class CollectPointcache(pyblish.api.InstancePlugin):
+    """Collect pointcache data for instance."""
+
+    order = pyblish.api.CollectorOrder + 0.4
+    families = ["pointcache"]
+    label = "Collect Pointcache"
+    hosts = ["maya"]
+
+    def process(self, instance):
+        if instance.data.get("farm"):
+            instance.data["families"].append("publish.farm")


@@ -326,8 +326,8 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 "byFrameStep": int(
                     self.get_render_attribute("byFrameStep",
                                               layer=layer_name)),
-                "renderer": self.get_render_attribute("currentRenderer",
-                                                      layer=layer_name),
+                "renderer": self.get_render_attribute(
+                    "currentRenderer", layer=layer_name).lower(),

                 # instance subset
                 "family": "renderlayer",
                 "families": ["renderlayer"],
@@ -339,9 +339,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 "source": filepath,
                 "expectedFiles": full_exp_files,
                 "publishRenderMetadataFolder": common_publish_meta_path,
-                "resolutionWidth": cmds.getAttr("defaultResolution.width"),
-                "resolutionHeight": cmds.getAttr("defaultResolution.height"),
-                "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
+                "resolutionWidth": lib.get_attr_in_layer(
+                    "defaultResolution.width", layer=layer_name
+                ),
+                "resolutionHeight": lib.get_attr_in_layer(
+                    "defaultResolution.height", layer=layer_name
+                ),
+                "pixelAspect": lib.get_attr_in_layer(
+                    "defaultResolution.pixelAspect", layer=layer_name
+                ),
                 "tileRendering": render_instance.data.get("tileRendering") or False,  # noqa: E501
                 "tilesX": render_instance.data.get("tilesX") or 2,
                 "tilesY": render_instance.data.get("tilesY") or 2,


@ -77,15 +77,14 @@ class CollectReview(pyblish.api.InstancePlugin):
instance.data['remove'] = True instance.data['remove'] = True
self.log.debug('instance data {}'.format(instance.data)) self.log.debug('instance data {}'.format(instance.data))
else: else:
if self.legacy: legacy_subset_name = task + 'Review'
instance.data['subset'] = task + 'Review' asset_doc_id = instance.context.data['assetEntity']["_id"]
else: subsets = legacy_io.find({"type": "subset",
subset = "{}{}{}".format( "name": legacy_subset_name,
task, "parent": asset_doc_id}).distinct("_id")
instance.data["subset"][0].upper(), if len(list(subsets)) > 0:
instance.data["subset"][1:] self.log.debug("Existing subsets found, keep legacy name.")
) instance.data['subset'] = legacy_subset_name
instance.data['subset'] = subset
instance.data['review_camera'] = camera instance.data['review_camera'] = camera
instance.data['frameStartFtrack'] = \ instance.data['frameStartFtrack'] = \


@ -124,9 +124,15 @@ class CollectVrayScene(pyblish.api.InstancePlugin):
# Add source to allow tracing back to the scene from # Add source to allow tracing back to the scene from
# which was submitted originally # which was submitted originally
"source": context.data["currentFile"].replace("\\", "/"), "source": context.data["currentFile"].replace("\\", "/"),
"resolutionWidth": cmds.getAttr("defaultResolution.width"), "resolutionWidth": lib.get_attr_in_layer(
"resolutionHeight": cmds.getAttr("defaultResolution.height"), "defaultResolution.height", layer=layer_name
"pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"), ),
"resolutionHeight": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
"pixelAspect": lib.get_attr_in_layer(
"defaultResolution.pixelAspect", layer=layer_name
),
"priority": instance.data.get("priority"), "priority": instance.data.get("priority"),
"useMultipleSceneFiles": instance.data.get( "useMultipleSceneFiles": instance.data.get(
"vraySceneMultipleFiles") "vraySceneMultipleFiles")


@ -16,13 +16,19 @@ class ExtractAnimation(openpype.api.Extractor):
Positions and normals, uvs, creases are preserved, but nothing more, Positions and normals, uvs, creases are preserved, but nothing more,
for plain and predictable point caches. for plain and predictable point caches.
Plugin can run locally or remotely (on a farm); if the instance is marked
with "farm" it is skipped in local processing and processed on the farm instead.
""" """
label = "Extract Animation" label = "Extract Animation"
hosts = ["maya"] hosts = ["maya"]
families = ["animation"] families = ["animation"]
targets = ["local", "remote"]
def process(self, instance): def process(self, instance):
if instance.data.get("farm"):
self.log.debug("Should be processed on farm, skipping.")
return
# Collect the out set nodes # Collect the out set nodes
out_sets = [node for node in instance if node.endswith("out_SET")] out_sets = [node for node in instance if node.endswith("out_SET")]
@ -89,4 +95,6 @@ class ExtractAnimation(openpype.api.Extractor):
} }
instance.data["representations"].append(representation) instance.data["representations"].append(representation)
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname)) self.log.info("Extracted {} to {}".format(instance, dirname))


@ -372,10 +372,12 @@ class ExtractLook(openpype.api.Extractor):
if mode == COPY: if mode == COPY:
transfers.append((source, destination)) transfers.append((source, destination))
self.log.info('copying') self.log.info('file will be copied {} -> {}'.format(
source, destination))
elif mode == HARDLINK: elif mode == HARDLINK:
hardlinks.append((source, destination)) hardlinks.append((source, destination))
self.log.info('hardlinking') self.log.info('file will be hardlinked {} -> {}'.format(
source, destination))
# Store the hashes from hash to destination to include in the # Store the hashes from hash to destination to include in the
# database # database


@ -16,6 +16,8 @@ class ExtractAlembic(openpype.api.Extractor):
Positions and normals, uvs, creases are preserved, but nothing more, Positions and normals, uvs, creases are preserved, but nothing more,
for plain and predictable point caches. for plain and predictable point caches.
Plugin can run locally or remotely (on a farm); if the instance is marked
with "farm" it is skipped in local processing and processed on the farm instead.
""" """
label = "Extract Pointcache (Alembic)" label = "Extract Pointcache (Alembic)"
@ -23,8 +25,12 @@ class ExtractAlembic(openpype.api.Extractor):
families = ["pointcache", families = ["pointcache",
"model", "model",
"vrayproxy"] "vrayproxy"]
targets = ["local", "remote"]
def process(self, instance): def process(self, instance):
if instance.data.get("farm"):
self.log.debug("Should be processed on farm, skipping.")
return
nodes = instance[:] nodes = instance[:]
@ -92,4 +98,6 @@ class ExtractAlembic(openpype.api.Extractor):
} }
instance.data["representations"].append(representation) instance.data["representations"].append(representation)
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname)) self.log.info("Extracted {} to {}".format(instance, dirname))
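The farm hand-off used by both extractors above (a `targets` list plus an early return on the `"farm"` flag) can be sketched host-agnostically; the instance here is a plain dict standing in for a pyblish instance, so this is only an illustration of the pattern, not the real plugin API:

```python
class FarmAwareExtractor:
    """Minimal stand-in for the farm-aware extractor pattern above."""

    # The plugin is registered for both local and remote (farm) publishing.
    targets = ["local", "remote"]

    def process(self, instance):
        # Instances flagged for the farm are skipped locally; a farm job
        # picks them up later via the "remote" target instead.
        if instance.get("farm"):
            return "skipped locally, processed on farm"
        return "extracted locally"


extractor = FarmAwareExtractor()
print(extractor.process({"farm": True}))   # → skipped locally, processed on farm
print(extractor.process({"farm": False}))  # → extracted locally
```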


@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Errors found</title>
<description>
## Publish process has errors
At least one plugin failed before this one; the job won't be sent to Deadline for processing until all issues are fixed.
### How to repair?
Check all failing plugins (should be highlighted in red) and fix issues if possible.
</description>
</error>
</root>


@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Review subsets not unique</title>
<description>
## Non-unique subset name found
Non-unique subset names: '{non_unique}'
<detail>
### __Detailed Info__ (optional)
This might happen if you have already published a review subset
for this asset with the legacy name {task}Review.
That legacy name limits the possibility of publishing multiple
reviews from a single workfile. A proper review subset name should
now also contain the variant (such as 'Main', 'Default' etc.). That
would result in a completely new subset though, so this situation
must be handled manually.
</detail>
### How to repair?
Legacy subsets must be removed from the OpenPype DB; please ask an
admin to do that, providing the asset and subset names.
</description>
</error>
</root>


@ -30,6 +30,10 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin):
assert 'out_hierarchy' in instance.data, "Missing `out_hierarchy` data" assert 'out_hierarchy' in instance.data, "Missing `out_hierarchy` data"
out_sets = [node for node in instance if node.endswith("out_SET")]
msg = "Couldn't find exactly one out_SET: {0}".format(out_sets)
assert len(out_sets) == 1, msg
# All nodes in the `out_hierarchy` must be among the nodes that are # All nodes in the `out_hierarchy` must be among the nodes that are
# in the instance. The nodes in the instance are found from the top # in the instance. The nodes in the instance are found from the top
# group, as such this tests whether all nodes are under that top group. # group, as such this tests whether all nodes are under that top group.


@ -12,7 +12,8 @@ ImagePrefixes = {
'vray': 'vraySettings.fileNamePrefix', 'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix', 'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix', 'renderman': 'defaultRenderGlobals.imageFilePrefix',
'redshift': 'defaultRenderGlobals.imageFilePrefix' 'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
} }


@ -50,15 +50,17 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
'vray': 'vraySettings.fileNamePrefix', 'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix', 'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'rmanGlobals.imageFileFormat', 'renderman': 'rmanGlobals.imageFileFormat',
'redshift': 'defaultRenderGlobals.imageFilePrefix' 'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
} }
ImagePrefixTokens = { ImagePrefixTokens = {
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa: E501
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa 'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa: E501
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>', 'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
'vray': 'maya/<Scene>/<Layer>/<Layer>', 'vray': 'maya/<Scene>/<Layer>/<Layer>',
'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>' # noqa 'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>',
'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
} }
_aov_chars = { _aov_chars = {
@ -69,14 +71,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>{aov_separator}<RenderPass>" # noqa: E501 redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>{aov_separator}<RenderPass>" # noqa: E501
# WARNING: There is bug? in renderman, translating <scene> token renderman_dir_prefix = "maya/<scene>/<layer>"
# to something left behind mayas default image prefix. So instead
# `SceneName_v01` it translates to:
# `SceneName_v01/<RenderLayer>/<RenderLayers_<RenderPass>` that means
# for example:
# `SceneName_v01/Main/Main_<RenderPass>`. Possible solution is to define
# custom token like <scene_name> to point to determined scene name.
RendermanDirPrefix = "<ws>/renders/maya/<scene>/<layer>"
R_AOV_TOKEN = re.compile( R_AOV_TOKEN = re.compile(
r'%a|<aov>|<renderpass>', re.IGNORECASE) r'%a|<aov>|<renderpass>', re.IGNORECASE)
@ -116,15 +111,22 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
prefix = prefix.replace( prefix = prefix.replace(
"{aov_separator}", instance.data.get("aovSeparator", "_")) "{aov_separator}", instance.data.get("aovSeparator", "_"))
required_prefix = "maya/<scene>"
if not anim_override: if not anim_override:
invalid = True invalid = True
cls.log.error("Animation needs to be enabled. Use the same " cls.log.error("Animation needs to be enabled. Use the same "
"frame for start and end to render single frame") "frame for start and end to render single frame")
if not prefix.lower().startswith("maya/<scene>"): if renderer != "renderman" and not prefix.lower().startswith(
required_prefix):
invalid = True invalid = True
cls.log.error("Wrong image prefix [ {} ] - " cls.log.error(
"doesn't start with: 'maya/<scene>'".format(prefix)) ("Wrong image prefix [ {} ] "
" - doesn't start with: '{}'").format(
prefix, required_prefix)
)
if not re.search(cls.R_LAYER_TOKEN, prefix): if not re.search(cls.R_LAYER_TOKEN, prefix):
invalid = True invalid = True
@ -198,7 +200,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
invalid = True invalid = True
cls.log.error("Wrong image prefix [ {} ]".format(file_prefix)) cls.log.error("Wrong image prefix [ {} ]".format(file_prefix))
if dir_prefix.lower() != cls.RendermanDirPrefix.lower(): if dir_prefix.lower() != cls.renderman_dir_prefix.lower():
invalid = True invalid = True
cls.log.error("Wrong directory prefix [ {} ]".format( cls.log.error("Wrong directory prefix [ {} ]".format(
dir_prefix)) dir_prefix))
@ -234,7 +236,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
# load validation definitions from settings # load validation definitions from settings
validation_settings = ( validation_settings = (
instance.context.data["project_settings"]["maya"]["publish"]["ValidateRenderSettings"].get( # noqa: E501 instance.context.data["project_settings"]["maya"]["publish"]["ValidateRenderSettings"].get( # noqa: E501
"{}_render_attributes".format(renderer)) "{}_render_attributes".format(renderer)) or []
) )
# go through definitions and test if such node.attribute exists. # go through definitions and test if such node.attribute exists.
@ -304,7 +306,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
default_prefix, default_prefix,
type="string") type="string")
cmds.setAttr("rmanGlobals.imageOutputDir", cmds.setAttr("rmanGlobals.imageOutputDir",
cls.RendermanDirPrefix, cls.renderman_dir_prefix,
type="string") type="string")
if renderer == "vray": if renderer == "vray":


@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
import collections
import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError
class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
"""Validate that review subset names are unique."""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
families = ["review"]
label = "Validate Review Subset Unique"
def process(self, context):
subset_names = []
for instance in context:
self.log.info("instance:: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))
non_unique = \
[item
for item, count in collections.Counter(subset_names).items()
if count > 1]
msg = ("Instance subset names {} are not unique. ".format(non_unique) +
"Ask admin to remove subset from DB for multiple reviews.")
formatting_data = {
"non_unique": ",".join(non_unique)
}
if non_unique:
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)
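The duplicate check in the validator above reduces to a single `collections.Counter` pass; a standalone sketch with hypothetical subset names:

```python
import collections

def find_non_unique(subset_names):
    # Return every name that occurs more than once, preserving
    # first-seen order (Counter is insertion-ordered on Python 3.7+).
    return [name
            for name, count in collections.Counter(subset_names).items()
            if count > 1]

# Hypothetical subset names from two review instances plus a model.
names = ["compositingReview", "modelMain", "compositingReview"]
print(find_non_unique(names))  # → ['compositingReview']
```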


@ -0,0 +1,86 @@
import os
import re
import nuke
from openpype.api import Logger
log = Logger.get_logger(__name__)
class GizmoMenu():
def __init__(self, title, icon=None):
self.toolbar = self._create_toolbar_menu(
title,
icon=icon
)
self._script_actions = []
def _create_toolbar_menu(self, name, icon=None):
nuke_node_menu = nuke.menu("Nodes")
return nuke_node_menu.addMenu(
name,
icon=icon
)
def _make_menu_path(self, path, icon=None):
parent = self.toolbar
for folder in re.split(r"/|\\", path):
if not folder:
continue
existing_menu = parent.findItem(folder)
if existing_menu:
parent = existing_menu
else:
parent = parent.addMenu(folder, icon=icon)
return parent
def build_from_configuration(self, configuration):
for menu in configuration:
# Construct parent path else parent is toolbar
parent = self.toolbar
gizmo_toolbar_path = menu.get("gizmo_toolbar_path")
if gizmo_toolbar_path:
parent = self._make_menu_path(gizmo_toolbar_path)
for item in menu["sub_gizmo_list"]:
assert isinstance(item, dict), "Configuration is wrong!"
if not item.get("title"):
continue
item_type = item.get("sourcetype")
if item_type in ("python", "file"):
parent.addCommand(
item["title"],
command=str(item["command"]),
icon=item.get("icon"),
shortcut=item.get("hotkey")
)
# add separator
# Special behavior for separators
elif item_type == "separator":
parent.addSeparator()
# add submenu
# items should hold a collection of submenu items (dict)
elif item_type == "menu":
# assert "items" in item, "Menu is missing 'items' key"
parent.addMenu(
item['title'],
icon=item.get('icon')
)
def add_gizmo_path(self, gizmo_paths):
for gizmo_path in gizmo_paths:
if os.path.isdir(gizmo_path):
for folder in os.listdir(gizmo_path):
if os.path.isdir(os.path.join(gizmo_path, folder)):
nuke.pluginAddPath(os.path.join(gizmo_path, folder))
nuke.pluginAddPath(gizmo_path)
else:
log.warning("This path doesn't exist: {}".format(gizmo_path))
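A sketch of the configuration shape `build_from_configuration` consumes, inferred only from the keys it reads above (`gizmo_toolbar_path`, `sub_gizmo_list`, and per-item `title`/`sourcetype`/`command`/`icon`/`hotkey`); all values here are invented for illustration, and since the menu calls need Nuke, this only exercises the structural assumptions:

```python
# Hypothetical gizmo menu configuration; paths and commands are assumptions.
configuration = [
    {
        "gizmo_toolbar_path": "OpenPype/Gizmos",
        "sub_gizmo_list": [
            {"sourcetype": "python",
             "title": "Create Backdrop",
             "command": "nuke.createNode('BackdropNode')",
             "icon": None,
             "hotkey": None},
            {"sourcetype": "separator", "title": "sep"},
            {"sourcetype": "menu", "title": "Extra Tools"},
        ],
    },
]

def validate(configuration):
    """Check the structural assumptions build_from_configuration makes."""
    for menu in configuration:
        for item in menu.get("sub_gizmo_list", []):
            # Mirrors the assert in build_from_configuration; note that
            # items without a "title" are silently skipped by the builder.
            assert isinstance(item, dict), "Configuration is wrong!"
    return True

print(validate(configuration))  # → True
```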

File diff suppressed because it is too large


@ -32,7 +32,7 @@ from .lib import (
launch_workfiles_app, launch_workfiles_app,
check_inventory_versions, check_inventory_versions,
set_avalon_knob_data, set_avalon_knob_data,
read, read_avalon_data,
Context Context
) )
@ -359,7 +359,7 @@ def parse_container(node):
dict: The container schema data for this container node. dict: The container schema data for this container node.
""" """
data = read(node) data = read_avalon_data(node)
# (TODO) Remove key validation when `ls` has re-implemented. # (TODO) Remove key validation when `ls` has re-implemented.
# #


@ -17,7 +17,9 @@ from .lib import (
reset_selection, reset_selection,
maintained_selection, maintained_selection,
set_avalon_knob_data, set_avalon_knob_data,
add_publish_knob add_publish_knob,
get_nuke_imageio_settings,
set_node_knobs_from_settings
) )
@ -27,9 +29,6 @@ class OpenPypeCreator(LegacyCreator):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(OpenPypeCreator, self).__init__(*args, **kwargs) super(OpenPypeCreator, self).__init__(*args, **kwargs)
self.presets = get_current_project_settings()["nuke"]["create"].get(
self.__class__.__name__, {}
)
if check_subsetname_exists( if check_subsetname_exists(
nuke.allNodes(), nuke.allNodes(),
self.data["subset"]): self.data["subset"]):
@ -260,8 +259,6 @@ class ExporterReview(object):
return nuke_imageio["viewer"]["viewerProcess"] return nuke_imageio["viewer"]["viewerProcess"]
class ExporterReviewLut(ExporterReview): class ExporterReviewLut(ExporterReview):
""" """
Generator object for review lut from Nuke Generator object for review lut from Nuke
@ -501,16 +498,7 @@ class ExporterReviewMov(ExporterReview):
add_tags.append("reformated") add_tags.append("reformated")
rf_node = nuke.createNode("Reformat") rf_node = nuke.createNode("Reformat")
for kn_conf in reformat_node_config: set_node_knobs_from_settings(rf_node, reformat_node_config)
_type = kn_conf["type"]
k_name = str(kn_conf["name"])
k_value = kn_conf["value"]
# to remove unicode as nuke doesn't like it
if _type == "string":
k_value = str(kn_conf["value"])
rf_node[k_name].setValue(k_value)
# connect # connect
rf_node.setInput(0, self.previous_node) rf_node.setInput(0, self.previous_node)
@ -607,6 +595,8 @@ class AbstractWriteRender(OpenPypeCreator):
family = "render" family = "render"
icon = "sign-out" icon = "sign-out"
defaults = ["Main", "Mask"] defaults = ["Main", "Mask"]
knobs = []
prenodes = {}
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(AbstractWriteRender, self).__init__(*args, **kwargs) super(AbstractWriteRender, self).__init__(*args, **kwargs)
@ -673,7 +663,9 @@ class AbstractWriteRender(OpenPypeCreator):
write_data = { write_data = {
"nodeclass": self.n_class, "nodeclass": self.n_class,
"families": [self.family], "families": [self.family],
"avalon": self.data "avalon": self.data,
"subset": self.data["subset"],
"knobs": self.knobs
} }
# add creator data # add creator data
@ -681,21 +673,12 @@ class AbstractWriteRender(OpenPypeCreator):
self.data.update(creator_data) self.data.update(creator_data)
write_data.update(creator_data) write_data.update(creator_data)
if self.presets.get('fpath_template'): write_node = self._create_write_node(
self.log.info("Adding template path from preset") selected_node,
write_data.update( inputs,
{"fpath_template": self.presets["fpath_template"]} outputs,
) write_data
else: )
self.log.info("Adding template path from plugin")
write_data.update({
"fpath_template":
("{work}/" + self.family + "s/nuke/{subset}"
"/{subset}.{frame}.{ext}")})
write_node = self._create_write_node(selected_node,
inputs, outputs,
write_data)
# relinking to collected connections # relinking to collected connections
for i, input in enumerate(inputs): for i, input in enumerate(inputs):
@ -710,6 +693,28 @@ class AbstractWriteRender(OpenPypeCreator):
return write_node return write_node
def is_legacy(self):
"""Check if it needs to run legacy code
In case where `type` key is missing in singe
knob it is legacy project anatomy.
Returns:
bool: True if legacy
"""
imageio_nodes = get_nuke_imageio_settings()["nodes"]
node = imageio_nodes["requiredNodes"][0]
if "type" not in node["knobs"][0]:
# if type is not yet in project anatomy
return True
elif next(iter(
_k for _k in node["knobs"]
if _k.get("type") == "__legacy__"
), None):
# in case someone re-saved anatomy
# with old configuration
return True
@abstractmethod @abstractmethod
def _create_write_node(self, selected_node, inputs, outputs, write_data): def _create_write_node(self, selected_node, inputs, outputs, write_data):
"""Family dependent implementation of Write node creation """Family dependent implementation of Write node creation


@ -1,7 +1,8 @@
import nuke import nuke
from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import create_write_node from openpype.hosts.nuke.api.lib import (
create_write_node, create_write_node_legacy)
class CreateWritePrerender(plugin.AbstractWriteRender): class CreateWritePrerender(plugin.AbstractWriteRender):
@ -12,22 +13,41 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
n_class = "Write" n_class = "Write"
family = "prerender" family = "prerender"
icon = "sign-out" icon = "sign-out"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
defaults = ["Key01", "Bg01", "Fg01", "Branch01", "Part01"] defaults = ["Key01", "Bg01", "Fg01", "Branch01", "Part01"]
reviewable = False
use_range_limit = True
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(CreateWritePrerender, self).__init__(*args, **kwargs) super(CreateWritePrerender, self).__init__(*args, **kwargs)
def _create_write_node(self, selected_node, inputs, outputs, write_data): def _create_write_node(self, selected_node, inputs, outputs, write_data):
reviewable = self.presets.get("reviewable") # add fpath_template
write_node = create_write_node( write_data["fpath_template"] = self.fpath_template
self.data["subset"], write_data["use_range_limit"] = self.use_range_limit
write_data, write_data["frame_range"] = (
input=selected_node, nuke.root()["first_frame"].value(),
prenodes=[], nuke.root()["last_frame"].value()
review=reviewable, )
linked_knobs=["channels", "___", "first", "last", "use_limit"])
return write_node if not self.is_legacy():
return create_write_node(
self.data["subset"],
write_data,
input=selected_node,
review=self.reviewable,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
)
else:
return create_write_node_legacy(
self.data["subset"],
write_data,
input=selected_node,
review=self.reviewable,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
)
def _modify_write_node(self, write_node): def _modify_write_node(self, write_node):
# open group node # open group node
@ -38,7 +58,7 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
w_node = n w_node = n
write_node.end() write_node.end()
if self.presets.get("use_range_limit"): if self.use_range_limit:
w_node["use_limit"].setValue(True) w_node["use_limit"].setValue(True)
w_node["first"].setValue(nuke.root()["first_frame"].value()) w_node["first"].setValue(nuke.root()["first_frame"].value())
w_node["last"].setValue(nuke.root()["last_frame"].value()) w_node["last"].setValue(nuke.root()["last_frame"].value())


@ -1,7 +1,8 @@
import nuke import nuke
from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import create_write_node from openpype.hosts.nuke.api.lib import (
create_write_node, create_write_node_legacy)
class CreateWriteRender(plugin.AbstractWriteRender): class CreateWriteRender(plugin.AbstractWriteRender):
@ -12,12 +13,36 @@ class CreateWriteRender(plugin.AbstractWriteRender):
n_class = "Write" n_class = "Write"
family = "render" family = "render"
icon = "sign-out" icon = "sign-out"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
defaults = ["Main", "Mask"] defaults = ["Main", "Mask"]
prenodes = {
"Reformat01": {
"nodeclass": "Reformat",
"dependent": None,
"knobs": [
{
"type": "text",
"name": "resize",
"value": "none"
},
{
"type": "bool",
"name": "black_outside",
"value": True
}
]
}
}
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(CreateWriteRender, self).__init__(*args, **kwargs) super(CreateWriteRender, self).__init__(*args, **kwargs)
def _create_write_node(self, selected_node, inputs, outputs, write_data): def _create_write_node(self, selected_node, inputs, outputs, write_data):
# add fpath_template
write_data["fpath_template"] = self.fpath_template
# add reformat node to cut off all outside of format bounding box # add reformat node to cut off all outside of format bounding box
# get width and height # get width and height
try: try:
@ -26,25 +51,36 @@ class CreateWriteRender(plugin.AbstractWriteRender):
actual_format = nuke.root().knob('format').value() actual_format = nuke.root().knob('format').value()
width, height = (actual_format.width(), actual_format.height()) width, height = (actual_format.width(), actual_format.height())
_prenodes = [ if not self.is_legacy():
{ return create_write_node(
"name": "Reformat01", self.data["subset"],
"class": "Reformat", write_data,
"knobs": [ input=selected_node,
("resize", 0), prenodes=self.prenodes,
("black_outside", 1), **{
], "width": width,
"dependent": None "height": height
} }
] )
else:
_prenodes = [
{
"name": "Reformat01",
"class": "Reformat",
"knobs": [
("resize", 0),
("black_outside", 1),
],
"dependent": None
}
]
write_node = create_write_node( return create_write_node_legacy(
self.data["subset"], self.data["subset"],
write_data, write_data,
input=selected_node, input=selected_node,
prenodes=_prenodes) prenodes=_prenodes
)
return write_node
def _modify_write_node(self, write_node): def _modify_write_node(self, write_node):
return write_node return write_node


@ -1,7 +1,8 @@
import nuke import nuke
from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import create_write_node from openpype.hosts.nuke.api.lib import (
create_write_node, create_write_node_legacy)
class CreateWriteStill(plugin.AbstractWriteRender): class CreateWriteStill(plugin.AbstractWriteRender):
@ -12,42 +13,69 @@ class CreateWriteStill(plugin.AbstractWriteRender):
n_class = "Write" n_class = "Write"
family = "still" family = "still"
icon = "image" icon = "image"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{ext}"
defaults = [ defaults = [
"ImageFrame{:0>4}".format(nuke.frame()), "ImageFrame",
"MPFrame{:0>4}".format(nuke.frame()), "MPFrame",
"LayoutFrame{:0>4}".format(nuke.frame()) "LayoutFrame"
] ]
prenodes = {
"FrameHold01": {
"nodeclass": "FrameHold",
"dependent": None,
"knobs": [
{
"type": "formatable",
"name": "first_frame",
"template": "{frame}",
"to_type": "number"
}
]
}
}
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super(CreateWriteStill, self).__init__(*args, **kwargs) super(CreateWriteStill, self).__init__(*args, **kwargs)
def _create_write_node(self, selected_node, inputs, outputs, write_data): def _create_write_node(self, selected_node, inputs, outputs, write_data):
# explicitly reset template to 'renders', not same as other 2 writes # add fpath_template
write_data.update({ write_data["fpath_template"] = self.fpath_template
"fpath_template": (
"{work}/renders/nuke/{subset}/{subset}.{ext}")})
_prenodes = [ if not self.is_legacy():
{ return create_write_node(
"name": "FrameHold01", self.name,
"class": "FrameHold", write_data,
"knobs": [ input=selected_node,
("first_frame", nuke.frame()) review=False,
], prenodes=self.prenodes,
"dependent": None farm=False,
} linked_knobs=["channels", "___", "first", "last", "use_limit"],
] **{
"frame": nuke.frame()
write_node = create_write_node( }
self.name, )
write_data, else:
input=selected_node, _prenodes = [
review=False, {
prenodes=_prenodes, "name": "FrameHold01",
farm=False, "class": "FrameHold",
linked_knobs=["channels", "___", "first", "last", "use_limit"]) "knobs": [
("first_frame", nuke.frame())
return write_node ],
"dependent": None
}
]
return create_write_node_legacy(
self.name,
write_data,
input=selected_node,
review=False,
prenodes=_prenodes,
farm=False,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
)
def _modify_write_node(self, write_node): def _modify_write_node(self, write_node):
write_node.begin() write_node.begin()


@ -9,7 +9,7 @@ log = Logger().get_logger(__name__)
class SetFrameRangeLoader(load.LoaderPlugin): class SetFrameRangeLoader(load.LoaderPlugin):
"""Specific loader of Alembic for the avalon.animation family""" """Set frame range excluding pre- and post-handles"""
families = ["animation", families = ["animation",
"camera", "camera",
@ -43,7 +43,7 @@ class SetFrameRangeLoader(load.LoaderPlugin):
class SetFrameRangeWithHandlesLoader(load.LoaderPlugin): class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
"""Specific loader of Alembic for the avalon.animation family""" """Set frame range including pre- and post-handles"""
families = ["animation", families = ["animation",
"camera", "camera",


@ -15,13 +15,13 @@ from openpype.hosts.nuke.api import (
class AlembicModelLoader(load.LoaderPlugin): class AlembicModelLoader(load.LoaderPlugin):
""" """
This will load alembic model into script. This will load alembic model or anim into script.
""" """
families = ["model"] families = ["model", "pointcache", "animation"]
representations = ["abc"] representations = ["abc"]
label = "Load Alembic Model" label = "Load Alembic"
icon = "cube" icon = "cube"
color = "orange" color = "orange"
node_color = "0x4ecd91ff" node_color = "0x4ecd91ff"


@ -52,7 +52,7 @@ class ExtractReviewDataMov(openpype.api.Extractor):
for o_name, o_data in self.outputs.items(): for o_name, o_data in self.outputs.items():
f_families = o_data["filter"]["families"] f_families = o_data["filter"]["families"]
f_task_types = o_data["filter"]["task_types"] f_task_types = o_data["filter"]["task_types"]
f_subsets = o_data["filter"]["sebsets"] f_subsets = o_data["filter"]["subsets"]
self.log.debug( self.log.debug(
"f_families `{}` > families: {}".format( "f_families `{}` > families: {}".format(


@ -1,4 +1,5 @@
import nuke import nuke
import os
from openpype.api import Logger from openpype.api import Logger
from openpype.pipeline import install_host from openpype.pipeline import install_host
@ -7,8 +8,10 @@ from openpype.hosts.nuke.api.lib import (
on_script_load, on_script_load,
check_inventory_versions, check_inventory_versions,
WorkfileSettings, WorkfileSettings,
dirmap_file_name_filter dirmap_file_name_filter,
add_scripts_gizmo
) )
from openpype.settings import get_project_settings
log = Logger.get_logger(__name__) log = Logger.get_logger(__name__)
@ -28,3 +31,34 @@ nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
nuke.addFilenameFilter(dirmap_file_name_filter) nuke.addFilenameFilter(dirmap_file_name_filter)
log.info('Automatic syncing of write file knob to script version') log.info('Automatic syncing of write file knob to script version')
def add_scripts_menu():
try:
from scriptsmenu import launchfornuke
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["nuke"]["scriptsmenu"]["definition"]
_menu = project_settings["nuke"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Maya menu
studio_menu = launchfornuke.main(title=_menu.title())
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
add_scripts_menu()
add_scripts_gizmo()
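Both menu installers above guard an optional dependency with a try/except around the import. A generic sketch of that pattern (the module names are placeholders, not part of this commit):

```python
import importlib
import logging

log = logging.getLogger(__name__)


def optional_module(name):
    """Import an optional dependency; return None and log a warning
    instead of raising when it is unavailable."""
    try:
        return importlib.import_module(name)
    except ImportError:
        log.warning("Skipping install, %r seems unavailable.", name)
        return None


# 'scriptsmenu' is usually absent outside a DCC session, so menu
# setup degrades gracefully instead of breaking startup.
menu_lib = optional_module("scriptsmenu")
```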

View file

@@ -29,6 +29,16 @@ class PSItem(object):
color_code = attr.ib(default=None)  # color code of layer
instance_id = attr.ib(default=None)
@property
def clean_name(self):
"""Returns layer name without publish icon highlight
Returns:
(str)
"""
return (self.name.replace(PhotoshopServerStub.PUBLISH_ICON, '')
.replace(PhotoshopServerStub.LOADED_ICON, ''))
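The `clean_name` property strips the highlight glyphs the server stub prepends to layer names. A standalone sketch (the icon characters here are placeholders for the stub's real constants):

```python
PUBLISH_ICON = "\u2b24"  # placeholder; the stub defines its own glyph
LOADED_ICON = "\u25c9"   # placeholder


def clean_name(name):
    """Return the layer name without publish/loaded icon prefixes."""
    return name.replace(PUBLISH_ICON, "").replace(LOADED_ICON, "")


print(clean_name(PUBLISH_ICON + "Background"))
```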
class PhotoshopServerStub:
"""

View file

@@ -39,6 +39,9 @@ class CollectBatchData(pyblish.api.ContextPlugin):
def process(self, context):
self.log.info("CollectBatchData")
batch_dir = os.environ.get("OPENPYPE_PUBLISH_DATA")
+ if os.environ.get("IS_TEST"):
+ self.log.debug("Automatic testing, no batch data, skipping")
+ return
assert batch_dir, (
"Missing `OPENPYPE_PUBLISH_DATA`")

View file

@@ -5,6 +5,7 @@ import pyblish.api
from openpype.lib import prepare_template_data
from openpype.hosts.photoshop import api as photoshop
+ from openpype.settings import get_project_settings
class CollectColorCodedInstances(pyblish.api.ContextPlugin):
@@ -49,6 +50,12 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
asset_name = context.data["asset"]
task_name = context.data["task"]
variant = context.data["variant"]
+ project_name = context.data["projectEntity"]["name"]
+ naming_conventions = get_project_settings(project_name).get(
+ "photoshop", {}).get(
+ "publish", {}).get(
+ "ValidateNaming", {})
stub = photoshop.stub()
layers = stub.get_layers()
@@ -77,12 +84,15 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
"variant": variant,
"family": resolved_family,
"task": task_name,
- "layer": layer.name
+ "layer": layer.clean_name
}
subset = resolved_subset_template.format(
**prepare_template_data(fill_pairs))
+ subset = self._clean_subset_name(stub, naming_conventions,
+ subset, layer)
if subset in existing_subset_names:
self.log.info(
"Subset {} already created, skipping.".format(subset))
@@ -141,6 +151,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
instance.data["task"] = task_name
instance.data["subset"] = subset
instance.data["layer"] = layer
+ instance.data["families"] = []
return instance
@@ -186,3 +197,21 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
self.log.debug("resolved_subset_template {}".format(
resolved_subset_template))
return family, resolved_subset_template
def _clean_subset_name(self, stub, naming_conventions, subset, layer):
"""Cleans invalid characters from subset name and layer name."""
if re.search(naming_conventions["invalid_chars"], subset):
subset = re.sub(
naming_conventions["invalid_chars"],
naming_conventions["replace_char"],
subset
)
layer_name = re.sub(
naming_conventions["invalid_chars"],
naming_conventions["replace_char"],
layer.clean_name
)
layer.name = layer_name
stub.rename_layer(layer.id, layer_name)
return subset
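The `_clean_subset_name` step added above is a plain regex substitution driven by settings; it can be exercised in isolation. A minimal sketch (the invalid-character pattern and replacement here are assumptions, not the project defaults from "ValidateNaming"):

```python
import re

# Hypothetical naming convention; real values come from project
# settings, not from this sketch.
naming_conventions = {
    "invalid_chars": r"[^a-zA-Z0-9_]",
    "replace_char": "_",
}


def clean_subset_name(subset, conventions):
    """Replace characters matching the invalid pattern, mirroring
    the publish-time cleanup above."""
    if re.search(conventions["invalid_chars"], subset):
        subset = re.sub(
            conventions["invalid_chars"],
            conventions["replace_char"],
            subset,
        )
    return subset


print(clean_subset_name("image Bg-Layer 01", naming_conventions))
```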

View file

@@ -42,7 +42,8 @@ class ValidateNamingRepair(pyblish.api.Action):
layer_name = re.sub(invalid_chars,
replace_char,
- current_layer_state.name)
+ current_layer_state.clean_name)
+ layer_name = stub.PUBLISH_ICON + layer_name
stub.rename_layer(current_layer_state.id, layer_name)
@@ -73,13 +74,17 @@ class ValidateNaming(pyblish.api.InstancePlugin):
def process(self, instance):
help_msg = ' Use Repair action (A) in Pyblish to fix it.'
- msg = "Name \"{}\" is not allowed.{}".format(instance.data["name"],
- help_msg)
- formatting_data = {"msg": msg}
- if re.search(self.invalid_chars, instance.data["name"]):
- raise PublishXmlValidationError(self, msg,
- formatting_data=formatting_data)
+ layer = instance.data.get("layer")
+ if layer:
+ msg = "Name \"{}\" is not allowed.{}".format(layer.clean_name,
+ help_msg)
+ formatting_data = {"msg": msg}
+ if re.search(self.invalid_chars, layer.clean_name):
+ raise PublishXmlValidationError(self, msg,
+ formatting_data=formatting_data
+ )
msg = "Subset \"{}\" is not allowed.{}".format(instance.data["subset"],
help_msg)

View file

@@ -1,70 +0,0 @@
import copy
import pyblish.api
from pprint import pformat
class CollectBatchInstances(pyblish.api.InstancePlugin):
"""Collect all available instances for batch publish."""
label = "Collect Batch Instances"
order = pyblish.api.CollectorOrder + 0.489
hosts = ["standalonepublisher"]
families = ["background_batch"]
# presets
default_subset_task = {
"background_batch": "background"
}
subsets = {
"background_batch": {
"backgroundLayout": {
"task": "background",
"family": "backgroundLayout"
},
"backgroundComp": {
"task": "background",
"family": "backgroundComp"
},
"workfileBackground": {
"task": "background",
"family": "workfile"
}
}
}
unchecked_by_default = []
def process(self, instance):
context = instance.context
asset_name = instance.data["asset"]
family = instance.data["family"]
default_task_name = self.default_subset_task.get(family)
for subset_name, subset_data in self.subsets[family].items():
instance_name = f"{asset_name}_{subset_name}"
task_name = subset_data.get("task") or default_task_name
# create new instance
new_instance = context.create_instance(instance_name)
# add original instance data except name key
for key, value in instance.data.items():
if key not in ["name"]:
# Make sure value is copy since value may be object which
# can be shared across all new created objects
new_instance.data[key] = copy.deepcopy(value)
# add subset data from preset
new_instance.data.update(subset_data)
new_instance.data["label"] = instance_name
new_instance.data["subset"] = subset_name
new_instance.data["task"] = task_name
if subset_name in self.unchecked_by_default:
new_instance.data["publish"] = False
self.log.info(f"Created new instance: {instance_name}")
self.log.debug(f"_ inst_data: {pformat(new_instance.data)}")
# delete original instance
context.remove(instance)

View file

@@ -1,243 +0,0 @@
import os
import json
import copy
import openpype.api
from openpype.pipeline import legacy_io
PSDImage = None
class ExtractBGForComp(openpype.api.Extractor):
label = "Extract Background for Compositing"
families = ["backgroundComp"]
hosts = ["standalonepublisher"]
new_instance_family = "background"
# Presetable
allowed_group_names = [
"OL", "BG", "MG", "FG", "SB", "UL", "SKY", "Field Guide", "Field_Guide",
"ANIM"
]
def process(self, instance):
# Check if python module `psd_tools` is installed
try:
global PSDImage
from psd_tools import PSDImage
except Exception:
raise AssertionError(
"BUG: Python module `psd-tools` is not installed!"
)
self.allowed_group_names = [
name.lower()
for name in self.allowed_group_names
]
self.redo_global_plugins(instance)
repres = instance.data.get("representations")
if not repres:
self.log.info("There are no representations on instance.")
return
if not instance.data.get("transfers"):
instance.data["transfers"] = []
# Prepare staging dir
staging_dir = self.staging_dir(instance)
if not os.path.exists(staging_dir):
os.makedirs(staging_dir)
for repre in tuple(repres):
# Skip all files without .psd extension
repre_ext = repre["ext"].lower()
if repre_ext.startswith("."):
repre_ext = repre_ext[1:]
if repre_ext != "psd":
continue
# Prepare publish dir for transfers
publish_dir = instance.data["publishDir"]
# Prepare json filepath where extracted metadata are stored
json_filename = "{}.json".format(instance.name)
json_full_path = os.path.join(staging_dir, json_filename)
self.log.debug(f"`staging_dir` is \"{staging_dir}\"")
# Prepare new repre data
new_repre = {
"name": "json",
"ext": "json",
"files": json_filename,
"stagingDir": staging_dir
}
# TODO add check of list
psd_filename = repre["files"]
psd_folder_path = repre["stagingDir"]
psd_filepath = os.path.join(psd_folder_path, psd_filename)
self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
psd_object = PSDImage.open(psd_filepath)
json_data, transfers = self.export_compositing_images(
psd_object, staging_dir, publish_dir
)
self.log.info("Json file path: {}".format(json_full_path))
with open(json_full_path, "w") as json_filestream:
json.dump(json_data, json_filestream, indent=4)
instance.data["transfers"].extend(transfers)
instance.data["representations"].remove(repre)
instance.data["representations"].append(new_repre)
def export_compositing_images(self, psd_object, output_dir, publish_dir):
json_data = {
"__schema_version__": 1,
"children": []
}
transfers = []
for main_idx, main_layer in enumerate(psd_object):
if (
not main_layer.is_visible()
or main_layer.name.lower() not in self.allowed_group_names
or not main_layer.is_group
):
continue
export_layers = []
layers_idx = 0
for layer in main_layer:
# TODO this way may be added also layers next to "ADJ"
if layer.name.lower() == "adj":
for _layer in layer:
export_layers.append((layers_idx, _layer))
layers_idx += 1
else:
export_layers.append((layers_idx, layer))
layers_idx += 1
if not export_layers:
continue
main_layer_data = {
"index": main_idx,
"name": main_layer.name,
"children": []
}
for layer_idx, layer in export_layers:
has_size = layer.width > 0 and layer.height > 0
if not has_size:
self.log.debug((
"Skipping layer \"{}\" because does "
"not have any content."
).format(layer.name))
continue
main_layer_name = main_layer.name.replace(" ", "_")
layer_name = layer.name.replace(" ", "_")
filename = "{:0>2}_{}_{:0>2}_{}.png".format(
main_idx + 1, main_layer_name, layer_idx + 1, layer_name
)
layer_data = {
"index": layer_idx,
"name": layer.name,
"filename": filename
}
output_filepath = os.path.join(output_dir, filename)
dst_filepath = os.path.join(publish_dir, filename)
transfers.append((output_filepath, dst_filepath))
pil_object = layer.composite(viewport=psd_object.viewbox)
pil_object.save(output_filepath, "PNG")
main_layer_data["children"].append(layer_data)
if main_layer_data["children"]:
json_data["children"].append(main_layer_data)
return json_data, transfers
def redo_global_plugins(self, instance):
# TODO do this in collection phase
# Copy `families` and check if `family` is not in current families
families = instance.data.get("families") or list()
if families:
families = list(set(families))
if self.new_instance_family in families:
families.remove(self.new_instance_family)
self.log.debug(
"Setting new instance families {}".format(str(families))
)
instance.data["families"] = families
# Override instance data with new information
instance.data["family"] = self.new_instance_family
subset_name = instance.data["anatomyData"]["subset"]
asset_doc = instance.data["assetEntity"]
latest_version = self.find_last_version(subset_name, asset_doc)
version_number = 1
if latest_version is not None:
version_number += latest_version
instance.data["latestVersion"] = latest_version
instance.data["version"] = version_number
# Same data apply to anatomy data
instance.data["anatomyData"].update({
"family": self.new_instance_family,
"version": version_number
})
# Redo publish and resources dir
anatomy = instance.context.data["anatomy"]
template_data = copy.deepcopy(instance.data["anatomyData"])
template_data.update({
"frame": "FRAME_TEMP",
"representation": "TEMP"
})
anatomy_filled = anatomy.format(template_data)
if "folder" in anatomy.templates["publish"]:
publish_folder = anatomy_filled["publish"]["folder"]
else:
publish_folder = os.path.dirname(anatomy_filled["publish"]["path"])
publish_folder = os.path.normpath(publish_folder)
resources_folder = os.path.join(publish_folder, "resources")
instance.data["publishDir"] = publish_folder
instance.data["resourcesDir"] = resources_folder
self.log.debug("publishDir: \"{}\"".format(publish_folder))
self.log.debug("resourcesDir: \"{}\"".format(resources_folder))
def find_last_version(self, subset_name, asset_doc):
subset_doc = legacy_io.find_one({
"type": "subset",
"name": subset_name,
"parent": asset_doc["_id"]
})
if subset_doc is None:
self.log.debug("Subset entity does not exist yet.")
else:
version_doc = legacy_io.find_one(
{
"type": "version",
"parent": subset_doc["_id"]
},
sort=[("name", -1)]
)
if version_doc:
return int(version_doc["name"])
return None

View file

@@ -1,248 +0,0 @@
import os
import copy
import json
import pyblish.api
import openpype.api
from openpype.pipeline import legacy_io
PSDImage = None
class ExtractBGMainGroups(openpype.api.Extractor):
label = "Extract Background Layout"
order = pyblish.api.ExtractorOrder + 0.02
families = ["backgroundLayout"]
hosts = ["standalonepublisher"]
new_instance_family = "background"
# Presetable
allowed_group_names = [
"OL", "BG", "MG", "FG", "UL", "SB", "SKY", "Field Guide", "Field_Guide",
"ANIM"
]
def process(self, instance):
# Check if python module `psd_tools` is installed
try:
global PSDImage
from psd_tools import PSDImage
except Exception:
raise AssertionError(
"BUG: Python module `psd-tools` is not installed!"
)
self.allowed_group_names = [
name.lower()
for name in self.allowed_group_names
]
repres = instance.data.get("representations")
if not repres:
self.log.info("There are no representations on instance.")
return
self.redo_global_plugins(instance)
repres = instance.data.get("representations")
if not repres:
self.log.info("There are no representations on instance.")
return
if not instance.data.get("transfers"):
instance.data["transfers"] = []
# Prepare staging dir
staging_dir = self.staging_dir(instance)
if not os.path.exists(staging_dir):
os.makedirs(staging_dir)
# Prepare publish dir for transfers
publish_dir = instance.data["publishDir"]
for repre in tuple(repres):
# Skip all files without .psd extension
repre_ext = repre["ext"].lower()
if repre_ext.startswith("."):
repre_ext = repre_ext[1:]
if repre_ext != "psd":
continue
# Prepare json filepath where extracted metadata are stored
json_filename = "{}.json".format(instance.name)
json_full_path = os.path.join(staging_dir, json_filename)
self.log.debug(f"`staging_dir` is \"{staging_dir}\"")
# Prepare new repre data
new_repre = {
"name": "json",
"ext": "json",
"files": json_filename,
"stagingDir": staging_dir
}
# TODO add check of list
psd_filename = repre["files"]
psd_folder_path = repre["stagingDir"]
psd_filepath = os.path.join(psd_folder_path, psd_filename)
self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
psd_object = PSDImage.open(psd_filepath)
json_data, transfers = self.export_compositing_images(
psd_object, staging_dir, publish_dir
)
self.log.info("Json file path: {}".format(json_full_path))
with open(json_full_path, "w") as json_filestream:
json.dump(json_data, json_filestream, indent=4)
instance.data["transfers"].extend(transfers)
instance.data["representations"].remove(repre)
instance.data["representations"].append(new_repre)
def export_compositing_images(self, psd_object, output_dir, publish_dir):
json_data = {
"__schema_version__": 1,
"children": []
}
output_ext = ".png"
to_export = []
for layer_idx, layer in enumerate(psd_object):
layer_name = layer.name.replace(" ", "_")
if (
not layer.is_visible()
or layer_name.lower() not in self.allowed_group_names
):
continue
has_size = layer.width > 0 and layer.height > 0
if not has_size:
self.log.debug((
"Skipping layer \"{}\" because does not have any content."
).format(layer.name))
continue
filebase = "{:0>2}_{}".format(layer_idx, layer_name)
if layer_name.lower() == "anim":
if not layer.is_group:
self.log.warning("ANIM layer is not a group layer.")
continue
children = []
for anim_idx, anim_layer in enumerate(layer):
anim_layer_name = anim_layer.name.replace(" ", "_")
filename = "{}_{:0>2}_{}{}".format(
filebase, anim_idx, anim_layer_name, output_ext
)
children.append({
"index": anim_idx,
"name": anim_layer.name,
"filename": filename
})
to_export.append((anim_layer, filename))
json_data["children"].append({
"index": layer_idx,
"name": layer.name,
"children": children
})
continue
filename = filebase + output_ext
json_data["children"].append({
"index": layer_idx,
"name": layer.name,
"filename": filename
})
to_export.append((layer, filename))
transfers = []
for layer, filename in to_export:
output_filepath = os.path.join(output_dir, filename)
dst_filepath = os.path.join(publish_dir, filename)
transfers.append((output_filepath, dst_filepath))
pil_object = layer.composite(viewport=psd_object.viewbox)
pil_object.save(output_filepath, "PNG")
return json_data, transfers
def redo_global_plugins(self, instance):
# TODO do this in collection phase
# Copy `families` and check if `family` is not in current families
families = instance.data.get("families") or list()
if families:
families = list(set(families))
if self.new_instance_family in families:
families.remove(self.new_instance_family)
self.log.debug(
"Setting new instance families {}".format(str(families))
)
instance.data["families"] = families
# Override instance data with new information
instance.data["family"] = self.new_instance_family
subset_name = instance.data["anatomyData"]["subset"]
asset_doc = instance.data["assetEntity"]
latest_version = self.find_last_version(subset_name, asset_doc)
version_number = 1
if latest_version is not None:
version_number += latest_version
instance.data["latestVersion"] = latest_version
instance.data["version"] = version_number
# Same data apply to anatomy data
instance.data["anatomyData"].update({
"family": self.new_instance_family,
"version": version_number
})
# Redo publish and resources dir
anatomy = instance.context.data["anatomy"]
template_data = copy.deepcopy(instance.data["anatomyData"])
template_data.update({
"frame": "FRAME_TEMP",
"representation": "TEMP"
})
anatomy_filled = anatomy.format(template_data)
if "folder" in anatomy.templates["publish"]:
publish_folder = anatomy_filled["publish"]["folder"]
else:
publish_folder = os.path.dirname(anatomy_filled["publish"]["path"])
publish_folder = os.path.normpath(publish_folder)
resources_folder = os.path.join(publish_folder, "resources")
instance.data["publishDir"] = publish_folder
instance.data["resourcesDir"] = resources_folder
self.log.debug("publishDir: \"{}\"".format(publish_folder))
self.log.debug("resourcesDir: \"{}\"".format(resources_folder))
def find_last_version(self, subset_name, asset_doc):
subset_doc = legacy_io.find_one({
"type": "subset",
"name": subset_name,
"parent": asset_doc["_id"]
})
if subset_doc is None:
self.log.debug("Subset entity does not exist yet.")
else:
version_doc = legacy_io.find_one(
{
"type": "version",
"parent": subset_doc["_id"]
},
sort=[("name", -1)]
)
if version_doc:
return int(version_doc["name"])
return None

View file

@@ -1,171 +0,0 @@
import os
import copy
import pyblish.api
import openpype.api
from openpype.pipeline import legacy_io
PSDImage = None
class ExtractImagesFromPSD(openpype.api.Extractor):
# PLUGIN is not currently enabled because was decided to use different
# approach
enabled = False
active = False
label = "Extract Images from PSD"
order = pyblish.api.ExtractorOrder + 0.02
families = ["backgroundLayout"]
hosts = ["standalonepublisher"]
new_instance_family = "image"
ignored_instance_data_keys = ("name", "label", "stagingDir", "version")
# Presetable
allowed_group_names = [
"OL", "BG", "MG", "FG", "UL", "SKY", "Field Guide", "Field_Guide",
"ANIM"
]
def process(self, instance):
# Check if python module `psd_tools` is installed
try:
global PSDImage
from psd_tools import PSDImage
except Exception:
raise AssertionError(
"BUG: Python module `psd-tools` is not installed!"
)
self.allowed_group_names = [
name.lower()
for name in self.allowed_group_names
]
repres = instance.data.get("representations")
if not repres:
self.log.info("There are no representations on instance.")
return
for repre in tuple(repres):
# Skip all files without .psd extension
repre_ext = repre["ext"].lower()
if repre_ext.startswith("."):
repre_ext = repre_ext[1:]
if repre_ext != "psd":
continue
# TODO add check of list of "files" value
psd_filename = repre["files"]
psd_folder_path = repre["stagingDir"]
psd_filepath = os.path.join(psd_folder_path, psd_filename)
self.log.debug(f"psd_filepath: \"{psd_filepath}\"")
psd_object = PSDImage.open(psd_filepath)
self.create_new_instances(instance, psd_object)
# Remove the instance from context
instance.context.remove(instance)
def create_new_instances(self, instance, psd_object):
asset_doc = instance.data["assetEntity"]
for layer in psd_object:
if (
not layer.is_visible()
or layer.name.lower() not in self.allowed_group_names
):
continue
has_size = layer.width > 0 and layer.height > 0
if not has_size:
self.log.debug((
"Skipping layer \"{}\" because does "
"not have any content."
).format(layer.name))
continue
layer_name = layer.name.replace(" ", "_")
instance_name = subset_name = f"image{layer_name}"
self.log.info(
f"Creating new instance with name \"{instance_name}\""
)
new_instance = instance.context.create_instance(instance_name)
for key, value in instance.data.items():
if key not in self.ignored_instance_data_keys:
new_instance.data[key] = copy.deepcopy(value)
new_instance.data["label"] = " ".join(
(new_instance.data["asset"], instance_name)
)
# Find latest version
latest_version = self.find_last_version(subset_name, asset_doc)
version_number = 1
if latest_version is not None:
version_number += latest_version
self.log.info(
"Next version of instance \"{}\" will be {}".format(
instance_name, version_number
)
)
# Set family and subset
new_instance.data["family"] = self.new_instance_family
new_instance.data["subset"] = subset_name
new_instance.data["version"] = version_number
new_instance.data["latestVersion"] = latest_version
new_instance.data["anatomyData"].update({
"subset": subset_name,
"family": self.new_instance_family,
"version": version_number
})
# Copy `families` and check if `family` is not in current families
families = new_instance.data.get("families") or list()
if families:
families = list(set(families))
if self.new_instance_family in families:
families.remove(self.new_instance_family)
new_instance.data["families"] = families
# Prepare staging dir for new instance
staging_dir = self.staging_dir(new_instance)
output_filename = "{}.png".format(layer_name)
output_filepath = os.path.join(staging_dir, output_filename)
pil_object = layer.composite(viewport=psd_object.viewbox)
pil_object.save(output_filepath, "PNG")
new_repre = {
"name": "png",
"ext": "png",
"files": output_filename,
"stagingDir": staging_dir
}
self.log.debug(
"Creating new representation: {}".format(new_repre)
)
new_instance.data["representations"] = [new_repre]
def find_last_version(self, subset_name, asset_doc):
subset_doc = legacy_io.find_one({
"type": "subset",
"name": subset_name,
"parent": asset_doc["_id"]
})
if subset_doc is None:
self.log.debug("Subset entity does not exist yet.")
else:
version_doc = legacy_io.find_one(
{
"type": "version",
"parent": subset_doc["_id"]
},
sort=[("name", -1)]
)
if version_doc:
return int(version_doc["name"])
return None
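The `find_last_version` helper repeated in the removed extractors asks Mongo for the newest version document via a descending sort on `name` and returns its number. The same lookup over a plain list of version documents, as a sketch without the database:

```python
def find_last_version(version_docs):
    """Return the highest version number among version documents,
    or None when nothing has been published yet."""
    if not version_docs:
        return None
    # Equivalent of sort=[("name", -1)] + taking the first result.
    newest = max(version_docs, key=lambda doc: doc["name"])
    return int(newest["name"])


print(find_last_version([{"name": 1}, {"name": 3}, {"name": 2}]))
```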

View file

@@ -2,7 +2,11 @@ import os
import tempfile
import pyblish.api
import openpype.api
- import openpype.lib
+ from openpype.lib import (
+ get_ffmpeg_tool_path,
+ get_ffprobe_streams,
+ path_to_subprocess_arg,
+ )
class ExtractThumbnailSP(pyblish.api.InstancePlugin):
@@ -34,85 +38,78 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
if not thumbnail_repre:
return
+ thumbnail_repre.pop("thumbnail")
files = thumbnail_repre.get("files")
if not files:
return
if isinstance(files, list):
- files_len = len(files)
- file = str(files[0])
+ first_filename = str(files[0])
else:
- files_len = 1
- file = files
+ first_filename = files
staging_dir = None
- is_jpeg = False
- if file.endswith(".jpeg") or file.endswith(".jpg"):
- is_jpeg = True
- if is_jpeg and files_len == 1:
- # skip if already is single jpeg file
- return
- elif is_jpeg:
- # use first frame as thumbnail if is sequence of jpegs
- full_thumbnail_path = os.path.join(
- thumbnail_repre["stagingDir"], file
- )
- self.log.info(
- "For thumbnail is used file: {}".format(full_thumbnail_path)
- )
- else:
- # Convert to jpeg if not yet
- full_input_path = os.path.join(thumbnail_repre["stagingDir"], file)
- self.log.info("input {}".format(full_input_path))
- full_thumbnail_path = tempfile.mkstemp(suffix=".jpg")[1]
- self.log.info("output {}".format(full_thumbnail_path))
- ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
- ffmpeg_args = self.ffmpeg_args or {}
- jpeg_items = [
- "\"{}\"".format(ffmpeg_path),
- # override file if already exists
- "-y"
- ]
- # add input filters from peresets
- jpeg_items.extend(ffmpeg_args.get("input") or [])
- # input file
- jpeg_items.append("-i \"{}\"".format(full_input_path))
+ # Convert to jpeg if not yet
+ full_input_path = os.path.join(
+ thumbnail_repre["stagingDir"], first_filename
+ )
+ self.log.info("input {}".format(full_input_path))
+ with tempfile.NamedTemporaryFile(suffix=".jpg") as tmp:
+ full_thumbnail_path = tmp.name
+ self.log.info("output {}".format(full_thumbnail_path))
+ instance.context.data["cleanupFullPaths"].append(full_thumbnail_path)
+ ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
+ ffmpeg_args = self.ffmpeg_args or {}
+ jpeg_items = [
+ path_to_subprocess_arg(ffmpeg_path),
+ # override file if already exists
+ "-y"
+ ]
+ # add input filters from peresets
+ jpeg_items.extend(ffmpeg_args.get("input") or [])
+ # input file
+ jpeg_items.extend([
+ "-i", path_to_subprocess_arg(full_input_path),
# extract only single file
- jpeg_items.append("-frames:v 1")
+ "-frames:v", "1",
# Add black background for transparent images
- jpeg_items.append((
- "-filter_complex"
- " \"color=black,format=rgb24[c]"
+ "-filter_complex", (
+ "\"color=black,format=rgb24[c]"
";[c][0]scale2ref[c][i]"
";[c][i]overlay=format=auto:shortest=1,setsar=1\""
- ))
+ ),
+ ])
jpeg_items.extend(ffmpeg_args.get("output") or [])
# output file
- jpeg_items.append("\"{}\"".format(full_thumbnail_path))
+ jpeg_items.append(path_to_subprocess_arg(full_thumbnail_path))
subprocess_jpeg = " ".join(jpeg_items)
# run subprocess
self.log.debug("Executing: {}".format(subprocess_jpeg))
openpype.api.run_subprocess(
subprocess_jpeg, shell=True, logger=self.log
)
# remove thumbnail key from origin repre
- thumbnail_repre.pop("thumbnail")
+ streams = get_ffprobe_streams(full_thumbnail_path)
+ width = height = None
+ for stream in streams:
+ if "width" in stream and "height" in stream:
+ width = stream["width"]
+ height = stream["height"]
+ break
- filename = os.path.basename(full_thumbnail_path)
- staging_dir = staging_dir or os.path.dirname(full_thumbnail_path)
+ staging_dir, filename = os.path.split(full_thumbnail_path)
# create new thumbnail representation
representation = {
@@ -120,12 +117,11 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
'ext': 'jpg',
'files': filename,
"stagingDir": staging_dir,
- "tags": ["thumbnail"],
+ "tags": ["thumbnail", "delete"],
}
- # # add Delete tag when temp file was rendered
- if not is_jpeg:
- representation["tags"].append("delete")
+ if width and height:
+ representation["width"] = width
+ representation["height"] = height
self.log.info(f"New representation {representation}")
instance.data["representations"].append(representation)
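The rewritten extractor builds its ffmpeg command by joining pre-quoted arguments into one shell string. A stdlib sketch of that approach (the paths are placeholders; `path_to_subprocess_arg` is OpenPype's own helper, approximated here with `shlex.quote`):

```python
from shlex import quote


def build_thumbnail_cmd(ffmpeg_path, src, dst):
    """Assemble a single-frame ffmpeg command as one shell string,
    quoting each path the way the join-of-arguments above does."""
    args = [
        quote(ffmpeg_path),
        "-y",              # overwrite output if it exists
        "-i", quote(src),
        "-frames:v", "1",  # keep only the first frame
        quote(dst),
    ]
    return " ".join(args)


print(build_thumbnail_cmd("/usr/bin/ffmpeg", "in file.mov", "out.jpg"))
```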

View file

@@ -0,0 +1,97 @@
from openpype.pipeline import (
Creator,
CreatedInstance
)
from openpype.lib import FileDef
from .pipeline import (
list_instances,
update_instances,
remove_instances,
HostContext,
)
class TrayPublishCreator(Creator):
create_allow_context_change = True
host_name = "traypublisher"
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
update_instances(update_list)
def remove_instances(self, instances):
remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return self.get_instance_attr_defs()
class SettingsCreator(TrayPublishCreator):
create_allow_context_change = True
extensions = []
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def create(self, subset_name, data, pre_create_data):
# Pass precreate data to creator attributes
data["creator_attributes"] = pre_create_data
data["settings_creator"] = True
# Create new instance
new_instance = CreatedInstance(self.family, subset_name, data, self)
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
def get_instance_attr_defs(self):
return [
FileDef(
"filepath",
folders=False,
extensions=self.extensions,
allow_sequences=self.allow_sequences,
label="Filepath",
)
]
@classmethod
def from_settings(cls, item_data):
identifier = item_data["identifier"]
family = item_data["family"]
if not identifier:
identifier = "settings_{}".format(family)
return type(
"{}{}".format(cls.__name__, identifier),
(cls, ),
{
"family": family,
"identifier": identifier,
"label": item_data["label"].strip(),
"icon": item_data["icon"],
"description": item_data["description"],
"detailed_description": item_data["detailed_description"],
"extensions": item_data["extensions"],
"allow_sequences": item_data["allow_sequences"],
"default_variants": item_data["default_variants"]
}
)
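`SettingsCreator.from_settings` above stamps out one creator subclass per settings entry with `type()`. A minimal sketch of that factory pattern, using a simplified base class instead of the real TrayPublishCreator hierarchy:

```python
# Simplified stand-in for the real creator base class.
class BaseCreator:
    family = None
    identifier = None
    label = None


def creator_from_settings(item_data):
    """Build a creator subclass per settings item with type(),
    defaulting the identifier from the family as above."""
    identifier = item_data["identifier"]
    if not identifier:
        identifier = "settings_{}".format(item_data["family"])
    return type(
        "BaseCreator{}".format(identifier),
        (BaseCreator,),
        {
            "family": item_data["family"],
            "identifier": identifier,
            "label": item_data["label"].strip(),
        },
    )


RenderCreator = creator_from_settings(
    {"identifier": "", "family": "render", "label": " Render "}
)
print(RenderCreator.identifier)
```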

View file

@@ -0,0 +1,20 @@
import os
from openpype.api import get_project_settings
def initialize():
from openpype.hosts.traypublisher.api.plugin import SettingsCreator
project_name = os.environ["AVALON_PROJECT"]
project_settings = get_project_settings(project_name)
simple_creators = project_settings["traypublisher"]["simple_creators"]
global_variables = globals()
for item in simple_creators:
dynamic_plugin = SettingsCreator.from_settings(item)
global_variables[dynamic_plugin.__name__] = dynamic_plugin
initialize()

View file

@@ -1,97 +0,0 @@
from openpype.hosts.traypublisher.api import pipeline
from openpype.lib import FileDef
from openpype.pipeline import (
Creator,
CreatedInstance
)
class WorkfileCreator(Creator):
identifier = "workfile"
label = "Workfile"
family = "workfile"
description = "Publish backup of workfile"
create_allow_context_change = True
extensions = [
# Maya
".ma", ".mb",
# Nuke
".nk",
# Hiero
".hrox",
# Houdini
".hip", ".hiplc", ".hipnc",
# Blender
".blend",
# Celaction
".scn",
# TVPaint
".tvpp",
# Fusion
".comp",
# Harmony
".zip",
# Premiere
".prproj",
# Resolve
".drp",
# Photoshop
".psd", ".psb",
# Aftereffects
".aep"
]
def get_icon(self):
return "fa.file"
def collect_instances(self):
for instance_data in pipeline.list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
pipeline.update_instances(update_list)
def remove_instances(self, instances):
pipeline.remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def create(self, subset_name, data, pre_create_data):
# Pass precreate data to creator attributes
data["creator_attributes"] = pre_create_data
# Create new instance
new_instance = CreatedInstance(self.family, subset_name, data, self)
# Host implementation of storing metadata about instance
pipeline.HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
def get_default_variants(self):
return [
"Main"
]
def get_instance_attr_defs(self):
output = [
FileDef(
"filepath",
folders=False,
extensions=self.extensions,
label="Filepath"
)
]
return output
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return self.get_instance_attr_defs()
def get_detail_description(self):
return """# Publish workfile backup"""

Some files were not shown because too many files have changed in this diff.