Merge remote-tracking branch 'origin/develop' into enhancement/OP-3075_houdini-new-publisher

This commit is contained in:
Ondřej Samohel 2022-11-24 12:13:17 +01:00
commit 44a7e844b2
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
168 changed files with 5367 additions and 1314 deletions


@@ -2,7 +2,7 @@ name: Milestone - assign to PRs
on:
pull_request_target:
types: [opened, reopened, edited, synchronize]
types: [closed]
jobs:
run_if_release:


@@ -1,8 +1,101 @@
# Changelog
## [3.14.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.7](https://github.com/pypeclub/OpenPype/tree/3.14.7)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.4...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.6...3.14.7)
**🆕 New features**
- Hiero: loading effect family to timeline [\#4055](https://github.com/pypeclub/OpenPype/pull/4055)
**🚀 Enhancements**
- Photoshop: bug with pop-up window on Instance Creator [\#4121](https://github.com/pypeclub/OpenPype/pull/4121)
- Publisher: Open on specific tab [\#4120](https://github.com/pypeclub/OpenPype/pull/4120)
- Publisher: Hide unknown publish values [\#4116](https://github.com/pypeclub/OpenPype/pull/4116)
- Ftrack: Event server status give more information about version locations [\#4112](https://github.com/pypeclub/OpenPype/pull/4112)
- General: Allow higher numbers in frames and clips [\#4101](https://github.com/pypeclub/OpenPype/pull/4101)
- Publisher: Settings for validate frame range [\#4097](https://github.com/pypeclub/OpenPype/pull/4097)
- Publisher: Ignore escape button [\#4090](https://github.com/pypeclub/OpenPype/pull/4090)
- Flame: Loading clip with native colorspace resolved from mapping [\#4079](https://github.com/pypeclub/OpenPype/pull/4079)
- General: Extract review single frame output [\#4064](https://github.com/pypeclub/OpenPype/pull/4064)
- Publisher: Prepared common function for instance data cache [\#4063](https://github.com/pypeclub/OpenPype/pull/4063)
- Publisher: Easy access to publish page from create page [\#4058](https://github.com/pypeclub/OpenPype/pull/4058)
- General/TVPaint: Attribute defs dialog [\#4052](https://github.com/pypeclub/OpenPype/pull/4052)
- Publisher: Better reset defer [\#4048](https://github.com/pypeclub/OpenPype/pull/4048)
- Publisher: Add thumbnail sources [\#4042](https://github.com/pypeclub/OpenPype/pull/4042)
**🐛 Bug fixes**
- General: Move default settings for template name [\#4119](https://github.com/pypeclub/OpenPype/pull/4119)
- Slack: notification fail in new tray publisher [\#4118](https://github.com/pypeclub/OpenPype/pull/4118)
- Nuke: loaded nodes set to first tab [\#4114](https://github.com/pypeclub/OpenPype/pull/4114)
- Nuke: load image first frame [\#4113](https://github.com/pypeclub/OpenPype/pull/4113)
- Files Widget: Ignore case sensitivity of extensions [\#4096](https://github.com/pypeclub/OpenPype/pull/4096)
- Webpublisher: extension is lowercased in Setting and in uploaded files [\#4095](https://github.com/pypeclub/OpenPype/pull/4095)
- Publish Report Viewer: Fix small bugs [\#4086](https://github.com/pypeclub/OpenPype/pull/4086)
- Igniter: fix regex to match semver better [\#4085](https://github.com/pypeclub/OpenPype/pull/4085)
- Maya: aov filtering [\#4083](https://github.com/pypeclub/OpenPype/pull/4083)
- Flame/Flare: Loading to multiple batches [\#4080](https://github.com/pypeclub/OpenPype/pull/4080)
- hiero: creator from settings with set maximum [\#4077](https://github.com/pypeclub/OpenPype/pull/4077)
- Nuke: resolve hashes in file name only for frame token [\#4074](https://github.com/pypeclub/OpenPype/pull/4074)
- Publisher: Fix cache of asset docs [\#4070](https://github.com/pypeclub/OpenPype/pull/4070)
- Webpublisher: cleanup wp extract thumbnail [\#4067](https://github.com/pypeclub/OpenPype/pull/4067)
- Settings UI: Locked setting can't bypass lock [\#4066](https://github.com/pypeclub/OpenPype/pull/4066)
- Loader: Fix comparison of repre name [\#4053](https://github.com/pypeclub/OpenPype/pull/4053)
- Deadline: Extract environment subprocess failure [\#4050](https://github.com/pypeclub/OpenPype/pull/4050)
**🔀 Refactored code**
- General: Collect entities plugin minor changes [\#4089](https://github.com/pypeclub/OpenPype/pull/4089)
- General: Direct interfaces import [\#4065](https://github.com/pypeclub/OpenPype/pull/4065)
**Merged pull requests:**
- Bump loader-utils from 1.4.1 to 1.4.2 in /website [\#4100](https://github.com/pypeclub/OpenPype/pull/4100)
- Online family for Tray Publisher [\#4093](https://github.com/pypeclub/OpenPype/pull/4093)
- Bump loader-utils from 1.4.0 to 1.4.1 in /website [\#4081](https://github.com/pypeclub/OpenPype/pull/4081)
- remove underscore from subset name [\#4059](https://github.com/pypeclub/OpenPype/pull/4059)
- Alembic Loader as Arnold Standin [\#4047](https://github.com/pypeclub/OpenPype/pull/4047)
## [3.14.6](https://github.com/pypeclub/OpenPype/tree/3.14.6)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.5...3.14.6)
### 📖 Documentation
- Documentation: Minor updates to dev\_requirements.md [\#4025](https://github.com/pypeclub/OpenPype/pull/4025)
**🆕 New features**
- Nuke: add 13.2 variant [\#4041](https://github.com/pypeclub/OpenPype/pull/4041)
**🚀 Enhancements**
- Publish Report Viewer: Store reports locally on machine [\#4040](https://github.com/pypeclub/OpenPype/pull/4040)
- General: More specific error in burnins script [\#4026](https://github.com/pypeclub/OpenPype/pull/4026)
- General: Extract review does not crash with old settings overrides [\#4023](https://github.com/pypeclub/OpenPype/pull/4023)
- Publisher: Convertors for legacy instances [\#4020](https://github.com/pypeclub/OpenPype/pull/4020)
- workflows: adding milestone creator and assigner [\#4018](https://github.com/pypeclub/OpenPype/pull/4018)
- Publisher: Catch creator errors [\#4015](https://github.com/pypeclub/OpenPype/pull/4015)
**🐛 Bug fixes**
- Hiero - effect collection fixes [\#4038](https://github.com/pypeclub/OpenPype/pull/4038)
- Nuke - loader clip correct hash conversion in path [\#4037](https://github.com/pypeclub/OpenPype/pull/4037)
- Maya: Soft fail when applying capture preset [\#4034](https://github.com/pypeclub/OpenPype/pull/4034)
- Igniter: handle missing directory [\#4032](https://github.com/pypeclub/OpenPype/pull/4032)
- StandalonePublisher: Fix thumbnail publishing [\#4029](https://github.com/pypeclub/OpenPype/pull/4029)
- Experimental Tools: Fix publisher import [\#4027](https://github.com/pypeclub/OpenPype/pull/4027)
- Houdini: fix wrong path in ASS loader [\#4016](https://github.com/pypeclub/OpenPype/pull/4016)
**🔀 Refactored code**
- General: Import lib functions from lib [\#4017](https://github.com/pypeclub/OpenPype/pull/4017)
## [3.14.5](https://github.com/pypeclub/OpenPype/tree/3.14.5) (2022-10-24)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.4...3.14.5)
**🚀 Enhancements**


@@ -1,5 +1,222 @@
# Changelog
## [3.14.7](https://github.com/pypeclub/OpenPype/tree/3.14.7)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.6...3.14.7)
**🆕 New features**
- Hiero: loading effect family to timeline [\#4055](https://github.com/pypeclub/OpenPype/pull/4055)
**🚀 Enhancements**
- Photoshop: bug with pop-up window on Instance Creator [\#4121](https://github.com/pypeclub/OpenPype/pull/4121)
- Publisher: Open on specific tab [\#4120](https://github.com/pypeclub/OpenPype/pull/4120)
- Publisher: Hide unknown publish values [\#4116](https://github.com/pypeclub/OpenPype/pull/4116)
- Ftrack: Event server status give more information about version locations [\#4112](https://github.com/pypeclub/OpenPype/pull/4112)
- General: Allow higher numbers in frames and clips [\#4101](https://github.com/pypeclub/OpenPype/pull/4101)
- Publisher: Settings for validate frame range [\#4097](https://github.com/pypeclub/OpenPype/pull/4097)
- Publisher: Ignore escape button [\#4090](https://github.com/pypeclub/OpenPype/pull/4090)
- Flame: Loading clip with native colorspace resolved from mapping [\#4079](https://github.com/pypeclub/OpenPype/pull/4079)
- General: Extract review single frame output [\#4064](https://github.com/pypeclub/OpenPype/pull/4064)
- Publisher: Prepared common function for instance data cache [\#4063](https://github.com/pypeclub/OpenPype/pull/4063)
- Publisher: Easy access to publish page from create page [\#4058](https://github.com/pypeclub/OpenPype/pull/4058)
- General/TVPaint: Attribute defs dialog [\#4052](https://github.com/pypeclub/OpenPype/pull/4052)
- Publisher: Better reset defer [\#4048](https://github.com/pypeclub/OpenPype/pull/4048)
- Publisher: Add thumbnail sources [\#4042](https://github.com/pypeclub/OpenPype/pull/4042)
**🐛 Bug fixes**
- General: Move default settings for template name [\#4119](https://github.com/pypeclub/OpenPype/pull/4119)
- Slack: notification fail in new tray publisher [\#4118](https://github.com/pypeclub/OpenPype/pull/4118)
- Nuke: loaded nodes set to first tab [\#4114](https://github.com/pypeclub/OpenPype/pull/4114)
- Nuke: load image first frame [\#4113](https://github.com/pypeclub/OpenPype/pull/4113)
- Files Widget: Ignore case sensitivity of extensions [\#4096](https://github.com/pypeclub/OpenPype/pull/4096)
- Webpublisher: extension is lowercased in Setting and in uploaded files [\#4095](https://github.com/pypeclub/OpenPype/pull/4095)
- Publish Report Viewer: Fix small bugs [\#4086](https://github.com/pypeclub/OpenPype/pull/4086)
- Igniter: fix regex to match semver better [\#4085](https://github.com/pypeclub/OpenPype/pull/4085)
- Maya: aov filtering [\#4083](https://github.com/pypeclub/OpenPype/pull/4083)
- Flame/Flare: Loading to multiple batches [\#4080](https://github.com/pypeclub/OpenPype/pull/4080)
- hiero: creator from settings with set maximum [\#4077](https://github.com/pypeclub/OpenPype/pull/4077)
- Nuke: resolve hashes in file name only for frame token [\#4074](https://github.com/pypeclub/OpenPype/pull/4074)
- Publisher: Fix cache of asset docs [\#4070](https://github.com/pypeclub/OpenPype/pull/4070)
- Webpublisher: cleanup wp extract thumbnail [\#4067](https://github.com/pypeclub/OpenPype/pull/4067)
- Settings UI: Locked setting can't bypass lock [\#4066](https://github.com/pypeclub/OpenPype/pull/4066)
- Loader: Fix comparison of repre name [\#4053](https://github.com/pypeclub/OpenPype/pull/4053)
- Deadline: Extract environment subprocess failure [\#4050](https://github.com/pypeclub/OpenPype/pull/4050)
**🔀 Refactored code**
- General: Collect entities plugin minor changes [\#4089](https://github.com/pypeclub/OpenPype/pull/4089)
- General: Direct interfaces import [\#4065](https://github.com/pypeclub/OpenPype/pull/4065)
**Merged pull requests:**
- Bump loader-utils from 1.4.1 to 1.4.2 in /website [\#4100](https://github.com/pypeclub/OpenPype/pull/4100)
- Online family for Tray Publisher [\#4093](https://github.com/pypeclub/OpenPype/pull/4093)
- Bump loader-utils from 1.4.0 to 1.4.1 in /website [\#4081](https://github.com/pypeclub/OpenPype/pull/4081)
- remove underscore from subset name [\#4059](https://github.com/pypeclub/OpenPype/pull/4059)
- Alembic Loader as Arnold Standin [\#4047](https://github.com/pypeclub/OpenPype/pull/4047)
## [3.14.6](https://github.com/pypeclub/OpenPype/tree/3.14.6)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.5...3.14.6)
### 📖 Documentation
- Documentation: Minor updates to dev\_requirements.md [\#4025](https://github.com/pypeclub/OpenPype/pull/4025)
**🆕 New features**
- Nuke: add 13.2 variant [\#4041](https://github.com/pypeclub/OpenPype/pull/4041)
**🚀 Enhancements**
- Publish Report Viewer: Store reports locally on machine [\#4040](https://github.com/pypeclub/OpenPype/pull/4040)
- General: More specific error in burnins script [\#4026](https://github.com/pypeclub/OpenPype/pull/4026)
- General: Extract review does not crash with old settings overrides [\#4023](https://github.com/pypeclub/OpenPype/pull/4023)
- Publisher: Convertors for legacy instances [\#4020](https://github.com/pypeclub/OpenPype/pull/4020)
- workflows: adding milestone creator and assigner [\#4018](https://github.com/pypeclub/OpenPype/pull/4018)
- Publisher: Catch creator errors [\#4015](https://github.com/pypeclub/OpenPype/pull/4015)
**🐛 Bug fixes**
- Hiero - effect collection fixes [\#4038](https://github.com/pypeclub/OpenPype/pull/4038)
- Nuke - loader clip correct hash conversion in path [\#4037](https://github.com/pypeclub/OpenPype/pull/4037)
- Maya: Soft fail when applying capture preset [\#4034](https://github.com/pypeclub/OpenPype/pull/4034)
- Igniter: handle missing directory [\#4032](https://github.com/pypeclub/OpenPype/pull/4032)
- StandalonePublisher: Fix thumbnail publishing [\#4029](https://github.com/pypeclub/OpenPype/pull/4029)
- Experimental Tools: Fix publisher import [\#4027](https://github.com/pypeclub/OpenPype/pull/4027)
- Houdini: fix wrong path in ASS loader [\#4016](https://github.com/pypeclub/OpenPype/pull/4016)
**🔀 Refactored code**
- General: Import lib functions from lib [\#4017](https://github.com/pypeclub/OpenPype/pull/4017)
## [3.14.5](https://github.com/pypeclub/OpenPype/tree/3.14.5) (2022-10-24)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.4...3.14.5)
**🚀 Enhancements**
- Maya: add OBJ extractor to model family [\#4021](https://github.com/pypeclub/OpenPype/pull/4021)
- Publish report viewer tool [\#4010](https://github.com/pypeclub/OpenPype/pull/4010)
- Nuke | Global: adding custom tags representation filtering [\#4009](https://github.com/pypeclub/OpenPype/pull/4009)
- Publisher: Create context has shared data for collection phase [\#3995](https://github.com/pypeclub/OpenPype/pull/3995)
- Resolve: updating to v18 compatibility [\#3986](https://github.com/pypeclub/OpenPype/pull/3986)
**🐛 Bug fixes**
- TrayPublisher: Fix missing argument [\#4019](https://github.com/pypeclub/OpenPype/pull/4019)
- General: Fix python 2 compatibility of ffmpeg and oiio tools discovery [\#4011](https://github.com/pypeclub/OpenPype/pull/4011)
**🔀 Refactored code**
- Maya: Removed unused imports [\#4008](https://github.com/pypeclub/OpenPype/pull/4008)
- Unreal: Fix import of moved function [\#4007](https://github.com/pypeclub/OpenPype/pull/4007)
- Houdini: Change import of RepairAction [\#4005](https://github.com/pypeclub/OpenPype/pull/4005)
- Nuke/Hiero: Refactor openpype.api imports [\#4000](https://github.com/pypeclub/OpenPype/pull/4000)
- TVPaint: Defined with HostBase [\#3994](https://github.com/pypeclub/OpenPype/pull/3994)
**Merged pull requests:**
- Unreal: Remove redundant Creator stub [\#4012](https://github.com/pypeclub/OpenPype/pull/4012)
- Unreal: add `uproject` extension to Unreal project template [\#4004](https://github.com/pypeclub/OpenPype/pull/4004)
- Unreal: fix order of includes [\#4002](https://github.com/pypeclub/OpenPype/pull/4002)
- Fusion: Implement backwards compatibility \(+/- Fusion 17.2\) [\#3958](https://github.com/pypeclub/OpenPype/pull/3958)
## [3.14.4](https://github.com/pypeclub/OpenPype/tree/3.14.4) (2022-10-19)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.3...3.14.4)
**🆕 New features**
- Webpublisher: use max next published version number for all items in batch [\#3961](https://github.com/pypeclub/OpenPype/pull/3961)
- General: Control Thumbnail integration via explicit configuration profiles [\#3951](https://github.com/pypeclub/OpenPype/pull/3951)
**🚀 Enhancements**
- Publisher: Multiselection in card view [\#3993](https://github.com/pypeclub/OpenPype/pull/3993)
- TrayPublisher: Original Basename cause crash too early [\#3990](https://github.com/pypeclub/OpenPype/pull/3990)
- Tray Publisher: add `originalBasename` data to simple creators [\#3988](https://github.com/pypeclub/OpenPype/pull/3988)
- General: Custom paths to ffmpeg and OpenImageIO tools [\#3982](https://github.com/pypeclub/OpenPype/pull/3982)
- Integrate: Preserve existing subset group if instance does not set it for new version [\#3976](https://github.com/pypeclub/OpenPype/pull/3976)
- Publisher: Prepare publisher controller for remote publishing [\#3972](https://github.com/pypeclub/OpenPype/pull/3972)
- Maya: new style dataclasses in maya deadline submitter plugin [\#3968](https://github.com/pypeclub/OpenPype/pull/3968)
- Maya: Define preffered Qt bindings for Qt.py and qtpy [\#3963](https://github.com/pypeclub/OpenPype/pull/3963)
- Settings: Move imageio from project anatomy to project settings \[pypeclub\] [\#3959](https://github.com/pypeclub/OpenPype/pull/3959)
- TrayPublisher: Extract thumbnail for other families [\#3952](https://github.com/pypeclub/OpenPype/pull/3952)
- Publisher: Pass instance to subset name method on update [\#3949](https://github.com/pypeclub/OpenPype/pull/3949)
- General: Set root environments before DCC launch [\#3947](https://github.com/pypeclub/OpenPype/pull/3947)
- Refactor: changed legacy way to update database for Hero version integrate [\#3941](https://github.com/pypeclub/OpenPype/pull/3941)
- Maya: Moved plugin from global to maya [\#3939](https://github.com/pypeclub/OpenPype/pull/3939)
- Publisher: Create dialog is part of main window [\#3936](https://github.com/pypeclub/OpenPype/pull/3936)
- Fusion: Implement Alembic and FBX mesh loader [\#3927](https://github.com/pypeclub/OpenPype/pull/3927)
**🐛 Bug fixes**
- TrayPublisher: Disable sequences in batch mov creator [\#3996](https://github.com/pypeclub/OpenPype/pull/3996)
- Fix - tags might be missing on representation [\#3985](https://github.com/pypeclub/OpenPype/pull/3985)
- Resolve: Fix usage of functions from lib [\#3983](https://github.com/pypeclub/OpenPype/pull/3983)
- Maya: remove invalid prefix token for non-multipart outputs [\#3981](https://github.com/pypeclub/OpenPype/pull/3981)
- Ftrack: Fix schema cache for Python 2 [\#3980](https://github.com/pypeclub/OpenPype/pull/3980)
- Maya: add object to attr.s declaration [\#3973](https://github.com/pypeclub/OpenPype/pull/3973)
- Maya: Deadline OutputFilePath hack regression for Renderman [\#3950](https://github.com/pypeclub/OpenPype/pull/3950)
- Houdini: Fix validate workfile paths for non-parm file references [\#3948](https://github.com/pypeclub/OpenPype/pull/3948)
- Photoshop: missed sync published version of workfile with workfile [\#3946](https://github.com/pypeclub/OpenPype/pull/3946)
- Maya: Set default value for RenderSetupIncludeLights option [\#3944](https://github.com/pypeclub/OpenPype/pull/3944)
- Maya: fix regression of Renderman Deadline hack [\#3943](https://github.com/pypeclub/OpenPype/pull/3943)
- Kitsu: 2 fixes, nb\_frames and Shot type error [\#3940](https://github.com/pypeclub/OpenPype/pull/3940)
- Tray: Change order of attribute changes [\#3938](https://github.com/pypeclub/OpenPype/pull/3938)
- AttributeDefs: Fix crashing multivalue of files widget [\#3937](https://github.com/pypeclub/OpenPype/pull/3937)
- General: Fix links query on hero version [\#3900](https://github.com/pypeclub/OpenPype/pull/3900)
- Publisher: Files Drag n Drop cleanup [\#3888](https://github.com/pypeclub/OpenPype/pull/3888)
**🔀 Refactored code**
- Flame: Import lib functions from lib [\#3992](https://github.com/pypeclub/OpenPype/pull/3992)
- General: Fix deprecated warning in legacy creator [\#3978](https://github.com/pypeclub/OpenPype/pull/3978)
- Blender: Remove openpype api imports [\#3977](https://github.com/pypeclub/OpenPype/pull/3977)
- General: Use direct import of resources [\#3964](https://github.com/pypeclub/OpenPype/pull/3964)
- General: Direct settings imports [\#3934](https://github.com/pypeclub/OpenPype/pull/3934)
- General: import 'Logger' from 'openpype.lib' [\#3926](https://github.com/pypeclub/OpenPype/pull/3926)
- General: Remove deprecated functions from lib [\#3907](https://github.com/pypeclub/OpenPype/pull/3907)
**Merged pull requests:**
- Maya + Yeti: Load Yeti Cache fix frame number recognition [\#3942](https://github.com/pypeclub/OpenPype/pull/3942)
- Fusion: Implement callbacks to Fusion's event system thread [\#3928](https://github.com/pypeclub/OpenPype/pull/3928)
- Photoshop: create single frame image in Ftrack as review [\#3908](https://github.com/pypeclub/OpenPype/pull/3908)
## [3.14.3](https://github.com/pypeclub/OpenPype/tree/3.14.3) (2022-10-03)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...3.14.3)
**🚀 Enhancements**
- Publisher: Enhancement proposals [\#3897](https://github.com/pypeclub/OpenPype/pull/3897)
**🐛 Bug fixes**
- Maya: Fix Render single camera validator [\#3929](https://github.com/pypeclub/OpenPype/pull/3929)
- Flame: loading multilayer exr to batch/reel is working [\#3901](https://github.com/pypeclub/OpenPype/pull/3901)
- Hiero: Fix inventory check on launch [\#3895](https://github.com/pypeclub/OpenPype/pull/3895)
- WebPublisher: Fix import after refactor [\#3891](https://github.com/pypeclub/OpenPype/pull/3891)
**🔀 Refactored code**
- Maya: Remove unused 'openpype.api' imports in plugins [\#3925](https://github.com/pypeclub/OpenPype/pull/3925)
- Resolve: Use new Extractor location [\#3918](https://github.com/pypeclub/OpenPype/pull/3918)
- Unreal: Use new Extractor location [\#3917](https://github.com/pypeclub/OpenPype/pull/3917)
- Flame: Use new Extractor location [\#3916](https://github.com/pypeclub/OpenPype/pull/3916)
- Houdini: Use new Extractor location [\#3894](https://github.com/pypeclub/OpenPype/pull/3894)
- Harmony: Use new Extractor location [\#3893](https://github.com/pypeclub/OpenPype/pull/3893)
**Merged pull requests:**
- Maya: Fix Scene Inventory possibly starting off-screen due to maya preferences [\#3923](https://github.com/pypeclub/OpenPype/pull/3923)
## [3.14.2](https://github.com/pypeclub/OpenPype/tree/3.14.2) (2022-09-12)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...3.14.2)


@@ -63,7 +63,8 @@ class OpenPypeVersion(semver.VersionInfo):
"""
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?") # noqa: E501
# this should match any string complying with https://semver.org/
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?") # noqa: E501
_installed_version = None
def __init__(self, *args, **kwargs):
@@ -211,6 +212,8 @@ class OpenPypeVersion(semver.VersionInfo):
OpenPypeVersion: of detected or None.
"""
# strip .zip ext if present
string = re.sub(r"\.zip$", "", string, flags=re.IGNORECASE)
m = re.search(OpenPypeVersion._VERSION_REGEX, string)
if not m:
return None
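The hunks above loosen the version regex and strip a trailing `.zip` before matching. A minimal sketch of that parsing flow, assuming a hypothetical `parse_version` helper (the regex itself is copied from the diff; the sample strings are illustrative, not actual OpenPype build names):

```python
import re

# Regex as introduced in the diff; intentionally looser than a full
# semver validator, but enough to pull apart the named components.
VERSION_REGEX = re.compile(
    r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
    r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
)


def parse_version(string):
    # Strip a trailing ".zip" (case-insensitive) first, as the hunk at
    # line 212 does, so zipped version archives parse the same way.
    string = re.sub(r"\.zip$", "", string, flags=re.IGNORECASE)
    m = re.search(VERSION_REGEX, string)
    return m.groupdict() if m else None
```

With this, `parse_version("openpype-v3.14.7-nightly.2.zip")` yields major `3`, minor `14`, patch `7` and prerelease `nightly.2`, while a string with no `x.y.z` triple returns `None`.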


@@ -389,10 +389,11 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
returned if 'None' is passed.
Returns:
None: If subset with specified filters was not found.
Dict: Subset document which can be reduced to specified 'fields'.
"""
Union[None, Dict[str, Any]]: None if subset with specified filters was
not found or dict subset document which can be reduced to
specified 'fields'.
"""
if not subset_name:
return None


@@ -0,0 +1,177 @@
import os
import shutil
from time import sleep
from openpype.client.entities import (
get_last_version_by_subset_id,
get_representations,
get_subsets,
)
from openpype.lib import PreLaunchHook
from openpype.lib.local_settings import get_local_site_id
from openpype.lib.profiles_filtering import filter_profiles
from openpype.pipeline.load.utils import get_representation_path
from openpype.settings.lib import get_project_settings
class CopyLastPublishedWorkfile(PreLaunchHook):
"""Copy last published workfile as first workfile.
Prelaunch hook works only if last workfile leads to not existing file.
- That is possible only if it's first version.
"""
# Before `AddLastWorkfileToLaunchArgs`
order = -1
app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"]
def execute(self):
"""Check if local workfile doesn't exist, else copy it.
1- Check if setting for this feature is enabled
2- Check if workfile in work area doesn't exist
3- Check if published workfile exists and is copied locally in publish
4- Substitute copied published workfile as first workfile
Returns:
None: This is a void method.
"""
sync_server = self.modules_manager.get("sync_server")
if not sync_server or not sync_server.enabled:
self.log.debug("Sync server module is not enabled or available")
return
# Check there is no workfile available
last_workfile = self.data.get("last_workfile_path")
if os.path.exists(last_workfile):
self.log.debug(
"Last workfile exists. Skipping {} process.".format(
self.__class__.__name__
)
)
return
# Get data
project_name = self.data["project_name"]
task_name = self.data["task_name"]
task_type = self.data["task_type"]
host_name = self.application.host_name
# Check settings has enabled it
project_settings = get_project_settings(project_name)
profiles = project_settings["global"]["tools"]["Workfiles"][
"last_workfile_on_startup"
]
filter_data = {
"tasks": task_name,
"task_types": task_type,
"hosts": host_name,
}
last_workfile_settings = filter_profiles(profiles, filter_data)
use_last_published_workfile = last_workfile_settings.get(
"use_last_published_workfile"
)
if use_last_published_workfile is None:
self.log.info(
(
"Seems like old version of settings is used."
' Can\'t access custom templates in host "{}".'.format(
host_name
)
)
)
return
elif use_last_published_workfile is False:
self.log.info(
(
'Project "{}" has turned off to use last published'
' workfile as first workfile for host "{}"'.format(
project_name, host_name
)
)
)
return
self.log.info("Trying to fetch last published workfile...")
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
# Check it can proceed
if not project_doc and not asset_doc:
return
# Get subset id
subset_id = next(
(
subset["_id"]
for subset in get_subsets(
project_name,
asset_ids=[asset_doc["_id"]],
fields=["_id", "data.family", "data.families"],
)
if subset["data"].get("family") == "workfile"
# Legacy compatibility
or "workfile" in subset["data"].get("families", {})
),
None,
)
if not subset_id:
self.log.debug(
'No any workfile for asset "{}".'.format(asset_doc["name"])
)
return
# Get workfile representation
last_version_doc = get_last_version_by_subset_id(
project_name, subset_id, fields=["_id"]
)
if not last_version_doc:
self.log.debug("Subset does not have any versions")
return
workfile_representation = next(
(
representation
for representation in get_representations(
project_name, version_ids=[last_version_doc["_id"]]
)
if representation["context"]["task"]["name"] == task_name
),
None,
)
if not workfile_representation:
self.log.debug(
'No published workfile for task "{}" and host "{}".'.format(
task_name, host_name
)
)
return
local_site_id = get_local_site_id()
sync_server.add_site(
project_name,
workfile_representation["_id"],
local_site_id,
force=True,
priority=99,
reset_timer=True,
)
while not sync_server.is_representation_on_site(
project_name, workfile_representation["_id"], local_site_id
):
sleep(5)
# Get paths
published_workfile_path = get_representation_path(
workfile_representation, root=anatomy.roots
)
local_workfile_dir = os.path.dirname(last_workfile)
# Copy file and substitute path
self.data["last_workfile_path"] = shutil.copy(
published_workfile_path, local_workfile_dir
)


@@ -1,5 +1,4 @@
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
class AfterEffectsAddon(OpenPypeModule, IHostAddon):


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
BLENDER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
HOST_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -225,7 +225,8 @@ class FlameMenuUniversal(_FlameMenuApp):
menu['actions'].append({
"name": "Load...",
"execute": lambda x: self.tools_helper.show_loader()
"execute": lambda x: callback_selection(
x, self.tools_helper.show_loader)
})
menu['actions'].append({
"name": "Manage...",


@@ -4,13 +4,13 @@ import shutil
from copy import deepcopy
from xml.etree import ElementTree as ET
import qargparse
from Qt import QtCore, QtWidgets
import qargparse
from openpype import style
from openpype.settings import get_current_project_settings
from openpype.lib import Logger
from openpype.pipeline import LegacyCreator, LoaderPlugin
from openpype.settings import get_current_project_settings
from . import constants
from . import lib as flib
@@ -690,6 +690,54 @@ class ClipLoader(LoaderPlugin):
)
]
_mapping = None
def get_colorspace(self, context):
"""Get colorspace name
Look either to version data or representation data.
Args:
context (dict): version context data
Returns:
str: colorspace name or None
"""
version = context['version']
version_data = version.get("data", {})
colorspace = version_data.get(
"colorspace", None
)
if (
not colorspace
or colorspace == "Unknown"
):
colorspace = context["representation"]["data"].get(
"colorspace", None)
return colorspace
@classmethod
def get_native_colorspace(cls, input_colorspace):
"""Return native colorspace name.
Args:
input_colorspace (str | None): colorspace name
Returns:
str: native colorspace name defined in mapping or None
"""
if not cls._mapping:
settings = get_current_project_settings()["flame"]
mapping = settings["imageio"]["profilesMapping"]["inputs"]
cls._mapping = {
input["ocioName"]: input["flameName"]
for input in mapping
}
return cls._mapping.get(input_colorspace)
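The cached `_mapping` above is just the settings list flattened into a dict keyed by OCIO name. A minimal sketch of that lookup with made-up mapping entries (the real values come from the Flame project settings under `imageio/profilesMapping/inputs`; these two entries are illustrative only):

```python
# Illustrative mapping entries; in OpenPype these come from
# get_current_project_settings()["flame"]["imageio"]["profilesMapping"]["inputs"].
inputs = [
    {"ocioName": "ACES - ACEScg", "flameName": "ACEScg"},
    {"ocioName": "Output - Rec.709", "flameName": "Rec.709 video"},
]

# Same dict comprehension the new get_native_colorspace uses to build
# its class-level cache on first call.
mapping = {item["ocioName"]: item["flameName"] for item in inputs}

# dict.get() returns None for OCIO names with no Flame counterpart,
# matching the "native colorspace name ... or None" docstring contract.
native = mapping.get("ACES - ACEScg")
```

Subsequent calls reuse the cached dict, so the settings are read only once per session regardless of how many clips are loaded.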
class OpenClipSolver(flib.MediaInfoFile):
create_new_clip = False


@@ -36,14 +36,15 @@ class LoadClip(opfapi.ClipLoader):
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
colorspace = self.get_colorspace(context)
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
colorspace = self.get_native_colorspace(colorspace)
self.log.info("Loading with colorspace: `{}`".format(colorspace))
# create workfile path
workfile_dir = os.environ["AVALON_WORKDIR"]


@@ -1,3 +1,4 @@
from copy import deepcopy
import os
import flame
from pprint import pformat
@@ -22,7 +23,7 @@ class LoadClipBatch(opfapi.ClipLoader):
# settings
reel_name = "OP_LoadedReel"
clip_name_template = "{asset}_{subset}<_{output}>"
clip_name_template = "{batch}_{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@@ -34,19 +35,22 @@ class LoadClipBatch(opfapi.ClipLoader):
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
colorspace = self.get_colorspace(context)
# in case output is not in context replace key to representation
if not context["representation"]["context"].get("output"):
self.clip_name_template.replace("output", "representation")
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
formating_data = deepcopy(context["representation"]["context"])
formating_data["batch"] = self.batch.name.get_value()
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
colorspace = self.get_native_colorspace(colorspace)
self.log.info("Loading with colorspace: `{}`".format(colorspace))
# create workfile path
workfile_dir = options.get("workdir") or os.environ["AVALON_WORKDIR"]
@@ -56,6 +60,7 @@ class LoadClipBatch(opfapi.ClipLoader):
openclip_path = os.path.join(
openclip_dir, clip_name + ".clip"
)
if not os.path.exists(openclip_dir):
os.makedirs(openclip_dir)


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
HARMONY_HOST_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -1,7 +1,6 @@
import os
import platform
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
HIERO_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -30,9 +30,15 @@ from .lib import (
get_timeline_selection,
get_current_track,
get_track_item_tags,
get_track_openpype_tag,
set_track_openpype_tag,
get_track_openpype_data,
get_track_item_pype_tag,
set_track_item_pype_tag,
get_track_item_pype_data,
get_trackitem_openpype_tag,
set_trackitem_openpype_tag,
get_trackitem_openpype_data,
set_publish_attribute,
get_publish_attribute,
imprint,
@@ -85,9 +91,12 @@ __all__ = [
"get_timeline_selection",
"get_current_track",
"get_track_item_tags",
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",
"get_track_openpype_tag",
"set_track_openpype_tag",
"get_track_openpype_data",
"get_trackitem_openpype_tag",
"set_trackitem_openpype_tag",
"get_trackitem_openpype_data",
"set_publish_attribute",
"get_publish_attribute",
"imprint",
@@ -99,6 +108,10 @@ __all__ = [
"apply_colorspace_project",
"apply_colorspace_clips",
"get_sequence_pattern_and_padding",
# deprecated
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",
# plugins
"CreatorWidget",


@@ -7,11 +7,15 @@ import os
import re
import sys
import platform
import functools
import warnings
import json
import ast
import secrets
import shutil
import hiero
from Qt import QtWidgets
from Qt import QtWidgets, QtCore, QtXml
from openpype.client import get_project
from openpype.settings import get_project_settings
@@ -20,15 +24,51 @@ from openpype.pipeline.load import filter_containers
from openpype.lib import Logger
from . import tags
try:
from PySide.QtCore import QFile, QTextStream
from PySide.QtXml import QDomDocument
except ImportError:
from PySide2.QtCore import QFile, QTextStream
from PySide2.QtXml import QDomDocument
# from opentimelineio import opentime
# from pprint import pformat
class DeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", DeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=DeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
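The decorator above supports being called both with and without an argument. A minimal standalone sketch of the same pattern (simplified, with a made-up function name, not the OpenPype implementation) can be exercised like this:

```python
import functools
import warnings


def deprecated(new_destination):
    """Warn about the old name, then delegate to the wrapped function."""
    def _decorator(decorated_func):
        @functools.wraps(decorated_func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                "Call to deprecated function '{}'."
                " Please replace your usage with '{}'.".format(
                    decorated_func.__name__, new_destination),
                category=DeprecationWarning,
                stacklevel=2,
            )
            return decorated_func(*args, **kwargs)
        return wrapper
    return _decorator


@deprecated("lib.get_trackitem_openpype_tag")
def get_track_item_pype_tag(track_item):
    # hypothetical body standing in for the real alias
    return "tag-for-{}".format(track_item)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = get_track_item_pype_tag("clip01")

print(result)       # tag-for-clip01
print(len(caught))  # 1
```

The old name keeps working, while every call site gets pointed at the replacement.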
log = Logger.get_logger(__name__)
@@ -301,7 +341,124 @@ def get_track_item_tags(track_item):
return returning_tag_data
def _get_tag_unique_hash():
# sourcery skip: avoid-builtin-shadow
return secrets.token_hex(nbytes=4)
def set_track_openpype_tag(track, data=None):
"""
Set openpype track tag to input track object.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
hiero.core.Tag
"""
data = data or {}
# basic Tag's attribute
tag_data = {
"editable": "0",
"note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_openpype_tag(track)
if _tag:
# if an openpype tag already exists then update it with input data
tag = tags.update_tag(_tag, tag_data)
else:
# if no tag exists yet then create one
tag = tags.create_tag(
"{}_{}".format(
self.pype_tag_name,
_get_tag_unique_hash()
),
tag_data
)
# add it to the input track item
track.addTag(tag)
return tag
def get_track_openpype_tag(track):
"""
Get openpype track tag created by creator or loader plugin.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
hiero.core.Tag: hierarchy, orig clip attributes
"""
# get all tags from track item
_tags = track.tags()
if not _tags:
return None
for tag in _tags:
# return only correct tag defined by global name
if self.pype_tag_name in tag.name():
return tag
def get_track_openpype_data(track, container_name=None):
"""
Get track's openpype tag data.
Attributes:
track (hiero.core.VideoTrack): hiero object
Returns:
dict: data found on pype tag
"""
return_data = {}
# get pype data tag from track item
tag = get_track_openpype_tag(track)
if not tag:
return None
# get tag metadata attribute
tag_data = deepcopy(dict(tag.metadata()))
for obj_name, obj_data in tag_data.items():
obj_name = obj_name.replace("tag.", "")
if obj_name in ["applieswhole", "note", "label"]:
continue
return_data[obj_name] = json.loads(obj_data)
return (
return_data[container_name]
if container_name
else return_data
)
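The metadata decoding above can be sketched without Hiero: tag metadata keys carry a `tag.` prefix, built-in keys are skipped, and the remaining values are JSON strings. The metadata below is made up for illustration:

```python
import json

# Hypothetical raw tag metadata as a tag.metadata() dict would provide it:
# keys prefixed with "tag.", values stored as strings.
raw_metadata = {
    "tag.label": "OpenPype data",
    "tag.note": "OpenPype data container",
    "tag.applieswhole": "1",
    "tag.effectMain": json.dumps(
        {"representation": "abc123", "name": "effectMain"}),
}


def tag_metadata_to_containers(metadata):
    """Strip the "tag." prefix, drop built-in keys, decode JSON values."""
    containers = {}
    for key, value in metadata.items():
        name = key.replace("tag.", "")
        if name in ("applieswhole", "note", "label"):
            continue
        containers[name] = json.loads(value)
    return containers


print(tag_metadata_to_containers(raw_metadata))
# {'effectMain': {'representation': 'abc123', 'name': 'effectMain'}}
```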
@deprecated("openpype.hosts.hiero.api.lib.get_trackitem_openpype_tag")
def get_track_item_pype_tag(track_item):
# backward compatibility alias
return get_trackitem_openpype_tag(track_item)
@deprecated("openpype.hosts.hiero.api.lib.set_trackitem_openpype_tag")
def set_track_item_pype_tag(track_item, data=None):
# backward compatibility alias
return set_trackitem_openpype_tag(track_item, data)
@deprecated("openpype.hosts.hiero.api.lib.get_trackitem_openpype_data")
def get_track_item_pype_data(track_item):
# backward compatibility alias
return get_trackitem_openpype_data(track_item)
def get_trackitem_openpype_tag(track_item):
"""
Get pype track item tag created by creator or loader plugin.
@@ -317,16 +474,16 @@ def get_track_item_pype_tag(track_item):
return None
for tag in _tags:
# return only correct tag defined by global name
if tag.name() == self.pype_tag_name:
if self.pype_tag_name in tag.name():
return tag
def set_track_item_pype_tag(track_item, data=None):
def set_trackitem_openpype_tag(track_item, data=None):
"""
Set pype track item tag to input track_item.
Set openpype track tag to input track object.
Attributes:
trackItem (hiero.core.TrackItem): hiero object
track (hiero.core.VideoTrack): hiero object
Returns:
hiero.core.Tag
@@ -341,21 +498,26 @@ def set_track_item_pype_tag(track_item, data=None):
"metadata": dict(data.items())
}
# get available pype tag if any
_tag = get_track_item_pype_tag(track_item)
_tag = get_trackitem_openpype_tag(track_item)
if _tag:
# if an openpype tag already exists then update it with input data
tag = tags.update_tag(_tag, tag_data)
else:
# if no tag exists yet then create one
tag = tags.create_tag(self.pype_tag_name, tag_data)
tag = tags.create_tag(
"{}_{}".format(
self.pype_tag_name,
_get_tag_unique_hash()
),
tag_data
)
# add it to the input track item
track_item.addTag(tag)
return tag
def get_track_item_pype_data(track_item):
def get_trackitem_openpype_data(track_item):
"""
Get track item's pype tag data.
@@ -367,7 +529,7 @@ def get_track_item_pype_data(track_item):
"""
data = {}
# get pype data tag from track item
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
if not tag:
return None
@@ -420,7 +582,7 @@ def imprint(track_item, data=None):
"""
data = data or {}
tag = set_track_item_pype_tag(track_item, data)
tag = set_trackitem_openpype_tag(track_item, data)
# add publish attribute
set_publish_attribute(tag, True)
@@ -832,22 +994,22 @@ def set_selected_track_items(track_items_list, sequence=None):
def _read_doc_from_path(path):
# reading QDomDocument from HROX path
hrox_file = QFile(path)
if not hrox_file.open(QFile.ReadOnly):
# reading QtXml.QDomDocument from HROX path
hrox_file = QtCore.QFile(path)
if not hrox_file.open(QtCore.QFile.ReadOnly):
raise RuntimeError("Failed to open file for reading")
doc = QDomDocument()
doc = QtXml.QDomDocument()
doc.setContent(hrox_file)
hrox_file.close()
return doc
def _write_doc_to_path(doc, path):
# write QDomDocument to path as HROX
hrox_file = QFile(path)
if not hrox_file.open(QFile.WriteOnly):
# write QtXml.QDomDocument to path as HROX
hrox_file = QtCore.QFile(path)
if not hrox_file.open(QtCore.QFile.WriteOnly):
raise RuntimeError("Failed to open file for writing")
stream = QTextStream(hrox_file)
stream = QtCore.QTextStream(hrox_file)
doc.save(stream, 1)
hrox_file.close()
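The QtXml read/modify/write round-trip above can be sketched with the standard library alone (a stand-in for the Qt classes, with made-up document content, not HROX-specific):

```python
import xml.dom.minidom as minidom

# Parse a document, tweak an attribute, serialise it back -- the same
# round-trip _read_doc_from_path/_write_doc_to_path perform with QtXml.
source = '<hrox version="1"><project name="old"/></hrox>'

doc = minidom.parseString(source)
doc.getElementsByTagName("project")[0].setAttribute("name", "new")

result = doc.documentElement.toxml()
print(result)  # contains name="new"
```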
@@ -1030,7 +1192,7 @@ def sync_clip_name_to_data_asset(track_items_list):
# get name and data
ti_name = track_item.name()
data = get_track_item_pype_data(track_item)
data = get_trackitem_openpype_data(track_item)
# ignore if no data on the clip or not publish instance
if not data:
@@ -1042,10 +1204,10 @@ def sync_clip_name_to_data_asset(track_items_list):
if data["asset"] != ti_name:
data["asset"] = ti_name
# remove the original tag
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
track_item.removeTag(tag)
# create new tag with updated data
set_track_item_pype_tag(track_item, data)
set_trackitem_openpype_tag(track_item, data)
print("asset was changed in clip: {}".format(ti_name))
@@ -1083,10 +1245,10 @@ def check_inventory_versions(track_items=None):
project_name = legacy_io.active_project()
filter_result = filter_containers(containers, project_name)
for container in filter_result.latest:
set_track_color(container["_track_item"], clip_color)
set_track_color(container["_item"], clip_color)
for container in filter_result.outdated:
set_track_color(container["_track_item"], clip_color_last)
set_track_color(container["_item"], clip_color_last)
def selection_changed_timeline(event):


@@ -1,6 +1,7 @@
"""
Basic avalon integration
"""
from copy import deepcopy
import os
import contextlib
from collections import OrderedDict
@@ -17,6 +18,7 @@ from openpype.pipeline import (
)
from openpype.tools.utils import host_tools
from . import lib, menu, events
import hiero
log = Logger.get_logger(__name__)
@@ -106,7 +108,7 @@ def containerise(track_item,
data_imprint.update({k: v})
log.debug("_ data_imprint: {}".format(data_imprint))
lib.set_track_item_pype_tag(track_item, data_imprint)
lib.set_trackitem_openpype_tag(track_item, data_imprint)
return track_item
@@ -123,79 +125,131 @@ def ls():
"""
# get all track items from current timeline
all_track_items = lib.get_track_items()
all_items = lib.get_track_items()
for track_item in all_track_items:
container = parse_container(track_item)
if container:
yield container
# append all video tracks
for track in lib.get_current_sequence():
if type(track) != hiero.core.VideoTrack:
continue
all_items.append(track)
for item in all_items:
container_data = parse_container(item)
if isinstance(container_data, list):
for _c in container_data:
yield _c
elif container_data:
yield container_data
def parse_container(track_item, validate=True):
def parse_container(item, validate=True):
"""Return container data from track_item's pype tag.
Args:
track_item (hiero.core.TrackItem): A containerised track item.
item (hiero.core.TrackItem or hiero.core.VideoTrack):
A containerised track item.
validate (bool)[optional]: validating with avalon scheme
Returns:
dict: The container schema data for input containerized track item.
"""
def data_to_container(item, data):
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return
if validate and data and data.get("schema"):
schema.validate(data)
if not isinstance(data, dict):
return
# If not all required data return the empty container
required = ['schema', 'id', 'name',
'namespace', 'loader', 'representation']
if any(key not in data for key in required):
return
container = {key: data[key] for key in required}
container["objectName"] = item.name()
# Store reference to the node object
container["_item"] = item
return container
# convert tag metadata to normal keys names
data = lib.get_track_item_pype_data(track_item)
if (
not data
or data.get("id") != "pyblish.avalon.container"
):
return
if type(item) == hiero.core.VideoTrack:
return_list = []
_data = lib.get_track_openpype_data(item)
if validate and data and data.get("schema"):
schema.validate(data)
if not _data:
return
# convert the data to list and validate them
for _, obj_data in _data.items():
container = data_to_container(item, obj_data)
return_list.append(container)
return return_list
else:
_data = lib.get_trackitem_openpype_data(item)
return data_to_container(item, _data)
if not isinstance(data, dict):
return
# If not all required data return the empty container
required = ['schema', 'id', 'name',
'namespace', 'loader', 'representation']
if not all(key in data for key in required):
return
container = {key: data[key] for key in required}
container["objectName"] = track_item.name()
# Store reference to the node object
container["_track_item"] = track_item
def _update_container_data(container, data):
for key in container:
try:
container[key] = data[key]
except KeyError:
pass
return container
def update_container(track_item, data=None):
"""Update container data to input track_item's pype tag.
def update_container(item, data=None):
"""Update container data to input track_item or track's
openpype tag.
Args:
track_item (hiero.core.TrackItem): A containerised track item.
item (hiero.core.TrackItem or hiero.core.VideoTrack):
A containerised track item.
data (dict)[optional]: dictionary with data to be updated
Returns:
bool: True if container was updated correctly
"""
data = data or dict()
container = lib.get_track_item_pype_data(track_item)
data = data or {}
data = deepcopy(data)
for _key, _value in container.items():
try:
container[_key] = data[_key]
except KeyError:
pass
if type(item) == hiero.core.VideoTrack:
# form object data for test
object_name = data["objectName"]
log.info("Updating container: `{}`".format(track_item.name()))
return bool(lib.set_track_item_pype_tag(track_item, container))
# get all available containers
containers = lib.get_track_openpype_data(item)
container = lib.get_track_openpype_data(item, object_name)
containers = deepcopy(containers)
container = deepcopy(container)
# update data in container
updated_container = _update_container_data(container, data)
# merge updated container back to containers
containers.update({object_name: updated_container})
return bool(lib.set_track_openpype_tag(item, containers))
else:
container = lib.get_trackitem_openpype_data(item)
updated_container = _update_container_data(container, data)
log.info("Updating container: `{}`".format(item.name()))
return bool(lib.set_trackitem_openpype_tag(item, updated_container))
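The `_update_container_data` merge above only overwrites keys that already exist in the container, ignoring extras in the incoming data. A standalone sketch with made-up container values:

```python
def update_container_data(container, data):
    """Copy values from data only for keys the container already has."""
    for key in container:
        try:
            container[key] = data[key]
        except KeyError:
            pass
    return container


container = {"name": "plateMain", "representation": "old-id"}
data = {"representation": "new-id", "unrelated": "ignored"}

updated = update_container_data(container, data)
print(updated)
# {'name': 'plateMain', 'representation': 'new-id'}
```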
def launch_workfiles_app(*args):
@@ -272,11 +326,11 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
instance, old_value, new_value))
from openpype.hosts.hiero.api import (
get_track_item_pype_tag,
get_trackitem_openpype_tag,
set_publish_attribute
)
# Whether instances should be passthrough based on new value
track_item = instance.data["item"]
tag = get_track_item_pype_tag(track_item)
tag = get_trackitem_openpype_tag(track_item)
set_publish_attribute(tag, new_value)


@@ -170,7 +170,10 @@ class CreatorWidget(QtWidgets.QDialog):
for func, val in kwargs.items():
if getattr(item, func):
func_attr = getattr(item, func)
func_attr(val)
if isinstance(val, tuple):
func_attr(*val)
else:
func_attr(val)
# add to layout
layout.addRow(label, item)
@@ -273,8 +276,8 @@ class CreatorWidget(QtWidgets.QDialog):
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
setValue=v["value"], setMinimum=0,
setMaximum=100000, setToolTip=tool_tip)
setRange=(1, 9999999), setValue=v["value"],
setToolTip=tool_tip)
return data


@@ -1,3 +1,4 @@
import json
import re
import os
import hiero
@@ -85,17 +86,16 @@ def update_tag(tag, data):
# get metadata key from data
data_mtd = data.get("metadata", {})
# due to a Hiero bug we have to make sure keys which are not present in
# data have their value cleared to `None`
for _mk in mtd.dict().keys():
if _mk.replace("tag.", "") not in data_mtd.keys():
mtd.setValue(_mk, str(None))
# set all data metadata to tag metadata
for k, v in data_mtd.items():
for _k, _v in data_mtd.items():
value = str(_v)
if type(_v) == dict:
value = json.dumps(_v)
# set the value
mtd.setValue(
"tag.{}".format(str(k)),
str(v)
"tag.{}".format(str(_k)),
value
)
# set note description of tag
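The value handling above stores everything as strings, serialising dict values with `json.dumps` first. A quick standalone sketch (sample data is invented):

```python
import json

# Hypothetical tag data to be written into Hiero tag metadata.
data_mtd = {"family": "plate", "version": 3, "publish": {"active": True}}

serialised = {}
for _k, _v in data_mtd.items():
    # dicts become JSON strings, everything else a plain string
    value = json.dumps(_v) if isinstance(_v, dict) else str(_v)
    serialised["tag.{}".format(_k)] = value

print(serialised["tag.publish"])  # {"active": true}
```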


@@ -0,0 +1,308 @@
import json
from collections import OrderedDict
import six
from openpype.client import (
get_version_by_id
)
from openpype.pipeline import (
AVALON_CONTAINER_ID,
load,
legacy_io,
get_representation_path
)
from openpype.hosts.hiero import api as phiero
from openpype.lib import Logger
class LoadEffects(load.LoaderPlugin):
"""Loading colorspace soft effect exported from nukestudio"""
representations = ["effectJson"]
families = ["effect"]
label = "Load Effects"
order = 0
icon = "cc"
color = "white"
log = Logger.get_logger(__name__)
def load(self, context, name, namespace, data):
"""
Loading function to get the soft effects to particular read node
Arguments:
context (dict): context of version
name (str): name of the version
namespace (str): asset name
data (dict): compulsory attribute > not used
Returns:
nuke node: containerised nuke node object
"""
active_sequence = phiero.get_current_sequence()
active_track = phiero.get_current_track(
active_sequence, "Loaded_{}".format(name))
# get main variables
namespace = namespace or context["asset"]["name"]
object_name = "{}_{}".format(name, namespace)
clip_in = context["asset"]["data"]["clipIn"]
clip_out = context["asset"]["data"]["clipOut"]
data_imprint = {
"objectName": object_name,
"children_names": []
}
# getting file path
file = self.fname.replace("\\", "/")
if self._shared_loading(
file,
active_track,
clip_in,
clip_out,
data_imprint
):
self.containerise(
active_track,
name=name,
namespace=namespace,
object_name=object_name,
context=context,
loader=self.__class__.__name__,
data=data_imprint)
def _shared_loading(
self,
file,
active_track,
clip_in,
clip_out,
data_imprint,
update=False
):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
used_subtracks = {
stitem.name(): stitem
for stitem in phiero.flatten(active_track.subTrackItems())
}
loaded = False
for index_order, (ef_name, ef_val) in enumerate(nodes_order.items()):
new_name = "{}_loaded".format(ef_name)
if new_name not in used_subtracks:
effect_track_item = active_track.createEffect(
effectType=ef_val["class"],
timelineIn=clip_in,
timelineOut=clip_out,
subTrackIndex=index_order
)
effect_track_item.setName(new_name)
else:
effect_track_item = used_subtracks[new_name]
node = effect_track_item.node()
for knob_name, knob_value in ef_val["node"].items():
if (
not knob_value
or knob_name == "name"
):
continue
try:
# assume list means animation
# except 4 values could be RGBA or vector
if isinstance(knob_value, list) and len(knob_value) > 4:
node[knob_name].setAnimated()
for i, value in enumerate(knob_value):
if isinstance(value, list):
# list can have vector animation
for ci, cv in enumerate(value):
node[knob_name].setValueAt(
cv,
(clip_in + i),
ci
)
else:
# list is single values
node[knob_name].setValueAt(
value,
(clip_in + i)
)
else:
node[knob_name].setValue(knob_value)
except NameError:
self.log.warning("Knob: {} cannot be set".format(
knob_name))
# register all loaded children
data_imprint["children_names"].append(new_name)
# make sure containerisation will happen
loaded = True
return loaded
def update(self, container, representation):
""" Updating previously loaded effects
"""
active_track = container["_item"]
file = get_representation_path(representation).replace("\\", "/")
# get main variables
name = container['name']
namespace = container['namespace']
# get timeline in out data
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
version_data = version_doc["data"]
clip_in = version_data["clipIn"]
clip_out = version_data["clipOut"]
object_name = "{}_{}".format(name, namespace)
# Disable previously created nodes
used_subtracks = {
stitem.name(): stitem
for stitem in phiero.flatten(active_track.subTrackItems())
}
container = phiero.get_track_openpype_data(
active_track, object_name
)
loaded_subtrack_items = container["children_names"]
for loaded_stitem in loaded_subtrack_items:
if loaded_stitem not in used_subtracks:
continue
item_to_remove = used_subtracks.pop(loaded_stitem)
# TODO: find a way to erase nodes
self.log.debug(
"This node needs to be removed: {}".format(item_to_remove))
data_imprint = {
"objectName": object_name,
"name": name,
"representation": str(representation["_id"]),
"children_names": []
}
if self._shared_loading(
file,
active_track,
clip_in,
clip_out,
data_imprint,
update=True
):
return phiero.update_container(active_track, data_imprint)
def reorder_nodes(self, data):
new_order = OrderedDict()
trackNums = [v["trackIndex"] for k, v in data.items()
if isinstance(v, dict)]
subTrackNums = [v["subTrackIndex"] for k, v in data.items()
if isinstance(v, dict)]
for trackIndex in range(
min(trackNums), max(trackNums) + 1):
for subTrackIndex in range(
min(subTrackNums), max(subTrackNums) + 1):
item = self.get_item(data, trackIndex, subTrackIndex)
if item:
new_order.update(item)
return new_order
def get_item(self, data, trackIndex, subTrackIndex):
return {key: val for key, val in data.items()
if isinstance(val, dict)
if subTrackIndex == val["subTrackIndex"]
if trackIndex == val["trackIndex"]}
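`reorder_nodes` walks every (trackIndex, subTrackIndex) pair in range; the same ordering can be sketched with a plain sort (standalone, sample effect data is made up, not the plugin code):

```python
# Order effect nodes by their position on track and subtrack.
effects = {
    "Grade2": {"trackIndex": 1, "subTrackIndex": 0, "class": "Grade"},
    "Grade1": {"trackIndex": 0, "subTrackIndex": 1, "class": "Grade"},
    "Transform1": {"trackIndex": 0, "subTrackIndex": 0, "class": "Transform"},
}


def reorder_nodes(data):
    nodes = [
        (name, val) for name, val in data.items() if isinstance(val, dict)
    ]
    # sort primarily by trackIndex, then by subTrackIndex
    nodes.sort(key=lambda item: (item[1]["trackIndex"],
                                 item[1]["subTrackIndex"]))
    return dict(nodes)


print(list(reorder_nodes(effects)))
# ['Transform1', 'Grade1', 'Grade2']
```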
def byteify(self, input):
"""
Converts unicode strings to strings
It goes through all dictionary
Arguments:
input (dict/str): input
Returns:
dict: with fixed values and keys
"""
if isinstance(input, dict):
return {self.byteify(key): self.byteify(value)
for key, value in input.items()}
elif isinstance(input, list):
return [self.byteify(element) for element in input]
elif isinstance(input, six.text_type):
return str(input)
else:
return input
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
pass
def containerise(
self,
track,
name,
namespace,
object_name,
context,
loader=None,
data=None
):
"""Bundle Hiero's object into an assembly and imprint it with metadata
Containerisation enables a tracking of version, author and origin
for loaded assets.
Arguments:
track (hiero.core.VideoTrack): object to imprint as container
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
object_name (str): name of container
context (dict): Asset information
loader (str, optional): Name of node used to produce this
container.
Returns:
track_item (hiero.core.TrackItem): containerised object
"""
data_imprint = {
object_name: {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": str(name),
"namespace": str(namespace),
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
}
}
if data:
for k, v in data.items():
data_imprint[object_name].update({k: v})
self.log.debug("_ data_imprint: {}".format(data_imprint))
phiero.set_track_openpype_tag(track, data_imprint)


@@ -16,6 +16,9 @@ class CollectClipEffects(pyblish.api.InstancePlugin):
review_track_index = instance.context.data.get("reviewTrackIndex")
item = instance.data["item"]
if "audio" in instance.data["family"]:
return
# frame range
self.handle_start = instance.data["handleStart"]
self.handle_end = instance.data["handleEnd"]


@@ -48,7 +48,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
self.log.debug("clip_name: {}".format(clip_name))
# get openpype tag data
tag_data = phiero.get_track_item_pype_data(track_item)
tag_data = phiero.get_trackitem_openpype_data(track_item)
self.log.debug("__ tag_data: {}".format(pformat(tag_data)))
if not tag_data:
@@ -326,8 +326,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
return hiero_export.create_otio_time_range(
frame_start, frame_duration, fps)
@staticmethod
def collect_sub_track_items(tracks):
def collect_sub_track_items(self, tracks):
"""
Returns dictionary with track index as key and list of subtracks
"""
@@ -336,8 +335,10 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
for track in tracks:
items = track.items()
effect_items = track.subTrackItems()
# skip if no clips on track > need track with effect only
if items:
if not effect_items:
continue
# skip all disabled tracks
@@ -345,10 +346,11 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
continue
track_index = track.trackIndex()
_sub_track_items = phiero.flatten(track.subTrackItems())
_sub_track_items = phiero.flatten(effect_items)
_sub_track_items = list(_sub_track_items)
# continue only if any subtrack items are collected
if not list(_sub_track_items):
if not _sub_track_items:
continue
enabled_sti = []


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
HOUDINI_HOST_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -1,11 +1,10 @@
import os
import re
import clique
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.houdini.api import pipeline
@@ -20,7 +19,6 @@ class AssLoader(load.LoaderPlugin):
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
import hou
# Get the root node
@@ -32,7 +30,11 @@ class AssLoader(load.LoaderPlugin):
# Create a new geo node
procedural = obj.createNode("arnold::procedural", node_name=node_name)
procedural.setParms({"ar_filename": self.get_path(self.fname)})
procedural.setParms(
{
"ar_filename": self.format_path(context["representation"])
})
nodes = [procedural]
self[:] = nodes
@@ -46,62 +48,43 @@ class AssLoader(load.LoaderPlugin):
suffix="",
)
def get_path(self, path):
# Find all frames in the folder
ext = ".ass.gz" if path.endswith(".ass.gz") else ".ass"
folder = os.path.dirname(path)
frames = [f for f in os.listdir(folder) if f.endswith(ext)]
# Get the collection of frames to detect frame padding
patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(frames,
minimum_items=1,
patterns=patterns)
self.log.debug("Detected collections: {}".format(collections))
self.log.debug("Detected remainder: {}".format(remainder))
if not collections and remainder:
if len(remainder) != 1:
raise ValueError("Frames not correctly detected "
"in: {}".format(remainder))
# A single frame without frame range detected
filepath = remainder[0]
return os.path.normpath(filepath).replace("\\", "/")
# Frames detected with a valid "frame" number pattern
# Then we don't want to have any remainder files found
assert len(collections) == 1 and not remainder
collection = collections[0]
num_frames = len(collection.indexes)
if num_frames == 1:
# Return the input path without dynamic $F variable
result = path
else:
# More than a single frame detected - use $F{padding}
fname = "{}$F{}{}".format(collection.head,
collection.padding,
collection.tail)
result = os.path.join(folder, fname)
# Format file name, Houdini only wants forward slashes
return os.path.normpath(result).replace("\\", "/")
def update(self, container, representation):
# Update the file path
file_path = get_representation_path(representation)
file_path = file_path.replace("\\", "/")
procedural = container["node"]
procedural.setParms({"ar_filename": self.get_path(file_path)})
procedural.setParms({"ar_filename": self.format_path(representation)})
# Update attribute
procedural.setParms({"representation": str(representation["_id"])})
def remove(self, container):
node = container["node"]
node.destroy()
@staticmethod
def format_path(representation):
"""Format file path correctly for single ass.* or ass.* sequence.
Args:
representation (dict): representation to be loaded.
Returns:
str: Formatted path to be used by the input node.
"""
path = get_representation_path(representation)
if not os.path.exists(path):
raise RuntimeError("Path does not exist: {}".format(path))
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
if is_sequence:
dir_path, file_name = os.path.split(path)
path = os.path.join(
dir_path,
re.sub(r"(.*)\.(\d+)\.(ass.*)", "\\1.$F4.\\3", file_name)
)
return os.path.normpath(path).replace("\\", "/")
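The substitution in `format_path` swaps the frame number of an `.ass` sequence for Houdini's `$F4` token. A standalone check of that regex (file paths are invented):

```python
import os
import re


def to_houdini_sequence_path(path):
    """Rewrite the frame number in an ass sequence path as $F4."""
    dir_path, file_name = os.path.split(path)
    file_name = re.sub(r"(.*)\.(\d+)\.(ass.*)", "\\1.$F4.\\3", file_name)
    return os.path.normpath(
        os.path.join(dir_path, file_name)).replace("\\", "/")


print(to_houdini_sequence_path("/shots/sh010/cache.1001.ass"))
# /shots/sh010/cache.$F4.ass
print(to_houdini_sequence_path("/shots/sh010/cache.1001.ass.gz"))
# /shots/sh010/cache.$F4.ass.gz
```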
def switch(self, container, representation):
self.update(container, representation)


@@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
MAYA_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


@@ -1532,7 +1532,7 @@ def get_container_members(container):
if ref.rsplit(":", 1)[-1].startswith("_UNKNOWN_REF_NODE_"):
continue
reference_members = cmds.referenceQuery(ref, nodes=True)
reference_members = cmds.referenceQuery(ref, nodes=True, dagPath=True)
reference_members = cmds.ls(reference_members,
long=True,
objectsOnly=True)


@@ -536,6 +536,11 @@ class RenderProductsArnold(ARenderProducts):
products = []
aov_name = self._get_attr(aov, "name")
multipart = False
multilayer = bool(self._get_attr("defaultArnoldDriver.multipart"))
merge_AOVs = bool(self._get_attr("defaultArnoldDriver.mergeAOVs"))
if multilayer or merge_AOVs:
multipart = True
ai_drivers = cmds.listConnections("{}.outputs".format(aov),
source=True,
destination=False,
@@ -589,6 +594,7 @@
ext=ext,
aov=aov_name,
driver=ai_driver,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1016,7 +1022,11 @@ class RenderProductsRedshift(ARenderProducts):
# due to some AOVs still being written into separate files,
# like Cryptomatte.
# AOVs are merged in multi-channel file
multipart = bool(self._get_attr("redshiftOptions.exrForceMultilayer"))
multipart = False
force_layer = bool(self._get_attr("redshiftOptions.exrForceMultilayer")) # noqa
exMultipart = bool(self._get_attr("redshiftOptions.exrMultipart"))
if exMultipart or force_layer:
multipart = True
# Get Redshift Extension from image format
image_format = self._get_attr("redshiftOptions.imageFormat") # integer
@@ -1044,7 +1054,6 @@
# Any AOVs that still get processed, like Cryptomatte
# by themselves are not multipart files.
aov_multipart = not multipart
# Redshift skips rendering of masterlayer without AOV suffix
# when a Beauty AOV is rendered. It overrides the main layer.
@@ -1075,7 +1084,7 @@
productName=aov_light_group_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1089,7 +1098,7 @@
product = RenderProduct(productName=aov_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
multipart=multipart,
camera=camera)
products.append(product)
@@ -1100,7 +1109,7 @@
if light_groups_enabled:
return products
beauty_name = "Beauty_other" if has_beauty_aov else ""
beauty_name = "BeautyAux" if has_beauty_aov else ""
for camera in cameras:
products.insert(0,
RenderProduct(productName=beauty_name,

View file

@ -0,0 +1,132 @@
import os
from openpype.pipeline import (
legacy_io,
load,
get_representation_path
)
from openpype.settings import get_project_settings
class AlembicStandinLoader(load.LoaderPlugin):
"""Load Alembic as Arnold Standin"""
families = ["animation", "model", "pointcache"]
representations = ["abc"]
label = "Import Alembic as Arnold Standin"
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, options):
import maya.cmds as cmds
import mtoa.ui.arnoldmenu
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
version = context["version"]
version_data = version.get("data", {})
family = version["data"]["families"]
self.log.info("version_data: {}\n".format(version_data))
self.log.info("family: {}\n".format(family))
frameStart = version_data.get("frameStart", None)
asset = context["asset"]["name"]
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings["maya"]["load"]["colors"]
fps = legacy_io.Session["AVALON_FPS"]
c = colors.get(family[0])
if c is not None:
r = (float(c[0]) / 255)
g = (float(c[1]) / 255)
b = (float(c[2]) / 255)
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
r, g, b)
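The loader above normalizes the 0-255 color from project settings to the 0.0-1.0 floats Maya's `outlinerColor` attribute expects; a standalone sketch of just that conversion (hypothetical helper name):

```python
def to_unit_rgb(color):
    """Convert an 8-bit (0-255) RGB triplet to 0.0-1.0 floats.

    Extra channels (e.g. alpha) are dropped, mirroring how only the
    first three values are used for the outliner color.
    """
    return tuple(float(channel) / 255 for channel in color[:3])

print(to_unit_rgb([255, 0, 0]))  # (1.0, 0.0, 0.0)
```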
transform_name = label + "_ABC"
standinShape = cmds.ls(mtoa.ui.arnoldmenu.createStandIn())[0]
standin = cmds.listRelatives(standinShape, parent=True,
typ="transform")
standin = cmds.rename(standin, transform_name)
standinShape = cmds.listRelatives(standin, children=True)[0]
cmds.parent(standin, root)
# Set the standin filepath
cmds.setAttr(standinShape + ".dso", self.fname, type="string")
cmds.setAttr(standinShape + ".abcFPS", float(fps))
if frameStart is None:
cmds.setAttr(standinShape + ".useFrameExtension", 0)
elif "model" in family:
cmds.setAttr(standinShape + ".useFrameExtension", 0)
else:
cmds.setAttr(standinShape + ".useFrameExtension", 1)
nodes = [root, standin]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
import pymel.core as pm
path = get_representation_path(representation)
fps = legacy_io.Session["AVALON_FPS"]
# Update the standin
standins = list()
members = pm.sets(container['objectName'], query=True)
self.log.info("container:{}".format(container))
for member in members:
shape = member.getShape()
if (shape and shape.type() == "aiStandIn"):
standins.append(shape)
for standin in standins:
standin.dso.set(path)
standin.abcFPS.set(float(fps))
if "modelMain" in container['objectName']:
standin.useFrameExtension.set(0)
else:
standin.useFrameExtension.set(1)
container = pm.PyNode(container["objectName"])
container.representation.set(str(representation["_id"]))
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
import maya.cmds as cmds
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass

View file

@ -73,8 +73,8 @@ class YetiCacheLoader(load.LoaderPlugin):
c = colors.get(family)
if c is not None:
cmds.setAttr(group_name + ".useOutlinerColor", 1)
cmds.setAttr(group_name + ".outlinerColor",
cmds.setAttr(group_node + ".useOutlinerColor", 1)
cmds.setAttr(group_node + ".outlinerColor",
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)

View file

@ -1,7 +1,6 @@
import os
import platform
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
NUKE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -364,6 +364,9 @@ def containerise(node,
set_avalon_knob_data(node, data)
# set tab to first native
node.setTab(0)
return node

View file

@ -65,6 +65,9 @@ class AlembicCameraLoader(load.LoaderPlugin):
object_name, file),
inpanel=False
)
# hide property panel
camera_node.hideControlPanel()
camera_node.forceValidate()
camera_node["frame_rate"].setValue(float(fps))

View file

@ -1,6 +1,8 @@
import nuke
import qargparse
from pprint import pformat
from copy import deepcopy
from openpype.lib import Logger
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
@ -27,6 +29,7 @@ class LoadClip(plugin.NukeLoader):
Either it is image sequence or video file.
"""
log = Logger.get_logger(__name__)
families = [
"source",
@ -85,13 +88,18 @@ class LoadClip(plugin.NukeLoader):
)
def load(self, context, name, namespace, options):
repre = context["representation"]
representation = context["representation"]
# reset the container id so it is always unique for each instance
self.reset_container_id()
is_sequence = len(repre["files"]) > 1
is_sequence = len(representation["files"]) > 1
file = self.fname.replace("\\", "/")
if is_sequence:
representation = self._representation_with_hash_in_frame(
representation
)
filepath = get_representation_path(representation).replace("\\", "/")
self.log.debug("_ filepath: {}".format(filepath))
start_at_workfile = options.get(
"start_at_workfile", self.options_defaults["start_at_workfile"])
@ -101,11 +109,10 @@ class LoadClip(plugin.NukeLoader):
version = context['version']
version_data = version.get("data", {})
repre_id = repre["_id"]
repre_id = representation["_id"]
repre_cont = repre["context"]
self.log.info("version_data: {}\n".format(version_data))
self.log.debug("_ version_data: {}\n".format(
pformat(version_data)))
self.log.debug(
"Representation id `{}` ".format(repre_id))
@ -121,36 +128,33 @@ class LoadClip(plugin.NukeLoader):
duration = last - first
first = 1
last = first + duration
elif "#" not in file:
frame = repre_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
file = file.replace(frame, "#" * padding)
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
if not file:
if not filepath:
self.log.warning(
"Representation id `{}` is failing to load".format(repre_id))
return
read_name = self._get_node_name(repre)
read_name = self._get_node_name(representation)
# Create the Loader with the filename path set
read_node = nuke.createNode(
"Read",
"name {}".format(read_name))
# hide property panel
read_node.hideControlPanel()
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
read_node["file"].setValue(file)
read_node["file"].setValue(filepath)
used_colorspace = self._set_colorspace(
read_node, version_data, repre["data"])
read_node, version_data, representation["data"])
self._set_range_to_node(read_node, first, last, start_at_workfile)
@ -172,7 +176,7 @@ class LoadClip(plugin.NukeLoader):
data_imprint[k] = version
elif k == 'colorspace':
colorspace = repre["data"].get(k)
colorspace = representation["data"].get(k)
colorspace = colorspace or version_data.get(k)
data_imprint["db_colorspace"] = colorspace
if used_colorspace:
@ -206,6 +210,20 @@ class LoadClip(plugin.NukeLoader):
def switch(self, container, representation):
self.update(container, representation)
def _representation_with_hash_in_frame(self, representation):
"""Convert frame key value to padded hash
Args:
representation (dict): representation data
Returns:
dict: altered representation data
"""
representation = deepcopy(representation)
frame = representation["context"]["frame"]
representation["context"]["frame"] = "#" * len(str(frame))
return representation
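The helper deep-copies the representation so the cached original stays untouched before the frame token is swapped for hash padding; the same logic as a standalone sketch (hypothetical function name):

```python
from copy import deepcopy

def with_hash_in_frame(representation):
    """Replace the frame token with '#' padding of the same length."""
    representation = deepcopy(representation)
    frame = representation["context"]["frame"]
    representation["context"]["frame"] = "#" * len(str(frame))
    return representation

repre = {"context": {"frame": "0001"}}
print(with_hash_in_frame(repre)["context"]["frame"])  # ####
```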
def update(self, container, representation):
"""Update the Loader's path
@ -218,7 +236,13 @@ class LoadClip(plugin.NukeLoader):
is_sequence = len(representation["files"]) > 1
read_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
if is_sequence:
representation = self._representation_with_hash_in_frame(
representation
)
filepath = get_representation_path(representation).replace("\\", "/")
self.log.debug("_ filepath: {}".format(filepath))
start_at_workfile = "start at" in read_node['frame_mode'].value()
@ -233,8 +257,6 @@ class LoadClip(plugin.NukeLoader):
version_data = version_doc.get("data", {})
repre_id = representation["_id"]
repre_cont = representation["context"]
# colorspace profile
colorspace = representation["data"].get("colorspace")
colorspace = colorspace or version_data.get("colorspace")
@ -251,14 +273,8 @@ class LoadClip(plugin.NukeLoader):
duration = last - first
first = 1
last = first + duration
elif "#" not in file:
frame = repre_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
file = file.replace(frame, "#" * padding)
if not file:
if not filepath:
self.log.warning(
"Representation id `{}` is failing to load".format(repre_id))
return
@ -266,14 +282,14 @@ class LoadClip(plugin.NukeLoader):
read_name = self._get_node_name(representation)
read_node["name"].setValue(read_name)
read_node["file"].setValue(file)
read_node["file"].setValue(filepath)
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
used_colorspace = self._set_colorspace(
read_node, version_data, representation["data"],
path=file)
path=filepath)
self._set_range_to_node(read_node, first, last, start_at_workfile)
@ -345,8 +361,10 @@ class LoadClip(plugin.NukeLoader):
time_warp_nodes = version_data.get('timewarps', [])
last_node = None
source_id = self.get_container_id(parent_node)
self.log.info("__ source_id: {}".format(source_id))
self.log.info("__ members: {}".format(self.get_members(parent_node)))
self.log.debug("__ source_id: {}".format(source_id))
self.log.debug("__ members: {}".format(
self.get_members(parent_node)))
dependent_nodes = self.clear_members(parent_node)
with maintained_selection():

View file

@ -89,6 +89,9 @@ class LoadEffects(load.LoaderPlugin):
"Group",
"name {}_1".format(object_name))
# hide property panel
GN.hideControlPanel()
# adding content to the group node
with GN:
pre_node = nuke.createNode("Input")

View file

@ -90,6 +90,9 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
"Group",
"name {}_1".format(object_name))
# hide property panel
GN.hideControlPanel()
# adding content to the group node
with GN:
pre_node = nuke.createNode("Input")

View file

@ -62,7 +62,9 @@ class LoadImage(load.LoaderPlugin):
def load(self, context, name, namespace, options):
self.log.info("__ options: `{}`".format(options))
frame_number = options.get("frame_number", 1)
frame_number = options.get(
"frame_number", int(nuke.root()["first_frame"].getValue())
)
version = context['version']
version_data = version.get("data", {})
@ -112,6 +114,10 @@ class LoadImage(load.LoaderPlugin):
r = nuke.createNode(
"Read",
"name {}".format(read_name))
# hide property panel
r.hideControlPanel()
r["file"].setValue(file)
# Set colorspace defined in version data

View file

@ -63,6 +63,10 @@ class AlembicModelLoader(load.LoaderPlugin):
object_name, file),
inpanel=False
)
# hide property panel
model_node.hideControlPanel()
model_node.forceValidate()
# Ensure all items are imported and selected.

View file

@ -71,6 +71,9 @@ class LinkAsGroup(load.LoaderPlugin):
"Precomp",
"file {}".format(file))
# hide property panel
P.hideControlPanel()
# Set colorspace defined in version data
colorspace = context["version"]["data"].get("colorspace", None)
self.log.info("colorspace: {}\n".format(colorspace))

View file

@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
PHOTOSHOP_HOST_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -29,7 +29,8 @@ class CreateImage(create.LegacyCreator):
if len(selection) > 1:
# Ask user whether to create one image or image per selected
# item.
msg_box = QtWidgets.QMessageBox()
active_window = QtWidgets.QApplication.activeWindow()
msg_box = QtWidgets.QMessageBox(parent=active_window)
msg_box.setIcon(QtWidgets.QMessageBox.Warning)
msg_box.setText(
"Multiple layers selected."
@ -102,7 +103,7 @@ class CreateImage(create.LegacyCreator):
if group.long_name:
for directory in group.long_name[::-1]:
name = directory.replace(stub.PUBLISH_ICON, '').\
replace(stub.LOADED_ICON, '')
replace(stub.LOADED_ICON, '')
long_names.append(name)
self.data.update({"subset": subset_name})

View file

@ -1,7 +1,6 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
from .utils import RESOLVE_ROOT_DIR

View file

@ -4,8 +4,7 @@ import click
from openpype.lib import get_openpype_execute_args
from openpype.lib.execute import run_detached_process
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import ITrayAction, IHostAddon
from openpype.modules import OpenPypeModule, ITrayAction, IHostAddon
STANDALONEPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -4,8 +4,7 @@ import click
from openpype.lib import get_openpype_execute_args
from openpype.lib.execute import run_detached_process
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import ITrayAction, IHostAddon
from openpype.modules import OpenPypeModule, ITrayAction, IHostAddon
TRAYPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -1,49 +1,33 @@
from openpype.lib.attribute_definitions import FileDef
from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS
from openpype.pipeline.create import (
Creator,
HiddenCreator,
CreatedInstance
CreatedInstance,
cache_and_get_instances,
PRE_CREATE_THUMBNAIL_KEY,
)
from .pipeline import (
list_instances,
update_instances,
remove_instances,
HostContext,
)
from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
def _cache_and_get_instances(creator):
"""Cache instances in shared data.
Args:
creator (Creator): Plugin which would like to get instances from host.
Returns:
List[Dict[str, Any]]: Cached instances list from host implementation.
"""
shared_key = "openpype.traypublisher.instances"
if shared_key not in creator.collection_shared_data:
creator.collection_shared_data[shared_key] = list_instances()
return creator.collection_shared_data[shared_key]
REVIEW_EXTENSIONS = set(IMAGE_EXTENSIONS) | set(VIDEO_EXTENSIONS)
SHARED_DATA_KEY = "openpype.traypublisher.instances"
class HiddenTrayPublishCreator(HiddenCreator):
host_name = "traypublisher"
def collect_instances(self):
for instance_data in _cache_and_get_instances(self):
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
instances_by_identifier = cache_and_get_instances(
self, SHARED_DATA_KEY, list_instances
)
for instance_data in instances_by_identifier[self.identifier]:
instance = CreatedInstance.from_existing(instance_data, self)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
update_instances(update_list)
@ -74,13 +58,12 @@ class TrayPublishCreator(Creator):
host_name = "traypublisher"
def collect_instances(self):
for instance_data in _cache_and_get_instances(self):
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
instances_by_identifier = cache_and_get_instances(
self, SHARED_DATA_KEY, list_instances
)
for instance_data in instances_by_identifier[self.identifier]:
instance = CreatedInstance.from_existing(instance_data, self)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
update_instances(update_list)
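Both creators above replace a local caching helper with the shared `cache_and_get_instances`: one host query per publish reset, results grouped by creator identifier. The pattern can be sketched with hypothetical stand-ins (not the OpenPype implementation):

```python
import collections

def cache_and_get_instances(creator, shared_key, list_instances):
    """Query the host once, cache grouped results in shared data."""
    if shared_key not in creator.collection_shared_data:
        grouped = collections.defaultdict(list)
        for instance_data in list_instances():
            grouped[instance_data.get("creator_identifier")].append(
                instance_data)
        creator.collection_shared_data[shared_key] = grouped
    return creator.collection_shared_data[shared_key]

class FakeCreator(object):
    """Minimal creator stand-in exposing 'collection_shared_data'."""
    def __init__(self):
        self.collection_shared_data = {}

creator = FakeCreator()
instances = [
    {"creator_identifier": "a", "subset": "s1"},
    {"creator_identifier": "b", "subset": "s2"},
]
grouped = cache_and_get_instances(creator, "key", lambda: instances)
print(sorted(grouped))  # ['a', 'b']
```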
@ -110,11 +93,14 @@ class TrayPublishCreator(Creator):
class SettingsCreator(TrayPublishCreator):
create_allow_context_change = True
create_allow_thumbnail = True
extensions = []
def create(self, subset_name, data, pre_create_data):
# Pass precreate data to creator attributes
thumbnail_path = pre_create_data.pop(PRE_CREATE_THUMBNAIL_KEY, None)
data["creator_attributes"] = pre_create_data
data["settings_creator"] = True
# Create new instance
@ -122,6 +108,9 @@ class SettingsCreator(TrayPublishCreator):
self._store_new_instance(new_instance)
if thumbnail_path:
self.set_instance_thumbnail_path(new_instance.id, thumbnail_path)
def get_instance_attr_defs(self):
return [
FileDef(

View file

@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
"""Creator of online files.
Online files retain their original name and use it as the subset name. To
avoid conflicts, this creator checks whether a subset with this name already
exists under the selected asset.
"""
from pathlib import Path
from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class OnlineCreator(TrayPublishCreator):
"""Creates instance from file and retains its original name."""
identifier = "io.openpype.creators.traypublisher.online"
label = "Online"
family = "online"
description = "Publish file retaining its original file name"
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg"]
def get_detail_description(self):
return """# Create file retaining its original file name.
This will publish files using a template that helps retain the original
file name; that file name is then used as the subset name.
By default it guards against publishing the same file multiple times."""
def get_icon(self):
return "fa.file"
def create(self, subset_name, instance_data, pre_create_data):
repr_file = pre_create_data.get("representation_file")
if not repr_file:
raise CreatorError("No files specified")
files = repr_file.get("filenames")
if not files:
# this should never happen
raise CreatorError("Missing files from representation")
origin_basename = Path(files[0]).stem
asset = get_asset_by_name(
self.project_name, instance_data["asset"], fields=["_id"])
if get_subset_by_name(
self.project_name, origin_basename, asset["_id"],
fields=["_id"]):
raise CreatorError(f"subset with {origin_basename} already "
"exists in selected asset")
instance_data["originalBasename"] = origin_basename
subset_name = origin_basename
instance_data["creator_attributes"] = {
"path": (Path(repr_file["directory"]) / files[0]).as_posix()
}
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
self._store_new_instance(new_instance)
def get_pre_create_attr_defs(self):
return [
FileDef(
"representation_file",
folders=False,
extensions=self.extensions,
allow_sequences=False,
single_item=True,
label="Representation",
)
]
def get_subset_name(
self,
variant,
task_name,
asset_doc,
project_name,
host_name=None,
instance=None
):
if instance is None:
return "{originalBasename}"
return instance.data["subset"]

View file

@ -40,7 +40,8 @@ class CollectMovieBatch(
if creator_attributes["add_review_family"]:
repre["tags"].append("review")
instance.data["families"].append("review")
instance.data["thumbnailSource"] = file_url
if not instance.data.get("thumbnailSource"):
instance.data["thumbnailSource"] = file_url
instance.data["source"] = file_url

View file

@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
import pyblish.api
from pathlib import Path
class CollectOnlineFile(pyblish.api.InstancePlugin):
"""Collect online file and retain its file name."""
label = "Collect Online File"
order = pyblish.api.CollectorOrder
families = ["online"]
hosts = ["traypublisher"]
def process(self, instance):
file = Path(instance.data["creator_attributes"]["path"])
instance.data["representations"].append(
{
"name": file.suffix.lstrip("."),
"ext": file.suffix.lstrip("."),
"files": file.name,
"stagingDir": file.parent.as_posix()
}
)
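The representation fields above come straight from `pathlib`; for illustration, with a hypothetical path:

```python
from pathlib import Path

# Hypothetical file path; only the pathlib calls mirror the collector above.
file = Path("/projects/shot010/plate_v001.mov")
repre = {
    "name": file.suffix.lstrip("."),       # extension without the dot
    "ext": file.suffix.lstrip("."),
    "files": file.name,                    # basename with extension
    "stagingDir": file.parent.as_posix(),  # forward-slash directory
}
print(repre["ext"], repre["files"])  # mov plate_v001.mov
```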

View file

@ -188,7 +188,8 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
if "review" not in instance.data["families"]:
instance.data["families"].append("review")
instance.data["thumbnailSource"] = first_filepath
if not instance.data.get("thumbnailSource"):
instance.data["thumbnailSource"] = first_filepath
review_representation["tags"].append("review")
self.log.debug("Representation {} was marked for review. {}".format(

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishValidationError,
OptionalPyblishPluginMixin,
)
from openpype.client import get_subset_by_name
class ValidateOnlineFile(OptionalPyblishPluginMixin,
pyblish.api.InstancePlugin):
"""Validate that subset doesn't exist yet."""
label = "Validate Existing Online Files"
hosts = ["traypublisher"]
families = ["online"]
order = ValidateContentsOrder
optional = True
def process(self, instance):
project_name = instance.context.data["projectName"]
asset_id = instance.data["assetEntity"]["_id"]
subset = get_subset_by_name(
project_name, instance.data["subset"], asset_id)
if subset:
raise PublishValidationError(
"Subset to be published already exists.",
title=self.label
)

View file

@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
TVPAINT_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -1,4 +1,4 @@
import qargparse
from openpype.lib.attribute_definitions import BoolDef
from openpype.hosts.tvpaint.api import plugin
from openpype.hosts.tvpaint.api.lib import execute_george_through_file
@ -27,26 +27,28 @@ class ImportImage(plugin.Loader):
"preload": True
}
options = [
qargparse.Boolean(
"stretch",
label="Stretch to project size",
default=True,
help="Stretch loaded image/s to project resolution?"
),
qargparse.Boolean(
"timestretch",
label="Stretch to timeline length",
default=True,
help="Clip loaded image/s to timeline length?"
),
qargparse.Boolean(
"preload",
label="Preload loaded image/s",
default=True,
help="Preload image/s?"
)
]
@classmethod
def get_options(cls, contexts):
return [
BoolDef(
"stretch",
label="Stretch to project size",
default=cls.defaults["stretch"],
tooltip="Stretch loaded image/s to project resolution?"
),
BoolDef(
"timestretch",
label="Stretch to timeline length",
default=cls.defaults["timestretch"],
tooltip="Clip loaded image/s to timeline length?"
),
BoolDef(
"preload",
label="Preload loaded image/s",
default=cls.defaults["preload"],
tooltip="Preload image/s?"
)
]
def load(self, context, name, namespace, options):
stretch = options.get("stretch", self.defaults["stretch"])

View file

@ -1,7 +1,6 @@
import collections
import qargparse
from openpype.lib.attribute_definitions import BoolDef
from openpype.pipeline import (
get_representation_context,
register_host,
@ -42,26 +41,28 @@ class LoadImage(plugin.Loader):
"preload": True
}
options = [
qargparse.Boolean(
"stretch",
label="Stretch to project size",
default=True,
help="Stretch loaded image/s to project resolution?"
),
qargparse.Boolean(
"timestretch",
label="Stretch to timeline length",
default=True,
help="Clip loaded image/s to timeline length?"
),
qargparse.Boolean(
"preload",
label="Preload loaded image/s",
default=True,
help="Preload image/s?"
)
]
@classmethod
def get_options(cls, contexts):
return [
BoolDef(
"stretch",
label="Stretch to project size",
default=cls.defaults["stretch"],
tooltip="Stretch loaded image/s to project resolution?"
),
BoolDef(
"timestretch",
label="Stretch to timeline length",
default=cls.defaults["timestretch"],
tooltip="Clip loaded image/s to timeline length?"
),
BoolDef(
"preload",
label="Preload loaded image/s",
default=cls.defaults["preload"],
tooltip="Preload image/s?"
)
]
def load(self, context, name, namespace, options):
stretch = options.get("stretch", self.defaults["stretch"])

View file

@ -6,7 +6,7 @@ class CollectOutputFrameRange(pyblish.api.ContextPlugin):
When instances are collected context does not contain `frameStart` and
`frameEnd` keys yet. They are collected in global plugin
`CollectAvalonEntities`.
`CollectContextEntities`.
"""
label = "Collect output frame range"
order = pyblish.api.CollectorOrder

View file

@ -25,6 +25,7 @@ class ExtractSequence(pyblish.api.Extractor):
label = "Extract Sequence"
hosts = ["tvpaint"]
families = ["review", "renderPass", "renderLayer", "renderScene"]
families_to_review = ["review"]
# Modifiable with settings
review_bg = [255, 255, 255, 255]
@ -133,9 +134,9 @@ class ExtractSequence(pyblish.api.Extractor):
output_frame_start
)
# Fill tags and new families
# Fill tags and new families from project settings
tags = []
if family_lowered in ("review", "renderlayer", "renderscene"):
if family_lowered in self.families_to_review:
tags.append("review")
# Sequence of one frame

View file

@ -39,7 +39,7 @@ class ValidateMarks(pyblish.api.ContextPlugin):
def get_expected_data(context):
scene_mark_in = context.data["sceneMarkIn"]
# Data collected in `CollectAvalonEntities`
# Data collected in `CollectContextEntities`
frame_end = context.data["frameEnd"]
frame_start = context.data["frameStart"]
handle_start = context.data["handleStart"]

View file

@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -2,8 +2,7 @@ import os
import click
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
from openpype.modules import OpenPypeModule, IHostAddon
WEBPUBLISHER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -83,8 +83,10 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
self.log.info("task_data:: {}".format(task_data))
is_sequence = len(task_data["files"]) > 1
first_file = task_data["files"][0]
_, extension = os.path.splitext(task_data["files"][0])
_, extension = os.path.splitext(first_file)
extension = extension.lower()
family, families, tags = self._get_family(
self.task_type_to_family,
task_type,
@ -149,10 +151,13 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
self.log.warning("Unable to count frames "
"duration {}".format(no_of_frames))
# raise ValueError("STOP")
instance.data["handleStart"] = asset_doc["data"]["handleStart"]
instance.data["handleEnd"] = asset_doc["data"]["handleEnd"]
if "review" in tags:
first_file_path = os.path.join(task_dir, first_file)
instance.data["thumbnailSource"] = first_file_path
instances.append(instance)
self.log.info("instance.data:: {}".format(instance.data))
@ -176,6 +181,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
def _get_single_repre(self, task_dir, files, tags):
_, ext = os.path.splitext(files[0])
ext = ext.lower()
repre_data = {
"name": ext[1:],
"ext": ext[1:],
@ -195,6 +201,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
frame_start = list(collections[0].indexes)[0]
frame_end = list(collections[0].indexes)[-1]
ext = collections[0].tail
ext = ext.lower()
repre_data = {
"frameStart": frame_start,
"frameEnd": frame_end,
@ -240,8 +247,17 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
for config in families_config:
if is_sequence != config["is_sequence"]:
continue
if (extension in config["extensions"] or
'' in config["extensions"]): # all extensions setting
extensions = config.get("extensions") or []
lower_extensions = set()
for ext in extensions:
if ext:
ext = ext.lower()
if ext.startswith("."):
ext = ext[1:]
lower_extensions.add(ext)
# all extensions setting
if not lower_extensions or extension in lower_extensions:
found_family = config["result_family"]
break
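The extension filtering introduced above (lowercase, strip a leading dot, empty list meaning "any extension") can be sketched on its own with hypothetical helper names:

```python
def normalize_extensions(extensions):
    """Lowercase extensions and strip a leading dot; drop empty entries."""
    lower_extensions = set()
    for ext in extensions or []:
        if not ext:
            continue
        ext = ext.lower()
        if ext.startswith("."):
            ext = ext[1:]
        lower_extensions.add(ext)
    return lower_extensions

def family_matches(extension, config_extensions):
    """An empty (or all-empty) extension list matches any extension."""
    lower_extensions = normalize_extensions(config_extensions)
    return not lower_extensions or extension in lower_extensions

print(family_matches("exr", [".EXR", "dpx"]))  # True
print(family_matches("mov", []))               # True
```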

View file

@ -1,137 +0,0 @@
import os
import shutil
import pyblish.api
from openpype.lib import (
get_ffmpeg_tool_path,
run_subprocess,
get_transcode_temp_directory,
convert_input_paths_for_ffmpeg,
should_convert_for_ffmpeg
)
class ExtractThumbnail(pyblish.api.InstancePlugin):
"""Create jpg thumbnail from input using ffmpeg."""
label = "Extract Thumbnail"
order = pyblish.api.ExtractorOrder
families = [
"render",
"image"
]
hosts = ["webpublisher"]
targets = ["filespublish"]
def process(self, instance):
self.log.info("subset {}".format(instance.data['subset']))
filtered_repres = self._get_filtered_repres(instance)
for repre in filtered_repres:
repre_files = repre["files"]
if not isinstance(repre_files, (list, tuple)):
input_file = repre_files
else:
file_index = int(float(len(repre_files)) * 0.5)
input_file = repre_files[file_index]
stagingdir = os.path.normpath(repre["stagingDir"])
full_input_path = os.path.join(stagingdir, input_file)
self.log.info("Input filepath: {}".format(full_input_path))
do_convert = should_convert_for_ffmpeg(full_input_path)
# If result is None the requirement of conversion can't be
# determined
if do_convert is None:
self.log.info((
"Can't determine if representation requires conversion."
" Skipped."
))
continue
# Do conversion if needed
# - change staging dir of source representation
# - must be set back after output definitions processing
convert_dir = None
if do_convert:
convert_dir = get_transcode_temp_directory()
filename = os.path.basename(full_input_path)
convert_input_paths_for_ffmpeg(
[full_input_path],
convert_dir,
self.log
)
full_input_path = os.path.join(convert_dir, filename)
filename = os.path.splitext(input_file)[0]
while filename.endswith("."):
filename = filename[:-1]
thumbnail_filename = filename + "_thumbnail.jpg"
full_output_path = os.path.join(stagingdir, thumbnail_filename)
self.log.info("output {}".format(full_output_path))
ffmpeg_args = [
get_ffmpeg_tool_path("ffmpeg"),
"-y",
"-i", full_input_path,
"-vframes", "1",
full_output_path
]
# run subprocess
self.log.debug("{}".format(" ".join(ffmpeg_args)))
try: # temporary until oiiotool is supported cross platform
run_subprocess(
ffmpeg_args, logger=self.log
)
except RuntimeError as exp:
if "Compression" in str(exp):
self.log.debug(
"Unsupported compression on input files. Skipping!!!"
)
return
self.log.warning("Conversion crashed", exc_info=True)
raise
new_repre = {
"name": "thumbnail",
"ext": "jpg",
"files": thumbnail_filename,
"stagingDir": stagingdir,
"thumbnail": True,
"tags": ["thumbnail"]
}
# adding representation
self.log.debug("Adding: {}".format(new_repre))
instance.data["representations"].append(new_repre)
# Cleanup temp folder
if convert_dir is not None and os.path.exists(convert_dir):
shutil.rmtree(convert_dir)
def _get_filtered_repres(self, instance):
filtered_repres = []
repres = instance.data.get("representations") or []
for repre in repres:
self.log.debug(repre)
tags = repre.get("tags") or []
# Skip instance if already has thumbnail representation
if "thumbnail" in tags:
return []
if "review" not in tags:
continue
if not repre.get("files"):
self.log.info((
"Representation \"{}\" don't have files. Skipping"
).format(repre["name"]))
continue
filtered_repres.append(repre)
return filtered_repres

View file

@ -13,7 +13,7 @@ class ValidateWorkfileData(pyblish.api.ContextPlugin):
targets = ["tvpaint_worker"]
def process(self, context):
# Data collected in `CollectAvalonEntities`
# Data collected in `CollectContextEntities`
frame_start = context.data["frameStart"]
frame_end = context.data["frameEnd"]
handle_start = context.data["handleStart"]

View file

@ -91,7 +91,7 @@ class AbstractAttrDefMeta(ABCMeta):
@six.add_metaclass(AbstractAttrDefMeta)
class AbtractAttrDef:
class AbtractAttrDef(object):
"""Abstraction of attribute definiton.
Each attribute definition must have implemented validation and
@ -105,11 +105,14 @@ class AbtractAttrDef:
How to force to set `key` attribute?
Args:
key(str): Under which key will be attribute value stored.
label(str): Attribute label.
tooltip(str): Attribute tooltip.
is_label_horizontal(bool): UI specific argument. Specify if label is
key (str): Under which key will be attribute value stored.
default (Any): Default value of an attribute.
label (str): Attribute label.
tooltip (str): Attribute tooltip.
is_label_horizontal (bool): UI specific argument. Specify if label is
next to value input or ahead.
hidden (bool): Item will be hidden (for UI purposes).
disabled (bool): Item will be visible but disabled (for UI purposes).
"""
type_attributes = []
@ -117,16 +120,29 @@ class AbtractAttrDef:
is_value_def = True
def __init__(
self, key, default, label=None, tooltip=None, is_label_horizontal=None
self,
key,
default,
label=None,
tooltip=None,
is_label_horizontal=None,
hidden=False,
disabled=False
):
if is_label_horizontal is None:
is_label_horizontal = True
if hidden is None:
hidden = False
self.key = key
self.label = label
self.tooltip = tooltip
self.default = default
self.is_label_horizontal = is_label_horizontal
self._id = uuid.uuid4()
self.hidden = hidden
self.disabled = disabled
self._id = uuid.uuid4().hex
self.__init__class__ = AbtractAttrDef
@ -173,7 +189,9 @@ class AbtractAttrDef:
"label": self.label,
"tooltip": self.tooltip,
"default": self.default,
"is_label_horizontal": self.is_label_horizontal
"is_label_horizontal": self.is_label_horizontal,
"hidden": self.hidden,
"disabled": self.disabled
}
for attr in self.type_attributes:
data[attr] = getattr(self, attr)
@ -235,6 +253,26 @@ class UnknownDef(AbtractAttrDef):
return value
class HiddenDef(AbtractAttrDef):
"""Hidden value of Any type.
This attribute can be used for UI purposes to pass values related
to other attributes (e.g. in multi-page UIs).
Keep in mind the value must be JSON serializable.
"""
type = "hidden"
def __init__(self, key, default=None, **kwargs):
kwargs["default"] = default
kwargs["hidden"] = True
super(HiddenDef, self).__init__(key, **kwargs)
def convert_value(self, value):
return value
class NumberDef(AbtractAttrDef):
"""Number definition.
@ -541,6 +579,13 @@ class FileDefItem(object):
return ext
return None
@property
def lower_ext(self):
ext = self.ext
if ext is not None:
return ext.lower()
return ext
@property
def is_dir(self):
if self.is_empty:


@ -42,7 +42,7 @@ XML_CHAR_REF_REGEX_HEX = re.compile(r"&#x?[0-9a-fA-F]+;")
# Regex to parse array attributes
ARRAY_TYPE_REGEX = re.compile(r"^(int|float|string)\[\d+\]$")
IMAGE_EXTENSIONS = [
IMAGE_EXTENSIONS = {
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
@ -54,15 +54,15 @@ IMAGE_EXTENSIONS = [
".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
".xpm", ".xwd"
]
}
VIDEO_EXTENSIONS = [
VIDEO_EXTENSIONS = {
".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
]
}
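Switching the extension containers from lists to sets makes membership checks O(1) and removes duplicates (e.g. `.tiff` appears twice in the image list above). A minimal sketch of the lookup pattern, with trimmed-down stand-in sets:

```python
import os

# Trimmed-down stand-ins for the IMAGE_EXTENSIONS / VIDEO_EXTENSIONS sets.
IMAGE_EXTENSIONS = {".exr", ".jpg", ".png", ".tiff"}
VIDEO_EXTENSIONS = {".mov", ".mp4", ".mkv"}


def classify(filename):
    """Classify a file by its lower-cased extension."""
    ext = os.path.splitext(filename)[1].lower()
    if ext in IMAGE_EXTENSIONS:
        return "image"
    if ext in VIDEO_EXTENSIONS:
        return "video"
    return "other"


print(classify("render.EXR"))  # image
```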
def get_transcode_temp_directory():


@ -1,4 +1,14 @@
# -*- coding: utf-8 -*-
from .interfaces import (
ILaunchHookPaths,
IPluginPaths,
ITrayModule,
ITrayAction,
ITrayService,
ISettingsChangeListener,
IHostAddon,
)
from .base import (
OpenPypeModule,
OpenPypeAddOn,
@ -17,6 +27,14 @@ from .base import (
__all__ = (
"ILaunchHookPaths",
"IPluginPaths",
"ITrayModule",
"ITrayAction",
"ITrayService",
"ISettingsChangeListener",
"IHostAddon",
"OpenPypeModule",
"OpenPypeAddOn",


@ -1,7 +1,6 @@
import os
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
from openpype.modules import OpenPypeModule, ITrayModule
class AvalonModule(OpenPypeModule, ITrayModule):


@ -9,6 +9,7 @@ import logging
import platform
import threading
import collections
import traceback
from uuid import uuid4
from abc import ABCMeta, abstractmethod
import six
@ -139,6 +140,15 @@ class _InterfacesClass(_ModuleClass):
"cannot import name '{}' from 'openpype_interfaces'"
).format(attr_name))
if _LoadCache.interfaces_loaded and attr_name != "log":
stack = list(traceback.extract_stack())
stack.pop(-1)
self.log.warning((
"Using deprecated import of \"{}\" from 'openpype_interfaces'."
" Please switch to use import"
" from 'openpype.modules.interfaces'"
" (will be removed after 3.16.x).{}"
).format(attr_name, "".join(traceback.format_list(stack))))
return self.__attributes__[attr_name]
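The deprecation warning above embeds the caller's stack so the offending import site shows up in logs. A minimal stdlib-only sketch of the same idea (names here are illustrative, not the OpenPype API):

```python
import traceback


def warn_deprecated(name, log=print):
    """Log a deprecation message with the caller's call site appended."""
    stack = traceback.extract_stack()
    stack.pop(-1)  # drop this frame so the trace ends at the caller
    log(
        "Deprecated access of '{}'. Call site:\n{}".format(
            name, "".join(traceback.format_list(stack))
        )
    )


messages = []
warn_deprecated("openpype_interfaces.ITrayModule", log=messages.append)
print(messages[0].splitlines()[0])
```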


@ -2,16 +2,17 @@ import os
import threading
import time
from openpype.modules import (
OpenPypeModule,
ITrayModule,
IPluginPaths
)
from .clockify_api import ClockifyAPI
from .constants import (
CLOCKIFY_FTRACK_USER_PATH,
CLOCKIFY_FTRACK_SERVER_PATH
)
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
ITrayModule,
IPluginPaths
)
class ClockifyModule(


@ -4,8 +4,7 @@ import six
import sys
from openpype.lib import requests_get, Logger
from openpype.modules import OpenPypeModule
from openpype_interfaces import IPluginPaths
from openpype.modules import OpenPypeModule, IPluginPaths
class DeadlineWebserviceError(Exception):


@ -457,9 +457,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
cam = [c for c in cameras if c in col.head]
if cam:
subset_name = '{}_{}_{}'.format(group_name, cam, aov)
if aov:
subset_name = '{}_{}_{}'.format(group_name, cam, aov)
else:
subset_name = '{}_{}'.format(group_name, cam)
else:
subset_name = '{}_{}'.format(group_name, aov)
if aov:
subset_name = '{}_{}'.format(group_name, aov)
else:
subset_name = '{}'.format(group_name)
if isinstance(col, (list, tuple)):
staging = os.path.dirname(col[0])
@ -488,12 +494,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
else:
render_file_name = os.path.basename(col)
aov_patterns = self.aov_filter
preview = match_aov_pattern(app, aov_patterns, render_file_name)
preview = match_aov_pattern(app, aov_patterns, render_file_name)
# toggle preview on if multipart is on
if instance_data.get("multipartExr"):
preview = True
self.log.debug("preview:{}".format(preview))
new_instance = deepcopy(instance_data)
new_instance["subset"] = subset_name
new_instance["subsetGroup"] = group_name
@ -536,7 +543,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
if new_instance.get("extendFrames", False):
self._copy_extend_frames(new_instance, rep)
instances.append(new_instance)
self.log.debug("instances:{}".format(instances))
return instances
def _get_representations(self, instance, exp_files):


@ -7,7 +7,143 @@ import json
import platform
import uuid
import re
from Deadline.Scripting import RepositoryUtils, FileUtils, DirectoryUtils
from Deadline.Scripting import (
RepositoryUtils,
FileUtils,
DirectoryUtils,
ProcessUtils,
)
VERSION_REGEX = re.compile(
r"(?P<major>0|[1-9]\d*)"
r"\.(?P<minor>0|[1-9]\d*)"
r"\.(?P<patch>0|[1-9]\d*)"
r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
)
class OpenPypeVersion:
"""Fake semver version class for OpenPype version purposes.
The version is marked as invalid ('is_valid' is False) when the major,
minor or patch part could not be parsed.
"""
def __init__(self, major, minor, patch, prerelease, origin=None):
self.major = major
self.minor = minor
self.patch = patch
self.prerelease = prerelease
is_valid = True
if not major or not minor or not patch:
is_valid = False
self.is_valid = is_valid
if origin is None:
base = "{}.{}.{}".format(str(major), str(minor), str(patch))
if not prerelease:
origin = base
else:
origin = "{}-{}".format(base, str(prerelease))
self.origin = origin
@classmethod
def from_string(cls, version):
"""Create an object of version from string.
Args:
version (str): Version as a string.
Returns:
Union[OpenPypeVersion, None]: Version object if input is nonempty
string otherwise None.
"""
if not version:
return None
valid_parts = VERSION_REGEX.findall(version)
if len(valid_parts) != 1:
# Return invalid version with filled 'origin' attribute
return cls(None, None, None, None, origin=str(version))
# Unpack found version
major, minor, patch, pre, post = valid_parts[0]
prerelease = pre
# Post release part is not important here and is treated as part of
# the prerelease
# - comparison is only used to find a suitable build, and builds
# should never contain a prerelease part, so this loose parsing is
# acceptable for this use case.
if post:
prerelease = "{}+{}".format(pre, post)
return cls(
int(major), int(minor), int(patch), prerelease, origin=version
)
def has_compatible_release(self, other):
"""Check whether this version is release compatible with other version.
Both major and minor versions must be exactly the same; in that case
a build is considered release compatible regardless of patch version.
Args:
other (OpenPypeVersion): Other version.
Returns:
bool: Version is release compatible with other version.
"""
if self.is_valid and other.is_valid:
return self.major == other.major and self.minor == other.minor
return False
def __bool__(self):
return self.is_valid
def __repr__(self):
return "<{} {}>".format(self.__class__.__name__, self.origin)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return self.origin == other
return self.origin == other.origin
def __lt__(self, other):
if not isinstance(other, self.__class__):
return None
if not self.is_valid:
return True
if not other.is_valid:
return False
if self.origin == other.origin:
return None
same_major = self.major == other.major
if not same_major:
return self.major < other.major
same_minor = self.minor == other.minor
if not same_minor:
return self.minor < other.minor
same_patch = self.patch == other.patch
if not same_patch:
return self.patch < other.patch
if not self.prerelease:
return False
if not other.prerelease:
return True
pres = [self.prerelease, other.prerelease]
pres.sort()
return pres[0] == self.prerelease
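The class above relies on `VERSION_REGEX` matching the input exactly once. A standalone sketch of parse-and-sort using the same pattern (simplified to the release part only, ignoring prerelease ordering, and using `match` instead of the class's `findall` check):

```python
import re

# Same semver-like pattern as VERSION_REGEX above.
VERSION_REGEX = re.compile(
    r"(?P<major>0|[1-9]\d*)"
    r"\.(?P<minor>0|[1-9]\d*)"
    r"\.(?P<patch>0|[1-9]\d*)"
    r"(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?"
    r"(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?"
)


def release_tuple(version):
    """Parse 'major.minor.patch' into a sortable tuple, or None."""
    match = VERSION_REGEX.match(version or "")
    if not match:
        return None
    return tuple(int(match.group(g)) for g in ("major", "minor", "patch"))


versions = ["3.14.7", "3.9.1", "3.14.5-nightly.1"]
versions.sort(key=release_tuple)
print(versions)  # ['3.9.1', '3.14.5-nightly.1', '3.14.7']
```

Sorting by numeric tuples avoids the lexical-sort trap where "3.9" would come after "3.14".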
def get_openpype_version_from_path(path, build=True):
@ -16,9 +152,9 @@ def get_openpype_version_from_path(path, build=True):
build (bool, optional): Get only builds, not sources
Returns:
str or None: version of OpenPype if found.
Union[OpenPypeVersion, None]: version of OpenPype if found.
"""
# fix path for application bundle on macos
if platform.system().lower() == "darwin":
path = os.path.join(path, "Contents", "MacOS", "lib", "Python")
@ -41,8 +177,10 @@ def get_openpype_version_from_path(path, build=True):
with open(version_file, "r") as vf:
exec(vf.read(), version)
version_match = re.search(r"(\d+\.\d+.\d+).*", version["__version__"])
return version_match[1]
version_str = version.get("__version__")
if version_str:
return OpenPypeVersion.from_string(version_str)
return None
def get_openpype_executable():
@ -54,6 +192,91 @@ def get_openpype_executable():
return exe_list, dir_list
def get_openpype_versions(dir_list):
print(">>> Getting OpenPype executable ...")
openpype_versions = []
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
print("--- Looking for OpenPype at: {}".format(install_dir))
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = get_openpype_version_from_path(subdir)
if not version:
continue
print(" - found: {} - {}".format(version, subdir))
openpype_versions.append((version, subdir))
return openpype_versions
def get_requested_openpype_executable(
exe, dir_list, requested_version
):
requested_version_obj = OpenPypeVersion.from_string(requested_version)
if not requested_version_obj:
print((
">>> Requested version does not match version regex \"{}\""
).format(VERSION_REGEX))
return None
print((
">>> Scanning for compatible requested version {}"
).format(requested_version))
openpype_versions = get_openpype_versions(dir_list)
if not openpype_versions:
return None
# when looking for a compatible requested version, add the version of
# the implicitly found executable to the list too.
if exe:
exe_dir = os.path.dirname(exe)
print("Looking for OpenPype at: {}".format(exe_dir))
version = get_openpype_version_from_path(exe_dir)
if version:
print(" - found: {} - {}".format(version, exe_dir))
openpype_versions.append((version, exe_dir))
matching_item = None
compatible_versions = []
for version_item in openpype_versions:
version, version_dir = version_item
if requested_version_obj.has_compatible_release(version):
compatible_versions.append(version_item)
if version == requested_version_obj:
# Store version item if version match exactly
# - break if is found matching version
matching_item = version_item
break
if not compatible_versions:
return None
compatible_versions.sort(key=lambda item: item[0])
if matching_item:
version, version_dir = matching_item
print((
"*** Found exact match build version {} in {}"
).format(version, version_dir))
else:
version, version_dir = compatible_versions[-1]
print((
"*** Latest compatible version found is {} in {}"
).format(version, version_dir))
# create list of executables for different platform and let
# Deadline decide.
exe_list = [
os.path.join(version_dir, "openpype_console.exe"),
os.path.join(version_dir, "openpype_console")
]
return FileUtils.SearchFileList(";".join(exe_list))
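The selection logic above prefers an exact version match and otherwise falls back to the highest release-compatible build (same major.minor). A compact sketch of that policy with versions reduced to tuples (hypothetical helper, not the Deadline plugin API):

```python
def pick_compatible(requested, available):
    """Pick exact match if present, else the highest with same major.minor.

    `requested` and the items of `available` are (major, minor, patch)
    tuples.
    """
    compatible = [v for v in available if v[:2] == requested[:2]]
    if not compatible:
        return None
    if requested in compatible:
        return requested
    return max(compatible)


available = [(3, 14, 1), (3, 14, 7), (3, 13, 9)]
print(pick_compatible((3, 14, 2), available))  # (3, 14, 7)
print(pick_compatible((4, 0, 0), available))   # None
```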
def inject_openpype_environment(deadlinePlugin):
""" Pull env vars from OpenPype and push them to rendering process.
@ -63,93 +286,29 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Injecting OpenPype environments ...")
try:
print(">>> Getting OpenPype executable ...")
exe_list, dir_list = get_openpype_executable()
openpype_versions = []
# if the job requires specific OpenPype version,
# lets go over all available and find compatible build.
exe = FileUtils.SearchFileList(exe_list)
requested_version = job.GetJobEnvironmentKeyValue("OPENPYPE_VERSION")
if requested_version:
print((
">>> Scanning for compatible requested version {}"
).format(requested_version))
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
print("--- Looking for OpenPype at: {}".format(install_dir))
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = get_openpype_version_from_path(subdir)
if not version:
continue
print(" - found: {} - {}".format(version, subdir))
openpype_versions.append((version, subdir))
exe = get_requested_openpype_executable(
exe, dir_list, requested_version
)
if exe is None:
raise RuntimeError((
"Cannot find compatible version available for version {}"
" requested by the job. Please add it through plugin"
" configuration in Deadline or install it to configured"
" directory."
).format(requested_version))
exe = FileUtils.SearchFileList(exe_list)
if openpype_versions:
# if looking for requested compatible version,
# add the implicitly specified to the list too.
print("Looking for OpenPype at: {}".format(os.path.dirname(exe)))
version = get_openpype_version_from_path(
os.path.dirname(exe))
if version:
print(" - found: {} - {}".format(
version, os.path.dirname(exe)
))
openpype_versions.append((version, os.path.dirname(exe)))
if requested_version:
# sort detected versions
if openpype_versions:
# use natural sorting
openpype_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest available version found is {}"
).format(openpype_versions[-1][0]))
requested_major, requested_minor, _ = requested_version.split(".")[:3] # noqa: E501
compatible_versions = []
for version in openpype_versions:
v = version[0].split(".")[:3]
if v[0] == requested_major and v[1] == requested_minor:
compatible_versions.append(version)
if not compatible_versions:
raise RuntimeError(
("Cannot find compatible version available "
"for version {} requested by the job. "
"Please add it through plugin configuration "
"in Deadline or install it to configured "
"directory.").format(requested_version))
# sort compatible versions and pick the last one
compatible_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest compatible version found is {}"
).format(compatible_versions[-1][0]))
# create list of executables for different platform and let
# Deadline decide.
exe_list = [
os.path.join(
compatible_versions[-1][1], "openpype_console.exe"),
os.path.join(
compatible_versions[-1][1], "openpype_console")
]
exe = FileUtils.SearchFileList(";".join(exe_list))
if exe == "":
raise RuntimeError(
"OpenPype executable was not found " +
"in the semicolon separated list " +
"\"" + ";".join(exe_list) + "\". " +
"The path to the render executable can be configured " +
"from the Plugin Configuration in the Deadline Monitor.")
if not exe:
raise RuntimeError((
"OpenPype executable was not found in the semicolon "
"separated list \"{}\"."
"The path to the render executable can be configured"
" from the Plugin Configuration in the Deadline Monitor."
).format(";".join(exe_list)))
print("--- OpenPype executable: {}".format(exe))
@ -162,51 +321,53 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Temporary path: {}".format(export_url))
args = [
exe,
"--headless",
'extractenvironments',
"extractenvironments",
export_url
]
add_args = {}
add_args['project'] = \
job.GetJobEnvironmentKeyValue('AVALON_PROJECT')
add_args['asset'] = job.GetJobEnvironmentKeyValue('AVALON_ASSET')
add_args['task'] = job.GetJobEnvironmentKeyValue('AVALON_TASK')
add_args['app'] = job.GetJobEnvironmentKeyValue('AVALON_APP_NAME')
add_args["envgroup"] = "farm"
add_kwargs = {
"project": job.GetJobEnvironmentKeyValue("AVALON_PROJECT"),
"asset": job.GetJobEnvironmentKeyValue("AVALON_ASSET"),
"task": job.GetJobEnvironmentKeyValue("AVALON_TASK"),
"app": job.GetJobEnvironmentKeyValue("AVALON_APP_NAME"),
"envgroup": "farm"
}
if all(add_kwargs.values()):
for key, value in add_kwargs.items():
args.extend(["--{}".format(key), value])
if all(add_args.values()):
for key, value in add_args.items():
args.append("--{}".format(key))
args.append(value)
else:
msg = "Required env vars: AVALON_PROJECT, AVALON_ASSET, " + \
"AVALON_TASK, AVALON_APP_NAME"
raise RuntimeError(msg)
raise RuntimeError((
"Missing required env vars: AVALON_PROJECT, AVALON_ASSET,"
" AVALON_TASK, AVALON_APP_NAME"
))
if not os.environ.get("OPENPYPE_MONGO"):
print(">>> Missing OPENPYPE_MONGO env var, process won't work")
env = os.environ
env["OPENPYPE_HEADLESS_MODE"] = "1"
env["AVALON_TIMEOUT"] = "5000"
os.environ["AVALON_TIMEOUT"] = "5000"
print(">>> Executing: {}".format(" ".join(args)))
std_output = subprocess.check_output(args,
cwd=os.path.dirname(exe),
env=env)
print(">>> Process result {}".format(std_output))
args_str = subprocess.list2cmdline(args)
print(">>> Executing: {} {}".format(exe, args_str))
process = ProcessUtils.SpawnProcess(
exe, args_str, os.path.dirname(exe)
)
ProcessUtils.WaitForExit(process, -1)
if process.ExitCode != 0:
raise RuntimeError(
"Failed to run OpenPype process to extract environments."
)
print(">>> Loading file ...")
with open(export_url) as fp:
contents = json.load(fp)
for key, value in contents.items():
deadlinePlugin.SetProcessEnvironmentVariable(key, value)
for key, value in contents.items():
deadlinePlugin.SetProcessEnvironmentVariable(key, value)
script_url = job.GetJobPluginInfoKeyValue("ScriptFilename")
if script_url:
script_url = script_url.format(**contents).replace("\\", "/")
print(">>> Setting script path {}".format(script_url))
job.SetJobPluginInfoKeyValue("ScriptFilename", script_url)


@ -13,10 +13,7 @@ import click
from openpype.modules import (
JsonFilesSettingsDef,
OpenPypeAddOn,
ModulesManager
)
# Import interface defined by this addon to be able find other addons using it
from openpype_interfaces import (
ModulesManager,
IPluginPaths,
ITrayAction
)


@ -5,8 +5,8 @@ import platform
import click
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import (
from openpype.modules import (
OpenPypeModule,
ITrayModule,
IPluginPaths,
ISettingsChangeListener


@ -7,10 +7,8 @@ import pyblish.api
from openpype.client import get_asset_by_id
from openpype.lib import filter_profiles
from openpype.pipeline import KnownPublishError
# Copy of constant `openpype_modules.ftrack.lib.avalon_sync.CUST_ATTR_AUTO_SYNC`
CUST_ATTR_AUTO_SYNC = "avalon_auto_sync"
CUST_ATTR_GROUP = "openpype"
@ -19,7 +17,6 @@ CUST_ATTR_GROUP = "openpype"
def get_pype_attr(session, split_hierarchical=True):
custom_attributes = []
hier_custom_attributes = []
# TODO remove deprecated "avalon" group from query
cust_attrs_query = (
"select id, entity_type, object_type_id, is_hierarchical, default"
" from CustomAttributeConfiguration"
@ -79,120 +76,284 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
create_task_status_profiles = []
def process(self, context):
self.context = context
if "hierarchyContext" not in self.context.data:
if "hierarchyContext" not in context.data:
return
hierarchy_context = self._get_active_assets(context)
self.log.debug("__ hierarchy_context: {}".format(hierarchy_context))
session = self.context.data["ftrackSession"]
project_name = self.context.data["projectEntity"]["name"]
query = 'Project where full_name is "{}"'.format(project_name)
project = session.query(query).one()
auto_sync_state = project["custom_attributes"][CUST_ATTR_AUTO_SYNC]
session = context.data["ftrackSession"]
project_name = context.data["projectName"]
project = session.query(
'select id, full_name from Project where full_name is "{}"'.format(
project_name
)
).first()
if not project:
raise KnownPublishError(
"Project \"{}\" was not found on ftrack.".format(project_name)
)
self.context = context
self.session = session
self.ft_project = project
self.task_types = self.get_all_task_types(project)
self.task_statuses = self.get_task_statuses(project)
# temporarily disable ftrack project's auto-sync
if auto_sync_state:
self.auto_sync_off(project)
# import ftrack hierarchy
self.import_to_ftrack(project_name, hierarchy_context)
try:
# import ftrack hierarchy
self.import_to_ftrack(project_name, hierarchy_context)
except Exception:
raise
finally:
if auto_sync_state:
self.auto_sync_on(project)
def query_ftrack_entitites(self, session, ft_project):
project_id = ft_project["id"]
entities = session.query((
"select id, name, parent_id"
" from TypedContext where project_id is \"{}\""
).format(project_id)).all()
def import_to_ftrack(self, project_name, input_data, parent=None):
entities_by_id = {}
entities_by_parent_id = collections.defaultdict(list)
for entity in entities:
entities_by_id[entity["id"]] = entity
parent_id = entity["parent_id"]
entities_by_parent_id[parent_id].append(entity)
ftrack_hierarchy = []
ftrack_id_queue = collections.deque()
ftrack_id_queue.append((project_id, ftrack_hierarchy))
while ftrack_id_queue:
item = ftrack_id_queue.popleft()
ftrack_id, parent_list = item
if ftrack_id == project_id:
entity = ft_project
name = entity["full_name"]
else:
entity = entities_by_id[ftrack_id]
name = entity["name"]
children = []
parent_list.append({
"name": name,
"low_name": name.lower(),
"entity": entity,
"children": children,
})
for child in entities_by_parent_id[ftrack_id]:
ftrack_id_queue.append((child["id"], children))
return ftrack_hierarchy
def find_matching_ftrack_entities(
self, hierarchy_context, ftrack_hierarchy
):
walk_queue = collections.deque()
for entity_name, entity_data in hierarchy_context.items():
walk_queue.append(
(entity_name, entity_data, ftrack_hierarchy)
)
matching_ftrack_entities = []
while walk_queue:
item = walk_queue.popleft()
entity_name, entity_data, ft_children = item
matching_ft_child = None
for ft_child in ft_children:
if ft_child["low_name"] == entity_name.lower():
matching_ft_child = ft_child
break
if matching_ft_child is None:
continue
entity = matching_ft_child["entity"]
entity_data["ft_entity"] = entity
matching_ftrack_entities.append(entity)
hierarchy_children = entity_data.get("childs")
if not hierarchy_children:
continue
for child_name, child_data in hierarchy_children.items():
walk_queue.append(
(child_name, child_data, matching_ft_child["children"])
)
return matching_ftrack_entities
def query_custom_attribute_values(self, session, entities, hier_attrs):
attr_ids = {
attr["id"]
for attr in hier_attrs
}
entity_ids = {
entity["id"]
for entity in entities
}
output = {
entity_id: {}
for entity_id in entity_ids
}
if not attr_ids or not entity_ids:
return {}
joined_attr_ids = ",".join(
['"{}"'.format(attr_id) for attr_id in attr_ids]
)
# Query values in chunks
chunk_size = int(5000 / len(attr_ids))
# Make sure entity_ids is `list` for chunk selection
entity_ids = list(entity_ids)
results = []
for idx in range(0, len(entity_ids), chunk_size):
joined_entity_ids = ",".join([
'"{}"'.format(entity_id)
for entity_id in entity_ids[idx:idx + chunk_size]
])
results.extend(
session.query(
(
"select value, entity_id, configuration_id"
" from CustomAttributeValue"
" where entity_id in ({}) and configuration_id in ({})"
).format(
joined_entity_ids,
joined_attr_ids
)
).all()
)
for result in results:
attr_id = result["configuration_id"]
entity_id = result["entity_id"]
output[entity_id][attr_id] = result["value"]
return output
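The chunking above keeps each ftrack query expression under a size limit by shrinking the entity-id chunk as the attribute count grows. The pattern generalizes; a sketch with a fake query function standing in for the real `session.query` call (`fake_query` is not a real ftrack API):

```python
def query_in_chunks(entity_ids, attr_ids, query_func, limit=5000):
    """Run query_func over entity-id chunks sized so that
    len(chunk) * len(attr_ids) stays roughly under `limit`."""
    if not entity_ids or not attr_ids:
        return []
    chunk_size = max(1, int(limit / len(attr_ids)))
    entity_ids = list(entity_ids)  # make sure slicing works
    results = []
    for idx in range(0, len(entity_ids), chunk_size):
        results.extend(query_func(entity_ids[idx:idx + chunk_size], attr_ids))
    return results


# Fake stand-in returning one row per (entity, attribute) pair.
def fake_query(chunk, attrs):
    return [(entity_id, attr_id) for entity_id in chunk for attr_id in attrs]


rows = query_in_chunks(range(10), ["attr1", "attr2"], fake_query, limit=6)
print(len(rows))  # 20
```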
def import_to_ftrack(self, project_name, hierarchy_context):
# Pre-query hierarchical custom attributes
hier_custom_attributes = get_pype_attr(self.session)[1]
hier_attrs = get_pype_attr(self.session)[1]
hier_attr_by_key = {
attr["key"]: attr
for attr in hier_custom_attributes
for attr in hier_attrs
}
# Query user entity (for comments)
user = self.session.query(
"User where username is \"{}\"".format(self.session.api_user)
).first()
if not user:
self.log.warning(
"Was not able to query current User {}".format(
self.session.api_user
)
)
# Query ftrack hierarchy with parenting
ftrack_hierarchy = self.query_ftrack_entitites(
self.session, self.ft_project)
# Fill ftrack entities to hierarchy context
# - there is no need to query entities again
matching_entities = self.find_matching_ftrack_entities(
hierarchy_context, ftrack_hierarchy)
# Query custom attribute values of each entity
custom_attr_values_by_id = self.query_custom_attribute_values(
self.session, matching_entities, hier_attrs)
# Get ftrack api module (as they are different per python version)
ftrack_api = self.context.data["ftrackPythonModule"]
for entity_name in input_data:
entity_data = input_data[entity_name]
# Use queue of hierarchy items to process
import_queue = collections.deque()
for entity_name, entity_data in hierarchy_context.items():
import_queue.append(
(entity_name, entity_data, None)
)
while import_queue:
item = import_queue.popleft()
entity_name, entity_data, parent = item
entity_type = entity_data['entity_type']
self.log.debug(entity_data)
self.log.debug(entity_type)
if entity_type.lower() == 'project':
entity = self.ft_project
elif self.ft_project is None or parent is None:
entity = entity_data.get("ft_entity")
if entity is None and entity_type.lower() == "project":
raise AssertionError(
"Collected items are not in right order!"
)
# try to find if entity already exists
else:
query = (
'TypedContext where name is "{0}" and '
'project_id is "{1}"'
).format(entity_name, self.ft_project["id"])
try:
entity = self.session.query(query).one()
except Exception:
entity = None
# Create entity if not exists
if entity is None:
entity = self.create_entity(
name=entity_name,
type=entity_type,
parent=parent
)
entity = self.session.create(entity_type, {
"name": entity_name,
"parent": parent
})
entity_data["ft_entity"] = entity
# self.log.info('entity: {}'.format(dict(entity)))
# CUSTOM ATTRIBUTES
custom_attributes = entity_data.get('custom_attributes', [])
instances = [
instance
for instance in self.context
if instance.data.get("asset") == entity["name"]
]
custom_attributes = entity_data.get('custom_attributes', {})
instances = []
for instance in self.context:
instance_asset_name = instance.data.get("asset")
if (
instance_asset_name
and instance_asset_name.lower() == entity["name"].lower()
):
instances.append(instance)
for instance in instances:
instance.data["ftrackEntity"] = entity
for key in custom_attributes:
for key, cust_attr_value in custom_attributes.items():
if cust_attr_value is None:
continue
hier_attr = hier_attr_by_key.get(key)
# Use simple method if key is not hierarchical
if not hier_attr:
assert (key in entity['custom_attributes']), (
'Missing custom attribute key: `{0}` in attrs: '
'`{1}`'.format(key, entity['custom_attributes'].keys())
if key not in entity["custom_attributes"]:
raise KnownPublishError((
"Missing custom attribute in ftrack with name '{}'"
).format(key))
entity['custom_attributes'][key] = cust_attr_value
continue
attr_id = hier_attr["id"]
entity_values = custom_attr_values_by_id.get(entity["id"], {})
# New value is defined by having id in values
# - it can be set to 'None' (ftrack allows that using API)
is_new_value = attr_id not in entity_values
attr_value = entity_values.get(attr_id)
# Use ftrack operations method to set hierarchical
# attribute value.
# - this is because there may be non-hierarchical custom
# attributes with different properties
entity_key = collections.OrderedDict((
("configuration_id", hier_attr["id"]),
("entity_id", entity["id"])
))
op = None
if is_new_value:
op = ftrack_api.operation.CreateEntityOperation(
"CustomAttributeValue",
entity_key,
{"value": cust_attr_value}
)
entity['custom_attributes'][key] = custom_attributes[key]
else:
# Use ftrack operations method to set hierarchical
# attribute value.
# - this is because there may be non-hierarchical custom
# attributes with different properties
entity_key = collections.OrderedDict()
entity_key["configuration_id"] = hier_attr["id"]
entity_key["entity_id"] = entity["id"]
self.session.recorded_operations.push(
ftrack_api.operation.UpdateEntityOperation(
"ContextCustomAttributeValue",
entity_key,
"value",
ftrack_api.symbol.NOT_SET,
custom_attributes[key]
)
elif attr_value != cust_attr_value:
op = ftrack_api.operation.UpdateEntityOperation(
"CustomAttributeValue",
entity_key,
"value",
attr_value,
cust_attr_value
)
if op is not None:
self.session.recorded_operations.push(op)
if self.session.recorded_operations:
try:
self.session.commit()
except Exception:
@ -206,7 +367,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
for instance in instances:
task_name = instance.data.get("task")
if task_name:
instances_by_task_name[task_name].append(instance)
instances_by_task_name[task_name.lower()].append(instance)
tasks = entity_data.get('tasks', [])
existing_tasks = []
@ -247,30 +408,28 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
six.reraise(tp, value, tb)
# Create notes.
user = self.session.query(
"User where username is \"{}\"".format(self.session.api_user)
).first()
if user:
for comment in entity_data.get("comments", []):
entity_comments = entity_data.get("comments")
if user and entity_comments:
for comment in entity_comments:
entity.create_note(comment, user)
else:
self.log.warning(
"Was not able to query current User {}".format(
self.session.api_user
)
)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Import children.
if 'childs' in entity_data:
self.import_to_ftrack(
project_name, entity_data['childs'], entity)
children = entity_data.get("childs")
if not children:
continue
for entity_name, entity_data in children.items():
import_queue.append(
(entity_name, entity_data, entity)
)
def create_links(self, project_name, entity_data, entity):
# Clear existing links.
@ -366,48 +525,6 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
return task
def create_entity(self, name, type, parent):
entity = self.session.create(type, {
'name': name,
'parent': parent
})
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
return entity
def auto_sync_off(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = False
self.log.info("Ftrack autosync switched off")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
def auto_sync_on(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = True
self.log.info("Ftrack autosync switched on")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
def _get_active_assets(self, context):
""" Returns only asset dictionary.
Usually the last part of deep dictionary which
@ -429,19 +546,17 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
hierarchy_context = context.data["hierarchyContext"]
active_assets = []
active_assets = set()
# filter only the active publishing instances
for instance in context:
if instance.data.get("publish") is False:
continue
if not instance.data.get("asset"):
continue
active_assets.append(instance.data["asset"])
asset_name = instance.data.get("asset")
if asset_name:
active_assets.add(asset_name)
# remove duplicity in list
active_assets = list(set(active_assets))
self.log.debug("__ active_assets: {}".format(active_assets))
self.log.debug("__ active_assets: {}".format(list(active_assets)))
return get_pure_hierarchy_data(hierarchy_context)

View file

@ -7,6 +7,8 @@ import signal
import socket
import datetime
import appdirs
import ftrack_api
from openpype_modules.ftrack.ftrack_server.ftrack_server import FtrackServer
from openpype_modules.ftrack.ftrack_server.lib import (
@ -253,6 +255,15 @@ class StatusFactory:
)
})
items.append({
"type": "label",
"value": (
"Local versions dir: {}<br/>Version repository path: {}"
).format(
appdirs.user_data_dir("openpype", "pypeclub"),
os.environ.get("OPENPYPE_PATH")
)
})
items.append({"type": "label", "value": "---"})
return items

View file

@ -3,8 +3,11 @@
import click
import os
from openpype.modules import OpenPypeModule
from openpype_interfaces import IPluginPaths, ITrayAction
from openpype.modules import (
OpenPypeModule,
IPluginPaths,
ITrayAction,
)
class KitsuModule(OpenPypeModule, IPluginPaths, ITrayAction):

View file

@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
import os
import re
import pyblish.api
class CollectKitsuUsername(pyblish.api.ContextPlugin):
"""Collect Kitsu username from the kitsu login"""
order = pyblish.api.CollectorOrder + 0.499
label = "Kitsu username"
def process(self, context):
kitsu_login = os.environ.get('KITSU_LOGIN')
if not kitsu_login:
return
kitsu_username = kitsu_login.split("@")[0].replace('.', ' ')
new_username = re.sub('[^a-zA-Z]', ' ', kitsu_username).title()
for instance in context:
# Don't override customData if it already exists
if 'customData' not in instance.data:
instance.data['customData'] = {}
instance.data['customData']["kitsuUsername"] = new_username
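Taken out of the plugin, the login-to-name transformation above can be sketched as a standalone function (the function name is illustrative):

```python
import re

def kitsu_username_from_login(kitsu_login):
    # Keep only the local part of the e-mail address; dots become spaces.
    kitsu_username = kitsu_login.split("@")[0].replace(".", " ")
    # Any remaining non-letters also become spaces, then title-case.
    return re.sub("[^a-zA-Z]", " ", kitsu_username).title()

print(kitsu_username_from_login("john.doe@studio.com"))  # John Doe
```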

View file

@ -31,7 +31,6 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin):
continue
review_path = representation.get("published_path")
self.log.debug("Found review at: {}".format(review_path))
gazu.task.add_preview(

View file

@ -1,5 +1,7 @@
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
from openpype.modules import (
OpenPypeModule,
ITrayAction,
)
class LauncherAction(OpenPypeModule, ITrayAction):

View file

@ -1,5 +1,4 @@
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
from openpype.modules import OpenPypeModule, ITrayModule
class LogViewModule(OpenPypeModule, ITrayModule):

View file

@ -2,8 +2,7 @@ import os
import json
import appdirs
import requests
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
from openpype.modules import OpenPypeModule, ITrayModule
class MusterModule(OpenPypeModule, ITrayModule):

View file

@ -1,5 +1,4 @@
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
from openpype.modules import OpenPypeModule, ITrayAction
class ProjectManagerAction(OpenPypeModule, ITrayAction):

View file

@ -1,5 +1,4 @@
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
from openpype.modules import OpenPypeModule, ITrayAction
class PythonInterpreterAction(OpenPypeModule, ITrayAction):

View file

@ -2,8 +2,7 @@
"""Module providing support for Royal Render."""
import os
import openpype.modules
from openpype.modules import OpenPypeModule
from openpype_interfaces import IPluginPaths
from openpype.modules import OpenPypeModule, IPluginPaths
class RoyalRenderModule(OpenPypeModule, IPluginPaths):

View file

@ -1,5 +1,4 @@
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
from openpype.modules import OpenPypeModule, ITrayAction
class SettingsAction(OpenPypeModule, ITrayAction):

View file

@ -1,12 +1,11 @@
import os
from openpype_interfaces import (
from openpype.modules import (
OpenPypeModule,
ITrayModule,
IPluginPaths,
)
from openpype.modules import OpenPypeModule
SHOTGRID_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -18,15 +18,15 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin):
profiles = None
def process(self, instance):
task_name = legacy_io.Session.get("AVALON_TASK")
task_data = instance.data["anatomyData"].get("task", {})
family = self.main_family_from_instance(instance)
key_values = {
"families": family,
"tasks": task_name,
"tasks": task_data.get("name"),
"task_types": task_data.get("type"),
"hosts": instance.data["anatomyData"]["app"],
"subsets": instance.data["subset"]
}
profile = filter_profiles(self.profiles, key_values,
logger=self.log)

View file

@ -112,7 +112,13 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
if review_path:
fill_pairs.append(("review_filepath", review_path))
task_data = fill_data.get("task")
task_data = (
copy.deepcopy(instance.data.get("anatomyData", {})).get("task")
or fill_data.get("task")
)
if not isinstance(task_data, dict):
# fallback for legacy - if task_data is only task name
task_data = {"name": task_data}
if task_data:
if (
"{task}" in message_templ
@ -142,13 +148,17 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
def _get_thumbnail_path(self, instance):
"""Returns abs url for thumbnail if present in instance repres"""
published_path = None
thumbnail_path = None
for repre in instance.data.get("representations", []):
if repre.get('thumbnail') or "thumbnail" in repre.get('tags', []):
if os.path.exists(repre["published_path"]):
published_path = repre["published_path"]
repre_thumbnail_path = (
repre.get("published_path") or
os.path.join(repre["stagingDir"], repre["files"])
)
if os.path.exists(repre_thumbnail_path):
thumbnail_path = repre_thumbnail_path
break
return published_path
return thumbnail_path
def _get_review_path(self, instance):
"""Returns abs url for review if present in instance repres"""
@ -178,10 +188,17 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
channel=channel,
title=os.path.basename(p_file)
)
attachment_str += "\n<{}|{}>".format(
response["file"]["permalink"],
os.path.basename(p_file))
file_ids.append(response["file"]["id"])
if response.get("error"):
error_str = self._enrich_error(
str(response.get("error")),
channel)
self.log.warning(
"Error happened: {}".format(error_str))
else:
attachment_str += "\n<{}|{}>".format(
response["file"]["permalink"],
os.path.basename(p_file))
file_ids.append(response["file"]["id"])
if publish_files:
message += attachment_str
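The fixed lookup above prefers the published path but falls back to the staging dir when the representation was not integrated. A minimal sketch, assuming single-file representations where `files` is a string (as the original join also assumes):

```python
import os

def get_thumbnail_path(representations):
    """Return the first existing thumbnail path, or None."""
    for repre in representations:
        if repre.get("thumbnail") or "thumbnail" in repre.get("tags", []):
            # Prefer the integrated path, fall back to the staging copy.
            candidate = (
                repre.get("published_path")
                or os.path.join(repre["stagingDir"], repre["files"])
            )
            if os.path.exists(candidate):
                return candidate
    return None
```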

View file

@ -1,6 +1,5 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IPluginPaths
from openpype.modules import OpenPypeModule, IPluginPaths
SLACK_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -0,0 +1,37 @@
from aiohttp.web_response import Response
from openpype.lib import Logger
class SyncServerModuleRestApi:
"""
REST API endpoint used by hosts and other processes to control the
Sync Server (currently to reset its delay timer).
"""
def __init__(self, user_module, server_manager):
self._log = None
self.module = user_module
self.server_manager = server_manager
self.prefix = "/sync_server"
self.register()
@property
def log(self):
if self._log is None:
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
def register(self):
self.server_manager.add_route(
"POST",
self.prefix + "/reset_timer",
self.reset_timer,
)
async def reset_timer(self, _request):
"""Force timer to run immediately."""
self.module.reset_timer()
return Response(status=200)

View file

@ -236,6 +236,7 @@ class SyncServerThread(threading.Thread):
"""
def __init__(self, module):
self.log = Logger.get_logger(self.__class__.__name__)
super(SyncServerThread, self).__init__()
self.module = module
self.loop = None

View file

@ -11,9 +11,12 @@ from collections import deque, defaultdict
import click
from bson.objectid import ObjectId
from openpype.client import get_projects
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
from openpype.client import (
get_projects,
get_representations,
get_representation_by_id,
)
from openpype.modules import OpenPypeModule, ITrayModule
from openpype.settings import (
get_project_settings,
get_system_settings,
@ -30,9 +33,6 @@ from .providers import lib
from .utils import time_function, SyncStatus, SiteAlreadyPresentError
from openpype.client import get_representations, get_representation_by_id
log = Logger.get_logger("SyncServer")
@ -136,14 +136,14 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
""" Start of Public API """
def add_site(self, project_name, representation_id, site_name=None,
force=False):
force=False, priority=None, reset_timer=False):
"""
Adds new site to representation to be synced.
'project_name' must have synchronization enabled (globally or
project only)
Used as a API endpoint from outside applications (Loader etc).
Used as an API endpoint from outside applications (Loader etc).
Use 'force' to reset existing site.
@ -152,6 +152,9 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
representation_id (string): MongoDB _id value
site_name (string): name of configured and active site
force (bool): reset site if exists
priority (int): set priority
reset_timer (bool): if the delay timer should be reset, e.g. when a user
marks some representation to be synced manually
Throws:
SiteAlreadyPresentError - if adding already existing site and
@ -167,7 +170,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
self.reset_site_on_representation(project_name,
representation_id,
site_name=site_name,
force=force)
force=force,
priority=priority)
if reset_timer:
self.reset_timer()
def remove_site(self, project_name, representation_id, site_name,
remove_local_files=False):
@ -911,7 +918,59 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
In case of user's involvement (reset site), start that right away.
"""
self.sync_server_thread.reset_timer()
if not self.enabled:
return
if self.sync_server_thread is None:
self._reset_timer_with_rest_api()
else:
self.sync_server_thread.reset_timer()
def is_representation_on_site(
self, project_name, representation_id, site_name
):
"""Check if all files of 'representation_id' are available on 'site_name'."""
representation = get_representation_by_id(project_name,
representation_id,
fields=["_id", "files"])
if not representation:
return False
on_site = False
for file_info in representation.get("files", []):
for site in file_info.get("sites", []):
if site["name"] != site_name:
continue
if (site.get("progress") or site.get("error") or
not site.get("created_dt")):
return False
on_site = True
return on_site
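The completeness check in `is_representation_on_site` boils down to: every matching site entry must have a `created_dt` and carry no `progress` or `error`. A pure-function sketch over the representation document shape, with the DB access stripped out:

```python
def is_on_site(representation, site_name):
    """Mirror of the availability loop above."""
    on_site = False
    for file_info in representation.get("files", []):
        for site in file_info.get("sites", []):
            if site["name"] != site_name:
                continue
            # In-progress transfer, error, or missing timestamp -> incomplete.
            if (site.get("progress") or site.get("error")
                    or not site.get("created_dt")):
                return False
            on_site = True
    return on_site
```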
def _reset_timer_with_rest_api(self):
# POST to webserver sites to add to representations
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
if not webserver_url:
self.log.warning("Couldn't find webserver url")
return
rest_api_url = "{}/sync_server/reset_timer".format(
webserver_url
)
try:
import requests
except Exception:
self.log.warning(
"Couldn't reset timer "
"('requests' is not available)"
)
return
requests.post(rest_api_url)
def get_enabled_projects(self):
"""Returns list of projects which have SyncServer enabled."""
@ -1544,12 +1603,12 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
Args:
project_name (string): name of project - force to db connection as
each file might come from different collection
new_file_id (string):
new_file_id (string): only present if file synced successfully
file (dictionary): info about processed file (pulled from DB)
representation (dictionary): parent repr of file (from DB)
site (string): label ('gdrive', 'S3')
error (string): exception message
progress (float): 0-1 of progress of upload/download
progress (float): 0-0.99 of progress of upload/download
priority (int): 0-100 set priority
Returns:
@ -1655,7 +1714,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
def reset_site_on_representation(self, project_name, representation_id,
side=None, file_id=None, site_name=None,
remove=False, pause=None, force=False):
remove=False, pause=None, force=False,
priority=None):
"""
Reset information about synchronization for particular 'file_id'
and provider.
@ -1678,6 +1738,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
remove (bool): if True remove site altogether
pause (bool or None): if True - pause, False - unpause
force (bool): hard reset - currently only for add_site
priority (int): set priority
Raises:
SiteAlreadyPresentError - if adding already existing site and
@ -1705,6 +1766,10 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
elem = {"name": site_name}
# Add priority
if priority:
elem["priority"] = priority
if file_id: # reset site for particular file
self._reset_site_for_file(project_name, representation_id,
elem, file_id, site_name)
@ -2089,6 +2154,15 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
def cli(self, click_group):
click_group.add_command(cli_main)
# Webserver module implementation
def webserver_initialization(self, server_manager):
"""Add routes for syncs."""
if self.tray_initialized:
from .rest_api import SyncServerModuleRestApi
self.rest_api_obj = SyncServerModuleRestApi(
self, server_manager
)
@click.group(SyncServerModule.name, help="SyncServer module related commands.")
def cli_main():

View file

@ -21,7 +21,7 @@ class TimersManagerModuleRestApi:
@property
def log(self):
if self._log is None:
self._log = Logger.get_logger(self.__ckass__.__name__)
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
def register(self):

View file

@ -3,8 +3,8 @@ import platform
from openpype.client import get_asset_by_name
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
from openpype.modules import (
OpenPypeModule,
ITrayService,
IPluginPaths
)

View file

@ -5,7 +5,7 @@ import logging
from concurrent.futures import CancelledError
from Qt import QtWidgets
from openpype_interfaces import ITrayService
from openpype.modules import ITrayService
log = logging.getLogger(__name__)

View file

@ -24,8 +24,7 @@ import os
import socket
from openpype import resources
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayService
from openpype.modules import OpenPypeModule, ITrayService
class WebServerModule(OpenPypeModule, ITrayService):

View file

@ -85,6 +85,7 @@ from .context_tools import (
register_host,
registered_host,
deregister_host,
get_process_id,
)
install = install_host
uninstall = uninstall_host

View file

@ -5,6 +5,7 @@ import json
import types
import logging
import platform
import uuid
import pyblish.api
from pyblish.lib import MessageHandler
@ -37,6 +38,7 @@ from . import (
_is_installed = False
_process_id = None
_registered_root = {"_": ""}
_registered_host = {"_": None}
# Keep modules manager (and it's modules) in memory
@ -546,3 +548,18 @@ def change_current_context(asset_doc, task_name, template_key=None):
emit_event("taskChanged", data)
return changes
def get_process_id():
"""Fake process id created on demand using uuid.
Can be used to create process specific folders in temp directory.
Returns:
str: Process id.
"""
global _process_id
if _process_id is None:
_process_id = str(uuid.uuid4())
return _process_id
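`get_process_id` is just a lazily created, module-cached UUID; the pattern in isolation:

```python
import uuid

_process_id = None

def get_process_id():
    """Return a fake process id, created on first call and then reused."""
    global _process_id
    if _process_id is None:
        _process_id = str(uuid.uuid4())
    return _process_id

# Stable within the process, so it can name process-specific temp folders,
# e.g. os.path.join(tempfile.gettempdir(), "openpype_" + get_process_id())
```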

View file

@ -1,6 +1,7 @@
from .constants import (
SUBSET_NAME_ALLOWED_SYMBOLS,
DEFAULT_SUBSET_TEMPLATE,
PRE_CREATE_THUMBNAIL_KEY,
)
from .subset_name import (
@ -24,6 +25,8 @@ from .creator_plugins import (
deregister_creator_plugin,
register_creator_plugin_path,
deregister_creator_plugin_path,
cache_and_get_instances,
)
from .context import (
@ -40,6 +43,7 @@ from .legacy_create import (
__all__ = (
"SUBSET_NAME_ALLOWED_SYMBOLS",
"DEFAULT_SUBSET_TEMPLATE",
"PRE_CREATE_THUMBNAIL_KEY",
"TaskNotSetError",
"get_subset_name",

View file

@ -1,8 +1,10 @@
SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_."
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"
PRE_CREATE_THUMBNAIL_KEY = "thumbnail_source"
__all__ = (
"SUBSET_NAME_ALLOWED_SYMBOLS",
"DEFAULT_SUBSET_TEMPLATE",
"PRE_CREATE_THUMBNAIL_KEY",
)

Some files were not shown because too many files have changed in this diff.