Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 12:54:40 +01:00)

Merge branch '2.x/develop' into feature/451-Fusion_basic_integration

Commit 07f0a68366: 410 changed files with 19041 additions and 2945 deletions
.github_changelog_generator (new file, 7 lines)
```
pr-wo-labels=False
exclude-labels=duplicate,question,invalid,wontfix,weekly-digest
author=False
unreleased=False
since-tag=2.11.0
release-branch=master
enhancement-label=**Enhancements:**
```
CHANGELOG.md (new file, 565 lines)

# Changelog
## [2.12.0](https://github.com/pypeclub/pype/tree/2.12.0) (2020-09-09)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.8...2.12.0)

**Enhancements:**

- Less mongo connections [\#509](https://github.com/pypeclub/pype/pull/509)
- Nuke: adding image loader [\#499](https://github.com/pypeclub/pype/pull/499)
- Move launcher window to top if launcher action is clicked [\#450](https://github.com/pypeclub/pype/pull/450)
- Maya: better tile rendering support in Pype [\#446](https://github.com/pypeclub/pype/pull/446)
- Implementation of non QML launcher [\#443](https://github.com/pypeclub/pype/pull/443)
- Optional skip review on renders. [\#441](https://github.com/pypeclub/pype/pull/441)
- Ftrack: Option to push status from task to latest version [\#440](https://github.com/pypeclub/pype/pull/440)
- Properly containerize image plane loads. [\#434](https://github.com/pypeclub/pype/pull/434)
- Option to keep the review files. [\#426](https://github.com/pypeclub/pype/pull/426)
- Isolate view on instance members. [\#425](https://github.com/pypeclub/pype/pull/425)
- ftrack group is bcw compatible [\#418](https://github.com/pypeclub/pype/pull/418)
- Maya: Publishing of tile renderings on Deadline [\#398](https://github.com/pypeclub/pype/pull/398)
- Feature/little bit better logging gui [\#383](https://github.com/pypeclub/pype/pull/383)

**Fixed bugs:**

- Maya: Fix tile order for Draft Tile Assembler [\#511](https://github.com/pypeclub/pype/pull/511)
- Remove extra dash [\#501](https://github.com/pypeclub/pype/pull/501)
- Fix: strip dot from repre names in single frame renders [\#498](https://github.com/pypeclub/pype/pull/498)
- Better handling of destination during integrating [\#485](https://github.com/pypeclub/pype/pull/485)
- Fix: allow thumbnail creation for single frame renders [\#460](https://github.com/pypeclub/pype/pull/460)
- added missing argument to launch\_application in ftrack app handler [\#453](https://github.com/pypeclub/pype/pull/453)
- Burnins: Copy bit rate of input video to match quality. [\#448](https://github.com/pypeclub/pype/pull/448)
- Standalone publisher is now independent from tray [\#442](https://github.com/pypeclub/pype/pull/442)
- Bugfix/empty enumerator attributes [\#436](https://github.com/pypeclub/pype/pull/436)
- Fixed wrong order of "other" category collapsing in publisher [\#435](https://github.com/pypeclub/pype/pull/435)
- Multiple reviews were being overwritten to one. [\#424](https://github.com/pypeclub/pype/pull/424)
- Cleanup plugin fail on instances without staging dir [\#420](https://github.com/pypeclub/pype/pull/420)
- deprecated `-intra` parameter in ffmpeg replaced with new `-g` [\#417](https://github.com/pypeclub/pype/pull/417)
- Delivery action can now work with entered path [\#397](https://github.com/pypeclub/pype/pull/397)

**Merged pull requests:**

- Review on instance.data [\#473](https://github.com/pypeclub/pype/pull/473)

## [2.11.8](https://github.com/pypeclub/pype/tree/2.11.8) (2020-08-27)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.7...2.11.8)

**Enhancements:**

- DWAA support for Maya [\#382](https://github.com/pypeclub/pype/issues/382)
- Isolate View on Playblast [\#367](https://github.com/pypeclub/pype/issues/367)
- Maya: Tile rendering [\#297](https://github.com/pypeclub/pype/issues/297)
- single pype instance running [\#47](https://github.com/pypeclub/pype/issues/47)
- PYPE-649: projects don't guarantee backwards compatible environment [\#8](https://github.com/pypeclub/pype/issues/8)
- PYPE-663: separate venv for each deployed version [\#7](https://github.com/pypeclub/pype/issues/7)

**Fixed bugs:**

- pyblish pype - other group is collapsed before plugins are done [\#431](https://github.com/pypeclub/pype/issues/431)
- Alpha white edges in harmony on PNGs [\#412](https://github.com/pypeclub/pype/issues/412)
- harmony image loader picks wrong representations [\#404](https://github.com/pypeclub/pype/issues/404)
- Clockify crash when response contains symbol not allowed by UTF-8 [\#81](https://github.com/pypeclub/pype/issues/81)

## [2.11.7](https://github.com/pypeclub/pype/tree/2.11.7) (2020-08-21)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.6...2.11.7)

**Fixed bugs:**

- Clean Up Baked Movie [\#369](https://github.com/pypeclub/pype/issues/369)
- celaction last workfile [\#459](https://github.com/pypeclub/pype/pull/459)

## [2.11.6](https://github.com/pypeclub/pype/tree/2.11.6) (2020-08-18)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.5...2.11.6)

**Enhancements:**

- publisher app [\#56](https://github.com/pypeclub/pype/issues/56)

## [2.11.5](https://github.com/pypeclub/pype/tree/2.11.5) (2020-08-13)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.4...2.11.5)

**Enhancements:**

- Switch from master to equivalent [\#220](https://github.com/pypeclub/pype/issues/220)
- Standalone publisher now only groups sequence if the extension is known [\#439](https://github.com/pypeclub/pype/pull/439)

**Fixed bugs:**

- Logs have been disabled for editorial by default to speed up publishing [\#433](https://github.com/pypeclub/pype/pull/433)
- additional fixes for celaction [\#430](https://github.com/pypeclub/pype/pull/430)
- Harmony: invalid variable scope in validate scene settings [\#428](https://github.com/pypeclub/pype/pull/428)
- new representation name for audio was not accepted [\#427](https://github.com/pypeclub/pype/pull/427)

## [2.11.4](https://github.com/pypeclub/pype/tree/2.11.4) (2020-08-10)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.3...2.11.4)

**Enhancements:**

- WebSocket server [\#135](https://github.com/pypeclub/pype/issues/135)
- standalonepublisher: editorial family features expansion \[master branch\] [\#411](https://github.com/pypeclub/pype/pull/411)

## [2.11.3](https://github.com/pypeclub/pype/tree/2.11.3) (2020-08-04)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.2...2.11.3)

**Fixed bugs:**

- Harmony: publishing performance issues [\#408](https://github.com/pypeclub/pype/pull/408)

## [2.11.2](https://github.com/pypeclub/pype/tree/2.11.2) (2020-07-31)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.1...2.11.2)

**Fixed bugs:**

- Ftrack to Avalon bug [\#406](https://github.com/pypeclub/pype/issues/406)

## [2.11.1](https://github.com/pypeclub/pype/tree/2.11.1) (2020-07-29)

[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.0...2.11.1)

**Merged pull requests:**

- Celaction: metadata json folder fixes on path [\#393](https://github.com/pypeclub/pype/pull/393)
- CelAction - version up method taken from pype.lib [\#391](https://github.com/pypeclub/pype/pull/391)

<a name="2.11.0"></a>
|
||||
## 2.11.0 ##
|
||||
|
||||
_**release date:** 27 July 2020_
|
||||
|
||||
**new:**
|
||||
- _(blender)_ namespace support [\#341](https://github.com/pypeclub/pype/pull/341)
|
||||
- _(blender)_ start end frames [\#330](https://github.com/pypeclub/pype/pull/330)
|
||||
- _(blender)_ camera asset [\#322](https://github.com/pypeclub/pype/pull/322)
|
||||
- _(pype)_ toggle instances per family in pyblish GUI [\#320](https://github.com/pypeclub/pype/pull/320)
|
||||
- _(pype)_ current release version is now shown in the tray menu [#379](https://github.com/pypeclub/pype/pull/379)
|
||||
|
||||
|
||||
**improved:**
|
||||
- _(resolve)_ tagging for publish [\#239](https://github.com/pypeclub/pype/issues/239)
|
||||
- _(pype)_ Support publishing a subset of shots with standalone editorial [\#336](https://github.com/pypeclub/pype/pull/336)
|
||||
- _(harmony)_ Basic support for palettes [\#324](https://github.com/pypeclub/pype/pull/324)
|
||||
- _(photoshop)_ Flag outdated containers on startup and publish. [\#309](https://github.com/pypeclub/pype/pull/309)
|
||||
- _(harmony)_ Flag Outdated containers [\#302](https://github.com/pypeclub/pype/pull/302)
|
||||
- _(photoshop)_ Publish review [\#298](https://github.com/pypeclub/pype/pull/298)
|
||||
- _(pype)_ Optional Last workfile launch [\#365](https://github.com/pypeclub/pype/pull/365)
|
||||
|
||||
|
||||
**fixed:**
|
||||
- _(premiere)_ workflow fixes [\#346](https://github.com/pypeclub/pype/pull/346)
|
||||
- _(pype)_ pype-setup does not work with space in path [\#327](https://github.com/pypeclub/pype/issues/327)
|
||||
- _(ftrack)_ Ftrack delete action cause circular error [\#206](https://github.com/pypeclub/pype/issues/206)
|
||||
- _(nuke)_ Priority was forced to 50 [\#345](https://github.com/pypeclub/pype/pull/345)
|
||||
- _(nuke)_ Fix ValidateNukeWriteKnobs [\#340](https://github.com/pypeclub/pype/pull/340)
|
||||
- _(maya)_ If camera attributes are connected, we can ignore them. [\#339](https://github.com/pypeclub/pype/pull/339)
|
||||
- _(pype)_ stop appending of tools environment to existing env [\#337](https://github.com/pypeclub/pype/pull/337)
|
||||
- _(ftrack)_ Ftrack timeout needs to look at AVALON\_TIMEOUT [\#325](https://github.com/pypeclub/pype/pull/325)
|
||||
- _(harmony)_ Only zip files are supported. [\#310](https://github.com/pypeclub/pype/pull/310)
|
||||
- _(pype)_ hotfix/Fix event server mongo uri [\#305](https://github.com/pypeclub/pype/pull/305)
|
||||
- _(photoshop)_ Subset was not named or validated correctly. [\#304](https://github.com/pypeclub/pype/pull/304)
|
||||
|
||||
|
||||
|
||||
<a name="2.10.0"></a>
|
||||
## 2.10.0 ##
|
||||
|
||||
_**release date:** 17 June 2020_
|
||||
|
||||
**new:**
|
||||
- _(harmony)_ **Toon Boom Harmony** has been greatly extended to support rigging, scene build, animation and rendering workflows. [#270](https://github.com/pypeclub/pype/issues/270) [#271](https://github.com/pypeclub/pype/issues/271) [#190](https://github.com/pypeclub/pype/issues/190) [#191](https://github.com/pypeclub/pype/issues/191) [#172](https://github.com/pypeclub/pype/issues/172) [#168](https://github.com/pypeclub/pype/issues/168)
|
||||
- _(pype)_ Added support for rudimentary **edl publishing** into individual shots. [#265](https://github.com/pypeclub/pype/issues/265)
|
||||
- _(celaction)_ Simple **Celaction** integration has been added with support for workfiles and rendering. [#255](https://github.com/pypeclub/pype/issues/255)
|
||||
- _(maya)_ Support for multiple job types when submitting to the farm. We can now render Maya or Standalone render jobs for Vray and Arnold (limited support for arnold) [#204](https://github.com/pypeclub/pype/issues/204)
|
||||
- _(photoshop)_ Added initial support for Photoshop [#232](https://github.com/pypeclub/pype/issues/232)
|
||||
|
||||
**improved:**
|
||||
- _(blender)_ Updated support for rigs and added support Layout family [#233](https://github.com/pypeclub/pype/issues/233) [#226](https://github.com/pypeclub/pype/issues/226)
|
||||
- _(premiere)_ It is now possible to choose different storage root for workfiles of different task types. [#255](https://github.com/pypeclub/pype/issues/255)
|
||||
- _(maya)_ Support for unmerged AOVs in Redshift multipart EXRs [#197](https://github.com/pypeclub/pype/issues/197)
|
||||
- _(pype)_ Pype repository has been refactored in preparation for 3.0 release [#169](https://github.com/pypeclub/pype/issues/169)
|
||||
- _(deadline)_ All file dependencies are now passed to deadline from maya to prevent premature start of rendering if caches or textures haven't been coppied over yet. [#195](https://github.com/pypeclub/pype/issues/195)
|
||||
- _(nuke)_ Script validation can now be made optional. [#194](https://github.com/pypeclub/pype/issues/194)
|
||||
- _(pype)_ Publishing can now be stopped at any time. [#194](https://github.com/pypeclub/pype/issues/194)
|
||||
|
||||
**fix:**
|
||||
- _(pype)_ Pyblish-lite has been integrated into pype repository, plus various publishing GUI fixes. [#274](https://github.com/pypeclub/pype/issues/274) [#275](https://github.com/pypeclub/pype/issues/275) [#268](https://github.com/pypeclub/pype/issues/268) [#227](https://github.com/pypeclub/pype/issues/227) [#238](https://github.com/pypeclub/pype/issues/238)
|
||||
- _(maya)_ Alembic extractor was getting wrong frame range type in certain scenarios [#254](https://github.com/pypeclub/pype/issues/254)
|
||||
- _(maya)_ Attaching a render to subset in maya was not passing validation in certain scenarios [#256](https://github.com/pypeclub/pype/issues/256)
|
||||
- _(ftrack)_ Various small fixes to ftrack sync [#263](https://github.com/pypeclub/pype/issues/263) [#259](https://github.com/pypeclub/pype/issues/259)
|
||||
- _(maya)_ Look extraction is now able to skp invalid connections in shaders [#207](https://github.com/pypeclub/pype/issues/207)
|
||||
|
||||
|
||||
|
||||
<a name="2.9.0"></a>
|
||||
## 2.9.0 ##
|
||||
|
||||
_**release date:** 25 May 2020_
|
||||
|
||||
**new:**
|
||||
- _(pype)_ Support for **Multiroot projects**. You can now store project data on multiple physical or virtual storages and target individual publishes to these locations. For instance render can be stored on a faster storage than the rest of the project. [#145](https://github.com/pypeclub/pype/issues/145), [#38](https://github.com/pypeclub/pype/issues/38)
|
||||
- _(harmony)_ Basic implementation of **Toon Boom Harmony** has been added. [#142](https://github.com/pypeclub/pype/issues/142)
|
||||
- _(pype)_ OSX support is in public beta now. There are issues to be expected, but the main implementation should be functional. [#141](https://github.com/pypeclub/pype/issues/141)
|
||||
|
||||
|
||||
**improved:**
|
||||
|
||||
- _(pype)_ **Review extractor** has been completely rebuilt. It now supports granular filtering so you can create **multiple outputs** for different tasks, families or hosts. [#103](https://github.com/pypeclub/pype/issues/103), [#166](https://github.com/pypeclub/pype/issues/166), [#165](https://github.com/pypeclub/pype/issues/165)
|
||||
- _(pype)_ **Burnin** generation had been extended to **support same multi-output filtering** as review extractor [#103](https://github.com/pypeclub/pype/issues/103)
|
||||
- _(pype)_ Publishing file templates can now be specified in config for each individual family [#114](https://github.com/pypeclub/pype/issues/114)
|
||||
- _(pype)_ Studio specific plugins can now be appended to pype standard publishing plugins. [#112](https://github.com/pypeclub/pype/issues/112)
|
||||
- _(nukestudio)_ Reviewable clips no longer need to be previously cut, exported and re-imported to timeline. **Pype can now dynamically cut reviewable quicktimes** from continuous offline footage during publishing. [#23](https://github.com/pypeclub/pype/issues/23)
|
||||
- _(deadline)_ Deadline can now correctly differentiate between staging and production pype. [#154](https://github.com/pypeclub/pype/issues/154)
|
||||
- _(deadline)_ `PYPE_PYTHON_EXE` env variable can now be used to direct publishing to explicit python installation. [#120](https://github.com/pypeclub/pype/issues/120)
|
||||
- _(nuke)_ Nuke now check for new version of loaded data on file open. [#140](https://github.com/pypeclub/pype/issues/140)
|
||||
- _(nuke)_ frame range and limit checkboxes are now exposed on write node. [#119](https://github.com/pypeclub/pype/issues/119)
|
||||
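A minimal sketch of how a farm-side wrapper might honour `PYPE_PYTHON_EXE`; this is not pype's actual deadline plugin code, and the fallback to the current interpreter plus the demo hand-off are illustrative assumptions:

```python
import os
import subprocess
import sys

def resolve_publish_python():
    """Return the python executable publishing should run under."""
    # PYPE_PYTHON_EXE is the documented override; everything else here is assumed.
    explicit = os.environ.get("PYPE_PYTHON_EXE")
    if explicit and os.path.exists(explicit):
        return explicit
    # Illustrative fallback: the interpreter currently running this script.
    return sys.executable

if __name__ == "__main__":
    python_exe = resolve_publish_python()
    # Hypothetical hand-off to a publish process under the chosen interpreter.
    subprocess.run([python_exe, "-c", "import sys; print('publishing with', sys.executable)"])
```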
**fix:**

- _(nukestudio)_ Project Location was using backslashes, which was breaking nukestudio native exporting in certain configurations [#82](https://github.com/pypeclub/pype/issues/82)
- _(nukestudio)_ Duplicity in hierarchy tags was prone to throwing a publishing error [#130](https://github.com/pypeclub/pype/issues/130), [#144](https://github.com/pypeclub/pype/issues/144)
- _(ftrack)_ multiple stability improvements [#157](https://github.com/pypeclub/pype/issues/157), [#159](https://github.com/pypeclub/pype/issues/159), [#128](https://github.com/pypeclub/pype/issues/128), [#118](https://github.com/pypeclub/pype/issues/118), [#127](https://github.com/pypeclub/pype/issues/127)
- _(deadline)_ multipart EXRs were stopping review publishing on the farm. They are still not supported for automatic review generation, but the publish will go through correctly without the quicktime. [#155](https://github.com/pypeclub/pype/issues/155)
- _(deadline)_ If deadline is non-responsive it will no longer freeze the host when publishing [#149](https://github.com/pypeclub/pype/issues/149)
- _(deadline)_ Sometimes deadline was trying to launch a render before all the source data was copied over. [#137](https://github.com/pypeclub/pype/issues/137)
- _(nuke)_ Filepath knob wasn't updated properly. [#131](https://github.com/pypeclub/pype/issues/131)
- _(maya)_ When extracting animation, the "Write Color Set" options on the instance were not respected. [#108](https://github.com/pypeclub/pype/issues/108)
- _(maya)_ Attribute overrides for AOVs only worked for the legacy render layers. Now it works for the new render setup as well [#132](https://github.com/pypeclub/pype/issues/132)
- _(maya)_ Stability and usability improvements in the yeti workflow [#104](https://github.com/pypeclub/pype/issues/104)

<a name="2.8.0"></a>
|
||||
## 2.8.0 ##
|
||||
|
||||
_**release date:** 20 April 2020_
|
||||
|
||||
**new:**
|
||||
|
||||
- _(pype)_ Option to generate slates from json templates. [PYPE-628] [#26](https://github.com/pypeclub/pype/issues/26)
|
||||
- _(pype)_ It is now possible to automate loading of published subsets into any scene. Documentation will follow :). [PYPE-611] [#24](https://github.com/pypeclub/pype/issues/24)
|
||||
|
||||
**fix:**
|
||||
|
||||
- _(maya)_ Some Redshift render tokens could break publishing. [PYPE-778] [#33](https://github.com/pypeclub/pype/issues/33)
|
||||
- _(maya)_ Publish was not preserving maya file extension. [#39](https://github.com/pypeclub/pype/issues/39)
|
||||
- _(maya)_ Rig output validator was failing on nodes without shapes. [#40](https://github.com/pypeclub/pype/issues/40)
|
||||
- _(maya)_ Yeti caches can now be properly versioned up in the scene inventory. [#40](https://github.com/pypeclub/pype/issues/40)
|
||||
- _(nuke)_ Build first workfiles was not accepting jpeg sequences. [#34](https://github.com/pypeclub/pype/issues/34)
|
||||
- _(deadline)_ Trying to generate ffmpeg review from multipart EXRs no longer crashes publishing. [PYPE-781]
|
||||
- _(deadline)_ Render publishing is more stable in multiplatform environments. [PYPE-775]
|
||||
|
||||
|
||||
|
||||
<a name="2.7.0"></a>
|
||||
## 2.7.0 ##
|
||||
|
||||
_**release date:** 30 March 2020_
|
||||
|
||||
**new:**
|
||||
|
||||
- _(maya)_ Artist can now choose to load multiple references of the same subset at once [PYPE-646, PYPS-81]
|
||||
- _(nuke)_ Option to use named OCIO colorspaces for review colour baking. [PYPS-82]
|
||||
- _(pype)_ Pype can now work with `master` versions for publishing and loading. These are non-versioned publishes that are overwritten with the latest version during publish. These are now supported in all the GUIs, but their publishing is deactivated by default. [PYPE-653]
|
||||
- _(blender)_ Added support for basic blender workflow. We currently support `rig`, `model` and `animation` families. [PYPE-768]
|
||||
- _(pype)_ Source timecode can now be used in burn-ins. [PYPE-777]
|
||||
- _(pype)_ Review outputs profiles can now specify delivery resolution different than project setting [PYPE-759]
|
||||
- _(nuke)_ Bookmark to current context is now added automatically to all nuke browser windows. [PYPE-712]
|
||||
|
||||
**change:**
|
||||
|
||||
- _(maya)_ It is now possible to publish camera without. baking. Keep in mind that unbaked cameras can't be guaranteed to work in other hosts. [PYPE-595]
|
||||
- _(maya)_ All the renders from maya are now grouped in the loader by their Layer name. [PYPE-482]
|
||||
- _(nuke/hiero)_ Any publishes from nuke and hiero can now be versioned independently of the workfile. [PYPE-728]
|
||||
|
||||
|
||||
**fix:**
|
||||
|
||||
- _(nuke)_ Mixed slashes caused issues in ocio config path.
|
||||
- _(pype)_ Intent field in pyblish GUI was passing label instead of value to ftrack. [PYPE-733]
|
||||
- _(nuke)_ Publishing of pre-renders was inconsistent. [PYPE-766]
|
||||
- _(maya)_ Handles and frame ranges were inconsistent in various places during publishing.
|
||||
- _(nuke)_ Nuke was crashing if it ran into certain missing knobs. For example DPX output missing `autocrop` [PYPE-774]
|
||||
- _(deadline)_ Project overrides were not working properly with farm render publishing.
|
||||
- _(hiero)_ Problems with single frame plates publishing.
|
||||
- _(maya)_ Redshift RenderPass token were breaking render publishing. [PYPE-778]
|
||||
- _(nuke)_ Build first workfile was not accepting jpeg sequences.
|
||||
- _(maya)_ Multipart (Multilayer) EXRs were breaking review publishing due to FFMPEG incompatiblity [PYPE-781]
|
||||
|
||||
|
||||
<a name="2.6.0"></a>
|
||||
## 2.6.0 ##
|
||||
|
||||
_**release date:** 9 March 2020_
|
||||
|
||||
**change:**
|
||||
- _(maya)_ render publishing has been simplified and made more robust. Render setup layers are now automatically added to publishing subsets and `render globals` family has been replaced with simple `render` [PYPE-570]
|
||||
- _(avalon)_ change context and workfiles apps, have been merged into one, that allows both actions to be performed at the same time. [PYPE-747]
|
||||
- _(pype)_ thumbnails are now automatically propagate to asset from the last published subset in the loader
|
||||
- _(ftrack)_ publishing comment and intent are now being published to ftrack note as well as describtion. [PYPE-727]
|
||||
- _(pype)_ when overriding existing version new old representations are now overriden, instead of the new ones just being appended. (to allow this behaviour, the version validator need to be disabled. [PYPE-690])
|
||||
- _(pype)_ burnin preset has been significantly simplified. It now doesn't require passing function to each field, but only need the actual text template. to use this, all the current burnin PRESETS MUST BE UPDATED for all the projects.
|
||||
- _(ftrack)_ credentials are now stored on a per server basis, so it's possible to switch between ftrack servers without having to log in and out. [PYPE-723]
|
||||
|
||||
|
||||
**new:**
|
||||
- _(pype)_ production and development deployments now have different colour of the tray icon. Orange for Dev and Green for production [PYPE-718]
|
||||
- _(maya)_ renders can now be attached to a publishable subset rather than creating their own subset. For example it is possible to create a reviewable `look` or `model` render and have it correctly attached as a representation of the subsets [PYPE-451]
|
||||
- _(maya)_ after saving current scene into a new context (as a new shot for instance), all the scene publishing subsets data gets re-generated automatically to match the new context [PYPE-532]
|
||||
- _(pype)_ we now support project specific publish, load and create plugins [PYPE-740]
|
||||
- _(ftrack)_ new action that allow archiving/deleting old published versions. User can keep how many of the latest version to keep when the action is ran. [PYPE-748, PYPE-715]
|
||||
- _(ftrack)_ it is now possible to monitor and restart ftrack event server using ftrack action. [PYPE-658]
|
||||
- _(pype)_ validator that prevent accidental overwrites of previously published versions. [PYPE-680]
|
||||
- _(avalon)_ avalon core updated to version 5.6.0
|
||||
- _(maya)_ added validator to make sure that relative paths are used when publishing arnold standins.
|
||||
- _(nukestudio)_ it is now possible to extract and publish audio family from clip in nuke studio [PYPE-682]
|
||||
|
||||
**fix**:
|
||||
- _(maya)_ maya set framerange button was ignoring handles [PYPE-719]
|
||||
- _(ftrack)_ sync to avalon was sometime crashing when ran on empty project
|
||||
- _(nukestudio)_ publishing same shots after they've been previously archived/deleted would result in a crash. [PYPE-737]
|
||||
- _(nuke)_ slate workflow was breaking in certain scenarios. [PYPE-730]
|
||||
- _(pype)_ rendering publish workflow has been significantly improved to prevent error resulting from implicit render collection. [PYPE-665, PYPE-746]
|
||||
- _(pype)_ launching application on a non-synced project resulted in obscure [PYPE-528]
|
||||
- _(pype)_ missing keys in burnins no longer result in an error. [PYPE-706]
|
||||
- _(ftrack)_ create folder structure action was sometimes failing for project managers due to wrong permissions.
|
||||
- _(Nukestudio)_ using `source` in the start frame tag could result in wrong frame range calculation
|
||||
- _(ftrack)_ sync to avalon action and event have been improved by catching more edge cases and provessing them properly.
|
||||
|
||||
|
||||
<a name="2.5"></a>
|
||||
## 2.5.0 ##
|
||||
|
||||
_**release date:** 11 Feb 2020_
|
||||
|
||||
**change:**
|
||||
- _(pype)_ added many logs for easier debugging
|
||||
- _(pype)_ review presets can now be separated between 2d and 3d renders [PYPE-693]
|
||||
- _(pype)_ anatomy module has been greatly improved to allow for more dynamic pulblishing and faster debugging [PYPE-685]
|
||||
- _(pype)_ avalon schemas have been moved from `pype-config` to `pype` repository, for simplification. [PYPE-670]
|
||||
- _(ftrack)_ updated to latest ftrack API
|
||||
- _(ftrack)_ publishing comments now appear in ftrack also as a note on version with customisable category [PYPE-645]
|
||||
- _(ftrack)_ delete asset/subset action had been improved. It is now able to remove multiple entities and descendants of the selected entities [PYPE-361, PYPS-72]
|
||||
- _(workfiles)_ added date field to workfiles app [PYPE-603]
|
||||
- _(maya)_ old deprecated loader have been removed in favour of a single unified reference loader (old scenes will upgrade automatically to the new loader upon opening) [PYPE-633, PYPE-697]
|
||||
- _(avalon)_ core updated to 5.5.15 [PYPE-671]
|
||||
- _(nuke)_ library loader is now available in nuke [PYPE-698]
|
||||
|
||||
|
||||
**new:**
|
||||
- _(pype)_ added pype render wrapper to allow rendering on mixed platform farms. [PYPE-634]
|
||||
- _(pype)_ added `pype launch` command. It let's admin run applications with dynamically built environment based on the given context. [PYPE-634]
|
||||
- _(pype)_ added support for extracting review sequences with burnins [PYPE-657]
|
||||
- _(publish)_ users can now set intent next to a comment when publishing. This will then be reflected on an attribute in ftrack. [PYPE-632]
|
||||
- _(burnin)_ timecode can now be added to burnin
|
||||
- _(burnin)_ datetime keys can now be added to burnin and anatomy [PYPE-651]
|
||||
- _(burnin)_ anatomy templates can now be used in burnins. [PYPE=626]
|
||||
- _(nuke)_ new validator for render resolution
|
||||
- _(nuke)_ support for attach slate to nuke renders [PYPE-630]
|
||||
- _(nuke)_ png sequences were added to loaders
|
||||
- _(maya)_ added maya 2020 compatibility [PYPE-677]
|
||||
- _(maya)_ ability to publish and load .ASS standin sequences [PYPS-54]
|
||||
- _(pype)_ thumbnails can now be published and are visible in the loader. `AVALON_THUMBNAIL_ROOT` environment variable needs to be set for this to work [PYPE-573, PYPE-132]
|
||||
- _(blender)_ base implementation of blender was added with publishing and loading of .blend files [PYPE-612]
|
||||
- _(ftrack)_ new action for preparing deliveries [PYPE-639]
|
||||
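A minimal sketch of what the `AVALON_THUMBNAIL_ROOT` requirement above amounts to; this is not avalon's actual integration code, and the directory layout and file naming are illustrative assumptions:

```python
import os

def thumbnail_destination(project, asset, subset, version):
    """Build a destination path for a published thumbnail (layout assumed)."""
    root = os.environ.get("AVALON_THUMBNAIL_ROOT")
    if not root:
        # Without the variable set, thumbnail publishing stays disabled.
        raise RuntimeError("AVALON_THUMBNAIL_ROOT is not set")
    return os.path.join(root, project, asset, "{}_v{:03d}.jpg".format(subset, version))

os.environ.setdefault("AVALON_THUMBNAIL_ROOT", "/tmp/thumbnails")  # demo value only
print(thumbnail_destination("demo_project", "sh010", "review", 3))
```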
**fix**:
- _(burnin)_ more robust way of finding ffmpeg for burnins.
- _(pype)_ improved UNC path remapping when sending to the farm.
- _(pype)_ float frames sometimes made their way into the representation context in the database, breaking loaders [PYPE-668]
- _(pype)_ `pype install --force` was failing sometimes [PYPE-600]
- _(pype)_ padding in published files was sometimes calculated wrongly. It is now always read from the project anatomy instead. [PYPE-667]
- _(publish)_ comment publishing was failing in certain situations
- _(ftrack)_ multiple edge case scenario fixes in auto sync and the sync-to-avalon action
- _(ftrack)_ sync to avalon now works on empty projects
- _(ftrack)_ thumbnail update event was failing when deleting entities [PYPE-561]
- _(nuke)_ loader applies proper colorspaces from presets
- _(nuke)_ publishing handles didn't always work correctly [PYPE-686]
- _(maya)_ assembly publishing and loading weren't working correctly

<a name="2.4.0"></a>
|
||||
## 2.4.0 ##
|
||||
|
||||
_**release date:** 9 Dec 2019_
|
||||
|
||||
**change:**
|
||||
- _(ftrack)_ version to status ftrack event can now be configured from Presets
|
||||
- based on preset `presets/ftracc/ftrack_config.json["status_version_to_task"]`
|
||||
- _(ftrack)_ sync to avalon event has been completely re-written. It now supports most of the project management situations on ftrack including moving, renaming and deleting entities, updating attributes and working with tasks.
|
||||
- _(ftrack)_ sync to avalon action has been also re-writen. It is now much faster (up to 100 times depending on a project structure), has much better logging and reporting on encountered problems, and is able to handle much more complex situations.
|
||||
- _(ftrack)_ sync to avalon trigger by checking `auto-sync` toggle on ftrack [PYPE-504]
|
||||
- _(pype)_ various new features in the REST api
|
||||
- _(pype)_ new visual identity used across pype
|
||||
- _(pype)_ started moving all requirements to pip installation rather than vendorising them in pype repository. Due to a few yet unreleased packages, this means that pype can temporarily be only installed in the offline mode.
|
||||
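A hypothetical illustration of the `status_version_to_task` preset the event consumes; the real schema lives in the project's `ftrack_config.json` and may differ, so both the keys and the values below are assumptions:

```python
# Assumed shape: map a version's new status to the status to set on its task.
status_version_to_task = {
    "Render Done": "Pending Review",
    "Approved": "Approved",
}

def task_status_for(version_status):
    """Return the mapped task status, or None when no rule applies."""
    return status_version_to_task.get(version_status)

print(task_status_for("Approved"))     # Approved
print(task_status_for("In Progress"))  # None -> leave the task untouched
```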
**new:**
- _(nuke)_ support for publishing gizmos and loading them as viewer processes
- _(nuke)_ support for publishing nuke nodes from backdrops and loading them back
- _(pype)_ burnins can now work with start and end frames as keys
  - use the keys `{frame_start}`, `{frame_end}` and `{current_frame}` in the burnin preset to use them (see the sketch after this list). [PYPS-44, PYPS-73, PYPE-602]
- _(pype)_ option to filter logs by user and level in the logging GUI
- _(pype)_ image family added to standalone publisher [PYPE-574]
- _(pype)_ matchmove family added to standalone publisher [PYPE-574]
- _(nuke)_ validator for comparing arbitrary knobs with values from presets
- _(maya)_ option to force maya to copy textures in the new look publish rather than hardlinking them
- _(pype)_ comments from pyblish GUI are now being added to the ftrack version
- _(maya)_ validator for checking outdated containers in the scene
- _(maya)_ option to publish and load arnold standin sequences [PYPE-579, PYPS-54]
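A minimal sketch of how a burnin text template would use those keys; the template string itself is made up for the example, and real templates live in the burnin preset:

```python
# The documented keys are {frame_start}, {frame_end} and {current_frame};
# everything else in this template is an illustrative choice.
burnin_template = "{frame_start}-{frame_end} | frame {current_frame}"

data = {"frame_start": 1001, "frame_end": 1100, "current_frame": 1042}
print(burnin_template.format(**data))  # -> 1001-1100 | frame 1042
```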
**fix**:
- _(pype)_ burnins were not respecting the codec of the input video
- _(nuke)_ lots of various nuke and nuke studio fixes across the board [PYPS-45]
- _(pype)_ the workfiles app no longer launches at host startup by default [PYPE-569]
- _(ftrack)_ ftrack integration during publishing was failing under certain situations [PYPS-66]
- _(pype)_ minor fixes in the REST api
- _(ftrack)_ status change event was crashing when the target status was missing [PYPS-68]
- _(ftrack)_ actions will try to reconnect if they fail for some reason
- _(maya)_ problems with fps mapping when using float FPS values
- _(deadline)_ overall improvements to deadline publishing
- _(setup)_ environment variables are now remapped on the fly based on the platform pype is running on. This fixes many issues in mixed platform environments.

<a name="2.3.6"></a>
|
||||
## 2.3.6 #
|
||||
|
||||
_**release date:** 27 Nov 2019_
|
||||
|
||||
**hotfix**:
|
||||
- _(ftrack)_ was hiding important debug logo
|
||||
- _(nuke)_ crashes during workfile publishing
|
||||
- _(ftrack)_ event server crashes because of signal problems
|
||||
- _(muster)_ problems with muster render submissions
|
||||
- _(ftrack)_ thumbnail update event syntax errors
|
||||
|
||||
|
||||
## 2.3.0 ##
_release date: 6 Oct 2019_

**new**:
- _(maya)_ support for yeti rigs and yeti caches
- _(maya)_ validator for comparing arbitrary attributes against ftrack
- _(pype)_ burnins can now show the current date and time
- _(muster)_ pools can now be set in render globals in maya
- _(pype)_ the REST API has been implemented in beta stage
- _(nuke)_ LUT loader has been added
- _(pype)_ a rudimentary user module has been added as preparation for user management
- _(pype)_ a simple logging GUI has been added to the pype tray
- _(nuke)_ nuke can now bake the input process into a mov
- _(maya)_ imported models now have a selection handle displayed by default
- _(avalon)_ it is now possible to load multiple assets at once using the loader
- _(maya)_ added the ability to automatically connect a yeti rig to a mesh upon loading

**changed**:
- _(ftrack)_ the event server now runs two parallel processes and is able to keep a queue of events to process.
- _(nuke)_ the task name is now added to all rendered subsets
- _(pype)_ adding more families to the standalone publisher
- _(pype)_ the standalone publisher now uses pyblish-lite
- _(pype)_ the standalone publisher can now create review quicktimes
- _(ftrack)_ queries to ftrack were sped up
- _(ftrack)_ multiple ftrack actions have been deprecated
- _(avalon)_ avalon upstream has been updated to 5.5.0
- _(nukestudio)_ published transforms can now be animated

**fix**:
- _(maya)_ fps popup button didn't work in some cases
- _(maya)_ geometry instances and references in maya were losing shader assignments
- _(muster)_ muster rendering templates were not working correctly
- _(maya)_ arnold tx texture conversion wasn't respecting the colorspace set by the artist
- _(pype)_ problems with avalon db sync
- _(maya)_ ftrack was rounding FPS, making it inconsistent
- _(pype)_ wrong icon names in Creator
- _(maya)_ scene inventory wasn't showing anything if a representation was removed from the database after it had been loaded into the scene
- _(nukestudio)_ multiple bugs squashed
- _(loader)_ the loader was taking a long time to show all the load actions when first launched in maya

## 2.2.0 ##
_release date: 8 Sept 2019_

**new**:
- _(pype)_ added a customisable workflow for creating quicktimes from renders or playblasts
- _(nuke)_ option to choose the deadline chunk size on write nodes
- _(nukestudio)_ added an option to publish soft effects (subTrackItems) from NukeStudio as subsets, including LUT files. These can then be loaded in nuke or NukeStudio
- _(nuke)_ option to build a nuke script from previously published latest versions of plate and render subsets.
- _(nuke)_ nuke writes now have a deadline tab.
- _(ftrack)_ the Prepare Project action can now be used for creating the base folder structure on disk and in ftrack, setting up all the initial project attributes, and it automatically prepares the `pype_project_config` folder for the given project.
- _(clockify)_ added support for time tracking in clockify. This currently works in addition to ftrack time logs, but does not completely replace them.
- _(pype)_ any attributes in Creator and Loader plugins can now be customised using the pype preset system

**changed**:
- _(nukestudio)_ now uses the workio API for workfiles
- _(maya)_ the "FIX FPS" prompt in maya now appears in the middle of the screen
- _(muster)_ can now be configured with custom templates
- _(pype)_ global publishing plugins can now be configured using presets, as well as host specific ones

**fix**:
- wrong version retrieval from path in certain scenarios
- nuke reset resolution wasn't working in certain scenarios

## 2.1.0 ##
_release date: 6 Aug 2019_

A large cleanup release. Most of the changes are under the hood.

**new**:
- _(pype)_ added a customisable workflow for creating quicktimes from renders or playblasts
- _(pype)_ added a configurable option to add burnins to any generated quicktimes
- _(ftrack)_ action that identifies which machines pype is running on.
- _(system)_ unified subprocess calls
- _(maya)_ added audio to review quicktimes
- _(nuke)_ added a crop before the write node to prevent overscan problems in ffmpeg
- **Nuke Studio** publishing and workfiles support
- **Muster** render manager support
- _(nuke)_ framerange, FPS and resolution are set automatically at startup
- _(maya)_ ability to load published sequences as image planes
- _(system)_ ftrack event that sets asset folder permissions based on task assignees in ftrack.
- _(maya)_ pyblish plugin that allows validation of maya attributes
- _(system)_ added better startup logging to tray debug, including basic connection information
- _(avalon)_ option to group published subsets into groups in the loader
- _(avalon)_ loader family filters are working now

**changed**:
- changed multiple key attributes to unify their behaviour across the pipeline (see the remap sketch after this list)
  - `frameRate` to `fps`
  - `startFrame` to `frameStart`
  - `endFrame` to `frameEnd`
  - `fstart` to `frameStart`
  - `fend` to `frameEnd`
  - `handle_start` to `handleStart`
  - `handle_end` to `handleEnd`
  - `resolution_width` to `resolutionWidth`
  - `resolution_height` to `resolutionHeight`
  - `pixel_aspect` to `pixelAspect`
- _(nuke)_ write nodes are now created inside a group, with only some attributes editable by the artist
- rendered frames are now deleted from their temporary location after publishing is finished.
- _(ftrack)_ the RV action can now be launched from any entity
- after publishing, only the refresh button is now available in the pyblish UI
- added a context instance to pyblish-lite so that the artist knows if a context plugin fails
- _(avalon)_ allow opening selected files using the enter key
- _(avalon)_ core updated to v5.2.9 with our forked changes on top
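A small sketch expressing the renames above as a remap table, handy when migrating data written before 2.1.0; the helper function is illustrative, not part of pype's API:

```python
# Legacy attribute name -> unified name introduced in 2.1.0.
KEY_RENAMES = {
    "frameRate": "fps",
    "startFrame": "frameStart",
    "endFrame": "frameEnd",
    "fstart": "frameStart",
    "fend": "frameEnd",
    "handle_start": "handleStart",
    "handle_end": "handleEnd",
    "resolution_width": "resolutionWidth",
    "resolution_height": "resolutionHeight",
    "pixel_aspect": "pixelAspect",
}

def remap_keys(data):
    """Return a copy of ``data`` with legacy keys renamed to the new ones."""
    return {KEY_RENAMES.get(key, key): value for key, value in data.items()}

print(remap_keys({"startFrame": 1001, "endFrame": 1100, "frameRate": 25.0}))
# -> {'frameStart': 1001, 'frameEnd': 1100, 'fps': 25.0}
```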
**fix**:
- faster hierarchy retrieval from the db
- _(nuke)_ a lot of stability enhancements
- _(nuke studio)_ a lot of stability enhancements
- _(nuke)_ now only renders a single write node on the farm
- _(ftrack)_ pype would crash when launching a project level task
- the work directory was sometimes not being created correctly
- major pype.lib cleanup: removing unused functions, merging those that were doing the same thing, and general house cleaning.
- _(avalon)_ subsets in maya 2019 weren't behaving correctly in the outliner

\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*
||||
432
HISTORY.md
Normal file
432
HISTORY.md
Normal file
|
|
@ -0,0 +1,432 @@
|
|||
<a name="2.11.0"></a>
|
||||
## 2.11.0 ##
|
||||
|
||||
_**release date:** 27 July 2020_
|
||||
|
||||
**new:**
|
||||
- _(blender)_ namespace support [\#341](https://github.com/pypeclub/pype/pull/341)
|
||||
- _(blender)_ start end frames [\#330](https://github.com/pypeclub/pype/pull/330)
|
||||
- _(blender)_ camera asset [\#322](https://github.com/pypeclub/pype/pull/322)
|
||||
- _(pype)_ toggle instances per family in pyblish GUI [\#320](https://github.com/pypeclub/pype/pull/320)
|
||||
- _(pype)_ current release version is now shown in the tray menu [#379](https://github.com/pypeclub/pype/pull/379)
|
||||
|
||||
|
||||
**improved:**
|
||||
- _(resolve)_ tagging for publish [\#239](https://github.com/pypeclub/pype/issues/239)
|
||||
- _(pype)_ Support publishing a subset of shots with standalone editorial [\#336](https://github.com/pypeclub/pype/pull/336)
|
||||
- _(harmony)_ Basic support for palettes [\#324](https://github.com/pypeclub/pype/pull/324)
|
||||
- _(photoshop)_ Flag outdated containers on startup and publish. [\#309](https://github.com/pypeclub/pype/pull/309)
|
||||
- _(harmony)_ Flag Outdated containers [\#302](https://github.com/pypeclub/pype/pull/302)
|
||||
- _(photoshop)_ Publish review [\#298](https://github.com/pypeclub/pype/pull/298)
|
||||
- _(pype)_ Optional Last workfile launch [\#365](https://github.com/pypeclub/pype/pull/365)
|
||||
|
||||
|
||||
**fixed:**
|
||||
- _(premiere)_ workflow fixes [\#346](https://github.com/pypeclub/pype/pull/346)
|
||||
- _(pype)_ pype-setup does not work with space in path [\#327](https://github.com/pypeclub/pype/issues/327)
|
||||
- _(ftrack)_ Ftrack delete action cause circular error [\#206](https://github.com/pypeclub/pype/issues/206)
|
||||
- _(nuke)_ Priority was forced to 50 [\#345](https://github.com/pypeclub/pype/pull/345)
|
||||
- _(nuke)_ Fix ValidateNukeWriteKnobs [\#340](https://github.com/pypeclub/pype/pull/340)
|
||||
- _(maya)_ If camera attributes are connected, we can ignore them. [\#339](https://github.com/pypeclub/pype/pull/339)
|
||||
- _(pype)_ stop appending of tools environment to existing env [\#337](https://github.com/pypeclub/pype/pull/337)
|
||||
- _(ftrack)_ Ftrack timeout needs to look at AVALON\_TIMEOUT [\#325](https://github.com/pypeclub/pype/pull/325)
|
||||
- _(harmony)_ Only zip files are supported. [\#310](https://github.com/pypeclub/pype/pull/310)
|
||||
- _(pype)_ hotfix/Fix event server mongo uri [\#305](https://github.com/pypeclub/pype/pull/305)
|
||||
- _(photoshop)_ Subset was not named or validated correctly. [\#304](https://github.com/pypeclub/pype/pull/304)
|
||||
|
||||
|
||||
|
||||
<a name="2.10.0"></a>
|
||||
## 2.10.0 ##
|
||||
|
||||
_**release date:** 17 June 2020_
|
||||
|
||||
**new:**
|
||||
- _(harmony)_ **Toon Boom Harmony** has been greatly extended to support rigging, scene build, animation and rendering workflows. [#270](https://github.com/pypeclub/pype/issues/270) [#271](https://github.com/pypeclub/pype/issues/271) [#190](https://github.com/pypeclub/pype/issues/190) [#191](https://github.com/pypeclub/pype/issues/191) [#172](https://github.com/pypeclub/pype/issues/172) [#168](https://github.com/pypeclub/pype/issues/168)
|
||||
- _(pype)_ Added support for rudimentary **edl publishing** into individual shots. [#265](https://github.com/pypeclub/pype/issues/265)
|
||||
- _(celaction)_ Simple **Celaction** integration has been added with support for workfiles and rendering. [#255](https://github.com/pypeclub/pype/issues/255)
|
||||
- _(maya)_ Support for multiple job types when submitting to the farm. We can now render Maya or Standalone render jobs for Vray and Arnold (limited support for arnold) [#204](https://github.com/pypeclub/pype/issues/204)
|
||||
- _(photoshop)_ Added initial support for Photoshop [#232](https://github.com/pypeclub/pype/issues/232)
|
||||
|
||||
**improved:**
|
||||
- _(blender)_ Updated support for rigs and added support Layout family [#233](https://github.com/pypeclub/pype/issues/233) [#226](https://github.com/pypeclub/pype/issues/226)
|
||||
- _(premiere)_ It is now possible to choose different storage root for workfiles of different task types. [#255](https://github.com/pypeclub/pype/issues/255)
|
||||
- _(maya)_ Support for unmerged AOVs in Redshift multipart EXRs [#197](https://github.com/pypeclub/pype/issues/197)
|
||||
- _(pype)_ Pype repository has been refactored in preparation for 3.0 release [#169](https://github.com/pypeclub/pype/issues/169)
|
||||
- _(deadline)_ All file dependencies are now passed to deadline from maya to prevent premature start of rendering if caches or textures haven't been coppied over yet. [#195](https://github.com/pypeclub/pype/issues/195)
|
||||
- _(nuke)_ Script validation can now be made optional. [#194](https://github.com/pypeclub/pype/issues/194)
|
||||
- _(pype)_ Publishing can now be stopped at any time. [#194](https://github.com/pypeclub/pype/issues/194)
|
||||
|
||||
**fix:**
|
||||
- _(pype)_ Pyblish-lite has been integrated into pype repository, plus various publishing GUI fixes. [#274](https://github.com/pypeclub/pype/issues/274) [#275](https://github.com/pypeclub/pype/issues/275) [#268](https://github.com/pypeclub/pype/issues/268) [#227](https://github.com/pypeclub/pype/issues/227) [#238](https://github.com/pypeclub/pype/issues/238)
|
||||
- _(maya)_ Alembic extractor was getting wrong frame range type in certain scenarios [#254](https://github.com/pypeclub/pype/issues/254)
|
||||
- _(maya)_ Attaching a render to subset in maya was not passing validation in certain scenarios [#256](https://github.com/pypeclub/pype/issues/256)
|
||||
- _(ftrack)_ Various small fixes to ftrack sync [#263](https://github.com/pypeclub/pype/issues/263) [#259](https://github.com/pypeclub/pype/issues/259)
|
||||
- _(maya)_ Look extraction is now able to skp invalid connections in shaders [#207](https://github.com/pypeclub/pype/issues/207)
|
||||
|
||||
|
||||
|
||||
<a name="2.9.0"></a>
|
||||
## 2.9.0 ##
|
||||
|
||||
_**release date:** 25 May 2020_
|
||||
|
||||
**new:**
|
||||
- _(pype)_ Support for **Multiroot projects**. You can now store project data on multiple physical or virtual storages and target individual publishes to these locations. For instance render can be stored on a faster storage than the rest of the project. [#145](https://github.com/pypeclub/pype/issues/145), [#38](https://github.com/pypeclub/pype/issues/38)
|
||||
- _(harmony)_ Basic implementation of **Toon Boom Harmony** has been added. [#142](https://github.com/pypeclub/pype/issues/142)
|
||||
- _(pype)_ OSX support is in public beta now. There are issues to be expected, but the main implementation should be functional. [#141](https://github.com/pypeclub/pype/issues/141)
|
||||
|
||||
|
||||
**improved:**
|
||||
|
||||
- _(pype)_ **Review extractor** has been completely rebuilt. It now supports granular filtering so you can create **multiple outputs** for different tasks, families or hosts. [#103](https://github.com/pypeclub/pype/issues/103), [#166](https://github.com/pypeclub/pype/issues/166), [#165](https://github.com/pypeclub/pype/issues/165)
|
||||
- _(pype)_ **Burnin** generation had been extended to **support same multi-output filtering** as review extractor [#103](https://github.com/pypeclub/pype/issues/103)
|
||||
- _(pype)_ Publishing file templates can now be specified in config for each individual family [#114](https://github.com/pypeclub/pype/issues/114)
|
||||
- _(pype)_ Studio specific plugins can now be appended to pype standard publishing plugins. [#112](https://github.com/pypeclub/pype/issues/112)
|
||||
- _(nukestudio)_ Reviewable clips no longer need to be previously cut, exported and re-imported to timeline. **Pype can now dynamically cut reviewable quicktimes** from continuous offline footage during publishing. [#23](https://github.com/pypeclub/pype/issues/23)
|
||||
- _(deadline)_ Deadline can now correctly differentiate between staging and production pype. [#154](https://github.com/pypeclub/pype/issues/154)
|
||||
- _(deadline)_ `PYPE_PYTHON_EXE` env variable can now be used to direct publishing to explicit python installation. [#120](https://github.com/pypeclub/pype/issues/120)
|
||||
- _(nuke)_ Nuke now check for new version of loaded data on file open. [#140](https://github.com/pypeclub/pype/issues/140)
|
||||
- _(nuke)_ frame range and limit checkboxes are now exposed on write node. [#119](https://github.com/pypeclub/pype/issues/119)
|
||||
|
||||
|
||||
|
||||
**fix:**
|
||||
|
||||
- _(nukestudio)_ Project Location was using backslashes which was breaking nukestudio native exporting in certains configurations [#82](https://github.com/pypeclub/pype/issues/82)
|
||||
- _(nukestudio)_ Duplicity in hierarchy tags was prone to throwing publishing error [#130](https://github.com/pypeclub/pype/issues/130), [#144](https://github.com/pypeclub/pype/issues/144)
|
||||
- _(ftrack)_ multiple stability improvements [#157](https://github.com/pypeclub/pype/issues/157), [#159](https://github.com/pypeclub/pype/issues/159), [#128](https://github.com/pypeclub/pype/issues/128), [#118](https://github.com/pypeclub/pype/issues/118), [#127](https://github.com/pypeclub/pype/issues/127)
|
||||
- _(deadline)_ multipart EXRs were stopping review publishing on the farm. They are still not supported for automatic review generation, but the publish will go through correctly without the quicktime. [#155](https://github.com/pypeclub/pype/issues/155)
|
||||
- _(deadline)_ If deadline is non-responsive it will no longer freeze host when publishing [#149](https://github.com/pypeclub/pype/issues/149)
|
||||
- _(deadline)_ Sometimes deadline was trying to launch render before all the source data was coppied over. [#137](https://github.com/pypeclub/pype/issues/137) _(harmony)_ Basic implementation of **Toon Boom Harmony** has been added. [#142](https://github.com/pypeclub/pype/issues/142)
|
||||
- _(nuke)_ Filepath knob wasn't updated properly. [#131](https://github.com/pypeclub/pype/issues/131)
|
||||
- _(maya)_ When extracting animation, the "Write Color Set" options on the instance were not respected. [#108](https://github.com/pypeclub/pype/issues/108)
|
||||
- _(maya)_ Attribute overrides for AOV only worked for the legacy render layers. Now it works for new render setup as well [#132](https://github.com/pypeclub/pype/issues/132)
|
||||
- _(maya)_ Stability and usability improvements in yeti workflow [#104](https://github.com/pypeclub/pype/issues/104)
|
||||
|
||||
|
||||
|
||||
<a name="2.8.0"></a>
|
||||
## 2.8.0 ##
|
||||
|
||||
_**release date:** 20 April 2020_
|
||||
|
||||
**new:**
|
||||
|
||||
- _(pype)_ Option to generate slates from json templates. [PYPE-628] [#26](https://github.com/pypeclub/pype/issues/26)
|
||||
- _(pype)_ It is now possible to automate loading of published subsets into any scene. Documentation will follow :). [PYPE-611] [#24](https://github.com/pypeclub/pype/issues/24)
|
||||
|
||||
**fix:**
|
||||
|
||||
- _(maya)_ Some Redshift render tokens could break publishing. [PYPE-778] [#33](https://github.com/pypeclub/pype/issues/33)
|
||||
- _(maya)_ Publish was not preserving maya file extension. [#39](https://github.com/pypeclub/pype/issues/39)
|
||||
- _(maya)_ Rig output validator was failing on nodes without shapes. [#40](https://github.com/pypeclub/pype/issues/40)
|
||||
- _(maya)_ Yeti caches can now be properly versioned up in the scene inventory. [#40](https://github.com/pypeclub/pype/issues/40)
|
||||
- _(nuke)_ Build first workfiles was not accepting jpeg sequences. [#34](https://github.com/pypeclub/pype/issues/34)
|
||||
- _(deadline)_ Trying to generate ffmpeg review from multipart EXRs no longer crashes publishing. [PYPE-781]
|
||||
- _(deadline)_ Render publishing is more stable in multiplatform environments. [PYPE-775]
|
||||
|
||||
|
||||
|
||||
<a name="2.7.0"></a>
|
||||
## 2.7.0 ##
|
||||
|
||||
_**release date:** 30 March 2020_
|
||||
|
||||
**new:**
|
||||
|
||||
- _(maya)_ Artist can now choose to load multiple references of the same subset at once [PYPE-646, PYPS-81]
|
||||
- _(nuke)_ Option to use named OCIO colorspaces for review colour baking. [PYPS-82]
|
||||
- _(pype)_ Pype can now work with `master` versions for publishing and loading. These are non-versioned publishes that are overwritten with the latest version during publish. These are now supported in all the GUIs, but their publishing is deactivated by default. [PYPE-653]
|
||||
- _(blender)_ Added support for basic blender workflow. We currently support `rig`, `model` and `animation` families. [PYPE-768]
|
||||
- _(pype)_ Source timecode can now be used in burn-ins. [PYPE-777]
|
||||
- _(pype)_ Review outputs profiles can now specify delivery resolution different than project setting [PYPE-759]
|
||||
- _(nuke)_ Bookmark to current context is now added automatically to all nuke browser windows. [PYPE-712]
|
||||
|
||||
**change:**
|
||||
|
||||
- _(maya)_ It is now possible to publish camera without. baking. Keep in mind that unbaked cameras can't be guaranteed to work in other hosts. [PYPE-595]
|
||||
- _(maya)_ All the renders from maya are now grouped in the loader by their Layer name. [PYPE-482]
|
||||
- _(nuke/hiero)_ Any publishes from nuke and hiero can now be versioned independently of the workfile. [PYPE-728]
|
||||
|
||||
|
||||
**fix:**
|
||||
|
||||
- _(nuke)_ Mixed slashes caused issues in ocio config path.
|
||||
- _(pype)_ Intent field in pyblish GUI was passing label instead of value to ftrack. [PYPE-733]
|
||||
- _(nuke)_ Publishing of pre-renders was inconsistent. [PYPE-766]
|
||||
- _(maya)_ Handles and frame ranges were inconsistent in various places during publishing.
|
||||
- _(nuke)_ Nuke was crashing if it ran into certain missing knobs. For example DPX output missing `autocrop` [PYPE-774]
|
||||
- _(deadline)_ Project overrides were not working properly with farm render publishing.
|
||||
- _(hiero)_ Problems with single frame plates publishing.
|
||||
- _(maya)_ Redshift RenderPass token were breaking render publishing. [PYPE-778]
|
||||
- _(nuke)_ Build first workfile was not accepting jpeg sequences.
|
||||
- _(maya)_ Multipart (Multilayer) EXRs were breaking review publishing due to FFMPEG incompatiblity [PYPE-781]
|
||||
|
||||
|
||||
<a name="2.6.0"></a>
|
||||
## 2.6.0 ##
|
||||
|
||||
_**release date:** 9 March 2020_
|
||||
|
||||
**change:**
|
||||
- _(maya)_ render publishing has been simplified and made more robust. Render setup layers are now automatically added to publishing subsets and `render globals` family has been replaced with simple `render` [PYPE-570]
|
||||
- _(avalon)_ change context and workfiles apps, have been merged into one, that allows both actions to be performed at the same time. [PYPE-747]
|
||||
- _(pype)_ thumbnails are now automatically propagate to asset from the last published subset in the loader
|
||||
- _(ftrack)_ publishing comment and intent are now being published to ftrack note as well as describtion. [PYPE-727]
|
||||
- _(pype)_ when overriding existing version new old representations are now overriden, instead of the new ones just being appended. (to allow this behaviour, the version validator need to be disabled. [PYPE-690])
|
||||
- _(pype)_ burnin preset has been significantly simplified. It now doesn't require passing function to each field, but only need the actual text template. to use this, all the current burnin PRESETS MUST BE UPDATED for all the projects.
|
||||
- _(ftrack)_ credentials are now stored on a per server basis, so it's possible to switch between ftrack servers without having to log in and out. [PYPE-723]
|
||||
|
||||
|
||||
**new:**
|
||||
- _(pype)_ production and development deployments now have different colour of the tray icon. Orange for Dev and Green for production [PYPE-718]
|
||||
- _(maya)_ renders can now be attached to a publishable subset rather than creating their own subset. For example it is possible to create a reviewable `look` or `model` render and have it correctly attached as a representation of the subsets [PYPE-451]
|
||||
- _(maya)_ after saving current scene into a new context (as a new shot for instance), all the scene publishing subsets data gets re-generated automatically to match the new context [PYPE-532]
|
||||
- _(pype)_ we now support project specific publish, load and create plugins [PYPE-740]
|
||||
- _(ftrack)_ new action that allow archiving/deleting old published versions. User can keep how many of the latest version to keep when the action is ran. [PYPE-748, PYPE-715]
|
||||
- _(ftrack)_ it is now possible to monitor and restart ftrack event server using ftrack action. [PYPE-658]
|
||||
- _(pype)_ validator that prevent accidental overwrites of previously published versions. [PYPE-680]
|
||||
- _(avalon)_ avalon core updated to version 5.6.0
|
||||
- _(maya)_ added validator to make sure that relative paths are used when publishing arnold standins.
|
||||
- _(nukestudio)_ it is now possible to extract and publish audio family from clip in nuke studio [PYPE-682]
|
||||
|
||||
**fix**:
|
||||
- _(maya)_ maya set framerange button was ignoring handles [PYPE-719]
|
||||
- _(ftrack)_ sync to avalon was sometime crashing when ran on empty project
|
||||
- _(nukestudio)_ publishing same shots after they've been previously archived/deleted would result in a crash. [PYPE-737]
|
||||
- _(nuke)_ slate workflow was breaking in certain scenarios. [PYPE-730]
|
||||
- _(pype)_ rendering publish workflow has been significantly improved to prevent error resulting from implicit render collection. [PYPE-665, PYPE-746]
|
||||
- _(pype)_ launching application on a non-synced project resulted in obscure [PYPE-528]
|
||||
- _(pype)_ missing keys in burnins no longer result in an error. [PYPE-706]
|
||||
- _(ftrack)_ create folder structure action was sometimes failing for project managers due to wrong permissions.
|
||||
- _(Nukestudio)_ using `source` in the start frame tag could result in wrong frame range calculation
|
||||
- _(ftrack)_ sync to avalon action and event have been improved by catching more edge cases and provessing them properly.
|
||||
|
||||
|
||||
<a name="2.5"></a>
|
||||
## 2.5.0 ##
|
||||
|
||||
_**release date:** 11 Feb 2020_
|
||||
|
||||
**change:**
|
||||
- _(pype)_ added many logs for easier debugging
|
||||
- _(pype)_ review presets can now be separated between 2d and 3d renders [PYPE-693]
|
||||
- _(pype)_ anatomy module has been greatly improved to allow for more dynamic pulblishing and faster debugging [PYPE-685]
|
||||
- _(pype)_ avalon schemas have been moved from `pype-config` to `pype` repository, for simplification. [PYPE-670]
|
||||
- _(ftrack)_ updated to latest ftrack API
|
||||
- _(ftrack)_ publishing comments now appear in ftrack also as a note on version with customisable category [PYPE-645]
|
||||
- _(ftrack)_ delete asset/subset action had been improved. It is now able to remove multiple entities and descendants of the selected entities [PYPE-361, PYPS-72]
|
||||
- _(workfiles)_ added date field to workfiles app [PYPE-603]
|
||||
- _(maya)_ old deprecated loader have been removed in favour of a single unified reference loader (old scenes will upgrade automatically to the new loader upon opening) [PYPE-633, PYPE-697]
|
||||
- _(avalon)_ core updated to 5.5.15 [PYPE-671]
|
||||
- _(nuke)_ library loader is now available in nuke [PYPE-698]
|
||||
|
||||
|
||||
**new:**
|
||||
- _(pype)_ added pype render wrapper to allow rendering on mixed platform farms. [PYPE-634]
|
||||
- _(pype)_ added `pype launch` command. It let's admin run applications with dynamically built environment based on the given context. [PYPE-634]
|
||||
- _(pype)_ added support for extracting review sequences with burnins [PYPE-657]
|
||||
- _(publish)_ users can now set intent next to a comment when publishing. This will then be reflected on an attribute in ftrack. [PYPE-632]
|
||||
- _(burnin)_ timecode can now be added to burnin
|
||||
- _(burnin)_ datetime keys can now be added to burnin and anatomy [PYPE-651]
|
||||
- _(burnin)_ anatomy templates can now be used in burnins. [PYPE=626]
|
||||
- _(nuke)_ new validator for render resolution
|
||||
- _(nuke)_ support for attach slate to nuke renders [PYPE-630]
|
||||
- _(nuke)_ png sequences were added to loaders
|
||||
- _(maya)_ added maya 2020 compatibility [PYPE-677]
|
||||
- _(maya)_ ability to publish and load .ASS standin sequences [PYPS-54]
|
||||
- _(pype)_ thumbnails can now be published and are visible in the loader. `AVALON_THUMBNAIL_ROOT` environment variable needs to be set for this to work [PYPE-573, PYPE-132]
|
||||
- _(blender)_ base implementation of blender was added with publishing and loading of .blend files [PYPE-612]
|
||||
- _(ftrack)_ new action for preparing deliveries [PYPE-639]
|
||||
|
||||
|
||||
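A minimal sketch of the thumbnail root setup mentioned above; the path is an invented example, only the `AVALON_THUMBNAIL_ROOT` variable name comes from the entry:

```python
import os

# Hypothetical shared location; any path reachable by all workstations works.
os.environ["AVALON_THUMBNAIL_ROOT"] = "/studio/publish/.thumbnails"
```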
**fix**:

- _(burnin)_ More robust way of finding ffmpeg for burnins.
- _(pype)_ Improved UNC paths remapping when sending to the farm.
- _(pype)_ Float frames sometimes made their way into the representation context in the database, breaking loaders. [PYPE-668]
- _(pype)_ `pype install --force` was failing sometimes. [PYPE-600]
- _(pype)_ Padding in published files was sometimes calculated wrongly. It is now always read from the project anatomy instead. [PYPE-667]
- _(publish)_ Comment publishing was failing in certain situations.
- _(ftrack)_ Multiple edge case scenario fixes in auto sync and the sync-to-avalon action.
- _(ftrack)_ Sync to avalon now works on empty projects.
- _(ftrack)_ The thumbnail update event was failing when deleting entities. [PYPE-561]
- _(nuke)_ The loader applies proper colorspaces from presets.
- _(nuke)_ Publishing handles didn't always work correctly. [PYPE-686]
- _(maya)_ Assembly publishing and loading wasn't working correctly.

<a name="2.4.0"></a>
|
||||
## 2.4.0 ##
|
||||
|
||||
_**release date:** 9 Dec 2019_
|
||||
|
||||
**change:**
|
||||
- _(ftrack)_ version to status ftrack event can now be configured from Presets
|
||||
- based on preset `presets/ftracc/ftrack_config.json["status_version_to_task"]`
|
||||
- _(ftrack)_ sync to avalon event has been completely re-written. It now supports most of the project management situations on ftrack including moving, renaming and deleting entities, updating attributes and working with tasks.
|
||||
- _(ftrack)_ sync to avalon action has been also re-writen. It is now much faster (up to 100 times depending on a project structure), has much better logging and reporting on encountered problems, and is able to handle much more complex situations.
|
||||
- _(ftrack)_ sync to avalon trigger by checking `auto-sync` toggle on ftrack [PYPE-504]
|
||||
- _(pype)_ various new features in the REST api
|
||||
- _(pype)_ new visual identity used across pype
|
||||
- _(pype)_ started moving all requirements to pip installation rather than vendorising them in pype repository. Due to a few yet unreleased packages, this means that pype can temporarily be only installed in the offline mode.
|
||||
|
||||
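A hedged sketch of reading the preset named above; the file path and the `status_version_to_task` key come from the entry, while the actual content of that section is studio specific and therefore invented here:

```python
import json

# Load the ftrack preset file referenced above.
with open("presets/ftrack/ftrack_config.json") as f:
    ftrack_config = json.load(f)

# The version-to-status event reads its configuration from this key.
status_version_to_task = ftrack_config.get("status_version_to_task", {})
```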
**new:**

- _(nuke)_ Support for publishing gizmos and loading them as viewer processes.
- _(nuke)_ Support for publishing nuke nodes from backdrops and loading them back.
- _(pype)_ Burnins can now work with start and end frames as keys.
  - use the keys `{frame_start}`, `{frame_end}` and `{current_frame}` in the burnin preset to use them (see the sketch after this list). [PYPS-44, PYPS-73, PYPE-602]
- _(pype)_ Option to filter logs by user and level in the logging GUI.
- _(pype)_ Image family added to the standalone publisher. [PYPE-574]
- _(pype)_ Matchmove family added to the standalone publisher. [PYPE-574]
- _(nuke)_ Validator for comparing arbitrary knobs with values from presets.
- _(maya)_ Option to force maya to copy textures in the new look publish rather than hardlinking them.
- _(pype)_ Comments from the pyblish GUI are now being added to the ftrack version.
- _(maya)_ Validator for checking outdated containers in the scene.
- _(maya)_ Option to publish and load arnold standin sequences. [PYPE-579, PYPS-54]
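A minimal sketch of a burnin preset using these keys; the field position and label layout are invented, only the three key names come from the entry above:

```python
# Hypothetical burnin preset fragment; {frame_start}, {frame_end} and
# {current_frame} are the documented keys, the rest is illustrative.
burnin_preset = {
    "burnins": {
        "bottom_right": "{current_frame} ({frame_start}-{frame_end})"
    }
}
```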
**fix**:

- _(pype)_ Burnins were not respecting the codec of the input video.
- _(nuke)_ Lots of various nuke and nuke studio fixes across the board. [PYPS-45]
- _(pype)_ The workfiles app no longer launches at application start by default. [PYPE-569]
- _(ftrack)_ Ftrack integration during publishing was failing under certain situations. [PYPS-66]
- _(pype)_ Minor fixes in the REST api.
- _(ftrack)_ The status change event was crashing when the target status was missing. [PYPS-68]
- _(ftrack)_ Actions will try to reconnect if they fail for some reason.
- _(maya)_ Problems with fps mapping when using float FPS values.
- _(deadline)_ Overall improvements to deadline publishing.
- _(setup)_ Environment variables are now remapped on the fly based on the platform pype is running on. This fixes many issues in mixed platform environments.

<a name="2.3.6"></a>
|
||||
## 2.3.6 #
|
||||
|
||||
_**release date:** 27 Nov 2019_
|
||||
|
||||
**hotfix**:
|
||||
- _(ftrack)_ was hiding important debug logo
|
||||
- _(nuke)_ crashes during workfile publishing
|
||||
- _(ftrack)_ event server crashes because of signal problems
|
||||
- _(muster)_ problems with muster render submissions
|
||||
- _(ftrack)_ thumbnail update event syntax errors
|
||||
|
||||
|
||||
## 2.3.0 ##

_release date: 6 Oct 2019_

**new**:

- _(maya)_ Support for yeti rigs and yeti caches.
- _(maya)_ Validator for comparing arbitrary attributes against ftrack.
- _(pype)_ Burnins can now show the current date and time.
- _(muster)_ Pools can now be set in render globals in maya.
- _(pype)_ The Rest API has been implemented in beta stage.
- _(nuke)_ A LUT loader has been added.
- _(pype)_ A rudimentary user module has been added as preparation for user management.
- _(pype)_ A simple logging GUI has been added to the pype tray.
- _(nuke)_ Nuke can now bake the input process into mov.
- _(maya)_ Imported models now have a selection handle displayed by default.
- _(avalon)_ It is now possible to load multiple assets at once using the loader.
- _(maya)_ Added the ability to automatically connect a yeti rig to a mesh upon loading.

**changed**:

- _(ftrack)_ The event server now runs two parallel processes and is able to keep a queue of events to process.
- _(nuke)_ The task name is now added to all rendered subsets.
- _(pype)_ Adding more families to the standalone publisher.
- _(pype)_ The standalone publisher now uses pyblish-lite.
- _(pype)_ The standalone publisher can now create review quicktimes.
- _(ftrack)_ Queries to ftrack were sped up.
- _(ftrack)_ Multiple ftrack actions have been deprecated.
- _(avalon)_ Avalon upstream has been updated to 5.5.0.
- _(nukestudio)_ Published transforms can now be animated.

**fix**:

- _(maya)_ The fps popup button didn't work in some cases.
- _(maya)_ Geometry instances and references in maya were losing shader assignments.
- _(muster)_ Muster rendering templates were not working correctly.
- _(maya)_ Arnold tx texture conversion wasn't respecting the colorspace set by the artist.
- _(pype)_ Problems with avalon db sync.
- _(maya)_ Ftrack was rounding FPS, making it inconsistent.
- _(pype)_ Wrong icon names in Creator.
- _(maya)_ The scene inventory wasn't showing anything if a representation was removed from the database after it had been loaded into the scene.
- _(nukestudio)_ Multiple bugs squashed.
- _(loader)_ The loader was taking a long time to show all the loading actions when first launched in maya.

## 2.2.0 ##

_release date: 8 Sept 2019_

**new**:

- _(pype)_ Added a customisable workflow for creating quicktimes from renders or playblasts.
- _(nuke)_ Option to choose the deadline chunk size on write nodes.
- _(nukestudio)_ Added an option to publish soft effects (subTrackItems) from NukeStudio as subsets, including LUT files. These can then be loaded in nuke or NukeStudio.
- _(nuke)_ Option to build a nuke script from previously published latest versions of plate and render subsets.
- _(nuke)_ Nuke writes now have a deadline tab.
- _(ftrack)_ The Prepare Project action can now be used for creating the base folder structure on disk and in ftrack, setting up all the initial project attributes, and it automatically prepares the `pype_project_config` folder for the given project.
- _(clockify)_ Added support for time tracking in clockify. This is currently in addition to ftrack time logs, but does not completely replace them.
- _(pype)_ Any attributes in Creator and Loader plugins can now be customised using the pype preset system.

**changed**:

- Nukestudio now uses the workio API for workfiles.
- _(maya)_ The "FIX FPS" prompt in maya now appears in the middle of the screen.
- _(muster)_ Can now be configured with custom templates.
- _(pype)_ Global publishing plugins can now be configured using presets, as well as host specific ones.

**fix**:

- Wrong version retrieval from path in certain scenarios.
- Nuke reset resolution wasn't working in certain scenarios.

## 2.1.0 ##

_release date: 6 Aug 2019_

A large cleanup release. Most of the changes are under the hood.

**new**:

- _(pype)_ Added a customisable workflow for creating quicktimes from renders or playblasts.
- _(pype)_ Added a configurable option to add burnins to any generated quicktimes.
- _(ftrack)_ Action that identifies which machines pype is running on.
- _(system)_ Unify subprocess calls.
- _(maya)_ Add audio to review quicktimes.
- _(nuke)_ Add a crop before the write node to prevent overscan problems in ffmpeg.
- **Nuke Studio** publishing and workfiles support
- **Muster** render manager support
- _(nuke)_ Framerange, FPS and resolution are set automatically at startup.
- _(maya)_ Ability to load published sequences as image planes.
- _(system)_ Ftrack event that sets asset folder permissions based on task assignees in ftrack.
- _(maya)_ Pyblish plugin that allows validation of maya attributes.
- _(system)_ Added better startup logging to tray debug, including basic connection information.
- _(avalon)_ Option to group published subsets into groups in the loader.
- _(avalon)_ Loader family filters are working now.

**changed**:

- Changed multiple key attributes to unify their behaviour across the pipeline (see the sketch after this list):
  - `frameRate` to `fps`
  - `startFrame` to `frameStart`
  - `endFrame` to `frameEnd`
  - `fstart` to `frameStart`
  - `fend` to `frameEnd`
  - `handle_start` to `handleStart`
  - `handle_end` to `handleEnd`
  - `resolution_width` to `resolutionWidth`
  - `resolution_height` to `resolutionHeight`
  - `pixel_aspect` to `pixelAspect`
- _(nuke)_ Write nodes are now created inside a group, with only some attributes editable by the artist.
- Rendered frames are now deleted from their temporary location after publishing is finished.
- _(ftrack)_ The RV action can now be launched from any entity.
- After publishing, only the refresh button is now available in the pyblish UI.
- Added a context instance to pyblish-lite so that the artist knows if a context plugin fails.
- _(avalon)_ Allow opening selected files using the enter key.
- _(avalon)_ Core updated to v5.2.9 with our forked changes on top.
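A minimal sketch of migrating legacy data to the renamed keys listed above; the `remap_keys` helper and the sample values are invented, only the key pairs come from the list:

```python
# Old key -> new key, as listed above.
KEY_RENAMES = {
    "frameRate": "fps",
    "startFrame": "frameStart",
    "endFrame": "frameEnd",
    "fstart": "frameStart",
    "fend": "frameEnd",
    "handle_start": "handleStart",
    "handle_end": "handleEnd",
    "resolution_width": "resolutionWidth",
    "resolution_height": "resolutionHeight",
    "pixel_aspect": "pixelAspect",
}


def remap_keys(data):
    """Return a copy of ``data`` with legacy keys renamed."""
    return {KEY_RENAMES.get(key, key): value for key, value in data.items()}


print(remap_keys({"startFrame": 1001, "endFrame": 1100}))
# {'frameStart': 1001, 'frameEnd': 1100}
```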
**fix**:

- Faster hierarchy retrieval from db.
- _(nuke)_ A lot of stability enhancements.
- _(nuke studio)_ A lot of stability enhancements.
- _(nuke)_ Now only renders a single write node on the farm.
- _(ftrack)_ Pype would crash when launching a project level task.
- Work directory was sometimes not being created correctly.
- Major pype.lib cleanup: removing unused functions, merging those that were doing the same thing, and general house cleaning.
- _(avalon)_ Subsets in maya 2019 weren't behaving correctly in the outliner.

changelog.md (120 lines deleted)
@@ -1,120 +0,0 @@
(old `# Pype changelog #`, a verbatim duplicate of the 2.3.0, 2.2.0 and 2.1.0 sections above)

@@ -1,3 +1,7 @@
+from .settings import (
+    system_settings,
+    project_settings
+)
 from pypeapp import (
     Logger,
     Anatomy,

@@ -49,6 +53,9 @@ from .lib import (
 from .lib import _subprocess as subprocess

 __all__ = [
+    "system_settings",
+    "project_settings",
+
     "Logger",
     "Anatomy",
     "project_overrides_dir_path",

@@ -46,13 +46,14 @@ class ResolvePrelaunch(PypeHook):
            "`RESOLVE_UTILITY_SCRIPTS_DIR` or reinstall DaVinci Resolve. \n"
            f"RESOLVE_UTILITY_SCRIPTS_DIR: `{us_dir}`"
        )
        self.log.debug(f"-- us_dir: `{us_dir}`")

        # correctly format path for pre python script
        pre_py_sc = os.path.normpath(env.get("PRE_PYTHON_SCRIPT", ""))
        env["PRE_PYTHON_SCRIPT"] = pre_py_sc

        self.log.debug(f"-- pre_py_sc: `{pre_py_sc}`...")
        try:
-           __import__("pype.resolve")
+           __import__("pype.hosts.resolve")
            __import__("pyblish")

        except ImportError as e:

@@ -62,6 +63,7 @@ class ResolvePrelaunch(PypeHook):
        else:
            # Resolve Setup integration
            importlib.reload(utils)
            self.log.debug(f"-- utils.__file__: `{utils.__file__}`")
            utils.setup(env)

        return True

@@ -6,10 +6,13 @@ from avalon.vendor import Qt
 import avalon.tools.sceneinventory
 import pyblish.api
 from pype import lib
+from pype.api import config


 def set_scene_settings(settings):
-    func = """function func(args)
+    signature = harmony.signature("set_scene_settings")
+    func = """function %s(args)
     {
         if (args[0]["fps"])
         {

@@ -18,12 +21,7 @@ def set_scene_settings(settings):
         if (args[0]["frameStart"] && args[0]["frameEnd"])
         {
             var duration = args[0]["frameEnd"] - args[0]["frameStart"] + 1
             if (frame.numberOf() > duration)
             {
                 frame.remove(
                     duration, frame.numberOf() - duration
                 );
             }

             if (frame.numberOf() < duration)
             {
                 frame.insert(

@@ -41,8 +39,8 @@ def set_scene_settings(settings):
             )
         }
     }
-    func
-    """
+    %s
+    """ % (signature, signature)
     harmony.send({"function": func, "args": [settings]})


@@ -54,7 +52,7 @@ def get_asset_settings():
     resolution_width = asset_data.get("resolutionWidth")
     resolution_height = asset_data.get("resolutionHeight")

-    return {
+    scene_data = {
         "fps": fps,
         "frameStart": frame_start,
         "frameEnd": frame_end,

@@ -62,6 +60,18 @@ def get_asset_settings():
         "resolutionHeight": resolution_height
     }

+    try:
+        skip_resolution_check = \
+            config.get_presets()["harmony"]["general"]["skip_resolution_check"]
+    except KeyError:
+        skip_resolution_check = []
+
+    if os.getenv('AVALON_TASK') in skip_resolution_check:
+        scene_data.pop("resolutionWidth")
+        scene_data.pop("resolutionHeight")
+
+    return scene_data


 def ensure_scene_settings():
     settings = get_asset_settings()
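For context, a hedged sketch of the preset shape the new `skip_resolution_check` code expects; the task names are invented, only the `harmony/general/skip_resolution_check` key path comes from the diff above:

```python
# Hypothetical presets fragment matching
# config.get_presets()["harmony"]["general"]["skip_resolution_check"].
presets = {
    "harmony": {
        "general": {
            # Made-up task names; tasks listed here skip the resolution check.
            "skip_resolution_check": ["animation", "ink"]
        }
    }
}
```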

@@ -75,21 +85,20 @@ def ensure_scene_settings():
         valid_settings[key] = value

     # Warn about missing attributes.
-    print("Starting new QApplication..")
-    app = Qt.QtWidgets.QApplication(sys.argv)
-
-    message_box = Qt.QtWidgets.QMessageBox()
-    message_box.setIcon(Qt.QtWidgets.QMessageBox.Warning)
-    msg = "Missing attributes:"
     if invalid_settings:
+        print("Starting new QApplication..")
+        app = Qt.QtWidgets.QApplication.instance()
+        if not app:
+            app = Qt.QtWidgets.QApplication(sys.argv)
+
+        message_box = Qt.QtWidgets.QMessageBox()
+        message_box.setIcon(Qt.QtWidgets.QMessageBox.Warning)
+        msg = "Missing attributes:"
         for item in invalid_settings:
             msg += f"\n{item}"
         message_box.setText(msg)
         message_box.exec_()

-    # Garbage collect QApplication.
-    del app
+        # Garbage collect QApplication.
+        del app

     set_scene_settings(valid_settings)

@@ -112,15 +121,17 @@ def check_inventory():
             outdated_containers.append(container)

     # Colour nodes.
-    func = """function func(args){
+    sig = harmony.signature("set_color")
+    func = """function %s(args){

         for( var i =0; i <= args[0].length - 1; ++i)
         {
             var red_color = new ColorRGBA(255, 0, 0, 255);
             node.setColor(args[0][i], red_color);
         }
     }
-    func
-    """
+    %s
+    """ % (sig, sig)
     outdated_nodes = []
     for container in outdated_containers:
         if container["loader"] == "ImageSequenceLoader":

@@ -149,7 +160,9 @@ def application_launch():


 def export_template(backdrops, nodes, filepath):
-    func = """function func(args)
+    sig = harmony.signature("set_color")
+    func = """function %s(args)
     {

         var temp_node = node.add("Top", "temp_note", "NOTE", 0, 0, 0);

@@ -184,8 +197,8 @@ def export_template(backdrops, nodes, filepath):
         Action.perform("onActionUpToParent()", "Node View");
         node.deleteNode(template_group, true, true);
     }
-    func
-    """
+    %s
+    """ % (sig, sig)
     harmony.send({
         "function": func,
         "args": [

@@ -226,12 +239,17 @@ def install():

 def on_pyblish_instance_toggled(instance, old_value, new_value):
     """Toggle node enabling on instance toggles."""
-    func = """function func(args)
+    sig = harmony.signature("enable_node")
+    func = """function %s(args)
     {
         node.setEnable(args[0], args[1])
     }
-    func
-    """
-    harmony.send(
-        {"function": func, "args": [instance[0], new_value]}
-    )
+    %s
+    """ % (sig, sig)
+    try:
+        harmony.send(
+            {"function": func, "args": [instance[0], new_value]}
+        )
+    except IndexError:
+        print(f"Instance '{instance}' is missing node")
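The recurring change in the hunks above is wrapping each JavaScript snippet in a uniquely named function obtained from `harmony.signature()`. A hedged sketch of that pattern, assuming `harmony.signature` returns a collision-free function name (implied but not shown by the diff) and that the module is importable as `avalon.harmony`:

```python
from avalon import harmony  # import path assumed

# Presumably returns a unique JS function name for this label.
sig = harmony.signature("my_tool")  # "my_tool" is a made-up label

func = """function %s(args)
{
    MessageLog.trace(args[0]);
}
%s
""" % (sig, sig)

# Send the function to Harmony and invoke it with one argument.
harmony.send({"function": func, "args": ["hello"]})
```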
@@ -32,8 +32,19 @@ def deferred():
         command=lambda *args: BuildWorkfile().process()
     )

+    def add_look_assigner_item():
+        import mayalookassigner
+        cmds.menuItem(
+            "Look assigner",
+            parent=pipeline._menu,
+            command=lambda *args: mayalookassigner.show()
+        )
+
     log.info("Attempting to install scripts menu..")

+    add_build_workfiles_item()
+    add_look_assigner_item()
+
     try:
         import scriptsmenu.launchformaya as launchformaya
         import scriptsmenu.scriptsmenu as scriptsmenu

@@ -42,7 +53,6 @@ def deferred():
             "Skipping studio.menu install, because "
             "'scriptsmenu' module seems unavailable."
         )
-        add_build_workfiles_item()
         return

     # load configuration of custom menu

@@ -174,6 +174,25 @@ class ReferenceLoader(api.Loader):

         assert os.path.exists(path), "%s does not exist." % path

+        # Need to save alembic settings and reapply, cause referencing resets
+        # them to incoming data.
+        alembic_attrs = ["speed", "offset", "cycleType"]
+        alembic_data = {}
+        if representation["name"] == "abc":
+            alembic_nodes = cmds.ls(
+                "{}:*".format(members[0].split(":")[0]), type="AlembicNode"
+            )
+            if alembic_nodes:
+                for attr in alembic_attrs:
+                    node_attr = "{}.{}".format(alembic_nodes[0], attr)
+                    alembic_data[attr] = cmds.getAttr(node_attr)
+            else:
+                cmds.warning(
+                    "No alembic nodes found in {}".format(
+                        cmds.ls("{}:*".format(members[0].split(":")[0]))
+                    )
+                )
+
         try:
             content = cmds.file(path,
                                 loadReference=reference_node,

@@ -195,6 +214,16 @@ class ReferenceLoader(api.Loader):

             self.log.warning("Ignoring file read error:\n%s", exc)

+        # Reapply alembic settings.
+        if representation["name"] == "abc":
+            alembic_nodes = cmds.ls(
+                "{}:*".format(members[0].split(":")[0]), type="AlembicNode"
+            )
+            if alembic_nodes:
+                for attr in alembic_attrs:
+                    value = alembic_data[attr]
+                    cmds.setAttr("{}.{}".format(alembic_nodes[0], attr), value)
+
         # Fix PLN-40 for older containers created with Avalon that had the
         # `.verticesOnlySet` set to True.
         if cmds.getAttr("{}.verticesOnlySet".format(node)):
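The save-and-reapply dance above generalizes well; a hedged sketch of the same idea as a context manager (the helper, node name and attribute list are invented for illustration, `maya.cmds` is assumed to be available):

```python
from contextlib import contextmanager

from maya import cmds


@contextmanager
def preserved_attributes(node, attrs):
    """Snapshot the given attributes and restore them afterwards."""
    snapshot = {attr: cmds.getAttr("{}.{}".format(node, attr))
                for attr in attrs}
    try:
        yield
    finally:
        for attr, value in snapshot.items():
            cmds.setAttr("{}.{}".format(node, attr), value)


# Hypothetical usage around an operation that resets the attributes:
# with preserved_attributes("myAlembicNode", ["speed", "offset", "cycleType"]):
#     reload_reference()  # stand-in for cmds.file(..., loadReference=...)
```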
@@ -1,7 +1,6 @@
 import os
 import re
 import sys
-import getpass
 from collections import OrderedDict

 from avalon import api, io, lib

@@ -1060,310 +1059,6 @@ def get_write_node_template_attr(node):
    return avalon.nuke.lib.fix_data_for_node_create(correct_data)


class BuildWorkfile(WorkfileSettings):
    """
    Building first version of workfile.

    Settings are taken from presets and db. It will add all subsets
    in last version for defined representaions

    Arguments:
        variable (type): description

    """
    xpos = 0
    ypos = 0
    xpos_size = 80
    ypos_size = 90
    xpos_gap = 50
    ypos_gap = 50
    pos_layer = 10

    def __init__(self,
                 root_path=None,
                 root_node=None,
                 nodes=None,
                 to_script=None,
                 **kwargs):
        """
        A short description.

        A bit longer description.

        Argumetns:
            root_path (str): description
            root_node (nuke.Node): description
            nodes (list): list of nuke.Node
            nodes_effects (dict): dictionary with subsets

        Example:
            nodes_effects = {
                "plateMain": {
                    "nodes": [
                        [("Class", "Reformat"),
                         ("resize", "distort"),
                         ("flip", True)],

                        [("Class", "Grade"),
                         ("blackpoint", 0.5),
                         ("multiply", 0.4)]
                    ]
                },
            }

        """

        WorkfileSettings.__init__(self,
                                  root_node=root_node,
                                  nodes=nodes,
                                  **kwargs)
        self.to_script = to_script
        # collect data for formating
        self.data_tmp = {
            "project": {"name": self._project["name"],
                        "code": self._project["data"].get("code", "")},
            "asset": self._asset or os.environ["AVALON_ASSET"],
            "task": kwargs.get("task") or api.Session["AVALON_TASK"],
            "hierarchy": kwargs.get("hierarchy") or pype.get_hierarchy(),
            "version": kwargs.get("version", {}).get("name", 1),
            "user": getpass.getuser(),
            "comment": "firstBuild",
            "ext": "nk"
        }

        # get presets from anatomy
        anatomy = get_anatomy()
        # format anatomy
        anatomy_filled = anatomy.format(self.data_tmp)

        # get dir and file for workfile
        self.work_dir = anatomy_filled["work"]["folder"]
        self.work_file = anatomy_filled["work"]["file"]

    def save_script_as(self, path=None):
        # first clear anything in open window
        nuke.scriptClear()

        if not path:
            dir = self.work_dir
            path = os.path.join(
                self.work_dir,
                self.work_file).replace("\\", "/")
        else:
            dir = os.path.dirname(path)

        # check if folder is created
        if not os.path.exists(dir):
            os.makedirs(dir)

        # save script to path
        nuke.scriptSaveAs(path)

    def process(self,
                regex_filter=None,
                version=None,
                representations=["exr", "dpx", "lutJson", "mov",
                                 "preview", "png", "jpeg", "jpg"]):
        """
        A short description.

        A bit longer description.

        Args:
            regex_filter (raw string): regex pattern to filter out subsets
            version (int): define a particular version, None gets last
            representations (list):

        Returns:
            type: description

        Raises:
            Exception: description

        """

        if not self.to_script:
            # save the script
            self.save_script_as()

        # create viewer and reset frame range
        viewer = self.get_nodes(nodes_filter=["Viewer"])
        if not viewer:
            vn = nuke.createNode("Viewer")
            vn["xpos"].setValue(self.xpos)
            vn["ypos"].setValue(self.ypos)
        else:
            vn = viewer[-1]

        # move position
        self.position_up()

        wn = self.write_create()
        wn["xpos"].setValue(self.xpos)
        wn["ypos"].setValue(self.ypos)
        wn["render"].setValue(True)
        vn.setInput(0, wn)

        # adding backdrop under write
        self.create_backdrop(label="Render write \n\n\n\nOUTPUT",
                             color='0xcc1102ff', layer=-1,
                             nodes=[wn])

        # move position
        self.position_up(4)

        # set frame range for new viewer
        self.reset_frame_range_handles()

        # get all available representations
        subsets = pype.get_subsets(self._asset,
                                   regex_filter=regex_filter,
                                   version=version,
                                   representations=representations)

        for name, subset in subsets.items():
            log.debug("___________________")
            log.debug(name)
            log.debug(subset["version"])

        nodes_backdrop = list()
        for name, subset in subsets.items():
            if "lut" in name:
                continue
            log.info("Building Loader to: `{}`".format(name))
            version = subset["version"]
            log.info("Version to: `{}`".format(version["name"]))
            representations = subset["representaions"]
            for repr in representations:
                rn = self.read_loader(repr)
                rn["xpos"].setValue(self.xpos)
                rn["ypos"].setValue(self.ypos)
                wn.setInput(0, rn)

                # get editional nodes
                lut_subset = [s for n, s in subsets.items()
                              if "lut{}".format(name.lower()) in n.lower()]
                log.debug(">> lut_subset: `{}`".format(lut_subset))

                if len(lut_subset) > 0:
                    lsub = lut_subset[0]
                    fxn = self.effect_loader(lsub["representaions"][-1])
                    fxn_ypos = fxn["ypos"].value()
                    fxn["ypos"].setValue(fxn_ypos - 100)
                    nodes_backdrop.append(fxn)

                nodes_backdrop.append(rn)
                # move position
                self.position_right()

        # adding backdrop under all read nodes
        self.create_backdrop(label="Loaded Reads",
                             color='0x2d7702ff', layer=-1,
                             nodes=nodes_backdrop)

    def read_loader(self, representation):
        """
        Gets Loader plugin for image sequence or mov

        Arguments:
            representation (dict): avalon db entity

        """
        context = representation["context"]

        loader_name = "LoadSequence"
        if "mov" in context["representation"]:
            loader_name = "LoadMov"

        loader_plugin = None
        for Loader in api.discover(api.Loader):
            if Loader.__name__ != loader_name:
                continue

            loader_plugin = Loader

        return api.load(Loader=loader_plugin,
                        representation=representation["_id"])

    def effect_loader(self, representation):
        """
        Gets Loader plugin for effects

        Arguments:
            representation (dict): avalon db entity

        """
        loader_name = "LoadLuts"

        loader_plugin = None
        for Loader in api.discover(api.Loader):
            if Loader.__name__ != loader_name:
                continue

            loader_plugin = Loader

        return api.load(Loader=loader_plugin,
                        representation=representation["_id"])

    def write_create(self):
        """
        Create render write

        Arguments:
            representation (dict): avalon db entity

        """
        task = self.data_tmp["task"]
        sanitized_task = re.sub('[^0-9a-zA-Z]+', '', task)
        subset_name = "render{}Main".format(
            sanitized_task.capitalize())

        Create_name = "CreateWriteRender"

        creator_plugin = None
        for Creator in api.discover(api.Creator):
            if Creator.__name__ != Create_name:
                continue

            creator_plugin = Creator

        # return api.create()
        return creator_plugin(subset_name, self._asset).process()

    def create_backdrop(self, label="", color=None, layer=0,
                        nodes=None):
        """
        Create Backdrop node

        Arguments:
            color (str): nuke compatible string with color code
            layer (int): layer of node usually used (self.pos_layer - 1)
            label (str): the message
            nodes (list): list of nodes to be wrapped into backdrop

        """
        assert isinstance(nodes, list), "`nodes` should be a list of nodes"
        layer = self.pos_layer + layer

        create_backdrop(label=label, color=color, layer=layer, nodes=nodes)

    def position_reset(self, xpos=0, ypos=0):
        self.xpos = xpos
        self.ypos = ypos

    def position_right(self, multiply=1):
        self.xpos += (self.xpos_size * multiply) + self.xpos_gap

    def position_left(self, multiply=1):
        self.xpos -= (self.xpos_size * multiply) + self.xpos_gap

    def position_down(self, multiply=1):
        self.ypos -= (self.ypos_size * multiply) + self.ypos_gap

    def position_up(self, multiply=1):
        self.ypos -= (self.ypos_size * multiply) + self.ypos_gap


class ExporterReview:
    """
    Base class object for generating review data from Nuke

@@ -2,10 +2,12 @@ import nuke
 from avalon.api import Session

 from pype.hosts.nuke import lib
+from ...lib import BuildWorkfile
 from pype.api import Logger

 log = Logger().get_logger(__name__, "nuke")


 def install():
     menubar = nuke.menu("Nuke")
     menu = menubar.findItem(Session["AVALON_LABEL"])

@@ -20,7 +22,11 @@ def install():
     log.debug("Changing Item: {}".format(rm_item))
     # rm_item[1].setEnabled(False)
     menu.removeItem(rm_item[1].name())
-    menu.addCommand(new_name, lambda: workfile_settings().reset_resolution(), index=(rm_item[0]))
+    menu.addCommand(
+        new_name,
+        lambda: workfile_settings().reset_resolution(),
+        index=(rm_item[0])
+    )

     # replace reset frame range from avalon core to pype's
     name = "Reset Frame Range"

@@ -31,33 +37,38 @@ def install():
     log.debug("Changing Item: {}".format(rm_item))
     # rm_item[1].setEnabled(False)
     menu.removeItem(rm_item[1].name())
-    menu.addCommand(new_name, lambda: workfile_settings().reset_frame_range_handles(), index=(rm_item[0]))
+    menu.addCommand(
+        new_name,
+        lambda: workfile_settings().reset_frame_range_handles(),
+        index=(rm_item[0])
+    )

     # add colorspace menu item
-    name = "Set colorspace"
+    name = "Set Colorspace"
     menu.addCommand(
         name, lambda: workfile_settings().set_colorspace(),
-        index=(rm_item[0]+2)
+        index=(rm_item[0] + 2)
     )
     log.debug("Adding menu item: {}".format(name))

     # add workfile builder menu item
-    name = "Build First Workfile.."
+    name = "Build Workfile"
     menu.addCommand(
-        name, lambda: lib.BuildWorkfile().process(),
-        index=(rm_item[0]+7)
+        name, lambda: BuildWorkfile().process(),
+        index=(rm_item[0] + 7)
     )
     log.debug("Adding menu item: {}".format(name))

     # add item that applies all setting above
-    name = "Apply all settings"
+    name = "Apply All Settings"
     menu.addCommand(
-        name, lambda: workfile_settings().set_context_settings(), index=(rm_item[0]+3)
+        name,
+        lambda: workfile_settings().set_context_settings(),
+        index=(rm_item[0] + 3)
     )
     log.debug("Adding menu item: {}".format(name))


 def uninstall():

     menubar = nuke.menu("Nuke")

@@ -71,8 +71,8 @@ def add_tags_from_presets():
     # Get project task types.
     tasks = io.find_one({"type": "project"})["config"]["tasks"]
     nks_pres_tags["[Tasks]"] = {}
-    for task in tasks:
-        nks_pres_tags["[Tasks]"][task["name"]] = {
+    for task_type in tasks.keys():
+        nks_pres_tags["[Tasks]"][task_type] = {
             "editable": "1",
             "note": "",
             "icon": {

@@ -1,17 +1,34 @@
+from .utils import (
+    setup,
+    get_resolve_module
+)
+
 from .pipeline import (
     install,
     uninstall,
     ls,
     containerise,
     publish,
-    launch_workfiles_app
+    launch_workfiles_app,
+    maintained_selection
 )

-from .utils import (
-    setup,
-    get_resolve_module
+from .lib import (
+    get_project_manager,
+    get_current_project,
+    get_current_sequence,
+    get_current_track_items,
+    create_current_sequence_media_bin,
+    create_compound_clip,
+    swap_clips,
+    get_pype_clip_metadata,
+    set_project_manager_to_folder_name
 )

+from .menu import launch_pype_menu
+
+from .plugin import Creator
+
 from .workio import (
     open_file,
     save_file,

@@ -21,12 +38,8 @@ from .workio import (
     work_root
 )

-from .lib import (
-    get_project_manager,
-    set_project_manager_to_folder_name
-)
-
-from .menu import launch_pype_menu
+bmdvr = None
+bmdvf = None

 __all__ = [
     # pipeline

@@ -37,6 +50,7 @@ __all__ = [
     "reload_pipeline",
     "publish",
     "launch_workfiles_app",
+    "maintained_selection",

     # utils
     "setup",

@@ -44,16 +58,30 @@ __all__ = [

     # lib
     "get_project_manager",
+    "get_current_project",
+    "get_current_sequence",
+    "get_current_track_items",
+    "create_current_sequence_media_bin",
+    "create_compound_clip",
+    "swap_clips",
+    "get_pype_clip_metadata",
     "set_project_manager_to_folder_name",

     # menu
     "launch_pype_menu",

+    # plugin
+    "Creator",
+
     # workio
     "open_file",
     "save_file",
     "current_file",
     "has_unsaved_changes",
     "file_extensions",
-    "work_root"
+    "work_root",
+
+    # singleton with black magic resolve module
+    "bmdvr",
+    "bmdvf"
 ]

@@ -21,9 +21,9 @@ class SelectInvalidAction(pyblish.api.Action):
     def process(self, context, plugin):

         try:
-            from pype.hosts.resolve.utils import get_resolve_module
-            resolve = get_resolve_module()
-            self.log.debug(resolve)
+            from . import get_project_manager
+            pm = get_project_manager()
+            self.log.debug(pm)
         except ImportError:
             raise ImportError("Current host is not Resolve")

@@ -1,20 +1,406 @@
 import sys
-from .utils import get_resolve_module
-from pypeapp import Logger
+import json
+from opentimelineio import opentime
+from pprint import pformat
+
+from pype.api import Logger

 log = Logger().get_logger(__name__, "resolve")

 self = sys.modules[__name__]
 self.pm = None
+self.rename_index = 0
+self.rename_add = 0
+self.pype_metadata_key = "VFX Notes"


 def get_project_manager():
+    from . import bmdvr
     if not self.pm:
-        resolve = get_resolve_module()
-        self.pm = resolve.GetProjectManager()
+        self.pm = bmdvr.GetProjectManager()
     return self.pm


 def get_current_project():
     # initialize project manager
     get_project_manager()

     return self.pm.GetCurrentProject()


 def get_current_sequence():
     # get current project
     project = get_current_project()

     return project.GetCurrentTimeline()


 def get_current_track_items(
         filter=False,
         track_type=None,
         selecting_color=None):
     """ Gets all available current timeline track items
     """
     track_type = track_type or "video"
     selecting_color = selecting_color or "Chocolate"
     project = get_current_project()
     sequence = get_current_sequence()
     selected_clips = list()

     # get all tracks count filtered by track type
     selected_track_count = sequence.GetTrackCount(track_type)

     # loop all tracks and get items
     _clips = dict()
     for track_index in range(1, (int(selected_track_count) + 1)):
         track_name = sequence.GetTrackName(track_type, track_index)
         track_track_items = sequence.GetItemListInTrack(
             track_type, track_index)
         _clips[track_index] = track_track_items

         _data = {
             "project": project,
             "sequence": sequence,
             "track": {
                 "name": track_name,
                 "index": track_index,
                 "type": track_type}
         }
         # get track item object and its color
         for clip_index, ti in enumerate(_clips[track_index]):
             data = _data.copy()
             data["clip"] = {
                 "item": ti,
                 "index": clip_index
             }
             ti_color = ti.GetClipColor()
             if filter is True:
                 if selecting_color in ti_color:
                     selected_clips.append(data)
                 # ti.ClearClipColor()
             else:
                 selected_clips.append(data)

     return selected_clips
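A hedged usage sketch of the selection helper above; it relies only on the defaults visible in the function (video tracks, the `Chocolate` clip colour), while the import path is assumed from the module layout in this commit:

```python
from pype.hosts.resolve.lib import get_current_track_items  # path assumed

# Only video track items tagged with the "Chocolate" clip colour.
for item_data in get_current_track_items(filter=True):
    clip = item_data["clip"]["item"]
    track = item_data["track"]
    print(track["name"], track["index"], clip.GetName())
```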

def create_current_sequence_media_bin(sequence):
    seq_name = sequence.GetName()
    media_pool = get_current_project().GetMediaPool()
    root_folder = media_pool.GetRootFolder()
    sub_folders = root_folder.GetSubFolderList()
    testing_names = list()

    print(f"_ sub_folders: {sub_folders}")
    for subfolder in sub_folders:
        subf_name = subfolder.GetName()
        if seq_name in subf_name:
            testing_names.append(subfolder)
        else:
            testing_names.append(False)

    matching = next((f for f in testing_names if f is not False), None)

    if not matching:
        new_folder = media_pool.AddSubFolder(root_folder, seq_name)
        media_pool.SetCurrentFolder(new_folder)
    else:
        media_pool.SetCurrentFolder(matching)

    return media_pool.GetCurrentFolder()


def get_name_with_data(clip_data, presets):
    """
    Take hierarchy data from presets and build name with parents data

    Args:
        clip_data (dict): clip data from `get_current_track_items()`
        presets (dict): data from create plugin

    Returns:
        list: name, data

    """
    def _replace_hash_to_expression(name, text):
        _spl = text.split("#")
        _len = (len(_spl) - 1)
        _repl = f"{{{name}:0>{_len}}}"
        new_text = text.replace(("#" * _len), _repl)
        return new_text

    # presets data
    clip_name = presets["clipName"]
    hierarchy = presets["hierarchy"]
    hierarchy_data = presets["hierarchyData"].copy()
    count_from = presets["countFrom"]
    steps = presets["steps"]

    # reset rename_add
    if self.rename_add < count_from:
        self.rename_add = count_from

    # shot num calculate
    if self.rename_index == 0:
        shot_num = self.rename_add
    else:
        shot_num = self.rename_add + steps

    print(f"shot_num: {shot_num}")

    # clip data
    _data = {
        "sequence": clip_data["sequence"].GetName(),
        "track": clip_data["track"]["name"].replace(" ", "_"),
        "shot": shot_num
    }

    # solve "#" in text to a pythonic format expression
    for k, v in hierarchy_data.items():
        if "#" not in v:
            continue
        hierarchy_data[k] = _replace_hash_to_expression(k, v)

    # fill up pythonic expressions
    for k, v in hierarchy_data.items():
        hierarchy_data[k] = v.format(**_data)

    # fill up clip name and hierarchy keys
    hierarchy = hierarchy.format(**hierarchy_data)
    clip_name = clip_name.format(**hierarchy_data)

    self.rename_add = shot_num
    print(f"shot_num: {shot_num}")

    return (clip_name, {
        "hierarchy": hierarchy,
        "hierarchyData": hierarchy_data
    })
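To make the `#` handling above concrete, a worked example of what `_replace_hash_to_expression` produces; the template strings are invented:

```python
# "sh###" has three hashes, so for the key "shot" it becomes a zero-padded
# format expression: _replace_hash_to_expression("shot", "sh###") -> "sh{shot:0>3}"
print("sh{shot:0>3}".format(shot=10))  # -> "sh010"
```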
|
||||
|
||||
|
||||
def create_compound_clip(clip_data, folder, rename=False, **kwargs):
|
||||
"""
|
||||
Convert timeline object into nested timeline object
|
||||
|
||||
Args:
|
||||
clip_data (dict): timeline item object packed into dict
|
||||
with project, timeline (sequence)
|
||||
folder (resolve.MediaPool.Folder): media pool folder object,
|
||||
rename (bool)[optional]: renaming in sequence or not
|
||||
kwargs (optional): additional data needed for rename=True (presets)
|
||||
|
||||
Returns:
|
||||
resolve.MediaPoolItem: media pool item with compound clip timeline(cct)
|
||||
"""
|
||||
# get basic objects form data
|
||||
project = clip_data["project"]
|
||||
sequence = clip_data["sequence"]
|
||||
clip = clip_data["clip"]
|
||||
|
||||
# get details of objects
|
||||
clip_item = clip["item"]
|
||||
track = clip_data["track"]
|
||||
|
||||
mp = project.GetMediaPool()
|
||||
|
||||
# get clip attributes
|
||||
clip_attributes = get_clip_attributes(clip_item)
|
||||
print(f"_ clip_attributes: {pformat(clip_attributes)}")
|
||||
|
||||
if rename:
|
||||
presets = kwargs.get("presets")
|
||||
if presets:
|
||||
name, data = get_name_with_data(clip_data, presets)
|
||||
# add hirarchy data to clip attributes
|
||||
clip_attributes.update(data)
|
||||
else:
|
||||
name = "{:0>3}_{:0>4}".format(
|
||||
int(track["index"]), int(clip["index"]))
|
||||
else:
|
||||
# build name
|
||||
clip_name_split = clip_item.GetName().split(".")
|
||||
name = "_".join([
|
||||
track["name"],
|
||||
str(track["index"]),
|
||||
clip_name_split[0],
|
||||
str(clip["index"])]
|
||||
)
|
||||
|
||||
# get metadata
|
||||
mp_item = clip_item.GetMediaPoolItem()
|
||||
mp_props = mp_item.GetClipProperty()
|
||||
|
||||
mp_first_frame = int(mp_props["Start"])
|
||||
mp_last_frame = int(mp_props["End"])
|
||||
|
||||
# initialize basic source timing for otio
|
||||
ci_l_offset = clip_item.GetLeftOffset()
|
||||
ci_duration = clip_item.GetDuration()
|
||||
rate = float(mp_props["FPS"])
|
||||
|
||||
# source rational times
|
||||
mp_in_rc = opentime.RationalTime((ci_l_offset), rate)
|
||||
mp_out_rc = opentime.RationalTime((ci_l_offset + ci_duration - 1), rate)
|
||||
|
||||
# get frame in and out for clip swaping
|
||||
in_frame = opentime.to_frames(mp_in_rc)
|
||||
out_frame = opentime.to_frames(mp_out_rc)
|
||||
|
||||
# keep original sequence
|
||||
sq_origin = sequence
|
||||
|
||||
# Set current folder to input media_pool_folder:
|
||||
mp.SetCurrentFolder(folder)
|
||||
|
||||
# check if clip doesnt exist already:
|
||||
clips = folder.GetClipList()
|
||||
cct = next((c for c in clips
|
||||
if c.GetName() in name), None)
|
||||
|
||||
if cct:
|
||||
print(f"_ cct exists: {cct}")
|
||||
else:
|
||||
# Create empty timeline in current folder and give name:
|
||||
cct = mp.CreateEmptyTimeline(name)
|
||||
|
||||
# check if clip doesnt exist already:
|
||||
clips = folder.GetClipList()
|
||||
cct = next((c for c in clips
|
||||
if c.GetName() in name), None)
|
||||
print(f"_ cct created: {cct}")
|
||||
|
||||
# Set current timeline to created timeline:
|
||||
project.SetCurrentTimeline(cct)
|
||||
|
||||
# Add input clip to the current timeline:
|
||||
mp.AppendToTimeline([{
|
||||
"mediaPoolItem": mp_item,
|
||||
"startFrame": mp_first_frame,
|
||||
"endFrame": mp_last_frame
|
||||
}])
|
||||
|
||||
# Set current timeline to the working timeline:
|
||||
project.SetCurrentTimeline(sq_origin)
|
||||
|
||||
# Add collected metadata and attributes to the comound clip:
|
||||
if mp_item.GetMetadata(self.pype_metadata_key):
|
||||
clip_attributes[self.pype_metadata_key] = mp_item.GetMetadata(
|
||||
self.pype_metadata_key)[self.pype_metadata_key]
|
||||
|
||||
# stringify
|
||||
clip_attributes = json.dumps(clip_attributes)
|
||||
|
||||
# add attributes to metadata
|
||||
for k, v in mp_item.GetMetadata().items():
|
||||
cct.SetMetadata(k, v)
|
||||
|
||||
# add metadata to cct
|
||||
cct.SetMetadata(self.pype_metadata_key, clip_attributes)
|
||||
|
||||
# reset start timecode of the compound clip
|
||||
cct.SetClipProperty("Start TC", mp_props["Start TC"])
|
||||
|
||||
# swap clips on timeline
|
||||
swap_clips(clip_item, cct, name, in_frame, out_frame)
|
||||
|
||||
cct.SetClipColor("Pink")
|
||||
return cct


def swap_clips(from_clip, to_clip, to_clip_name, to_in_frame, to_out_frame):
    """
    Swapping clips on timeline in timelineItem

    It will add a take and activate it for the input frame range

    Args:
        from_clip (resolve.mediaPoolItem)
        to_clip (resolve.mediaPoolItem)
        to_clip_name (str): name of to_clip
        to_in_frame (float): cut in frame, usually `GetLeftOffset()`
        to_out_frame (float): cut out frame, usually left offset plus duration

    Returns:
        bool: True if successfully replaced

    """
    # add clip item as take to timeline
    take = from_clip.AddTake(
        to_clip,
        float(to_in_frame),
        float(to_out_frame)
    )

    if not take:
        return False

    for take_index in range(1, (int(from_clip.GetTakesCount()) + 1)):
        take_item = from_clip.GetTakeByIndex(take_index)
        take_mp_item = take_item["mediaPoolItem"]
        if to_clip_name in take_mp_item.GetName():
            from_clip.SelectTakeByIndex(take_index)
            from_clip.FinalizeTake()
            return True
    return False
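
# Editor's note: a hypothetical call of the helper above. `timeline_item`
# and `compound_clip` are assumed to come from the Resolve scripting API,
# mirroring how the compound-clip code above uses this function; none of
# these names are defined by the commit itself.
#
#     replaced = swap_clips(
#         timeline_item,       # clip currently placed on the timeline
#         compound_clip,       # compound clip that should replace it
#         "001_0010",          # name of the compound clip
#         in_frame, out_frame  # range from opentime.to_frames() above
#     )
#     if not replaced:
#         print("take could not be added or finalized")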


def validate_tc(x):
    # Validate and reformat timecode string

    if len(x) != 11:
        print('Invalid timecode. Try again.')

    c = ':'
    colonized = x[:2] + c + x[3:5] + c + x[6:8] + c + x[9:]

    if colonized.replace(':', '').isdigit():
        print(f"_ colonized: {colonized}")
        return colonized
    else:
        print('Invalid timecode. Try again.')
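
# Editor's note: a sketch of the expected behaviour, not a test from the
# repository. "01.00.00.00" has 11 characters; the digits at positions
# 0-1, 3-4, 6-7 and 9-10 are re-joined with colons:
#
#     validate_tc("01.00.00.00")   # -> "01:00:00:00"
#     validate_tc("ab.cd.ef.gh")   # prints the error, returns None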


def get_pype_clip_metadata(clip):
    """
    Get pype metadata created by creator plugin

    Attributes:
        clip (resolve.TimelineItem): resolve's object

    Returns:
        dict: hierarchy, orig clip attributes
    """
    mp_item = clip.GetMediaPoolItem()
    metadata = mp_item.GetMetadata()

    return metadata.get(self.pype_metadata_key)


def get_clip_attributes(clip):
    """
    Collect basic attributes from resolve timeline item

    Args:
        clip (resolve.TimelineItem): timeline item object

    Returns:
        dict: all collected attributes as key: values
    """
    mp_item = clip.GetMediaPoolItem()

    data = {
        "clipIn": clip.GetStart(),
        "clipOut": clip.GetEnd(),
        "clipLeftOffset": clip.GetLeftOffset(),
        "clipRightOffset": clip.GetRightOffset(),
        "clipMarkers": clip.GetMarkers(),
        "clipFlags": clip.GetFlagList(),
        "sourceId": mp_item.GetMediaId(),
        "sourceProperties": mp_item.GetClipProperty()
    }
    return data
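
# Editor's note: the returned dictionary is plain data, so it can be
# serialized the same way the compound-clip code above serializes its
# attributes. A sketch, assuming `timeline_item` is a resolve.TimelineItem:
#
#     attributes = get_clip_attributes(timeline_item)
#     print(json.dumps(attributes, indent=4, default=str))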


def set_project_manager_to_folder_name(folder_name):
    """
    Sets context of Project manager to given folder by name.
@@ -1,6 +1,7 @@
QWidget {
    background-color: #282828;
    border-radius: 3;
    font-size: 13px;
}

QPushButton {
@@ -20,10 +21,38 @@ QPushButton:hover {
    color: #e64b3d;
}

QSpinBox {
    border: 1px solid #090909;
    background-color: #201f1f;
    color: #ffffff;
    padding: 2;
    max-width: 8em;
    qproperty-alignment: AlignCenter;
}

QLineEdit {
    border: 1px solid #090909;
    border-radius: 3px;
    background-color: #201f1f;
    color: #ffffff;
    padding: 2;
    min-width: 10em;
    qproperty-alignment: AlignCenter;
}

#PypeMenu {
    border: 1px solid #fef9ef;
}

#Spacer {
QVBoxLayout {
    background-color: #282828;
}

#Devider {
    border: 1px solid #090909;
    background-color: #585858;
}

QLabel {
    color: #77776b;
}
@@ -2,27 +2,23 @@
Basic avalon integration
"""
import os
# import sys
import contextlib
from avalon.tools import workfiles
from avalon import api as avalon
from pyblish import api as pyblish
from pypeapp import Logger
import pype
from pype.api import Logger

log = Logger().get_logger(__name__, "resolve")

# self = sys.modules[__name__]

AVALON_CONFIG = os.environ["AVALON_CONFIG"]
PARENT_DIR = os.path.dirname(__file__)
PACKAGE_DIR = os.path.dirname(PARENT_DIR)
PLUGINS_DIR = os.path.join(PACKAGE_DIR, "plugins")

LOAD_PATH = os.path.join(PLUGINS_DIR, "resolve", "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "resolve", "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "resolve", "inventory")
LOAD_PATH = os.path.join(pype.PLUGINS_DIR, "resolve", "load")
CREATE_PATH = os.path.join(pype.PLUGINS_DIR, "resolve", "create")
INVENTORY_PATH = os.path.join(pype.PLUGINS_DIR, "resolve", "inventory")

PUBLISH_PATH = os.path.join(
    PLUGINS_DIR, "resolve", "publish"
    pype.PLUGINS_DIR, "resolve", "publish"
).replace("\\", "/")

AVALON_CONTAINERS = ":AVALON_CONTAINERS"

@@ -40,11 +36,13 @@ def install():
    See the Maya equivalent for inspiration on how to implement this.

    """
    from . import get_resolve_module

    # Disable all families except for the ones we explicitly want to see
    family_states = [
        "imagesequence",
        "mov"
        "mov",
        "clip"
    ]
    avalon.data["familiesStateDefault"] = False
    avalon.data["familiesStateToggled"] = family_states

@@ -59,6 +57,8 @@ def install():
    avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
    avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)

    get_resolve_module()


def uninstall():
    """Uninstall all that was installed

@@ -140,3 +140,26 @@ def publish(parent):
    """Shorthand to publish from within host"""
    from avalon.tools import publish
    return publish.show(parent)


@contextlib.contextmanager
def maintained_selection():
    """Maintain selection during context

    Example:
        >>> with maintained_selection():
        ...     node['selected'].setValue(True)
        >>> print(node['selected'].value())
        False
    """
    try:
        # do the operation
        yield
    finally:
        pass


def reset_selection():
    """Deselect all selected nodes
    """
    pass
@@ -1,6 +1,182 @@
import re
from avalon import api
# from pype.hosts.resolve import lib as drlib
from pype.hosts import resolve
from avalon.vendor import qargparse
from pype.api import config

from Qt import QtWidgets, QtCore


class CreatorWidget(QtWidgets.QDialog):

    # output items
    items = dict()

    def __init__(self, name, info, presets, parent=None):
        super(CreatorWidget, self).__init__(parent)

        self.setObjectName(name)

        self.setWindowFlags(
            QtCore.Qt.Window
            | QtCore.Qt.CustomizeWindowHint
            | QtCore.Qt.WindowTitleHint
            | QtCore.Qt.WindowCloseButtonHint
            | QtCore.Qt.WindowStaysOnTopHint
        )
        self.setWindowTitle(name or "Pype Creator Input")

        # Where inputs and labels are set
        self.content_widget = [QtWidgets.QWidget(self)]
        top_layout = QtWidgets.QFormLayout(self.content_widget[0])
        top_layout.setObjectName("ContentLayout")
        top_layout.addWidget(Spacer(5, self))

        # first add widget tag line
        top_layout.addWidget(QtWidgets.QLabel(info))

        top_layout.addWidget(Spacer(5, self))

        # main dynamic layout
        self.content_widget.append(QtWidgets.QWidget(self))
        content_layout = QtWidgets.QFormLayout(self.content_widget[-1])

        # add preset data into input widget layout
        self.items = self.add_presets_to_layout(content_layout, presets)

        # Confirmation buttons
        btns_widget = QtWidgets.QWidget(self)
        btns_layout = QtWidgets.QHBoxLayout(btns_widget)

        cancel_btn = QtWidgets.QPushButton("Cancel")
        btns_layout.addWidget(cancel_btn)

        ok_btn = QtWidgets.QPushButton("Ok")
        btns_layout.addWidget(ok_btn)

        # Main layout of the dialog
        main_layout = QtWidgets.QVBoxLayout(self)
        main_layout.setContentsMargins(10, 10, 10, 10)
        main_layout.setSpacing(0)

        # adding content widget
        for w in self.content_widget:
            main_layout.addWidget(w)

        main_layout.addWidget(btns_widget)

        ok_btn.clicked.connect(self._on_ok_clicked)
        cancel_btn.clicked.connect(self._on_cancel_clicked)

        stylesheet = resolve.menu.load_stylesheet()
        self.setStyleSheet(stylesheet)

    def _on_ok_clicked(self):
        self.result = self.value(self.items)
        self.close()

    def _on_cancel_clicked(self):
        self.result = None
        self.close()

    def value(self, data):
        for k, v in data.items():
            if isinstance(v, dict):
                print(f"nested: {k}")
                data[k] = self.value(v)
            elif getattr(v, "value", None):
                print(f"normal int: {k}")
                result = v.value()
                data[k] = result()
            else:
                print(f"normal text: {k}")
                result = v.text()
                data[k] = result()
        return data

    def camel_case_split(self, text):
        matches = re.finditer(
            '.+?(?:(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])|$)', text)
        return " ".join([str(m.group(0)).capitalize() for m in matches])

    def create_row(self, layout, type, text, **kwargs):
        # get type attribute from qwidgets
        attr = getattr(QtWidgets, type)

        # convert label text to normal capitalized text with spaces
        label_text = self.camel_case_split(text)

        # assign the new text to label widget
        label = QtWidgets.QLabel(label_text)
        label.setObjectName("LineLabel")

        # create attribute name text strip of spaces
        attr_name = text.replace(" ", "")

        # create attribute and assign default values
        setattr(
            self,
            attr_name,
            attr(parent=self))

        # assign the created attribute to variable
        item = getattr(self, attr_name)
        for func, val in kwargs.items():
            if getattr(item, func):
                func_attr = getattr(item, func)
                func_attr(val)

        # add to layout
        layout.addRow(label, item)

        return item

    def add_presets_to_layout(self, content_layout, data):
        for k, v in data.items():
            if isinstance(v, dict):
                # adding spacer between sections
                self.content_widget.append(QtWidgets.QWidget(self))
                devider = QtWidgets.QVBoxLayout(self.content_widget[-1])
                devider.addWidget(Spacer(5, self))
                devider.setObjectName("Devider")

                # adding nested layout with label
                self.content_widget.append(QtWidgets.QWidget(self))
                nested_content_layout = QtWidgets.QFormLayout(
                    self.content_widget[-1])
                nested_content_layout.setObjectName("NestedContentLayout")

                # add nested key as label
                self.create_row(nested_content_layout, "QLabel", k)
                data[k] = self.add_presets_to_layout(nested_content_layout, v)
            elif isinstance(v, str):
                print(f"layout.str: {k}")
                print(f"content_layout: {content_layout}")
                data[k] = self.create_row(
                    content_layout, "QLineEdit", k, setText=v)
            elif isinstance(v, int):
                print(f"layout.int: {k}")
                print(f"content_layout: {content_layout}")
                data[k] = self.create_row(
                    content_layout, "QSpinBox", k, setValue=v)
        return data


class Spacer(QtWidgets.QWidget):
    def __init__(self, height, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)

        self.setFixedHeight(height)

        real_spacer = QtWidgets.QWidget(self)
        real_spacer.setObjectName("Spacer")
        real_spacer.setFixedHeight(height)

        layout = QtWidgets.QVBoxLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.addWidget(real_spacer)

        self.setLayout(layout)


def get_reference_node_parents(ref):

@@ -73,3 +249,25 @@ class SequenceLoader(api.Loader):
    """Remove an existing `container`
    """
    pass


class Creator(api.Creator):
    """Creator class wrapper
    """
    marker_color = "Purple"

    def __init__(self, *args, **kwargs):
        super(Creator, self).__init__(*args, **kwargs)
        self.presets = config.get_presets()['plugins']["resolve"][
            "create"].get(self.__class__.__name__, {})

        # adding basic current context resolve objects
        self.project = resolve.get_current_project()
        self.sequence = resolve.get_current_sequence()

        if (self.options or {}).get("useSelection"):
            self.selected = resolve.get_current_track_items(filter=True)
        else:
            self.selected = resolve.get_current_track_items(filter=False)

        self.widget = CreatorWidget
@@ -1,7 +1,7 @@
#!/usr/bin/env python
import time
from pype.hosts.resolve.utils import get_resolve_module
from pypeapp import Logger
from pype.api import Logger

log = Logger().get_logger(__name__, "resolve")


@@ -3,7 +3,7 @@ import sys
import avalon.api as avalon
import pype

from pypeapp import Logger
from pype.api import Logger

log = Logger().get_logger(__name__)

@@ -1,65 +0,0 @@
#! python3
# -*- coding: utf-8 -*-


# convert clip def
def convert_clip(timeline=None):
    """Convert timeline item (clip) into compound clip pype container

    Args:
        timeline (MediaPool.Timeline): Object of timeline

    Returns:
        bool: `True` if success

    Raises:
        Exception: description

    """
    pass


# decorator function create_current_timeline_media_bin()
def create_current_timeline_media_bin(timeline=None):
    """Convert timeline item (clip) into compound clip pype container

    Args:
        timeline (MediaPool.Timeline): Object of timeline

    Returns:
        bool: `True` if success

    Raises:
        Exception: description

    """
    pass


# decorator function get_selected_track_items()
def get_selected_track_items():
    """Convert timeline item (clip) into compound clip pype container

    Args:
        timeline (MediaPool.Timeline): Object of timeline

    Returns:
        bool: `True` if success

    Raises:
        Exception: description

    """
    print("testText")


# PypeCompoundClip() class
class PypeCompoundClip(object):
    """docstring for ."""

    def __init__(self, arg):
        super(self).__init__()
        self.arg = arg

    def create_compound_clip(self):
        pass
@@ -1,57 +0,0 @@
import os
import sys
import pype
import importlib
import pyblish.api
import pyblish.util
import avalon.api
from avalon.tools import publish
from pypeapp import Logger

log = Logger().get_logger(__name__)


def main(env):
    # Registers pype's Global pyblish plugins
    pype.install()

    # Register Host (and it's pyblish plugins)
    host_name = env["AVALON_APP"]
    # TODO not sure if use "pype." or "avalon." for host import
    host_import_str = f"pype.{host_name}"

    try:
        host_module = importlib.import_module(host_import_str)
    except ModuleNotFoundError:
        log.error((
            f"Host \"{host_name}\" can't be imported."
            f" Import string \"{host_import_str}\" failed."
        ))
        return False

    avalon.api.install(host_module)

    # Register additional paths
    addition_paths_str = env.get("PUBLISH_PATHS") or ""
    addition_paths = addition_paths_str.split(os.pathsep)
    for path in addition_paths:
        path = os.path.normpath(path)
        if not os.path.exists(path):
            continue

        pyblish.api.register_plugin_path(path)

    # Register project specific plugins
    project_name = os.environ["AVALON_PROJECT"]
    project_plugins_paths = env.get("PYPE_PROJECT_PLUGINS") or ""
    for path in project_plugins_paths.split(os.pathsep):
        plugin_path = os.path.join(path, project_name, "plugins")
        if os.path.exists(plugin_path):
            pyblish.api.register_plugin_path(plugin_path)

    return publish.show()


if __name__ == "__main__":
    result = main(os.environ)
    sys.exit(not bool(result))
@@ -1,35 +0,0 @@
#! python3
# -*- coding: utf-8 -*-
import os
from pypeapp import execute, Logger
from pype.hosts.resolve.utils import get_resolve_module

log = Logger().get_logger("Resolve")

CURRENT_DIR = os.getenv("RESOLVE_UTILITY_SCRIPTS_DIR", "")
python_dir = os.getenv("PYTHON36_RESOLVE")
python_exe = os.path.normpath(
    os.path.join(python_dir, "python.exe")
)

resolve = get_resolve_module()
PM = resolve.GetProjectManager()
P = PM.GetCurrentProject()

log.info(P.GetName())


# ______________________________________________________
# testing subprocessing Scripts
testing_py = os.path.join(CURRENT_DIR, "ResolvePageSwitcher.py")
testing_py = os.path.normpath(testing_py)
log.info(f"Testing path to script: `{testing_py}`")

returncode = execute(
    [python_exe, os.path.normpath(testing_py)],
    env=dict(os.environ)
)

# Check if output file exists
if returncode != 0:
    log.error("Executing failed!")
21
pype/hosts/resolve/utility_scripts/test.py
Normal file

@@ -0,0 +1,21 @@
#! python3
import sys
from pype.api import Logger
import DaVinciResolveScript as bmdvr


log = Logger().get_logger(__name__)


def main():
    import pype.hosts.resolve as bmdvr
    bm = bmdvr.utils.get_resolve_module()
    log.info(f"blackmagicmodule: {bm}")


print(f"_>> bmdvr.scriptapp(Resolve): {bmdvr.scriptapp('Resolve')}")


if __name__ == "__main__":
    result = main()
    sys.exit(not bool(result))
@@ -9,18 +9,16 @@ import os
import shutil

from pypeapp import Logger

log = Logger().get_logger(__name__, "resolve")

self = sys.modules[__name__]
self.bmd = None


def get_resolve_module():
    from pype.hosts import resolve
    # don't run if already loaded
    if self.bmd:
        return self.bmd

    if resolve.bmdvr:
        log.info(("resolve module is assigned to "
                  f"`pype.hosts.resolve.bmdvr`: {resolve.bmdvr}"))
        return resolve.bmdvr
    try:
        """
        The PYTHONPATH needs to be set correctly for this import

@@ -71,8 +69,14 @@ def get_resolve_module():
    )
    sys.exit()
    # assign global var and return
    self.bmd = bmd.scriptapp("Resolve")
    return self.bmd
    bmdvr = bmd.scriptapp("Resolve")
    # bmdvf = bmd.scriptapp("Fusion")
    resolve.bmdvr = bmdvr
    resolve.bmdvf = bmdvr.Fusion()
    log.info(("Assigning resolve module to "
              f"`pype.hosts.resolve.bmdvr`: {resolve.bmdvr}"))
    log.info(("Assigning resolve module to "
              f"`pype.hosts.resolve.bmdvf`: {resolve.bmdvf}"))


def _sync_utility_scripts(env=None):
@@ -2,8 +2,9 @@

import os
from pypeapp import Logger
from .lib import (
from . import (
    get_project_manager,
    get_current_project,
    set_project_manager_to_folder_name
)


@@ -26,7 +27,7 @@ def save_file(filepath):
    pm = get_project_manager()
    file = os.path.basename(filepath)
    fname, _ = os.path.splitext(file)
    project = pm.GetCurrentProject()
    project = get_current_project()
    name = project.GetName()

    if "Untitled Project" not in name:

176
pype/lib.py
@@ -19,7 +19,7 @@ from abc import ABCMeta, abstractmethod
from avalon import io, pipeline
import six
import avalon.api
from .api import config, Anatomy
from .api import config, Anatomy, Logger

log = logging.getLogger(__name__)

@@ -746,8 +746,9 @@ class PypeHook:

def get_linked_assets(asset_entity):
    """Return linked assets for `asset_entity`."""
    # TODO implement
    return []
    inputs = asset_entity["data"].get("inputs", [])
    inputs = [io.find_one({"_id": x}) for x in inputs]
    return inputs


def map_subsets_by_family(subsets):

@@ -1621,7 +1622,7 @@ class ApplicationAction(avalon.api.Action):
    parsed application `.toml` this can launch the application.

    """

    _log = None
    config = None
    group = None
    variant = None

@@ -1631,6 +1632,12 @@ class ApplicationAction(avalon.api.Action):
        "AVALON_TASK"
    )

    @property
    def log(self):
        if self._log is None:
            self._log = Logger().get_logger(self.__class__.__name__)
        return self._log

    def is_compatible(self, session):
        for key in self.required_session_keys:
            if key not in session:

@@ -1643,6 +1650,165 @@ class ApplicationAction(avalon.api.Action):
        project_name = session["AVALON_PROJECT"]
        asset_name = session["AVALON_ASSET"]
        task_name = session["AVALON_TASK"]
        return launch_application(
        launch_application(
            project_name, asset_name, task_name, self.name
        )

        self._ftrack_after_launch_procedure(
            project_name, asset_name, task_name
        )

    def _ftrack_after_launch_procedure(
        self, project_name, asset_name, task_name
    ):
        # TODO move to launch hook
        required_keys = ("FTRACK_SERVER", "FTRACK_API_USER", "FTRACK_API_KEY")
        for key in required_keys:
            if not os.environ.get(key):
                self.log.debug((
                    "Missing required environment \"{}\""
                    " for Ftrack after launch procedure."
                ).format(key))
                return

        try:
            import ftrack_api
            session = ftrack_api.Session(auto_connect_event_hub=True)
            self.log.debug("Ftrack session created")
        except Exception:
            self.log.warning("Couldn't create Ftrack session")
            return

        try:
            entity = self._find_ftrack_task_entity(
                session, project_name, asset_name, task_name
            )
            self._ftrack_status_change(session, entity, project_name)
            self._start_timer(session, entity, ftrack_api)
        except Exception:
            self.log.warning(
                "Couldn't finish Ftrack procedure.", exc_info=True
            )
            return

        finally:
            session.close()

    def _find_ftrack_task_entity(
        self, session, project_name, asset_name, task_name
    ):
        project_entity = session.query(
            "Project where full_name is \"{}\"".format(project_name)
        ).first()
        if not project_entity:
            self.log.warning(
                "Couldn't find project \"{}\" in Ftrack.".format(project_name)
            )
            return

        potential_task_entities = session.query((
            "TypedContext where parent.name is \"{}\" and project_id is \"{}\""
        ).format(asset_name, project_entity["id"])).all()
        filtered_entities = []
        for _entity in potential_task_entities:
            if (
                _entity.entity_type.lower() == "task"
                and _entity["name"] == task_name
            ):
                filtered_entities.append(_entity)

        if not filtered_entities:
            self.log.warning((
                "Couldn't find task \"{}\" under parent \"{}\" in Ftrack."
            ).format(task_name, asset_name))
            return

        if len(filtered_entities) > 1:
            self.log.warning((
                "Found more than one task \"{}\""
                " under parent \"{}\" in Ftrack."
            ).format(task_name, asset_name))
            return

        return filtered_entities[0]

    def _ftrack_status_change(self, session, entity, project_name):
        presets = config.get_presets(project_name)["ftrack"]["ftrack_config"]
        statuses = presets.get("status_update")
        if not statuses:
            return

        actual_status = entity["status"]["name"].lower()
        already_tested = set()
        ent_path = "/".join(
            [ent["name"] for ent in entity["link"]]
        )
        while True:
            next_status_name = None
            for key, value in statuses.items():
                if key in already_tested:
                    continue
                if actual_status in value or "_any_" in value:
                    if key != "_ignore_":
                        next_status_name = key
                        already_tested.add(key)
                    break
                already_tested.add(key)

            if next_status_name is None:
                break

            try:
                query = "Status where name is \"{}\"".format(
                    next_status_name
                )
                status = session.query(query).one()

                entity["status"] = status
                session.commit()
                self.log.debug("Changing status to \"{}\" <{}>".format(
                    next_status_name, ent_path
                ))
                break

            except Exception:
                session.rollback()
                msg = (
                    "Status \"{}\" in presets wasn't found"
                    " on Ftrack entity type \"{}\""
                ).format(next_status_name, entity.entity_type)
                self.log.warning(msg)
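
    # Editor's note: the `status_update` preset walked by the loop above
    # maps a target status name to the list of statuses it may be applied
    # from. A hypothetical fragment (all values invented for illustration)
    # showing the special "_any_" and "_ignore_" keys; entities whose
    # current status matches an "_ignore_" entry are left unchanged:
    #
    #     "status_update": {
    #         "In Progress": ["not ready", "ready"],
    #         "_ignore_": ["omitted", "on hold"],
    #         "Change requested": ["_any_"]
    #     }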

    def _start_timer(self, session, entity, _ftrack_api):
        self.log.debug("Triggering timer start.")

        user_entity = session.query("User where username is \"{}\"".format(
            os.environ["FTRACK_API_USER"]
        )).first()
        if not user_entity:
            self.log.warning(
                "Couldn't find user with username \"{}\" in Ftrack".format(
                    os.environ["FTRACK_API_USER"]
                )
            )
            return

        source = {
            "user": {
                "id": user_entity["id"],
                "username": user_entity["username"]
            }
        }
        event_data = {
            "actionIdentifier": "start.timer",
            "selection": [{"entityId": entity["id"], "entityType": "task"}]
        }
        session.event_hub.publish(
            _ftrack_api.event.base.Event(
                topic="ftrack.action.launch",
                data=event_data,
                source=source
            ),
            on_error="ignore"
        )
        self.log.debug("Timer start triggered successfully.")
@@ -1,8 +1,6 @@
from .io_nonsingleton import DbConnector
from .rest_api import AdobeRestApi, PUBLISH_PATHS

__all__ = [
    "PUBLISH_PATHS",
    "DbConnector",
    "AdobeRestApi"
]
@@ -1,460 +0,0 @@
"""
Wrapper around interactions with the database

Copy of io module in avalon-core.
- In this case not working as singleton with api.Session!
"""

import os
import time
import errno
import shutil
import logging
import tempfile
import functools
import contextlib

from avalon import schema
from avalon.vendor import requests
from avalon.io import extract_port_from_url

# Third-party dependencies
import pymongo


def auto_reconnect(func):
    """Handling auto reconnect in 3 retry times"""
    @functools.wraps(func)
    def decorated(*args, **kwargs):
        object = args[0]
        for retry in range(3):
            try:
                return func(*args, **kwargs)
            except pymongo.errors.AutoReconnect:
                object.log.error("Reconnecting..")
                time.sleep(0.1)
        else:
            raise

    return decorated
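
# Editor's note: a sketch of applying the decorator outside this class;
# the wrapper reads `args[0].log`, so the first positional argument must
# expose a `log` attribute. `Repository` and its method are invented
# names for illustration only, not part of the commit.
#
#     class Repository(object):
#         log = logging.getLogger("Repository")
#
#         @auto_reconnect
#         def count_assets(self):
#             # retried up to three times on pymongo AutoReconnect
#             return self._database["project"].count_documents(
#                 {"type": "asset"})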


class DbConnector(object):

    log = logging.getLogger(__name__)

    def __init__(self):
        self.Session = {}
        self._mongo_client = None
        self._sentry_client = None
        self._sentry_logging_handler = None
        self._database = None
        self._is_installed = False

    def __getitem__(self, key):
        # gives direct access to collection without setting `active_table`
        return self._database[key]

    def __getattribute__(self, attr):
        # not all methods of PyMongo database are implemented; with this
        # it is possible to use them too
        try:
            return super(DbConnector, self).__getattribute__(attr)
        except AttributeError:
            cur_proj = self.Session["AVALON_PROJECT"]
            return self._database[cur_proj].__getattribute__(attr)

    def install(self):
        """Establish a persistent connection to the database"""
        if self._is_installed:
            return

        logging.basicConfig()
        self.Session.update(self._from_environment())

        timeout = int(self.Session["AVALON_TIMEOUT"])
        mongo_url = self.Session["AVALON_MONGO"]
        kwargs = {
            "host": mongo_url,
            "serverSelectionTimeoutMS": timeout
        }

        port = extract_port_from_url(mongo_url)
        if port is not None:
            kwargs["port"] = int(port)

        self._mongo_client = pymongo.MongoClient(**kwargs)

        for retry in range(3):
            try:
                t1 = time.time()
                self._mongo_client.server_info()

            except Exception:
                self.log.error("Retrying..")
                time.sleep(1)
                timeout *= 1.5

            else:
                break

        else:
            raise IOError(
                "ERROR: Couldn't connect to %s in "
                "less than %.3f ms" % (self.Session["AVALON_MONGO"], timeout))

        self.log.info("Connected to %s, delay %.3f s" % (
            self.Session["AVALON_MONGO"], time.time() - t1))

        self._install_sentry()

        self._database = self._mongo_client[self.Session["AVALON_DB"]]
        self._is_installed = True

    def _install_sentry(self):
        if "AVALON_SENTRY" not in self.Session:
            return

        try:
            from raven import Client
            from raven.handlers.logging import SentryHandler
            from raven.conf import setup_logging
        except ImportError:
            # Note: There was a Sentry address in this Session
            return self.log.warning("Sentry disabled, raven not installed")

        client = Client(self.Session["AVALON_SENTRY"])

        # Transmit log messages to Sentry
        handler = SentryHandler(client)
        handler.setLevel(logging.WARNING)

        setup_logging(handler)

        self._sentry_client = client
        self._sentry_logging_handler = handler
        self.log.info(
            "Connected to Sentry @ %s" % self.Session["AVALON_SENTRY"]
        )

    def _from_environment(self):
        Session = {
            item[0]: os.getenv(item[0], item[1])
            for item in (
                # Root directory of projects on disk
                ("AVALON_PROJECTS", None),

                # Name of current Project
                ("AVALON_PROJECT", ""),

                # Name of current Asset
                ("AVALON_ASSET", ""),

                # Name of current silo
                ("AVALON_SILO", ""),

                # Name of current task
                ("AVALON_TASK", None),

                # Name of current app
                ("AVALON_APP", None),

                # Path to working directory
                ("AVALON_WORKDIR", None),

                # Name of current Config
                # TODO(marcus): Establish a suitable default config
                ("AVALON_CONFIG", "no_config"),

                # Name of Avalon in graphical user interfaces
                # Use this to customise the visual appearance of Avalon
                # to better integrate with your surrounding pipeline
                ("AVALON_LABEL", "Avalon"),

                # Used during any connections to the outside world
                ("AVALON_TIMEOUT", "1000"),

                # Address to Asset Database
                ("AVALON_MONGO", "mongodb://localhost:27017"),

                # Name of database used in MongoDB
                ("AVALON_DB", "avalon"),

                # Address to Sentry
                ("AVALON_SENTRY", None),

                # Address to Deadline Web Service
                # E.g. http://192.167.0.1:8082
                ("AVALON_DEADLINE", None),

                # Enable features not necessarily stable. The user's own risk
                ("AVALON_EARLY_ADOPTER", None),

                # Address of central asset repository, contains
                # the following interface:
                #   /upload
                #   /download
                #   /manager (optional)
                ("AVALON_LOCATION", "http://127.0.0.1"),

                # Boolean of whether to upload published material
                # to central asset repository
                ("AVALON_UPLOAD", None),

                # Generic username and password
                ("AVALON_USERNAME", "avalon"),
                ("AVALON_PASSWORD", "secret"),

                # Unique identifier for instances in working files
                ("AVALON_INSTANCE_ID", "avalon.instance"),
                ("AVALON_CONTAINER_ID", "avalon.container"),

                # Enable debugging
                ("AVALON_DEBUG", None),

            ) if os.getenv(item[0], item[1]) is not None
        }

        Session["schema"] = "avalon-core:session-2.0"
        try:
            schema.validate(Session)
        except schema.ValidationError as e:
            # TODO(marcus): Make this mandatory
            self.log.warning(e)

        return Session

    def uninstall(self):
        """Close any connection to the database"""
        try:
            self._mongo_client.close()
        except AttributeError:
            pass

        self._mongo_client = None
        self._database = None
        self._is_installed = False

    def active_project(self):
        """Return the name of the active project"""
        return self.Session["AVALON_PROJECT"]

    def activate_project(self, project_name):
        self.Session["AVALON_PROJECT"] = project_name

    def projects(self):
        """List available projects

        Returns:
            list of project documents

        """

        collection_names = self.collections()
        for project in collection_names:
            if project in ("system.indexes",):
                continue

            # Each collection will have exactly one project document
            document = self.find_project(project)

            if document is not None:
                yield document

    def locate(self, path):
        """Traverse a hierarchy from top-to-bottom

        Example:
            representation = locate(["hulk", "Bruce", "modelDefault", 1, "ma"])

        Returns:
            representation (ObjectId)

        """

        components = zip(
            ("project", "asset", "subset", "version", "representation"),
            path
        )

        parent = None
        for type_, name in components:
            latest = (type_ == "version") and name in (None, -1)

            try:
                if latest:
                    parent = self.find_one(
                        filter={
                            "type": type_,
                            "parent": parent
                        },
                        projection={"_id": 1},
                        sort=[("name", -1)]
                    )["_id"]
                else:
                    parent = self.find_one(
                        filter={
                            "type": type_,
                            "name": name,
                            "parent": parent
                        },
                        projection={"_id": 1},
                    )["_id"]

            except TypeError:
                return None

        return parent

    @auto_reconnect
    def collections(self):
        return self._database.collection_names()

    @auto_reconnect
    def find_project(self, project):
        return self._database[project].find_one({"type": "project"})

    @auto_reconnect
    def insert_one(self, item):
        assert isinstance(item, dict), "item must be of type <dict>"
        schema.validate(item)
        return self._database[self.Session["AVALON_PROJECT"]].insert_one(item)

    @auto_reconnect
    def insert_many(self, items, ordered=True):
        # check if all items are valid
        assert isinstance(items, list), "`items` must be of type <list>"
        for item in items:
            assert isinstance(item, dict), "`item` must be of type <dict>"
            schema.validate(item)

        return self._database[self.Session["AVALON_PROJECT"]].insert_many(
            items,
            ordered=ordered)

    @auto_reconnect
    def find(self, filter, projection=None, sort=None):
        return self._database[self.Session["AVALON_PROJECT"]].find(
            filter=filter,
            projection=projection,
            sort=sort
        )

    @auto_reconnect
    def find_one(self, filter, projection=None, sort=None):
        assert isinstance(filter, dict), "filter must be <dict>"

        return self._database[self.Session["AVALON_PROJECT"]].find_one(
            filter=filter,
            projection=projection,
            sort=sort
        )

    @auto_reconnect
    def save(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].save(
            *args, **kwargs)

    @auto_reconnect
    def replace_one(self, filter, replacement):
        return self._database[self.Session["AVALON_PROJECT"]].replace_one(
            filter, replacement)

    @auto_reconnect
    def update_many(self, filter, update):
        return self._database[self.Session["AVALON_PROJECT"]].update_many(
            filter, update)

    @auto_reconnect
    def distinct(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].distinct(
            *args, **kwargs)

    @auto_reconnect
    def drop(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].drop(
            *args, **kwargs)

    @auto_reconnect
    def delete_many(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].delete_many(
            *args, **kwargs)

    def parenthood(self, document):
        assert document is not None, "This is a bug"

        parents = list()

        while document.get("parent") is not None:
            document = self.find_one({"_id": document["parent"]})

            if document is None:
                break

            if document.get("type") == "master_version":
                _document = self.find_one({"_id": document["version_id"]})
                document["data"] = _document["data"]

            parents.append(document)

        return parents

    @contextlib.contextmanager
    def tempdir(self):
        tempdir = tempfile.mkdtemp()
        try:
            yield tempdir
        finally:
            shutil.rmtree(tempdir)

    def download(self, src, dst):
        """Download `src` to `dst`

        Arguments:
            src (str): URL to source file
            dst (str): Absolute path to destination file

        Yields tuple (progress, error):
            progress (int): Between 0-100
            error (Exception): Any exception raised when first making connection

        """

        try:
            response = requests.get(
                src,
                stream=True,
                auth=requests.auth.HTTPBasicAuth(
                    self.Session["AVALON_USERNAME"],
                    self.Session["AVALON_PASSWORD"]
                )
            )
        except requests.ConnectionError as e:
            yield None, e
            return

        with self.tempdir() as dirname:
            tmp = os.path.join(dirname, os.path.basename(src))

            with open(tmp, "wb") as f:
                total_length = response.headers.get("content-length")

                if total_length is None:  # no content length header
                    f.write(response.content)
                else:
                    downloaded = 0
                    total_length = int(total_length)
                    for data in response.iter_content(chunk_size=4096):
                        downloaded += len(data)
                        f.write(data)

                        yield int(100.0 * downloaded / total_length), None

            try:
                os.makedirs(os.path.dirname(dst))
            except OSError as e:
                # An already existing destination directory is fine.
                if e.errno != errno.EEXIST:
                    raise

            shutil.copy(tmp, dst)
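
# Editor's note: `download` is a generator, so callers drive it with a
# loop and receive `(progress, error)` tuples as documented above. A
# sketch; the URL and paths are placeholders, not values from the
# repository:
#
#     db = DbConnector()
#     db.install()
#     for progress, error in db.download(
#             "http://127.0.0.1/download/example.ma", "/tmp/example.ma"):
#         if error is not None:
#             raise error
#         print("downloaded: {}%".format(progress))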

@@ -2,7 +2,7 @@ import os
import sys
import copy
from pype.modules.rest_api import RestApi, route, abort, CallbackResult
from .io_nonsingleton import DbConnector
from avalon.api import AvalonMongoDB
from pype.api import config, execute, Logger

log = Logger().get_logger("AdobeCommunicator")

@@ -14,7 +14,7 @@ PUBLISH_PATHS = []


class AdobeRestApi(RestApi):
    dbcon = DbConnector()
    dbcon = AvalonMongoDB()

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
@@ -1,16 +1,27 @@
from Qt import QtWidgets
from avalon.tools import libraryloader
from pype.api import Logger
from pype.tools.launcher import LauncherWindow, actions


class AvalonApps:
    def __init__(self, main_parent=None, parent=None):
        self.log = Logger().get_logger(__name__)
        self.main_parent = main_parent

        self.tray_init(main_parent, parent)

    def tray_init(self, main_parent, parent):
        from avalon.tools.libraryloader import app
        from avalon import style
        from pype.tools.launcher import LauncherWindow, actions

        self.parent = parent
        self.main_parent = main_parent

        self.app_launcher = LauncherWindow()
        self.libraryloader = app.Window(
            icon=self.parent.icon,
            show_projects=True,
            show_libraries=True
        )
        self.libraryloader.setStyleSheet(style.load_stylesheet())

        # actions.register_default_actions()
        actions.register_config_actions()

@@ -23,6 +34,7 @@ class AvalonApps:

    # Definition of Tray menu
    def tray_menu(self, parent_menu=None):
        from Qt import QtWidgets
        # Actions
        if parent_menu is None:
            if self.parent is None:

@@ -48,11 +60,15 @@ class AvalonApps:
    def show_launcher(self):
        # if app_launcher doesn't exist create it / otherwise only show main window
        self.app_launcher.show()
        self.app_launcher.raise_()
        self.app_launcher.activateWindow()

    def show_library_loader(self):
        libraryloader.show(
            parent=self.main_parent,
            icon=self.parent.icon,
            show_projects=True,
            show_libraries=True
        )
        self.libraryloader.show()

        # Raise and activate the window
        # for MacOS
        self.libraryloader.raise_()
        # for Windows
        self.libraryloader.activateWindow()
        self.libraryloader.refresh()
@@ -4,14 +4,14 @@ import json
import bson
import bson.json_util
from pype.modules.rest_api import RestApi, abort, CallbackResult
from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
from avalon.api import AvalonMongoDB


class AvalonRestApi(RestApi):

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.dbcon = DbConnector()
        self.dbcon = AvalonMongoDB()
        self.dbcon.install()

    @RestApi.route("/projects/<project_name>", url_prefix="/avalon", methods="GET")
@@ -1,9 +1,8 @@
import os
import threading
import time

from pype.api import Logger
from avalon import style
from Qt import QtWidgets
from .widgets import ClockifySettings, MessageWidget
from .clockify_api import ClockifyAPI
from .constants import CLOCKIFY_FTRACK_USER_PATH


@@ -17,11 +16,21 @@ class ClockifyModule:

        os.environ["CLOCKIFY_WORKSPACE"] = self.workspace_name

        self.timer_manager = None
        self.MessageWidgetClass = None

        self.clockapi = ClockifyAPI(master_parent=self)

        self.log = Logger().get_logger(self.__class__.__name__, "PypeTray")
        self.tray_init(main_parent, parent)

    def tray_init(self, main_parent, parent):
        from .widgets import ClockifySettings, MessageWidget

        self.MessageWidgetClass = MessageWidget

        self.main_parent = main_parent
        self.parent = parent
        self.clockapi = ClockifyAPI(master_parent=self)
        self.message_widget = None
        self.widget_settings = ClockifySettings(main_parent, self)
        self.widget_settings_required = None

@@ -57,11 +66,10 @@ class ClockifyModule:
        )

        if 'AvalonApps' in modules:
            from launcher import lib
            actions_path = os.path.sep.join([
            actions_path = os.path.join(
                os.path.dirname(__file__),
                'launcher_actions'
            ])
            )
            current = os.environ.get('AVALON_ACTIONS', '')
            if current:
                current += os.pathsep

@@ -78,12 +86,12 @@ class ClockifyModule:
        self.stop_timer()

    def timer_started(self, data):
        if hasattr(self, 'timer_manager'):
        if self.timer_manager:
            self.timer_manager.start_timers(data)

    def timer_stopped(self):
        self.bool_timer_run = False
        if hasattr(self, 'timer_manager'):
        if self.timer_manager:
            self.timer_manager.stop_timers()

    def start_timer_check(self):

@@ -102,7 +110,7 @@ class ClockifyModule:
        self.thread_timer_check = None

    def check_running(self):
        import time

        while self.bool_thread_check_running is True:
            bool_timer_run = False
            if self.clockapi.get_in_progress() is not None:

@@ -156,15 +164,14 @@ class ClockifyModule:
            self.timer_stopped()

    def signed_in(self):
        if hasattr(self, 'timer_manager'):
            if not self.timer_manager:
                return
        if not self.timer_manager:
            return

            if not self.timer_manager.last_task:
                return
        if not self.timer_manager.last_task:
            return

            if self.timer_manager.is_running:
                self.start_timer_manager(self.timer_manager.last_task)
        if self.timer_manager.is_running:
            self.start_timer_manager(self.timer_manager.last_task)

    def start_timer(self, input_data):
        # If the api key is not entered then skip

@@ -197,11 +204,14 @@ class ClockifyModule:
                "<br><br>Please inform your Project Manager."
            ).format(project_name, str(self.clockapi.workspace_name))

            self.message_widget = MessageWidget(
                self.main_parent, msg, "Clockify - Info Message"
            )
            self.message_widget.closed.connect(self.on_message_widget_close)
            self.message_widget.show()
            if self.MessageWidgetClass:
                self.message_widget = self.MessageWidgetClass(
                    self.main_parent, msg, "Clockify - Info Message"
                )
                self.message_widget.closed.connect(
                    self.on_message_widget_close
                )
                self.message_widget.show()

            return

@@ -227,31 +237,29 @@ class ClockifyModule:
    # Definition of Tray menu
    def tray_menu(self, parent_menu):
        # Menu for Tray App
        self.menu = QtWidgets.QMenu('Clockify', parent_menu)
        self.menu.setProperty('submenu', 'on')
        self.menu.setStyleSheet(style.load_stylesheet())
        from Qt import QtWidgets
        menu = QtWidgets.QMenu("Clockify", parent_menu)
        menu.setProperty("submenu", "on")

        # Actions
        self.aShowSettings = QtWidgets.QAction(
            "Settings", self.menu
        )
        self.aStopTimer = QtWidgets.QAction(
            "Stop timer", self.menu
        )
        action_show_settings = QtWidgets.QAction("Settings", menu)
        action_stop_timer = QtWidgets.QAction("Stop timer", menu)

        self.menu.addAction(self.aShowSettings)
        self.menu.addAction(self.aStopTimer)
        menu.addAction(action_show_settings)
        menu.addAction(action_stop_timer)

        self.aShowSettings.triggered.connect(self.show_settings)
        self.aStopTimer.triggered.connect(self.stop_timer)
        action_show_settings.triggered.connect(self.show_settings)
        action_stop_timer.triggered.connect(self.stop_timer)

        self.action_stop_timer = action_stop_timer

        self.set_menu_visibility()

        parent_menu.addMenu(self.menu)
        parent_menu.addMenu(menu)

    def show_settings(self):
        self.widget_settings.input_api_key.setText(self.clockapi.get_api_key())
        self.widget_settings.show()

    def set_menu_visibility(self):
        self.aStopTimer.setVisible(self.bool_timer_run)
        self.action_stop_timer.setVisible(self.bool_timer_run)
@@ -30,7 +30,7 @@ class ClockifySync(api.Action):

        projects_info = {}
        for project in projects_to_sync:
            task_types = [task['name'] for task in project['config']['tasks']]
            task_types = project['config']['tasks'].keys()
            projects_info[project['name']] = task_types

        clockify_projects = self.clockapi.get_projects()
@@ -1,2 +1,12 @@
from .lib import *
from . import ftrack_server
from .ftrack_server import FtrackServer, check_ftrack_url
from .lib import BaseHandler, BaseEvent, BaseAction

__all__ = (
    "ftrack_server",
    "FtrackServer",
    "check_ftrack_url",
    "BaseHandler",
    "BaseEvent",
    "BaseAction"
)
@@ -5,7 +5,7 @@ from queue import Queue

from bson.objectid import ObjectId
from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
from avalon.api import AvalonMongoDB


class DeleteAssetSubset(BaseAction):

@@ -21,7 +21,7 @@ class DeleteAssetSubset(BaseAction):
    #: roles that are allowed to register this action
    role_list = ["Pypeclub", "Administrator", "Project Manager"]
    #: Db connection
    dbcon = DbConnector()
    dbcon = AvalonMongoDB()

    splitter = {"type": "label", "value": "---"}
    action_data_by_id = {}
@@ -6,7 +6,7 @@ import clique
from pymongo import UpdateOne

from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
from avalon.api import AvalonMongoDB
from pype.api import Anatomy

import avalon.pipeline

@@ -24,7 +24,7 @@ class DeleteOldVersions(BaseAction):
    role_list = ["Pypeclub", "Project Manager", "Administrator"]
    icon = statics_icon("ftrack", "action_icons", "PypeAdmin.svg")

    dbcon = DbConnector()
    dbcon = AvalonMongoDB()

    inteface_title = "Choose your preferences"
    splitter_item = {"type": "label", "value": "---"}
@@ -1,5 +1,6 @@
import os
import copy
import json
import shutil
import collections


@@ -9,10 +10,10 @@ from bson.objectid import ObjectId
from avalon import pipeline
from avalon.vendor import filelink

from pype.api import Anatomy
from pype.api import Anatomy, config
from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
from avalon.api import AvalonMongoDB


class Delivery(BaseAction):

@@ -23,7 +24,7 @@ class Delivery(BaseAction):
    role_list = ["Pypeclub", "Administrator", "Project manager"]
    icon = statics_icon("ftrack", "action_icons", "Delivery.svg")

    db_con = DbConnector()
    db_con = AvalonMongoDB()

    def discover(self, session, entities, event):
        for entity in entities:

@@ -41,36 +42,22 @@ class Delivery(BaseAction):
        items = []
        item_splitter = {"type": "label", "value": "---"}

        # Prepare component names for processing
        components = None
        project = None
        for entity in entities:
            if project is None:
                project_id = None
                for ent_info in entity["link"]:
                    if ent_info["type"].lower() == "project":
                        project_id = ent_info["id"]
                        break
        project_entity = self.get_project_from_entity(entities[0])
        project_name = project_entity["full_name"]
        self.db_con.install()
        self.db_con.Session["AVALON_PROJECT"] = project_name
        project_doc = self.db_con.find_one({"type": "project"})
        if not project_doc:
            return {
                "success": False,
                "message": (
                    "Didn't find project \"{}\" in avalon."
                ).format(project_name)
            }

                if project_id is None:
                    project = entity["asset"]["parent"]["project"]
                else:
                    project = session.query((
                        "select id, full_name from Project where id is \"{}\""
                    ).format(project_id)).one()
        repre_names = self._get_repre_names(entities)
        self.db_con.uninstall()

            _components = set(
                [component["name"] for component in entity["components"]]
            )
            if components is None:
                components = _components
                continue

            components = components.intersection(_components)
            if not components:
                break

        project_name = project["full_name"]
        items.append({
            "type": "hidden",
            "name": "__project_name__",

@@ -93,7 +80,7 @@ class Delivery(BaseAction):

        skipped = False
        # Add message if there are any common components
        if not components or not new_anatomies:
        if not repre_names or not new_anatomies:
            skipped = True
            items.append({
                "type": "label",

@@ -106,7 +93,7 @@ class Delivery(BaseAction):
            "value": skipped
        })

        if not components:
        if not repre_names:
            if len(entities) == 1:
                items.append({
                    "type": "label",

@@ -143,12 +130,12 @@ class Delivery(BaseAction):
            "type": "label"
        })

        for component in components:
        for repre_name in repre_names:
            items.append({
                "type": "boolean",
                "value": False,
                "label": component,
                "name": component
                "label": repre_name,
                "name": repre_name
            })

        items.append(item_splitter)

@@ -198,27 +185,233 @@ class Delivery(BaseAction):
            "title": title
        }

    def _get_repre_names(self, entities):
        version_ids = self._get_interest_version_ids(entities)
        repre_docs = self.db_con.find({
            "type": "representation",
            "parent": {"$in": version_ids}
        })
        return list(sorted(repre_docs.distinct("name")))

    def _get_interest_version_ids(self, entities):
        parent_ent_by_id = {}
        subset_names = set()
        version_nums = set()
        for entity in entities:
            asset = entity["asset"]
            parent = asset["parent"]
            parent_ent_by_id[parent["id"]] = parent

            subset_name = asset["name"]
            subset_names.add(subset_name)

            version = entity["version"]
            version_nums.add(version)

        asset_docs_by_ftrack_id = self._get_asset_docs(parent_ent_by_id)
        subset_docs = self._get_subset_docs(
            asset_docs_by_ftrack_id, subset_names, entities
        )
        version_docs = self._get_version_docs(
            asset_docs_by_ftrack_id, subset_docs, version_nums, entities
        )

        return [version_doc["_id"] for version_doc in version_docs]

    def _get_version_docs(
        self, asset_docs_by_ftrack_id, subset_docs, version_nums, entities
    ):
        subset_docs_by_id = {
            subset_doc["_id"]: subset_doc
            for subset_doc in subset_docs
        }
        version_docs = list(self.db_con.find({
            "type": "version",
            "parent": {"$in": list(subset_docs_by_id.keys())},
            "name": {"$in": list(version_nums)}
        }))
        version_docs_by_parent_id = collections.defaultdict(dict)
        for version_doc in version_docs:
            subset_doc = subset_docs_by_id[version_doc["parent"]]

            asset_id = subset_doc["parent"]
            subset_name = subset_doc["name"]
            version = version_doc["name"]
            if version_docs_by_parent_id[asset_id].get(subset_name) is None:
                version_docs_by_parent_id[asset_id][subset_name] = {}

            version_docs_by_parent_id[asset_id][subset_name][version] = (
                version_doc
            )

        filtered_versions = []
        for entity in entities:
            asset = entity["asset"]

            parent = asset["parent"]
            asset_doc = asset_docs_by_ftrack_id[parent["id"]]

            subsets_by_name = version_docs_by_parent_id.get(asset_doc["_id"])
            if not subsets_by_name:
                continue

            subset_name = asset["name"]
            version_docs_by_version = subsets_by_name.get(subset_name)
            if not version_docs_by_version:
                continue

            version = entity["version"]
            version_doc = version_docs_by_version.get(version)
            if version_doc:
                filtered_versions.append(version_doc)
        return filtered_versions

    def _get_subset_docs(
        self, asset_docs_by_ftrack_id, subset_names, entities
    ):
        asset_doc_ids = list()
        for asset_doc in asset_docs_by_ftrack_id.values():
            asset_doc_ids.append(asset_doc["_id"])

        subset_docs = list(self.db_con.find({
            "type": "subset",
            "parent": {"$in": asset_doc_ids},
            "name": {"$in": list(subset_names)}
        }))
        subset_docs_by_parent_id = collections.defaultdict(dict)
        for subset_doc in subset_docs:
            asset_id = subset_doc["parent"]
            subset_name = subset_doc["name"]
            subset_docs_by_parent_id[asset_id][subset_name] = subset_doc

        filtered_subsets = []
        for entity in entities:
            asset = entity["asset"]

            parent = asset["parent"]
            asset_doc = asset_docs_by_ftrack_id[parent["id"]]

            subsets_by_name = subset_docs_by_parent_id.get(asset_doc["_id"])
            if not subsets_by_name:
                continue

            subset_name = asset["name"]
            subset_doc = subsets_by_name.get(subset_name)
            if subset_doc:
                filtered_subsets.append(subset_doc)
        return filtered_subsets

    def _get_asset_docs(self, parent_ent_by_id):
        asset_docs = list(self.db_con.find({
            "type": "asset",
            "data.ftrackId": {"$in": list(parent_ent_by_id.keys())}
        }))
        asset_docs_by_ftrack_id = {
            asset_doc["data"]["ftrackId"]: asset_doc
            for asset_doc in asset_docs
        }

        entities_by_mongo_id = {}
        entities_by_names = {}
        for ftrack_id, entity in parent_ent_by_id.items():
            if ftrack_id not in asset_docs_by_ftrack_id:
                parent_mongo_id = entity["custom_attributes"].get(
                    CUST_ATTR_ID_KEY
                )
                if parent_mongo_id:
                    entities_by_mongo_id[ObjectId(parent_mongo_id)] = entity
                else:
                    entities_by_names[entity["name"]] = entity

        expressions = []
        if entities_by_mongo_id:
            expression = {
                "type": "asset",
                "_id": {"$in": list(entities_by_mongo_id.keys())}
            }
            expressions.append(expression)

        if entities_by_names:
            expression = {
                "type": "asset",
                "name": {"$in": list(entities_by_names.keys())}
            }
            expressions.append(expression)

        if expressions:
            if len(expressions) == 1:
                filter = expressions[0]
            else:
                filter = {"$or": expressions}

            asset_docs = self.db_con.find(filter)
            for asset_doc in asset_docs:
                if asset_doc["_id"] in entities_by_mongo_id:
                    entity = entities_by_mongo_id[asset_doc["_id"]]
                    asset_docs_by_ftrack_id[entity["id"]] = asset_doc

                elif asset_doc["name"] in entities_by_names:
|
||||
entity = entities_by_names[asset_doc["name"]]
|
||||
asset_docs_by_ftrack_id[entity["id"]] = asset_doc
|
||||
|
||||
return asset_docs_by_ftrack_id
|
||||
|
||||
def launch(self, session, entities, event):
|
||||
if "values" not in event["data"]:
|
||||
return
|
||||
|
||||
self.report_items = collections.defaultdict(list)
|
||||
|
||||
values = event["data"]["values"]
|
||||
skipped = values.pop("__skipped__")
|
||||
if skipped:
|
||||
return None
|
||||
|
||||
component_names = []
|
||||
user_id = event["source"]["user"]["id"]
|
||||
user_entity = session.query(
|
||||
"User where id is {}".format(user_id)
|
||||
).one()
|
||||
|
||||
job = session.create("Job", {
|
||||
"user": user_entity,
|
||||
"status": "running",
|
||||
"data": json.dumps({
|
||||
"description": "Delivery processing."
|
||||
})
|
||||
})
|
||||
session.commit()
|
||||
|
||||
try:
|
||||
self.db_con.install()
|
||||
self.real_launch(session, entities, event)
|
||||
job["status"] = "done"
|
||||
|
||||
except Exception:
|
||||
self.log.warning(
|
||||
"Failed during processing delivery action.",
|
||||
exc_info=True
|
||||
)
|
||||
|
||||
finally:
|
||||
if job["status"] != "done":
|
||||
job["status"] = "failed"
|
||||
session.commit()
|
||||
self.db_con.uninstall()
|
||||
|
||||
def real_launch(self, session, entities, event):
|
||||
self.log.info("Delivery action just started.")
|
||||
report_items = collections.defaultdict(list)
|
||||
|
||||
values = event["data"]["values"]
|
||||
|
||||
location_path = values.pop("__location_path__")
|
||||
anatomy_name = values.pop("__new_anatomies__")
|
||||
project_name = values.pop("__project_name__")
|
||||
|
||||
repre_names = []
|
||||
for key, value in values.items():
|
||||
if value is True:
|
||||
component_names.append(key)
|
||||
repre_names.append(key)
|
||||
|
||||
if not component_names:
|
||||
if not repre_names:
|
||||
return {
|
||||
"success": True,
|
||||
"message": "Not selected components to deliver."
|
||||
|
|
@ -230,64 +423,15 @@ class Delivery(BaseAction):
|
|||
if not os.path.exists(location_path):
|
||||
os.makedirs(location_path)
|
||||
|
||||
self.db_con.install()
|
||||
self.db_con.Session["AVALON_PROJECT"] = project_name
|
||||
|
||||
repres_to_deliver = []
|
||||
for entity in entities:
|
||||
asset = entity["asset"]
|
||||
subset_name = asset["name"]
|
||||
version = entity["version"]
|
||||
|
||||
parent = asset["parent"]
|
||||
parent_mongo_id = parent["custom_attributes"].get(CUST_ATTR_ID_KEY)
|
||||
if parent_mongo_id:
|
||||
parent_mongo_id = ObjectId(parent_mongo_id)
|
||||
else:
|
||||
asset_ent = self.db_con.find_one({
|
||||
"type": "asset",
|
||||
"data.ftrackId": parent["id"]
|
||||
})
|
||||
if not asset_ent:
|
||||
ent_path = "/".join(
|
||||
[ent["name"] for ent in parent["link"]]
|
||||
)
|
||||
msg = "Not synchronized entities to avalon"
|
||||
self.report_items[msg].append(ent_path)
|
||||
self.log.warning("{} <{}>".format(msg, ent_path))
|
||||
continue
|
||||
|
||||
parent_mongo_id = asset_ent["_id"]
|
||||
|
||||
subset_ent = self.db_con.find_one({
|
||||
"type": "subset",
|
||||
"parent": parent_mongo_id,
|
||||
"name": subset_name
|
||||
})
|
||||
|
||||
version_ent = self.db_con.find_one({
|
||||
"type": "version",
|
||||
"name": version,
|
||||
"parent": subset_ent["_id"]
|
||||
})
|
||||
|
||||
repre_ents = self.db_con.find({
|
||||
"type": "representation",
|
||||
"parent": version_ent["_id"]
|
||||
})
|
||||
|
||||
repres_by_name = {}
|
||||
for repre in repre_ents:
|
||||
repre_name = repre["name"]
|
||||
repres_by_name[repre_name] = repre
|
||||
|
||||
for component in entity["components"]:
|
||||
comp_name = component["name"]
|
||||
if comp_name not in component_names:
|
||||
continue
|
||||
|
||||
repre = repres_by_name.get(comp_name)
|
||||
repres_to_deliver.append(repre)
|
||||
self.log.debug("Collecting representations to process.")
|
||||
version_ids = self._get_interest_version_ids(entities)
|
||||
repres_to_deliver = list(self.db_con.find({
|
||||
"type": "representation",
|
||||
"parent": {"$in": version_ids},
|
||||
"name": {"$in": repre_names}
|
||||
}))
|
||||
|
||||
anatomy = Anatomy(project_name)
|
||||
|
||||
|
|
@ -304,9 +448,17 @@ class Delivery(BaseAction):
|
|||
for name in root_names:
|
||||
format_dict["root"][name] = location_path
|
||||
|
||||
datetime_data = config.get_datetime_data()
|
||||
for repre in repres_to_deliver:
|
||||
source_path = repre.get("data", {}).get("path")
|
||||
debug_msg = "Processing representation {}".format(repre["_id"])
|
||||
if source_path:
|
||||
debug_msg += " with published path {}.".format(source_path)
|
||||
self.log.debug(debug_msg)
|
||||
|
||||
# Get destination repre path
|
||||
anatomy_data = copy.deepcopy(repre["context"])
|
||||
anatomy_data.update(datetime_data)
|
||||
anatomy_filled = anatomy.format_all(anatomy_data)
|
||||
test_path = anatomy_filled["delivery"][anatomy_name]
|
||||
|
||||
|
|
@ -333,7 +485,7 @@ class Delivery(BaseAction):
|
|||
"- Invalid value DataType: \"{}\"<br>"
|
||||
).format(str(repre["_id"]), keys)
|
||||
|
||||
self.report_items[msg].append(sub_msg)
|
||||
report_items[msg].append(sub_msg)
|
||||
self.log.warning(
|
||||
"{} Representation: \"{}\" Filled: <{}>".format(
|
||||
msg, str(repre["_id"]), str(test_path)
|
||||
|
|
@ -355,20 +507,19 @@ class Delivery(BaseAction):
|
|||
anatomy,
|
||||
anatomy_name,
|
||||
anatomy_data,
|
||||
format_dict
|
||||
format_dict,
|
||||
report_items
|
||||
)
|
||||
|
||||
if not frame:
|
||||
self.process_single_file(*args)
|
||||
else:
|
||||
self.process_sequence(*args)
|
||||
|
||||
self.db_con.uninstall()
|
||||
|
||||
return self.report()
|
||||
return self.report(report_items)
|
||||
|
||||
def process_single_file(
|
||||
self, repre_path, anatomy, anatomy_name, anatomy_data, format_dict
|
||||
self, repre_path, anatomy, anatomy_name, anatomy_data, format_dict,
|
||||
report_items
|
||||
):
|
||||
anatomy_filled = anatomy.format(anatomy_data)
|
||||
if format_dict:
|
||||
|
|
@ -384,7 +535,8 @@ class Delivery(BaseAction):
|
|||
self.copy_file(repre_path, delivery_path)
|
||||
|
||||
def process_sequence(
|
||||
self, repre_path, anatomy, anatomy_name, anatomy_data, format_dict
|
||||
self, repre_path, anatomy, anatomy_name, anatomy_data, format_dict,
|
||||
report_items
|
||||
):
|
||||
dir_path, file_name = os.path.split(str(repre_path))
|
||||
|
||||
|
|
@ -398,7 +550,7 @@ class Delivery(BaseAction):
|
|||
|
||||
if not file_name_items:
|
||||
msg = "Source file was not found"
|
||||
self.report_items[msg].append(repre_path)
|
||||
report_items[msg].append(repre_path)
|
||||
self.log.warning("{} <{}>".format(msg, repre_path))
|
||||
return
|
||||
|
||||
|
|
@ -418,7 +570,7 @@ class Delivery(BaseAction):
|
|||
if src_collection is None:
|
||||
# TODO log error!
|
||||
msg = "Source collection of files was not found"
|
||||
self.report_items[msg].append(repre_path)
|
||||
report_items[msg].append(repre_path)
|
||||
self.log.warning("{} <{}>".format(msg, repre_path))
|
||||
return
|
||||
|
||||
|
|
@ -491,10 +643,10 @@ class Delivery(BaseAction):
|
|||
except OSError:
|
||||
shutil.copyfile(src_path, dst_path)
|
||||
|
||||
def report(self):
|
||||
def report(self, report_items):
|
||||
items = []
|
||||
title = "Delivery report"
|
||||
for msg, _items in self.report_items.items():
|
||||
for msg, _items in report_items.items():
|
||||
if not _items:
|
||||
continue
|
||||
|
||||
|
|
|
|||
|
|
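Editor's note: the delivery refactor above trades one `find_one` round trip per selected version for a single bulk `$in` query per document type. A minimal sketch of the same pattern outside Pype; the connection string, database, and `version_ids` below are illustrative placeholders, not part of the codebase:

```python
from pymongo import MongoClient

# Illustrative connection; real code goes through Pype's AvalonMongoDB wrapper.
db = MongoClient("mongodb://localhost:27017")["avalon"]["my_project"]

version_ids = []  # ObjectIds collected from the user's selection

# One round trip fetches representations for every selected version at once,
# instead of one find_one() call per entity.
repre_docs = db.find({
    "type": "representation",
    "parent": {"$in": version_ids}
})
repre_names = sorted(repre_docs.distinct("name"))
```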
@@ -6,7 +6,7 @@ import json
from bson.objectid import ObjectId

from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.api import Anatomy
-from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
+from avalon.api import AvalonMongoDB

from pype.modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY

@@ -25,7 +25,7 @@ class StoreThumbnailsToAvalon(BaseAction):
    icon = statics_icon("ftrack", "action_icons", "PypeAdmin.svg")

    thumbnail_key = "AVALON_THUMBNAIL_ROOT"
-    db_con = DbConnector()
+    db_con = AvalonMongoDB()

    def discover(self, session, entities, event):
        for entity in entities:
@@ -41,9 +41,9 @@ class ThumbToParent(BaseAction):
            parent = None
            thumbid = None
            if entity.entity_type.lower() == 'assetversion':
                try:
                    parent = entity['task']
                except Exception:
                    parent = entity['task']

                if parent is None:
                    par_ent = entity['link'][-2]
                    parent = session.get(par_ent['type'], par_ent['id'])
            else:

@@ -51,7 +51,7 @@ class ThumbToParent(BaseAction):
                    parent = entity['parent']
                except Exception as e:
                    msg = (
-                        "Durin Action 'Thumb to Parent'"
+                        "During Action 'Thumb to Parent'"
                        " went something wrong"
                    )
                    self.log.error(msg)

@@ -62,7 +62,10 @@ class ThumbToParent(BaseAction):
                parent['thumbnail_id'] = thumbid
                status = 'done'
            else:
-                status = 'failed'
+                raise Exception(
+                    "Parent or thumbnail id not found. Parent: {}. "
+                    "Thumbnail id: {}".format(parent, thumbid)
+                )

            # inform the user that the job is done
            job['status'] = status or 'done'
437
pype/modules/ftrack/events/action_push_frame_values_to_task.py
Normal file

@@ -0,0 +1,437 @@
import json
import collections
import ftrack_api
from pype.modules.ftrack.lib import BaseAction


class PushFrameValuesToTaskAction(BaseAction):
    """Action for testing purpose or as base for new actions."""

    # Ignore event handler by default
    ignore_me = True

    identifier = "admin.push_frame_values_to_task"
    label = "Pype Admin"
    variant = "- Push Frame values to Task"

    entities_query = (
        "select id, name, parent_id, link from TypedContext"
        " where project_id is \"{}\" and object_type_id in ({})"
    )
    cust_attrs_query = (
        "select id, key, object_type_id, is_hierarchical, default"
        " from CustomAttributeConfiguration"
        " where key in ({})"
    )
    cust_attr_value_query = (
        "select value, entity_id from CustomAttributeValue"
        " where entity_id in ({}) and configuration_id in ({})"
    )

    pushing_entity_types = {"Shot"}
    hierarchical_custom_attribute_keys = {"frameStart", "frameEnd"}
    custom_attribute_mapping = {
        "frameStart": "fstart",
        "frameEnd": "fend"
    }
    discover_role_list = {"Pypeclub", "Administrator", "Project Manager"}

    def register(self):
        modified_role_names = set()
        for role_name in self.discover_role_list:
            modified_role_names.add(role_name.lower())
        self.discover_role_list = modified_role_names

        self.session.event_hub.subscribe(
            "topic=ftrack.action.discover",
            self._discover,
            priority=self.priority
        )

        launch_subscription = (
            "topic=ftrack.action.launch and data.actionIdentifier={0}"
        ).format(self.identifier)
        self.session.event_hub.subscribe(launch_subscription, self._launch)

    def discover(self, session, entities, event):
        """ Validation """
        # Check if selection is valid
        valid_selection = False
        for ent in event["data"]["selection"]:
            # Ignore entities that are not tasks or projects
            if ent["entityType"].lower() == "show":
                valid_selection = True
                break

        if not valid_selection:
            return False

        # Get user and check his roles
        user_id = event.get("source", {}).get("user", {}).get("id")
        if not user_id:
            return False

        user = session.query("User where id is \"{}\"".format(user_id)).first()
        if not user:
            return False

        for role in user["user_security_roles"]:
            lowered_role = role["security_role"]["name"].lower()
            if lowered_role in self.discover_role_list:
                return True
        return False

    def launch(self, session, entities, event):
        self.log.debug("{}: Creating job".format(self.label))

        user_entity = session.query(
            "User where id is {}".format(event["source"]["user"]["id"])
        ).one()
        job = session.create("Job", {
            "user": user_entity,
            "status": "running",
            "data": json.dumps({
                "description": "Propagation of Frame attribute values to task."
            })
        })
        session.commit()

        try:
            project_entity = self.get_project_from_entity(entities[0])
            result = self.propagate_values(session, project_entity, event)
            job["status"] = "done"
            session.commit()

            return result

        except Exception:
            session.rollback()
            job["status"] = "failed"
            session.commit()

            msg = "Pushing Custom attribute values to task Failed"
            self.log.warning(msg, exc_info=True)
            return {
                "success": False,
                "message": msg
            }

        finally:
            if job["status"] == "running":
                job["status"] = "failed"
                session.commit()

    def task_attributes(self, session):
        task_object_type = session.query(
            "ObjectType where name is \"Task\""
        ).one()

        hier_attr_names = list(
            self.custom_attribute_mapping.keys()
        )
        entity_type_specific_names = list(
            self.custom_attribute_mapping.values()
        )
        joined_keys = self.join_keys(
            hier_attr_names + entity_type_specific_names
        )
        attribute_entities = session.query(
            self.cust_attrs_query.format(joined_keys)
        ).all()

        hier_attrs = []
        task_attrs = {}
        for attr in attribute_entities:
            attr_key = attr["key"]
            if attr["is_hierarchical"]:
                if attr_key in hier_attr_names:
                    hier_attrs.append(attr)
            elif attr["object_type_id"] == task_object_type["id"]:
                if attr_key in entity_type_specific_names:
                    task_attrs[attr_key] = attr["id"]
        return task_attrs, hier_attrs

    def join_keys(self, items):
        return ",".join(["\"{}\"".format(item) for item in items])

    def propagate_values(self, session, project_entity, event):
        self.log.debug("Querying project's entities \"{}\".".format(
            project_entity["full_name"]
        ))
        pushing_entity_types = tuple(
            ent_type.lower()
            for ent_type in self.pushing_entity_types
        )
        destination_object_types = []
        all_object_types = session.query("ObjectType").all()
        for object_type in all_object_types:
            lowered_name = object_type["name"].lower()
            if (
                lowered_name == "task"
                or lowered_name in pushing_entity_types
            ):
                destination_object_types.append(object_type)

        destination_object_type_ids = tuple(
            obj_type["id"]
            for obj_type in destination_object_types
        )
        entities = session.query(self.entities_query.format(
            project_entity["id"],
            self.join_keys(destination_object_type_ids)
        )).all()

        entities_by_id = {
            entity["id"]: entity
            for entity in entities
        }

        self.log.debug("Filtering Task entities.")
        task_entities_by_parent_id = collections.defaultdict(list)
        non_task_entities = []
        non_task_entity_ids = []
        for entity in entities:
            if entity.entity_type.lower() != "task":
                non_task_entities.append(entity)
                non_task_entity_ids.append(entity["id"])
                continue

            parent_id = entity["parent_id"]
            if parent_id in entities_by_id:
                task_entities_by_parent_id[parent_id].append(entity)

        task_attr_id_by_keys, hier_attrs = self.task_attributes(session)

        self.log.debug("Getting Custom attribute values from tasks' parents.")
        hier_values_by_entity_id = self.get_hier_values(
            session,
            hier_attrs,
            non_task_entity_ids
        )

        self.log.debug("Setting parents' values to task.")
        task_missing_keys = self.set_task_attr_values(
            session,
            task_entities_by_parent_id,
            hier_values_by_entity_id,
            task_attr_id_by_keys
        )

        self.log.debug("Setting values to entities themselves.")
        missing_keys_by_object_name = self.push_values_to_entities(
            session,
            non_task_entities,
            hier_values_by_entity_id
        )
        if task_missing_keys:
            missing_keys_by_object_name["Task"] = task_missing_keys
        if missing_keys_by_object_name:
            self.report(missing_keys_by_object_name, event)
        return True

    def report(self, missing_keys_by_object_name, event):
        splitter = {"type": "label", "value": "---"}

        title = "Push Custom Attribute values report:"

        items = []
        items.append({
            "type": "label",
            "value": "# Pushing values was not complete"
        })
        items.append({
            "type": "label",
            "value": (
                "<p>It was due to missing custom"
                " attribute configurations for specific entity type/s."
                " These configurations are not created automatically.</p>"
            )
        })

        log_message_items = []
        log_message_item_template = (
            "Entity type \"{}\" does not have created Custom Attribute/s: {}"
        )
        for object_name, missing_attr_names in (
            missing_keys_by_object_name.items()
        ):
            log_message_items.append(log_message_item_template.format(
                object_name, self.join_keys(missing_attr_names)
            ))

            items.append(splitter)
            items.append({
                "type": "label",
                "value": "## Entity type: {}".format(object_name)
            })

            items.append({
                "type": "label",
                "value": "<p>{}</p>".format("<br>".join(missing_attr_names))
            })

        self.log.warning((
            "Couldn't finish pushing attribute values because"
            " few entity types miss Custom attribute configurations:\n{}"
        ).format("\n".join(log_message_items)))

        self.show_interface(items, title, event)

    def get_hier_values(self, session, hier_attrs, focus_entity_ids):
        joined_entity_ids = self.join_keys(focus_entity_ids)
        hier_attr_ids = self.join_keys(
            tuple(hier_attr["id"] for hier_attr in hier_attrs)
        )
        hier_attrs_key_by_id = {
            hier_attr["id"]: hier_attr["key"]
            for hier_attr in hier_attrs
        }
        call_expr = [{
            "action": "query",
            "expression": self.cust_attr_value_query.format(
                joined_entity_ids, hier_attr_ids
            )
        }]
        if hasattr(session, "call"):
            [values] = session.call(call_expr)
        else:
            [values] = session._call(call_expr)

        values_per_entity_id = {}
        for item in values["data"]:
            entity_id = item["entity_id"]
            key = hier_attrs_key_by_id[item["configuration_id"]]

            if entity_id not in values_per_entity_id:
                values_per_entity_id[entity_id] = {}
            value = item["value"]
            if value is not None:
                values_per_entity_id[entity_id][key] = value

        output = {}
        for entity_id in focus_entity_ids:
            value = values_per_entity_id.get(entity_id)
            if value:
                output[entity_id] = value

        return output

    def set_task_attr_values(
        self,
        session,
        task_entities_by_parent_id,
        hier_values_by_entity_id,
        task_attr_id_by_keys
    ):
        missing_keys = set()
        for parent_id, values in hier_values_by_entity_id.items():
            task_entities = task_entities_by_parent_id[parent_id]
            for hier_key, value in values.items():
                key = self.custom_attribute_mapping[hier_key]
                if key not in task_attr_id_by_keys:
                    missing_keys.add(key)
                    continue

                for task_entity in task_entities:
                    _entity_key = collections.OrderedDict({
                        "configuration_id": task_attr_id_by_keys[key],
                        "entity_id": task_entity["id"]
                    })

                    session.recorded_operations.push(
                        ftrack_api.operation.UpdateEntityOperation(
                            "ContextCustomAttributeValue",
                            _entity_key,
                            "value",
                            ftrack_api.symbol.NOT_SET,
                            value
                        )
                    )
        session.commit()

        return missing_keys

    def push_values_to_entities(
        self,
        session,
        non_task_entities,
        hier_values_by_entity_id
    ):
        object_types = session.query(
            "ObjectType where name in ({})".format(
                self.join_keys(self.pushing_entity_types)
            )
        ).all()
        object_type_names_by_id = {
            object_type["id"]: object_type["name"]
            for object_type in object_types
        }
        joined_keys = self.join_keys(
            self.custom_attribute_mapping.values()
        )
        attribute_entities = session.query(
            self.cust_attrs_query.format(joined_keys)
        ).all()

        attrs_by_obj_id = {}
        for attr in attribute_entities:
            if attr["is_hierarchical"]:
                continue

            obj_id = attr["object_type_id"]
            if obj_id not in object_type_names_by_id:
                continue

            if obj_id not in attrs_by_obj_id:
                attrs_by_obj_id[obj_id] = {}

            attr_key = attr["key"]
            attrs_by_obj_id[obj_id][attr_key] = attr["id"]

        entities_by_obj_id = collections.defaultdict(list)
        for entity in non_task_entities:
            entities_by_obj_id[entity["object_type_id"]].append(entity)

        missing_keys_by_object_id = collections.defaultdict(set)
        for obj_type_id, attr_keys in attrs_by_obj_id.items():
            entities = entities_by_obj_id.get(obj_type_id)
            if not entities:
                continue

            for entity in entities:
                values = hier_values_by_entity_id.get(entity["id"])
                if not values:
                    continue

                for hier_key, value in values.items():
                    key = self.custom_attribute_mapping[hier_key]
                    if key not in attr_keys:
                        missing_keys_by_object_id[obj_type_id].add(key)
                        continue

                    _entity_key = collections.OrderedDict({
                        "configuration_id": attr_keys[key],
                        "entity_id": entity["id"]
                    })

                    session.recorded_operations.push(
                        ftrack_api.operation.UpdateEntityOperation(
                            "ContextCustomAttributeValue",
                            _entity_key,
                            "value",
                            ftrack_api.symbol.NOT_SET,
                            value
                        )
                    )
        session.commit()

        missing_keys_by_object_name = {}
        for obj_id, missing_keys in missing_keys_by_object_id.items():
            obj_name = object_type_names_by_id[obj_id]
            missing_keys_by_object_name[obj_name] = missing_keys

        return missing_keys_by_object_name


def register(session, plugins_presets={}):
    PushFrameValuesToTaskAction(session, plugins_presets).register()
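Editor's note: both this action and the event handler in the next file write custom attribute values through ftrack's low-level operation queue instead of plain attribute assignment, which lets one commit batch many value updates. A hedged, stand-alone sketch of that pattern; the configuration and entity ids are placeholders that real code resolves with the queries above:

```python
import collections
import ftrack_api

session = ftrack_api.Session()  # assumes ftrack credentials in the environment

# Both ids are placeholders; real code resolves them from ftrack queries.
entity_key = collections.OrderedDict({
    "configuration_id": "<custom-attribute-configuration-id>",
    "entity_id": "<shot-or-task-id>",
})

# Queue a direct value update on the custom attribute link table, then
# persist every queued operation with a single commit.
session.recorded_operations.push(
    ftrack_api.operation.UpdateEntityOperation(
        "ContextCustomAttributeValue",
        entity_key,
        "value",
        ftrack_api.symbol.NOT_SET,
        1001,
    )
)
session.commit()
```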
230
pype/modules/ftrack/events/event_push_frame_values_to_task.py
Normal file

@@ -0,0 +1,230 @@
import collections
import ftrack_api
from pype.modules.ftrack import BaseEvent


class PushFrameValuesToTaskEvent(BaseEvent):
    # Ignore event handler by default
    ignore_me = True

    cust_attrs_query = (
        "select id, key, object_type_id, is_hierarchical, default"
        " from CustomAttributeConfiguration"
        " where key in ({}) and object_type_id in ({})"
    )

    interest_entity_types = {"Shot"}
    interest_attributes = {"frameStart", "frameEnd"}
    interest_attr_mapping = {
        "frameStart": "fstart",
        "frameEnd": "fend"
    }
    _cached_task_object_id = None
    _cached_interest_object_ids = None

    @staticmethod
    def join_keys(keys):
        return ",".join(["\"{}\"".format(key) for key in keys])

    @classmethod
    def task_object_id(cls, session):
        if cls._cached_task_object_id is None:
            task_object_type = session.query(
                "ObjectType where name is \"Task\""
            ).one()
            cls._cached_task_object_id = task_object_type["id"]
        return cls._cached_task_object_id

    @classmethod
    def interest_object_ids(cls, session):
        if cls._cached_interest_object_ids is None:
            object_types = session.query(
                "ObjectType where name in ({})".format(
                    cls.join_keys(cls.interest_entity_types)
                )
            ).all()
            cls._cached_interest_object_ids = tuple(
                object_type["id"]
                for object_type in object_types
            )
        return cls._cached_interest_object_ids

    def launch(self, session, event):
        interesting_data = self.extract_interesting_data(session, event)
        if not interesting_data:
            return

        entities = self.get_entities(session, interesting_data)
        if not entities:
            return

        entities_by_id = {
            entity["id"]: entity
            for entity in entities
        }
        for entity_id in tuple(interesting_data.keys()):
            if entity_id not in entities_by_id:
                interesting_data.pop(entity_id)

        task_entities = self.get_task_entities(session, interesting_data)

        attrs_by_obj_id = self.attrs_configurations(session)
        if not attrs_by_obj_id:
            self.log.warning((
                "There is not created Custom Attributes {}"
                " for \"Task\" entity type."
            ).format(self.join_keys(self.interest_attributes)))
            return

        task_entities_by_parent_id = collections.defaultdict(list)
        for task_entity in task_entities:
            task_entities_by_parent_id[task_entity["parent_id"]].append(
                task_entity
            )

        missing_keys_by_object_name = collections.defaultdict(set)
        for parent_id, values in interesting_data.items():
            entities = task_entities_by_parent_id.get(parent_id) or []
            entities.append(entities_by_id[parent_id])

            for hier_key, value in values.items():
                changed_ids = []
                for entity in entities:
                    key = self.interest_attr_mapping[hier_key]
                    entity_attrs_mapping = (
                        attrs_by_obj_id.get(entity["object_type_id"])
                    )
                    if not entity_attrs_mapping:
                        missing_keys_by_object_name[entity.entity_type].add(
                            key
                        )
                        continue

                    configuration_id = entity_attrs_mapping.get(key)
                    if not configuration_id:
                        missing_keys_by_object_name[entity.entity_type].add(
                            key
                        )
                        continue

                    changed_ids.append(entity["id"])
                    entity_key = collections.OrderedDict({
                        "configuration_id": configuration_id,
                        "entity_id": entity["id"]
                    })
                    if value is None:
                        op = ftrack_api.operation.DeleteEntityOperation(
                            "CustomAttributeValue",
                            entity_key
                        )
                    else:
                        op = ftrack_api.operation.UpdateEntityOperation(
                            "ContextCustomAttributeValue",
                            entity_key,
                            "value",
                            ftrack_api.symbol.NOT_SET,
                            value
                        )

                    session.recorded_operations.push(op)
                self.log.info((
                    "Changing Custom Attribute \"{}\" to value"
                    " \"{}\" on entities: {}"
                ).format(key, value, self.join_keys(changed_ids)))
                try:
                    session.commit()
                except Exception:
                    session.rollback()
                    self.log.warning(
                        "Changing of values failed.",
                        exc_info=True
                    )
        if not missing_keys_by_object_name:
            return

        msg_items = []
        for object_name, missing_keys in missing_keys_by_object_name.items():
            msg_items.append(
                "{}: ({})".format(object_name, self.join_keys(missing_keys))
            )

        self.log.warning((
            "Missing Custom Attribute configuration"
            " per specific object types: {}"
        ).format(", ".join(msg_items)))

    def extract_interesting_data(self, session, event):
        # Filter if event contains relevant data
        entities_info = event["data"].get("entities")
        if not entities_info:
            return

        interesting_data = {}
        for entity_info in entities_info:
            # Care only about tasks
            if entity_info.get("entityType") != "task":
                continue

            # Care only about entities with changes
            changes = entity_info.get("changes") or {}
            if not changes:
                continue

            # Care only about changes of specific keys
            entity_changes = {}
            for key in self.interest_attributes:
                if key in changes:
                    entity_changes[key] = changes[key]["new"]

            if not entity_changes:
                continue

            # Do not care about "Task" entity_type
            task_object_id = self.task_object_id(session)
            if entity_info.get("objectTypeId") == task_object_id:
                continue

            interesting_data[entity_info["entityId"]] = entity_changes
        return interesting_data

    def get_entities(self, session, interesting_data):
        entities = session.query(
            "TypedContext where id in ({})".format(
                self.join_keys(interesting_data.keys())
            )
        ).all()

        output = []
        interest_object_ids = self.interest_object_ids(session)
        for entity in entities:
            if entity["object_type_id"] in interest_object_ids:
                output.append(entity)
        return output

    def get_task_entities(self, session, interesting_data):
        return session.query(
            "Task where parent_id in ({})".format(
                self.join_keys(interesting_data.keys())
            )
        ).all()

    def attrs_configurations(self, session):
        object_ids = list(self.interest_object_ids(session))
        object_ids.append(self.task_object_id(session))

        attrs = session.query(self.cust_attrs_query.format(
            self.join_keys(self.interest_attr_mapping.values()),
            self.join_keys(object_ids)
        )).all()

        output = {}
        for attr in attrs:
            obj_id = attr["object_type_id"]
            if obj_id not in output:
                output[obj_id] = {}
            output[obj_id][attr["key"]] = attr["id"]
        return output


def register(session, plugins_presets):
    PushFrameValuesToTaskEvent(session, plugins_presets).register()
@@ -19,12 +19,12 @@ from pype.modules.ftrack.lib.avalon_sync import (
import ftrack_api
from pype.modules.ftrack import BaseEvent

-from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
+from avalon.api import AvalonMongoDB


class SyncToAvalonEvent(BaseEvent):

-    dbcon = DbConnector()
+    dbcon = AvalonMongoDB()

    interest_entTypes = ["show", "task"]
    ignore_ent_types = ["Milestone"]

@@ -717,6 +717,9 @@ class SyncToAvalonEvent(BaseEvent):
        if not self.ftrack_removed:
            return
        ent_infos = self.ftrack_removed
+        self.log.debug(
+            "Processing removed entities: {}".format(str(ent_infos))
+        )
        removable_ids = []
        recreate_ents = []
        removed_names = []

@@ -878,8 +881,9 @@ class SyncToAvalonEvent(BaseEvent):
        self.process_session.commit()

        found_idx = None
-        for idx, _entity in enumerate(self._avalon_ents):
-            if _entity["_id"] == avalon_entity["_id"]:
+        proj_doc, asset_docs = self._avalon_ents
+        for idx, asset_doc in enumerate(asset_docs):
+            if asset_doc["_id"] == avalon_entity["_id"]:
                found_idx = idx
                break

@@ -894,7 +898,8 @@ class SyncToAvalonEvent(BaseEvent):
            new_entity_id
        )
        # Update cached entities
-        self._avalon_ents[found_idx] = avalon_entity
+        asset_docs[found_idx] = avalon_entity
+        self._avalon_ents = proj_doc, asset_docs

        if self._avalon_ents_by_id is not None:
            mongo_id = avalon_entity["_id"]

@@ -1258,6 +1263,10 @@ class SyncToAvalonEvent(BaseEvent):
        if not ent_infos:
            return

+        self.log.debug(
+            "Processing renamed entities: {}".format(str(ent_infos))
+        )

        renamed_tasks = {}
        not_found = {}
        changeable_queue = queue.Queue()

@@ -1453,6 +1462,10 @@ class SyncToAvalonEvent(BaseEvent):
        if not ent_infos:
            return

+        self.log.debug(
+            "Processing added entities: {}".format(str(ent_infos))
+        )

        cust_attrs, hier_attrs = self.avalon_cust_attrs
        entity_type_conf_ids = {}
        # Skip if already exist in avalon db or tasks entities

@@ -1729,6 +1742,10 @@ class SyncToAvalonEvent(BaseEvent):
        if not self.ftrack_moved:
            return

+        self.log.debug(
+            "Processing moved entities: {}".format(str(self.ftrack_moved))
+        )

        ftrack_moved = {k: v for k, v in sorted(
            self.ftrack_moved.items(),
            key=(lambda line: len(

@@ -1859,6 +1876,10 @@ class SyncToAvalonEvent(BaseEvent):
        if not self.ftrack_updated:
            return

+        self.log.debug(
+            "Processing updated entities: {}".format(str(self.ftrack_updated))
+        )

        ent_infos = self.ftrack_updated
        ftrack_mongo_mapping = {}
        not_found_ids = []
@@ -4,7 +4,7 @@ import subprocess

from pype.modules.ftrack import BaseEvent
from pype.modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
-from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
+from avalon.api import AvalonMongoDB

from bson.objectid import ObjectId

@@ -37,7 +37,7 @@ class UserAssigmentEvent(BaseEvent):
    3) path to publish files of task user was (de)assigned to
    """

-    db_con = DbConnector()
+    db_con = AvalonMongoDB()

    def error(self, *err):
        for e in err:
@@ -1,2 +1,8 @@
from .ftrack_server import FtrackServer
from .lib import check_ftrack_url
+
+
+__all__ = (
+    "FtrackServer",
+    "check_ftrack_url"
+)
@@ -16,9 +16,9 @@ import pymongo
from pype.api import decompose_url


-class NotActiveTable(Exception):
+class NotActiveCollection(Exception):
    def __init__(self, *args, **kwargs):
-        msg = "Active table is not set. (This is bug)"
+        msg = "Active collection is not set. (This is bug)"
        if not (args or kwargs):
            args = [msg]
        super().__init__(*args, **kwargs)

@@ -40,12 +40,12 @@ def auto_reconnect(func):
    return decorated


-def check_active_table(func):
+def check_active_collection(func):
    """Check if CustomDbConnector has active collection."""
    @functools.wraps(func)
    def decorated(obj, *args, **kwargs):
-        if not obj.active_table:
-            raise NotActiveTable()
+        if not obj.active_collection:
+            raise NotActiveCollection()
        return func(obj, *args, **kwargs)
    return decorated

@@ -55,7 +55,7 @@ class CustomDbConnector:
    timeout = int(os.environ["AVALON_TIMEOUT"])

    def __init__(
-        self, uri, database_name, port=None, table_name=None
+        self, uri, database_name, port=None, collection_name=None
    ):
        self._mongo_client = None
        self._sentry_client = None

@@ -76,10 +76,10 @@ class CustomDbConnector:
        self._port = port
        self._database_name = database_name

-        self.active_table = table_name
+        self.active_collection = collection_name

    def __getitem__(self, key):
-        # gives direct access to collection without setting `active_table`
+        # gives direct access to collection without setting `active_collection`
        return self._database[key]

    def __getattribute__(self, attr):

@@ -88,9 +88,11 @@ class CustomDbConnector:
        try:
            return super(CustomDbConnector, self).__getattribute__(attr)
        except AttributeError:
-            if self.active_table is None:
-                raise NotActiveTable()
-            return self._database[self.active_table].__getattribute__(attr)
+            if self.active_collection is None:
+                raise NotActiveCollection()
+            return self._database[self.active_collection].__getattribute__(
+                attr
+            )

    def install(self):
        """Establish a persistent connection to the database"""

@@ -146,46 +148,30 @@ class CustomDbConnector:
        self._is_installed = False
        atexit.unregister(self.uninstall)

-    def create_table(self, name, **options):
-        if self.exist_table(name):
+    def collection_exists(self, collection_name):
+        return collection_name in self.collections()
+
+    def create_collection(self, name, **options):
+        if self.collection_exists(name):
            return

        return self._database.create_collection(name, **options)

-    def exist_table(self, table_name):
-        return table_name in self.tables()
-
-    def create_table(self, name, **options):
-        if self.exist_table(name):
-            return
-
-        return self._database.create_collection(name, **options)
-
-    def exist_table(self, table_name):
-        return table_name in self.tables()
-
-    def tables(self):
-        """List available tables
-        Returns:
-            list of table names
-        """
-        collection_names = self.collections()
-        for table_name in collection_names:
-            if table_name in ("system.indexes",):
-                continue
-            yield table_name
-
    @auto_reconnect
    def collections(self):
-        return self._database.collection_names()
+        for col_name in self._database.collection_names():
+            if col_name not in ("system.indexes",):
+                yield col_name

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def insert_one(self, item, **options):
        assert isinstance(item, dict), "item must be of type <dict>"
-        return self._database[self.active_table].insert_one(item, **options)
+        return self._database[self.active_collection].insert_one(
+            item, **options
+        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def insert_many(self, items, ordered=True, **options):
        # check if all items are valid

@@ -194,72 +180,74 @@ class CustomDbConnector:
        assert isinstance(item, dict), "`item` must be of type <dict>"

        options["ordered"] = ordered
-        return self._database[self.active_table].insert_many(items, **options)
+        return self._database[self.active_collection].insert_many(
+            items, **options
+        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def find(self, filter, projection=None, sort=None, **options):
        options["sort"] = sort
-        return self._database[self.active_table].find(
+        return self._database[self.active_collection].find(
            filter, projection, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def find_one(self, filter, projection=None, sort=None, **options):
        assert isinstance(filter, dict), "filter must be <dict>"
        options["sort"] = sort
-        return self._database[self.active_table].find_one(
+        return self._database[self.active_collection].find_one(
            filter,
            projection,
            **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def replace_one(self, filter, replacement, **options):
-        return self._database[self.active_table].replace_one(
+        return self._database[self.active_collection].replace_one(
            filter, replacement, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def update_one(self, filter, update, **options):
-        return self._database[self.active_table].update_one(
+        return self._database[self.active_collection].update_one(
            filter, update, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def update_many(self, filter, update, **options):
-        return self._database[self.active_table].update_many(
+        return self._database[self.active_collection].update_many(
            filter, update, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def distinct(self, **options):
-        return self._database[self.active_table].distinct(**options)
+        return self._database[self.active_collection].distinct(**options)

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def drop_collection(self, name_or_collection, **options):
-        return self._database[self.active_table].drop(
+        return self._database[self.active_collection].drop(
            name_or_collection, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def delete_one(self, filter, collation=None, **options):
        options["collation"] = collation
-        return self._database[self.active_table].delete_one(
+        return self._database[self.active_collection].delete_one(
            filter, **options
        )

-    @check_active_table
+    @check_active_collection
    @auto_reconnect
    def delete_many(self, filter, collation=None, **options):
        options["collation"] = collation
-        return self._database[self.active_table].delete_many(
+        return self._database[self.active_collection].delete_many(
            filter, **options
        )
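Editor's note: the `check_active_collection` guard above is a plain function decorator. A minimal, self-contained sketch of the same guard pattern; the `Connector` class and its method are illustrative, not the repository's implementation:

```python
import functools


class NotActiveCollection(Exception):
    pass


def check_active_collection(func):
    # Reject calls made before a collection was selected on the connector.
    @functools.wraps(func)
    def decorated(obj, *args, **kwargs):
        if not obj.active_collection:
            raise NotActiveCollection("Active collection is not set.")
        return func(obj, *args, **kwargs)
    return decorated


class Connector:
    def __init__(self, collection_name=None):
        self.active_collection = collection_name

    @check_active_collection
    def find_one(self, query):
        return {"collection": self.active_collection, "query": query}


print(Connector("project_x").find_one({}))  # works
try:
    Connector().find_one({})
except NotActiveCollection as exc:
    print("guard fired:", exc)
```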
@@ -26,7 +26,7 @@ from pype.api import (
    compose_url
)

-from pype.modules.ftrack.lib.custom_db_connector import CustomDbConnector
+from .custom_db_connector import CustomDbConnector


TOPIC_STATUS_SERVER = "pype.event.server.status"

@@ -153,9 +153,9 @@ class StorerEventHub(SocketBaseEventHub):
class ProcessEventHub(SocketBaseEventHub):

    hearbeat_msg = b"processor"
-    uri, port, database, table_name = get_ftrack_event_mongo_info()
+    uri, port, database, collection_name = get_ftrack_event_mongo_info()

-    is_table_created = False
+    is_collection_created = False
    pypelog = Logger().get_logger("Session Processor")

    def __init__(self, *args, **kwargs):

@@ -163,7 +163,7 @@ class ProcessEventHub(SocketBaseEventHub):
            self.uri,
            self.database,
            self.port,
-            self.table_name
+            self.collection_name
        )
        super(ProcessEventHub, self).__init__(*args, **kwargs)

@@ -184,7 +184,7 @@ class ProcessEventHub(SocketBaseEventHub):
                "Error with Mongo access, probably permissions."
                "Check if exist database with name \"{}\""
                " and collection \"{}\" inside."
-            ).format(self.database, self.table_name))
+            ).format(self.database, self.collection_name))
            self.sock.sendall(b"MongoError")
            sys.exit(0)

@@ -205,10 +205,16 @@ class ProcessEventHub(SocketBaseEventHub):
        else:
            try:
                self._handle(event)
+
+                mongo_id = event["data"].get("_event_mongo_id")
+                if mongo_id is None:
+                    continue
+
                self.dbcon.update_one(
-                    {"id": event["id"]},
+                    {"_id": mongo_id},
                    {"$set": {"pype_data.is_processed": True}}
                )
+
            except pymongo.errors.AutoReconnect:
                self.pypelog.error((
                    "Mongo server \"{}\" is not responding, exiting."

@@ -244,6 +250,7 @@ class ProcessEventHub(SocketBaseEventHub):
        }
        try:
            event = ftrack_api.event.base.Event(**new_event_data)
+            event["data"]["_event_mongo_id"] = event_data["_id"]
        except Exception:
            self.logger.exception(L(
                'Failed to convert payload into event: {0}',
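Editor's note: the change above attaches the Mongo `_id` to the event payload when it is loaded, so the processed flag can later be set with one keyed update instead of a lookup by the ftrack event id. A hedged sketch of the equivalent pymongo call; connection details and collection names are placeholders:

```python
from pymongo import MongoClient

# Placeholder connection; the real connector comes from CustomDbConnector.
events = MongoClient("mongodb://localhost:27017")["pype"]["ftrack_events"]


def mark_processed(mongo_id):
    # The _id was stored on the event payload when the event was read from
    # Mongo, so one keyed update flips the processed flag.
    events.update_one(
        {"_id": mongo_id},
        {"$set": {"pype_data.is_processed": True}}
    )
```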
@@ -12,7 +12,7 @@ from pype.modules.ftrack.ftrack_server.lib import (
    SocketSession, StatusEventHub,
    TOPIC_STATUS_SERVER, TOPIC_STATUS_SERVER_RESULT
)
-from pype.api import Logger, config
+from pype.api import Logger

log = Logger().get_logger("Event storer")
action_identifier = (

@@ -23,17 +23,7 @@ action_data = {
    "label": "Pype Admin",
    "variant": "- Event server Status ({})".format(host_ip),
    "description": "Get Infromation about event server",
-    "actionIdentifier": action_identifier,
-    "icon": "{}/ftrack/action_icons/PypeAdmin.svg".format(
-        os.environ.get(
-            "PYPE_STATICS_SERVER",
-            "http://localhost:{}".format(
-                config.get_presets().get("services", {}).get(
-                    "rest_api", {}
-                ).get("default_port", 8021)
-            )
-        )
-    )
+    "actionIdentifier": action_identifier
}
|
|||
|
|
@ -12,7 +12,9 @@ from pype.modules.ftrack.ftrack_server.lib import (
|
|||
get_ftrack_event_mongo_info,
|
||||
TOPIC_STATUS_SERVER, TOPIC_STATUS_SERVER_RESULT
|
||||
)
|
||||
from pype.modules.ftrack.lib.custom_db_connector import CustomDbConnector
|
||||
from pype.modules.ftrack.ftrack_server.custom_db_connector import (
|
||||
CustomDbConnector
|
||||
)
|
||||
from pype.api import Logger
|
||||
|
||||
log = Logger().get_logger("Event storer")
|
||||
|
|
@ -23,8 +25,8 @@ class SessionFactory:
|
|||
session = None
|
||||
|
||||
|
||||
uri, port, database, table_name = get_ftrack_event_mongo_info()
|
||||
dbcon = CustomDbConnector(uri, database, port, table_name)
|
||||
uri, port, database, collection_name = get_ftrack_event_mongo_info()
|
||||
dbcon = CustomDbConnector(uri, database, port, collection_name)
|
||||
|
||||
# ignore_topics = ["ftrack.meta.connected"]
|
||||
ignore_topics = []
|
||||
|
|
@ -200,7 +202,7 @@ def main(args):
|
|||
"Error with Mongo access, probably permissions."
|
||||
"Check if exist database with name \"{}\""
|
||||
" and collection \"{}\" inside."
|
||||
).format(database, table_name))
|
||||
).format(database, collection_name))
|
||||
sock.sendall(b"MongoError")
|
||||
|
||||
finally:
|
||||
|
|
|
|||
|
|
@@ -5,7 +5,7 @@ import json
import collections
import copy

-from pype.modules.ftrack.lib.io_nonsingleton import DbConnector
+from avalon.api import AvalonMongoDB

import avalon
import avalon.api

@@ -16,6 +16,7 @@ from bson.objectid import ObjectId
from bson.errors import InvalidId
from pymongo import UpdateOne
import ftrack_api
+from pype.api import config


log = Logger().get_logger(__name__)

@@ -23,9 +24,9 @@ log = Logger().get_logger(__name__)

# Current schemas for avalon types
EntitySchemas = {
-    "project": "avalon-core:project-2.0",
+    "project": "avalon-core:project-2.1",
    "asset": "avalon-core:asset-3.0",
-    "config": "avalon-core:config-1.0"
+    "config": "avalon-core:config-1.1"
}

# Group name of custom attributes

@@ -50,7 +51,7 @@ def check_regex(name, entity_type, in_schema=None, schema_patterns=None):
    if in_schema:
        schema_name = in_schema
    elif entity_type == "project":
-        schema_name = "project-2.0"
+        schema_name = "project-2.1"
    elif entity_type == "task":
        schema_name = "task"

@@ -103,6 +104,14 @@ def get_pype_attr(session, split_hierarchical=True):


def from_dict_to_set(data):
+    """
+    Converts 'data' into $set part of MongoDB update command.
+    Args:
+        data: (dictionary) - up-to-date data from Ftrack
+
+    Returns:
+        (dictionary) - { "$set" : "{..}"}
+    """
    result = {"$set": {}}
    dict_queue = queue.Queue()
    dict_queue.put((None, data))

@@ -114,7 +123,8 @@ def from_dict_to_set(data):
            if _key is not None:
                new_key = "{}.{}".format(_key, key)

-            if not isinstance(value, dict):
+            if not isinstance(value, dict) or \
+                    (isinstance(value, dict) and not bool(value)):  # empty dict
                result["$set"][new_key] = value
                continue
            dict_queue.put((new_key, value))
|
|||
|
||||
def get_avalon_project_template(project_name):
|
||||
"""Get avalon template
|
||||
Args:
|
||||
project_name: (string)
|
||||
Returns:
|
||||
dictionary with templates
|
||||
"""
|
||||
|
|
@ -135,6 +147,16 @@ def get_avalon_project_template(project_name):
|
|||
|
||||
|
||||
def get_project_apps(in_app_list):
|
||||
"""
|
||||
Returns metadata information about apps in 'in_app_list' enhanced
|
||||
from toml files.
|
||||
Args:
|
||||
in_app_list: (list) - names of applications
|
||||
|
||||
Returns:
|
||||
tuple (list, dictionary) - list of dictionaries about apps
|
||||
dictionary of warnings
|
||||
"""
|
||||
apps = []
|
||||
# TODO report
|
||||
missing_toml_msg = "Missing config file for application"
|
||||
|
|
@ -239,8 +261,30 @@ def get_hierarchical_attributes(session, entity, attr_names, attr_defaults={}):
|
|||
return hier_values
|
||||
|
||||
|
||||
def get_task_short_name(task_type):
|
||||
"""
|
||||
Returns short name (code) for 'task_type'. Short name stored in
|
||||
metadata dictionary in project.config per each 'task_type'.
|
||||
Could be used in anatomy, paths etc.
|
||||
If no appropriate short name is found in mapping, 'task_type' is
|
||||
returned back unchanged.
|
||||
|
||||
Currently stores data in:
|
||||
'pype-config/presets/ftrack/project_defaults.json'
|
||||
Args:
|
||||
task_type: (string) - Animation | Modeling ...
|
||||
|
||||
Returns:
|
||||
(string) - anim | model ...
|
||||
"""
|
||||
presets = config.get_presets()['ftrack']['project_defaults']\
|
||||
.get("task_short_names")
|
||||
|
||||
return presets.get(task_type, task_type)
|
||||
|
||||
|
||||
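Editor's note: a short usage sketch of the mapping lookup above; the preset contents are illustrative stand-ins for what project_defaults.json would provide:

```python
# Illustrative preset mapping as it could come from
# pype-config/presets/ftrack/project_defaults.json.
task_short_names = {"Animation": "anim", "Modeling": "model"}


def get_task_short_name(task_type):
    # Fall back to the unchanged task type when no short name is mapped.
    return task_short_names.get(task_type, task_type)


print(get_task_short_name("Animation"))    # anim
print(get_task_short_name("Compositing"))  # Compositing (no mapping)
```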
class SyncEntitiesFactory:
|
||||
dbcon = DbConnector()
|
||||
dbcon = AvalonMongoDB()
|
||||
|
||||
project_query = (
|
||||
"select full_name, name, custom_attributes"
|
||||
|
|
@ -378,7 +422,7 @@ class SyncEntitiesFactory:
|
|||
"custom_attributes": {},
|
||||
"hier_attrs": {},
|
||||
"avalon_attrs": {},
|
||||
"tasks": []
|
||||
"tasks": {}
|
||||
})
|
||||
|
||||
for entity in all_project_entities:
|
||||
|
|
@ -389,7 +433,9 @@ class SyncEntitiesFactory:
|
|||
continue
|
||||
|
||||
elif entity_type_low == "task":
|
||||
entities_dict[parent_id]["tasks"].append(entity["name"])
|
||||
# enrich task info with additional metadata
|
||||
task = {"type": entity["type"]["name"]}
|
||||
entities_dict[parent_id]["tasks"][entity["name"]] = task
|
||||
continue
|
||||
|
||||
entity_id = entity["id"]
|
||||
|
|
@ -416,6 +462,13 @@ class SyncEntitiesFactory:
|
|||
|
||||
@property
|
||||
def avalon_ents_by_id(self):
|
||||
"""
|
||||
Returns dictionary of avalon tracked entities (assets stored in
|
||||
MongoDB) accessible by its '_id'
|
||||
(mongo intenal ID - example ObjectId("5f48de5830a9467b34b69798"))
|
||||
Returns:
|
||||
(dictionary) - {"(_id)": whole entity asset}
|
||||
"""
|
||||
if self._avalon_ents_by_id is None:
|
||||
self._avalon_ents_by_id = {}
|
||||
for entity in self.avalon_entities:

@@ -425,6 +478,14 @@

+    @property
+    def avalon_ents_by_ftrack_id(self):
+        """
+        Returns dictionary of Mongo ids of avalon tracked entities
+        (assets stored in MongoDB) accessible by their 'ftrackId'
+        (id from ftrack)
+        (example '431ee3f2-e91a-11ea-bfa4-92591a5b5e3e')
+        Returns:
+            (dictionary) - {"(ftrackId)": "_id"}
+        """
+        if self._avalon_ents_by_ftrack_id is None:
+            self._avalon_ents_by_ftrack_id = {}
+            for entity in self.avalon_entities:

@@ -437,6 +498,13 @@

+    @property
+    def avalon_ents_by_name(self):
+        """
+        Returns dictionary of Mongo ids of avalon tracked entities
+        (assets stored in MongoDB) accessible by their 'name'
+        (example 'Hero')
+        Returns:
+            (dictionary) - {"(name)": "_id"}
+        """
+        if self._avalon_ents_by_name is None:
+            self._avalon_ents_by_name = {}
+            for entity in self.avalon_entities:

@@ -446,6 +514,15 @@

+    @property
+    def avalon_ents_by_parent_id(self):
+        """
+        Returns dictionary of avalon tracked entities
+        (assets stored in MongoDB) accessible by their 'visualParent'
+        (example ObjectId("5f48de5830a9467b34b69798"))
+
+        Fills 'self._avalon_ents_by_parent_id' for performance
+        Returns:
+            (dictionary of lists) - {"(visualParent)": [whole entities]}
+        """
+        if self._avalon_ents_by_parent_id is None:
+            self._avalon_ents_by_parent_id = collections.defaultdict(list)
+            for entity in self.avalon_entities:

@@ -458,6 +535,14 @@

+    @property
+    def avalon_archived_ents(self):
+        """
+        Returns list of archived assets from DB
+        (their "type" == 'archived_asset')
+
+        Fills 'self._avalon_archived_ents' for performance
+        Returns:
+            (list) of assets
+        """
+        if self._avalon_archived_ents is None:
+            self._avalon_archived_ents = [
+                ent for ent in self.dbcon.find({"type": "archived_asset"})

@@ -466,6 +551,14 @@

+    @property
+    def avalon_archived_by_name(self):
+        """
+        Returns dictionary of lists of archived assets from DB grouped
+        by their name (their "type" == 'archived_asset')
+
+        Fills 'self._avalon_archived_by_name' for performance
+        Returns:
+            (dictionary of lists) of assets accessible by asset name
+        """
+        if self._avalon_archived_by_name is None:
+            self._avalon_archived_by_name = collections.defaultdict(list)
+            for ent in self.avalon_archived_ents:

@@ -474,6 +567,14 @@

+    @property
+    def avalon_archived_by_id(self):
+        """
+        Returns dictionary of archived assets from DB
+        (their "type" == 'archived_asset')
+
+        Fills 'self._avalon_archived_by_id' for performance
+        Returns:
+            (dictionary) of assets accessible by asset mongo _id
+        """
+        if self._avalon_archived_by_id is None:
+            self._avalon_archived_by_id = {
+                str(ent["_id"]): ent for ent in self.avalon_archived_ents

@@ -482,6 +583,15 @@

+    @property
+    def avalon_archived_by_parent_id(self):
+        """
+        Returns dictionary of archived assets from DB per their parent
+        (their "type" == 'archived_asset')
+
+        Fills 'self._avalon_archived_by_parent_id' for performance
+        Returns:
+            (dictionary of lists) of assets accessible by asset parent
+            mongo _id
+        """
+        if self._avalon_archived_by_parent_id is None:
+            self._avalon_archived_by_parent_id = collections.defaultdict(list)
+            for entity in self.avalon_archived_ents:

@@ -494,6 +604,14 @@

+    @property
+    def subsets_by_parent_id(self):
+        """
+        Returns dictionary of subsets from Mongo ("type": "subset")
+        grouped by their parent.
+
+        Fills 'self._subsets_by_parent_id' for performance
+        Returns:
+            (dictionary of lists)
+        """
+        if self._subsets_by_parent_id is None:
+            self._subsets_by_parent_id = collections.defaultdict(list)
+            for subset in self.dbcon.find({"type": "subset"}):

@@ -515,6 +633,11 @@

+    @property
+    def all_ftrack_names(self):
+        """
+        Returns list of names of all entities in Ftrack
+        Returns:
+            (list)
+        """
+        return [
+            ent_dict["name"] for ent_dict in self.entities_dict.values() if (
+                ent_dict.get("name")

@@ -534,8 +657,9 @@
         name = entity_dict["name"]
         entity_type = entity_dict["entity_type"]
         # Tasks must be checked too
-        for task_name in entity_dict["tasks"]:
-            passed = task_names.get(task_name)
+        for task in entity_dict["tasks"].items():
+            task_name, task = task
+            passed = task_name
             if passed is None:
                 passed = check_regex(
                     task_name, "task", schema_patterns=_schema_patterns

@@ -1014,22 +1138,26 @@
             if not msg or not items:
                 continue
             self.report_items["warning"][msg] = items

+            tasks = {}
+            for tt in task_types:
+                tasks[tt["name"]] = {
+                    "short_name": get_task_short_name(tt["name"])
+                }
             self.entities_dict[id]["final_entity"]["config"] = {
-                "tasks": [{"name": tt["name"]} for tt in task_types],
+                "tasks": tasks,
                 "apps": proj_apps
             }
             continue

         ent_path_items = [ent["name"] for ent in entity["link"]]
-        parents = ent_path_items[1:len(ent_path_items)-1:]
+        parents = ent_path_items[1:len(ent_path_items) - 1:]
         hierarchy = ""
         if len(parents) > 0:
             hierarchy = os.path.sep.join(parents)

         data["parents"] = parents
         data["hierarchy"] = hierarchy
-        data["tasks"] = self.entities_dict[id].pop("tasks", [])
+        data["tasks"] = self.entities_dict[id].pop("tasks", {})
         self.entities_dict[id]["final_entity"]["data"] = data
         self.entities_dict[id]["final_entity"]["type"] = "asset"

@@ -1141,7 +1269,7 @@
         if not is_right and not else_match_better:
             entity = entity_dict["entity"]
             ent_path_items = [ent["name"] for ent in entity["link"]]
-            parents = ent_path_items[1:len(ent_path_items)-1:]
+            parents = ent_path_items[1:len(ent_path_items) - 1:]
             av_parents = av_ent_by_mongo_id["data"]["parents"]
             if av_parents == parents:
                 is_right = True

@@ -1904,10 +2032,10 @@
             filter = {"_id": ObjectId(mongo_id)}
             change_data = from_dict_to_set(changes)
             mongo_changes_bulk.append(UpdateOne(filter, change_data))

         if not mongo_changes_bulk:
             # TODO LOG
             return
+        log.debug("mongo_changes_bulk:: {}".format(mongo_changes_bulk))
         self.dbcon.bulk_write(mongo_changes_bulk)

     def reload_parents(self, hierarchy_changing_ids):
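The sync collects per-document changes and flushes them to MongoDB in a single round trip. A minimal, self-contained sketch of the same pymongo bulk-update pattern; the connection string and data are placeholders, and an explicit "$set" stands in for the from_dict_to_set helper:

from bson.objectid import ObjectId
from pymongo import MongoClient, UpdateOne

client = MongoClient("mongodb://localhost:27017")  # placeholder URL
col = client["avalon"]["my_project"]               # placeholder collection

bulk = []
for mongo_id, changes in [("5f48de5830a9467b34b69798", {"data.fps": 25})]:
    # One UpdateOne per changed document; "$set" applies partial changes.
    bulk.append(UpdateOne({"_id": ObjectId(mongo_id)}, {"$set": changes}))

if bulk:
    col.bulk_write(bulk)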

@@ -2144,6 +2272,7 @@
             "name": _name,
             "parent": parent_entity
         })
         self.session.commit()

+        final_entity = {}
         for k, v in av_entity.items():


@@ -2,7 +2,7 @@ import functools
 import time
 from pype.api import Logger
 import ftrack_api
-from pype.modules.ftrack.ftrack_server.lib import SocketSession
+from pype.modules.ftrack import ftrack_server


 class MissingPermision(Exception):

@@ -41,7 +41,7 @@ class BaseHandler(object):
         self.log = Logger().get_logger(self.__class__.__name__)
         if not(
             isinstance(session, ftrack_api.session.Session) or
-            isinstance(session, SocketSession)
+            isinstance(session, ftrack_server.lib.SocketSession)
         ):
             raise Exception((
                 "Session object entered with args is instance of \"{}\""

@@ -49,7 +49,7 @@
         ).format(
             str(type(session)),
             str(ftrack_api.session.Session),
-            str(SocketSession)
+            str(ftrack_server.lib.SocketSession)
         ))

         self._session = session

@@ -1,460 +0,0 @@
"""
Wrapper around interactions with the database.

Copy of the io module in avalon-core.
- In this case it does not work as a singleton with api.Session!
"""

import os
import time
import errno
import shutil
import logging
import tempfile
import functools
import contextlib

from avalon import schema
from avalon.vendor import requests
from avalon.io import extract_port_from_url

# Third-party dependencies
import pymongo


def auto_reconnect(func):
    """Retry a call up to three times on MongoDB auto-reconnect."""
    @functools.wraps(func)
    def decorated(*args, **kwargs):
        object = args[0]
        for retry in range(3):
            try:
                return func(*args, **kwargs)
            except pymongo.errors.AutoReconnect:
                object.log.error("Reconnecting..")
                time.sleep(0.1)
        else:
            raise

    return decorated
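For context, a runnable sketch of the same retry idea. It substitutes a generic ConnectionError for pymongo's AutoReconnect so it runs standalone, and re-raises the last error explicitly instead of relying on the bare raise above:

import functools
import logging
import time

def retry_demo(func):
    @functools.wraps(func)
    def decorated(*args, **kwargs):
        last_error = None
        for retry in range(3):
            try:
                return func(*args, **kwargs)
            except ConnectionError as exc:
                last_error = exc
                logging.error("Reconnecting..")
                time.sleep(0.1)
        raise last_error
    return decorated

@retry_demo
def flaky(state={"calls": 0}):
    state["calls"] += 1
    if state["calls"] < 2:
        raise ConnectionError("first attempt fails")
    return "ok"

print(flaky())  # -> "ok" after one retry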


class DbConnector(object):

    log = logging.getLogger(__name__)

    def __init__(self):
        self.Session = {}
        self._mongo_client = None
        self._sentry_client = None
        self._sentry_logging_handler = None
        self._database = None
        self._is_installed = False

    def __getitem__(self, key):
        # gives direct access to a collection without setting `active_table`
        return self._database[key]

    def __getattribute__(self, attr):
        # Not all methods of the PyMongo database are implemented here;
        # this fallback makes it possible to use them too.
        try:
            return super(DbConnector, self).__getattribute__(attr)
        except AttributeError:
            cur_proj = self.Session["AVALON_PROJECT"]
            return self._database[cur_proj].__getattribute__(attr)
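The __getattribute__ fallback means any attribute the wrapper does not define is looked up on the active project's collection instead. A small runnable sketch of the same delegation idea with plain objects:

class Wrapper(object):
    def __init__(self, target):
        self._target = target

    def ping(self):
        return "wrapper method"

    def __getattribute__(self, attr):
        try:
            # Prefer attributes defined on the wrapper itself.
            return super(Wrapper, self).__getattribute__(attr)
        except AttributeError:
            # Fall back to the wrapped object (here: a dict's methods).
            target = super(Wrapper, self).__getattribute__("_target")
            return getattr(target, attr)

w = Wrapper({"a": 1})
print(w.ping())  # -> "wrapper method"
print(w.keys())  # delegated to the dict -> dict_keys(['a'])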

    def install(self):
        """Establish a persistent connection to the database"""
        if self._is_installed:
            return

        logging.basicConfig()
        self.Session.update(self._from_environment())

        timeout = int(self.Session["AVALON_TIMEOUT"])
        mongo_url = self.Session["AVALON_MONGO"]
        kwargs = {
            "host": mongo_url,
            "serverSelectionTimeoutMS": timeout
        }

        port = extract_port_from_url(mongo_url)
        if port is not None:
            kwargs["port"] = int(port)

        self._mongo_client = pymongo.MongoClient(**kwargs)

        for retry in range(3):
            try:
                t1 = time.time()
                self._mongo_client.server_info()

            except Exception:
                self.log.error("Retrying..")
                time.sleep(1)
                timeout *= 1.5

            else:
                break

        else:
            raise IOError(
                "ERROR: Couldn't connect to %s in "
                "less than %.3f ms" % (self.Session["AVALON_MONGO"], timeout))

        self.log.info("Connected to %s, delay %.3f s" % (
            self.Session["AVALON_MONGO"], time.time() - t1))

        self._install_sentry()

        self._database = self._mongo_client[self.Session["AVALON_DB"]]
        self._is_installed = True

    def _install_sentry(self):
        if "AVALON_SENTRY" not in self.Session:
            return

        try:
            from raven import Client
            from raven.handlers.logging import SentryHandler
            from raven.conf import setup_logging
        except ImportError:
            # Note: There was a Sentry address in this Session
            return self.log.warning("Sentry disabled, raven not installed")

        client = Client(self.Session["AVALON_SENTRY"])

        # Transmit log messages to Sentry
        handler = SentryHandler(client)
        handler.setLevel(logging.WARNING)

        setup_logging(handler)

        self._sentry_client = client
        self._sentry_logging_handler = handler
        self.log.info(
            "Connected to Sentry @ %s" % self.Session["AVALON_SENTRY"]
        )

    def _from_environment(self):
        Session = {
            item[0]: os.getenv(item[0], item[1])
            for item in (
                # Root directory of projects on disk
                ("AVALON_PROJECTS", None),

                # Name of current Project
                ("AVALON_PROJECT", ""),

                # Name of current Asset
                ("AVALON_ASSET", ""),

                # Name of current silo
                ("AVALON_SILO", ""),

                # Name of current task
                ("AVALON_TASK", None),

                # Name of current app
                ("AVALON_APP", None),

                # Path to working directory
                ("AVALON_WORKDIR", None),

                # Name of current Config
                # TODO(marcus): Establish a suitable default config
                ("AVALON_CONFIG", "no_config"),

                # Name of Avalon in graphical user interfaces
                # Use this to customise the visual appearance of Avalon
                # to better integrate with your surrounding pipeline
                ("AVALON_LABEL", "Avalon"),

                # Used during any connections to the outside world
                ("AVALON_TIMEOUT", "1000"),

                # Address to Asset Database
                ("AVALON_MONGO", "mongodb://localhost:27017"),

                # Name of database used in MongoDB
                ("AVALON_DB", "avalon"),

                # Address to Sentry
                ("AVALON_SENTRY", None),

                # Address to Deadline Web Service
                # E.g. http://192.167.0.1:8082
                ("AVALON_DEADLINE", None),

                # Enable features not necessarily stable, at the user's
                # own risk
                ("AVALON_EARLY_ADOPTER", None),

                # Address of central asset repository, contains
                # the following interface:
                #   /upload
                #   /download
                #   /manager (optional)
                ("AVALON_LOCATION", "http://127.0.0.1"),

                # Boolean of whether to upload published material
                # to central asset repository
                ("AVALON_UPLOAD", None),

                # Generic username and password
                ("AVALON_USERNAME", "avalon"),
                ("AVALON_PASSWORD", "secret"),

                # Unique identifier for instances in working files
                ("AVALON_INSTANCE_ID", "avalon.instance"),
                ("AVALON_CONTAINER_ID", "avalon.container"),

                # Enable debugging
                ("AVALON_DEBUG", None),

            ) if os.getenv(item[0], item[1]) is not None
        }

        Session["schema"] = "avalon-core:session-2.0"
        try:
            schema.validate(Session)
        except schema.ValidationError as e:
            # TODO(marcus): Make this mandatory
            self.log.warning(e)

        return Session

    def uninstall(self):
        """Close any connection to the database"""
        try:
            self._mongo_client.close()
        except AttributeError:
            pass

        self._mongo_client = None
        self._database = None
        self._is_installed = False

    def active_project(self):
        """Return the name of the active project"""
        return self.Session["AVALON_PROJECT"]

    def activate_project(self, project_name):
        self.Session["AVALON_PROJECT"] = project_name

    def projects(self):
        """List available projects

        Returns:
            list of project documents

        """

        collection_names = self.collections()
        for project in collection_names:
            if project in ("system.indexes",):
                continue

            # Each collection will have exactly one project document
            document = self.find_project(project)

            if document is not None:
                yield document

    def locate(self, path):
        """Traverse a hierarchy from top-to-bottom

        Example:
            representation = locate(["hulk", "Bruce", "modelDefault", 1, "ma"])

        Returns:
            representation (ObjectId)

        """

        components = zip(
            ("project", "asset", "subset", "version", "representation"),
            path
        )

        parent = None
        for type_, name in components:
            latest = (type_ == "version") and name in (None, -1)

            try:
                if latest:
                    parent = self.find_one(
                        filter={
                            "type": type_,
                            "parent": parent
                        },
                        projection={"_id": 1},
                        sort=[("name", -1)]
                    )["_id"]
                else:
                    parent = self.find_one(
                        filter={
                            "type": type_,
                            "name": name,
                            "parent": parent
                        },
                        projection={"_id": 1},
                    )["_id"]

            except TypeError:
                return None

        return parent

    @auto_reconnect
    def collections(self):
        return self._database.collection_names()

    @auto_reconnect
    def find_project(self, project):
        return self._database[project].find_one({"type": "project"})

    @auto_reconnect
    def insert_one(self, item):
        assert isinstance(item, dict), "item must be of type <dict>"
        schema.validate(item)
        return self._database[self.Session["AVALON_PROJECT"]].insert_one(item)

    @auto_reconnect
    def insert_many(self, items, ordered=True):
        # check if all items are valid
        assert isinstance(items, list), "`items` must be of type <list>"
        for item in items:
            assert isinstance(item, dict), "`item` must be of type <dict>"
            schema.validate(item)

        return self._database[self.Session["AVALON_PROJECT"]].insert_many(
            items,
            ordered=ordered)

    @auto_reconnect
    def find(self, filter, projection=None, sort=None):
        return self._database[self.Session["AVALON_PROJECT"]].find(
            filter=filter,
            projection=projection,
            sort=sort
        )

    @auto_reconnect
    def find_one(self, filter, projection=None, sort=None):
        assert isinstance(filter, dict), "filter must be <dict>"

        return self._database[self.Session["AVALON_PROJECT"]].find_one(
            filter=filter,
            projection=projection,
            sort=sort
        )

    @auto_reconnect
    def save(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].save(
            *args, **kwargs)

    @auto_reconnect
    def replace_one(self, filter, replacement):
        return self._database[self.Session["AVALON_PROJECT"]].replace_one(
            filter, replacement)

    @auto_reconnect
    def update_many(self, filter, update):
        return self._database[self.Session["AVALON_PROJECT"]].update_many(
            filter, update)

    @auto_reconnect
    def distinct(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].distinct(
            *args, **kwargs)

    @auto_reconnect
    def drop(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].drop(
            *args, **kwargs)

    @auto_reconnect
    def delete_many(self, *args, **kwargs):
        return self._database[self.Session["AVALON_PROJECT"]].delete_many(
            *args, **kwargs)

    def parenthood(self, document):
        assert document is not None, "This is a bug"

        parents = list()

        while document.get("parent") is not None:
            document = self.find_one({"_id": document["parent"]})

            if document is None:
                break

            if document.get("type") == "master_version":
                _document = self.find_one({"_id": document["version_id"]})
                document["data"] = _document["data"]

            parents.append(document)

        return parents

    @contextlib.contextmanager
    def tempdir(self):
        tempdir = tempfile.mkdtemp()
        try:
            yield tempdir
        finally:
            shutil.rmtree(tempdir)

    def download(self, src, dst):
        """Download `src` to `dst`

        Arguments:
            src (str): URL to source file
            dst (str): Absolute path to destination file

        Yields tuple (progress, error):
            progress (int): Between 0-100
            error (Exception): Any exception raised when first making
                the connection

        """

        try:
            response = requests.get(
                src,
                stream=True,
                auth=requests.auth.HTTPBasicAuth(
                    self.Session["AVALON_USERNAME"],
                    self.Session["AVALON_PASSWORD"]
                )
            )
        except requests.ConnectionError as e:
            yield None, e
            return

        with self.tempdir() as dirname:
            tmp = os.path.join(dirname, os.path.basename(src))

            with open(tmp, "wb") as f:
                total_length = response.headers.get("content-length")

                if total_length is None:  # no content length header
                    f.write(response.content)
                else:
                    downloaded = 0
                    total_length = int(total_length)
                    for data in response.iter_content(chunk_size=4096):
                        downloaded += len(data)
                        f.write(data)

                        yield int(100.0 * downloaded / total_length), None

            try:
                os.makedirs(os.path.dirname(dst))
            except OSError as e:
                # An already existing destination directory is fine.
                if e.errno != errno.EEXIST:
                    raise

            shutil.copy(tmp, dst)
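Because download is a generator, callers drive it and read progress as the file streams. A hedged usage sketch, assuming an installed connection; the URL and destination path are placeholders:

db = DbConnector()
db.install()

for progress, error in db.download(
    "http://127.0.0.1/files/model.ma",  # placeholder URL
    "/tmp/model.ma"                     # placeholder destination
):
    if error is not None:
        print("Download failed:", error)
        break
    print("Progress: {}%".format(progress))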

@@ -2,7 +2,7 @@ import os
 import time
 import datetime
 import threading
-from Qt import QtCore, QtWidgets
+from Qt import QtCore, QtWidgets, QtGui

 import ftrack_api
 from ..ftrack_server.lib import check_ftrack_url

@@ -10,7 +10,7 @@ from ..ftrack_server import socket_thread
 from ..lib import credentials
 from . import login_dialog

-from pype.api import Logger
+from pype.api import Logger, resources


 log = Logger().get_logger("FtrackModule", "ftrack")

@@ -19,7 +19,7 @@ log = Logger().get_logger("FtrackModule", "ftrack")
 class FtrackModule:
     def __init__(self, main_parent=None, parent=None):
         self.parent = parent
-        self.widget_login = login_dialog.Login_Dialog_ui(self)
+
         self.thread_action_server = None
         self.thread_socket_server = None
         self.thread_timer = None

@@ -29,8 +29,22 @@ class FtrackModule:
         self.bool_action_thread_running = False
         self.bool_timer_event = False

+        self.widget_login = login_dialog.CredentialsDialog()
+        self.widget_login.login_changed.connect(self.on_login_change)
+        self.widget_login.logout_signal.connect(self.on_logout)
+
+        self.action_credentials = None
+        self.icon_logged = QtGui.QIcon(
+            resources.get_resource("icons", "circle_green.png")
+        )
+        self.icon_not_logged = QtGui.QIcon(
+            resources.get_resource("icons", "circle_orange.png")
+        )
+
+    def show_login_widget(self):
+        self.widget_login.show()
+        self.widget_login.activateWindow()
+        self.widget_login.raise_()

     def validate(self):
         validation = False

@@ -39,9 +53,10 @@ class FtrackModule:
         ft_api_key = cred.get("api_key")
         validation = credentials.check_credentials(ft_user, ft_api_key)
         if validation:
+            self.widget_login.set_credentials(ft_user, ft_api_key)
             credentials.set_env(ft_user, ft_api_key)
             log.info("Connected to Ftrack successfully")
-            self.loginChange()
+            self.on_login_change()

         return validation

@@ -60,15 +75,28 @@ class FtrackModule:
         return validation

     # Necessary - login_dialog works with this method after logging in
-    def loginChange(self):
+    def on_login_change(self):
         self.bool_logged = True
+
+        if self.action_credentials:
+            self.action_credentials.setIcon(self.icon_logged)
+            self.action_credentials.setToolTip(
+                "Logged as user \"{}\"".format(
+                    self.widget_login.user_input.text()
+                )
+            )
+
         self.set_menu_visibility()
         self.start_action_server()

-    def logout(self):
+    def on_logout(self):
         credentials.clear_credentials()
         self.stop_action_server()

+        if self.action_credentials:
+            self.action_credentials.setIcon(self.icon_not_logged)
+            self.action_credentials.setToolTip("Logged out")
+
         log.info("Logged out of Ftrack")
         self.bool_logged = False
         self.set_menu_visibility()

@@ -218,43 +246,45 @@ class FtrackModule:
     # Definition of Tray menu
     def tray_menu(self, parent_menu):
         # Menu for Tray App
-        self.menu = QtWidgets.QMenu('Ftrack', parent_menu)
-        self.menu.setProperty('submenu', 'on')
-
-        # Actions - server
-        self.smActionS = self.menu.addMenu("Action server")
-
-        self.aRunActionS = QtWidgets.QAction(
-            "Run action server", self.smActionS
-        )
-        self.aResetActionS = QtWidgets.QAction(
-            "Reset action server", self.smActionS
-        )
-        self.aStopActionS = QtWidgets.QAction(
-            "Stop action server", self.smActionS
-        )
-
-        self.aRunActionS.triggered.connect(self.start_action_server)
-        self.aResetActionS.triggered.connect(self.reset_action_server)
-        self.aStopActionS.triggered.connect(self.stop_action_server)
-
-        self.smActionS.addAction(self.aRunActionS)
-        self.smActionS.addAction(self.aResetActionS)
-        self.smActionS.addAction(self.aStopActionS)
+        tray_menu = QtWidgets.QMenu("Ftrack", parent_menu)

         # Actions - basic
-        self.aLogin = QtWidgets.QAction("Login", self.menu)
-        self.aLogin.triggered.connect(self.validate)
-        self.aLogout = QtWidgets.QAction("Logout", self.menu)
-        self.aLogout.triggered.connect(self.logout)
+        action_credentials = QtWidgets.QAction("Credentials", tray_menu)
+        action_credentials.triggered.connect(self.show_login_widget)
+        if self.bool_logged:
+            icon = self.icon_logged
+        else:
+            icon = self.icon_not_logged
+        action_credentials.setIcon(icon)
+        tray_menu.addAction(action_credentials)
+        self.action_credentials = action_credentials

-        self.menu.addAction(self.aLogin)
-        self.menu.addAction(self.aLogout)
+        # Actions - server
+        tray_server_menu = tray_menu.addMenu("Action server")
+
+        self.action_server_run = QtWidgets.QAction(
+            "Run action server", tray_server_menu
+        )
+        self.action_server_reset = QtWidgets.QAction(
+            "Reset action server", tray_server_menu
+        )
+        self.action_server_stop = QtWidgets.QAction(
+            "Stop action server", tray_server_menu
+        )
+
+        self.action_server_run.triggered.connect(self.start_action_server)
+        self.action_server_reset.triggered.connect(self.reset_action_server)
+        self.action_server_stop.triggered.connect(self.stop_action_server)
+
+        tray_server_menu.addAction(self.action_server_run)
+        tray_server_menu.addAction(self.action_server_reset)
+        tray_server_menu.addAction(self.action_server_stop)
+
+        self.tray_server_menu = tray_server_menu

         self.bool_logged = False
         self.set_menu_visibility()

-        parent_menu.addMenu(self.menu)
+        parent_menu.addMenu(tray_menu)

     def tray_start(self):
         self.validate()

@@ -264,19 +294,15 @@ class FtrackModule:

     # Definition of visibility of each menu actions
     def set_menu_visibility(self):

-        self.smActionS.menuAction().setVisible(self.bool_logged)
-        self.aLogin.setVisible(not self.bool_logged)
-        self.aLogout.setVisible(self.bool_logged)
-
+        self.tray_server_menu.menuAction().setVisible(self.bool_logged)
         if self.bool_logged is False:
             if self.bool_timer_event is True:
                 self.stop_timer_thread()
             return

-        self.aRunActionS.setVisible(not self.bool_action_server_running)
-        self.aResetActionS.setVisible(self.bool_action_thread_running)
-        self.aStopActionS.setVisible(self.bool_action_server_running)
+        self.action_server_run.setVisible(not self.bool_action_server_running)
+        self.action_server_reset.setVisible(self.bool_action_thread_running)
+        self.action_server_stop.setVisible(self.bool_action_server_running)

         if self.bool_timer_event is False:
             self.start_timer_thread()

@@ -1,315 +1,322 @@
 import os
 import requests
 from avalon import style
-from pype.modules.ftrack import credentials
+from pype.modules.ftrack.lib import credentials
 from . import login_tools
 from pype.api import resources
 from Qt import QtCore, QtGui, QtWidgets


-class Login_Dialog_ui(QtWidgets.QWidget):
+class CredentialsDialog(QtWidgets.QDialog):
     SIZE_W = 300
     SIZE_H = 230

-    loginSignal = QtCore.Signal(object, object, object)
-    _login_server_thread = None
-    inputs = []
-    buttons = []
-    labels = []
+    login_changed = QtCore.Signal()
+    logout_signal = QtCore.Signal()

-    def __init__(self, parent=None, is_event=False):
+    def __init__(self, parent=None):
+        super(CredentialsDialog, self).__init__(parent)

-        super(Login_Dialog_ui, self).__init__()
+        self.setWindowTitle("Pype - Ftrack Login")

-        self.parent = parent
-        self.is_event = is_event
+        self._login_server_thread = None
+        self._is_logged = False
+        self._in_advance_mode = False

-        if hasattr(parent, 'icon'):
-            self.setWindowIcon(self.parent.icon)
-        elif hasattr(parent, 'parent') and hasattr(parent.parent, 'icon'):
-            self.setWindowIcon(self.parent.parent.icon)
-        else:
-            icon = QtGui.QIcon(resources.pype_icon_filepath())
-            self.setWindowIcon(icon)
+        icon = QtGui.QIcon(resources.pype_icon_filepath())
+        self.setWindowIcon(icon)

         self.setWindowFlags(
             QtCore.Qt.WindowCloseButtonHint |
             QtCore.Qt.WindowMinimizeButtonHint
         )

-        self.loginSignal.connect(self.loginWithCredentials)
-        self._translate = QtCore.QCoreApplication.translate
-
         self.font = QtGui.QFont()
         self.font.setFamily("DejaVu Sans Condensed")
         self.font.setPointSize(9)
         self.font.setBold(True)
         self.font.setWeight(50)
         self.font.setKerning(True)

         self.resize(self.SIZE_W, self.SIZE_H)
         self.setMinimumSize(QtCore.QSize(self.SIZE_W, self.SIZE_H))
-        self.setMaximumSize(QtCore.QSize(self.SIZE_W+100, self.SIZE_H+100))
+        self.setMaximumSize(QtCore.QSize(self.SIZE_W + 100, self.SIZE_H + 100))
         self.setStyleSheet(style.load_stylesheet())

-        self.setLayout(self._main())
-        self.setWindowTitle('Pype - Ftrack Login')
+        self.login_changed.connect(self._on_login)

-    def _main(self):
-        self.main = QtWidgets.QVBoxLayout()
-        self.main.setObjectName("main")
+        self.ui_init()

-        self.form = QtWidgets.QFormLayout()
-        self.form.setContentsMargins(10, 15, 10, 5)
-        self.form.setObjectName("form")
-
-        self.ftsite_label = QtWidgets.QLabel("FTrack URL:")
-        self.ftsite_label.setFont(self.font)
-        self.ftsite_label.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
-        self.ftsite_label.setTextFormat(QtCore.Qt.RichText)
-        self.ftsite_label.setObjectName("user_label")
+    def ui_init(self):
+        self.ftsite_label = QtWidgets.QLabel("Ftrack URL:")
+        self.user_label = QtWidgets.QLabel("Username:")
+        self.api_label = QtWidgets.QLabel("API Key:")

         self.ftsite_input = QtWidgets.QLineEdit()
-        self.ftsite_input.setEnabled(True)
-        self.ftsite_input.setFrame(True)
+        self.ftsite_input.setEnabled(False)
+        self.ftsite_input.setReadOnly(True)
         self.ftsite_input.setObjectName("ftsite_input")
+        self.ftsite_input.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor))

-        self.user_label = QtWidgets.QLabel("Username:")
-        self.user_label.setFont(self.font)
-        self.user_label.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
-        self.user_label.setTextFormat(QtCore.Qt.RichText)
-        self.user_label.setObjectName("user_label")
-
         self.user_input = QtWidgets.QLineEdit()
         self.user_input.setEnabled(True)
         self.user_input.setFrame(True)
         self.user_input.setObjectName("user_input")
-        self.user_input.setPlaceholderText(
-            self._translate("main", "user.name")
-        )
+        self.user_input.setPlaceholderText("user.name")
+        self.user_input.textChanged.connect(self._user_changed)

-        self.api_label = QtWidgets.QLabel("API Key:")
-        self.api_label.setFont(self.font)
-        self.api_label.setCursor(QtGui.QCursor(QtCore.Qt.ArrowCursor))
-        self.api_label.setTextFormat(QtCore.Qt.RichText)
-        self.api_label.setObjectName("api_label")
-
         self.api_input = QtWidgets.QLineEdit()
         self.api_input.setEnabled(True)
         self.api_input.setFrame(True)
         self.api_input.setObjectName("api_input")
-        self.api_input.setPlaceholderText(self._translate(
-            "main", "e.g. xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
-        ))
+        self.api_input.setPlaceholderText(
+            "e.g. xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+        )
+        self.api_input.textChanged.connect(self._api_changed)

+        input_layout = QtWidgets.QFormLayout()
+        input_layout.setContentsMargins(10, 15, 10, 5)
+
+        input_layout.addRow(self.ftsite_label, self.ftsite_input)
+        input_layout.addRow(self.user_label, self.user_input)
+        input_layout.addRow(self.api_label, self.api_input)
+
+        self.btn_advanced = QtWidgets.QPushButton("Advanced")
+        self.btn_advanced.clicked.connect(self._on_advanced_clicked)
+
+        self.btn_simple = QtWidgets.QPushButton("Simple")
+        self.btn_simple.clicked.connect(self._on_simple_clicked)
+
+        self.btn_login = QtWidgets.QPushButton("Login")
+        self.btn_login.setToolTip(
+            "Set Username and API Key with entered values"
+        )
+        self.btn_login.clicked.connect(self._on_login_clicked)
+
+        self.btn_ftrack_login = QtWidgets.QPushButton("Ftrack login")
+        self.btn_ftrack_login.setToolTip("Open browser for Login to Ftrack")
+        self.btn_ftrack_login.clicked.connect(self._on_ftrack_login_clicked)
+
+        self.btn_logout = QtWidgets.QPushButton("Logout")
+        self.btn_logout.clicked.connect(self._on_logout_clicked)
+
+        self.btn_close = QtWidgets.QPushButton("Close")
+        self.btn_close.setToolTip("Close this window")
+        self.btn_close.clicked.connect(self._close_widget)
+
+        btns_layout = QtWidgets.QHBoxLayout()
+        btns_layout.addWidget(self.btn_advanced)
+        btns_layout.addWidget(self.btn_simple)
+        btns_layout.addStretch(1)
+        btns_layout.addWidget(self.btn_ftrack_login)
+        btns_layout.addWidget(self.btn_login)
+        btns_layout.addWidget(self.btn_logout)
+        btns_layout.addWidget(self.btn_close)
+
+        self.note_label = QtWidgets.QLabel((
+            "NOTE: Click on \"{}\" button to log with your default browser"
+            " or click on \"{}\" button to enter API key manually."
+        ).format(self.btn_ftrack_login.text(), self.btn_advanced.text()))
+
+        self.note_label.setWordWrap(True)
+        self.note_label.hide()

         self.error_label = QtWidgets.QLabel("")
         self.error_label.setFont(self.font)
         self.error_label.setTextFormat(QtCore.Qt.RichText)
         self.error_label.setObjectName("error_label")
         self.error_label.setWordWrap(True)
         self.error_label.hide()

-        self.form.addRow(self.ftsite_label, self.ftsite_input)
-        self.form.addRow(self.user_label, self.user_input)
-        self.form.addRow(self.api_label, self.api_input)
-        self.form.addRow(self.error_label)
+        label_layout = QtWidgets.QVBoxLayout()
+        label_layout.setContentsMargins(10, 5, 10, 5)
+        label_layout.addWidget(self.note_label)
+        label_layout.addWidget(self.error_label)

-        self.btnGroup = QtWidgets.QHBoxLayout()
-        self.btnGroup.addStretch(1)
-        self.btnGroup.setObjectName("btnGroup")
+        main = QtWidgets.QVBoxLayout(self)
+        main.addLayout(input_layout)
+        main.addLayout(label_layout)
+        main.addStretch(1)
+        main.addLayout(btns_layout)

-        self.btnEnter = QtWidgets.QPushButton("Login")
-        self.btnEnter.setToolTip(
-            'Set Username and API Key with entered values'
-        )
-        self.btnEnter.clicked.connect(self.enter_credentials)
+        self.fill_ftrack_url()

-        self.btnClose = QtWidgets.QPushButton("Close")
-        self.btnClose.setToolTip('Close this window')
-        self.btnClose.clicked.connect(self._close_widget)
+        self.set_is_logged(self._is_logged)

-        self.btnFtrack = QtWidgets.QPushButton("Ftrack")
-        self.btnFtrack.setToolTip('Open browser for Login to Ftrack')
-        self.btnFtrack.clicked.connect(self.open_ftrack)
+        self.setLayout(main)

-        self.btnGroup.addWidget(self.btnFtrack)
-        self.btnGroup.addWidget(self.btnEnter)
-        self.btnGroup.addWidget(self.btnClose)
+    def fill_ftrack_url(self):
+        url = os.getenv("FTRACK_SERVER")
+        checked_url = self.check_url(url)

-        self.main.addLayout(self.form)
-        self.main.addLayout(self.btnGroup)
+        if checked_url is None:
+            checked_url = ""
+            self.btn_login.setEnabled(False)
+            self.btn_ftrack_login.setEnabled(False)

-        self.inputs.append(self.api_input)
-        self.inputs.append(self.user_input)
-        self.inputs.append(self.ftsite_input)
+            self.api_input.setEnabled(False)
+            self.user_input.setEnabled(False)
+            self.ftsite_input.setEnabled(False)

-        self.enter_site()
-        return self.main
+        self.ftsite_input.setText(checked_url)

-    def enter_site(self):
-        try:
-            url = os.getenv('FTRACK_SERVER')
-            newurl = self.checkUrl(url)
+    def set_advanced_mode(self, is_advanced):
+        self._in_advance_mode = is_advanced

-            if newurl is None:
-                self.btnEnter.setEnabled(False)
-                self.btnFtrack.setEnabled(False)
-                for input in self.inputs:
-                    input.setEnabled(False)
-                newurl = url
+        self.error_label.setVisible(False)

-            self.ftsite_input.setText(newurl)
+        is_logged = self._is_logged

-        except Exception:
-            self.setError("FTRACK_SERVER is not set in templates")
-            self.btnEnter.setEnabled(False)
-            self.btnFtrack.setEnabled(False)
-            for input in self.inputs:
-                input.setEnabled(False)
+        self.note_label.setVisible(not is_logged and not is_advanced)
+        self.btn_ftrack_login.setVisible(not is_logged and not is_advanced)
+        self.btn_advanced.setVisible(not is_logged and not is_advanced)

-    def setError(self, msg):
+        self.btn_login.setVisible(not is_logged and is_advanced)
+        self.btn_simple.setVisible(not is_logged and is_advanced)
+
+        self.user_label.setVisible(is_logged or is_advanced)
+        self.user_input.setVisible(is_logged or is_advanced)
+        self.api_label.setVisible(is_logged or is_advanced)
+        self.api_input.setVisible(is_logged or is_advanced)
+        if is_advanced:
+            self.user_input.setFocus()
+        else:
+            self.btn_ftrack_login.setFocus()
+
+    def set_is_logged(self, is_logged):
+        self._is_logged = is_logged
+
+        self.user_input.setReadOnly(is_logged)
+        self.api_input.setReadOnly(is_logged)
+        self.user_input.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor))
+        self.api_input.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor))
+
+        self.btn_logout.setVisible(is_logged)
+
+        self.set_advanced_mode(self._in_advance_mode)
+
+    def set_error(self, msg):
         self.error_label.setText(msg)
         self.error_label.show()

+    def _on_logout_clicked(self):
+        self.user_input.setText("")
+        self.api_input.setText("")
+        self.set_is_logged(False)
+        self.logout_signal.emit()
+
+    def _on_simple_clicked(self):
+        self.set_advanced_mode(False)
+
+    def _on_advanced_clicked(self):
+        self.set_advanced_mode(True)
+
     def _user_changed(self):
-        self.user_input.setStyleSheet("")
+        self._not_invalid_input(self.user_input)

     def _api_changed(self):
-        self.api_input.setStyleSheet("")
+        self._not_invalid_input(self.api_input)

-    def _invalid_input(self, entity):
-        entity.setStyleSheet("border: 1px solid red;")
+    def _not_invalid_input(self, input_widget):
+        input_widget.setStyleSheet("")

-    def enter_credentials(self):
+    def _invalid_input(self, input_widget):
+        input_widget.setStyleSheet("border: 1px solid red;")
+
+    def _on_login(self):
+        self.set_is_logged(True)
+        self._close_widget()
+
+    def _on_login_clicked(self):
         username = self.user_input.text().strip()
-        apiKey = self.api_input.text().strip()
-        msg = "You didn't enter "
+        api_key = self.api_input.text().strip()
         missing = []
         if username == "":
             missing.append("Username")
+            self._invalid_input(self.user_input)

-        if apiKey == "":
+        if api_key == "":
             missing.append("API Key")
+            self._invalid_input(self.api_input)

         if len(missing) > 0:
-            self.setError("{0} {1}".format(msg, " and ".join(missing)))
+            self.set_error("You didn't enter {}".format(" and ".join(missing)))
             return

-        verification = credentials.check_credentials(username, apiKey)
-
-        if verification:
-            credentials.save_credentials(username, apiKey, self.is_event)
-            credentials.set_env(username, apiKey)
-            if self.parent is not None:
-                self.parent.loginChange()
-            self._close_widget()
-        else:
+        if not self.login_with_credentials(username, api_key):
             self._invalid_input(self.user_input)
             self._invalid_input(self.api_input)
-            self.setError(
+            self.set_error(
                 "We're unable to sign in to Ftrack with these credentials"
             )

-    def open_ftrack(self):
-        url = self.ftsite_input.text()
-        self.loginWithCredentials(url, None, None)
-
-    def checkUrl(self, url):
-        url = url.strip('/ ')
-
+    def _on_ftrack_login_clicked(self):
+        url = self.check_url(self.ftsite_input.text())
         if not url:
-            self.setError("There is no URL set in Templates")
             return

-        if 'http' not in url:
-            if url.endswith('ftrackapp.com'):
-                url = 'https://' + url
-            else:
-                url = 'https://{0}.ftrackapp.com'.format(url)
-        try:
-            result = requests.get(
-                url,
-                # Old python API will not work with redirect.
-                allow_redirects=False
-            )
-        except requests.exceptions.RequestException:
-            self.setError(
-                'The server URL set in Templates could not be reached.'
-            )
-            return
-
-        if (
-            result.status_code != 200 or 'FTRACK_VERSION' not in result.headers
-        ):
-            self.setError(
-                'The server URL set in Templates is not a valid ftrack server.'
-            )
-            return
-        return url
-
-    def loginWithCredentials(self, url, username, apiKey):
-        url = url.strip('/ ')
-
-        if not url:
-            self.setError(
-                'You need to specify a valid server URL, '
-                'for example https://server-name.ftrackapp.com'
-            )
-            return
-
-        if 'http' not in url:
-            if url.endswith('ftrackapp.com'):
-                url = 'https://' + url
-            else:
-                url = 'https://{0}.ftrackapp.com'.format(url)
-        try:
-            result = requests.get(
-                url,
-                # Old python API will not work with redirect.
-                allow_redirects=False
-            )
-        except requests.exceptions.RequestException:
-            self.setError(
-                'The server URL you provided could not be reached.'
-            )
-            return
-
-        if (
-            result.status_code != 200 or 'FTRACK_VERSION' not in result.headers
-        ):
-            self.setError(
-                'The server URL you provided is not a valid ftrack server.'
-            )
-            return
-
         # If there is an existing server thread running we need to stop it.
         if self._login_server_thread:
-            self._login_server_thread.quit()
+            if self._login_server_thread.isAlive():
+                self._login_server_thread.stop()
+            self._login_server_thread.join()
             self._login_server_thread = None

         # If credentials are not properly set, try to get them using a http
         # server.
-        if not username or not apiKey:
-            self._login_server_thread = login_tools.LoginServerThread()
-            self._login_server_thread.loginSignal.connect(self.loginSignal)
-            self._login_server_thread.start(url)
+        self._login_server_thread = login_tools.LoginServerThread(
+            url, self._result_of_ftrack_thread
+        )
+        self._login_server_thread.start()

+    def _result_of_ftrack_thread(self, username, api_key):
+        if not self.login_with_credentials(username, api_key):
+            self._invalid_input(self.api_input)
+            self.set_error((
+                "Something happened with the Ftrack login."
+                " Try entering the Username and API key manually."
+            ))
+
+    def login_with_credentials(self, username, api_key):
+        verification = credentials.check_credentials(username, api_key)
+        if verification:
+            credentials.save_credentials(username, api_key, False)
+            credentials.set_env(username, api_key)
+            self.set_credentials(username, api_key)
+            self.login_changed.emit()
+        return verification
+
+    def set_credentials(self, username, api_key, is_logged=True):
+        self.user_input.setText(username)
+        self.api_input.setText(api_key)
+
+        self.error_label.hide()
+
+        self._not_invalid_input(self.ftsite_input)
+        self._not_invalid_input(self.user_input)
+        self._not_invalid_input(self.api_input)
+
+        if is_logged is not None:
+            self.set_is_logged(is_logged)
+
+    def check_url(self, url):
+        if url is not None:
+            url = url.strip("/ ")

         if not url:
+            self.set_error((
+                "You need to specify a valid server URL, "
+                "for example https://server-name.ftrackapp.com"
+            ))
             return

-        verification = credentials.check_credentials(username, apiKey)
+        if "http" not in url:
+            if url.endswith("ftrackapp.com"):
+                url = "https://" + url
+            else:
+                url = "https://{}.ftrackapp.com".format(url)
+        try:
+            result = requests.get(
+                url,
+                # Old python API will not work with redirect.
+                allow_redirects=False
+            )
+        except requests.exceptions.RequestException:
+            self.set_error(
+                "Specified URL could not be reached."
+            )
+            return

-        if verification is True:
-            credentials.save_credentials(username, apiKey, self.is_event)
-            credentials.set_env(username, apiKey)
-            if self.parent is not None:
-                self.parent.loginChange()
-            self._close_widget()
+        if (
+            result.status_code != 200
+            or "FTRACK_VERSION" not in result.headers
+        ):
+            self.set_error(
+                "Specified URL does not lead to a valid Ftrack server."
+            )
+            return
+        return url

     def closeEvent(self, event):
         event.ignore()
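check_url accepts shorthand like a bare studio name and expands it before probing the server. A standalone sketch of just the normalization step (the requests probe is omitted); the inputs are illustrative:

def normalize_ftrack_url(url):
    url = (url or "").strip("/ ")
    if url and "http" not in url:
        if url.endswith("ftrackapp.com"):
            url = "https://" + url
        else:
            url = "https://{}.ftrackapp.com".format(url)
    return url

print(normalize_ftrack_url("mystudio"))
# -> https://mystudio.ftrackapp.com
print(normalize_ftrack_url("https://mystudio.ftrackapp.com/"))
# -> https://mystudio.ftrackapp.com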


@@ -2,7 +2,7 @@ from http.server import BaseHTTPRequestHandler, HTTPServer
 from urllib import parse
 import webbrowser
 import functools
-from Qt import QtCore
+import threading
 from pype.api import resources


@@ -55,20 +55,22 @@ class LoginServerHandler(BaseHTTPRequestHandler):
         )


-class LoginServerThread(QtCore.QThread):
+class LoginServerThread(threading.Thread):
     '''Login server thread.'''

-    # Login signal.
-    loginSignal = QtCore.Signal(object, object, object)
-
-    def start(self, url):
-        '''Start thread.'''
+    def __init__(self, url, callback):
         self.url = url
-        super(LoginServerThread, self).start()
+        self.callback = callback
+        self._server = None
+        super(LoginServerThread, self).__init__()

     def _handle_login(self, api_user, api_key):
         '''Login to server with *api_user* and *api_key*.'''
-        self.loginSignal.emit(self.url, api_user, api_key)
+        self.callback(api_user, api_key)
+
+    def stop(self):
+        if self._server:
+            self._server.server_close()

     def run(self):
         '''Listen for events.'''
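Swapping QtCore.QThread for threading.Thread replaces the Qt signal with a plain callback, which removes the Qt dependency from the login server. A minimal runnable sketch of the same handoff; the credential values are simulated:

import threading

class CallbackThread(threading.Thread):
    def __init__(self, callback):
        self.callback = callback
        super(CallbackThread, self).__init__()

    def run(self):
        # The real thread serves a local HTTP endpoint and waits for the
        # browser login; here the result is simulated.
        self.callback("john.doe", "xxxx-api-key")

def on_credentials(api_user, api_key):
    print("Received credentials for", api_user)

thread = CallbackThread(on_credentials)
thread.start()
thread.join()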


@@ -1,6 +1,4 @@
-from Qt import QtWidgets
 from pype.api import Logger
-from ..gui.app import LogsWindow


 class LoggingModule:

@@ -8,7 +6,13 @@ class LoggingModule:
         self.parent = parent
         self.log = Logger().get_logger(self.__class__.__name__, "logging")

+        self.window = None
+
+        self.tray_init(main_parent, parent)
+
+    def tray_init(self, main_parent, parent):
         try:
+            from .gui.app import LogsWindow
             self.window = LogsWindow()
             self.tray_menu = self._tray_menu
         except Exception:

@@ -18,12 +22,12 @@ class LoggingModule:

     # Definition of Tray menu
     def _tray_menu(self, parent_menu):
+        from Qt import QtWidgets
         # Menu for Tray App
         menu = QtWidgets.QMenu('Logging', parent_menu)
         # menu.setProperty('submenu', 'on')

         show_action = QtWidgets.QAction("Show Logs", menu)
-        show_action.triggered.connect(self.on_show_logs)
+        show_action.triggered.connect(self._show_logs_gui)
         menu.addAction(show_action)

         parent_menu.addMenu(menu)

@@ -34,5 +38,6 @@ class LoggingModule:
     def process_modules(self, modules):
         return

-    def on_show_logs(self):
-        self.window.show()
+    def _show_logs_gui(self):
+        if self.window:
+            self.window.show()


@@ -1,10 +1,7 @@
-import appdirs
 from avalon import style
-from Qt import QtWidgets
 import os
 import json
-from .widget_login import MusterLogin
-from avalon.vendor import requests
+import appdirs
+import requests


 class MusterModule:

@@ -21,6 +18,11 @@ class MusterModule:
         self.cred_path = os.path.join(
             self.cred_folder_path, self.cred_filename
         )
+        self.tray_init(main_parent, parent)
+
+    def tray_init(self, main_parent, parent):
+        from .widget_login import MusterLogin
+
         self.main_parent = main_parent
         self.parent = parent
         self.widget_login = MusterLogin(main_parent, self)

@@ -38,10 +40,6 @@ class MusterModule:
         pass

     def process_modules(self, modules):
-
-        def api_callback():
-            self.aShowLogin.trigger()
-
         if "RestApiServer" in modules:
             def api_show_login():
                 self.aShowLogin.trigger()

@@ -51,13 +49,12 @@ class MusterModule:

     # Definition of Tray menu
     def tray_menu(self, parent):
-        """
-        Add **change credentials** option to tray menu.
-        """
+        """Add **change credentials** option to tray menu."""
+        from Qt import QtWidgets

         # Menu for Tray App
         self.menu = QtWidgets.QMenu('Muster', parent)
         self.menu.setProperty('submenu', 'on')
         self.menu.setStyleSheet(style.load_stylesheet())

         # Actions
         self.aShowLogin = QtWidgets.QAction(

@@ -91,9 +88,9 @@ class MusterModule:
         if not MUSTER_REST_URL:
             raise AttributeError("Muster REST API url not set")
         params = {
-                'username': username,
-                'password': password
-        }
+            'username': username,
+            'password': password
+        }
         api_entry = '/api/login'
         response = self._requests_post(
             MUSTER_REST_URL + api_entry, params=params)


@@ -1,6 +1,6 @@
 import os
 import socket
-from Qt import QtCore
+import threading

 from socketserver import ThreadingMixIn
 from http.server import HTTPServer

@@ -155,14 +155,15 @@ class RestApiServer:
     def is_running(self):
         return self.rest_api_thread.is_running

     def tray_exit(self):
         self.stop()

     def stop(self):
         self.rest_api_thread.is_running = False
-
-    def thread_stopped(self):
-        self._is_running = False
+        self.rest_api_thread.stop()
+        self.rest_api_thread.join()


-class RestApiThread(QtCore.QThread):
+class RestApiThread(threading.Thread):
     """ Listener for REST requests.

     It is possible to register callbacks for url paths.

@@ -174,6 +175,12 @@ class RestApiThread(QtCore.QThread):
         self.is_running = False
         self.module = module
         self.port = port
+        self.httpd = None
+
+    def stop(self):
+        self.is_running = False
+        if self.httpd:
+            self.httpd.server_close()

     def run(self):
         self.is_running = True

@@ -185,12 +192,14 @@ class RestApiThread(QtCore.QThread):
             )

             with ThreadingSimpleServer(("", self.port), Handler) as httpd:
+                self.httpd = httpd
                 while self.is_running:
                     httpd.handle_request()

         except Exception:
             log.warning(
                 "Rest Api Server service has failed", exc_info=True
             )

+        self.httpd = None
         self.is_running = False
-        self.module.thread_stopped()
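The run loop serves one request at a time and re-checks is_running between requests, so stop() can break it by flipping the flag and closing the socket. A condensed, runnable sketch of the same stoppable-server pattern; the port is arbitrary:

import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()

class StoppableServerThread(threading.Thread):
    def __init__(self, port):
        super(StoppableServerThread, self).__init__()
        self.port = port
        self.is_running = False
        self.httpd = None

    def run(self):
        self.is_running = True
        self.httpd = HTTPServer(("", self.port), PingHandler)
        # A timeout makes handle_request() return periodically, so the
        # flag is re-checked even when no request arrives.
        self.httpd.timeout = 1
        while self.is_running:
            self.httpd.handle_request()

    def stop(self):
        self.is_running = False
        if self.httpd:
            self.httpd.server_close()

server = StoppableServerThread(8239)  # arbitrary free port
server.start()
server.stop()
server.join()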


@@ -2,7 +2,6 @@ import os
 import sys
 import subprocess
 import pype
-from pype import tools


 class StandAlonePublishModule:

@@ -30,6 +29,7 @@ class StandAlonePublishModule:
         ))

     def show(self):
+        from pype import tools
         standalone_publisher_tool_path = os.path.join(
             os.path.dirname(tools.__file__),
             "standalonepublish"


@@ -1,5 +1,4 @@
 from .timers_manager import TimersManager
-from .widget_user_idle import WidgetUserIdle

 CLASS_DEFINIION = TimersManager


@@ -1,21 +1,7 @@
-from .widget_user_idle import WidgetUserIdle, SignalHandler
-from pype.api import Logger, config
+from pype.api import Logger


-class Singleton(type):
-    """ Signleton implementation
-    """
-    _instances = {}
-
-    def __call__(cls, *args, **kwargs):
-        if cls not in cls._instances:
-            cls._instances[cls] = super(
-                Singleton, cls
-            ).__call__(*args, **kwargs)
-        return cls._instances[cls]
-
-
-class TimersManager(metaclass=Singleton):
+class TimersManager:
     """ Handles about Timers.

     Should be able to start/stop all timers at once.
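For reference, the removed metaclass made every instantiation of TimersManager return the same object. A compact, runnable sketch of that behaviour:

class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Create the instance on first call, then keep returning it.
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(
                *args, **kwargs
            )
        return cls._instances[cls]


class Manager(metaclass=Singleton):
    pass


assert Manager() is Manager()  # always the same instance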

@@ -41,7 +27,13 @@ class TimersManager(metaclass=Singleton):

         self.idle_man = None
         self.signal_handler = None

+        self.trat_init(tray_widget, main_widget)
+
+    def trat_init(self, tray_widget, main_widget):
+        from .widget_user_idle import WidgetUserIdle, SignalHandler
         self.widget_user_idle = WidgetUserIdle(self, tray_widget)
+        self.signal_handler = SignalHandler(self)

     def set_signal_times(self):
         try:

@@ -119,7 +111,6 @@ class TimersManager(metaclass=Singleton):
         """

         if 'IdleManager' in modules:
-            self.signal_handler = SignalHandler(self)
             if self.set_signal_times() is True:
                 self.register_to_idle_manager(modules['IdleManager'])


@@ -3,8 +3,6 @@ import json
 import getpass

 import appdirs
-from Qt import QtWidgets
-from .widget_user import UserWidget

 from pype.api import Logger

@@ -24,6 +22,12 @@ class UserModule:
         self.cred_path = os.path.normpath(os.path.join(
             self.cred_folder_path, self.cred_filename
         ))
+        self.widget_login = None
+
+        self.tray_init(main_parent, parent)
+
+    def tray_init(self, main_parent=None, parent=None):
+        from .widget_user import UserWidget
+        self.widget_login = UserWidget(self)

         self.load_credentials()

@@ -66,6 +70,7 @@ class UserModule:

     # Definition of Tray menu
     def tray_menu(self, parent_menu):
+        from Qt import QtWidgets
         """Add menu or action to Tray(or parent)'s menu"""
         action = QtWidgets.QAction("Username", parent_menu)
         action.triggered.connect(self.show_widget)

@@ -121,7 +126,8 @@ class UserModule:

         self.cred = {"username": username}
         os.environ[self.env_name] = username
-        self.widget_login.set_user(username)
+        if self.widget_login:
+            self.widget_login.set_user(username)
         try:
             file = open(self.cred_path, "w")
             file.write(json.dumps(self.cred))


5 pype/modules/websocket_server/__init__.py Normal file

@@ -0,0 +1,5 @@
from .websocket_server import WebSocketServer


def tray_init(tray_widget, main_widget):
    return WebSocketServer()

0 pype/modules/websocket_server/hosts/__init__.py Normal file

47 pype/modules/websocket_server/hosts/external_app_1.py Normal file

@@ -0,0 +1,47 @@
import asyncio

from pype.api import Logger
from wsrpc_aiohttp import WebSocketRoute

log = Logger().get_logger("WebsocketServer")


class ExternalApp1(WebSocketRoute):
    """
    One route, mimicking an external application (like Harmony, etc.).
    All functions can be called from the client.
    The 'do_notify' function calls a function on the client - mimicking
    a notification after a long-running job on the server or similar.
    """

    def init(self, **kwargs):
        # Python __init__ must return "self".
        # This method might return anything.
        log.debug("someone called ExternalApp1 route")
        return kwargs

    async def server_function_one(self):
        log.info('In function one')

    async def server_function_two(self):
        log.info('In function two')
        return 'function two'

    async def server_function_three(self):
        log.info('In function three')
        asyncio.ensure_future(self.do_notify())
        return '{"message":"function tree"}'

    async def server_function_four(self, *args, **kwargs):
        log.info('In function four args {} kwargs {}'.format(args, kwargs))
        ret = dict(**kwargs)
        ret["message"] = "function four received arguments"
        return str(ret)

    # This method calls a function on the client side
    async def do_notify(self):
        import time
        time.sleep(5)
        log.info('Calling function on client after delay')
        awesome = 'Somebody called the server_function_three method!'
        await self.socket.call('notify', result=awesome)
|
||||
64
pype/modules/websocket_server/hosts/photoshop.py
Normal file
@@ -0,0 +1,64 @@
from pype.api import Logger
from wsrpc_aiohttp import WebSocketRoute
import functools

import avalon.photoshop as photoshop

log = Logger().get_logger("WebsocketServer")


class Photoshop(WebSocketRoute):
    """
    One route, mimicking an external application (like Harmony, etc.).
    All functions can be called from the client.
    The 'do_notify' function calls a function on the client - mimicking
    a notification after a long running job on the server or similar.
    """
    instance = None

    def init(self, **kwargs):
        # Python __init__ must return "self".
        # This method might return anything.
        log.debug("someone called Photoshop route")
        self.instance = self
        return kwargs

    # server functions
    async def ping(self):
        log.debug("someone called Photoshop route ping")

    # This method calls a function on the client side
    # client functions

    async def read(self):
        log.debug("photoshop.read client calls server, server calls "
                  "Photoshop client")
        return await self.socket.call('Photoshop.read')

    # panel routes for tools
    async def creator_route(self):
        self._tool_route("creator")

    async def workfiles_route(self):
        self._tool_route("workfiles")

    async def loader_route(self):
        self._tool_route("loader")

    async def publish_route(self):
        self._tool_route("publish")

    async def sceneinventory_route(self):
        self._tool_route("sceneinventory")

    async def projectmanager_route(self):
        self._tool_route("projectmanager")

    def _tool_route(self, tool_name):
        """The address accessed when clicking on the buttons."""
        partial_method = functools.partial(photoshop.show, tool_name)

        photoshop.execute_in_main_thread(partial_method)

        # Required return statement.
        return "nothing"

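`_tool_route` defers the actual Qt tool to the host's main thread via functools.partial. A minimal standalone sketch of the same dispatch idea, using a queue in place of the avalon implementation (all names here are hypothetical, for illustration only):

    import functools
    import queue

    main_thread_calls = queue.Queue()  # drained by the host's main loop

    def execute_in_main_thread(callable_):
        # worker threads only enqueue; the main thread executes later
        main_thread_calls.put(callable_)

    def show(tool_name):
        print("showing tool:", tool_name)

    execute_in_main_thread(functools.partial(show, "loader"))
    main_thread_calls.get()()  # what the main loop would do on its next tick
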
283
pype/modules/websocket_server/stubs/photoshop_server_stub.py
Normal file
@@ -0,0 +1,283 @@
"""
Stub handling connection from server to client.
Used anywhere the solution is calling client methods.
"""
import json
from collections import namedtuple

from pype.modules.websocket_server import WebSocketServer


class PhotoshopServerStub():
    """
    Stub for calling functions on the client (Photoshop js) side.
    Expects that the client is already connected (started when the avalon
    menu is opened).
    'self.websocketserver.call' is used as an async wrapper.
    """

    def __init__(self):
        self.websocketserver = WebSocketServer.get_instance()
        self.client = self.websocketserver.get_client()

    def open(self, path):
        """
        Open file located at 'path' (local).
        :param path: <string> file path locally
        :return: None
        """
        self.websocketserver.call(self.client.call
                                  ('Photoshop.open', path=path)
                                  )

    def read(self, layer, layers_meta=None):
        """
        Parses layer metadata from the Headline field of the active document.
        :param layer: <namedTuple> Layer("id": XX, "name": "YYY")
        :param layers_meta: full list from Headline (for performance in loops)
        :return:
        """
        if layers_meta is None:
            layers_meta = self.get_layers_metadata()

        return layers_meta.get(str(layer.id))

    def imprint(self, layer, data, all_layers=None, layers_meta=None):
        """
        Save layer metadata to the Headline field of the active document.
        :param layer: <namedTuple> Layer("id": XXX, "name": 'YYY')
        :param data: <string> json representation for single layer
        :param all_layers: <list of namedTuples> - for performance, could be
                injected for usage in loop; if not, a single call will be
                triggered
        :param layers_meta: <string> json representation from Headline
                (for performance - provide only if imprint is in a
                loop - value should be the same)
        :return: None
        """
        if not layers_meta:
            layers_meta = self.get_layers_metadata()
        # json.dumps writes integer values in a dictionary to string, so
        # anticipating it here.
        if str(layer.id) in layers_meta and layers_meta[str(layer.id)]:
            layers_meta[str(layer.id)].update(data)
        else:
            layers_meta[str(layer.id)] = data

        # Ensure only valid ids are stored.
        if not all_layers:
            all_layers = self.get_layers()
        layer_ids = [layer.id for layer in all_layers]
        cleaned_data = {}

        for id in layers_meta:
            if int(id) in layer_ids:
                cleaned_data[id] = layers_meta[id]

        payload = json.dumps(cleaned_data, indent=4)

        self.websocketserver.call(self.client.call
                                  ('Photoshop.imprint', payload=payload)
                                  )

    def get_layers(self):
        """
        Returns JSON document with all(?) layers in the active document.

        :return: <list of namedtuples>
                 Format of tuple: { 'id': '123',
                                    'name': 'My Layer 1',
                                    'type': 'GUIDE'|'FG'|'BG'|'OBJ',
                                    'visible': 'true'|'false' }
        """
        res = self.websocketserver.call(self.client.call
                                        ('Photoshop.get_layers'))

        return self._to_records(res)

    def get_layers_in_layers(self, layers):
        """
        Return all layers that belong to layers (might be groups).
        :param layers: <list of namedTuples>
        :return: <list of namedTuples>
        """
        all_layers = self.get_layers()
        ret = []
        parent_ids = set([lay.id for lay in layers])

        for layer in all_layers:
            parents = set(layer.parents)
            if len(parent_ids & parents) > 0:
                ret.append(layer)
            if layer.id in parent_ids:
                ret.append(layer)

        return ret

    def create_group(self, name):
        """
        Create new group (eg. LayerSet)
        :return: <namedTuple> Layer("id": XX, "name": "YYY")
        """
        ret = self.websocketserver.call(self.client.call
                                        ('Photoshop.create_group',
                                         name=name))
        # creating a group in PS is asynchronous, returns only id
        layer = {"id": ret, "name": name, "group": True}
        return namedtuple('Layer', layer.keys())(*layer.values())

    def group_selected_layers(self, name):
        """
        Group selected layers into new LayerSet (eg. group)
        :return: <json representation of Layer>
        """
        res = self.websocketserver.call(self.client.call
                                        ('Photoshop.group_selected_layers',
                                         name=name)
                                        )
        return self._to_records(res)

    def get_selected_layers(self):
        """
        Get a list of currently selected layers.
        :return: <list of Layer('id': XX, 'name': "YYY")>
        """
        res = self.websocketserver.call(self.client.call
                                        ('Photoshop.get_selected_layers'))
        return self._to_records(res)

    def select_layers(self, layers):
        """
        Select specified layers in Photoshop.
        :param layers: <list of Layer('id': XX, 'name': "YYY")>
        :return: None
        """
        layer_ids = [layer.id for layer in layers]

        self.websocketserver.call(self.client.call
                                  ('Photoshop.get_layers',
                                   layers=layer_ids)
                                  )

    def get_active_document_full_name(self):
        """
        Returns full name with path of the active document via ws call.
        :return: <string> full path with name
        """
        res = self.websocketserver.call(
            self.client.call('Photoshop.get_active_document_full_name'))

        return res

    def get_active_document_name(self):
        """
        Returns just the name of the active document via ws call.
        :return: <string> file name
        """
        res = self.websocketserver.call(self.client.call
                                        ('Photoshop.get_active_document_name'))

        return res

    def is_saved(self):
        """
        Returns true if there are no changes in the active document.
        :return: <boolean>
        """
        return self.websocketserver.call(self.client.call
                                         ('Photoshop.is_saved'))

    def save(self):
        """
        Saves the active document.
        :return: None
        """
        self.websocketserver.call(self.client.call
                                  ('Photoshop.save'))

    def saveAs(self, image_path, ext, as_copy):
        """
        Saves the active document to psd (copy) or png or jpg.
        :param image_path: <string> full local path
        :param ext: <string psd|jpg|png>
        :param as_copy: <boolean>
        :return: None
        """
        self.websocketserver.call(self.client.call
                                  ('Photoshop.saveAs',
                                   image_path=image_path,
                                   ext=ext,
                                   as_copy=as_copy))

    def set_visible(self, layer_id, visibility):
        """
        Set layer with 'layer_id' to 'visibility'.
        :param layer_id: <int>
        :param visibility: <true - set visible, false - hide>
        :return: None
        """
        self.websocketserver.call(self.client.call
                                  ('Photoshop.set_visible',
                                   layer_id=layer_id,
                                   visibility=visibility))

    def get_layers_metadata(self):
        """
        Reads layers metadata from Headline of the active document in PS.
        (Headline is accessible via File > File Info)
        :return: <string> - json documents
        """
        layers_data = {}
        res = self.websocketserver.call(self.client.call('Photoshop.read'))
        try:
            layers_data = json.loads(res)
        except json.decoder.JSONDecodeError:
            pass
        return layers_data

    def import_smart_object(self, path):
        """
        Import the file at `path` as a smart object to active document.

        Args:
            path (str): File path to import.
        """
        res = self.websocketserver.call(self.client.call
                                        ('Photoshop.import_smart_object',
                                         path=path))

        return self._to_records(res).pop()

    def replace_smart_object(self, layer, path):
        """
        Replace the smart object `layer` with file at `path`.

        Args:
            layer (namedTuple): Layer("id": XX, "name": "YY"...).
            path (str): File to import.
        """
        self.websocketserver.call(self.client.call
                                  ('Photoshop.replace_smart_object',
                                   layer=layer,
                                   path=path))

    def close(self):
        self.client.close()

    def _to_records(self, res):
        """
        Converts a string json representation into a list of named tuples so
        dot notation access works.
        :return: <list of named tuples>
        :param res: <string> - json representation
        """
        try:
            layers_data = json.loads(res)
        except json.decoder.JSONDecodeError:
            raise ValueError("Received broken JSON {}".format(res))
        ret = []
        # convert to namedtuple to use dot notation
        if isinstance(layers_data, dict):  # TODO refactor
            layers_data = [layers_data]
        for d in layers_data:
            ret.append(namedtuple('Layer', d.keys())(*d.values()))
        return ret

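A short usage sketch of the stub above, assuming a Photoshop client is already connected so get_instance()/get_client() succeed (the metadata values are hypothetical):

    stub = PhotoshopServerStub()

    # round-trip: read all layers, tag one, read its metadata back
    layers = stub.get_layers()
    first = layers[0]
    stub.imprint(first, {"family": "image"}, all_layers=layers)
    print(stub.read(first))  # -> {"family": "image"}
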
179
pype/modules/websocket_server/test_client/wsrpc_client.html
Normal file
@@ -0,0 +1,179 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Title</title>

    <!-- CSS only -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/css/bootstrap.min.css" integrity="sha384-9aIt2nRpC12Uk9gS9baDl411NQApFmC26EwAOH8WgZl5MYYxFfc+NcPb1dKGj7Sk" crossorigin="anonymous">

    <script src="https://code.jquery.com/jquery-3.5.1.slim.min.js" integrity="sha384-DfXdz2htPH0lsSSs5nCTpuj/zy4C+OGpamoFVy38MVBnE+IbbVYUew+OrCXaRkfj" crossorigin="anonymous"></script>
    <script src="https://cdn.jsdelivr.net/npm/popper.js@1.16.0/dist/umd/popper.min.js" integrity="sha384-Q6E9RHvbIyZFJoft+2mJbHaEWldlvI9IOYy5n3zV9zzTtmI3UksdQRVvoxMfooAo" crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/4.5.0/js/bootstrap.min.js" integrity="sha384-OgVRvuATP1z7JjHLkuOU7Xw704+h835Lr+6QL9UvYjZE3Ipu6Tp75j7Bh/kR0JKI" crossorigin="anonymous"></script>

    <script type="text/javascript" src="//unpkg.com/@wsrpc/client"></script>
    <script>
        WSRPC.DEBUG = true;
        WSRPC.TRACE = true;
        var url = (window.location.protocol === "https:" ? "wss://" : "ws://") + window.location.host + '/ws/';
        url = 'ws://localhost:8099/ws/';
        RPC = new WSRPC(url, 5000);

        console.log(RPC.state());
        // Configure client API that can be called from the server
        RPC.addRoute('notify', function (data) {
            console.log('Server called client route "notify":', data);
            alert('Server called client route "notify": ' + data.result);
            return data.result;
        });
        RPC.connect();
        console.log(RPC.state());

        $(document).ready(function() {
            function NoReturn(){
                // Call stateful route
                RPC.call('ExternalApp1.server_function_one').then(function (data) {
                    console.log('Result for calling server route "server_function_one": ', data);
                    alert('Function "server_function_one" returned: ' + data);
                }, function (error) {
                    alert(error);
                });
            }

            function ReturnValue(){
                // Call stateful route
                RPC.call('ExternalApp1.server_function_two').then(function (data) {
                    console.log('Result for calling server route "server_function_two": ', data);
                    alert('Function "server_function_two" returned: ' + data);
                }, function (error) {
                    alert(error);
                });
            }

            function ValueAndNotify(){
                // After you call this route, the server executes the 'notify'
                // route on the client, which is registered above.
                RPC.call('ExternalApp1.server_function_three').then(function (data) {
                    console.log('Result for calling server route "server_function_three": ', data);
                    alert('Function "server_function_three" returned: ' + data);
                }, function (error) {
                    alert(error);
                });
            }

            function SendValue(){
                RPC.call('ExternalApp1.server_function_four', {foo: 'one', bar: 'two'}).then(function (data) {
                    console.log('Result for calling server route "server_function_four": ', data);
                    alert('Function "server_function_four" returned: ' + data);
                }, function (error) {
                    alert(error);
                });
            }

            $('#noReturn').click(function() {
                NoReturn();
            })

            $('#returnValue').click(function() {
                ReturnValue();
            })

            $('#valueAndNotify').click(function() {
                ValueAndNotify();
            })

            $('#sendValue').click(function() {
                SendValue();
            })

        })

        <!-- // Call stateless method-->
        <!-- RPC.call('test2').then(function (data) {-->
        <!--     console.log('Result for calling server route "test2"', data);-->
        <!-- });-->
    </script>
</head>
<body>

<div class="d-flex flex-column flex-md-row align-items-center p-3 px-md-4 mb-3 bg-white border-bottom shadow-sm">
    <h5 class="my-0 mr-md-auto font-weight-normal">Test of wsrpc javascript client</h5>
</div>

<div class="container">
    <div class="card-deck mb-3 text-center">
        <div class="card mb-4 shadow-sm">
            <div class="card-header">
                <h4 class="my-0 font-weight-normal">No return value</h4>
            </div>
            <div class="card-body">
                <ul class="list-unstyled mt-3 mb-4">
                    <li>Calls server_function_one</li>
                    <li>Function only logs on server</li>
                    <li>No return value</li>
                </ul>
                <button type="button" id="noReturn" class="btn btn-lg btn-block btn-outline-primary">Call server</button>
            </div>
        </div>
        <div class="card mb-4 shadow-sm">
            <div class="card-header">
                <h4 class="my-0 font-weight-normal">Return value</h4>
            </div>
            <div class="card-body">
                <ul class="list-unstyled mt-3 mb-4">
                    <li>Calls server_function_two</li>
                    <li>Function logs on server</li>
                    <li>Returns simple text value</li>
                </ul>
                <button type="button" id="returnValue" class="btn btn-lg btn-block btn-outline-primary">Call server</button>
            </div>
        </div>
        <div class="card mb-4 shadow-sm">
            <div class="card-header">
                <h4 class="my-0 font-weight-normal">Notify</h4>
            </div>
            <div class="card-body">
                <ul class="list-unstyled mt-3 mb-4">
                    <li>Calls server_function_three</li>
                    <li>Function logs on server</li>
                    <li>Returns json payload</li>
                    <li>Server then calls function ON the client after delay</li>
                </ul>
                <button type="button" id="valueAndNotify" class="btn btn-lg btn-block btn-outline-primary">Call server</button>
            </div>
        </div>
        <div class="card mb-4 shadow-sm">
            <div class="card-header">
                <h4 class="my-0 font-weight-normal">Send value</h4>
            </div>
            <div class="card-body">
                <ul class="list-unstyled mt-3 mb-4">
                    <li>Calls server_function_four</li>
                    <li>Function logs on server</li>
                    <li>Returns modified sent values</li>
                </ul>
                <button type="button" id="sendValue" class="btn btn-lg btn-block btn-outline-primary">Call server</button>
            </div>
        </div>
    </div>
</div>

</body>
</html>

34
pype/modules/websocket_server/test_client/wsrpc_client.py
Normal file
@@ -0,0 +1,34 @@
"""
Simple testing Python client for wsrpc_aiohttp.
Sequentially calls multiple methods on the server.
"""
import asyncio

from wsrpc_aiohttp import WSRPCClient

loop = asyncio.get_event_loop()


async def main():
    print("main")
    client = WSRPCClient("ws://127.0.0.1:8099/ws/",
                         loop=asyncio.get_event_loop())

    client.add_route('notify', notify)
    await client.connect()
    print("connected")
    print(await client.proxy.ExternalApp1.server_function_one())
    print(await client.proxy.ExternalApp1.server_function_two())
    print(await client.proxy.ExternalApp1.server_function_three())
    print(await client.proxy.ExternalApp1.server_function_four(foo="one"))
    await client.close()


def notify(socket, *args, **kwargs):
    print("called from server")


if __name__ == "__main__":
    # loop.run_until_complete(main())
    asyncio.run(main())

222
pype/modules/websocket_server/websocket_server.py
Normal file
@@ -0,0 +1,222 @@
from pype.api import Logger

import threading
from aiohttp import web
import asyncio
from wsrpc_aiohttp import STATIC_DIR, WebSocketAsync

import os
import sys
import pyclbr
import importlib
import urllib.parse

log = Logger().get_logger("WebsocketServer")


class WebSocketServer():
    """
    Basic POC implementation of an asynchronous websocket RPC server.
    Uses the class in external_app_1.py to mimic an implementation for a
    single external application.
    The 'test_client' folder contains two test implementations of a client.
    """
    _instance = None

    def __init__(self):
        self.qaction = None
        self.failed_icon = None
        self._is_running = False
        WebSocketServer._instance = self
        self.client = None
        self.handlers = {}

        port = None
        websocket_url = os.getenv("WEBSOCKET_URL")
        if websocket_url:
            parsed = urllib.parse.urlparse(websocket_url)
            port = parsed.port
        if not port:
            port = 8098  # fallback

        self.app = web.Application()

        self.app.router.add_route("*", "/ws/", WebSocketAsync)
        self.app.router.add_static("/js", STATIC_DIR)
        self.app.router.add_static("/", ".")

        # add route with multiple methods for single "external app"
        directories_with_routes = ['hosts']
        self.add_routes_for_directories(directories_with_routes)

        self.websocket_thread = WebsocketServerThread(self, port)

    def add_routes_for_directories(self, directories_with_routes):
        """Loops through selected directories to find all modules and,
        in them, all classes implementing 'WebSocketRoute' that could be
        used as a route.
        All methods in these classes are registered automatically.
        """
        for dir_name in directories_with_routes:
            dir_name = os.path.join(os.path.dirname(__file__), dir_name)
            for file_name in os.listdir(dir_name):
                if '.py' in file_name and '__' not in file_name:
                    self.add_routes_for_module(file_name, dir_name)

    def add_routes_for_module(self, file_name, dir_name):
        """Auto routes for all classes implementing 'WebSocketRoute'
        in 'file_name' in 'dir_name'.
        """
        module_name = file_name.replace('.py', '')
        module_info = pyclbr.readmodule(module_name, [dir_name])

        for class_name, cls_object in module_info.items():
            sys.path.append(dir_name)
            if 'WebSocketRoute' in cls_object.super:
                log.debug('Adding route for {}'.format(class_name))
                module = importlib.import_module(module_name)
                cls = getattr(module, class_name)
                WebSocketAsync.add_route(class_name, cls)
            sys.path.pop()

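For the pyclbr-based auto-registration above to pick a class up, a module under hosts/ only needs to subclass WebSocketRoute. A minimal sketch of such a module (hypothetical route, not part of this commit):

    # hosts/my_app.py - discovered because the superclass name
    # 'WebSocketRoute' appears in the class definition
    from wsrpc_aiohttp import WebSocketRoute


    class MyApp(WebSocketRoute):
        async def hello(self):
            # becomes callable from a client as 'MyApp.hello'
            return "hello from server"
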
    def call(self, func):
        log.debug("websocket.call {}".format(func))
        future = asyncio.run_coroutine_threadsafe(func,
                                                  self.websocket_thread.loop)
        result = future.result()
        return result

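`call` bridges synchronous callers (such as the publish plugins) into the server thread's event loop. A standalone sketch of the same run_coroutine_threadsafe pattern:

    import asyncio
    import threading

    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()

    async def add(a, b):
        return a + b

    # schedule the coroutine on the loop's thread, block until it finishes
    future = asyncio.run_coroutine_threadsafe(add(1, 2), loop)
    print(future.result())  # -> 3
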
    def get_client(self):
        """
        Return the first connected client to the WebSocket.
        TODO implement selection by Route
        :return: <WebSocketAsync> client
        """
        clients = WebSocketAsync.get_clients()
        client = None
        if len(clients) > 0:
            key = list(clients.keys())[0]
            client = clients.get(key)

        return client

    @staticmethod
    def get_instance():
        if WebSocketServer._instance is None:
            WebSocketServer()
        return WebSocketServer._instance

    def tray_start(self):
        self.websocket_thread.start()

    def tray_exit(self):
        self.stop()

    def stop_websocket_server(self):
        self.stop()

    @property
    def is_running(self):
        return self.websocket_thread.is_running

    def stop(self):
        if not self.is_running:
            return
        try:
            log.debug("Stopping websocket server")
            self.websocket_thread.is_running = False
            self.websocket_thread.stop()
        except Exception:
            log.warning(
                "Error happened while stopping websocket server",
                exc_info=True
            )

    def thread_stopped(self):
        self._is_running = False


class WebsocketServerThread(threading.Thread):
    """Listener for websocket rpc requests.

    It would probably be better to "attach" this to the main thread (as, for
    example, Harmony needs to run something on the main thread), but currently
    it creates a separate thread and a separate asyncio event loop.
    """
    def __init__(self, module, port):
        super(WebsocketServerThread, self).__init__()
        self.is_running = False
        self.port = port
        self.module = module
        self.loop = None
        self.runner = None
        self.site = None
        self.tasks = []

    def run(self):
        self.is_running = True

        try:
            log.info("Starting websocket server")
            self.loop = asyncio.new_event_loop()  # create new loop for thread
            asyncio.set_event_loop(self.loop)

            self.loop.run_until_complete(self.start_server())

            log.debug(
                "Running Websocket server on URL:"
                " \"ws://localhost:{}\"".format(self.port)
            )

            asyncio.ensure_future(self.check_shutdown(), loop=self.loop)
            self.loop.run_forever()
        except Exception:
            log.warning(
                "Websocket Server service has failed", exc_info=True
            )
        finally:
            self.loop.close()  # optional

        self.is_running = False
        self.module.thread_stopped()
        log.info("Websocket server stopped")

    async def start_server(self):
        """Starts runner and TCPSite."""
        self.runner = web.AppRunner(self.module.app)
        await self.runner.setup()
        self.site = web.TCPSite(self.runner, 'localhost', self.port)
        await self.site.start()

    def stop(self):
        """Sets is_running flag to False; 'check_shutdown' shuts server down."""
        self.is_running = False

    async def check_shutdown(self):
        """Future that runs and periodically checks whether the server
        should still be running.
        """
        while self.is_running:
            while self.tasks:
                task = self.tasks.pop(0)
                log.debug("waiting for task {}".format(task))
                await task
                log.debug("returned value {}".format(task.result))

            await asyncio.sleep(0.5)

        log.debug("Starting shutdown")
        await self.site.stop()
        log.debug("Site stopped")
        await self.runner.cleanup()
        log.debug("Runner stopped")
        tasks = [task for task in asyncio.all_tasks() if
                 task is not asyncio.current_task()]
        list(map(lambda task: task.cancel(), tasks))  # cancel all the tasks
        results = await asyncio.gather(*tasks, return_exceptions=True)
        log.debug(f'Finished awaiting cancelled tasks, results: {results}...')
        await self.loop.shutdown_asyncgens()
        # to really make sure everything else has time to stop
        await asyncio.sleep(0.07)
        self.loop.stop()

@@ -97,6 +97,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

    def process(self, instance):

@@ -178,6 +179,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

        # Adding metadata

@@ -228,6 +230,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

        # Adding metadata

@@ -242,6 +245,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
            session.commit()
        except Exception:
            session.rollback()
            session._configure_locations()
            self.log.warning((
                "Comment was not possible to set for AssetVersion"
                " \"{0}\". Can't set its value to: \"{1}\""

@@ -258,6 +262,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
                continue
            except Exception:
                session.rollback()
                session._configure_locations()

                self.log.warning((
                    "Custom Attribute \"{0}\""

@@ -272,6 +277,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

        # Component

@@ -316,6 +322,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

        # Reset members in memory

@@ -432,6 +439,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

        if assetversion_entity not in used_asset_versions:

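Every hunk above inserts the same `session._configure_locations()` line into an identical rollback block. A hedged sketch of a helper that would centralize the pattern (hypothetical refactor, not what the commit does - it keeps the blocks inline):

    import sys
    import six


    def commit_or_rollback(session):
        # commit, and on failure restore locations before re-raising
        try:
            session.commit()
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)
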
@@ -88,8 +88,14 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
                instance.data["frameEnd"] - instance.data["frameStart"]
            )

            if not comp.get('fps'):
                comp['fps'] = instance.context.data['fps']
            fps = comp.get('fps')
            if fps is None:
                fps = instance.data.get(
                    "fps", instance.context.data['fps']
                )

            comp['fps'] = fps

        location = self.get_ftrack_location(
            'ftrack.server', ft_session
        )

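The replacement logic widens the fps fallback from component -> context to component -> instance -> context. A tiny sketch of the resulting precedence (toy dicts, illustration only):

    comp = {}                      # no fps on the component
    instance_data = {"fps": 24}    # instance overrides context
    context_data = {"fps": 25}

    fps = comp.get("fps")
    if fps is None:
        fps = instance_data.get("fps", context_data["fps"])
    comp["fps"] = fps
    print(comp)  # -> {'fps': 24}
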
@@ -145,4 +145,5 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)

@@ -40,9 +40,11 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):

    def process(self, context):
        self.context = context
        if "hierarchyContext" not in context.data:
        if "hierarchyContext" not in self.context.data:
            return

        hierarchy_context = self.context.data["hierarchyContext"]

        self.session = self.context.data["ftrackSession"]
        project_name = self.context.data["projectEntity"]["name"]
        query = 'Project where full_name is "{}"'.format(project_name)

@@ -55,7 +57,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):

        self.ft_project = None

        input_data = context.data["hierarchyContext"]
        input_data = hierarchy_context

        # temporarily disable ftrack project's auto-syncing
        if auto_sync_state:

@@ -128,6 +130,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        # TASKS

@@ -156,6 +159,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        # Incoming links.

@@ -165,8 +169,31 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        # Create notes.
        user = self.session.query(
            "User where username is \"{}\"".format(self.session.api_user)
        ).first()
        if user:
            for comment in entity_data.get("comments", []):
                entity.create_note(comment, user)
        else:
            self.log.warning(
                "Was not able to query current User {}".format(
                    self.session.api_user
                )
            )
        try:
            self.session.commit()
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        # Import children.
        if 'childs' in entity_data:
            self.import_to_ftrack(
                entity_data['childs'], entity)

@@ -180,6 +207,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        # Create new links.

@@ -221,6 +249,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        return task

@@ -235,6 +264,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            self.session._configure_locations()
            six.reraise(tp, value, tb)

        return entity

@@ -249,7 +279,8 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            raise
            self.session._configure_locations()
            six.reraise(tp, value, tb)

    def auto_sync_on(self, project):

@@ -262,4 +293,5 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
        except Exception:
            tp, value, tb = sys.exc_info()
            self.session.rollback()
            raise
            self.session._configure_locations()
            six.reraise(tp, value, tb)

@@ -20,8 +20,8 @@ class CopyFile(api.Loader):
    def copy_file_to_clipboard(path):
        from avalon.vendor.Qt import QtCore, QtWidgets

        app = QtWidgets.QApplication.instance()
        assert app, "Must have running QApplication instance"
        clipboard = QtWidgets.QApplication.clipboard()
        assert clipboard, "Must have running QApplication instance"

        # Build mime data for clipboard
        data = QtCore.QMimeData()

@@ -29,5 +29,4 @@ class CopyFile(api.Loader):
        data.setUrls([url])

        # Set to Clipboard
        clipboard = app.clipboard()
        clipboard.setMimeData(data)

@@ -19,11 +19,10 @@ class CopyFilePath(api.Loader):

    @staticmethod
    def copy_path_to_clipboard(path):
        from avalon.vendor.Qt import QtCore, QtWidgets
        from avalon.vendor.Qt import QtWidgets

        app = QtWidgets.QApplication.instance()
        assert app, "Must have running QApplication instance"
        clipboard = QtWidgets.QApplication.clipboard()
        assert clipboard, "Must have running QApplication instance"

        # Set to Clipboard
        clipboard = app.clipboard()
        clipboard.setText(os.path.normpath(path))

@@ -23,123 +23,256 @@ Provides:

import copy
import json
import collections

from avalon import io
import pyblish.api


class CollectAnatomyInstanceData(pyblish.api.InstancePlugin):
    """Collect Instance specific Anatomy data."""
class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
    """Collect Instance specific Anatomy data.

    Plugin runs for all instances in the context, even inactive ones.
    """

    order = pyblish.api.CollectorOrder + 0.49
    label = "Collect Anatomy Instance data"

    def process(self, instance):
        # get all the stuff from the database
        anatomy_data = copy.deepcopy(instance.context.data["anatomyData"])
        project_entity = instance.context.data["projectEntity"]
        context_asset_entity = instance.context.data["assetEntity"]
        instance_asset_entity = instance.data.get("assetEntity")
    def process(self, context):
        self.log.info("Collecting anatomy data for all instances.")

        asset_name = instance.data["asset"]
        self.fill_missing_asset_docs(context)
        self.fill_latest_versions(context)
        self.fill_anatomy_data(context)

        # There is a possibility that assetEntity on instance is already set
        # which can happen in standalone publisher
        if (
            instance_asset_entity
            and instance_asset_entity["name"] == asset_name
        ):
            asset_entity = instance_asset_entity
        self.log.info("Anatomy Data collection finished.")

        # Check if asset name is the same as what is in context
        # - they may be different, e.g. in NukeStudio
        elif context_asset_entity["name"] == asset_name:
            asset_entity = context_asset_entity
    def fill_missing_asset_docs(self, context):
        self.log.debug("Querying asset documents for instances.")

        else:
            asset_entity = io.find_one({
                "type": "asset",
                "name": asset_name,
                "parent": project_entity["_id"]
            })
        context_asset_doc = context.data["assetEntity"]

        subset_name = instance.data["subset"]
        version_number = instance.data.get("version")
        latest_version = None
        instances_with_missing_asset_doc = collections.defaultdict(list)
        for instance in context:
            instance_asset_doc = instance.data.get("assetEntity")
            _asset_name = instance.data["asset"]

        if asset_entity:
            subset_entity = io.find_one({
                "type": "subset",
                "name": subset_name,
                "parent": asset_entity["_id"]
            })
            # There is a possibility that assetEntity on instance is already
            # set, which can happen in standalone publisher
            if (
                instance_asset_doc
                and instance_asset_doc["name"] == _asset_name
            ):
                continue

            # Check if asset name is the same as what is in context
            # - they may be different, e.g. in NukeStudio
            if context_asset_doc["name"] == _asset_name:
                instance.data["assetEntity"] = context_asset_doc

            if subset_entity is None:
                self.log.debug("Subset entity does not exist yet.")
            else:
                version_entity = io.find_one(
                    {
                        "type": "version",
                        "parent": subset_entity["_id"]
                    },
                    sort=[("name", -1)]
                )
                if version_entity:
                    latest_version = version_entity["name"]
            instances_with_missing_asset_doc[_asset_name].append(instance)

        # If version is not specified for instance or context
        if version_number is None:
            # TODO we should be able to change default version by studio
            # preferences (like start with version number `0`)
            version_number = 1
            # use latest version (+1) if any already exist
            if latest_version is not None:
                version_number += int(latest_version)
        if not instances_with_missing_asset_doc:
            self.log.debug("All instances already had the right asset document.")
            return

        anatomy_updates = {
            "asset": asset_name,
            "family": instance.data["family"],
            "subset": subset_name,
            "version": version_number
        asset_names = list(instances_with_missing_asset_doc.keys())
        self.log.debug("Querying asset documents with names: {}".format(
            ", ".join(["\"{}\"".format(name) for name in asset_names])
        ))
        asset_docs = io.find({
            "type": "asset",
            "name": {"$in": asset_names}
        })
        asset_docs_by_name = {
            asset_doc["name"]: asset_doc
            for asset_doc in asset_docs
        }
        if (
            asset_entity
            and asset_entity["_id"] != context_asset_entity["_id"]
        ):
            parents = asset_entity["data"].get("parents") or list()
            anatomy_updates["hierarchy"] = "/".join(parents)

        task_name = instance.data.get("task")
        if task_name:
            anatomy_updates["task"] = task_name
        not_found_asset_names = []
        for asset_name, instances in instances_with_missing_asset_doc.items():
            asset_doc = asset_docs_by_name.get(asset_name)
            if not asset_doc:
                not_found_asset_names.append(asset_name)
                continue

        # Version should not be collected since may be instance
        anatomy_data.update(anatomy_updates)
            for _instance in instances:
                _instance.data["assetEntity"] = asset_doc

        resolution_width = instance.data.get("resolutionWidth")
        if resolution_width:
            anatomy_data["resolution_width"] = resolution_width
        if not_found_asset_names:
            joined_asset_names = ", ".join(
                ["\"{}\"".format(name) for name in not_found_asset_names]
            )
            self.log.warning((
                "Not found asset documents with names \"{}\"."
            ).format(joined_asset_names))

        resolution_height = instance.data.get("resolutionHeight")
        if resolution_height:
            anatomy_data["resolution_height"] = resolution_height
    def fill_latest_versions(self, context):
        """Try to find the latest version for each instance's subset.

        pixel_aspect = instance.data.get("pixelAspect")
        if pixel_aspect:
            anatomy_data["pixel_aspect"] = float("{:0.2f}".format(
                float(pixel_aspect)))
        Key "latestVersion" is always set to the latest version or `None`.

        fps = instance.data.get("fps")
        if fps:
            anatomy_data["fps"] = float("{:0.2f}".format(
                float(fps)))
        Args:
            context (pyblish.Context)

        instance.data["projectEntity"] = project_entity
        instance.data["assetEntity"] = asset_entity
        instance.data["anatomyData"] = anatomy_data
        instance.data["latestVersion"] = latest_version
        # TODO should version number be set here?
        instance.data["version"] = version_number
        Returns:
            None

        self.log.info("Instance anatomy Data collected")
        self.log.debug(json.dumps(anatomy_data, indent=4))
        """
        self.log.debug("Querying latest versions for instances.")

        hierarchy = {}
        subset_names = set()
        asset_ids = set()
        for instance in context:
            # Make sure `"latestVersion"` key is set
            latest_version = instance.data.get("latestVersion")
            instance.data["latestVersion"] = latest_version

            # Skip instances without "assetEntity"
            asset_doc = instance.data.get("assetEntity")
            if not asset_doc:
                continue

            # Store asset ids and subset names for queries
            asset_id = asset_doc["_id"]
            subset_name = instance.data["subset"]
            asset_ids.add(asset_id)
            subset_names.add(subset_name)

            # Prepare instance hierarchy for faster filling of latest versions
            if asset_id not in hierarchy:
                hierarchy[asset_id] = {}
            if subset_name not in hierarchy[asset_id]:
                hierarchy[asset_id][subset_name] = []
            hierarchy[asset_id][subset_name].append(instance)

        subset_docs = list(io.find({
            "type": "subset",
            "parent": {"$in": list(asset_ids)},
            "name": {"$in": list(subset_names)}
        }))

        subset_ids = [
            subset_doc["_id"]
            for subset_doc in subset_docs
        ]

        last_version_by_subset_id = self._query_last_versions(subset_ids)
        for subset_doc in subset_docs:
            subset_id = subset_doc["_id"]
            last_version = last_version_by_subset_id.get(subset_id)
            if last_version is None:
                continue

            asset_id = subset_doc["parent"]
            subset_name = subset_doc["name"]
            _instances = hierarchy[asset_id][subset_name]
            for _instance in _instances:
                _instance.data["latestVersion"] = last_version

    def _query_last_versions(self, subset_ids):
        """Retrieve all latest versions for entered subset_ids.

        Args:
            subset_ids (list): List of subset ids with type `ObjectId`.

        Returns:
            dict: Key is subset id and value is last version name.
        """
        _pipeline = [
            # Find all versions of those subsets
            {"$match": {
                "type": "version",
                "parent": {"$in": subset_ids}
            }},
            # Sorting versions all together
            {"$sort": {"name": 1}},
            # Group them by "parent", but only take the last
            {"$group": {
                "_id": "$parent",
                "_version_id": {"$last": "$_id"},
                "name": {"$last": "$name"}
            }}
        ]

        last_version_by_subset_id = {}
        for doc in io.aggregate(_pipeline):
            subset_id = doc["_id"]
            last_version_by_subset_id[subset_id] = doc["name"]

        return last_version_by_subset_id

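`_query_last_versions` leans on a Mongo aggregation: match versions of the given subsets, sort by name, then $group with $last to keep one row per parent. A toy sketch of what the pipeline reduces, with plain Python standing in for Mongo and hypothetical documents:

    docs = [
        {"type": "version", "parent": "subsetA", "name": 1},
        {"type": "version", "parent": "subsetA", "name": 2},
        {"type": "version", "parent": "subsetB", "name": 7},
    ]

    # equivalent of $sort by name + $group by parent taking $last
    last_by_parent = {}
    for doc in sorted(docs, key=lambda d: d["name"]):
        last_by_parent[doc["parent"]] = doc["name"]
    print(last_by_parent)  # -> {'subsetA': 2, 'subsetB': 7}
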
    def fill_anatomy_data(self, context):
        self.log.debug("Storing anatomy data to instance data.")

        project_doc = context.data["projectEntity"]
        context_asset_doc = context.data["assetEntity"]

        for instance in context:
            version_number = instance.data.get("version")
            # If version is not specified for instance or context
            if version_number is None:
                # TODO we should be able to change default version by studio
                # preferences (like start with version number `0`)
                version_number = 1
                # use latest version (+1) if any already exist
                latest_version = instance.data["latestVersion"]
                if latest_version is not None:
                    version_number += int(latest_version)

            anatomy_updates = {
                "asset": instance.data["asset"],
                "family": instance.data["family"],
                "subset": instance.data["subset"],
                "version": version_number
            }

            # Hierarchy
            asset_doc = instance.data.get("assetEntity")
            if asset_doc and asset_doc["_id"] != context_asset_doc["_id"]:
                parents = asset_doc["data"].get("parents") or list()
                anatomy_updates["hierarchy"] = "/".join(parents)

            # Task
            task_name = instance.data.get("task")
            if task_name:
                anatomy_updates["task"] = task_name

            # Additional data
            resolution_width = instance.data.get("resolutionWidth")
            if resolution_width:
                anatomy_updates["resolution_width"] = resolution_width

            resolution_height = instance.data.get("resolutionHeight")
            if resolution_height:
                anatomy_updates["resolution_height"] = resolution_height

            pixel_aspect = instance.data.get("pixelAspect")
            if pixel_aspect:
                anatomy_updates["pixel_aspect"] = float(
                    "{:0.2f}".format(float(pixel_aspect))
                )

            fps = instance.data.get("fps")
            if fps:
                anatomy_updates["fps"] = float("{:0.2f}".format(float(fps)))

            anatomy_data = copy.deepcopy(context.data["anatomyData"])
            anatomy_data.update(anatomy_updates)

            # Store anatomy data
            instance.data["projectEntity"] = project_doc
            instance.data["anatomyData"] = anatomy_data
            instance.data["version"] = version_number

            # Log collected data
            instance_name = instance.data["name"]
            instance_label = instance.data.get("label")
            if instance_label:
                instance_name += "({})".format(instance_label)
            self.log.debug("Anatomy data for instance {}: {}".format(
                instance_name,
                json.dumps(anatomy_data, indent=4)
            ))

@@ -13,7 +13,7 @@ class CollectCurrentUserPype(pyblish.api.ContextPlugin):
    def process(self, context):
        user = os.getenv("PYPE_USERNAME", "").strip()
        if not user:
            return
        user = context.data.get("user", getpass.getuser())

        context.data["user"] = user
        self.log.debug("Pype user is \"{}\"".format(user))
        self.log.debug("Collected user \"{}\"".format(user))

@@ -26,6 +26,7 @@ class ExtractBurnin(pype.api.Extractor):
        "nukestudio",
        "premiere",
        "standalonepublisher",
        "harmony",
        "fusion"
    ]
    optional = True

@@ -195,11 +196,14 @@ class ExtractBurnin(pype.api.Extractor):
            if "delete" in new_repre["tags"]:
                new_repre["tags"].remove("delete")

            # Update name and outputName to be able to have multiple outputs
            # Join previous "outputName" with filename suffix
            new_name = "_".join([new_repre["outputName"], filename_suffix])
            new_repre["name"] = new_name
            new_repre["outputName"] = new_name
            if len(repre_burnin_defs.keys()) > 1:
                # Update name and outputName to be able
                # to have multiple outputs in case of more burnin presets
                # Join previous "outputName" with filename suffix
                new_name = "_".join(
                    [new_repre["outputName"], filename_suffix])
                new_repre["name"] = new_name
                new_repre["outputName"] = new_name

            # Prepare paths and files for process.
            self.input_output_paths(new_repre, temp_data, filename_suffix)

@@ -311,12 +315,15 @@ class ExtractBurnin(pype.api.Extractor):
            "comment": context.data.get("comment") or ""
        })

        intent_label = context.data.get("intent")
        intent_label = context.data.get("intent") or ""
        if intent_label and isinstance(intent_label, dict):
            intent_label = intent_label.get("label")
            value = intent_label.get("value")
            if value:
                intent_label = intent_label["label"]
            else:
                intent_label = ""

        if intent_label:
            burnin_data["intent"] = intent_label
        burnin_data["intent"] = intent_label

        temp_data = {
            "frame_start": frame_start,

@@ -1,6 +1,6 @@
import pyblish.api
from avalon import io

from copy import deepcopy

class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
    """Create entities in Avalon based on collected data."""

@@ -10,14 +10,35 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
    families = ["clip", "shot"]

    def process(self, context):
        # processing starts here
        if "hierarchyContext" not in context.data:
            self.log.info("skipping IntegrateHierarchyToAvalon")
            return
        hierarchy_context = deepcopy(context.data["hierarchyContext"])

        if not io.Session:
            io.install()

        input_data = context.data["hierarchyContext"]
        active_assets = []
        # filter only the active publishing instances
        for instance in context:
            if instance.data.get("publish") is False:
                continue

            if not instance.data.get("asset"):
                continue

            active_assets.append(instance.data["asset"])

        # remove duplicates in the list
        self.active_assets = list(set(active_assets))
        self.log.debug("__ self.active_assets: {}".format(self.active_assets))

        hierarchy_context = self._get_assets(hierarchy_context)

        self.log.debug("__ hierarchy_context: {}".format(hierarchy_context))
        input_data = context.data["hierarchyContext"] = hierarchy_context

        self.project = None
        self.import_to_avalon(input_data)

@@ -38,7 +59,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
        data["inputs"] = entity_data.get("inputs", [])

        # Tasks.
        tasks = entity_data.get("tasks", [])
        tasks = entity_data.get("tasks", {})
        if tasks is not None or len(tasks) > 0:
            data["tasks"] = tasks
        parents = []

@@ -78,12 +99,13 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
        if entity:
            # Do not override data, only update
            cur_entity_data = entity.get("data") or {}
            new_tasks = data.pop("tasks", [])
            new_tasks = data.pop("tasks", {})
            if "tasks" in cur_entity_data and new_tasks:
                for task_name in new_tasks:
                    if task_name not in cur_entity_data["tasks"]:
                        cur_entity_data["tasks"].append(task_name)

                for task_name in new_tasks.keys():
                    if task_name \
                            not in cur_entity_data["tasks"].keys():
                        cur_entity_data["tasks"][task_name] = \
                            new_tasks[task_name]
            cur_entity_data.update(data)
            data = cur_entity_data
        else:

@@ -150,3 +172,24 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
            entity_id = io.insert_one(item).inserted_id

        return io.find_one({"_id": entity_id})

    def _get_assets(self, input_dict):
        """Returns only the asset dictionary.
        Usually the last part of a deep dictionary which
        has no children.
        """
        input_dict_copy = deepcopy(input_dict)

        for key in input_dict.keys():
            self.log.debug("__ key: {}".format(key))
            # check if child key is available
            if input_dict[key].get("childs"):
                # loop deeper
                input_dict_copy[key]["childs"] = self._get_assets(
                    input_dict[key]["childs"])
            else:
                # filter out unwanted assets
                if key not in self.active_assets:
                    input_dict_copy.pop(key, None)

        return input_dict_copy

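`_get_assets` prunes the nested hierarchyContext down to branches that end in an active asset. A standalone toy run of the same recursion (hypothetical two-level hierarchy, active_assets mirroring self.active_assets):

    from copy import deepcopy

    active_assets = ["shot01"]

    def get_assets(input_dict):
        out = deepcopy(input_dict)
        for key in input_dict:
            if input_dict[key].get("childs"):
                out[key]["childs"] = get_assets(input_dict[key]["childs"])
            elif key not in active_assets:
                out.pop(key, None)  # leaf asset that is not active
        return out

    hierarchy = {"seq01": {"childs": {"shot01": {}, "shot02": {}}}}
    print(get_assets(hierarchy))
    # -> {'seq01': {'childs': {'shot01': {}}}}
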
@@ -48,10 +48,11 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
                continue

            if not isinstance(repre['files'], (list, tuple)):
                continue
                input_file = repre['files']
            else:
                input_file = repre['files'][0]

            stagingdir = os.path.normpath(repre.get("stagingDir"))
            input_file = repre['files'][0]

            # input_file = (
            #     collections[0].format('{head}{padding}{tail}') % start

@@ -80,6 +81,11 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
            jpeg_items.append("-i {}".format(full_input_path))
            # output arguments from presets
            jpeg_items.extend(ffmpeg_args.get("output") or [])

            # If it's a movie file, we just want one frame.
            if repre["ext"] == "mov":
                jpeg_items.append("-vframes 1")

            # output file
            jpeg_items.append(full_output_path)

|
|
@ -51,7 +51,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
to_height = 1080
|
||||
|
||||
def process(self, instance):
|
||||
# # Skip review when requested.
|
||||
# Skip review when requested.
|
||||
if not instance.data.get("review", True):
|
||||
return
|
||||
|
||||
|
|
@@ -634,6 +634,26 @@ class ExtractReview(pyblish.api.InstancePlugin):
        input_width = int(input_data["width"])
        input_height = int(input_data["height"])

+       # Make sure input width and height are not odd numbers
+       input_width_is_odd = bool(input_width % 2 != 0)
+       input_height_is_odd = bool(input_height % 2 != 0)
+       if input_width_is_odd or input_height_is_odd:
+           # Add padding to input and make sure this filter is at first place
+           filters.append("pad=width=ceil(iw/2)*2:height=ceil(ih/2)*2")
+
+           # Change input width or height as first filter will change them
+           if input_width_is_odd:
+               self.log.info((
+                   "Converting input width from odd to even number. {} -> {}"
+               ).format(input_width, input_width + 1))
+               input_width += 1
+
+           if input_height_is_odd:
+               self.log.info((
+                   "Converting input height from odd to even number. {} -> {}"
+               ).format(input_height, input_height + 1))
+               input_height += 1
+
        self.log.debug("pixel_aspect: `{}`".format(pixel_aspect))
        self.log.debug("input_width: `{}`".format(input_width))
        self.log.debug("input_height: `{}`".format(input_height))
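The padding guard exists because common encoder setups (libx264 with yuv420p, for instance) reject odd frame sizes, and the `pad` filter must run before any scaling so later filters see even dimensions. A runnable sketch of the rounding logic with illustrative numbers:

```python
input_width, input_height = 1919, 1080
filters = []

if input_width % 2 != 0 or input_height % 2 != 0:
    # pad must be the first filter in the chain
    filters.append("pad=width=ceil(iw/2)*2:height=ceil(ih/2)*2")
    # keep the tracked numbers in sync with what pad will output
    input_width += input_width % 2
    input_height += input_height % 2

print(filters, input_width, input_height)
# -> ['pad=width=ceil(iw/2)*2:height=ceil(ih/2)*2'] 1920 1080
```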
@@ -655,6 +675,22 @@ class ExtractReview(pyblish.api.InstancePlugin):
        output_width = int(output_width)
        output_height = int(output_height)

+       # Make sure output width and height are not odd numbers
+       # When this can happen:
+       # - if output definition has set width and height with odd number
+       # - `instance.data` contain width and height with odd number
+       if output_width % 2 != 0:
+           self.log.warning((
+               "Converting output width from odd to even number. {} -> {}"
+           ).format(output_width, output_width + 1))
+           output_width += 1
+
+       if output_height % 2 != 0:
+           self.log.warning((
+               "Converting output height from odd to even number. {} -> {}"
+           ).format(output_height, output_height + 1))
+           output_height += 1
+
        self.log.debug(
            "Output resolution is {}x{}".format(output_width, output_height)
        )
@@ -6,6 +6,8 @@ import copy
import clique
import errno
import six
+import re
+import shutil

from pymongo import DeleteOne, InsertOne
import pyblish.api
@@ -680,6 +682,14 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                instance.data.get('subsetGroup')}}
            )

+           # Update families on subset.
+           families = [instance.data["family"]]
+           families.extend(instance.data.get("families", []))
+           io.update_many(
+               {"type": "subset", "_id": io.ObjectId(subset["_id"])},
+               {"$set": {"data.families": families}}
+           )
+
        return subset

    def create_version(self, subset, version_number, data=None):
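The added block stamps the combined family list onto the subset document with a `$set` update. A sketch of the documents it sends; filter values here are illustrative and the real call goes through avalon's `io.update_many` wrapper:

```python
# main family first, then any extra families collected on the instance
families = ["render"] + ["review", "ftrack"]
update_filter = {"type": "subset", "name": "renderMain"}
update_doc = {"$set": {"data.families": families}}
print(update_filter, update_doc)
```

Using `$set` means each publish overwrites the full list, so stale families from earlier publishes do not accumulate on the subset.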
@@ -952,21 +962,37 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        """
        if integrated_file_sizes:
            for file_url, _file_size in integrated_file_sizes.items():
+               if not os.path.exists(file_url):
+                   self.log.debug(
+                       "File {} was not found.".format(file_url)
+                   )
+                   continue
+
                try:
                    if mode == 'remove':
-                       self.log.debug("Removing file ...{}".format(file_url))
+                       self.log.debug("Removing file {}".format(file_url))
                        os.remove(file_url)
                    if mode == 'finalize':
-                       self.log.debug("Renaming file ...{}".format(file_url))
-                       import re
-                       os.rename(file_url,
-                                 re.sub('\.{}$'.format(self.TMP_FILE_EXT),
-                                        '',
-                                        file_url)
-                                 )
-               except FileNotFoundError:
-                   pass  # file not there, nothing to delete
+                       new_name = re.sub(
+                           r'\.{}$'.format(self.TMP_FILE_EXT),
+                           '',
+                           file_url
+                       )
+
+                       if os.path.exists(new_name):
+                           self.log.debug(
+                               "Overwriting file {} to {}".format(
+                                   file_url, new_name
+                               )
+                           )
+                           shutil.copy(file_url, new_name)
+                       else:
+                           self.log.debug(
+                               "Renaming file {} to {}".format(
+                                   file_url, new_name
+                               )
+                           )
+                           os.rename(file_url, new_name)
+               except OSError:
+                   self.log.error("Cannot {} file {}".format(mode, file_url),
+                                  exc_info=True)
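The rewritten `finalize` path strips the temporary extension and, when the target already exists, copies over it instead of failing on the rename. A hedged sketch of that step; `TMP_FILE_EXT` stands in for the plugin's class attribute and the path is illustrative:

```python
import os
import re
import shutil

TMP_FILE_EXT = "tmp"

def finalize(file_url):
    """Drop the trailing .tmp extension, overwriting an existing target."""
    new_name = re.sub(r"\.{}$".format(TMP_FILE_EXT), "", file_url)
    if os.path.exists(new_name):
        shutil.copy(file_url, new_name)   # overwrite, keep the .tmp source
    else:
        os.rename(file_url, new_name)     # plain rename
    return new_name

# finalize("v001/render.0001.exr.tmp") -> "v001/render.0001.exr"
```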
@@ -174,7 +174,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
            "FTRACK_SERVER",
            "PYPE_METADATA_FILE",
            "AVALON_PROJECT",
-           "PYPE_LOG_NO_COLORS"
+           "PYPE_LOG_NO_COLORS",
+           "PYPE_USERNAME"
        ]

        # custom deadline attributes
@@ -278,26 +279,14 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
            # Mandatory for Deadline, may be empty
            "AuxFiles": [],
        }
-       """
-       In this part we will add file dependencies instead of job dependencies.
-       This way we don't need to take care of tile assembly job, getting its
-       id or name. We expect it to produce specific file with specific name
-       and we are just waiting for them.
-       """

+       # add assembly jobs as dependencies
        if instance.data.get("tileRendering"):
-           self.log.info("Adding tile assembly results as dependencies...")
-           asset_index = 0
-           for inst in instances:
-               for represenation in inst.get("representations", []):
-                   if isinstance(represenation["files"], (list, tuple)):
-                       for file in represenation["files"]:
-                           dependency = os.path.join(output_dir, file)
-                           payload["JobInfo"]["AssetDependency{}".format(asset_index)] = dependency  # noqa: E501
-                   else:
-                       dependency = os.path.join(
-                           output_dir, represenation["files"])
-                       payload["JobInfo"]["AssetDependency{}".format(asset_index)] = dependency  # noqa: E501
-                   asset_index += 1
+           self.log.info("Adding tile assembly jobs as dependencies...")
+           job_index = 0
+           for assembly_id in instance.data.get("assemblySubmissionJobs"):
+               payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id  # noqa: E501
+               job_index += 1
        else:
            payload["JobInfo"]["JobDependency0"] = job["_id"]
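The change replaces the per-file `AssetDependencyN` entries with Deadline `JobDependencyN` entries pointing at the tile-assembly jobs, so the publish job simply waits for those jobs instead of watching for output files. A minimal sketch of how the payload is built (job ids are illustrative):

```python
payload = {"JobInfo": {}}
assembly_jobs = ["assembly-job-001", "assembly-job-002"]  # illustrative ids

# one JobDependencyN key per assembly job the publish must wait for
for job_index, assembly_id in enumerate(assembly_jobs):
    payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id

print(payload["JobInfo"])
# -> {'JobDependency0': 'assembly-job-001', 'JobDependency1': 'assembly-job-002'}
```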
@@ -309,6 +298,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
        environment["PYPE_METADATA_FILE"] = roothless_metadata_path
        environment["AVALON_PROJECT"] = io.Session["AVALON_PROJECT"]
        environment["PYPE_LOG_NO_COLORS"] = "1"
+       environment["PYPE_USERNAME"] = instance.context.data["user"]
        try:
            environment["PYPE_PYTHON_EXE"] = os.environ["PYPE_PYTHON_EXE"]
        except KeyError:
@@ -440,7 +430,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
                    "to render, don't know what to do "
                    "with them.")
            col = rem[0]
-           _, ext = os.path.splitext(col)
+           ext = os.path.splitext(col)[1].lstrip(".")
        else:
            # but we really expect only one collection.
            # Nothing else makes sense.
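Worth noting why this line changed: `os.path.splitext` keeps the leading dot, so the old tuple unpacking produced `.exr` where a bare `exr` was expected, hence the added `lstrip(".")`:

```python
import os

col = "render.0001.exr"  # illustrative file name
ext = os.path.splitext(col)[1].lstrip(".")
print(ext)  # -> "exr" (the old form yielded ".exr")
```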
@@ -729,7 +719,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
            "pixelAspect": data.get("pixelAspect", 1),
            "resolutionWidth": data.get("resolutionWidth", 1920),
            "resolutionHeight": data.get("resolutionHeight", 1080),
-           "multipartExr": data.get("multipartExr", False)
+           "multipartExr": data.get("multipartExr", False),
+           "jobBatchName": data.get("jobBatchName", ""),
+           "review": data.get("review", True)
        }

        if "prerender" in instance.data["families"]:
@@ -906,8 +898,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
        # We still use data from it so let's fake it.
        #
        # Batch name reflects original scene name
-       render_job["Props"]["Batch"] = os.path.splitext(os.path.basename(
-           context.data.get("currentFile")))[0]
+       if instance.data.get("assemblySubmissionJobs"):
+           render_job["Props"]["Batch"] = instance.data.get(
+               "jobBatchName")
+       else:
+           render_job["Props"]["Batch"] = os.path.splitext(
+               os.path.basename(context.data.get("currentFile")))[0]
+
        # User is deadline user
        render_job["Props"]["User"] = context.data.get(
            "deadlineUser", getpass.getuser())
31
pype/plugins/global/publish/validate_intent.py
Normal file
@@ -0,0 +1,31 @@
+import pyblish.api
+import os
+
+
+class ValidateIntent(pyblish.api.ContextPlugin):
+    """Validate intent of the publish.
+
+    It is required to fill the intent of this publish. Check the log
+    for more details.
+    """
+
+    order = pyblish.api.ValidatorOrder
+
+    label = "Validate Intent"
+    # TODO: this should be off by default and only activated via config
+    tasks = ["animation"]
+    hosts = ["harmony"]
+    if os.environ.get("AVALON_TASK") not in tasks:
+        active = False
+
+    def process(self, context):
+        msg = (
+            "Please make sure that you select the intent of this publish."
+        )
+
+        intent = context.data.get("intent")
+        self.log.debug(intent)
+        assert intent, msg
+
+        intent_value = intent.get("value")
+        assert intent_value != "", msg
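A small sketch of what this validator expects to find in `context.data`; the `label`/`value` shape of the intent dict follows how the publisher's intent selector stores it, which is an assumption here:

```python
import pyblish.api

context = pyblish.api.Context()
# what a filled-in intent may look like; values are illustrative
context.data["intent"] = {"label": "WIP", "value": "wip"}

intent = context.data.get("intent")
assert intent, "Please make sure that you select the intent of this publish."
assert intent.get("value") != "", "Intent value must not be empty."
```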
@@ -10,7 +10,7 @@ class ValidateVersion(pyblish.api.InstancePlugin):
    order = pyblish.api.ValidatorOrder

    label = "Validate Version"
-   hosts = ["nuke", "maya", "blender"]
+   hosts = ["nuke", "maya", "blender", "standalonepublisher"]

    def process(self, instance):
        version = instance.data.get("version")
@@ -13,13 +13,14 @@ class CreateRender(harmony.Creator):
        super(CreateRender, self).__init__(*args, **kwargs)

    def setup_node(self, node):
-       func = """function func(args)
+       sig = harmony.signature()
+       func = """function %s(args)
        {
            node.setTextAttr(args[0], "DRAWING_TYPE", 1, "PNG4");
            node.setTextAttr(args[0], "DRAWING_NAME", 1, args[1]);
            node.setTextAttr(args[0], "MOVIE_PATH", 1, args[1]);
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        path = "{0}/{0}".format(node.split("/")[-1])
        harmony.send({"function": func, "args": [node, path]})
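This `harmony.signature()` refactor recurs through the rest of the diff: every JavaScript snippet previously declared a function literally named `func`, so concurrently evaluated snippets could collide inside Harmony's script scope; now each snippet gets a unique name, which is also left as the evaluated expression. A hedged sketch of the idea, with a stand-in `signature()` (the real helper lives in `avalon.harmony`):

```python
import uuid

def signature():
    # Stand-in for avalon.harmony.signature(); any collision-free name works.
    return "pype_func_{}".format(uuid.uuid4().hex)

sig = signature()
func = """function %s(args)
{
    // body is evaluated inside Harmony
}
%s
""" % (sig, sig)
# The trailing "%s" leaves the uniquely named function as the value of the
# evaluated script, just as the literal "func" line did before the change.
print(func)
```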
@@ -1,6 +1,6 @@
from avalon import api, harmony

-
+sig = harmony.signature()
func = """
function getUniqueColumnName( column_prefix )
{
@@ -18,20 +18,20 @@ function getUniqueColumnName( column_prefix )
    return column_name;
}

-function func(args)
+function %s(args)
{
    var uniqueColumnName = getUniqueColumnName(args[0]);
    column.add(uniqueColumnName , "SOUND");
    column.importSound(uniqueColumnName, 1, args[1]);
}
-func
-"""
+%s
+""" % (sig, sig)


class ImportAudioLoader(api.Loader):
    """Import audio."""

-   families = ["shot"]
+   families = ["shot", "audio"]
    representations = ["wav"]
    label = "Import Audio"
@@ -1,11 +1,9 @@
import os
-import uuid

import clique
+import json

from avalon import api, harmony
import pype.lib
-import json


copy_files = """function copyFile(srcFilename, dstFilename)
{
@@ -256,7 +254,9 @@ class BackgroundLoader(api.Loader):
        container_nodes = []

        for layer in sorted(layers):
-           file_to_import = [os.path.join(bg_folder, layer).replace("\\", "/")]
+           file_to_import = [
+               os.path.join(bg_folder, layer).replace("\\", "/")
+           ]

            read_node = harmony.send(
                {
@@ -301,8 +301,10 @@ class BackgroundLoader(api.Loader):
        print(container)

        for layer in sorted(layers):
-           file_to_import = [os.path.join(bg_folder, layer).replace("\\", "/")]
-           print(20*"#")
+           file_to_import = [
+               os.path.join(bg_folder, layer).replace("\\", "/")
+           ]
+           print(20 * "#")
            print(f"FILE TO REPLACE: {file_to_import}")
            print(f"LAYER: {layer}")
            node = harmony.find_node_by_name(layer, "READ")
@@ -324,9 +326,9 @@ class BackgroundLoader(api.Loader):
            )["result"]
            container['nodes'].append(read_node)

        # Colour node.
-       func = """function func(args){
+       sig = harmony.signature("set_color")
+       func = """function %s(args){
            for( var i =0; i <= args[0].length - 1; ++i)
            {
                var red_color = new ColorRGBA(255, 0, 0, 255);
@@ -339,8 +341,8 @@ class BackgroundLoader(api.Loader):
                }
            }
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        if pype.lib.is_latest(representation):
            harmony.send({"function": func, "args": [node, "green"]})
        else:
@@ -230,7 +230,7 @@ class ImageSequenceLoader(api.Loader):
    """Load images

    Stores the imported asset in a container named after the asset.
    """
-   families = ["shot", "render", "image", "plate"]
+   families = ["shot", "render", "image", "plate", "reference"]
    representations = ["jpeg", "png", "jpg"]

    def load(self, context, name=None, namespace=None, data=None):
@@ -301,7 +301,8 @@ class ImageSequenceLoader(api.Loader):
        )

        # Colour node.
-       func = """function func(args){
+       sig = harmony.signature("copyFile")
+       func = """function %s(args){
            for( var i =0; i <= args[0].length - 1; ++i)
            {
                var red_color = new ColorRGBA(255, 0, 0, 255);
@@ -314,8 +315,8 @@ class ImageSequenceLoader(api.Loader):
                }
            }
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        if pype.lib.is_latest(representation):
            harmony.send({"function": func, "args": [node, "green"]})
        else:
@@ -21,15 +21,16 @@ class ImportTemplateLoader(api.Loader):
        with zipfile.ZipFile(zip_file, "r") as zip_ref:
            zip_ref.extractall(template_path)

-       func = """function func(args)
+       sig = harmony.signature("paste")
+       func = """function %s(args)
        {
            var template_path = args[0];
            var drag_object = copyPaste.pasteTemplateIntoGroup(
                template_path, "Top", 1
            );
        }
-       func
-       """
+       %s
+       """ % (sig, sig)

        harmony.send({"function": func, "args": [template_path]})
@@ -13,15 +13,16 @@ class CollectCurrentFile(pyblish.api.ContextPlugin):

    def process(self, context):
        """Inject the current working file"""
-       func = """function func()
+       sig = harmony.signature()
+       func = """function %s()
        {
            return (
                scene.currentProjectPath() + "/" +
                scene.currentVersionName() + ".xstage"
            );
        }
-       func
-       """
+       %s
+       """ % (sig, sig)

        current_file = harmony.send({"function": func})["result"]
        context.data["currentFile"] = os.path.normpath(current_file)
@@ -13,7 +13,8 @@ class CollectPalettes(pyblish.api.ContextPlugin):
    hosts = ["harmony"]

    def process(self, context):
-       func = """function func()
+       sig = harmony.signature()
+       func = """function %s()
        {
            var palette_list = PaletteObjectManager.getScenePaletteList();
@@ -26,8 +27,8 @@ class CollectPalettes(pyblish.api.ContextPlugin):

            return palettes;
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        palettes = harmony.send({"function": func})["result"]

        for name, id in palettes.items():
@@ -13,14 +13,15 @@ class ExtractPalette(pype.api.Extractor):
    families = ["harmony.palette"]

    def process(self, instance):
-       func = """function func(args)
+       sig = harmony.signature()
+       func = """function %s(args)
        {
            var palette_list = PaletteObjectManager.getScenePaletteList();
            var palette = palette_list.getPaletteById(args[0]);
            return (palette.getPath() + "/" + palette.getName() + ".plt");
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        palette_file = harmony.send(
            {"function": func, "args": [instance.data["id"]]}
        )["result"]
@@ -4,6 +4,7 @@ import subprocess

import pyblish.api
from avalon import harmony
+import pype.lib

import clique
@@ -20,7 +21,8 @@ class ExtractRender(pyblish.api.InstancePlugin):

    def process(self, instance):
        # Collect scene data.
-       func = """function func(write_node)
+       sig = harmony.signature()
+       func = """function %s(write_node)
        {
            return [
                about.getApplicationPath(),
@@ -32,8 +34,8 @@ class ExtractRender(pyblish.api.InstancePlugin):
                sound.getSoundtrackAll().path()
            ]
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        result = harmony.send(
            {"function": func, "args": [instance[0]]}
        )["result"]
@@ -44,14 +46,17 @@ class ExtractRender(pyblish.api.InstancePlugin):
        frame_end = result[5]
        audio_path = result[6]

+       instance.data["fps"] = frame_rate
+
        # Set output path to temp folder.
        path = tempfile.mkdtemp()
-       func = """function func(args)
+       sig = harmony.signature()
+       func = """function %s(args)
        {
            node.setTextAttr(args[0], "DRAWING_NAME", 1, args[1]);
        }
-       func
-       """
+       %s
+       """ % (sig, sig)
        result = harmony.send(
            {
                "function": func,
@@ -85,19 +90,15 @@ class ExtractRender(pyblish.api.InstancePlugin):
        if len(collections) > 1:
            for col in collections:
-               if len(list(col)) > 1:
-                   collection = col
+               collection = col
        else:
-           # assert len(collections) == 1, (
-           #     "There should only be one image sequence in {}. Found: {}".format(
-           #         path, len(collections)
-           #     )
-           # )
            collection = collections[0]

        # Generate thumbnail.
        thumbnail_path = os.path.join(path, "thumbnail.png")
+       ffmpeg_path = pype.lib.get_ffmpeg_tool_path("ffmpeg")
        args = [
-           "ffmpeg", "-y",
+           ffmpeg_path, "-y",
            "-i", os.path.join(path, list(collections[0])[0]),
            "-vf", "scale=300:-1",
            "-vframes", "1",
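After this change the extractor resolves the bundled ffmpeg through `pype.lib.get_ffmpeg_tool_path` instead of assuming `ffmpeg` is on `PATH`. An illustrative thumbnail call; the binary location and file paths are made up:

```python
import subprocess

# what get_ffmpeg_tool_path may return; path is illustrative
ffmpeg_path = "/opt/pype/vendor/bin/ffmpeg/ffmpeg"
args = [
    ffmpeg_path, "-y",
    "-i", "/tmp/render/frame.0001.png",
    "-vf", "scale=300:-1",   # 300 px wide, height follows the aspect ratio
    "-vframes", "1",
    "/tmp/render/thumbnail.png",
]
subprocess.call(args)  # non-zero if ffmpeg or the input frame is missing
```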
@@ -117,57 +118,17 @@ class ExtractRender(pyblish.api.InstancePlugin):

        self.log.debug(output.decode("utf-8"))

-       # Generate mov.
-       mov_path = os.path.join(path, instance.data["name"] + ".mov")
-       if os.path.isfile(audio_path):
-           args = [
-               "ffmpeg", "-y",
-               "-i", audio_path,
-               "-i",
-               os.path.join(path, collection.head + "%04d" + collection.tail),
-               mov_path
-           ]
-       else:
-           args = [
-               "ffmpeg", "-y",
-               "-i",
-               os.path.join(path, collection.head + "%04d" + collection.tail),
-               mov_path
-           ]
-
-       process = subprocess.Popen(
-           args,
-           stdout=subprocess.PIPE,
-           stderr=subprocess.STDOUT,
-           stdin=subprocess.PIPE
-       )
-
-       output = process.communicate()[0]
-
-       if process.returncode != 0:
-           raise ValueError(output.decode("utf-8"))
-
-       self.log.debug(output.decode("utf-8"))
-
        # Generate representations.
        extension = collection.tail[1:]
        representation = {
            "name": extension,
            "ext": extension,
            "files": list(collection),
-           "stagingDir": path
-       }
-       movie = {
-           "name": "mov",
-           "ext": "mov",
-           "files": os.path.basename(mov_path),
-           "stagingDir": path,
-           "frameStart": frame_start,
-           "frameEnd": frame_end,
-           "fps": frame_rate,
-           "preview": True,
-           "tags": ["review", "ftrackreview"]
+           "stagingDir": path,
+           "tags": ["review"],
+           "fps": frame_rate
        }

        thumbnail = {
            "name": "thumbnail",
            "ext": "png",
@@ -175,7 +136,10 @@ class ExtractRender(pyblish.api.InstancePlugin):
            "stagingDir": path,
            "tags": ["thumbnail"]
        }
-       instance.data["representations"] = [representation, movie, thumbnail]
+       instance.data["representations"] = [representation, thumbnail]
+
+       if audio_path and os.path.exists(audio_path):
+           instance.data["audio"] = [{"filename": audio_path}]

        # Required for extract_review plugin (L222 onwards).
        instance.data["frameStart"] = frame_start
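The extractor now ships only the image sequence and the thumbnail; the `.mov` review is left to the review extractor, and audio travels separately in `instance.data["audio"]`. A sketch of the resulting instance data with illustrative values:

```python
representation = {
    "name": "png",
    "ext": "png",
    "files": ["frame.0001.png", "frame.0002.png"],
    "stagingDir": "/tmp/render",
    "tags": ["review"],      # picked up downstream for review generation
    "fps": 24,
}
thumbnail = {
    "name": "thumbnail",
    "ext": "png",
    "files": "thumbnail.png",
    "stagingDir": "/tmp/render",
    "tags": ["thumbnail"],
}
instance_data = {
    "representations": [representation, thumbnail],
    "audio": [{"filename": "/projects/show/scene.wav"}],  # illustrative path
}
print(instance_data["representations"][0]["name"])  # -> png
```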
@@ -2,7 +2,7 @@ import os
import shutil

import pype.api
-import avalon.harmony
+from avalon import harmony
import pype.hosts.harmony
@@ -30,7 +30,7 @@ class ExtractTemplate(pype.api.Extractor):
        unique_backdrops = [backdrops[x] for x in set(backdrops.keys())]

        # Get non-connected nodes within backdrops.
-       all_nodes = avalon.harmony.send(
+       all_nodes = harmony.send(
            {"function": "node.subNodes", "args": ["Top"]}
        )["result"]
        for node in [x for x in all_nodes if x not in dependencies]:
@@ -66,7 +66,8 @@ class ExtractTemplate(pype.api.Extractor):
        instance.data["representations"] = [representation]

    def get_backdrops(self, node):
-       func = """function func(probe_node)
+       sig = harmony.signature()
+       func = """function %s(probe_node)
        {
            var backdrops = Backdrop.backdrops("Top");
            var valid_backdrops = [];
@@ -92,14 +93,15 @@ class ExtractTemplate(pype.api.Extractor):
            }
            return valid_backdrops;
        }
-       func
-       """
-       return avalon.harmony.send(
+       %s
+       """ % (sig, sig)
+       return harmony.send(
            {"function": func, "args": [node]}
        )["result"]

    def get_dependencies(self, node, dependencies):
-       func = """function func(args)
+       sig = harmony.signature()
+       func = """function %s(args)
        {
            var target_node = args[0];
            var numInput = node.numberOfInputPorts(target_node);
@@ -110,10 +112,10 @@ class ExtractTemplate(pype.api.Extractor):
            }
            return dependencies;
        }
-       func
-       """
+       %s
+       """ % (sig, sig)

-       current_dependencies = avalon.harmony.send(
+       current_dependencies = harmony.send(
            {"function": func, "args": [node]}
        )["result"]
Some files were not shown because too many files have changed in this diff.