Merge branch 'develop' into 3.0/refactoring

This commit is contained in:
iLLiCiTiT 2020-12-04 12:58:45 +01:00
commit 443cb5c795
218 changed files with 9058 additions and 6191 deletions

View file

@ -2,7 +2,8 @@ pr-wo-labels=False
exclude-labels=duplicate,question,invalid,wontfix,weekly-digest
author=False
unreleased=True
since-tag=2.11.0
since-tag=2.13.6
release-branch=master
enhancement-label=**Enhancements:**
issues=False
pulls=False

2
.gitignore vendored
View file

@ -15,6 +15,8 @@ __pycache__/
Icon
# Thumbnails
._*
# rope project dir
.ropeproject
# Files that might appear in the root of a volume
.DocumentRevisions-V100
.fseventsd

View file

@ -1,5 +1,48 @@
# Changelog
## [2.14.0](https://github.com/pypeclub/pype/tree/2.14.0) (2020-11-24)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.7...2.14.0)
**Enhancements:**
- Shot asset build trigger status [\#736](https://github.com/pypeclub/pype/pull/736)
- Maya: add camera rig publishing option [\#721](https://github.com/pypeclub/pype/pull/721)
- Sort instances by label in pyblish gui [\#719](https://github.com/pypeclub/pype/pull/719)
- Synchronize ftrack hierarchical and shot attributes [\#716](https://github.com/pypeclub/pype/pull/716)
- 686 standalonepublisher editorial from image sequences [\#699](https://github.com/pypeclub/pype/pull/699)
- TV Paint: initial implementation of creators and local rendering [\#693](https://github.com/pypeclub/pype/pull/693)
- Render publish plugins abstraction [\#687](https://github.com/pypeclub/pype/pull/687)
- Ask user to select non-default camera from scene or create a new. [\#678](https://github.com/pypeclub/pype/pull/678)
- TVPaint: image loader with options [\#675](https://github.com/pypeclub/pype/pull/675)
- Maya: Camera name can be added to burnins. [\#674](https://github.com/pypeclub/pype/pull/674)
- After Effects: base integration with loaders [\#667](https://github.com/pypeclub/pype/pull/667)
- Harmony: Javascript refactoring and overall stability improvements [\#666](https://github.com/pypeclub/pype/pull/666)
**Fixed bugs:**
- TVPaint extract review fix [\#740](https://github.com/pypeclub/pype/pull/740)
- After Effects: Review were not being sent to ftrack [\#738](https://github.com/pypeclub/pype/pull/738)
- Asset fetch second fix [\#726](https://github.com/pypeclub/pype/pull/726)
- Maya: vray proxy was not loading [\#722](https://github.com/pypeclub/pype/pull/722)
- Maya: Vray expected file fixes [\#682](https://github.com/pypeclub/pype/pull/682)
**Deprecated:**
- Removed artist view from pyblish gui [\#717](https://github.com/pypeclub/pype/pull/717)
- Maya: disable legacy override check for cameras [\#715](https://github.com/pypeclub/pype/pull/715)
## [2.13.7](https://github.com/pypeclub/pype/tree/2.13.7) (2020-11-19)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.6...2.13.7)
**Merged pull requests:**
- fix\(SP\): getting fps from context instead of nonexistent entity [\#729](https://github.com/pypeclub/pype/pull/729)
# Changelog
## [2.13.6](https://github.com/pypeclub/pype/tree/2.13.6) (2020-11-15)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.5...2.13.6)
@ -789,4 +832,7 @@ A large cleanup release. Most of the changes are under the hood.
- _(avalon)_ subsets in maya 2019 weren't behaving correctly in the outliner
\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*
\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*

View file

@ -1,3 +1,360 @@
# Changelog
## [2.13.6](https://github.com/pypeclub/pype/tree/2.13.6) (2020-11-15)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.5...2.13.6)
**Fixed bugs:**
- Maya workfile version wasn't syncing with renders properly [\#711](https://github.com/pypeclub/pype/pull/711)
- Maya: Fix for publishing multiple cameras with review from the same scene [\#710](https://github.com/pypeclub/pype/pull/710)
## [2.13.5](https://github.com/pypeclub/pype/tree/2.13.5) (2020-11-12)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.4...2.13.5)
**Enhancements:**
- 3.0 lib refactor [\#664](https://github.com/pypeclub/pype/issues/664)
**Fixed bugs:**
- Wrong thumbnail file was picked when publishing sequence in standalone publisher [\#703](https://github.com/pypeclub/pype/pull/703)
- Fix: Burnin data pass and FFmpeg tool check [\#701](https://github.com/pypeclub/pype/pull/701)
## [2.13.4](https://github.com/pypeclub/pype/tree/2.13.4) (2020-11-09)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.3...2.13.4)
**Enhancements:**
- AfterEffects integration with Websocket [\#663](https://github.com/pypeclub/pype/issues/663)
**Fixed bugs:**
- Photoshop unhiding hidden layers [\#688](https://github.com/pypeclub/pype/issues/688)
- \#688 - Fix publishing hidden layers [\#692](https://github.com/pypeclub/pype/pull/692)
**Closed issues:**
- Nuke Favorite directories "shot dir" "project dir" - not working [\#684](https://github.com/pypeclub/pype/issues/684)
**Merged pull requests:**
- Nuke Favorite directories "shot dir" "project dir" - not working \#684 [\#685](https://github.com/pypeclub/pype/pull/685)
## [2.13.3](https://github.com/pypeclub/pype/tree/2.13.3) (2020-11-03)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.2...2.13.3)
**Enhancements:**
- TV paint base integration [\#612](https://github.com/pypeclub/pype/issues/612)
**Fixed bugs:**
- Fix ffmpeg executable path with spaces [\#680](https://github.com/pypeclub/pype/pull/680)
- Hotfix: Added default version number [\#679](https://github.com/pypeclub/pype/pull/679)
## [2.13.2](https://github.com/pypeclub/pype/tree/2.13.2) (2020-10-28)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.1...2.13.2)
**Fixed bugs:**
- Nuke: wrong conditions when fixing legacy write nodes [\#665](https://github.com/pypeclub/pype/pull/665)
## [2.13.1](https://github.com/pypeclub/pype/tree/2.13.1) (2020-10-23)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.0...2.13.1)
**Enhancements:**
- move maya look assigner to pype menu [\#292](https://github.com/pypeclub/pype/issues/292)
**Fixed bugs:**
- Layer name is not propagating to metadata in Photoshop [\#654](https://github.com/pypeclub/pype/issues/654)
- Loader in Photoshop fails with "can't set attribute" [\#650](https://github.com/pypeclub/pype/issues/650)
- Nuke Load mp4 wrong frame range [\#661](https://github.com/pypeclub/pype/issues/661)
- Hiero: Review video file adding one frame to the end [\#659](https://github.com/pypeclub/pype/issues/659)
## [2.13.0](https://github.com/pypeclub/pype/tree/2.13.0) (2020-10-18)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.5...2.13.0)
**Enhancements:**
- Deadline Output Folder [\#636](https://github.com/pypeclub/pype/issues/636)
- Nuke Camera Loader [\#565](https://github.com/pypeclub/pype/issues/565)
- Deadline publish job shows publishing output folder [\#649](https://github.com/pypeclub/pype/pull/649)
- Get latest version in lib [\#642](https://github.com/pypeclub/pype/pull/642)
- Improved publishing of multiple representation from SP [\#638](https://github.com/pypeclub/pype/pull/638)
- Launch TvPaint shot work file from within Ftrack [\#631](https://github.com/pypeclub/pype/pull/631)
- Add mp4 support for RV action. [\#628](https://github.com/pypeclub/pype/pull/628)
- Maya: allow renders to have version synced with workfile [\#618](https://github.com/pypeclub/pype/pull/618)
- Renaming nukestudio host folder to hiero [\#617](https://github.com/pypeclub/pype/pull/617)
- Harmony: More efficient publishing [\#615](https://github.com/pypeclub/pype/pull/615)
- Ftrack server action improvement [\#608](https://github.com/pypeclub/pype/pull/608)
- Deadline user defaults to pype username if present [\#607](https://github.com/pypeclub/pype/pull/607)
- Standalone publisher now has icon [\#606](https://github.com/pypeclub/pype/pull/606)
- Nuke render write targeting knob improvement [\#603](https://github.com/pypeclub/pype/pull/603)
- Animated pyblish gui [\#602](https://github.com/pypeclub/pype/pull/602)
- Maya: Deadline - make use of asset dependencies optional [\#591](https://github.com/pypeclub/pype/pull/591)
- Nuke: Publishing, loading and updating alembic cameras [\#575](https://github.com/pypeclub/pype/pull/575)
- Maya: add look assigner to pype menu even if scriptsmenu is not available [\#573](https://github.com/pypeclub/pype/pull/573)
- Store task types in the database [\#572](https://github.com/pypeclub/pype/pull/572)
- Maya: Tiled EXRs to scanline EXRs render option [\#512](https://github.com/pypeclub/pype/pull/512)
- Fusion basic integration [\#452](https://github.com/pypeclub/pype/pull/452)
**Fixed bugs:**
- Burnin script did not propagate ffmpeg output [\#640](https://github.com/pypeclub/pype/issues/640)
- Pyblish-pype spacer in terminal wasn't transparent [\#646](https://github.com/pypeclub/pype/pull/646)
- Lib subprocess without logger [\#645](https://github.com/pypeclub/pype/pull/645)
- Nuke: prevent crash if we only have single frame in sequence [\#644](https://github.com/pypeclub/pype/pull/644)
- Burnin script logs better output [\#641](https://github.com/pypeclub/pype/pull/641)
- Missing audio on farm submission. [\#639](https://github.com/pypeclub/pype/pull/639)
- review from imagesequence error [\#633](https://github.com/pypeclub/pype/pull/633)
- Hiero: wrong order of fps clip instance data collecting [\#627](https://github.com/pypeclub/pype/pull/627)
- Add source for review instances. [\#625](https://github.com/pypeclub/pype/pull/625)
- Task processing in event sync [\#623](https://github.com/pypeclub/pype/pull/623)
- sync to avalon doesn't remove renamed task [\#619](https://github.com/pypeclub/pype/pull/619)
- Intent publish setting wasn't working with default value [\#562](https://github.com/pypeclub/pype/pull/562)
- Maya: Updating a look where the shader name changed, leaves the geo without a shader [\#514](https://github.com/pypeclub/pype/pull/514)
**Merged pull requests:**
- Avalon module without Qt [\#581](https://github.com/pypeclub/pype/pull/581)
- Ftrack module without Qt [\#577](https://github.com/pypeclub/pype/pull/577)
## [2.12.5](https://github.com/pypeclub/pype/tree/2.12.5) (2020-10-14)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.4...2.12.5)
**Enhancements:**
- Launch TvPaint shot work file from within Ftrack [\#629](https://github.com/pypeclub/pype/issues/629)
**Merged pull requests:**
- Harmony: Disable application launch logic [\#637](https://github.com/pypeclub/pype/pull/637)
## [2.12.4](https://github.com/pypeclub/pype/tree/2.12.4) (2020-10-08)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.3...2.12.4)
**Enhancements:**
- convert nukestudio to hiero host [\#616](https://github.com/pypeclub/pype/issues/616)
- Fusion basic integration [\#451](https://github.com/pypeclub/pype/issues/451)
**Fixed bugs:**
- Sync to avalon doesn't remove renamed task [\#605](https://github.com/pypeclub/pype/issues/605)
- NukeStudio: FPS collecting into clip instances [\#624](https://github.com/pypeclub/pype/pull/624)
**Merged pull requests:**
- NukeStudio: small fixes [\#622](https://github.com/pypeclub/pype/pull/622)
- NukeStudio: broken order of plugins [\#620](https://github.com/pypeclub/pype/pull/620)
## [2.12.3](https://github.com/pypeclub/pype/tree/2.12.3) (2020-10-06)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.2...2.12.3)
**Enhancements:**
- Nuke Publish Camera [\#567](https://github.com/pypeclub/pype/issues/567)
- Harmony: open xstage file no matter of its name [\#526](https://github.com/pypeclub/pype/issues/526)
- Stop integration of unwanted data [\#387](https://github.com/pypeclub/pype/issues/387)
- Move avalon-launcher functionality to pype [\#229](https://github.com/pypeclub/pype/issues/229)
- avalon workfiles api [\#214](https://github.com/pypeclub/pype/issues/214)
- Store task types [\#180](https://github.com/pypeclub/pype/issues/180)
- Avalon Mongo Connection split [\#136](https://github.com/pypeclub/pype/issues/136)
- nk camera workflow [\#71](https://github.com/pypeclub/pype/issues/71)
- Hiero integration added [\#590](https://github.com/pypeclub/pype/pull/590)
- Anatomy instance data collection is substantially faster for many instances [\#560](https://github.com/pypeclub/pype/pull/560)
**Fixed bugs:**
- test issue [\#596](https://github.com/pypeclub/pype/issues/596)
- Harmony: empty scene contamination [\#583](https://github.com/pypeclub/pype/issues/583)
- Edit publishing in SP doesn't respect shot selection for publishing [\#542](https://github.com/pypeclub/pype/issues/542)
- Pathlib breaks compatibility with python2 hosts [\#281](https://github.com/pypeclub/pype/issues/281)
- Updating a look where the shader name changed leaves the geo without a shader [\#237](https://github.com/pypeclub/pype/issues/237)
- Better error handling [\#84](https://github.com/pypeclub/pype/issues/84)
- Harmony: function signature [\#609](https://github.com/pypeclub/pype/pull/609)
- Nuke: gizmo publishing error [\#594](https://github.com/pypeclub/pype/pull/594)
- Harmony: fix clashing namespace of called js functions [\#584](https://github.com/pypeclub/pype/pull/584)
- Maya: fix maya scene type preset exception [\#569](https://github.com/pypeclub/pype/pull/569)
**Closed issues:**
- Nuke Gizmo publishing [\#597](https://github.com/pypeclub/pype/issues/597)
- nuke gizmo publishing error [\#592](https://github.com/pypeclub/pype/issues/592)
- Publish EDL [\#579](https://github.com/pypeclub/pype/issues/579)
- Publish render from SP [\#576](https://github.com/pypeclub/pype/issues/576)
- rename ftrack custom attribute group to `pype` [\#184](https://github.com/pypeclub/pype/issues/184)
**Merged pull requests:**
- Audio file existence check [\#614](https://github.com/pypeclub/pype/pull/614)
- NKS small fixes [\#587](https://github.com/pypeclub/pype/pull/587)
- Standalone publisher editorial plugins interfering [\#580](https://github.com/pypeclub/pype/pull/580)
## [2.12.2](https://github.com/pypeclub/pype/tree/2.12.2) (2020-09-25)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.1...2.12.2)
**Enhancements:**
- pype config GUI [\#241](https://github.com/pypeclub/pype/issues/241)
**Fixed bugs:**
- Harmony: Saving heavy scenes will crash [\#507](https://github.com/pypeclub/pype/issues/507)
- Extract review a representation name with `\*\_burnin` [\#388](https://github.com/pypeclub/pype/issues/388)
- Hierarchy data was not considering active instances [\#551](https://github.com/pypeclub/pype/pull/551)
## [2.12.1](https://github.com/pypeclub/pype/tree/2.12.1) (2020-09-15)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.0...2.12.1)
**Fixed bugs:**
- Pype: changelog.md is outdated [\#503](https://github.com/pypeclub/pype/issues/503)
- dependency security alert ! [\#484](https://github.com/pypeclub/pype/issues/484)
- Maya: RenderSetup is missing update [\#106](https://github.com/pypeclub/pype/issues/106)
- \<pyblish plugin\> extract effects creates new instance [\#78](https://github.com/pypeclub/pype/issues/78)
## [2.12.0](https://github.com/pypeclub/pype/tree/2.12.0) (2020-09-10)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.8...2.12.0)
**Enhancements:**
- Less mongo connections [\#509](https://github.com/pypeclub/pype/pull/509)
- Nuke: adding image loader [\#499](https://github.com/pypeclub/pype/pull/499)
- Move launcher window to top if launcher action is clicked [\#450](https://github.com/pypeclub/pype/pull/450)
- Maya: better tile rendering support in Pype [\#446](https://github.com/pypeclub/pype/pull/446)
- Implementation of non QML launcher [\#443](https://github.com/pypeclub/pype/pull/443)
- Optional skip review on renders. [\#441](https://github.com/pypeclub/pype/pull/441)
- Ftrack: Option to push status from task to latest version [\#440](https://github.com/pypeclub/pype/pull/440)
- Properly containerize image plane loads. [\#434](https://github.com/pypeclub/pype/pull/434)
- Option to keep the review files. [\#426](https://github.com/pypeclub/pype/pull/426)
- Isolate view on instance members. [\#425](https://github.com/pypeclub/pype/pull/425)
- Maya: Publishing of tile renderings on Deadline [\#398](https://github.com/pypeclub/pype/pull/398)
- Feature/little bit better logging gui [\#383](https://github.com/pypeclub/pype/pull/383)
**Fixed bugs:**
- Maya: Fix tile order for Draft Tile Assembler [\#511](https://github.com/pypeclub/pype/pull/511)
- Remove extra dash [\#501](https://github.com/pypeclub/pype/pull/501)
- Fix: strip dot from repre names in single frame renders [\#498](https://github.com/pypeclub/pype/pull/498)
- Better handling of destination during integrating [\#485](https://github.com/pypeclub/pype/pull/485)
- Fix: allow thumbnail creation for single frame renders [\#460](https://github.com/pypeclub/pype/pull/460)
- added missing argument to launch\_application in ftrack app handler [\#453](https://github.com/pypeclub/pype/pull/453)
- Burnins: Copy bit rate of input video to match quality. [\#448](https://github.com/pypeclub/pype/pull/448)
- Standalone publisher is now independent from tray [\#442](https://github.com/pypeclub/pype/pull/442)
- Bugfix/empty enumerator attributes [\#436](https://github.com/pypeclub/pype/pull/436)
- Fixed wrong order of "other" category collapsing in publisher [\#435](https://github.com/pypeclub/pype/pull/435)
- Multiple reviews where being overwritten to one. [\#424](https://github.com/pypeclub/pype/pull/424)
- Cleanup plugin fail on instances without staging dir [\#420](https://github.com/pypeclub/pype/pull/420)
- deprecated -intra parameter in ffmpeg to new `-g` [\#417](https://github.com/pypeclub/pype/pull/417)
- Delivery action can now work with entered path [\#397](https://github.com/pypeclub/pype/pull/397)
**Merged pull requests:**
- Review on instance.data [\#473](https://github.com/pypeclub/pype/pull/473)
## [2.11.8](https://github.com/pypeclub/pype/tree/2.11.8) (2020-08-27)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.7...2.11.8)
**Enhancements:**
- DWAA support for Maya [\#382](https://github.com/pypeclub/pype/issues/382)
- Isolate View on Playblast [\#367](https://github.com/pypeclub/pype/issues/367)
- Maya: Tile rendering [\#297](https://github.com/pypeclub/pype/issues/297)
- single pype instance running [\#47](https://github.com/pypeclub/pype/issues/47)
- PYPE-649: projects don't guarantee backwards compatible environment [\#8](https://github.com/pypeclub/pype/issues/8)
- PYPE-663: separate venv for each deployed version [\#7](https://github.com/pypeclub/pype/issues/7)
**Fixed bugs:**
- pyblish pype - other group is collapsed before plugins are done [\#431](https://github.com/pypeclub/pype/issues/431)
- Alpha white edges in harmony on PNGs [\#412](https://github.com/pypeclub/pype/issues/412)
- harmony image loader picks wrong representations [\#404](https://github.com/pypeclub/pype/issues/404)
- Clockify crash when response contain symbol not allowed by UTF-8 [\#81](https://github.com/pypeclub/pype/issues/81)
## [2.11.7](https://github.com/pypeclub/pype/tree/2.11.7) (2020-08-21)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.6...2.11.7)
**Fixed bugs:**
- Clean Up Baked Movie [\#369](https://github.com/pypeclub/pype/issues/369)
- celaction last workfile [\#459](https://github.com/pypeclub/pype/pull/459)
## [2.11.6](https://github.com/pypeclub/pype/tree/2.11.6) (2020-08-18)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.5...2.11.6)
**Enhancements:**
- publisher app [\#56](https://github.com/pypeclub/pype/issues/56)
## [2.11.5](https://github.com/pypeclub/pype/tree/2.11.5) (2020-08-13)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.4...2.11.5)
**Enhancements:**
- Switch from master to equivalent [\#220](https://github.com/pypeclub/pype/issues/220)
- Standalone publisher now only groups sequence if the extension is known [\#439](https://github.com/pypeclub/pype/pull/439)
**Fixed bugs:**
- Logs have been disable for editorial by default to speed up publishing [\#433](https://github.com/pypeclub/pype/pull/433)
- additional fixes for celaction [\#430](https://github.com/pypeclub/pype/pull/430)
- Harmony: invalid variable scope in validate scene settings [\#428](https://github.com/pypeclub/pype/pull/428)
- new representation name for audio was not accepted [\#427](https://github.com/pypeclub/pype/pull/427)
## [2.11.4](https://github.com/pypeclub/pype/tree/2.11.4) (2020-08-10)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.3...2.11.4)
**Enhancements:**
- WebSocket server [\#135](https://github.com/pypeclub/pype/issues/135)
- standalonepublisher: editorial family features expansion \[master branch\] [\#411](https://github.com/pypeclub/pype/pull/411)
## [2.11.3](https://github.com/pypeclub/pype/tree/2.11.3) (2020-08-04)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.2...2.11.3)
**Fixed bugs:**
- Harmony: publishing performance issues [\#408](https://github.com/pypeclub/pype/pull/408)
## [2.11.2](https://github.com/pypeclub/pype/tree/2.11.2) (2020-07-31)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.1...2.11.2)
**Fixed bugs:**
- Ftrack to Avalon bug [\#406](https://github.com/pypeclub/pype/issues/406)
## [2.11.1](https://github.com/pypeclub/pype/tree/2.11.1) (2020-07-29)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.0...2.11.1)
**Merged pull requests:**
- Celaction: metadata json folder fixes on path [\#393](https://github.com/pypeclub/pype/pull/393)
- CelAction - version up method taken from pype.lib [\#391](https://github.com/pypeclub/pype/pull/391)
<a name="2.11.0"></a>
## 2.11.0 ##
@ -430,3 +787,6 @@ A large cleanup release. Most of the changes are under the hood.
- work directory was sometimes not being created correctly
- major pype.lib cleanup. Removing of unused functions, merging those that were doing the same and general house cleaning.
- _(avalon)_ subsets in maya 2019 weren't behaving correctly in the outliner
\* *This Changelog was automatically generated by [github_changelog_generator](https://github.com/github-changelog-generator/github-changelog-generator)*

View file

@ -72,16 +72,19 @@ def patched_discover(superclass):
elif superclass.__name__.split(".")[-1] == "Creator":
plugin_type = "create"
print(">>> trying to find presets for {}:{} ...".format(host, plugin_type))
print(">>> Finding presets for {}:{} ...".format(host, plugin_type))
try:
config_data = config.get_presets()['plugins'][host][plugin_type]
settings = (
get_project_settings(os.environ['AVALON_PROJECT'])
[host][plugin_type]
)
except KeyError:
print("*** no presets found.")
else:
for plugin in plugins:
if plugin.__name__ in config_data:
if plugin.__name__ in settings:
print(">>> We have preset for {}".format(plugin.__name__))
for option, value in config_data[plugin.__name__].items():
for option, value in settings[plugin.__name__].items():
if option == "enabled" and value is False:
setattr(plugin, "active", False)
print(" - is disabled by preset")
@ -130,6 +133,7 @@ def install():
anatomy.set_root_environments()
avalon.register_root(anatomy.roots)
# apply monkey patched discover to original one
log.info("Patching discovery")
avalon.discover = patched_discover
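To make the new settings lookup concrete, a minimal sketch of reading the same structure outside of discovery; only the `settings[host][plugin_type]` shape comes from the diff above, while the host "maya", plugin type "create" and the plugin name are hypothetical examples:

    import os
    from pype.api import get_project_settings

    # Same lookup shape as in patched_discover above:
    #   settings[host][plugin_type][PluginClassName][option] -> value
    settings = get_project_settings(os.environ["AVALON_PROJECT"])
    create_presets = settings["maya"]["create"]              # hypothetical host / plugin type
    plugin_options = create_presets.get("CreateRender", {})  # hypothetical plugin name
    if plugin_options.get("enabled") is False:
        print("CreateRender would be deactivated by preset")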

View file

@ -1,7 +1,9 @@
from .settings import (
system_settings,
project_settings,
environments
get_system_settings,
get_project_settings,
get_current_project_settings,
get_anatomy_settings,
get_environments
)
from .lib import (
PypeLogger,
@ -52,9 +54,11 @@ from .lib import _subprocess as subprocess
Logger = PypeLogger
__all__ = [
"system_settings",
"project_settings",
"environments",
"get_system_settings",
"get_project_settings",
"get_current_project_settings",
"get_anatomy_settings",
"get_environments",
"PypeLogger",
"Logger",

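A hedged usage sketch of the renamed settings getters exported above; the project name is a placeholder and the call signatures are inferred from how the getters are used elsewhere in this commit, not confirmed by it:

    from pype.api import (
        get_system_settings,
        get_project_settings,
        get_current_project_settings,
        get_anatomy_settings,
        get_environments
    )

    system_settings = get_system_settings()
    project_settings = get_project_settings("my_project")   # placeholder project name
    current_settings = get_current_project_settings()       # project taken from AVALON_PROJECT
    anatomy_settings = get_anatomy_settings("my_project")   # assumed to take a project name
    environments = get_environments()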
View file

@ -0,0 +1,45 @@
import os
from pype.lib import PreLaunchHook
class AfterEffectsPrelaunchHook(PreLaunchHook):
"""Launch arguments preparation.
Hook adds python executable and executes python script of the AfterEffects
implementation before the AfterEffects executable.
"""
app_groups = ["aftereffects"]
def execute(self):
# Pop AfterEffects executable
aftereffects_executable = self.launch_context.launch_args.pop(0)
# Pop rest of launch arguments - There should not be other arguments!
remainders = []
while self.launch_context.launch_args:
remainders.append(self.launch_context.launch_args.pop(0))
new_launch_args = [
self.python_executable(),
"-c",
(
"import avalon.aftereffects;"
"avalon.aftereffects.launch(\"{}\")"
).format(aftereffects_executable)
]
# Append as a whole list as these arguments should not be separated
self.launch_context.launch_args.append(new_launch_args)
if remainders:
self.log.warning((
"There are unexpected launch arguments "
"in AfterEffects launch. {}"
).format(str(remainders)))
self.launch_context.launch_args.extend(remainders)
def python_executable(self):
"""Should lead to python executable."""
# TODO change in Pype 3
return os.environ["PYPE_PYTHON_EXE"]
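For orientation, a standalone sketch of what this hook does to the launch arguments; the AfterEffects and python paths below are illustrative placeholders, not real defaults:

    # Illustrative only: fake paths stand in for the real executable and
    # os.environ["PYPE_PYTHON_EXE"].
    launch_args = ["C:/Program Files/Adobe/AfterFX.exe"]
    aftereffects_executable = launch_args.pop(0)
    launch_args.append([
        "C:/Python27/python.exe",
        "-c",
        (
            "import avalon.aftereffects;"
            "avalon.aftereffects.launch(\"{}\")"
        ).format(aftereffects_executable)
    ])
    # launch_args now holds one nested list, so the wrapped command is kept
    # together instead of being split into separate arguments.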

View file

@ -0,0 +1,125 @@
import os
import shutil
import winreg
from pype.lib import PreLaunchHook
from pype.hosts import celaction
class CelactionPrelaunchHook(PreLaunchHook):
"""
Prepare the CelAction launch.
Hook adds the last workfile path to the launch arguments (copying it from
a template scene when it does not exist yet) and pre-configures CelAction
publishing and dialog settings in the Windows registry.
"""
workfile_ext = "scn"
app_groups = ["celaction"]
platforms = ["windows"]
def execute(self):
# Add workfile path to launch arguments
workfile_path = self.workfile_path()
if workfile_path:
self.launch_context.launch_args.append(workfile_path)
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
# get publish version of celaction
app = "celaction_publish"
# setting output parameters
path = r"Software\CelAction\CelAction2D\User Settings"
winreg.CreateKey(winreg.HKEY_CURRENT_USER, path)
hKey = winreg.OpenKey(
winreg.HKEY_CURRENT_USER,
"Software\\CelAction\\CelAction2D\\User Settings", 0,
winreg.KEY_ALL_ACCESS)
# TODO: change to root path and pyblish standalone to premiere way
pype_root_path = os.getenv("PYPE_SETUP_PATH")
path = os.path.join(pype_root_path, "pype.bat")
winreg.SetValueEx(hKey, "SubmitAppTitle", 0, winreg.REG_SZ, path)
parameters = [
"launch",
f"--app {app}",
f"--project {project_name}",
f"--asset {asset_name}",
f"--task {task_name}",
"--currentFile \\\"\"*SCENE*\"\\\"",
"--chunk 10",
"--frameStart *START*",
"--frameEnd *END*",
"--resolutionWidth *X*",
"--resolutionHeight *Y*",
# "--programDir \"'*PROGPATH*'\""
]
winreg.SetValueEx(hKey, "SubmitParametersTitle", 0, winreg.REG_SZ,
" ".join(parameters))
# setting resolution parameters
path = r"Software\CelAction\CelAction2D\User Settings\Dialogs"
path += r"\SubmitOutput"
winreg.CreateKey(winreg.HKEY_CURRENT_USER, path)
hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0,
winreg.KEY_ALL_ACCESS)
winreg.SetValueEx(hKey, "SaveScene", 0, winreg.REG_DWORD, 1)
winreg.SetValueEx(hKey, "CustomX", 0, winreg.REG_DWORD, 1920)
winreg.SetValueEx(hKey, "CustomY", 0, winreg.REG_DWORD, 1080)
# making sure message dialogs don't appear when overwriting
path = r"Software\CelAction\CelAction2D\User Settings\Messages"
path += r"\OverwriteScene"
winreg.CreateKey(winreg.HKEY_CURRENT_USER, path)
hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0,
winreg.KEY_ALL_ACCESS)
winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 6)
winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1)
path = r"Software\CelAction\CelAction2D\User Settings\Messages"
path += r"\SceneSaved"
winreg.CreateKey(winreg.HKEY_CURRENT_USER, path)
hKey = winreg.OpenKey(winreg.HKEY_CURRENT_USER, path, 0,
winreg.KEY_ALL_ACCESS)
winreg.SetValueEx(hKey, "Result", 0, winreg.REG_DWORD, 1)
winreg.SetValueEx(hKey, "Valid", 0, winreg.REG_DWORD, 1)
def workfile_path(self):
workfile_path = self.data["last_workfile_path"]
# copy workfile from template if none exists at the path
if not os.path.exists(workfile_path):
# TODO add ability to set different template workfile path via
# settings
pype_celaction_dir = os.path.dirname(
os.path.abspath(celaction.__file__)
)
template_path = os.path.join(
pype_celaction_dir,
"celaction_template_scene.scn"
)
if not os.path.exists(template_path):
self.log.warning(
"Couldn't find workfile template file in {}".format(
template_path
)
)
return
self.log.info(
f"Creating workfile from template: \"{template_path}\""
)
# Copy template workfile to the new destination
shutil.copy2(
os.path.normpath(template_path),
os.path.normpath(workfile_path)
)
self.log.info(f"Workfile to open: \"{workfile_path}\"")
return workfile_path

View file

@ -0,0 +1,50 @@
import os
import importlib
from pype.lib import PreLaunchHook
from pype.hosts.fusion import utils
class FusionPrelaunch(PreLaunchHook):
"""
This hook validates the Python 3.6 path and the utility scripts directory
for Fusion and runs the Fusion utility scripts setup.
"""
app_groups = ["fusion"]
def execute(self):
# making sure Python 3.6 is installed at the provided path
py36_dir = os.path.normpath(self.env.get("PYTHON36", ""))
assert os.path.isdir(py36_dir), (
"Python 3.6 is not installed at the provided folder path. Either "
"make sure the `environments/fusion.json` has `PYTHON36` set "
"correctly or make sure Python 3.6 is installed "
f"in the given path. \nPYTHON36: `{py36_dir}`"
)
self.log.info(f"Path to Fusion Python folder: `{py36_dir}`...")
self.env["PYTHON36"] = py36_dir
# setting utility scripts dir for scripts syncing
us_dir = os.path.normpath(
self.env.get("FUSION_UTILITY_SCRIPTS_DIR", "")
)
assert os.path.isdir(us_dir), (
"Fusion utility script dir does not exist. Either make sure "
"the `environments/fusion.json` has `FUSION_UTILITY_SCRIPTS_DIR` "
"set correctly or reinstall DaVinci Resolve. \n"
f"FUSION_UTILITY_SCRIPTS_DIR: `{us_dir}`"
)
try:
__import__("avalon.fusion")
__import__("pyblish")
except ImportError:
self.log.warning(
"pyblish: Could not load Fusion integration.",
exc_info=True
)
else:
# Fusion setup integration
importlib.reload(utils)
utils.setup(self.env)

View file

@ -0,0 +1,197 @@
import os
import ftrack_api
from pype.api import get_project_settings
from pype.lib import PostLaunchHook
class PostFtrackHook(PostLaunchHook):
order = None
def execute(self):
project_name = self.data.get("project_name")
asset_name = self.data.get("asset_name")
task_name = self.data.get("task_name")
missing_context_keys = set()
if not project_name:
missing_context_keys.add("project_name")
if not asset_name:
missing_context_keys.add("asset_name")
if not task_name:
missing_context_keys.add("task_name")
if missing_context_keys:
missing_keys_str = ", ".join([
"\"{}\"".format(key) for key in missing_context_keys
])
self.log.debug("Hook {} skipped. Missing data keys: {}".format(
self.__class__.__name__, missing_keys_str
))
return
required_keys = ("FTRACK_SERVER", "FTRACK_API_USER", "FTRACK_API_KEY")
for key in required_keys:
if not os.environ.get(key):
self.log.debug((
"Missing required environment \"{}\""
" for Ftrack after launch procedure."
).format(key))
return
try:
session = ftrack_api.Session(auto_connect_event_hub=True)
self.log.debug("Ftrack session created")
except Exception:
self.log.warning("Couldn't create Ftrack session")
return
try:
entity = self.find_ftrack_task_entity(
session, project_name, asset_name, task_name
)
if entity:
self.ftrack_status_change(session, entity, project_name)
self.start_timer(session, entity, ftrack_api)
except Exception:
self.log.warning(
"Couldn't finish Ftrack procedure.", exc_info=True
)
return
finally:
session.close()
def find_ftrack_task_entity(
self, session, project_name, asset_name, task_name
):
project_entity = session.query(
"Project where full_name is \"{}\"".format(project_name)
).first()
if not project_entity:
self.log.warning(
"Couldn't find project \"{}\" in Ftrack.".format(project_name)
)
return
potential_task_entities = session.query((
"TypedContext where parent.name is \"{}\" and project_id is \"{}\""
).format(asset_name, project_entity["id"])).all()
filtered_entities = []
for _entity in potential_task_entities:
if (
_entity.entity_type.lower() == "task"
and _entity["name"] == task_name
):
filtered_entities.append(_entity)
if not filtered_entities:
self.log.warning((
"Couldn't find task \"{}\" under parent \"{}\" in Ftrack."
).format(task_name, asset_name))
return
if len(filtered_entities) > 1:
self.log.warning((
"Found more than one task \"{}\""
" under parent \"{}\" in Ftrack."
).format(task_name, asset_name))
return
return filtered_entities[0]
def ftrack_status_change(self, session, entity, project_name):
project_settings = get_project_settings(project_name)
status_update = project_settings["ftrack"]["events"]["status_update"]
if not status_update["enabled"]:
self.log.debug(
"Status changes are disabled for project \"{}\"".format(
project_name
)
)
return
status_mapping = status_update["mapping"]
if not status_mapping:
self.log.warning(
"Project \"{}\" does not have status change mapping set.".format(
project_name
)
)
return
actual_status = entity["status"]["name"].lower()
already_tested = set()
ent_path = "/".join(
[ent["name"] for ent in entity["link"]]
)
while True:
next_status_name = None
for key, value in status_mapping.items():
if key in already_tested:
continue
if actual_status in value or "__any__" in value:
if key != "__ignore__":
next_status_name = key
already_tested.add(key)
break
already_tested.add(key)
if next_status_name is None:
break
try:
query = "Status where name is \"{}\"".format(
next_status_name
)
status = session.query(query).one()
entity["status"] = status
session.commit()
self.log.debug("Changing status to \"{}\" <{}>".format(
next_status_name, ent_path
))
break
except Exception:
session.rollback()
msg = (
"Status \"{}\" in presets wasn't found"
" on Ftrack entity type \"{}\""
).format(next_status_name, entity.entity_type)
self.log.warning(msg)
def start_timer(self, session, entity, _ftrack_api):
"""Start Ftrack timer on task from context."""
self.log.debug("Triggering timer start.")
user_entity = session.query("User where username is \"{}\"".format(
os.environ["FTRACK_API_USER"]
)).first()
if not user_entity:
self.log.warning(
"Couldn't find user with username \"{}\" in Ftrack".format(
os.environ["FTRACK_API_USER"]
)
)
return
source = {
"user": {
"id": user_entity["id"],
"username": user_entity["username"]
}
}
event_data = {
"actionIdentifier": "start.timer",
"selection": [{"entityId": entity["id"], "entityType": "task"}]
}
session.event_hub.publish(
_ftrack_api.event.base.Event(
topic="ftrack.action.launch",
data=event_data,
source=source
),
on_error="ignore"
)
self.log.debug("Timer start triggered successfully.")
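A hedged example of the `status_update` setting this hook iterates; the status names are made up, only the `enabled`/`mapping` keys and the special `__ignore__`/`__any__` values follow the code above:

    # Hypothetical project settings excerpt:
    status_update = {
        "enabled": True,
        "mapping": {
            # new status      <- current statuses that should trigger it
            "In Progress": ["not started", "ready to start"],
            "__ignore__": ["omitted", "on hold"]
        }
    }
    # With an entity currently in "Not Started" the loop above would switch
    # the Ftrack status to "In Progress". A value list containing "__any__"
    # matches any current status, and statuses matched under "__ignore__"
    # are left unchanged.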

View file

@ -0,0 +1,354 @@
import os
import re
import json
import getpass
import copy
from pype.api import (
Anatomy,
get_project_settings
)
from pype.lib import (
env_value_to_bool,
PreLaunchHook,
ApplicationLaunchFailed
)
import acre
import avalon.api
class GlobalHostDataHook(PreLaunchHook):
order = -100
def execute(self):
"""Prepare global data and environments for the launched host."""
if not self.application.is_host:
self.log.info(
"Skipped hook {}. Application is not marked as host.".format(
self.__class__.__name__
)
)
return
self.prepare_global_data()
self.prepare_host_environments()
self.prepare_context_environments()
def prepare_global_data(self):
"""Prepare global objects in `data` that will always be used."""
# Mongo documents
project_name = self.data.get("project_name")
if not project_name:
self.log.info(
"Skipping global data preparation."
" Key `project_name` was not found in launch context."
)
return
self.log.debug("Project name is set to \"{}\"".format(project_name))
# Anatomy
self.data["anatomy"] = Anatomy(project_name)
# Mongo connection
dbcon = avalon.api.AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
dbcon.install()
self.data["dbcon"] = dbcon
# Project document
project_doc = dbcon.find_one({"type": "project"})
self.data["project_doc"] = project_doc
asset_name = self.data.get("asset_name")
if not asset_name:
self.log.warning(
"Asset name was not set. Skipping asset document query."
)
return
asset_doc = dbcon.find_one({
"type": "asset",
"name": asset_name
})
self.data["asset_doc"] = asset_doc
def _merge_env(self, env, current_env):
"""Modified merge function from the acre module."""
result = current_env.copy()
for key, value in env.items():
# Keep missing keys by not filling `missing` kwarg
value = acre.lib.partial_format(value, data=current_env)
result[key] = value
return result
def prepare_host_environments(self):
"""Modify launch environments based on launched app and context."""
# Keys for getting environments
env_keys = [self.app_group, self.app_name]
asset_doc = self.data.get("asset_doc")
if asset_doc:
# Add tools environments
for key in asset_doc["data"].get("tools_env") or []:
tool = self.manager.tools.get(key)
if tool:
if tool.group_name not in env_keys:
env_keys.append(tool.group_name)
if tool.name not in env_keys:
env_keys.append(tool.name)
self.log.debug(
"Finding environment groups for keys: {}".format(env_keys)
)
settings_env = self.data["settings_env"]
env_values = {}
for env_key in env_keys:
_env_values = settings_env.get(env_key)
if not _env_values:
continue
# Choose right platform
tool_env = acre.parse(_env_values)
# Merge dictionaries
env_values = self._merge_env(tool_env, env_values)
final_env = self._merge_env(
acre.compute(env_values), self.launch_context.env
)
# Update env
self.launch_context.env.update(final_env)
def prepare_context_environments(self):
"""Modify launch environments with context data for the launched host."""
# Context environments
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
task_name = self.data.get("task_name")
if (
not project_doc
or not asset_doc
or not task_name
):
self.log.info(
"Skipping context environments preparation."
" Launch context does not contain required data."
)
return
workdir_data = self._prepare_workdir_data(
project_doc, asset_doc, task_name
)
self.data["workdir_data"] = workdir_data
hierarchy = workdir_data["hierarchy"]
anatomy = self.data["anatomy"]
try:
anatomy_filled = anatomy.format(workdir_data)
workdir = os.path.normpath(anatomy_filled["work"]["folder"])
if not os.path.exists(workdir):
self.log.debug(
"Creating workdir folder: \"{}\"".format(workdir)
)
os.makedirs(workdir)
except Exception as exc:
raise ApplicationLaunchFailed(
"Error in anatomy.format: {}".format(str(exc))
)
context_env = {
"AVALON_PROJECT": project_doc["name"],
"AVALON_ASSET": asset_doc["name"],
"AVALON_TASK": task_name,
"AVALON_APP": self.host_name,
"AVALON_APP_NAME": self.app_name,
"AVALON_HIERARCHY": hierarchy,
"AVALON_WORKDIR": workdir
}
self.log.debug(
"Context environments set:\n{}".format(
json.dumps(context_env, indent=4)
)
)
self.launch_context.env.update(context_env)
self.prepare_last_workfile(workdir)
def _prepare_workdir_data(self, project_doc, asset_doc, task_name):
hierarchy = "/".join(asset_doc["data"]["parents"])
data = {
"project": {
"name": project_doc["name"],
"code": project_doc["data"].get("code")
},
"task": task_name,
"asset": asset_doc["name"],
"app": self.host_name,
"hierarchy": hierarchy
}
return data
def prepare_last_workfile(self, workdir):
"""Last workfile workflow preparation.
Function checks if the last workfile workflow should be used and tries
to find the last workfile. Both pieces of information are stored to
`data` and environments.
The last workfile path is always filled (with version 1) even if no
workfile exists yet.
Args:
workdir (str): Path to folder where workfiles should be stored.
"""
_workdir_data = self.data.get("workdir_data")
if not _workdir_data:
self.log.info(
"Skipping last workfile preparation."
" Key `workdir_data` not filled."
)
return
workdir_data = copy.deepcopy(_workdir_data)
project_name = self.data["project_name"]
task_name = self.data["task_name"]
start_last_workfile = self.should_start_last_workfile(
project_name, self.host_name, task_name
)
self.data["start_last_workfile"] = start_last_workfile
# Store boolean as "0"(False) or "1"(True)
self.launch_context.env["AVALON_OPEN_LAST_WORKFILE"] = (
str(int(bool(start_last_workfile)))
)
_sub_msg = "" if start_last_workfile else " not"
self.log.debug(
"Last workfile should{} be opened on start.".format(_sub_msg)
)
# Last workfile path
last_workfile_path = ""
extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(
self.host_name
)
if extensions:
anatomy = self.data["anatomy"]
# Find last workfile
file_template = anatomy.templates["work"]["file"]
workdir_data.update({
"version": 1,
"user": os.environ.get("PYPE_USERNAME") or getpass.getuser(),
"ext": extensions[0]
})
last_workfile_path = avalon.api.last_workfile(
workdir, file_template, workdir_data, extensions, True
)
if not os.path.exists(last_workfile_path):
self.log.debug((
"Workfile for launch context does not exist"
" yet but the path will be set."
))
self.log.debug(
"Setting last workfile path: {}".format(last_workfile_path)
)
self.launch_context.env["AVALON_LAST_WORKFILE"] = last_workfile_path
self.data["last_workfile_path"] = last_workfile_path
def should_start_last_workfile(self, project_name, host_name, task_name):
"""Define if host should start last version workfile if possible.
Default output is `False`. Can be overridden with environment variable
`AVALON_OPEN_LAST_WORKFILE`, valid values without case sensitivity are
`"0", "1", "true", "false", "yes", "no"`.
Args:
project_name (str): Name of project.
host_name (str): Name of host which is launched. In avalon's
application context its value is stored in the app definition under
the key `"application_dir"`. It is not case sensitive.
task_name (str): Name of task which is used for launching the host.
Task name is not case sensitive.
Returns:
bool: True if host should start workfile.
"""
# Default is False; it can be overridden by the `AVALON_OPEN_LAST_WORKFILE`
# environment variable described in the docstring above.
default_output = env_value_to_bool(
"AVALON_OPEN_LAST_WORKFILE", default=False
)
project_settings = (
get_project_settings(project_name)['global']['tools'])
startup_presets = (
project_settings['Workfiles']['last_workfile_on_startup'])
if not startup_presets:
return default_output
host_name_lowered = host_name.lower()
task_name_lowered = task_name.lower()
max_points = 2
matching_points = -1
matching_item = None
for item in startup_presets:
hosts = item.get("hosts") or tuple()
tasks = item.get("tasks") or tuple()
hosts_lowered = tuple(_host_name.lower() for _host_name in hosts)
# Skip item if it has hosts set and the current host is not among them
if hosts_lowered and host_name_lowered not in hosts_lowered:
continue
tasks_lowered = tuple(_task_name.lower() for _task_name in tasks)
# Skip item if it has tasks set and the current task does not match
if tasks_lowered:
task_match = False
for task_regex in self.compile_list_of_regexes(tasks_lowered):
if re.match(task_regex, task_name_lowered):
task_match = True
break
if not task_match:
continue
points = int(bool(hosts_lowered)) + int(bool(tasks_lowered))
if points > matching_points:
matching_item = item
matching_points = points
if matching_points == max_points:
break
if matching_item is not None:
output = matching_item.get("enabled")
if output is None:
output = default_output
return output
return default_output
@staticmethod
def compile_list_of_regexes(in_list):
"""Convert strings in entered list to compiled regex objects."""
regexes = list()
if not in_list:
return regexes
for item in in_list:
if item:
try:
regexes.append(re.compile(item))
except TypeError:
print((
"Invalid type \"{}\" value \"{}\"."
" Expected string based object. Skipping."
).format(str(type(item)), str(item)))
return regexes
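A sketch of how `should_start_last_workfile` scores `last_workfile_on_startup` presets; the host, task and regex values are hypothetical examples:

    # Hypothetical presets from project settings:
    startup_presets = [
        {"hosts": [], "tasks": [], "enabled": True},                 # catch-all, 0 points
        {"hosts": ["maya"], "tasks": [], "enabled": True},           # host match, 1 point
        {"hosts": ["maya"], "tasks": ["anim.*"], "enabled": False}   # host + task, 2 points
    ]
    # For host "maya" and task "animation" the third item matches with the
    # maximum of 2 points ("anim.*" is compiled by compile_list_of_regexes
    # and matched with re.match), so the last workfile would not be opened.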

View file

@ -0,0 +1,44 @@
import os
from pype.lib import PreLaunchHook
class HarmonyPrelaunchHook(PreLaunchHook):
"""Launch arguments preparation.
Hook adds python executable and executes python script of the Harmony
implementation before the Harmony executable.
"""
app_groups = ["harmony"]
def execute(self):
# Pop Harmony executable
harmony_executable = self.launch_context.launch_args.pop(0)
# Pop rest of launch arguments - There should not be other arguments!
remainders = []
while self.launch_context.launch_args:
remainders.append(self.launch_context.launch_args.pop(0))
new_launch_args = [
self.python_executable(),
"-c",
(
"import avalon.harmony;"
"avalon.harmony.launch(\"{}\")"
).format(harmony_executable)
]
# Append as a whole list as these arguments should not be separated
self.launch_context.launch_args.append(new_launch_args)
if remainders:
self.log.warning((
"There are unexpected launch arguments in Harmony launch. {}"
).format(str(remainders)))
self.launch_context.launch_args.extend(remainders)
def python_executable(self):
"""Should lead to python executable."""
# TODO change in Pype 3
return os.environ["PYPE_PYTHON_EXE"]

View file

@ -0,0 +1,15 @@
import os
from pype.lib import PreLaunchHook
class HieroLaunchArguments(PreLaunchHook):
order = 0
app_groups = ["hiero"]
def execute(self):
"""Prepare subprocess launch arguments for Hiero."""
# Add path to workfile to arguments
if self.data.get("start_last_workfile"):
last_workfile = self.data.get("last_workfile_path")
if os.path.exists(last_workfile):
self.launch_context.launch_args.append(last_workfile)

View file

@ -0,0 +1,16 @@
import os
from pype.lib import PreLaunchHook
class MayaLaunchArguments(PreLaunchHook):
"""Add path to last workfile to launch arguments."""
order = 0
app_groups = ["maya"]
def execute(self):
"""Prepare subprocess launch arguments for Maya."""
# Add path to workfile to arguments
if self.data.get("start_last_workfile"):
last_workfile = self.data.get("last_workfile_path")
if os.path.exists(last_workfile):
self.launch_context.launch_args.append(last_workfile)

View file

@ -0,0 +1,15 @@
import os
from pype.lib import PreLaunchHook
class NukeStudioLaunchArguments(PreLaunchHook):
order = 0
app_groups = ["nukestudio"]
def execute(self):
"""Prepare subprocess launch arguments for NukeStudio."""
# Add path to workfile to arguments
if self.data.get("start_last_workfile"):
last_workfile = self.data.get("last_workfile_path")
if os.path.exists(last_workfile):
self.launch_context.launch_args.append(last_workfile)

View file

@ -0,0 +1,15 @@
import os
from pype.lib import PreLaunchHook
class NukeXLaunchArguments(PreLaunchHook):
order = 0
app_groups = ["nukex"]
def execute(self):
"""Prepare subprocess launch arguments for NukeX."""
# Add path to workfile to arguments
if self.data.get("start_last_workfile"):
last_workfile = self.data.get("last_workfile_path")
if os.path.exists(last_workfile):
self.launch_context.launch_args.append(last_workfile)

View file

@ -0,0 +1,44 @@
import os
from pype.lib import PreLaunchHook
class PhotoshopPrelaunchHook(PreLaunchHook):
"""Launch arguments preparation.
Hook adds python executable and executes python script of the Photoshop
implementation before the Photoshop executable.
"""
app_groups = ["photoshop"]
def execute(self):
# Pop Photoshop executable
photoshop_executable = self.launch_context.launch_args.pop(0)
# Pop rest of launch arguments - There should not be other arguments!
remainders = []
while self.launch_context.launch_args:
remainders.append(self.launch_context.launch_args.pop(0))
new_launch_args = [
self.python_executable(),
"-c",
(
"import avalon.photoshop;"
"avalon.photoshop.launch(\"{}\")"
).format(photoshop_executable)
]
# Append as a whole list as these arguments should not be separated
self.launch_context.launch_args.append(new_launch_args)
if remainders:
self.log.warning((
"There are unexpected launch arguments in Photoshop launch. {}"
).format(str(remainders)))
self.launch_context.launch_args.extend(remainders)
def python_executable(self):
"""Should lead to python executable."""
# TODO change in Pype 3
return os.environ["PYPE_PYTHON_EXE"]

View file

@ -0,0 +1,58 @@
import os
import importlib
from pype.lib import PreLaunchHook
from pype.hosts.resolve import utils
class ResolvePrelaunch(PreLaunchHook):
"""
This hook validates the Python 3.6 path, the Resolve utility scripts
directory and the pre-launch python script in the environment and runs
the Resolve utility scripts setup before Resolve is launched.
"""
app_groups = ["resolve"]
def execute(self):
# making sure Python 3.6 is installed at the provided path
py36_dir = os.path.normpath(self.env.get("PYTHON36_RESOLVE", ""))
assert os.path.isdir(py36_dir), (
"Python 3.6 is not installed at the provided folder path. Either "
"make sure the `environments/resolve.json` has `PYTHON36_RESOLVE` "
"set correctly or make sure Python 3.6 is installed "
f"in the given path. \nPYTHON36_RESOLVE: `{py36_dir}`"
)
self.log.info(f"Path to Resolve Python folder: `{py36_dir}`...")
self.env["PYTHON36_RESOLVE"] = py36_dir
# setting utility scripts dir for scripts syncing
us_dir = os.path.normpath(
self.env.get("RESOLVE_UTILITY_SCRIPTS_DIR", "")
)
assert os.path.isdir(us_dir), (
"Resolve utility script dir does not exist. Either make sure "
"the `environments/resolve.json` has `RESOLVE_UTILITY_SCRIPTS_DIR` "
"set correctly or reinstall DaVinci Resolve. \n"
f"RESOLVE_UTILITY_SCRIPTS_DIR: `{us_dir}`"
)
self.log.debug(f"-- us_dir: `{us_dir}`")
# correctly format path for pre python script
pre_py_sc = os.path.normpath(self.env.get("PRE_PYTHON_SCRIPT", ""))
self.env["PRE_PYTHON_SCRIPT"] = pre_py_sc
self.log.debug(f"-- pre_py_sc: `{pre_py_sc}`...")
try:
__import__("pype.hosts.resolve")
__import__("pyblish")
except ImportError:
self.log.warning(
"pyblish: Could not load Resolve integration.",
exc_info=True
)
else:
# Resolve Setup integration
importlib.reload(utils)
self.log.debug(f"-- utils.__file__: `{utils.__file__}`")
utils.setup(self.env)

View file

@ -0,0 +1,35 @@
from pype.lib import (
PreLaunchHook,
ApplicationLaunchFailed,
_subprocess
)
class PreInstallPyWin(PreLaunchHook):
"""Hook makes sure the python module pywin32 is installed on Windows."""
# WARNING This hook will probably be deprecated in Pype 3 - kept for test
order = 10
app_groups = ["tvpaint"]
platforms = ["windows"]
def execute(self):
installed = False
try:
from win32com.shell import shell
self.log.debug("Python module `pywin32` already installed.")
installed = True
except Exception:
pass
if installed:
return
try:
output = _subprocess(
["pip", "install", "pywin32==227"]
)
self.log.debug("Pip install pywin32 output:\n{}".format(output))
except RuntimeError:
msg = "Installation of python module `pywin32` crashed."
self.log.warning(msg, exc_info=True)
raise ApplicationLaunchFailed(msg)

View file

@ -0,0 +1,103 @@
import os
import shutil
from pype.hosts import tvpaint
from pype.lib import PreLaunchHook
import avalon
class TvpaintPrelaunchHook(PreLaunchHook):
"""Launch arguments preparation.
Hook adds python executable and the script path of the TVPaint
implementation before the TVPaint executable and adds the last workfile
path to launch arguments.
Existence of the last workfile is checked. If the workfile does not
exist, the hook tries to copy a templated workfile from a predefined path.
"""
app_groups = ["tvpaint"]
def execute(self):
# Pop tvpaint executable
tvpaint_executable = self.launch_context.launch_args.pop(0)
# Pop rest of launch arguments - There should not be other arguments!
remainders = []
while self.launch_context.launch_args:
remainders.append(self.launch_context.launch_args.pop(0))
new_launch_args = [
self.main_executable(),
self.launch_script_path(),
tvpaint_executable
]
# Add workfile to launch arguments
workfile_path = self.workfile_path()
if workfile_path:
new_launch_args.append(workfile_path)
# How to create new command line
# if platform.system().lower() == "windows":
# new_launch_args = [
# "cmd.exe",
# "/c",
# "Call cmd.exe /k",
# *new_launch_args
# ]
# Append as a whole list as these arguments should not be separated
self.launch_context.launch_args.append(new_launch_args)
if remainders:
self.log.warning((
"There are unexpected launch arguments in TVPaint launch. {}"
).format(str(remainders)))
self.launch_context.launch_args.extend(remainders)
def main_executable(self):
"""Should lead to python executable."""
# TODO change in Pype 3
return os.path.normpath(os.environ["PYPE_PYTHON_EXE"])
def launch_script_path(self):
avalon_dir = os.path.dirname(os.path.abspath(avalon.__file__))
script_path = os.path.join(
avalon_dir,
"tvpaint",
"launch_script.py"
)
return script_path
def workfile_path(self):
workfile_path = self.data["last_workfile_path"]
# copy workfile from template if none exists at the path
if not os.path.exists(workfile_path):
# TODO add ability to set different template workfile path via
# settings
pype_dir = os.path.dirname(os.path.abspath(tvpaint.__file__))
template_path = os.path.join(pype_dir, "template.tvpp")
if not os.path.exists(template_path):
self.log.warning(
"Couldn't find workfile template file in {}".format(
template_path
)
)
return
self.log.info(
f"Creating workfile from template: \"{template_path}\""
)
# Copy template workfile to the new destination
shutil.copy2(
os.path.normpath(template_path),
os.path.normpath(workfile_path)
)
self.log.info(f"Workfile to open: \"{workfile_path}\"")
return workfile_path

View file

@ -0,0 +1,95 @@
import os
from pype.lib import (
PreLaunchHook,
ApplicationLaunchFailed
)
from pype.hosts.unreal import lib as unreal_lib
class UnrealPrelaunchHook(PreLaunchHook):
"""
This hook will check if the current workfile path has an Unreal
project inside. If not, it initializes the project and finally passes
the project file path to the Unreal launch arguments.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.signature = "( {} )".format(self.__class__.__name__)
def execute(self):
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
workdir = self.env["AVALON_WORKDIR"]
engine_version = self.app_name.split("_")[-1]
unreal_project_name = f"{asset_name}_{task_name}"
# Unreal is sensitive about project names longer than 20 chars
if len(unreal_project_name) > 20:
self.log.warning((
f"Project name exceeds 20 characters ({unreal_project_name})!"
))
# Unreal doesn't accept non-alphabetic characters at the start
# of the project name. This is because the project name is then used
# in various places inside C++ code where variable names cannot
# start with a non-alphabetic character. We prepend 'P' to solve it.
# 😱
if not unreal_project_name[:1].isalpha():
self.log.warning((
"Project name doesn't start with alphabet "
f"character ({unreal_project_name}). Appending 'P'"
))
unreal_project_name = f"P{unreal_project_name}"
project_path = os.path.join(workdir, unreal_project_name)
self.log.info((
f"{self.signature} requested UE4 version: "
f"[ {engine_version} ]"
))
detected = unreal_lib.get_engine_versions()
detected_str = ', '.join(detected.keys()) or 'none'
self.log.info((
f"{self.signature} detected UE4 versions: "
f"[ {detected_str} ]"
))
engine_version = ".".join(engine_version.split(".")[:2])
if engine_version not in detected.keys():
raise ApplicationLaunchFailed((
f"{self.signature} requested version not "
f"detected [ {engine_version} ]"
))
os.makedirs(project_path, exist_ok=True)
project_file = os.path.join(
project_path,
f"{unreal_project_name}.uproject"
)
if not os.path.isfile(project_file):
engine_path = detected[engine_version]
self.log.info((
f"{self.signature} creating unreal "
f"project [ {unreal_project_name} ]"
))
# Set "AVALON_UNREAL_PLUGIN" to current process environment for
# execution of `create_unreal_project`
env_key = "AVALON_UNREAL_PLUGIN"
if self.env.get(env_key):
os.environ[env_key] = self.env[env_key]
unreal_lib.create_unreal_project(
unreal_project_name,
engine_version,
project_path,
engine_path=engine_path
)
# Append project file to launch arguments
self.launch_context.launch_args.append(f"\"{project_file}\"")

View file

@ -9,7 +9,7 @@ import avalon.tools.sceneinventory
import pyblish.api
from pype import lib
from pype.api import config
from pype.api import get_current_project_settings
def set_scene_settings(settings):
@ -50,10 +50,18 @@ def get_asset_settings():
}
try:
skip_resolution_check = \
config.get_presets()["harmony"]["general"]["skip_resolution_check"]
skip_timelines_check = \
config.get_presets()["harmony"]["general"]["skip_timelines_check"]
skip_resolution_check = (
get_current_project_settings()
["harmony"]
["general"]
["skip_resolution_check"]
)
skip_timelines_check = (
get_current_project_settings()
["harmony"]
["general"]
["skip_timelines_check"]
)
except KeyError:
skip_resolution_check = []
skip_timelines_check = []

View file

@ -266,8 +266,8 @@ class AExpectedFiles:
def _generate_single_file_sequence(self, layer_data):
expected_files = []
file_prefix = layer_data["filePrefix"]
for cam in layer_data["cameras"]:
file_prefix = layer_data["filePrefix"]
mappings = (
(R_SUBSTITUTE_SCENE_TOKEN, layer_data["sceneName"]),
(R_SUBSTITUTE_LAYER_TOKEN, layer_data["layerName"]),
@ -299,9 +299,9 @@ class AExpectedFiles:
def _generate_aov_file_sequences(self, layer_data):
expected_files = []
aov_file_list = {}
file_prefix = layer_data["filePrefix"]
for aov in layer_data["enabledAOVs"]:
for cam in layer_data["cameras"]:
file_prefix = layer_data["filePrefix"]
mappings = (
(R_SUBSTITUTE_SCENE_TOKEN, layer_data["sceneName"]),
@ -418,13 +418,12 @@ class AExpectedFiles:
if connections:
for connection in connections:
if connection:
node_name = connection.split(".")[0]
if cmds.nodeType(node_name) == "renderLayer":
attr_name = "%s.value" % ".".join(
connection.split(".")[:-1]
)
if node_name == layer:
yield cmds.getAttr(attr_name)
# node_name = connection.split(".")[0]
attr_name = "%s.value" % ".".join(
connection.split(".")[:-1]
)
yield cmds.getAttr(attr_name)
def get_render_attribute(self, attr):
"""Get attribute from render options.
@ -572,11 +571,17 @@ class ExpectedFilesVray(AExpectedFiles):
expected_files = super(ExpectedFilesVray, self).get_files()
layer_data = self._get_layer_data()
# remove 'beauty' from filenames as vray doesn't output it
update = {}
if layer_data.get("enabledAOVs"):
expected_files[0][u"beauty"] = self._generate_single_file_sequence(
layer_data
) # noqa: E501
for aov, seq in expected_files[0].items():
if aov.startswith("beauty"):
new_list = []
for f in seq:
new_list.append(f.replace("_beauty", ""))
update[aov] = new_list
expected_files[0].update(update)
return expected_files
def get_aovs(self):
@ -630,28 +635,49 @@ class ExpectedFilesVray(AExpectedFiles):
# todo: find how vray set format for AOVs
enabled_aovs.append(
(self._get_vray_aov_name(aov), default_ext))
enabled_aovs.append(
(u"beauty", default_ext)
)
return enabled_aovs
def _get_vray_aov_name(self, node):
"""Get AOVs name from Vray.
# Get render element pass type
vray_node_attr = next(
attr
for attr in cmds.listAttr(node)
if attr.startswith("vray_name")
)
pass_type = vray_node_attr.rsplit("_", 1)[-1]
Args:
node (str): aov node name.
# Support V-Ray extratex explicit name (if set by user)
if pass_type == "extratex":
explicit_attr = "{}.vray_explicit_name_extratex".format(node)
explicit_name = cmds.getAttr(explicit_attr)
if explicit_name:
return explicit_name
Returns:
str: aov name.
# Node type is in the attribute name but we need to check if value
# of the attribute as it can be changed
return cmds.getAttr("{}.{}".format(node, vray_node_attr))
"""
vray_name = None
vray_explicit_name = None
vray_file_name = None
for attr in cmds.listAttr(node):
if attr.startswith("vray_filename"):
vray_file_name = cmds.getAttr("{}.{}".format(node, attr))
elif attr.startswith("vray_name"):
vray_name = cmds.getAttr("{}.{}".format(node, attr))
elif attr.startswith("vray_explicit_name"):
vray_explicit_name = cmds.getAttr("{}.{}".format(node, attr))
if vray_file_name is not None and vray_file_name != "":
final_name = vray_file_name
elif vray_explicit_name is not None and vray_explicit_name != "":
final_name = vray_explicit_name
elif vray_name is not None and vray_name != "":
final_name = vray_name
else:
continue
# special case for Material Select elements - these are named
# based on the material they are connected to.
if "vray_mtl_mtlselect" in cmds.listAttr(node):
connections = cmds.listConnections(
"{}.vray_mtl_mtlselect".format(node))
if connections:
final_name += '_{}'.format(str(connections[0]))
return final_name
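The new implementation resolves the AOV name by priority: "vray_filename" first, then "vray_explicit_name", then "vray_name", with a Material Select suffix appended when connected. A standalone sketch of that priority using a plain dict in place of Maya attribute queries (function and values are hypothetical):

# Hypothetical sketch of the name-resolution priority above; no Maya required.
def resolve_vray_aov_name(attrs, mtlselect_material=None):
    final_name = (
        attrs.get("vray_filename")
        or attrs.get("vray_explicit_name")
        or attrs.get("vray_name")
    )
    if final_name and mtlselect_material:
        # Material Select elements are suffixed with the connected material.
        final_name += "_{}".format(mtlselect_material)
    return final_name

# resolve_vray_aov_name({"vray_name": "diffuseChannel"}) -> "diffuseChannel"
# resolve_vray_aov_name({"vray_name": "ms", "vray_explicit_name": "sel"}, "red_mtl") -> "sel_red_mtl"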
class ExpectedFilesRedshift(AExpectedFiles):

View file

@ -1,13 +1,13 @@
import re
import avalon.api
import avalon.nuke
from pype.api import config
from pype.api import get_current_project_settings
class PypeCreator(avalon.nuke.pipeline.Creator):
"""Pype Nuke Creator class wrapper
"""
def __init__(self, *args, **kwargs):
super(PypeCreator, self).__init__(*args, **kwargs)
self.presets = config.get_presets()['plugins']["nuke"]["create"].get(
self.presets = get_current_project_settings()["nuke"]["create"].get(
self.__class__.__name__, {}
)

View file

@ -4,7 +4,7 @@ import platform
import json
from distutils import dir_util
import subprocess
from pype.api import config
from pype.api import get_project_settings
def get_engine_versions():
@ -150,7 +150,7 @@ def create_unreal_project(project_name: str,
:type dev_mode: bool
:returns: None
"""
preset = config.get_presets()["unreal"]["project_setup"]
preset = get_project_settings(project_name)["unreal"]["project_setup"]
if os.path.isdir(os.environ.get("AVALON_UNREAL_PLUGIN", "")):
# copy plugin to correct path under project
@ -246,15 +246,18 @@ def create_unreal_project(project_name: str,
with open(project_file, mode="w") as pf:
json.dump(data, pf, indent=4)
# ensure we have PySide installed in engine
# TODO: make it work for other platforms 🍎 🐧
if platform.system().lower() == "windows":
python_path = os.path.join(engine_path, "Engine", "Binaries",
"ThirdParty", "Python", "Win64",
"python.exe")
# UE < 4.26 has Python 2 by default, so we need PySide
# but we will not need it in 4.26 and up
if int(ue_version.split(".")[1]) < 26:
# ensure we have PySide installed in engine
# TODO: make it work for other platforms 🍎 🐧
if platform.system().lower() == "windows":
python_path = os.path.join(engine_path, "Engine", "Binaries",
"ThirdParty", "Python", "Win64",
"python.exe")
subprocess.run([python_path, "-m",
"pip", "install", "pyside"])
subprocess.run([python_path, "-m",
"pip", "install", "pyside"])
if dev_mode or preset["dev_mode"]:
_prepare_cpp_project(project_file, engine_path)

View file

@ -25,6 +25,12 @@ from .env_tools import (
get_paths_from_environ
)
from .python_module_tools import (
modules_from_path,
recursive_bases_from_class,
classes_from_module
)
from .avalon_context import (
is_latest,
any_outdated,
@ -42,8 +48,8 @@ from .applications import (
ApplictionExecutableNotFound,
ApplicationNotFound,
ApplicationManager,
launch_application,
ApplicationAction,
PreLaunchHook,
PostLaunchHook,
_subprocess
)
@ -65,12 +71,13 @@ from .ffmpeg_utils import (
terminal = Terminal
__all__ = [
"get_avalon_database",
"set_io_database",
"env_value_to_bool",
"get_paths_from_environ",
"modules_from_path",
"recursive_bases_from_class",
"classes_from_module",
"is_latest",
"any_outdated",
"get_asset",
@ -86,8 +93,8 @@ __all__ = [
"ApplictionExecutableNotFound",
"ApplicationNotFound",
"ApplicationManager",
"launch_application",
"ApplicationAction",
"PreLaunchHook",
"PostLaunchHook",
"filter_pyblish_plugins",

View file

@ -520,6 +520,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
f.replace(orig_scene, new_scene)
)
new_exp[aov] = replaced_files
# [] might be too much here, TODO
self._instance.data["expectedFiles"] = [new_exp]
else:
new_exp = []
@ -527,7 +528,8 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
new_exp.append(
f.replace(orig_scene, new_scene)
)
self._instance.data["expectedFiles"] = [new_exp]
self._instance.data["expectedFiles"] = new_exp
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
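The fix above keeps "expectedFiles" a flat list in the non-AOV branch instead of wrapping it again. A small sketch of the rename step for both layouts, assuming expected files are either a list of paths or a single AOV-keyed dict (helper name is hypothetical):

# Hypothetical helper showing the scene-name swap for both expectedFiles layouts.
def replace_scene_in_expected(expected_files, orig_scene, new_scene):
    if expected_files and isinstance(expected_files[0], dict):
        # AOV layout: [{aov_name: [path, path, ...]}]
        return [{
            aov: [path.replace(orig_scene, new_scene) for path in paths]
            for aov, paths in expected_files[0].items()
        }]
    # Flat layout: plain list of paths.
    return [path.replace(orig_scene, new_scene) for path in expected_files]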

File diff suppressed because it is too large

View file

@ -426,12 +426,12 @@ class BuildWorkfile:
(dict): preset per entered task name
"""
host_name = avalon.api.registered_host().__name__.rsplit(".", 1)[-1]
presets = config.get_presets(io.Session["AVALON_PROJECT"])
presets = get_project_settings(io.Session["AVALON_PROJECT"])
# Get presets for host
build_presets = (
presets["plugins"]
.get(host_name, {})
presets.get(host_name, {})
.get("workfile_build")
.get("profiles")
)
if not build_presets:
return

View file

@ -20,9 +20,9 @@ def env_value_to_bool(env_key=None, value=None, default=False):
if value is not None:
value = str(value).lower()
if value in ("true", "yes", "1"):
if value in ("true", "yes", "1", "on"):
return True
elif value in ("false", "no", "0"):
elif value in ("false", "no", "0", "off"):
return False
return default
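The change above only widens the accepted strings; example outcomes, assuming the signature env_value_to_bool(env_key=None, value=None, default=False) shown here (values are illustrative):

# Illustrative calls against the function above.
env_value_to_bool(value="ON")                   # True  - "on" is now truthy
env_value_to_bool(value="off")                  # False - "off" is now falsy
env_value_to_bool(value="maybe", default=True)  # True  - unknown values fall back to default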

View file

@ -55,7 +55,7 @@ def execute_hook(hook, *args, **kwargs):
module.__file__ = abspath
try:
with open(abspath) as f:
with open(abspath, errors='ignore') as f:
six.exec_(f.read(), module.__dict__)
sys.modules[abspath] = module

View file

@ -4,7 +4,7 @@ import os
import inspect
import logging
from . import get_presets
from ..api import get_project_settings
log = logging.getLogger(__name__)
@ -25,7 +25,7 @@ def filter_pyblish_plugins(plugins):
host = api.current_host()
presets = get_presets().get('plugins', {})
presets = get_project_settings(os.environ['AVALON_PROJECT']) or {}
# iterate over plugins
for plugin in plugins[:]:
@ -53,7 +53,7 @@ def filter_pyblish_plugins(plugins):
log.info('removing plugin {}'.format(plugin.__name__))
plugins.remove(plugin)
else:
log.info('setting {}:{} on plugin {}'.format(
log.info('setting XXX {}:{} on plugin {}'.format(
option, value, plugin.__name__))
setattr(plugin, option, value)

View file

@ -0,0 +1,113 @@
import os
import sys
import types
import importlib
import inspect
import logging
log = logging.getLogger(__name__)
PY3 = sys.version_info[0] == 3
def modules_from_path(folder_path):
"""Get python scripts as modules from a path.
Arguments:
folder_path (str): Path to folder containing python scripts.
Returns:
List of modules.
"""
folder_path = os.path.normpath(folder_path)
modules = []
if not os.path.isdir(folder_path):
log.warning("Not a directory path: {}".format(folder_path))
return modules
for filename in os.listdir(folder_path):
# Ignore files which start with underscore
if filename.startswith("_"):
continue
mod_name, mod_ext = os.path.splitext(filename)
if not mod_ext == ".py":
continue
full_path = os.path.join(folder_path, filename)
if not os.path.isfile(full_path):
continue
try:
# Prepare module object where content of file will be parsed
module = types.ModuleType(mod_name)
if PY3:
# Use loader so module has full specs
module_loader = importlib.machinery.SourceFileLoader(
mod_name, full_path
)
module_loader.exec_module(module)
else:
# Execute module code and store content to module
with open(full_path) as _stream:
# Execute content and store it to module object
exec(_stream.read(), module.__dict__)
module.__file__ = full_path
modules.append(module)
except Exception:
log.warning(
"Failed to load path: \"{0}\"".format(full_path),
exc_info=True
)
continue
return modules
def recursive_bases_from_class(klass):
"""Extract all bases from entered class."""
result = []
bases = klass.__bases__
result.extend(bases)
for base in bases:
result.extend(recursive_bases_from_class(base))
return result
def classes_from_module(superclass, module):
"""Return plug-ins from module
Arguments:
superclass (superclass): Superclass of subclasses to look for
module (types.ModuleType): Imported module from which to
parse valid Avalon plug-ins.
Returns:
List of plug-ins, or empty list if none is found.
"""
classes = list()
for name in dir(module):
# It could be anything at this point
obj = getattr(module, name)
if not inspect.isclass(obj):
continue
# These are subclassed from nothing, not even `object`
if not len(obj.__bases__) > 0:
continue
# Use string comparison rather than `issubclass`
# in order to support reloading of this module.
bases = recursive_bases_from_class(obj)
if not any(base.__name__ == superclass.__name__ for base in bases):
continue
classes.append(obj)
return classes
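The two helpers above are typically used together: load every script from a folder as modules, then collect the classes derived from a given superclass. A minimal sketch with a hypothetical plugin folder and a Pyblish base class:

# Hypothetical usage of modules_from_path + classes_from_module.
import pyblish.api

plugin_classes = []
for module in modules_from_path("/path/to/plugin/folder"):  # hypothetical path
    # Matching is done by base-class name, so reloaded modules still qualify.
    plugin_classes.extend(
        classes_from_module(pyblish.api.InstancePlugin, module)
    )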

View file

@ -1,5 +0,0 @@
from .adobe_comunicator import AdobeCommunicator
def tray_init(tray_widget, main_widget):
return AdobeCommunicator()

View file

@ -1,49 +0,0 @@
import os
import pype
from pype.api import Logger
from .lib import AdobeRestApi, PUBLISH_PATHS
log = Logger().get_logger("AdobeCommunicator")
class AdobeCommunicator:
rest_api_obj = None
def __init__(self):
self.rest_api_obj = None
# Add "adobecommunicator" publish paths
PUBLISH_PATHS.append(os.path.sep.join(
[pype.PLUGINS_DIR, "adobecommunicator", "publish"]
))
def tray_start(self):
return
def process_modules(self, modules):
# Module requires RestApiServer
rest_api_module = modules.get("RestApiServer")
if not rest_api_module:
log.warning(
"AdobeCommunicator won't work without RestApiServer."
)
return
# Register statics url
pype_module_root = os.environ["PYPE_MODULE_ROOT"].replace("\\", "/")
static_path = "{}/pype/hosts/premiere/ppro".format(pype_module_root)
rest_api_module.register_statics("/ppro", static_path)
# Register rest api object for communication
self.rest_api_obj = AdobeRestApi()
# Add Ftrack publish path if Ftrack module is registered
if "FtrackModule" in modules:
PUBLISH_PATHS.append(os.path.sep.join(
[pype.PLUGINS_DIR, "ftrack", "publish"]
))
log.debug((
f"Adobe Communicator Registered PUBLISH_PATHS"
f"> `{PUBLISH_PATHS}`"
))

View file

@ -1,6 +0,0 @@
from .rest_api import AdobeRestApi, PUBLISH_PATHS
__all__ = [
"PUBLISH_PATHS",
"AdobeRestApi"
]

View file

@ -1,57 +0,0 @@
import os
import sys
import pype
import importlib
import pyblish.api
import pyblish.util
import avalon.api
from avalon.tools import publish
from pype.api import Logger
log = Logger().get_logger(__name__)
def main(env):
# Registers pype's Global pyblish plugins
pype.install()
# Register Host (and it's pyblish plugins)
host_name = env["AVALON_APP"]
# TODO not sure whether to use "pype." or "avalon." for host import
host_import_str = f"pype.hosts.{host_name}"
try:
host_module = importlib.import_module(host_import_str)
except ModuleNotFoundError:
log.error((
f"Host \"{host_name}\" can't be imported."
f" Import string \"{host_import_str}\" failed."
))
return False
avalon.api.install(host_module)
# Register additional paths
addition_paths_str = env.get("PUBLISH_PATHS") or ""
addition_paths = addition_paths_str.split(os.pathsep)
for path in addition_paths:
path = os.path.normpath(path)
if not os.path.exists(path):
continue
pyblish.api.register_plugin_path(path)
# Register project specific plugins
project_name = os.environ["AVALON_PROJECT"]
project_plugins_paths = env.get("PYPE_PROJECT_PLUGINS") or ""
for path in project_plugins_paths.split(os.pathsep):
plugin_path = os.path.join(path, project_name, "plugins")
if os.path.exists(plugin_path):
pyblish.api.register_plugin_path(plugin_path)
return publish.show()
if __name__ == "__main__":
result = main(os.environ)
sys.exit(not bool(result))

View file

@ -1,117 +0,0 @@
import os
import sys
import copy
from pype.modules.rest_api import RestApi, route, abort, CallbackResult
from avalon.api import AvalonMongoDB
from pype.api import config, execute, Logger
log = Logger().get_logger("AdobeCommunicator")
CURRENT_DIR = os.path.dirname(__file__)
PUBLISH_SCRIPT_PATH = os.path.join(CURRENT_DIR, "publish.py")
PUBLISH_PATHS = []
class AdobeRestApi(RestApi):
dbcon = AvalonMongoDB()
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.dbcon.install()
@route("/available", "/adobe")
def available(self):
return CallbackResult()
@route("/presets/<project_name>", "/adobe")
def get_presets(self, request):
project_name = request.url_data["project_name"]
return CallbackResult(data=config.get_presets(project_name))
@route("/publish", "/adobe", "POST")
def publish(self, request):
"""Triggers publishing script in subprocess.
The subprocess blocks the server process, so during publishing it is
not possible to handle other requests, and the main application may freeze.
TODO: Freezing issue may be fixed with socket communication.
Example url:
http://localhost:8021/adobe/publish (POST)
"""
try:
publish_env = self._prepare_publish_environments(
request.request_data
)
except Exception as exc:
log.warning(
"Failed to prepare environments for publishing.",
exc_info=True
)
abort(400, str(exc))
output_data_path = publish_env["AC_PUBLISH_OUTPATH"]
log.info("Pyblish is running")
try:
# Trigger subprocess
# QUESTION should we check returncode?
returncode = execute(
[sys.executable, PUBLISH_SCRIPT_PATH],
env=publish_env
)
# Check if output file exists
if returncode != 0 or not os.path.exists(output_data_path):
abort(500, "Publishing failed")
log.info("Pyblish have stopped")
return CallbackResult(
data={"return_data_path": output_data_path}
)
except Exception:
log.warning("Publishing failed", exc_info=True)
abort(500, "Publishing failed")
def _prepare_publish_environments(self, data):
"""Prepares environments based on request data."""
env = copy.deepcopy(os.environ)
project_name = data["project"]
asset_name = data["asset"]
project_doc = self.dbcon[project_name].find_one({
"type": "project"
})
av_asset = self.dbcon[project_name].find_one({
"type": "asset",
"name": asset_name
})
parents = av_asset["data"]["parents"]
hierarchy = ""
if parents:
hierarchy = "/".join(parents)
env["AVALON_PROJECT"] = project_name
env["AVALON_ASSET"] = asset_name
env["AVALON_TASK"] = data["task"]
env["AVALON_WORKDIR"] = data["workdir"]
env["AVALON_HIERARCHY"] = hierarchy
env["AVALON_PROJECTCODE"] = project_doc["data"].get("code", "")
env["AVALON_APP"] = data["AVALON_APP"]
env["AVALON_APP_NAME"] = data["AVALON_APP_NAME"]
env["PYBLISH_HOSTS"] = data["AVALON_APP"]
env["PUBLISH_PATHS"] = os.pathsep.join(PUBLISH_PATHS)
# Input and Output paths where source data and result data will be
# stored
env["AC_PUBLISH_INPATH"] = data["adobePublishJsonPathSend"]
env["AC_PUBLISH_OUTPATH"] = data["adobePublishJsonPathGet"]
return env

View file

@ -1,105 +0,0 @@
import os
import toml
import time
from pype.modules.ftrack.lib import AppAction
from avalon import lib, api
from pype.api import Logger, config
log = Logger().get_logger(__name__)
def register_app(app, dbcon, session, plugins_presets):
name = app['name']
variant = ""
try:
variant = app['name'].split("_")[1]
except Exception:
pass
abspath = lib.which_app(app['name'])
if abspath is None:
log.error(
"'{0}' - App don't have config toml file".format(app['name'])
)
return
apptoml = toml.load(abspath)
''' REQUIRED '''
executable = apptoml['executable']
''' OPTIONAL '''
label = apptoml.get('ftrack_label', app.get('label', name))
icon = apptoml.get('ftrack_icon', None)
description = apptoml.get('description', None)
preactions = apptoml.get('preactions', [])
if icon:
icon = icon.format(os.environ.get('PYPE_STATICS_SERVER', ''))
# register action
AppAction(
session, dbcon, label, name, executable, variant,
icon, description, preactions, plugins_presets
).register()
if not variant:
log.info('- Variant is not set')
def register(session, plugins_presets={}):
from pype.lib import env_value_to_bool
if env_value_to_bool("PYPE_USE_APP_MANAGER", default=False):
return
app_usages = (
config.get_presets()
.get("global", {})
.get("applications")
) or {}
apps = []
missing_app_names = []
launchers_path = os.path.join(os.environ["PYPE_CONFIG"], "launchers")
for file in os.listdir(launchers_path):
filename, ext = os.path.splitext(file)
if ext.lower() != ".toml":
continue
app_usage = app_usages.get(filename)
if not app_usage:
if app_usage is None:
missing_app_names.append(filename)
continue
loaded_data = toml.load(os.path.join(launchers_path, file))
app_data = {
"name": filename,
"label": loaded_data.get("label", filename)
}
apps.append(app_data)
if missing_app_names:
log.debug(
"Apps not defined in applications usage. ({})".format(
", ".join((
"\"{}\"".format(app_name)
for app_name in missing_app_names
))
)
)
dbcon = api.AvalonMongoDB()
apps = sorted(apps, key=lambda app: app["name"])
app_counter = 0
for app in apps:
try:
register_app(app, dbcon, session, plugins_presets)
if app_counter % 5 == 0:
time.sleep(0.1)
app_counter += 1
except Exception as exc:
log.warning(
"\"{}\" - not a proper App ({})".format(app['name'], str(exc)),
exc_info=True
)

View file

@ -1,7 +1,6 @@
import os
from uuid import uuid4
from pype.api import config
from pype.modules.ftrack.lib import BaseAction
from pype.lib import (
ApplicationManager,
@ -205,55 +204,6 @@ class AppplicationsAction(BaseAction):
"message": msg
}
# TODO Move to prelaunch/afterlaunch hooks
# TODO change to settings
# Change status of task to In progress
presets = config.get_presets()["ftrack"]["ftrack_config"]
if "status_update" in presets:
statuses = presets["status_update"]
actual_status = entity["status"]["name"].lower()
already_tested = []
ent_path = "/".join(
[ent["name"] for ent in entity["link"]]
)
while True:
next_status_name = None
for key, value in statuses.items():
if key in already_tested:
continue
if actual_status in value or "_any_" in value:
if key != "_ignore_":
next_status_name = key
already_tested.append(key)
break
already_tested.append(key)
if next_status_name is None:
break
try:
query = "Status where name is \"{}\"".format(
next_status_name
)
status = session.query(query).one()
entity["status"] = status
session.commit()
self.log.debug("Changing status to \"{}\" <{}>".format(
next_status_name, ent_path
))
break
except Exception:
session.rollback()
msg = (
"Status \"{}\" in presets wasn't found"
" on Ftrack entity type \"{}\""
).format(next_status_name, entity.entity_type)
self.log.warning(msg)
return {
"success": True,
"message": "Launching {0}".format(self.label)
@ -261,7 +211,5 @@ class AppplicationsAction(BaseAction):
def register(session, plugins_presets=None):
'''Register action. Called when used as an event plugin.'''
from pype.lib import env_value_to_bool
if env_value_to_bool("PYPE_USE_APP_MANAGER", default=False):
AppplicationsAction(session, plugins_presets).register()
"""Register action. Called when used as an event plugin."""
AppplicationsAction(session, plugins_presets).register()

View file

@ -1,6 +1,4 @@
import os
import collections
import toml
import json
import arrow
import ftrack_api
@ -8,8 +6,8 @@ from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.modules.ftrack.lib.avalon_sync import (
CUST_ATTR_ID_KEY, CUST_ATTR_GROUP, default_custom_attributes_definition
)
from pype.api import config
from pype.lib import ApplicationManager, env_value_to_bool
from pype.api import get_system_settings
from pype.lib import ApplicationManager
"""
This action creates/updates custom attributes.
@ -146,9 +144,6 @@ class CustomAttributes(BaseAction):
"text", "boolean", "date", "enumerator",
"dynamic enumerator", "number"
)
# Pype 3 features
use_app_manager = env_value_to_bool("PYPE_USE_APP_MANAGER", default=False)
app_manager = None
def discover(self, session, entities, event):
'''
@ -171,8 +166,7 @@ class CustomAttributes(BaseAction):
})
session.commit()
if self.use_app_manager:
self.app_manager = ApplicationManager()
self.app_manager = ApplicationManager()
try:
self.prepare_global_data(session)
@ -217,15 +211,12 @@ class CustomAttributes(BaseAction):
self.groups = {}
self.presets = config.get_presets()
self.ftrack_settings = get_system_settings()["modules"]["Ftrack"]
self.attrs_presets = self.prepare_attribute_pressets()
def prepare_attribute_pressets(self):
output = {}
attr_presets = (
self.presets.get("ftrack", {}).get("ftrack_custom_attributes")
) or {}
attr_presets = self.ftrack_settings["custom_attributes"]
for entity_type, preset in attr_presets.items():
# Lower entity type
entity_type = entity_type.lower()
@ -391,54 +382,8 @@ class CustomAttributes(BaseAction):
app_definitions.append({"empty": "< Empty >"})
return app_definitions
def application_definitions(self):
app_usages = self.presets.get("global", {}).get("applications") or {}
app_definitions = []
launchers_path = os.path.join(os.environ["PYPE_CONFIG"], "launchers")
missing_app_names = []
for file in os.listdir(launchers_path):
app_name, ext = os.path.splitext(file)
if ext.lower() != ".toml":
continue
if not app_usages.get(app_name):
missing_app_names.append(app_name)
continue
loaded_data = toml.load(os.path.join(launchers_path, file))
ftrack_label = loaded_data.get("ftrack_label")
if ftrack_label:
parts = app_name.split("_")
if len(parts) > 1:
ftrack_label = " ".join((ftrack_label, parts[-1]))
else:
ftrack_label = loaded_data.get("label", app_name)
app_definitions.append({app_name: ftrack_label})
if missing_app_names:
self.log.warning(
"Apps not defined in applications usage. ({})".format(
", ".join((
"\"{}\"".format(app_name)
for app_name in missing_app_names
))
)
)
# Make sure there is at least one item
if not app_definitions:
app_definitions.append({"empty": "< Empty >"})
return app_definitions
def applications_attribute(self, event):
if self.use_app_manager:
apps_data = self.app_defs_from_app_manager()
else:
apps_data = self.application_definitions()
apps_data = self.app_defs_from_app_manager()
applications_custom_attr_data = {
"label": "Applications",
@ -453,28 +398,13 @@ class CustomAttributes(BaseAction):
}
self.process_attr_data(applications_custom_attr_data, event)
def tools_from_app_manager(self):
def tools_attribute(self, event):
tools_data = []
for tool_name, tool in self.app_manager.tools.items():
if tool.enabled:
tools_data.append({
tool_name: tool_name
})
return tools_data
def tools_data(self):
tool_usages = self.presets.get("global", {}).get("tools") or {}
tools_data = []
for tool_name, usage in tool_usages.items():
if usage:
tools_data.append({tool_name: tool_name})
return tools_data
def tools_attribute(self, event):
if self.use_app_manager:
tools_data = self.tools_from_app_manager()
else:
tools_data = self.tools_data()
# Make sure there is at least one item
if not tools_data:
@ -494,12 +424,7 @@ class CustomAttributes(BaseAction):
self.process_attr_data(tools_custom_attr_data, event)
def intent_attribute(self, event):
intent_key_values = (
self.presets
.get("global", {})
.get("intent", {})
.get("items", {})
) or {}
intent_key_values = self.ftrack_settings["intent"]["items"]
intent_values = []
for key, label in intent_key_values.items():
@ -805,6 +730,9 @@ class CustomAttributes(BaseAction):
return default
err_msg = 'Default value is not'
if type == 'number':
if isinstance(default, (str)) and default.isnumeric():
default = float(default)
if not isinstance(default, (float, int)):
raise CustAttrException('{} integer'.format(err_msg))
elif type == 'text':

View file

@ -1,7 +1,11 @@
import os
from pype.modules.ftrack.lib import BaseAction, statics_icon
from avalon import lib as avalonlib
from pype.api import config, Anatomy
from pype.api import (
Anatomy,
get_project_settings
)
from pype.lib import ApplicationManager
class CreateFolders(BaseAction):
@ -93,6 +97,7 @@ class CreateFolders(BaseAction):
all_entities = self.get_notask_children(entity)
anatomy = Anatomy(project_name)
project_settings = get_project_settings(project_name)
work_keys = ["work", "folder"]
work_template = anatomy.templates
@ -106,10 +111,13 @@ class CreateFolders(BaseAction):
publish_template = publish_template[key]
publish_has_apps = "{app" in publish_template
presets = config.get_presets()
app_presets = presets.get("tools", {}).get("sw_folders")
cached_apps = {}
tools_settings = project_settings["global"]["tools"]
app_presets = tools_settings["Workfiles"]["sw_folders"]
app_manager_apps = None
if app_presets and (work_has_apps or publish_has_apps):
app_manager_apps = ApplicationManager().applications
cached_apps = {}
collected_paths = []
for entity in all_entities:
if entity.entity_type.lower() == "project":
@ -140,18 +148,20 @@ class CreateFolders(BaseAction):
task_data["task"] = child["name"]
apps = []
if app_presets and (work_has_apps or publish_has_apps):
possible_apps = app_presets.get(task_type_name, [])
for app in possible_apps:
if app in cached_apps:
app_dir = cached_apps[app]
if app_manager_apps:
possible_apps = app_presets.get(task_type_name) or []
for app_name in possible_apps:
if app_name in cached_apps:
apps.append(cached_apps[app_name])
continue
app_def = app_manager_apps.get(app_name)
if app_def and app_def.is_host:
app_dir = app_def.host_name
else:
try:
app_data = avalonlib.get_application(app)
app_dir = app_data["application_dir"]
except ValueError:
app_dir = app
cached_apps[app] = app_dir
app_dir = app_name
cached_apps[app_name] = app_dir
apps.append(app_dir)
# Template work

View file

@ -2,7 +2,7 @@ import os
import re
from pype.modules.ftrack.lib import BaseAction, statics_icon
from pype.api import config, Anatomy
from pype.api import Anatomy, get_project_settings
class CreateProjectFolders(BaseAction):
@ -69,25 +69,26 @@ class CreateProjectFolders(BaseAction):
return True
def launch(self, session, entities, event):
entity = entities[0]
project = self.get_project_from_entity(entity)
project_folder_presets = (
config.get_presets()
.get("tools", {})
.get("project_folder_structure")
# Get project entity
project_entity = self.get_project_from_entity(entities[0])
# Load settings for project
project_name = project_entity["full_name"]
project_settings = get_project_settings(project_name)
project_folder_structure = (
project_settings["global"]["project_folder_structure"]
)
if not project_folder_presets:
if not project_folder_structure:
return {
"success": False,
"message": "Project structure presets are not set."
"message": "Project structure is not set."
}
try:
# Get paths based on presets
basic_paths = self.get_path_items(project_folder_presets)
anatomy = Anatomy(project["full_name"])
self.create_folders(basic_paths, entity, project, anatomy)
self.create_ftrack_entities(basic_paths, project)
basic_paths = self.get_path_items(project_folder_structure)
anatomy = Anatomy(project_entity["full_name"])
self.create_folders(basic_paths, project_entity, anatomy)
self.create_ftrack_entities(basic_paths, project_entity)
except Exception as exc:
session.rollback()
@ -219,7 +220,7 @@ class CreateProjectFolders(BaseAction):
output.append(os.path.normpath(os.path.sep.join(clean_items)))
return output
def create_folders(self, basic_paths, entity, project, anatomy):
def create_folders(self, basic_paths, project, anatomy):
roots_paths = []
if isinstance(anatomy.roots, dict):
for root in anatomy.roots:

View file

@ -396,6 +396,13 @@ class Delivery(BaseAction):
session.commit()
self.db_con.uninstall()
if job["status"] == "failed":
return {
"success": False,
"message": "Delivery failed. Check logs for more information."
}
return True
def real_launch(self, session, entities, event):
self.log.info("Delivery action just started.")
report_items = collections.defaultdict(list)

View file

@ -3,7 +3,6 @@ import subprocess
import traceback
import json
from pype.api import config
from pype.modules.ftrack.lib import BaseAction, statics_icon
import ftrack_api
from avalon import io, api
@ -11,7 +10,6 @@ from avalon import io, api
class RVAction(BaseAction):
""" Launch RV action """
ignore_me = "rv" not in config.get_presets()
identifier = "rv.launch.action"
label = "rv"
description = "rv Launcher"
@ -19,6 +17,8 @@ class RVAction(BaseAction):
type = 'Application'
allowed_types = ["img", "mov", "exr", "mp4"]
def __init__(self, session, plugins_presets):
""" Constructor
@ -26,36 +26,30 @@ class RVAction(BaseAction):
:type session: :class:`ftrack_api.Session`
"""
super().__init__(session, plugins_presets)
self.rv_path = None
self.config_data = None
# QUESTION load RV application data from AppplicationManager?
rv_path = None
# RV_HOME should be set if properly installed
if os.environ.get('RV_HOME'):
self.rv_path = os.path.join(
rv_path = os.path.join(
os.environ.get('RV_HOME'),
'bin',
'rv'
)
else:
# if not, fallback to config file location
if "rv" in config.get_presets():
self.config_data = config.get_presets()['rv']['config']
self.set_rv_path()
if not os.path.exists(rv_path):
rv_path = None
if self.rv_path is None:
return
if not rv_path:
self.log.info("RV path was not found.")
self.ignore_me = True
self.allowed_types = self.config_data.get(
'file_ext', ["img", "mov", "exr", "mp4"]
)
self.rv_path = rv_path
def discover(self, session, entities, event):
"""Return available actions based on *event*. """
return True
def set_rv_path(self):
self.rv_path = self.config_data.get("rv_path")
def preregister(self):
if self.rv_path is None:
return (

View file

@ -8,7 +8,7 @@ from avalon.api import AvalonMongoDB
from bson.objectid import ObjectId
from pype.api import config, Anatomy
from pype.api import Anatomy, get_project_settings
class UserAssigmentEvent(BaseEvent):
@ -173,26 +173,50 @@ class UserAssigmentEvent(BaseEvent):
return t_data
def launch(self, session, event):
# load shell scripts presets
presets = config.get_presets()['ftrack'].get("user_assigment_event")
if not presets:
if not event.get("data"):
return
for entity in event.get('data', {}).get('entities', []):
if entity.get('entity_type') != 'Appointment':
entities_info = event["data"].get("entities")
if not entities_info:
return
# load shell scripts presets
tmp_by_project_name = {}
for entity_info in entities_info:
if entity_info.get('entity_type') != 'Appointment':
continue
task, user = self._get_task_and_user(session,
entity.get('action'),
entity.get('changes'))
task_entity, user_entity = self._get_task_and_user(
session,
entity_info.get('action'),
entity_info.get('changes')
)
if not task or not user:
self.log.error(
'Task or User was not found.')
if not task_entity or not user_entity:
self.log.error("Task or User was not found.")
continue
data = self._get_template_data(task)
# format directories to pass to shell script
anatomy = Anatomy(data["project"]["name"])
project_name = task_entity["project"]["full_name"]
project_data = tmp_by_project_name.get(project_name) or {}
if "scripts_by_action" not in project_data:
project_settings = get_project_settings(project_name)
_settings = (
project_settings["ftrack"]["events"]["user_assignment"]
)
project_data["scripts_by_action"] = _settings.get("scripts")
tmp_by_project_name[project_name] = project_data
scripts_by_action = project_data["scripts_by_action"]
if not scripts_by_action:
continue
if "anatomy" not in project_data:
project_data["anatomy"] = Anatomy(project_name)
tmp_by_project_name[project_name] = project_data
anatomy = project_data["anatomy"]
data = self._get_template_data(task_entity)
anatomy_filled = anatomy.format(data)
# formatting work dir is easiest part as we can use whole path
work_dir = anatomy_filled["work"]["folder"]
@ -201,8 +225,10 @@ class UserAssigmentEvent(BaseEvent):
publish = anatomy_filled["publish"]["folder"]
# now find path to {asset}
m = re.search("(^.+?{})".format(data['asset']),
publish)
m = re.search(
"(^.+?{})".format(data["asset"]),
publish
)
if not m:
msg = 'Cannot get part of publish path {}'.format(publish)
@ -213,12 +239,13 @@ class UserAssigmentEvent(BaseEvent):
}
publish_dir = m.group(1)
for script in presets.get(entity.get('action')):
self.log.info(
'[{}] : running script for user {}'.format(
entity.get('action'), user["username"]))
self._run_script(script, [user["username"],
work_dir, publish_dir])
username = user_entity["username"]
event_entity_action = entity_info["action"]
for script in scripts_by_action.get(event_entity_action):
self.log.info((
"[{}] : running script for user {}"
).format(event_entity_action, username))
self._run_script(script, [username, work_dir, publish_dir])
return True
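The launch above caches project settings and Anatomy per project so entities from the same project do not trigger repeated lookups. A condensed sketch of that caching pattern (helper name is hypothetical; Anatomy and get_project_settings are the imports used above):

from pype.api import Anatomy, get_project_settings

# Hypothetical helper condensing the per-project cache used above.
def get_cached_project_data(cache, project_name):
    data = cache.setdefault(project_name, {})
    if "scripts_by_action" not in data:
        settings = get_project_settings(project_name)
        data["scripts_by_action"] = (
            settings["ftrack"]["events"]["user_assignment"].get("scripts")
        )
    if "anatomy" not in data:
        data["anatomy"] = Anatomy(project_name)
    return data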

View file

@ -1,12 +1,8 @@
from pype.modules.ftrack import BaseEvent
from pype.api import config
from pype.api import get_project_settings
class VersionToTaskStatus(BaseEvent):
# Presets usage
default_status_mapping = {}
def launch(self, session, event):
'''Propagates status from version to task when changed'''
@ -48,14 +44,19 @@ class VersionToTaskStatus(BaseEvent):
version_status_orig = version_status["name"]
# Get entities necessary for processing
version = session.get("AssetVersion", entity["entityId"])
task = version.get("task")
if not task:
continue
project_entity = self.get_project_from_entity(task)
project_name = project_entity["full_name"]
project_settings = get_project_settings(project_name)
# Load status mapping from presets
status_mapping = (
config.get_presets()
.get("ftrack", {})
.get("ftrack_config", {})
.get("status_version_to_task")
) or self.default_status_mapping
project_settings["ftrack"]["events"]["status_version_to_task"])
# Skip if mapping is empty
if not status_mapping:
continue
@ -78,16 +79,10 @@ class VersionToTaskStatus(BaseEvent):
# Lower all names from presets
new_status_names = [name.lower() for name in new_status_names]
# Get entities necessary for processing
version = session.get("AssetVersion", entity["entityId"])
task = version.get("task")
if not task:
continue
if version["asset"]["type"]["short"].lower() == "scene":
continue
project_schema = task["project"]["project_schema"]
project_schema = project_entity["project_schema"]
# Get all available statuses for Task
statuses = project_schema.get_statuses("Task", task["type_id"])
# map lowered status name with it's object

View file

@ -2,11 +2,13 @@ import os
import sys
import types
import importlib
import ftrack_api
import time
import logging
import inspect
from pype.api import Logger, config
import ftrack_api
from pype.api import Logger
log = Logger().get_logger(__name__)
@ -109,9 +111,8 @@ class FtrackServer:
key = "user"
if self.server_type.lower() == "event":
key = "server"
plugins_presets = config.get_presets().get(
"ftrack", {}
).get("plugins", {}).get(key, {})
# TODO replace with settings or get rid of passing the dictionary
plugins_presets = {}
function_counter = 0
for function_dict in register_functions_dict:

View file

@ -9,7 +9,7 @@ from pype.modules.ftrack.ftrack_server.lib import (
SocketSession, ProcessEventHub, TOPIC_STATUS_SERVER
)
import ftrack_api
from pype.api import Logger, config
from pype.api import Logger
log = Logger().get_logger("Event processor")
@ -56,32 +56,16 @@ def register(session):
def clockify_module_registration():
module_name = "Clockify"
menu_items = config.get_presets()["tray"]["menu_items"]
if not menu_items["item_usage"][module_name]:
return
api_key = os.environ.get("CLOCKIFY_API_KEY")
if not api_key:
log.warning("Clockify API key is not set.")
return
workspace_name = os.environ.get("CLOCKIFY_WORKSPACE")
if not workspace_name:
workspace_name = (
menu_items
.get("attributes", {})
.get(module_name, {})
.get("workspace_name", {})
)
if not workspace_name:
log.warning("Clockify Workspace is not set.")
return
os.environ["CLOCKIFY_WORKSPACE"] = workspace_name
from pype.modules.clockify.constants import CLOCKIFY_FTRACK_SERVER_PATH
current = os.environ.get("FTRACK_EVENTS_PATH") or ""

View file

@ -3,7 +3,6 @@ from . import credentials
from .ftrack_base_handler import BaseHandler
from .ftrack_event_handler import BaseEvent
from .ftrack_action_handler import BaseAction, ServerAction, statics_icon
from .ftrack_app_handler import AppAction
__all__ = (
"avalon_sync",
@ -12,6 +11,5 @@ __all__ = (
"BaseEvent",
"BaseAction",
"ServerAction",
"statics_icon",
"AppAction"
"statics_icon"
)

View file

@ -8,16 +8,13 @@ import copy
from avalon.api import AvalonMongoDB
import avalon
import avalon.api
from avalon.vendor import toml
from pype.api import Logger, Anatomy
from pype.api import Logger, Anatomy, get_anatomy_settings
from bson.objectid import ObjectId
from bson.errors import InvalidId
from pymongo import UpdateOne
import ftrack_api
from pype.api import config
from pype.lib import ApplicationManager
log = Logger().get_logger(__name__)
@ -175,40 +172,29 @@ def get_avalon_project_template(project_name):
def get_project_apps(in_app_list):
"""
Returns metadata information about apps in 'in_app_list' enhanced
from toml files.
""" Application definitions for app name.
Args:
in_app_list: (list) - names of applications
Returns:
tuple (list, dictionary) - list of dictionaries about apps
dictionary of warnings
tuple (list, dictionary) - list of dictionaries with apps definitions
dictionary of warnings
"""
apps = []
# TODO report
missing_toml_msg = "Missing config file for application"
error_msg = (
"Unexpected error happend during preparation of application"
)
warnings = collections.defaultdict(list)
for app in in_app_list:
try:
toml_path = avalon.lib.which_app(app)
if not toml_path:
log.warning(missing_toml_msg + ' "{}"'.format(app))
warnings[missing_toml_msg].append(app)
continue
missing_app_msg = "Missing definition of application"
application_manager = ApplicationManager()
for app_name in in_app_list:
app = application_manager.applications.get(app_name)
if app:
apps.append({
"name": app,
"label": toml.load(toml_path)["label"]
"name": app_name,
"label": app.full_label
})
except Exception:
warnings[error_msg].append(app)
log.warning((
"Error has happened during preparing application \"{}\""
).format(app), exc_info=True)
else:
warnings[missing_app_msg].append(app_name)
return apps, warnings
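With the ApplicationManager lookup, the result is a list of name/label dictionaries plus a warnings mapping for names without a definition; an illustrative call (application names and label value are hypothetical):

# Hypothetical result shape of get_project_apps.
apps, warnings = get_project_apps(["maya_2020", "unknown_app"])
# apps     -> [{"name": "maya_2020", "label": "Autodesk Maya 2020"}]
# warnings -> {"Missing definition of application": ["unknown_app"]}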
@ -289,28 +275,6 @@ def get_hierarchical_attributes(session, entity, attr_names, attr_defaults={}):
return hier_values
def get_task_short_name(task_type):
"""
Returns short name (code) for 'task_type'. Short names are stored in
the metadata dictionary in project.config for each 'task_type'.
Could be used in anatomy, paths etc.
If no appropriate short name is found in the mapping, 'task_type' is
returned unchanged.
Currently stores data in:
'pype-config/presets/ftrack/project_defaults.json'
Args:
task_type: (string) - Animation | Modeling ...
Returns:
(string) - anim | model ...
"""
presets = config.get_presets()['ftrack']['project_defaults']\
.get("task_short_names")
return presets.get(task_type, task_type)
class SyncEntitiesFactory:
dbcon = AvalonMongoDB()
@ -1131,6 +1095,13 @@ class SyncEntitiesFactory:
)
def prepare_ftrack_ent_data(self):
project_name = self.entities_dict[self.ft_project_id]["name"]
project_anatomy_data = get_anatomy_settings(project_name)
task_type_mapping = (
project_anatomy_data["attributes"]["task_short_names"]
)
not_set_ids = []
for id, entity_dict in self.entities_dict.items():
entity = entity_dict["entity"]
@ -1167,10 +1138,12 @@ class SyncEntitiesFactory:
continue
self.report_items["warning"][msg] = items
tasks = {}
for tt in task_types:
tasks[tt["name"]] = {
"short_name": get_task_short_name(tt["name"])
}
for task_type in task_types:
task_type_name = task_type["name"]
short_name = task_type_mapping.get(task_type_name)
tasks[task_type_name] = {
"short_name": short_name or task_type_name
}
self.entities_dict[id]["final_entity"]["config"] = {
"tasks": tasks,
"apps": proj_apps

View file

@ -1,227 +0,0 @@
from pype import lib as pypelib
from pype.api import config
from .ftrack_action_handler import BaseAction
class AppAction(BaseAction):
"""Application Action class.
Args:
session (ftrack_api.Session): Session where action will be registered.
label (str): A descriptive string identifying your action.
variant (str, optional): To group actions together, give them the same
label and specify a unique variant per action.
identifier (str): An unique identifier for app.
description (str): A verbose descriptive text for you action.
icon (str): Url path to icon which will be shown in Ftrack web.
"""
type = "Application"
preactions = ["start.timer"]
def __init__(
self, session, dbcon, label, name, executable, variant=None,
icon=None, description=None, preactions=[], plugins_presets={}
):
self.label = label
self.identifier = name
self.executable = executable
self.variant = variant
self.icon = icon
self.description = description
self.preactions.extend(preactions)
self.dbcon = dbcon
super().__init__(session, plugins_presets)
if label is None:
raise ValueError("Action missing label.")
if name is None:
raise ValueError("Action missing identifier.")
if executable is None:
raise ValueError("Action missing executable.")
def register(self):
"""Registers the action, subscribing the discover and launch topics."""
discovery_subscription = (
"topic=ftrack.action.discover and source.user.username={0}"
).format(self.session.api_user)
self.session.event_hub.subscribe(
discovery_subscription,
self._discover,
priority=self.priority
)
launch_subscription = (
"topic=ftrack.action.launch"
" and data.actionIdentifier={0}"
" and source.user.username={1}"
).format(
self.identifier,
self.session.api_user
)
self.session.event_hub.subscribe(
launch_subscription,
self._launch
)
def discover(self, session, entities, event):
"""Return true if we can handle the selected entities.
Args:
session (ftrack_api.Session): Helps to query necessary data.
entities (list): Object of selected entities.
event (ftrack_api.Event): Ftrack event causing discover callback.
"""
if (
len(entities) != 1
or entities[0].entity_type.lower() != "task"
):
return False
entity = entities[0]
if entity["parent"].entity_type.lower() == "project":
return False
avalon_project_apps = event["data"].get("avalon_project_apps", None)
avalon_project_doc = event["data"].get("avalon_project_doc", None)
if avalon_project_apps is None:
if avalon_project_doc is None:
ft_project = self.get_project_from_entity(entity)
project_name = ft_project["full_name"]
self.dbcon.install()
database = self.dbcon.database
avalon_project_doc = database[project_name].find_one({
"type": "project"
}) or False
event["data"]["avalon_project_doc"] = avalon_project_doc
if not avalon_project_doc:
return False
project_apps_config = avalon_project_doc["config"].get("apps", [])
avalon_project_apps = [
app["name"] for app in project_apps_config
] or False
event["data"]["avalon_project_apps"] = avalon_project_apps
if not avalon_project_apps:
return False
return self.identifier in avalon_project_apps
def _launch(self, event):
entities = self._translate_event(event)
preactions_launched = self._handle_preactions(
self.session, event
)
if preactions_launched is False:
return
response = self.launch(self.session, entities, event)
return self._handle_result(response)
def launch(self, session, entities, event):
"""Callback method for the custom action.
return either a bool (True if successful or False if the action failed)
or a dictionary with the keys `message` and `success`; the message
should be a string and will be displayed as feedback to the user,
success should be a bool, True if successful or False if the action
failed.
*session* is a `ftrack_api.Session` instance
*entities* is a list of tuples each containing the entity type and
the entity id. If the entity is a hierarchical you will always get
the entity type TypedContext, once retrieved through a get operation
you will have the "real" entity type ie. example Shot, Sequence
or Asset Build.
*event* the unmodified original event
"""
entity = entities[0]
task_name = entity["name"]
asset_name = entity["parent"]["name"]
project_name = entity["project"]["full_name"]
try:
pypelib.launch_application(
project_name, asset_name, task_name, self.identifier
)
except pypelib.ApplicationLaunchFailed as exc:
self.log.error(str(exc))
return {
"success": False,
"message": str(exc)
}
except Exception:
msg = "Unexpected failure of application launch {}".format(
self.label
)
self.log.error(msg, exc_info=True)
return {
"success": False,
"message": msg
}
# Change status of task to In progress
presets = config.get_presets()["ftrack"]["ftrack_config"]
if "status_update" in presets:
statuses = presets["status_update"]
actual_status = entity["status"]["name"].lower()
already_tested = []
ent_path = "/".join(
[ent["name"] for ent in entity["link"]]
)
while True:
next_status_name = None
for key, value in statuses.items():
if key in already_tested:
continue
if actual_status in value or "_any_" in value:
if key != "_ignore_":
next_status_name = key
already_tested.append(key)
break
already_tested.append(key)
if next_status_name is None:
break
try:
query = "Status where name is \"{}\"".format(
next_status_name
)
status = session.query(query).one()
entity["status"] = status
session.commit()
self.log.debug("Changing status to \"{}\" <{}>".format(
next_status_name, ent_path
))
break
except Exception:
session.rollback()
msg = (
"Status \"{}\" in presets wasn't found"
" on Ftrack entity type \"{}\""
).format(next_status_name, entity.entity_type)
self.log.warning(msg)
return {
"success": True,
"message": "Launching {0}".format(self.label)
}

View file

@ -252,6 +252,9 @@ class AfterEffectsServerStub():
Args:
item_id (int):
Returns:
(namedtuple)
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_work_area',
@ -305,6 +308,35 @@ class AfterEffectsServerStub():
image_path=project_path,
as_copy=as_copy))
def get_render_info(self):
""" Get render queue info for render purposes
Returns:
(namedtuple): with 'file_name' field
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_render_info'))
records = self._to_records(res)
if records:
return records.pop()
log.debug("Render queue needs to have file extension in 'Output to'")
def get_audio_url(self, item_id):
""" Get audio layer absolute url for comp
Args:
item_id (int): composition id
Returns:
(str): absolute path url
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_audio_url',
item_id=item_id))
return res
def close(self):
self.client.close()

View file

@ -3,7 +3,7 @@ import inspect
import pype.modules
from pype.modules import PypeModule
from pype.settings import system_settings
from pype.settings import get_system_settings
from pype.api import Logger
@ -24,7 +24,7 @@ class PypeModuleManager:
return environments
def find_pype_modules(self):
settings = system_settings()
settings = get_system_settings()
modules = []
dirpath = os.path.dirname(pype.modules.__file__)
for module_name in os.listdir(dirpath):

View file

@ -2,7 +2,7 @@ import tempfile
import os
import pyblish.api
from pype.api import config
from pype.api import get_project_settings
import inspect
ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05
@ -24,12 +24,14 @@ def imprint_attributes(plugin):
plugin_host = file.split(os.path.sep)[-3:-2][0]
plugin_name = type(plugin).__name__
try:
config_data = config.get_presets()['plugins'][plugin_host][plugin_kind][plugin_name] # noqa: E501
settings = get_project_settings(os.environ['AVALON_PROJECT'])
settings_data = settings[plugin_host][plugin_kind][plugin_name] # noqa: E501
print(settings_data)
except KeyError:
print("preset not found")
return
for option, value in config_data.items():
for option, value in settings_data.items():
if option == "enabled" and value is False:
setattr(plugin, "active", False)
else:

View file

@ -0,0 +1,27 @@
import os
import pyblish.api
from avalon import aftereffects
class CollectAudio(pyblish.api.ContextPlugin):
"""Inject audio file url for rendered composition into context.
Needs to run AFTER 'collect_render'. Use collected comp_id to check
if there is an AVLayer in this composition
"""
order = pyblish.api.CollectorOrder + 0.499
label = "Collect Audio"
hosts = ["aftereffects"]
def process(self, context):
for instance in context:
if instance.data["family"] == 'render.farm':
comp_id = instance.data["comp_id"]
if not comp_id:
self.log.debug("No comp_id filled in instance")
return
context.data["audioFile"] = os.path.normpath(
aftereffects.stub().get_audio_url(comp_id)
).replace("\\", "/")

View file

@ -11,6 +11,7 @@ from avalon import aftereffects
class AERenderInstance(RenderInstance):
# extend generic, composition name is needed
comp_name = attr.ib(default=None)
comp_id = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
@ -39,13 +40,11 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
continue
work_area_info = aftereffects.stub().get_work_area(int(item_id))
frameStart = round(float(work_area_info.workAreaStart) *
float(work_area_info.frameRate))
frameStart = work_area_info.workAreaStart
frameEnd = round(float(work_area_info.workAreaStart) *
float(work_area_info.frameRate) +
frameEnd = round(work_area_info.workAreaStart +
float(work_area_info.workAreaDuration) *
float(work_area_info.frameRate))
float(work_area_info.frameRate)) - 1
if inst["family"] == "render" and inst["active"]:
instance = AERenderInstance(
@ -83,6 +82,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
raise ValueError("There is no composition for item {}".
format(item_id))
instance.comp_name = comp.name
instance.comp_id = item_id
instance._anatomy = context.data["anatomy"]
instance.anatomyData = context.data["anatomyData"]
@ -108,18 +108,29 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
start = render_instance.frameStart
end = render_instance.frameEnd
# pull file name from Render Queue Output module
render_q = aftereffects.stub().get_render_info()
_, ext = os.path.splitext(os.path.basename(render_q.file_name))
base_dir = self._get_output_dir(render_instance)
expected_files = []
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
str(frame).zfill(self.padding_width),
self.rendered_extension
))
if "#" not in render_q.file_name: # single frame (mov)W
path = os.path.join(base_dir, "{}_{}_{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
ext.replace('.', '')
))
expected_files.append(path)
else:
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
str(frame).zfill(self.padding_width),
ext.replace('.', '')
))
expected_files.append(path)
return expected_files
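The branch above distinguishes a single-file output (no "#" padding in the Render Queue "Output to" name, e.g. a .mov) from an image sequence. A minimal sketch of the resulting paths, with a hypothetical helper and sample values:

import os

# Hypothetical sketch of the two naming branches above.
def sketch_expected_files(base_dir, asset, subset, version, ext,
                          start, end, has_frame_token, padding=6):
    if not has_frame_token:
        # Single-file output such as a .mov.
        return [os.path.join(base_dir, f"{asset}_{subset}_v{version:03d}.{ext}")]
    return [
        os.path.join(
            base_dir,
            f"{asset}_{subset}_v{version:03d}.{str(frame).zfill(padding)}.{ext}"
        )
        for frame in range(start, end + 1)
    ]

# sketch_expected_files("/out", "sh010", "renderMain", 1, "png", 1, 2, True)
# -> ["/out/sh010_renderMain_v001.000001.png", "/out/sh010_renderMain_v001.000002.png"]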
def _get_output_dir(self, render_instance):

View file

@ -0,0 +1,14 @@
import pype.api
from avalon import aftereffects
class ExtractSaveScene(pype.api.Extractor):
"""Save scene before extraction."""
order = pype.api.Extractor.order - 0.49
label = "Extract Save Scene"
hosts = ["aftereffects"]
families = ["workfile"]
def process(self, instance):
aftereffects.stub().save()

View file

@ -18,15 +18,18 @@ class DeadlinePluginInfo():
ProjectPath = attr.ib(default=None)
AWSAssetFile0 = attr.ib(default=None)
Version = attr.ib(default=None)
MultiProcess = attr.ib(default=None)
class AfterEffectsSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
label = "Submit AE to Deadline"
order = pyblish.api.IntegratorOrder
order = pyblish.api.IntegratorOrder + 0.1
hosts = ["aftereffects"]
families = ["render.farm"] # cannot be "render' as that is integrated
use_published = False
use_published = True
chunk_size = 1000000
def get_job_info(self):
dln_job_info = DeadlineJobInfo(Plugin="AfterEffects")
@ -39,9 +42,12 @@ class AfterEffectsSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline
dln_job_info.Plugin = "AfterEffects"
dln_job_info.UserName = context.data.get(
"deadlineUser", getpass.getuser())
frame_range = "{}-{}".format(self._instance.data["frameStart"],
self._instance.data["frameEnd"])
dln_job_info.Frames = frame_range
if self._instance.data["frameEnd"] > self._instance.data["frameStart"]:
frame_range = "{}-{}".format(self._instance.data["frameStart"],
self._instance.data["frameEnd"])
dln_job_info.Frames = frame_range
dln_job_info.ChunkSize = self.chunk_size
dln_job_info.OutputFilename = \
os.path.basename(self._instance.data["expectedFiles"][0])
dln_job_info.OutputDirectory = \
@ -77,21 +83,25 @@ class AfterEffectsSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline
script_path = context.data["currentFile"]
render_path = self._instance.data["expectedFiles"][0]
# replace frame info ('000001') with Deadline's required '[#######]'
# expects filename in format project_asset_subset_version.FRAME.ext
render_dir = os.path.dirname(render_path)
file_name = os.path.basename(render_path)
arr = file_name.split('.')
assert len(arr) == 3, \
"Unable to parse frames from {}".format(file_name)
hashed = '[{}]'.format(len(arr[1]) * "#")
render_path = os.path.join(render_dir,
'{}.{}.{}'.format(arr[0], hashed, arr[2]))
if len(self._instance.data["expectedFiles"]) > 1:
# replace frame ('000001') with Deadline's required '[#######]'
# expects filename in format project_asset_subset_version.FRAME.ext
render_dir = os.path.dirname(render_path)
file_name = os.path.basename(render_path)
arr = file_name.split('.')
assert len(arr) == 3, \
"Unable to parse frames from {}".format(file_name)
hashed = '[{}]'.format(len(arr[1]) * "#")
render_path = os.path.join(render_dir,
'{}.{}.{}'.format(arr[0], hashed,
arr[2]))
deadline_plugin_info.MultiProcess = True
deadline_plugin_info.Comp = self._instance.data["comp_name"]
deadline_plugin_info.Version = "17.5"
deadline_plugin_info.SceneFile = script_path
deadline_plugin_info.SceneFile = self.scene_path
deadline_plugin_info.Output = render_path.replace("\\", "/")
return attr.asdict(deadline_plugin_info)
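For multi-frame renders the output path handed to Deadline replaces the frame token with a hash placeholder of the same width, as done above. A small sketch of that conversion, assuming the project_asset_subset_version.FRAME.ext naming (helper name is hypothetical):

import os

# Hypothetical helper reproducing the frame-token replacement above.
def to_deadline_output_path(render_path):
    render_dir, file_name = os.path.split(render_path)
    name, frame, ext = file_name.split(".")
    hashed = "[{}]".format(len(frame) * "#")
    return os.path.join(render_dir, "{}.{}.{}".format(name, hashed, ext))

# to_deadline_output_path("/out/sh010_renderMain_v001.000001.png")
# -> "/out/sh010_renderMain_v001.[######].png"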

View file

@ -96,6 +96,7 @@ class CollectFtrackApi(pyblish.api.ContextPlugin):
task_entity = None
self.log.warning("Task name is not set.")
context.data["ftrackPythonModule"] = ftrack_api
context.data["ftrackProject"] = project_entity
context.data["ftrackEntity"] = asset_entity
context.data["ftrackTask"] = task_entity

View file

@ -36,6 +36,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
order = pyblish.api.IntegratorOrder - 0.04
label = 'Integrate Hierarchy To Ftrack'
families = ["shot"]
hosts = ["hiero"]
optional = False
def process(self, context):

View file

@ -0,0 +1,331 @@
import sys
import six
import collections
import pyblish.api
from avalon import io
from pype.modules.ftrack.lib.avalon_sync import (
CUST_ATTR_AUTO_SYNC,
get_pype_attr
)
class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
"""
Create entities in ftrack based on collected data from premiere
Example of entry data:
{
"ProjectXS": {
"entity_type": "Project",
"custom_attributes": {
"fps": 24,...
},
"tasks": [
"Compositing",
"Lighting",... *task must exist as task type in project schema*
],
"childs": {
"sq01": {
"entity_type": "Sequence",
...
}
}
}
}
"""
order = pyblish.api.IntegratorOrder - 0.04
label = 'Integrate Hierarchy To Ftrack'
families = ["shot"]
hosts = ["standalonepublisher"]
optional = False
def process(self, context):
self.context = context
if "hierarchyContext" not in self.context.data:
return
hierarchy_context = self.context.data["hierarchyContext"]
self.session = self.context.data["ftrackSession"]
project_name = self.context.data["projectEntity"]["name"]
query = 'Project where full_name is "{}"'.format(project_name)
project = self.session.query(query).one()
auto_sync_state = project[
"custom_attributes"][CUST_ATTR_AUTO_SYNC]
if not io.Session:
io.install()
self.ft_project = None
input_data = hierarchy_context
# temporarily disable the ftrack project's auto sync
if auto_sync_state:
self.auto_sync_off(project)
try:
# import ftrack hierarchy
self.import_to_ftrack(input_data)
except Exception:
raise
finally:
if auto_sync_state:
self.auto_sync_on(project)
def import_to_ftrack(self, input_data, parent=None):
# Pre-query hierarchical custom attributes
hier_custom_attributes = get_pype_attr(self.session)[1]
hier_attr_by_key = {
attr["key"]: attr
for attr in hier_custom_attributes
}
# Get ftrack api module (as they are different per python version)
ftrack_api = self.context.data["ftrackPythonModule"]
for entity_name in input_data:
entity_data = input_data[entity_name]
entity_type = entity_data['entity_type']
self.log.debug(entity_data)
self.log.debug(entity_type)
if entity_type.lower() == 'project':
query = 'Project where full_name is "{}"'.format(entity_name)
entity = self.session.query(query).one()
self.ft_project = entity
self.task_types = self.get_all_task_types(entity)
elif self.ft_project is None or parent is None:
raise AssertionError(
"Collected items are not in right order!"
)
# try to find if entity already exists
else:
query = (
'TypedContext where name is "{0}" and '
'project_id is "{1}"'
).format(entity_name, self.ft_project["id"])
try:
entity = self.session.query(query).one()
except Exception:
entity = None
# Create entity if not exists
if entity is None:
entity = self.create_entity(
name=entity_name,
type=entity_type,
parent=parent
)
# self.log.info('entity: {}'.format(dict(entity)))
# CUSTOM ATTRIBUTES
custom_attributes = entity_data.get('custom_attributes', [])
instances = [
i for i in self.context if i.data['asset'] in entity['name']
]
for key in custom_attributes:
hier_attr = hier_attr_by_key.get(key)
# Use simple method if key is not hierarchical
if not hier_attr:
assert (key in entity['custom_attributes']), (
'Missing custom attribute key: `{0}` in attrs: '
'`{1}`'.format(key, entity['custom_attributes'].keys())
)
entity['custom_attributes'][key] = custom_attributes[key]
else:
# Use ftrack operations method to set hierarchical
# attribute value.
# - this is because there may be non-hierarchical custom
#   attributes with different properties
entity_key = collections.OrderedDict({
"configuration_id": hier_attr["id"],
"entity_id": entity["id"]
})
self.session.recorded_operations.push(
ftrack_api.operation.UpdateEntityOperation(
"ContextCustomAttributeValue",
entity_key,
"value",
ftrack_api.symbol.NOT_SET,
custom_attributes[key]
)
)
for instance in instances:
instance.data['ftrackEntity'] = entity
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# TASKS
tasks = entity_data.get('tasks', [])
existing_tasks = []
tasks_to_create = []
for child in entity['children']:
if child.entity_type.lower() == 'task':
existing_tasks.append(child['name'].lower())
# existing_tasks.append(child['type']['name'])
for task_name in tasks:
task_type = tasks[task_name]["type"]
if task_name.lower() in existing_tasks:
print("Task {} already exists".format(task_name))
continue
tasks_to_create.append((task_name, task_type))
for task_name, task_type in tasks_to_create:
self.create_task(
name=task_name,
task_type=task_type,
parent=entity
)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Incoming links.
self.create_links(entity_data, entity)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Create notes.
user = self.session.query(
"User where username is \"{}\"".format(self.session.api_user)
).first()
if user:
for comment in entity_data.get("comments", []):
entity.create_note(comment, user)
else:
self.log.warning(
"Was not able to query current User {}".format(
self.session.api_user
)
)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Import children.
if 'childs' in entity_data:
self.import_to_ftrack(
entity_data['childs'], entity)
def create_links(self, entity_data, entity):
# Clear existing links.
for link in entity.get("incoming_links", []):
self.session.delete(link)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Create new links.
for input in entity_data.get("inputs", []):
input_id = io.find_one({"_id": input})["data"]["ftrackId"]
assetbuild = self.session.get("AssetBuild", input_id)
self.log.debug(
"Creating link from {0} to {1}".format(
assetbuild["name"], entity["name"]
)
)
self.session.create(
"TypedContextLink", {"from": assetbuild, "to": entity}
)
def get_all_task_types(self, project):
tasks = {}
proj_template = project['project_schema']
temp_task_types = proj_template['_task_type_schema']['types']
for type in temp_task_types:
if type['name'] not in tasks:
tasks[type['name']] = type
return tasks
def create_task(self, name, task_type, parent):
task = self.session.create('Task', {
'name': name,
'parent': parent
})
# TODO not secured!!! - check if task_type exists
self.log.info(task_type)
self.log.info(self.task_types)
task['type'] = self.task_types[task_type]
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
return task
def create_entity(self, name, type, parent):
entity = self.session.create(type, {
'name': name,
'parent': parent
})
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
return entity
def auto_sync_off(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = False
self.log.info("Ftrack autosync swithed off")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
def auto_sync_on(self, project):
project["custom_attributes"][CUST_ATTR_AUTO_SYNC] = True
self.log.info("Ftrack autosync swithed on")
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
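The commit/rollback/reraise block above is repeated after every batch of session changes; a minimal sketch of the same pattern factored into a helper (commit_or_rollback is a hypothetical name, not part of this plugin):

    import sys

    import six

    def commit_or_rollback(session):
        """Commit pending ftrack operations; roll back and re-raise on failure."""
        try:
            session.commit()
        except Exception:
            tp, value, tb = sys.exc_info()
            session.rollback()
            session._configure_locations()
            six.reraise(tp, value, tb)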

View file

@ -8,7 +8,7 @@ Provides:
"""
from pyblish import api
from pype.api import config
from pype.api import get_current_project_settings
class CollectPresets(api.ContextPlugin):
@ -18,23 +18,7 @@ class CollectPresets(api.ContextPlugin):
label = "Collect Presets"
def process(self, context):
presets = config.get_presets()
try:
# try if it is not in projects custom directory
# `{PYPE_PROJECT_CONFIGS}/[PROJECT_NAME]/init.json`
# init.json define preset names to be used
p_init = presets["init"]
presets["colorspace"] = presets["colorspace"][p_init["colorspace"]]
presets["dataflow"] = presets["dataflow"][p_init["dataflow"]]
except KeyError:
self.log.warning("No projects custom preset available...")
presets["colorspace"] = presets["colorspace"]["default"]
presets["dataflow"] = presets["dataflow"]["default"]
self.log.info(
"Presets `colorspace` and `dataflow` loaded from `default`..."
)
project_settings = get_current_project_settings()
context.data["presets"] = project_settings
context.data["presets"] = presets
# self.log.info(context.data["presets"])
return
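A minimal sketch of how a downstream plugin might read the collected settings from context.data["presets"]; the host/publish nesting below is an example layout, not a guaranteed schema:

    def get_host_publish_settings(context, host_name):
        # "presets" is filled by CollectPresets with get_current_project_settings();
        # the nesting used here is illustrative only.
        project_settings = context.data.get("presets", {})
        return project_settings.get(host_name, {}).get("publish", {})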

View file

@ -280,6 +280,15 @@ class ExtractReview(pyblish.api.InstancePlugin):
handles_are_set = handle_start > 0 or handle_end > 0
with_audio = True
if (
# Check if output definition has the `no-audio` tag
"no-audio" in output_def["tags"]
# Check if instance has any audio in its data
or not instance.data.get("audio")
):
with_audio = False
return {
"fps": float(instance.data["fps"]),
"frame_start": frame_start,
@ -295,6 +304,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
"resolution_height": instance.data.get("resolutionHeight"),
"origin_repre": repre,
"input_is_sequence": self.input_is_sequence(repre),
"with_audio": with_audio,
"without_handles": without_handles,
"handles_are_set": handles_are_set
}
@ -389,7 +399,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
ffmpeg_output_args.append("-shortest")
# Add audio arguments if there are any. Skipped when output are images.
if not temp_data["output_ext_is_image"]:
if not temp_data["output_ext_is_image"] and temp_data["with_audio"]:
audio_in_args, audio_filters, audio_out_args = self.audio_args(
instance, temp_data
)
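A small sketch of how the new with_audio flag gates the audio arguments; the helper below is illustrative, not part of the plugin:

    def gather_audio_args(temp_data, audio_in_args, audio_out_args):
        """Include audio arguments only for non-image outputs that have audio.

        Mirrors the gating above; names and shapes are illustrative.
        """
        in_args = []
        out_args = []
        if not temp_data["output_ext_is_image"] and temp_data["with_audio"]:
            in_args.extend(audio_in_args)
            out_args.extend(audio_out_args)
        return in_args, out_args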

View file

@ -600,6 +600,13 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"files": os.path.basename(remainder),
"stagingDir": os.path.dirname(remainder),
}
if "render" in instance.get("families"):
rep.update({
"fps": instance.get("fps"),
"tags": ["review"]
})
self._solve_families(instance, True)
if remainder in bake_render_path:
rep.update({
"fps": instance.get("fps"),

View file

@ -23,6 +23,8 @@ class ExtractReviewCutUp(pype.api.Extractor):
def process(self, instance):
inst_data = instance.data
asset = inst_data['asset']
item = inst_data['item']
event_number = int(item.eventNumber())
# get representation and loop them
representations = inst_data["representations"]
@ -97,7 +99,12 @@ class ExtractReviewCutUp(pype.api.Extractor):
index = 0
for image in collection:
dst_file_num = frame_start + index
dst_file_name = head + str(padding % dst_file_num) + tail
dst_file_name = "".join([
str(event_number),
head,
str(padding % dst_file_num),
tail
])
src = os.path.join(staging_dir, image)
dst = os.path.join(full_output_dir, dst_file_name)
self.log.info("Creating temp hardlinks: {}".format(dst))

View file

@ -26,10 +26,10 @@ class CreateRenderSetup(avalon.maya.Creator):
# \__| |
# \_____/
# from pype.api import config
# from pype.api import get_project_settings
# import maya.app.renderSetup.model.renderSetup as renderSetup
# presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
# layer = presets['plugins']['maya']['create']['renderSetup']["layer"]
# settings = get_project_settings(os.environ['AVALON_PROJECT'])
# layer = settings['maya']['create']['renderSetup']["layer"]
# rs = renderSetup.instance()
# rs.createRenderLayer(layer)

View file

@ -1,7 +1,7 @@
from avalon import api
import pype.hosts.maya.plugin
import os
from pype.api import config
from pype.api import get_project_settings
import clique
@ -74,8 +74,8 @@ class AssProxyLoader(pype.hosts.maya.plugin.ReferenceLoader):
proxyShape.dso.set(path)
proxyShape.aiOverrideShaders.set(0)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
@ -196,8 +196,8 @@ class AssStandinLoader(api.Loader):
label = "{}:{}".format(namespace, name)
root = pm.group(name=label, empty=True)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get('ass')
if c is not None:

View file

@ -1,7 +1,7 @@
from avalon import api
import pype.hosts.maya.plugin
import os
from pype.api import config
from pype.api import get_project_settings
reload(config)
@ -35,8 +35,8 @@ class GpuCacheLoader(api.Loader):
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get('model')
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
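The same presets-to-settings swap repeats across the Maya loaders in this diff; a hedged sketch of that shared pattern as a standalone helper (apply_outliner_color is a hypothetical name, and the (r, g, b) color format is an assumption):

    import os

    from maya import cmds
    from pype.api import get_project_settings

    def apply_outliner_color(root, family):
        # Assumes colors are stored as an (r, g, b) sequence under
        # maya/load/colors in the project settings.
        settings = get_project_settings(os.environ["AVALON_PROJECT"])
        colors = settings["maya"]["load"]["colors"]
        c = colors.get(family)
        if c is not None:
            cmds.setAttr(root + ".useOutlinerColor", 1)
            cmds.setAttr(root + ".outlinerColor", c[0], c[1], c[2])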

View file

@ -2,7 +2,7 @@ import pype.hosts.maya.plugin
from avalon import api, maya
from maya import cmds
import os
from pype.api import config
from pype.api import get_project_settings
class ReferenceLoader(pype.hosts.maya.plugin.ReferenceLoader):
@ -77,8 +77,8 @@ class ReferenceLoader(pype.hosts.maya.plugin.ReferenceLoader):
cmds.setAttr(groupName + ".displayHandle", 1)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
groupNode.useOutlinerColor.set(1)

View file

@ -1,6 +1,6 @@
from avalon import api
import os
from pype.api import config
from pype.api import get_project_settings
class LoadVDBtoRedShift(api.Loader):
"""Load OpenVDB in a Redshift Volume Shape"""
@ -55,8 +55,8 @@ class LoadVDBtoRedShift(api.Loader):
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:

View file

@ -1,5 +1,5 @@
from avalon import api
from pype.api import config
from pype.api import get_project_settings
import os
@ -48,8 +48,8 @@ class LoadVDBtoVRay(api.Loader):
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:

View file

@ -1,6 +1,6 @@
from avalon.maya import lib
from avalon import api
from pype.api import config
from pype.api import get_project_settings
import os
import maya.cmds as cmds
@ -47,8 +47,8 @@ class VRayProxyLoader(api.Loader):
return
# colour the group node
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)

View file

@ -9,7 +9,7 @@ from maya import cmds
from avalon import api, io
from avalon.maya import lib as avalon_lib, pipeline
from pype.hosts.maya import lib
from pype.api import config
from pype.api import get_project_settings
from pprint import pprint
@ -59,8 +59,8 @@ class YetiCacheLoader(api.Loader):
group_name = "{}:{}".format(namespace, name)
group_node = cmds.group(nodes, name=group_name)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:

View file

@ -1,7 +1,7 @@
import os
from collections import defaultdict
from pype.api import config
from pype.api import get_project_settings
import pype.hosts.maya.plugin
from pype.hosts.maya import lib
@ -77,8 +77,8 @@ class YetiRigLoader(pype.hosts.maya.plugin.ReferenceLoader):
groupName = "{}:{}".format(namespace, name)
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get('yetiRig')
if c is not None:

View file

@ -102,7 +102,8 @@ class ExtractCameraMayaScene(pype.api.Extractor):
def process(self, instance):
"""Plugin entry point."""
# get settings
ext_mapping = instance.context.data["presets"]["maya"].get("ext_mapping") # noqa: E501
ext_mapping = (instance.context.data["presets"]["maya"]
.get("ext_mapping")) # noqa: E501
if ext_mapping:
self.log.info("Looking in presets for scene type ...")
# use extension mapping for first family found
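For illustration, a minimal sketch of resolving a scene type from an ext_mapping preset for the first matching family; the mapping contents and family list below are made up:

    ext_mapping = {"camera": "ma", "review": "mb"}
    families = ["camera", "review"]

    scene_type = None
    for family in families:
        if family in ext_mapping:
            scene_type = ext_mapping[family]
            break

    print(scene_type)  # -> "ma"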

View file

@ -172,10 +172,11 @@ class ExtractLook(pype.api.Extractor):
cspace = files_metadata[filepath]["color_space"]
linearise = False
if cspace == "sRGB":
linearise = True
# set its file node to 'raw' as tx will be linearized
files_metadata[filepath]["color_space"] = "raw"
if do_maketx:
if cspace == "sRGB":
linearise = True
# set its file node to 'raw' as tx will be linearized
files_metadata[filepath]["color_space"] = "raw"
source, mode, hash = self._process_texture(
filepath,
@ -350,7 +351,7 @@ class ExtractLook(pype.api.Extractor):
if existing and not force:
self.log.info("Found hash in database, preparing hardlink..")
source = next((p for p in existing if os.path.exists(p)), None)
if filepath:
if source:
return source, HARDLINK, texture_hash
else:
self.log.warning(

View file

@ -11,7 +11,7 @@ from avalon.vendor import requests
import pyblish.api
from pype.hosts.maya import lib
from pype.api import config
from pype.api import get_system_settings
# mapping between Maya renderer names and Muster template ids
@ -25,10 +25,10 @@ def _get_template_id(renderer):
:rtype: int
"""
templates = config.get_presets()["muster"]["templates_mapping"]
templates = get_system_settings()["modules"]["Muster"]["templates_mapping"]
if not templates:
raise RuntimeError(("Muster template mapping missing in pype-config "
"`presets/muster/templates_mapping.json`"))
raise RuntimeError(("Muster template mapping missing in "
"pype-settings"))
try:
template_id = templates[renderer]
except KeyError:

View file

@ -4,7 +4,7 @@ import contextlib
from avalon import api, io
from pype.hosts.nuke import presets
from pype.api import config
from pype.api import get_project_settings
@contextlib.contextmanager
@ -73,7 +73,8 @@ def add_review_presets_config():
"families": list(),
"representations": list()
}
review_presets = config.get_presets()["plugins"]["global"]["publish"].get(
settings = get_project_settings(io.Session["AVALON_PROJECT"])
review_presets = settings["global"]["publish"].get(
"ExtractReview", {})
outputs = review_presets.get("outputs", {})

View file

@ -162,6 +162,9 @@ class LoadImage(pipeline.Loader):
"""
# Create new containers first
context = get_representation_context(representation)
# Change `fname` to new representation
self.fname = self.filepath_from_context(context)
name = container["name"]
namespace = container["namespace"]
new_container = self.load(context, name, namespace, {})

View file

@ -5,7 +5,7 @@ import subprocess
import platform
import json
import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins
from pype.api import config, resources
from pype.api import resources
import pype.lib
@ -428,12 +428,6 @@ def burnins_from_data(
}
"""
# Use legacy processing when options are not set
if options is None or burnin_values is None:
presets = config.get_presets().get("tools", {}).get("burnins", {})
options = presets.get("options")
burnin_values = presets.get("burnins") or {}
burnin = ModifiedBurnins(input_path, options_init=options)
frame_start = data.get("frame_start")
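With the legacy presets fallback removed above, callers are now expected to pass options and burnin values explicitly; the dictionaries below are example values taken from the ExtractBurnin defaults later in this diff, not a complete schema:

    options = {
        "font_size": 42,
        "opacity": 1,
        "bg_opacity": 0,
        "x_offset": 5,
        "y_offset": 5,
        "bg_padding": 5,
    }
    burnin_values = {
        "TOP_LEFT": "{yy}-{mm}-{dd}",
        "TOP_RIGHT": "{anatomy[version]}",
        "BOTTOM_LEFT": "{username}",
        "BOTTOM_CENTERED": "{asset}",
        "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}",
    }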

View file

@ -12,11 +12,6 @@ from .items import (
ItemTable, ItemImage, ItemRectangle, ItemPlaceHolder
)
try:
from pype.api.config import get_presets
except Exception:
get_presets = dict
log = logging.getLogger(__name__)
@ -41,11 +36,7 @@ def create_slates(
)
elif slate_data is None:
slate_presets = (
get_presets()
.get("tools", {})
.get("slates")
) or {}
slate_presets = {}
slate_data = slate_presets.get(slate_name)
if slate_data is None:
raise ValueError(

View file

@ -1,11 +1,15 @@
from .lib import (
system_settings,
project_settings,
environments
get_system_settings,
get_project_settings,
get_current_project_settings,
get_anatomy_settings,
get_environments
)
__all__ = (
"system_settings",
"project_settings",
"environments"
"get_system_settings",
"get_project_settings",
"get_current_project_settings",
"get_anatomy_settings",
"get_environments"
)
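A brief usage sketch of the renamed settings helpers, assuming they remain re-exported through pype.api as the loader changes above suggest; the environment lookup is illustrative:

    import os

    from pype.api import (
        get_system_settings,
        get_project_settings,
        get_current_project_settings,
    )

    system_settings = get_system_settings()
    project_settings = get_project_settings(os.environ["AVALON_PROJECT"])
    current_settings = get_current_project_settings()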

View file

@ -1,16 +0,0 @@
{
"AVALON_CONFIG": "pype",
"AVALON_PROJECTS": "{PYPE_PROJECTS_PATH}",
"AVALON_USERNAME": "avalon",
"AVALON_PASSWORD": "secret",
"AVALON_DEBUG": "1",
"AVALON_MONGO": "mongodb://localhost:2707",
"AVALON_DB": "avalon",
"AVALON_DB_DATA": "{PYPE_SETUP_PATH}/../mongo_db_data",
"AVALON_EARLY_ADOPTER": "1",
"AVALON_SCHEMA": "{PYPE_MODULE_ROOT}/schema",
"AVALON_LOCATION": "http://127.0.0.1",
"AVALON_LABEL": "Pype",
"AVALON_TIMEOUT": "1000",
"AVALON_THUMBNAIL_ROOT": ""
}

View file

@ -0,0 +1,29 @@
{
"fps": 25,
"frameStart": 1001,
"frameEnd": 1001,
"clipIn": 1,
"clipOut": 1,
"handleStart": 0,
"handleEnd": 0,
"resolutionWidth": 1920,
"resolutionHeight": 1080,
"pixelAspect": 1,
"applications": [],
"task_short_names": {
"Generic": "gener",
"Art": "art",
"Modeling": "mdl",
"Texture": "tex",
"Lookdev": "look",
"Rigging": "rig",
"Edit": "edit",
"Layout": "lay",
"Setdress": "dress",
"Animation": "anim",
"FX": "fx",
"Lighting": "lgt",
"Paint": "paint",
"Compositing": "comp"
}
}

View file

@ -1,42 +0,0 @@
{
"nuke": {
"root": {
"colorManagement": "Nuke",
"OCIO_config": "nuke-default",
"defaultViewerLUT": "Nuke Root LUTs",
"monitorLut": "sRGB",
"int8Lut": "sRGB",
"int16Lut": "sRGB",
"logLut": "Cineon",
"floatLut": "linear"
},
"viewer": {
"viewerProcess": "sRGB"
},
"write": {
"render": {
"colorspace": "linear"
},
"prerender": {
"colorspace": "linear"
},
"still": {
"colorspace": "sRGB"
}
},
"read": {
"[^-a-zA-Z0-9]beauty[^-a-zA-Z0-9]": "linear",
"[^-a-zA-Z0-9](P|N|Z|crypto)[^-a-zA-Z0-9]": "linear",
"[^-a-zA-Z0-9](plateRef)[^-a-zA-Z0-9]": "sRGB"
}
},
"maya": {
},
"houdini": {
},
"resolve": {
}
}

View file

@ -1,55 +0,0 @@
{
"nuke": {
"nodes": {
"connected": true,
"modifymetadata": {
"_id": "connect_metadata",
"_previous": "ENDING",
"metadata.set.pype_studio_name": "{PYPE_STUDIO_NAME}",
"metadata.set.avalon_project_name": "{AVALON_PROJECT}",
"metadata.set.avalon_project_code": "{PYPE_STUDIO_CODE}",
"metadata.set.avalon_asset_name": "{AVALON_ASSET}"
},
"crop": {
"_id": "connect_crop",
"_previous": "connect_metadata",
"box": [
"{metadata.crop.x}",
"{metadata.crop.y}",
"{metadata.crop.right}",
"{metadata.crop.top}"
]
},
"write": {
"render": {
"_id": "output_write",
"_previous": "connect_crop",
"file_type": "exr",
"datatype": "16 bit half",
"compression": "Zip (1 scanline)",
"autocrop": true,
"tile_color": "0xff0000ff",
"channels": "rgb"
},
"prerender": {
"_id": "output_write",
"_previous": "connect_crop",
"file_type": "exr",
"datatype": "16 bit half",
"compression": "Zip (1 scanline)",
"autocrop": false,
"tile_color": "0xc9892aff",
"channels": "rgba"
},
"still": {
"_previous": "connect_crop",
"channels": "rgba",
"file_type": "tiff",
"datatype": "16 bit",
"compression": "LZW",
"tile_color": "0x4145afff"
}
}
}
}
}

View file

@ -0,0 +1,129 @@
{
"hiero": {
"workfile": {
"ocioConfigName": "nuke-default",
"ocioconfigpath": {
"windows": [],
"darwin": [],
"linux": []
},
"workingSpace": "linear",
"sixteenBitLut": "sRGB",
"eightBitLut": "sRGB",
"floatLut": "linear",
"logLut": "Cineon",
"viewerLut": "sRGB",
"thumbnailLut": "sRGB"
},
"regexInputs": {
"inputs": [
{
"regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)",
"colorspace": "sRGB"
}
]
}
},
"nuke": {
"workfile": {
"colorManagement": "Nuke",
"OCIO_config": "nuke-default",
"customOCIOConfigPath": {
"windows": [],
"darwin": [],
"linux": []
},
"workingSpaceLUT": "linear",
"monitorLut": "sRGB",
"int8Lut": "sRGB",
"int16Lut": "sRGB",
"logLut": "Cineon",
"floatLut": "linear"
},
"nodes": {
"requiredNodes": [
{
"plugins": [
"CreateWriteRender"
],
"nukeNodeClass": "Write",
"knobs": [
{
"name": "file_type",
"value": "exr"
},
{
"name": "datatype",
"value": "16 bit half"
},
{
"name": "compression",
"value": "Zip (1 scanline)"
},
{
"name": "autocrop",
"value": "True"
},
{
"name": "tile_color",
"value": "0xff0000ff"
},
{
"name": "channels",
"value": "rgb"
},
{
"name": "colorspace",
"value": "linear"
}
]
},
{
"plugins": [
"CreateWritePrerender"
],
"nukeNodeClass": "Write",
"knobs": [
{
"name": "file_type",
"value": "exr"
},
{
"name": "datatype",
"value": "16 bit half"
},
{
"name": "compression",
"value": "Zip (1 scanline)"
},
{
"name": "autocrop",
"value": "False"
},
{
"name": "tile_color",
"value": "0xff0000ff"
},
{
"name": "channels",
"value": "rgb"
},
{
"name": "colorspace",
"value": "linear"
}
]
}
],
"customNodes": []
},
"regexInputs": {
"inputs": [
{
"regex": "[^-a-zA-Z0-9]beauty[^-a-zA-Z0-9]",
"colorspace": "linear"
}
]
}
}
}

View file

@ -13,9 +13,6 @@
"file": "{project[code]}_{asset}_{subset}_{@version}<_{output}><.{@frame}>.{representation}",
"path": "{@folder}/{@file}"
},
"texture": {
"path": "{root}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}"
},
"publish": {
"folder": "{root}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/{@version}",
"file": "{project[code]}_{asset}_{subset}_{@version}<_{output}><.{@frame}>.{representation}",
@ -26,5 +23,7 @@
"folder": "{root}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/master",
"file": "{project[code]}_{asset}_{subset}_master<_{output}><.{frame}>.{representation}",
"path": "{@folder}/{@file}"
}
},
"delivery": {},
"other": {}
}

View file

@ -0,0 +1,13 @@
{
"publish": {
"ExtractCelactionDeadline": {
"enabled": true,
"deadline_department": "",
"deadline_priority": 50,
"deadline_pool": "",
"deadline_pool_secondary": "",
"deadline_group": "",
"deadline_chunk_size": 10
}
}
}

View file

@ -0,0 +1,98 @@
{
"ftrack_actions_path": [],
"ftrack_events_path": [],
"events": {
"sync_to_avalon": {
"enabled": true,
"statuses_name_change": [
"ready",
"not ready"
]
},
"push_frame_values_to_task": {
"enabled": true,
"interest_entity_types": [
"shot",
"asset build"
],
"interest_attributess": [
"frameStart",
"frameEnd"
]
},
"thumbnail_updates": {
"enabled": true,
"levels": 2
},
"user_assignment": {
"enabled": true
},
"status_update": {
"enabled": true,
"mapping": {
"In Progress": [
"__any__"
],
"Ready": [
"Not Ready"
],
"__ignore__": [
"in prgoress",
"omitted",
"on hold"
]
}
},
"status_task_to_parent": {
"enabled": true,
"parent_status_match_all_task_statuses": {
"Completed": [
"Approved",
"Omitted"
]
},
"parent_status_by_task_status": {
"In Progress": [
"in progress",
"change requested",
"retake",
"pending review"
]
}
},
"status_task_to_version": {
"enabled": true,
"mapping": {
"Approved": [
"Complete"
]
}
},
"status_version_to_task": {
"enabled": true,
"mapping": {
"Complete": [
"Approved",
"Complete"
]
}
},
"first_version_status": {
"enabled": true,
"status": ""
},
"next_task_update": {
"enabled": true,
"mapping": {
"Ready": "Not Ready"
}
}
},
"publish": {
"IntegrateFtrackNote": {
"enabled": true,
"note_with_intent_template": "",
"note_labels": []
}
}
}

View file

@ -1,16 +0,0 @@
{
"sync_to_avalon": {
"statuses_name_change": ["not ready", "ready"]
},
"status_update": {
"_ignore_": ["in progress", "ommited", "on hold"],
"Ready": ["not ready"],
"In Progress" : ["_any_"]
},
"status_version_to_task": {
"__description__": "Status `from` (key) must be lowered!",
"in progress": "in progress",
"approved": "approved"
}
}

View file

@ -1,165 +0,0 @@
[{
"label": "FPS",
"key": "fps",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"write_security_role": ["ALL"],
"read_security_role": ["ALL"],
"default": null,
"config": {
"isdecimal": true
}
}, {
"label": "Applications",
"key": "applications",
"type": "enumerator",
"entity_type": "show",
"group": "avalon",
"config": {
"multiselect": true,
"data": [
{"blender_2.80": "Blender 2.80"},
{"blender_2.81": "Blender 2.81"},
{"blender_2.82": "Blender 2.82"},
{"blender_2.83": "Blender 2.83"},
{"celaction_local": "CelAction2D Local"},
{"maya_2017": "Maya 2017"},
{"maya_2018": "Maya 2018"},
{"maya_2019": "Maya 2019"},
{"nuke_10.0": "Nuke 10.0"},
{"nuke_11.2": "Nuke 11.2"},
{"nuke_11.3": "Nuke 11.3"},
{"nuke_12.0": "Nuke 12.0"},
{"nukex_10.0": "NukeX 10.0"},
{"nukex_11.2": "NukeX 11.2"},
{"nukex_11.3": "NukeX 11.3"},
{"nukex_12.0": "NukeX 12.0"},
{"nukestudio_10.0": "NukeStudio 10.0"},
{"nukestudio_11.2": "NukeStudio 11.2"},
{"nukestudio_11.3": "NukeStudio 11.3"},
{"nukestudio_12.0": "NukeStudio 12.0"},
{"harmony_17": "Harmony 17"},
{"houdini_16.5": "Houdini 16.5"},
{"houdini_17": "Houdini 17"},
{"houdini_18": "Houdini 18"},
{"photoshop_2020": "Photoshop 2020"},
{"python_3": "Python 3"},
{"python_2": "Python 2"},
{"premiere_2019": "Premiere Pro 2019"},
{"premiere_2020": "Premiere Pro 2020"},
{"resolve_16": "BM DaVinci Resolve 16"}
]
}
}, {
"label": "Avalon auto-sync",
"key": "avalon_auto_sync",
"type": "boolean",
"entity_type": "show",
"group": "avalon",
"write_security_role": ["API", "Administrator"],
"read_security_role": ["API", "Administrator"]
}, {
"label": "Intent",
"key": "intent",
"type": "enumerator",
"entity_type": "assetversion",
"group": "avalon",
"config": {
"multiselect": false,
"data": [
{"test": "Test"},
{"wip": "WIP"},
{"final": "Final"}
]
}
}, {
"label": "Library Project",
"key": "library_project",
"type": "boolean",
"entity_type": "show",
"group": "avalon",
"write_security_role": ["API", "Administrator"],
"read_security_role": ["API", "Administrator"]
}, {
"label": "Clip in",
"key": "clipIn",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Clip out",
"key": "clipOut",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Frame start",
"key": "frameStart",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Frame end",
"key": "frameEnd",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Tools",
"key": "tools_env",
"type": "enumerator",
"is_hierarchical": true,
"group": "avalon",
"config": {
"multiselect": true,
"data": [
{"mtoa_3.0.1": "mtoa_3.0.1"},
{"mtoa_3.1.1": "mtoa_3.1.1"},
{"mtoa_3.2.0": "mtoa_3.2.0"},
{"yeti_2.1.2": "yeti_2.1"}
]
}
}, {
"label": "Resolution Width",
"key": "resolutionWidth",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Resolution Height",
"key": "resolutionHeight",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Pixel aspect",
"key": "pixelAspect",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"config": {
"isdecimal": true
}
}, {
"label": "Frame handles start",
"key": "handleStart",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}, {
"label": "Frame handles end",
"key": "handleEnd",
"type": "number",
"is_hierarchical": true,
"group": "avalon",
"default": null
}
]

View file

@ -1,5 +0,0 @@
{
"server_url": "",
"api_key": "",
"api_user": ""
}

View file

@ -1,5 +0,0 @@
{
"TestAction": {
"ignore_me": true
}
}

View file

@ -1,18 +0,0 @@
{
"fps": 25,
"frameStart": 1001,
"frameEnd": 1100,
"clipIn": 1001,
"clipOut": 1100,
"handleStart": 10,
"handleEnd": 10,
"resolutionHeight": 1080,
"resolutionWidth": 1920,
"pixelAspect": 1.0,
"applications": [
"maya_2019", "nuke_11.3", "nukex_11.3", "nukestudio_11.3", "deadline"
],
"tools_env": [],
"avalon_auto_sync": true
}

View file

@ -0,0 +1,39 @@
{
"object_types": ["Milestone", "Task", "Folder", "Asset Build", "Shot", "Library", "Sequence"],
"version_workflow": ["Pending Review", "Client Review", "On Farm", "Reviewed", "Render Complete", "Approved", "CBB", "Delivered", "Render Failed", "data"],
"task_workflow": ["Not Ready", "Ready", "Change Requested", "In progress", "Pending Review", "On Farm", "Waiting", "Render Complete", "Complete", "CBB", "On Hold", "Render Failed", "Omitted"],
"overrides": [{
"task_types": ["Animation"],
"statuses": ["Not Ready", "Ready", "Change Requested", "Blocking", "Animating", "blocking review", "anim review", "Complete", "CBB", "On Hold", "Omitted"]
}, {
"task_types": ["Lighting"],
"statuses": ["Not Ready", "Ready", "Change Requested", "In progress", "To render", "On Farm", "Render Complete", "Complete", "CBB", "On Hold", "Render Failed", "Omitted"]
}],
"task_type_schema": ["Layout", "Animation", "Modeling", "Previz", "Lookdev", "FX", "Lighting", "Compositing", "Rigging", "Texture", "Matte-paint", "Roto-paint", "Art", "Match-moving", "Production", "Build", "Setdress", "Edit", "R&D", "Boards"],
"schemas": [{
"object_type": "Shot",
"statuses": ["Omitted", "Normal", "Complete"],
"task_types": []
}, {
"object_type": "Asset Build",
"statuses": ["Omitted", "Normal", "Complete"],
"task_types": ["Setups", "Sets", "Characters", "Props", "Locations", "Assembly", "R&D", "Elements"]
}, {
"object_type": "Milestone",
"statuses": ["Normal", "Complete"],
"task_types": ["Generic"]
}],
"task_templates": [{
"name": "Character",
"task_types": ["Art", "Modeling", "Lookdev", "Rigging"]
}, {
"name": "Element",
"task_types": ["Modeling", "Lookdev"]
}, {
"name": "Prop",
"task_types": ["Modeling", "Lookdev", "Rigging"]
}, {
"name": "Location",
"task_types": ["Layout", "Setdress"]
}]
}

View file

@ -0,0 +1,182 @@
{
"publish": {
"IntegrateMasterVersion": {
"enabled": true
},
"ExtractJpegEXR": {
"enabled": true,
"ffmpeg_args": {
"input": [],
"output": []
}
},
"ExtractReview": {
"enabled": true,
"profiles": [
{
"families": [],
"hosts": [],
"outputs": {
"h264": {
"ext": "mp4",
"tags": [
"burnin",
"ftrackreview"
],
"ffmpeg_args": {
"video_filters": [],
"audio_filters": [],
"input": [
"-gamma 2.2"
],
"output": [
"-pix_fmt yuv420p",
"-crf 18",
"-intra"
]
},
"filter": {
"families": [
"render",
"review",
"ftrack"
]
}
}
}
}
]
},
"ExtractBurnin": {
"enabled": true,
"options": {
"font_size": 42,
"opacity": 1,
"bg_opacity": 0,
"x_offset": 5,
"y_offset": 5,
"bg_padding": 5
},
"profiles": [
{
"families": [],
"hosts": [],
"burnins": {
"burnin": {
"TOP_LEFT": "{yy}-{mm}-{dd}",
"TOP_CENTERED": "",
"TOP_RIGHT": "{anatomy[version]}",
"BOTTOM_LEFT": "{username}",
"BOTTOM_CENTERED": "{asset}",
"BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}"
}
}
}
]
},
"IntegrateAssetNew": {
"template_name_profiles": {
"publish": {
"families": [],
"tasks": []
},
"render": {
"families": [
"review",
"render",
"prerender"
]
}
}
},
"ProcessSubmittedJobOnFarm": {
"enabled": true,
"deadline_department": "",
"deadline_pool": "",
"deadline_group": "",
"deadline_chunk_size": "",
"deadline_priority": "",
"aov_filter": {
"maya": [
".+(?:\\.|_)([Bb]eauty)(?:\\.|_).*"
],
"nuke": [],
"aftereffects": [
".*"
],
"celaction": [
".*"
]
}
}
},
"tools": {
"Creator": {
"families_smart_select": {
"Render": [
"light",
"render"
],
"Model": [
"model"
],
"Layout": [
"layout"
],
"Look": [
"look"
],
"Rig": [
"rigging",
"rig"
]
}
},
"Workfiles": {
"last_workfile_on_startup": [
{
"hosts": [],
"tasks": [],
"enabled": true
}
],
"sw_folders": {
"compositing": [
"nuke",
"ae"
],
"modeling": [
"maya",
"blender",
"zbrush"
],
"lookdev": [
"substance",
"textures"
]
}
}
},
"project_folder_structure": {
"__project_root__": {
"prod": {},
"resources": {
"footage": {
"plates": {},
"offline": {}
},
"audio": {},
"art_dept": {}
},
"editorial": {},
"assets[ftrack.Library]": {
"characters[ftrack]": {},
"locations[ftrack]": {}
},
"shots[ftrack.Sequence]": {
"scripts": {},
"editorial[ftrack.Folder]": {}
}
}
}
}
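A hedged sketch of reading these defaults at runtime, mirroring the settings["global"]["publish"] lookup used in the Nuke preset helper earlier in this diff; the AVALON_PROJECT environment lookup is an assumption about the calling context:

    import os

    from pype.api import get_project_settings

    settings = get_project_settings(os.environ["AVALON_PROJECT"])
    extract_review = settings["global"]["publish"].get("ExtractReview", {})
    for profile in extract_review.get("profiles", []):
        outputs = profile.get("outputs", {})
        print(sorted(outputs))  # e.g. ["h264"]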

Some files were not shown because too many files have changed in this diff.