Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 21:04:40 +01:00)

Merge branch 'develop' into feature/OP-3408_Use-query-functions-in-deadline

Commit 33ff79cc6c: 139 changed files with 4115 additions and 2419 deletions
.gitmodules (vendored, new file, +7 lines)
@@ -0,0 +1,7 @@
[submodule "tools/modules/powershell/BurntToast"]
	path = tools/modules/powershell/BurntToast
	url = https://github.com/Windos/BurntToast.git

[submodule "tools/modules/powershell/PSWriteColor"]
	path = tools/modules/powershell/PSWriteColor
	url = https://github.com/EvotecIT/PSWriteColor.git
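These vendored PowerShell modules arrive as Git submodules, so a fresh clone must initialize them before use. A minimal sketch that drives the standard Git CLI from Python; the paths are taken from the entries above and the invocation is the usual submodule workflow, not anything OpenPype-specific:

import subprocess

# Fetch the vendored BurntToast and PSWriteColor modules after cloning;
# equivalent to running `git submodule update --init <path>` in the repo root.
subprocess.run(
    ["git", "submodule", "update", "--init",
     "tools/modules/powershell/BurntToast",
     "tools/modules/powershell/PSWriteColor"],
    check=True,
)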
CHANGELOG.md (138 changed lines)
@@ -1,8 +1,42 @@
# Changelog

-## [3.12.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.12.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)

-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.0...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...HEAD)

**🚀 Enhancements**

- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Ftrack: Trigger custom ftrack topic of project structure creation [\#3506](https://github.com/pypeclub/OpenPype/pull/3506)
- Settings UI: Add extract to file action on project view [\#3505](https://github.com/pypeclub/OpenPype/pull/3505)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- General: Event system [\#3499](https://github.com/pypeclub/OpenPype/pull/3499)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
- Migrate basic families to the new Tray Publisher [\#3469](https://github.com/pypeclub/OpenPype/pull/3469)

**🐛 Bug fixes**

- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)

**🔀 Refactored code**

- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- TimersManager: Use query functions [\#3495](https://github.com/pypeclub/OpenPype/pull/3495)

## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)

### 📖 Documentation
@@ -14,11 +48,31 @@

**🚀 Enhancements**

- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)

**🐛 Bug fixes**

- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed a few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
@@ -26,8 +80,14 @@

**🔀 Refactored code**

- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
@@ -44,9 +104,6 @@

- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)

**🐛 Bug fixes**
@@ -57,12 +114,6 @@
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
- Flame: bunch of publishing issues [\#3377](https://github.com/pypeclub/OpenPype/pull/3377)
- Harmony: added unc path to zipfile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
- Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355)
- Nuke: Load full model hierarchy by default [\#3328](https://github.com/pypeclub/OpenPype/pull/3328)

**🔀 Refactored code**
@@ -72,80 +123,15 @@
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
- Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385)
- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378)
- Celaction: Use client query functions [\#3376](https://github.com/pypeclub/OpenPype/pull/3376)
- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375)
- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340)
- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339)
- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330)

**Merged pull requests:**

- Sync Queue: Added far future value for null values for dates [\#3371](https://github.com/pypeclub/OpenPype/pull/3371)
- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)

**🆕 New features**

- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)

**🚀 Enhancements**

- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)

**🐛 Bug fixes**

- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
- deadline: fixing misidentification of reviewables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)

**🔀 Refactored code**

- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)

**🚀 Enhancements**

- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
- updated poetry installation source [\#3316](https://github.com/pypeclub/OpenPype/pull/3316)
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)

**🐛 Bug fixes**

- General: Handle empty source key on instance [\#3342](https://github.com/pypeclub/OpenPype/pull/3342)
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)

**🔀 Refactored code**

- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)

**Merged pull requests:**

- Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)

## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)
@@ -15,7 +15,6 @@ from .lib import (
    run_subprocess,
    version_up,
    get_asset,
    get_hierarchy,
    get_workdir_data,
    get_version_from_path,
    get_last_version_from_path,
@@ -101,7 +100,6 @@ __all__ = [
    # get contextual data
    "version_up",
    "get_asset",
    "get_hierarchy",
    "get_workdir_data",
    "get_version_from_path",
    "get_last_version_from_path",
@@ -2,7 +2,7 @@
"""Package for handling pype command line arguments."""
import os
import sys

import code
import click

# import sys
@@ -424,3 +424,22 @@ def pack_project(project, dirpath):
def unpack_project(zipfile, root):
    """Unpack a package of project with all files and database dump."""
    PypeCommands().unpack_project(zipfile, root)


+@main.command()
+def interactive():
+    """Interactive (Python-like) console.
+
+    Helpful command, not only for development, to work directly with the
+    python interpreter.
+
+    Warning:
+        Executable 'openpype_gui' on windows won't work.
+    """
+    from openpype.version import __version__
+
+    banner = "OpenPype {}\nPython {} on {}".format(
+        __version__, sys.version, sys.platform
+    )
+    code.interact(banner)
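For context, `code.interact` from the Python standard library is what powers this subcommand: it starts a REPL in the current process, and the banner is its first positional argument (matching the call above). A minimal standalone sketch of the same pattern; the names and seeded namespace are illustrative only:

import code
import sys

# Start an interactive console with a custom banner; `local` seeds the
# REPL namespace. This mirrors what the new `interactive` command does.
banner = "Demo console\nPython {} on {}".format(sys.version, sys.platform)
code.interact(banner=banner, local={"answer": 42})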
@@ -1,6 +1,7 @@
from .entities import (
    get_projects,
    get_project,
    get_whole_project,

    get_asset_by_id,
    get_asset_by_name,
@@ -29,15 +30,19 @@ from .entities import (
    get_representations,
    get_representation_parents,
+    get_representations_parents,
    get_archived_representations,

    get_thumbnail,
    get_thumbnails,
    get_thumbnail_id_from_source,

    get_workfile_info,
)

__all__ = (
    "get_projects",
    "get_project",
    "get_whole_project",

    "get_asset_by_id",
    "get_asset_by_name",
@@ -66,8 +71,11 @@ __all__ = (
    "get_representations",
    "get_representation_parents",
+    "get_representations_parents",
    "get_archived_representations",

    "get_thumbnail",
    "get_thumbnails",
    "get_thumbnail_id_from_source",

    "get_workfile_info",
)
(File diff suppressed because it is too large.)
@@ -17,11 +17,8 @@ class RenderCreator(Creator):

    create_allow_context_change = True

-    def __init__(
-        self, create_context, system_settings, project_settings, headless=False
-    ):
-        super(RenderCreator, self).__init__(create_context, system_settings,
-                                            project_settings, headless)
+    def __init__(self, project_settings, *args, **kwargs):
+        super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
        self._default_variants = (project_settings["aftereffects"]
                                  ["create"]
                                  ["RenderCreator"]
@@ -6,8 +6,8 @@ import attr
import pyblish.api

from openpype.settings import get_project_settings
-from openpype.lib import abstract_collect_render
-from openpype.lib.abstract_collect_render import RenderInstance
+from openpype.pipeline import publish
+from openpype.pipeline.publish import RenderInstance

from openpype.hosts.aftereffects.api import get_stub
@@ -25,7 +25,7 @@ class AERenderInstance(RenderInstance):
    file_name = attr.ib(default=None)


-class CollectAERender(abstract_collect_render.AbstractCollectRender):
+class CollectAERender(publish.AbstractCollectRender):

    order = pyblish.api.CollectorOrder + 0.405
    label = "Collect After Effects Render Layers"
@@ -2,7 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate


class LoadClip(opfapi.ClipLoader):
    """Load a subset to timeline as clip
@@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
    # settings
    reel_group_name = "OpenPype_Reels"
    reel_name = "Loaded"
-    clip_name_template = "{asset}_{subset}_{output}"
+    clip_name_template = "{asset}_{subset}<_{output}>"

    def load(self, context, name, namespace, options):
@@ -36,8 +36,8 @@ class LoadClip(opfapi.ClipLoader):
        version_data = version.get("data", {})
        version_name = version.get("name", None)
        colorspace = version_data.get("colorspace", None)
-        clip_name = self.clip_name_template.format(
-            **context["representation"]["context"])
+        clip_name = StringTemplate(self.clip_name_template).format(
+            context["representation"]["context"])

        # TODO: settings in imageio
        # convert colorspace with ocio to flame mapping
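The template change wraps `_{output}` in angle brackets, which OpenPype's `StringTemplate` treats as an optional block, following the convention used by anatomy templates; this behavior is inferred from that convention rather than confirmed here. A sketch of the intended formatting, using the same `.format(data)` call as the diff:

from openpype.lib import StringTemplate

template = StringTemplate("{asset}_{subset}<_{output}>")

# With "output" present the optional block renders ...
print(template.format(
    {"asset": "sh010", "subset": "plateMain", "output": "h264"}))

# ... and without it, the <_{output}> block should drop out entirely
# instead of raising a KeyError the way plain str.format would.
print(template.format({"asset": "sh010", "subset": "plateMain"}))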
@@ -2,6 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
+from openpype.lib import StringTemplate


class LoadClipBatch(opfapi.ClipLoader):
@@ -21,7 +22,7 @@ class LoadClipBatch(opfapi.ClipLoader):

    # settings
    reel_name = "OP_LoadedReel"
-    clip_name_template = "{asset}_{subset}_{output}"
+    clip_name_template = "{asset}_{subset}<_{output}>"

    def load(self, context, name, namespace, options):
@@ -39,8 +40,8 @@ class LoadClipBatch(opfapi.ClipLoader):
        if not context["representation"]["context"].get("output"):
            self.clip_name_template.replace("output", "representation")

-        clip_name = self.clip_name_template.format(
-            **context["representation"]["context"])
+        clip_name = StringTemplate(self.clip_name_template).format(
+            context["representation"]["context"])

        # TODO: settings in imageio
        # convert colorspace with ocio to flame mapping
@@ -4,11 +4,10 @@ from pathlib import Path

import attr

-import openpype.lib
-import openpype.lib.abstract_collect_render
-from openpype.lib.abstract_collect_render import RenderInstance
from openpype.lib import get_formatted_current_time
from openpype.pipeline import legacy_io
+from openpype.pipeline import publish
+from openpype.pipeline.publish import RenderInstance
import openpype.hosts.harmony.api as harmony
@@ -20,8 +19,7 @@ class HarmonyRenderInstance(RenderInstance):
    leadingZeros = attr.ib(default=3)


-class CollectFarmRender(openpype.lib.abstract_collect_render.
-                        AbstractCollectRender):
+class CollectFarmRender(publish.AbstractCollectRender):
    """Gather all publishable renders."""

    # https://docs.toonboom.com/help/harmony-17/premium/reference/node/output/write-node-image-formats.html
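Both the After Effects and Harmony collectors now inherit the relocated `publish.AbstractCollectRender` instead of the old `openpype.lib` path. A loose sketch of the shape such a subclass takes; the hook method shown is an assumption for illustration, not the abstraction's confirmed API:

import pyblish.api
from openpype.pipeline import publish


class CollectMyHostRender(publish.AbstractCollectRender):
    """Hypothetical farm-render collector for some host."""

    order = pyblish.api.CollectorOrder + 0.405
    label = "Collect My Host Render"

    def get_instances(self, context):
        # Assumed hook: return RenderInstance objects describing the
        # publishable renders found in the scene.
        return []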
openpype/hosts/hiero/api/launchforhiero.py (new file, +85 lines)
@@ -0,0 +1,85 @@
import logging

from scriptsmenu import scriptsmenu
from Qt import QtWidgets


log = logging.getLogger(__name__)


def _hiero_main_window():
    """Return Hiero's main window"""
    for obj in QtWidgets.QApplication.topLevelWidgets():
        if (obj.inherits('QMainWindow') and
                obj.metaObject().className() == 'Foundry::UI::DockMainWindow'):
            return obj
    raise RuntimeError('Could not find HieroWindow instance')


def _hiero_main_menubar():
    """Retrieve the main menubar of the Hiero window"""
    hiero_window = _hiero_main_window()
    menubar = [i for i in hiero_window.children() if isinstance(
        i,
        QtWidgets.QMenuBar
    )]

    assert len(menubar) == 1, "Error, could not find menu bar!"
    return menubar[0]


def find_scripts_menu(title, parent):
    """
    Check if the menu exists with the given title in the parent

    Args:
        title (str): the title name of the scripts menu
        parent (QtWidgets.QMenuBar): the menubar to check

    Returns:
        QtWidgets.QMenu or None

    """
    menu = None
    search = [i for i in parent.children() if
              isinstance(i, scriptsmenu.ScriptsMenu)
              and i.title() == title]
    if search:
        assert len(search) < 2, ("Multiple instances of menu '{}' "
                                 "in menu bar".format(title))
        menu = search[0]

    return menu


def main(title="Scripts", parent=None, objectName=None):
    """Build the main scripts menu in Hiero

    Args:
        title (str): name of the menu in the application
        parent (QtWidgets.QtMenuBar): the parent object for the menu
        objectName (str): custom objectName for scripts menu

    Returns:
        scriptsmenu.ScriptsMenu instance

    """
    hieromainbar = parent or _hiero_main_menubar()
    try:
        # check if the menu already exists
        menu = find_scripts_menu(title, hieromainbar)
        if not menu:
            log.info("Attempting to build menu ...")
            object_name = objectName or title.lower()
            menu = scriptsmenu.ScriptsMenu(title=title,
                                           parent=hieromainbar,
                                           objectName=object_name)
    except Exception as e:
        log.error(e)
        return

    return menu
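A usage sketch of this new helper, mirroring how `menu.py` consumes it below; the empty configuration list stands in for the settings-driven definition, whose exact schema belongs to the scriptsmenu project and is assumed here:

from openpype.hosts.hiero.api import launchforhiero

# Build (or fetch, if it already exists) the scripts menu on Hiero's
# main menu bar.
menu = launchforhiero.main(title="Scripts")

if menu is not None:
    # Populate from a configuration list; the call matches the
    # add_scripts_menu() usage below (menu passed as its own parent).
    menu.build_from_configuration(menu, [])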
@@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools

from . import tags
+from openpype.settings import get_project_settings

log = Logger.get_logger(__name__)
@@ -41,6 +42,7 @@ def menu_install():
    Installing menu into Hiero

    """
    from Qt import QtGui
    from . import (
        publish, launch_workfiles_app, reload_config,
@@ -138,3 +140,30 @@ def menu_install():
    exeprimental_action.triggered.connect(
        lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
    )


+def add_scripts_menu():
+    try:
+        from . import launchforhiero
+    except ImportError:
+        log.warning(
+            "Skipping studio.menu install, because "
+            "'scriptsmenu' module seems unavailable."
+        )
+        return
+
+    # load configuration of custom menu
+    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
+    config = project_settings["hiero"]["scriptsmenu"]["definition"]
+    _menu = project_settings["hiero"]["scriptsmenu"]["name"]
+
+    if not config:
+        log.warning("Skipping studio menu, no definition found.")
+        return
+
+    # run the launcher for Hiero menu
+    studio_menu = launchforhiero.main(title=_menu.title())
+
+    # apply configuration
+    studio_menu.build_from_configuration(studio_menu, config)
@@ -48,6 +48,7 @@ def install():

    # install menu
    menu.menu_install()
+    menu.add_scripts_menu()

    # register hiero events
    events.register_hiero_events()
@@ -2522,12 +2522,30 @@ def load_capture_preset(data=None):
            temp_options2['multiSampleEnable'] = False
            temp_options2['multiSampleCount'] = preset[id][key]

            if key == 'renderDepthOfField':
                temp_options2['renderDepthOfField'] = preset[id][key]

            if key == 'ssaoEnable':
                if preset[id][key] is True:
                    temp_options2['ssaoEnable'] = True
                else:
                    temp_options2['ssaoEnable'] = False

            if key == 'ssaoSamples':
                temp_options2['ssaoSamples'] = preset[id][key]

            if key == 'ssaoAmount':
                temp_options2['ssaoAmount'] = preset[id][key]

            if key == 'ssaoRadius':
                temp_options2['ssaoRadius'] = preset[id][key]

            if key == 'hwFogDensity':
                temp_options2['hwFogDensity'] = preset[id][key]

            if key == 'ssaoFilterRadius':
                temp_options2['ssaoFilterRadius'] = preset[id][key]

            if key == 'alphaCut':
                temp_options2['transparencyAlgorithm'] = 5
                temp_options2['transparencyQuality'] = 1
@@ -2535,6 +2553,48 @@ def load_capture_preset(data=None):
            if key == 'headsUpDisplay':
                temp_options['headsUpDisplay'] = True

            if key == 'fogging':
                temp_options['fogging'] = preset[id][key] or False

            if key == 'hwFogStart':
                temp_options2['hwFogStart'] = preset[id][key]

            if key == 'hwFogEnd':
                temp_options2['hwFogEnd'] = preset[id][key]

            if key == 'hwFogAlpha':
                temp_options2['hwFogAlpha'] = preset[id][key]

            if key == 'hwFogFalloff':
                temp_options2['hwFogFalloff'] = int(preset[id][key])

            if key == 'hwFogColorR':
                temp_options2['hwFogColorR'] = preset[id][key]

            if key == 'hwFogColorG':
                temp_options2['hwFogColorG'] = preset[id][key]

            if key == 'hwFogColorB':
                temp_options2['hwFogColorB'] = preset[id][key]

            if key == 'motionBlurEnable':
                if preset[id][key] is True:
                    temp_options2['motionBlurEnable'] = True
                else:
                    temp_options2['motionBlurEnable'] = False

            if key == 'motionBlurSampleCount':
                temp_options2['motionBlurSampleCount'] = preset[id][key]

            if key == 'motionBlurShutterOpenFraction':
                temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]

            if key == 'lineAAEnable':
                if preset[id][key] is True:
                    temp_options2['lineAAEnable'] = True
                else:
                    temp_options2['lineAAEnable'] = False

            else:
                temp_options[str(key)] = preset[id][key]
@@ -2544,7 +2604,24 @@ def load_capture_preset(data=None):
    for key in [
            'gpuCacheDisplayFilter',
            'multiSample',
            'ssaoEnable',
-            'textureMaxResolution'
+            'ssaoSamples',
+            'ssaoAmount',
+            'ssaoFilterRadius',
+            'ssaoRadius',
+            'hwFogStart',
+            'hwFogEnd',
+            'hwFogAlpha',
+            'hwFogFalloff',
+            'hwFogColorR',
+            'hwFogColorG',
+            'hwFogColorB',
+            'hwFogDensity',
+            'textureMaxResolution',
+            'motionBlurEnable',
+            'motionBlurSampleCount',
+            'motionBlurShutterOpenFraction',
+            'lineAAEnable',
+            'renderDepthOfField'
    ]:
        temp_options.pop(key, None)
@@ -1087,7 +1087,7 @@ class RenderProductsRenderman(ARenderProducts):
            "d_tiff": "tif"
        }

-        displays = get_displays()["displays"]
+        displays = get_displays(override_dst="render")["displays"]
        for name, display in displays.items():
            enabled = display["params"]["enable"]["value"]
            if not enabled:
@@ -1106,9 +1106,33 @@ class RenderProductsRenderman(ARenderProducts):
                display["driverNode"]["type"], "exr")

            for camera in cameras:
-                product = RenderProduct(productName=aov_name,
-                                        ext=extensions,
-                                        camera=camera)
+                # Create render product and set it as multipart only on
+                # display types supporting it. In all other cases, Renderman
+                # will create separate output per channel.
+                if display["driverNode"]["type"] in ["d_openexr", "d_deepexr", "d_tiff"]:  # noqa
+                    product = RenderProduct(
+                        productName=aov_name,
+                        ext=extensions,
+                        camera=camera,
+                        multipart=True
+                    )
+                else:
+                    # this code should handle the case where no multipart
+                    # capable format is selected. But since it involves
+                    # shady logic to determine what channel becomes what,
+                    # let's not do that as all productions will use exr anyway.
+                    """
+                    for channel in display['params']['displayChannels']['value']:  # noqa
+                        product = RenderProduct(
+                            productName="{}_{}".format(aov_name, channel),
+                            ext=extensions,
+                            camera=camera,
+                            multipart=False
+                        )
+                    """
+                    raise UnsupportedImageFormatException(
+                        "Only exr, deep exr and tiff formats are supported.")

                products.append(product)

        return products
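A hypothetical helper distilling the branch above: only a handful of RenderMan display drivers can pack all channels into a single multipart file, and every other driver type now raises. The names here are illustrative, not part of the actual plugin:

# Driver types that support multipart output (from the condition above).
MULTIPART_DRIVERS = {"d_openexr", "d_deepexr", "d_tiff"}


def is_multipart_driver(driver_type):
    """Return True when the display driver can write multipart files."""
    return driver_type in MULTIPART_DRIVERS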
@@ -1201,3 +1225,7 @@ class UnsupportedRendererException(Exception):

    Raised when requesting data from unsupported renderer.
    """


+class UnsupportedImageFormatException(Exception):
+    """Custom exception to report unsupported output image format."""
@@ -1,111 +0,0 @@
import os

from maya import cmds

import openpype.api
from openpype.hosts.maya.api.lib import (
    extract_alembic,
    suspended_refresh,
    maintained_selection,
    iter_visible_nodes_in_range
)


class ExtractAnimation(openpype.api.Extractor):
    """Produce an alembic of just point positions and normals.

    Positions and normals, uvs, creases are preserved, but nothing more,
    for plain and predictable point caches.

    Plugin can run locally or remotely (on a farm - if instance is marked with
    "farm" it will be skipped in local processing, but processed on farm)
    """

    label = "Extract Animation"
    hosts = ["maya"]
    families = ["animation"]
    targets = ["local", "remote"]

    def process(self, instance):
        if instance.data.get("farm"):
            self.log.debug("Should be processed on farm, skipping.")
            return

        # Collect the out set nodes
        out_sets = [node for node in instance if node.endswith("out_SET")]
        if len(out_sets) != 1:
            raise RuntimeError("Couldn't find exactly one out_SET: "
                               "{0}".format(out_sets))
        out_set = out_sets[0]
        roots = cmds.sets(out_set, query=True)

        # Include all descendants
        nodes = roots + cmds.listRelatives(roots,
                                           allDescendents=True,
                                           fullPath=True) or []

        # Collect the start and end including handles
        start = instance.data["frameStartHandle"]
        end = instance.data["frameEndHandle"]

        self.log.info("Extracting animation..")
        dirname = self.staging_dir(instance)

        parent_dir = self.staging_dir(instance)
        filename = "{name}.abc".format(**instance.data)
        path = os.path.join(parent_dir, filename)

        options = {
            "step": instance.data.get("step", 1.0) or 1.0,
            "attr": ["cbId"],
            "writeVisibility": True,
            "writeCreases": True,
            "uvWrite": True,
            "selection": True,
            "worldSpace": instance.data.get("worldSpace", True),
            "writeColorSets": instance.data.get("writeColorSets", False),
            "writeFaceSets": instance.data.get("writeFaceSets", False)
        }

        if not instance.data.get("includeParentHierarchy", True):
            # Set the root nodes if we don't want to include parents
            # The roots are to be considered the ones that are the actual
            # direct members of the set
            options["root"] = roots

        if int(cmds.about(version=True)) >= 2017:
            # Since Maya 2017 alembic supports multiple uv sets - write them.
            options["writeUVSets"] = True

        if instance.data.get("visibleOnly", False):
            # If we only want to include nodes that are visible in the frame
            # range then we need to do our own check. Alembic's `visibleOnly`
            # flag does not filter out those that are only hidden on some
            # frames as it counts "animated" or "connected" visibilities as
            # if it's always visible.
            nodes = list(iter_visible_nodes_in_range(nodes,
                                                     start=start,
                                                     end=end))

        with suspended_refresh():
            with maintained_selection():
                cmds.select(nodes, noExpand=True)
                extract_alembic(file=path,
                                startFrame=float(start),
                                endFrame=float(end),
                                **options)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'abc',
            'ext': 'abc',
            'files': filename,
            "stagingDir": dirname,
        }
        instance.data["representations"].append(representation)

        instance.context.data["cleanupFullPaths"].append(path)

        self.log.info("Extracted {} to {}".format(instance, dirname))
@@ -115,7 +115,7 @@ class ExtractPlayblast(openpype.api.Extractor):
        else:
            preset["viewport_options"] = {"imagePlane": image_plane}

-        with maintained_time():
+        with lib.maintained_time():
            filename = preset.get("filename", "%TEMP%")

            # Force viewer to False in call to capture because we have our own
@@ -178,12 +178,3 @@ class ExtractPlayblast(openpype.api.Extractor):
            'camera_name': camera_node_name
        }
        instance.data["representations"].append(representation)
-
-
-@contextlib.contextmanager
-def maintained_time():
-    ct = cmds.currentTime(query=True)
-    try:
-        yield
-    finally:
-        cmds.currentTime(ct, edit=True)
@@ -33,7 +33,7 @@ class ExtractAlembic(openpype.api.Extractor):
            self.log.debug("Should be processed on farm, skipping.")
            return

-        nodes = instance[:]
+        nodes, roots = self.get_members_and_roots(instance)

        # Collect the start and end including handles
        start = float(instance.data.get("frameStartHandle", 1))
@@ -46,10 +46,6 @@ class ExtractAlembic(openpype.api.Extractor):
        attr_prefixes = instance.data.get("attrPrefix", "").split(";")
        attr_prefixes = [value for value in attr_prefixes if value.strip()]

-        # Get extra export arguments
-        writeColorSets = instance.data.get("writeColorSets", False)
-        writeFaceSets = instance.data.get("writeFaceSets", False)

        self.log.info("Extracting pointcache..")
        dirname = self.staging_dir(instance)
@@ -63,8 +59,8 @@ class ExtractAlembic(openpype.api.Extractor):
            "attrPrefix": attr_prefixes,
            "writeVisibility": True,
            "writeCreases": True,
-            "writeColorSets": writeColorSets,
-            "writeFaceSets": writeFaceSets,
+            "writeColorSets": instance.data.get("writeColorSets", False),
+            "writeFaceSets": instance.data.get("writeFaceSets", False),
            "uvWrite": True,
            "selection": True,
            "worldSpace": instance.data.get("worldSpace", True)
@@ -74,7 +70,7 @@ class ExtractAlembic(openpype.api.Extractor):
            # Set the root nodes if we don't want to include parents
            # The roots are to be considered the ones that are the actual
            # direct members of the set
-            options["root"] = instance.data.get("setMembers")
+            options["root"] = roots

        if int(cmds.about(version=True)) >= 2017:
            # Since Maya 2017 alembic supports multiple uv sets - write them.
@@ -112,3 +108,28 @@ class ExtractAlembic(openpype.api.Extractor):
        instance.context.data["cleanupFullPaths"].append(path)

        self.log.info("Extracted {} to {}".format(instance, dirname))

+    def get_members_and_roots(self, instance):
+        return instance[:], instance.data.get("setMembers")
+
+
+class ExtractAnimation(ExtractAlembic):
+    label = "Extract Animation"
+    families = ["animation"]
+
+    def get_members_and_roots(self, instance):
+
+        # Collect the out set nodes
+        out_sets = [node for node in instance if node.endswith("out_SET")]
+        if len(out_sets) != 1:
+            raise RuntimeError("Couldn't find exactly one out_SET: "
+                               "{0}".format(out_sets))
+        out_set = out_sets[0]
+        roots = cmds.sets(out_set, query=True)
+
+        # Include all descendants
+        nodes = roots + cmds.listRelatives(roots,
+                                           allDescendents=True,
+                                           fullPath=True) or []
+
+        return nodes, roots
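The merge of the two extractors (PR #3461 in the changelog above) is a textbook template method: the base class owns the export flow, and subclasses override only member/root discovery. A minimal standalone sketch of the pattern, with purely illustrative class names and a fake instance object:

class BaseAlembicExtractor:
    """Owns the shared export flow."""

    def process(self, instance):
        nodes, roots = self.get_members_and_roots(instance)
        print("exporting {} nodes under roots {}".format(len(nodes), roots))

    def get_members_and_roots(self, instance):
        # Default: all members, roots taken from collected data.
        return list(instance), instance.data.get("setMembers")


class AnimationExtractor(BaseAlembicExtractor):
    """Only the discovery step differs for animation instances."""

    def get_members_and_roots(self, instance):
        roots = [n for n in instance if n.endswith("out_SET")]
        return list(instance), roots


class FakeInstance(list):
    data = {"setMembers": ["|root"]}


AnimationExtractor().process(FakeInstance(["|a", "|b_out_SET"]))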
@@ -1,5 +1,4 @@
import os
-import contextlib
import glob

import capture
@@ -28,7 +27,6 @@ class ExtractThumbnail(openpype.api.Extractor):

        camera = instance.data['review_camera']

-        capture_preset = ""
        capture_preset = (
            instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
        )
@@ -103,9 +101,7 @@ class ExtractThumbnail(openpype.api.Extractor):
        if preset.pop("isolate_view", False) and instance.data.get("isolate"):
            preset["isolate"] = instance.data["setMembers"]

-        with maintained_time():
-            filename = preset.get("filename", "%TEMP%")
-
+        with lib.maintained_time():
            # Force viewer to False in call to capture because we have our own
            # viewer opening call to allow a signal to trigger between
            # playblast and viewer
@@ -174,12 +170,3 @@ class ExtractThumbnail(openpype.api.Extractor):
        filepath = max(files, key=os.path.getmtime)

        return filepath
-
-
-@contextlib.contextmanager
-def maintained_time():
-    ct = cmds.currentTime(query=True)
-    try:
-        yield
-    finally:
-        cmds.currentTime(ct, edit=True)
@@ -27,6 +27,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
                "yeticache"]
    optional = True
    actions = [openpype.api.RepairAction]
+    exclude_families = []

    def process(self, instance):
        context = instance.context
@@ -56,7 +57,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):

        # compare with data on instance
        errors = []
+        if [ef for ef in self.exclude_families
+                if instance.data["family"] in ef]:
+            return
        if(inst_start != frame_start_handle):
            errors.append("Instance start frame [ {} ] doesn't "
                          "match the one set on instance [ {} ]: "
@@ -10,6 +10,7 @@ from collections import OrderedDict
import clique

import nuke
+from Qt import QtCore, QtWidgets

from openpype.client import (
    get_project,
@@ -27,6 +28,7 @@ from openpype.api import (
    get_current_project_settings,
)
from openpype.tools.utils import host_tools
+from openpype.lib import env_value_to_bool
from openpype.lib.path_tools import HostDirmap
from openpype.settings import (
    get_project_settings,
@@ -63,7 +65,10 @@ class Context:
    main_window = None
    context_label = None
    project_name = os.getenv("AVALON_PROJECT")
+    # Workfile related code
+    workfiles_launched = False
+    workfiles_tool_timer = None

    # Seems unused
    _project_doc = None
@@ -2384,12 +2389,19 @@ def select_nodes(nodes):


def launch_workfiles_app():
-    '''Function letting start workfiles after start of host
-    '''
-    from openpype.lib import (
-        env_value_to_bool
-    )
-    from .pipeline import get_main_window
+    """Show workfiles tool on nuke launch.
+
+    Trigger to show workfiles tool on application launch. Can be executed
+    only once; all other calls are ignored.
+
+    Workfiles tool show is deferred after application initialization using
+    QTimer.
+    """
+
+    if Context.workfiles_launched:
+        return
+
+    Context.workfiles_launched = True

    # get all important settings
    open_at_start = env_value_to_bool(
@@ -2400,10 +2412,38 @@ def launch_workfiles_app():
    if not open_at_start:
        return

-    if not Context.workfiles_launched:
-        Context.workfiles_launched = True
-        main_window = get_main_window()
-        host_tools.show_workfiles(parent=main_window)
+    # Show workfiles tool using timer
+    # - this will probably be triggered during initialization; in that case
+    #   the application is not able to show UIs, so the show must be
+    #   deferred using a timer
+    # - the timer should be processed when initialization ends and the
+    #   application starts to process events
+    timer = QtCore.QTimer()
+    timer.timeout.connect(_launch_workfile_app)
+    timer.setInterval(100)
+    Context.workfiles_tool_timer = timer
+    timer.start()
+
+
+def _launch_workfile_app():
+    # Safeguard to not show window when application is still starting up
+    # or is already closing down.
+    closing_down = QtWidgets.QApplication.closingDown()
+    starting_up = QtWidgets.QApplication.startingUp()
+
+    # Stop the timer if application finished start up or is closing down
+    if closing_down or not starting_up:
+        Context.workfiles_tool_timer.stop()
+        Context.workfiles_tool_timer = None
+
+    # Skip if application is starting up or closing down
+    if starting_up or closing_down:
+        return
+
+    from .pipeline import get_main_window
+
+    main_window = get_main_window()
+    host_tools.show_workfiles(parent=main_window)


def process_workfile_builder():
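The deferral trick above is plain Qt: a QTimer armed during startup only fires once the event loop is pumping, so UI work lands after host initialization finishes. A minimal standalone sketch of the same pattern, assuming a running QApplication outside Nuke:

from Qt import QtCore, QtWidgets


def _deferred_show():
    # Runs once the application processes events, i.e. after startup.
    if QtWidgets.QApplication.startingUp():
        return
    print("safe to show UI now")


timer = QtCore.QTimer()
timer.setInterval(100)
timer.timeout.connect(_deferred_show)
timer.start()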
@@ -120,8 +120,9 @@ def install():
    nuke.addOnCreate(workfile_settings.set_context_settings, nodeClass="Root")
    nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root")
    nuke.addOnCreate(process_workfile_builder, nodeClass="Root")
+    nuke.addOnCreate(launch_workfiles_app, nodeClass="Root")

    _install_menu()
-    launch_workfiles_app()


def uninstall():
@@ -54,20 +54,28 @@ class LoadClip(plugin.NukeLoader):
    script_start = int(nuke.root()["first_frame"].value())

    # option gui
-    defaults = {
-        "start_at_workfile": True
-    }
+    options_defaults = {
+        "start_at_workfile": True,
+        "add_retime": True
+    }

-    options = [
-        qargparse.Boolean(
-            "start_at_workfile",
-            help="Load at workfile start frame",
-            default=True
-        )
-    ]

    node_name_template = "{class_name}_{ext}"

+    @classmethod
+    def get_options(cls, *args):
+        return [
+            qargparse.Boolean(
+                "start_at_workfile",
+                help="Load at workfile start frame",
+                default=cls.options_defaults["start_at_workfile"]
+            ),
+            qargparse.Boolean(
+                "add_retime",
+                help="Load with retime",
+                default=cls.options_defaults["add_retime"]
+            )
+        ]
+
    @classmethod
    def get_representations(cls):
        return (
@@ -86,7 +94,10 @@ class LoadClip(plugin.NukeLoader):
        file = self.fname.replace("\\", "/")

        start_at_workfile = options.get(
-            "start_at_workfile", self.defaults["start_at_workfile"])
+            "start_at_workfile", self.options_defaults["start_at_workfile"])
+
+        add_retime = options.get(
+            "add_retime", self.options_defaults["add_retime"])

        version = context['version']
        version_data = version.get("data", {})
@@ -151,7 +162,7 @@ class LoadClip(plugin.NukeLoader):
        data_imprint = {}
        for k in add_keys:
            if k == 'version':
-                data_imprint.update({k: context["version"]['name']})
+                data_imprint[k] = context["version"]['name']
            elif k == 'colorspace':
                colorspace = repre["data"].get(k)
                colorspace = colorspace or version_data.get(k)
@@ -159,10 +170,13 @@ class LoadClip(plugin.NukeLoader):
            if used_colorspace:
                data_imprint["used_colorspace"] = used_colorspace
            else:
-                data_imprint.update(
-                    {k: context["version"]['data'].get(k, str(None))})
+                data_imprint[k] = context["version"]['data'].get(
+                    k, str(None))

-        data_imprint.update({"objectName": read_name})
+        data_imprint["objectName"] = read_name

+        if add_retime and version_data.get("retime", None):
+            data_imprint["addRetime"] = True

        read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
@@ -174,7 +188,7 @@ class LoadClip(plugin.NukeLoader):
            loader=self.__class__.__name__,
            data=data_imprint)

-        if version_data.get("retime", None):
+        if add_retime and version_data.get("retime", None):
            self._make_retimes(read_node, version_data)

        self.set_as_member(read_node)
@@ -198,7 +212,12 @@ class LoadClip(plugin.NukeLoader):
        read_node = nuke.toNode(container['objectName'])
        file = get_representation_path(representation).replace("\\", "/")

-        start_at_workfile = bool("start at" in read_node['frame_mode'].value())
+        start_at_workfile = "start at" in read_node['frame_mode'].value()
+
+        add_retime = [
+            key for key in read_node.knobs().keys()
+            if "addRetime" in key
+        ]

        project_name = legacy_io.active_project()
        version_doc = get_version_by_id(project_name, representation["parent"])
@@ -286,7 +305,7 @@ class LoadClip(plugin.NukeLoader):
            "updated to version: {}".format(version_doc.get("name"))
        )

-        if version_data.get("retime", None):
+        if add_retime and version_data.get("retime", None):
            self._make_retimes(read_node, version_data)
        else:
            self.clear_members(read_node)
@@ -152,6 +152,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
        self.log.debug("__ first_frame: {}".format(first_frame))
        self.log.debug("__ slate_first_frame: {}".format(slate_first_frame))

+        above_slate_node = slate_node.dependencies().pop()
        # fallback if files do not exist
        if self._check_frames_exists(instance):
            # Read node
@@ -164,8 +165,16 @@ class ExtractSlateFrame(openpype.api.Extractor):
            r_node["colorspace"].setValue(instance.data["colorspace"])
            previous_node = r_node
            temporary_nodes = [previous_node]

+            # adding copy metadata node for correct frame metadata
+            cm_node = nuke.createNode("CopyMetaData")
+            cm_node.setInput(0, previous_node)
+            cm_node.setInput(1, above_slate_node)
+            previous_node = cm_node
+            temporary_nodes.append(cm_node)

        else:
-            previous_node = slate_node.dependencies().pop()
+            previous_node = above_slate_node
            temporary_nodes = []

        # only create colorspace baking if toggled on
@@ -319,14 +319,13 @@ def get_current_timeline_items(
    selected_track_count = timeline.GetTrackCount(track_type)

    # loop all tracks and get items
-    _clips = dict()
+    _clips = {}
    for track_index in range(1, (int(selected_track_count) + 1)):
        _track_name = timeline.GetTrackName(track_type, track_index)

        # filter out all unmatched track names
-        if track_name:
-            if _track_name not in track_name:
-                continue
+        if track_name and _track_name not in track_name:
+            continue

        timeline_items = timeline.GetItemListInTrack(
            track_type, track_index)
@@ -348,12 +347,8 @@ def get_current_timeline_items(
                "index": clip_index
            }
            ti_color = ti.GetClipColor()
-            if filter is True:
-                if selecting_color in ti_color:
-                    selected_clips.append(data)
-            else:
+            if filter and selecting_color in ti_color or not filter:
                selected_clips.append(data)

    return selected_clips
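The collapsed condition leans on Python operator precedence: `and` binds tighter than `or`, so the new line reads as `(filter and selecting_color in ti_color) or (not filter)`, which matches the old nested branches. A quick equivalence check with generalized names:

def keep(filtering, color_match):
    # The one-liner from the diff, names generalized for the check.
    return filtering and color_match or not filtering


for filtering in (True, False):
    for color_match in (True, False):
        # Old logic: when filtering, require a color match; otherwise keep.
        old = color_match if filtering else True
        assert keep(filtering, color_match) == old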
@@ -506,7 +506,7 @@ class Creator(LegacyCreator):
        super(Creator, self).__init__(*args, **kwargs)
        from openpype.api import get_current_project_settings
        resolve_p_settings = get_current_project_settings().get("resolve")
-        self.presets = dict()
+        self.presets = {}
        if resolve_p_settings:
            self.presets = resolve_p_settings["create"].get(
                self.__class__.__name__, {})
@@ -116,12 +116,13 @@ class CreateShotClip(resolve.Creator):
                "order": 0},
            "vSyncTrack": {
                "value": gui_tracks,  # noqa
                "type": "QComboBox",
                "label": "Hero track",
                "target": "ui",
                "toolTip": "Select driving track name which should be mastering all others",  # noqa
                "order": 1
            }
        }
    },
    "publishSettings": {
        "type": "section",
|
|||
"target": "ui",
|
||||
"order": 4,
|
||||
"value": {
|
||||
"workfileFrameStart": {
|
||||
"value": 1001,
|
||||
"type": "QSpinBox",
|
||||
"label": "Workfiles Start Frame",
|
||||
"target": "tag",
|
||||
"toolTip": "Set workfile starting frame number", # noqa
|
||||
"order": 0},
|
||||
"handleStart": {
|
||||
"value": 0,
|
||||
"type": "QSpinBox",
|
||||
"label": "Handle start (head)",
|
||||
"target": "tag",
|
||||
"toolTip": "Handle at start of clip", # noqa
|
||||
"order": 1},
|
||||
"handleEnd": {
|
||||
"value": 0,
|
||||
"type": "QSpinBox",
|
||||
"label": "Handle end (tail)",
|
||||
"target": "tag",
|
||||
"toolTip": "Handle at end of clip", # noqa
|
||||
"order": 2},
|
||||
}
|
||||
"workfileFrameStart": {
|
||||
"value": 1001,
|
||||
"type": "QSpinBox",
|
||||
"label": "Workfiles Start Frame",
|
||||
"target": "tag",
|
||||
"toolTip": "Set workfile starting frame number", # noqa
|
||||
"order": 0
|
||||
},
|
||||
"handleStart": {
|
||||
"value": 0,
|
||||
"type": "QSpinBox",
|
||||
"label": "Handle start (head)",
|
||||
"target": "tag",
|
||||
"toolTip": "Handle at start of clip", # noqa
|
||||
"order": 1
|
||||
},
|
||||
"handleEnd": {
|
||||
"value": 0,
|
||||
"type": "QSpinBox",
|
||||
"label": "Handle end (tail)",
|
||||
"target": "tag",
|
||||
"toolTip": "Handle at end of clip", # noqa
|
||||
"order": 2
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
@@ -229,8 +233,10 @@ class CreateShotClip(resolve.Creator):
        v_sync_track = widget.result["vSyncTrack"]["value"]

        # sort selected trackItems by
-        sorted_selected_track_items = list()
-        unsorted_selected_track_items = list()
+        sorted_selected_track_items = []
+        unsorted_selected_track_items = []
+        print("_____ selected ______")
+        print(self.selected)
        for track_item_data in self.selected:
            if track_item_data["track"]["name"] in v_sync_track:
                sorted_selected_track_items.append(track_item_data)
@@ -253,10 +259,10 @@ class CreateShotClip(resolve.Creator):
            "sq_frame_start": sq_frame_start,
            "sq_markers": sq_markers
        }

        print(kwargs)
        for i, track_item_data in enumerate(sorted_selected_track_items):
            self.rename_index = i
            self.log.info(track_item_data)
            # convert track item to timeline media pool item
            track_item = resolve.PublishClip(
                self, track_item_data, **kwargs).convert()
@@ -23,7 +23,7 @@ class LoadClip(resolve.TimelineItemLoader):
    """

    families = ["render2d", "source", "plate", "render", "review"]
-    representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", ".mov"]
+    representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", "mov"]

    label = "Load as clip"
    order = -10
@@ -30,7 +30,8 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
        "asset": asset,
        "subset": "{}{}".format(asset, subset.capitalize()),
        "item": project,
-        "family": "workfile"
+        "family": "workfile",
+        "families": []
    }

    # create instance with workfile
@@ -1,20 +1,8 @@
from .pipeline import (
-    install,
-    ls,
-
-    set_project_name,
-    get_context_title,
-    get_context_data,
-    update_context_data,
    TrayPublisherHost,
)


__all__ = (
-    "install",
-    "ls",
-
-    "set_project_name",
-    "get_context_title",
-    "get_context_data",
-    "update_context_data",
    "TrayPublisherHost",
)
@@ -9,6 +9,8 @@ from openpype.pipeline import (
    register_creator_plugin_path,
    legacy_io,
)
+from openpype.host import HostBase, INewPublisher


ROOT_DIR = os.path.dirname(os.path.dirname(
    os.path.abspath(__file__)
@@ -17,6 +19,35 @@ PUBLISH_PATH = os.path.join(ROOT_DIR, "plugins", "publish")
 CREATE_PATH = os.path.join(ROOT_DIR, "plugins", "create")
 
 
+class TrayPublisherHost(HostBase, INewPublisher):
+    name = "traypublisher"
+
+    def install(self):
+        os.environ["AVALON_APP"] = self.name
+        legacy_io.Session["AVALON_APP"] = self.name
+
+        pyblish.api.register_host("traypublisher")
+        pyblish.api.register_plugin_path(PUBLISH_PATH)
+        register_creator_plugin_path(CREATE_PATH)
+
+    def get_context_title(self):
+        return HostContext.get_project_name()
+
+    def get_context_data(self):
+        return HostContext.get_context_data()
+
+    def update_context_data(self, data, changes):
+        HostContext.save_context_data(data, changes)
+
+    def set_project_name(self, project_name):
+        # TODO Deregister project specific plugins and register new project
+        #   plugins
+        os.environ["AVALON_PROJECT"] = project_name
+        legacy_io.Session["AVALON_PROJECT"] = project_name
+        legacy_io.install()
+        HostContext.set_project_name(project_name)
+
+
 class HostContext:
     _context_json_path = None
 
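A minimal sketch of how the new host class could be driven outside of the tray UI. The import path is an assumption based on this file's location, and "demo_project" is a placeholder:

    from openpype.hosts.traypublisher.api.pipeline import TrayPublisherHost

    host = TrayPublisherHost()
    host.install()                          # registers pyblish plugin paths
    host.set_project_name("demo_project")   # placeholder project name
    print(host.get_context_title())         # resolves to the project name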
@@ -150,32 +181,3 @@ def get_context_data():
 
 def update_context_data(data, changes):
     HostContext.save_context_data(data)
-
-
-def get_context_title():
-    return HostContext.get_project_name()
-
-
-def ls():
-    """Probably will never return loaded containers."""
-    return []
-
-
-def install():
-    """This is called before a project is known.
-
-    Project is defined with 'set_project_name'.
-    """
-    os.environ["AVALON_APP"] = "traypublisher"
-
-    pyblish.api.register_host("traypublisher")
-    pyblish.api.register_plugin_path(PUBLISH_PATH)
-    register_creator_plugin_path(CREATE_PATH)
-
-
-def set_project_name(project_name):
-    # TODO Deregister project specific plugins and register new project plugins
-    os.environ["AVALON_PROJECT"] = project_name
-    legacy_io.Session["AVALON_PROJECT"] = project_name
-    legacy_io.install()
-    HostContext.set_project_name(project_name)
@@ -1,8 +1,8 @@
-from openpype.lib.attribute_definitions import FileDef
 from openpype.pipeline import (
     Creator,
     CreatedInstance
 )
+from openpype.lib import FileDef
 
 from .pipeline import (
     list_instances,
@@ -12,6 +12,29 @@ from .pipeline import (
 )
 
 
+IMAGE_EXTENSIONS = [
+    ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
+    ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
+    ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
+    ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
+    ".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
+    ".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
+    ".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
+    ".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
+    ".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
+    ".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
+    ".xpm", ".xwd"
+]
+VIDEO_EXTENSIONS = [
+    ".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
+    ".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
+    ".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
+    ".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
+    ".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
+]
+REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
+
+
 class TrayPublishCreator(Creator):
     create_allow_context_change = True
     host_name = "traypublisher"
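A quick sketch of how these extension lists can be used to decide whether a picked file is reviewable. The helper below is hypothetical and not part of the diff:

    import os

    def is_reviewable(filepath):
        # extensions in the lists above are stored lowercase with a dot
        ext = os.path.splitext(filepath)[1].lower()
        return ext in REVIEW_EXTENSIONS

    is_reviewable("/tmp/shot010_plate.mov")  # True, ".mov" is a video extension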
@@ -37,6 +60,21 @@ class TrayPublishCreator(Creator):
         # Use same attributes as for instance attributes
         return self.get_instance_attr_defs()
 
+    def _store_new_instance(self, new_instance):
+        """Tray publisher specific method to store instance.
+
+        Instance is stored into "workfile" of traypublisher and also added
+        to CreateContext.
+
+        Args:
+            new_instance (CreatedInstance): Instance that should be stored.
+        """
+
+        # Host implementation of storing metadata about instance
+        HostContext.add_instance(new_instance.data_to_store())
+        # Add instance to current context
+        self._add_instance_to_context(new_instance)
+
 
 class SettingsCreator(TrayPublishCreator):
     create_allow_context_change = True
@@ -58,19 +96,27 @@ class SettingsCreator(TrayPublishCreator):
         data["settings_creator"] = True
         # Create new instance
         new_instance = CreatedInstance(self.family, subset_name, data, self)
-        # Host implementation of storing metadata about instance
-        HostContext.add_instance(new_instance.data_to_store())
-        # Add instance to current context
-        self._add_instance_to_context(new_instance)
+
+        self._store_new_instance(new_instance)
 
     def get_instance_attr_defs(self):
         return [
             FileDef(
-                "filepath",
+                "representation_files",
                 folders=False,
                 extensions=self.extensions,
                 allow_sequences=self.allow_sequences,
-                label="Filepath",
                 single_item=not self.allow_multiple_items,
+                label="Representations",
             ),
+            FileDef(
+                "reviewable",
+                folders=False,
+                extensions=REVIEW_EXTENSIONS,
+                allow_sequences=True,
+                single_item=True,
+                label="Reviewable representations",
+                extensions_label="Single reviewable item"
+            )
         ]
@@ -92,6 +138,7 @@ class SettingsCreator(TrayPublishCreator):
             "detailed_description": item_data["detailed_description"],
             "extensions": item_data["extensions"],
             "allow_sequences": item_data["allow_sequences"],
+            "allow_multiple_items": item_data["allow_multiple_items"],
             "default_variants": item_data["default_variants"]
         }
     )
@@ -0,0 +1,216 @@
+import copy
+import os
+import re
+
+from openpype.client import get_assets, get_asset_by_name
+from openpype.lib import (
+    FileDef,
+    BoolDef,
+    get_subset_name_with_asset_doc,
+    TaskNotSetError,
+)
+from openpype.pipeline import (
+    CreatedInstance,
+    CreatorError
+)
+
+from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
+
+
+class BatchMovieCreator(TrayPublishCreator):
+    """Creates instances from movie file(s).
+
+    Intended for .mov files, but should work for any video file.
+    Doesn't handle image sequences though.
+    """
+    identifier = "render_movie_batch"
+    label = "Batch Movies"
+    family = "render"
+    description = "Publish batch of video files"
+
+    create_allow_context_change = False
+    version_regex = re.compile(r"^(.+)_v([0-9]+)$")
+
+    def __init__(self, project_settings, *args, **kwargs):
+        super(BatchMovieCreator, self).__init__(project_settings,
+                                                *args, **kwargs)
+        creator_settings = (
+            project_settings["traypublisher"]["BatchMovieCreator"]
+        )
+        self.default_variants = creator_settings["default_variants"]
+        self.default_tasks = creator_settings["default_tasks"]
+        self.extensions = creator_settings["extensions"]
+
+    def get_icon(self):
+        return "fa.file"
+
+    def create(self, subset_name, data, pre_create_data):
+        file_paths = pre_create_data.get("filepath")
+        if not file_paths:
+            return
+
+        for file_info in file_paths:
+            instance_data = copy.deepcopy(data)
+            file_name = file_info["filenames"][0]
+            filepath = os.path.join(file_info["directory"], file_name)
+            instance_data["creator_attributes"] = {"filepath": filepath}
+
+            asset_doc, version = self.get_asset_doc_from_file_name(
+                file_name, self.project_name)
+
+            subset_name, task_name = self._get_subset_and_task(
+                asset_doc, data["variant"], self.project_name)
+
+            instance_data["task"] = task_name
+            instance_data["asset"] = asset_doc["name"]
+
+            # Create new instance
+            new_instance = CreatedInstance(self.family, subset_name,
+                                           instance_data, self)
+            self._store_new_instance(new_instance)
+
+    def get_asset_doc_from_file_name(self, source_filename, project_name):
+        """Try to parse out asset name from file name provided.
+
+        Artists might provide various file name formats.
+        Currently handled:
+            - chair.mov
+            - chair_v001.mov
+            - my_chair_to_upload.mov
+        """
+        version = None
+        asset_name = os.path.splitext(source_filename)[0]
+        # Always first check if source filename is in assets
+        matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
+            project_name, asset_name)
+
+        if matching_asset_doc is None:
+            matching_asset_doc, version = (
+                self._parse_with_version(project_name, asset_name))
+
+        if matching_asset_doc is None:
+            matching_asset_doc = self._parse_containing(project_name,
+                                                        asset_name)
+
+        if matching_asset_doc is None:
+            raise CreatorError(
+                "Cannot guess asset name from {}".format(source_filename))
+
+        return matching_asset_doc, version
+
+    def _parse_with_version(self, project_name, asset_name):
+        """Try to parse asset name from a file name containing a version too.
+
+        Eg. 'chair_v001.mov' >> 'chair', 1
+        """
+        self.log.debug((
+            "Asset doc by \"{}\" was not found, trying version regex."
+        ).format(asset_name))
+
+        matching_asset_doc = version_number = None
+
+        regex_result = self.version_regex.findall(asset_name)
+        if regex_result:
+            _asset_name, _version_number = regex_result[0]
+            matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
+                project_name, _asset_name)
+            if matching_asset_doc:
+                version_number = int(_version_number)
+
+        return matching_asset_doc, version_number
+
+    def _parse_containing(self, project_name, asset_name):
+        """Look if file name contains any existing asset name."""
+        for asset_doc in get_assets(project_name, fields=["name"]):
+            if asset_doc["name"].lower() in asset_name.lower():
+                return get_asset_by_name(project_name, asset_doc["name"])
+
+    def _get_subset_and_task(self, asset_doc, variant, project_name):
+        """Create subset name according to standard template process."""
+        task_name = self._get_task_name(asset_doc)
+
+        try:
+            subset_name = get_subset_name_with_asset_doc(
+                self.family,
+                variant,
+                task_name,
+                asset_doc,
+                project_name
+            )
+        except TaskNotSetError:
+            # Create instance with fake task
+            # - instance will be marked as invalid so it can't be published,
+            #   but user has the ability to change it
+            # NOTE: This expects that there is no task 'Undefined' on asset
+            task_name = "Undefined"
+            subset_name = get_subset_name_with_asset_doc(
+                self.family,
+                variant,
+                task_name,
+                asset_doc,
+                project_name
+            )
+
+        return subset_name, task_name
+
+    def _get_task_name(self, asset_doc):
+        """Get applicable task from 'asset_doc'."""
+        available_task_names = {}
+        asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
+        for task_name in asset_tasks.keys():
+            available_task_names[task_name.lower()] = task_name
+
+        task_name = None
+        for _task_name in self.default_tasks:
+            _task_name_low = _task_name.lower()
+            if _task_name_low in available_task_names:
+                task_name = available_task_names[_task_name_low]
+                break
+
+        return task_name
+
+    def get_instance_attr_defs(self):
+        return [
+            BoolDef(
+                "add_review_family",
+                default=True,
+                label="Review"
+            )
+        ]
+
+    def get_pre_create_attr_defs(self):
+        # Use same attributes as for instance attributes
+        return [
+            FileDef(
+                "filepath",
+                folders=False,
+                single_item=False,
+                extensions=self.extensions,
+                label="Filepath"
+            ),
+            BoolDef(
+                "add_review_family",
+                default=True,
+                label="Review"
+            )
+        ]
+
+    def get_detail_description(self):
+        return """# Publish batch of .mov files to multiple assets.
+
+        File names must contain only the asset name, or asset name + version
+        (eg. 'chair.mov' or 'chair_v001.mov'; 'my_chair_v001.mov' is not
+        really safe).
+        """
+
+    def _get_asset_by_name_case_not_sensitive(self, project_name, asset_name):
+        """Handle more cases in file names."""
+        asset_name = re.compile(asset_name, re.IGNORECASE)
+
+        assets = list(get_assets(project_name, asset_names=[asset_name]))
+        if assets:
+            if len(assets) > 1:
+                self.log.warning("Too many records found for {}".format(
+                    asset_name))
+                return
+
+            return assets.pop()
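The `version_regex` parsing above can be illustrated in isolation. This standalone sketch mirrors `_parse_with_version` without the database lookup:

    import re

    version_regex = re.compile(r"^(.+)_v([0-9]+)$")

    for stem in ("chair_v001", "chair"):
        result = version_regex.findall(stem)
        if result:
            asset_name, version_number = result[0]
            print(asset_name, int(version_number))  # chair 1
        else:
            print(stem, None)                       # chair None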
@@ -0,0 +1,47 @@
+import os
+
+import pyblish.api
+from openpype.pipeline import OpenPypePyblishPluginMixin
+
+
+class CollectMovieBatch(
+    pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
+):
+    """Collect file url for batch movies and create representation.
+
+    Adds review on instance and to repre.tags based on value of toggle button
+    on creator.
+    """
+
+    label = "Collect Movie Batch Files"
+    order = pyblish.api.CollectorOrder
+
+    hosts = ["traypublisher"]
+
+    def process(self, instance):
+        if instance.data.get("creator_identifier") != "render_movie_batch":
+            return
+
+        creator_attributes = instance.data["creator_attributes"]
+
+        file_url = creator_attributes["filepath"]
+        file_name = os.path.basename(file_url)
+        _, ext = os.path.splitext(file_name)
+
+        repre = {
+            "name": ext[1:],
+            "ext": ext[1:],
+            "files": file_name,
+            "stagingDir": os.path.dirname(file_url),
+            "tags": []
+        }
+
+        if creator_attributes["add_review_family"]:
+            repre["tags"].append("review")
+            instance.data["families"].append("review")
+
+        instance.data["representations"].append(repre)
+
+        instance.data["source"] = file_url
+
+        self.log.debug("instance.data {}".format(instance.data))
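For a hypothetical input like /renders/chair_v001.mov, the collector above builds roughly this representation (values shown for illustration only):

    repre = {
        "name": "mov",
        "ext": "mov",
        "files": "chair_v001.mov",
        "stagingDir": "/renders",
        "tags": ["review"],  # only when 'add_review_family' was enabled
    }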
@@ -1,31 +0,0 @@
-import pyblish.api
-from openpype.lib import BoolDef
-from openpype.pipeline import OpenPypePyblishPluginMixin
-
-
-class CollectReviewFamily(
-    pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
-):
-    """Add review family."""
-
-    label = "Collect Review Family"
-    order = pyblish.api.CollectorOrder - 0.49
-
-    hosts = ["traypublisher"]
-    families = [
-        "image",
-        "render",
-        "plate",
-        "review"
-    ]
-
-    def process(self, instance):
-        values = self.get_attr_values_from_data(instance.data)
-        if values.get("add_review_family"):
-            instance.data["families"].append("review")
-
-    @classmethod
-    def get_attribute_defs(cls):
-        return [
-            BoolDef("add_review_family", label="Review", default=True)
-        ]
@@ -1,9 +1,31 @@
 import os
+import tempfile
 
+import clique
 import pyblish.api
 
 
 class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
-    """Collect data for instances created by settings creators."""
+    """Collect data for instances created by settings creators.
+
+    Plugin creates representations for simple instances based
+    on 'representation_files' attribute stored on instance data.
+
+    There is also a possibility to have a reviewable representation which
+    can be stored under 'reviewable' attribute on instance data. If a
+    representation with the same files as 'reviewable' contains was
+    already created, it is reused.
+
+    Representations can be marked for review and in that case the 'review'
+    family is also added to instance families. Only one representation can
+    be marked for review, so the **first** representation with an extension
+    available in '_review_extensions' is used for review.
+
+    Instance 'source' uses the path from the last representation created
+    from 'representation_files'.
+
+    Sets staging directory on instance. That is probably never used because
+    each created representation has its own staging dir.
+    """
 
     label = "Collect Settings Simple Instances"
     order = pyblish.api.CollectorOrder - 0.49
@@ -14,37 +36,193 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
         if not instance.data.get("settings_creator"):
             return
 
-        if "families" not in instance.data:
-            instance.data["families"] = []
+        instance_label = instance.data["name"]
+        # Create instance's staging dir in temp
+        tmp_folder = tempfile.mkdtemp(prefix="traypublisher_")
+        instance.data["stagingDir"] = tmp_folder
+        instance.context.data["cleanupFullPaths"].append(tmp_folder)
 
-        if "representations" not in instance.data:
-            instance.data["representations"] = []
-        repres = instance.data["representations"]
+        self.log.debug((
+            "Created temp staging directory for instance {}. {}"
+        ).format(instance_label, tmp_folder))
+
+        # Store filepaths for validation of their existence
+        source_filepaths = []
+        # Make sure there are no representations with same name
+        repre_names_counter = {}
+        # Store created names for logging
+        repre_names = []
+        # Store set of filepaths per each representation
+        representation_files_mapping = []
+        source = self._create_main_representations(
+            instance,
+            source_filepaths,
+            repre_names_counter,
+            repre_names,
+            representation_files_mapping
+        )
+
+        self._create_review_representation(
+            instance,
+            source_filepaths,
+            repre_names_counter,
+            repre_names,
+            representation_files_mapping
+        )
+
+        instance.data["source"] = source
+        instance.data["sourceFilepaths"] = list(set(source_filepaths))
+
+        self.log.debug(
+            (
+                "Created Simple Settings instance \"{}\""
+                " with {} representations: {}"
+            ).format(
+                instance_label,
+                len(instance.data["representations"]),
+                ", ".join(repre_names)
+            )
+        )
+
+    def _create_main_representations(
+        self,
+        instance,
+        source_filepaths,
+        repre_names_counter,
+        repre_names,
+        representation_files_mapping
+    ):
         creator_attributes = instance.data["creator_attributes"]
-        filepath_item = creator_attributes["filepath"]
-        self.log.info(filepath_item)
-        filepaths = [
-            os.path.join(filepath_item["directory"], filename)
-            for filename in filepath_item["filenames"]
-        ]
-        instance.data["sourceFilepaths"] = filepaths
-        instance.data["stagingDir"] = filepath_item["directory"]
+        filepath_items = creator_attributes["representation_files"]
+        if not isinstance(filepath_items, list):
+            filepath_items = [filepath_items]
+
+        source = None
+        for filepath_item in filepath_items:
+            # Skip if filepath item does not have filenames
+            if not filepath_item["filenames"]:
+                continue
+
+            filepaths = {
+                os.path.join(filepath_item["directory"], filename)
+                for filename in filepath_item["filenames"]
+            }
+            source_filepaths.extend(filepaths)
+
+            source = self._calculate_source(filepaths)
+            representation = self._create_representation_data(
+                filepath_item, repre_names_counter, repre_names
+            )
+            instance.data["representations"].append(representation)
+            representation_files_mapping.append(
+                (filepaths, representation, source)
+            )
+        return source
+
+    def _create_review_representation(
+        self,
+        instance,
+        source_filepaths,
+        repre_names_counter,
+        repre_names,
+        representation_files_mapping
+    ):
+        # Skip review representation creation if there are no representations
+        #   created for "main" part
+        # - review representation must not be created in that case so
+        #   validation can care about it
+        if not representation_files_mapping:
+            self.log.warning((
+                "There are missing source representations."
+                " Creation of review representation was skipped."
+            ))
+            return
+
+        creator_attributes = instance.data["creator_attributes"]
+        review_file_item = creator_attributes["reviewable"]
+        filenames = review_file_item.get("filenames")
+        if not filenames:
+            self.log.debug((
+                "Filepath for review is not defined."
+                " Skipping review representation creation."
+            ))
+            return
+
+        filepaths = {
+            os.path.join(review_file_item["directory"], filename)
+            for filename in filenames
+        }
+        source_filepaths.extend(filepaths)
+        # First try to find out representation with same filepaths
+        #   so it's not needed to create new representation just for review
+        review_representation = None
+        # Review path (only for logging)
+        review_path = None
+        for item in representation_files_mapping:
+            _filepaths, representation, repre_path = item
+            if _filepaths == filepaths:
+                review_representation = representation
+                review_path = repre_path
+                break
+
+        if review_representation is None:
+            self.log.debug("Creating new review representation")
+            review_path = self._calculate_source(filepaths)
+            review_representation = self._create_representation_data(
+                review_file_item, repre_names_counter, repre_names
+            )
+            instance.data["representations"].append(review_representation)
+
+        if "review" not in instance.data["families"]:
+            instance.data["families"].append("review")
+
+        review_representation["tags"].append("review")
+        self.log.debug("Representation {} was marked for review. {}".format(
+            review_representation["name"], review_path
+        ))
+
+    def _create_representation_data(
+        self, filepath_item, repre_names_counter, repre_names
+    ):
+        """Create new representation data based on file item.
+
+        Args:
+            filepath_item (Dict[str, Any]): Item with information about
+                representation paths.
+            repre_names_counter (Dict[str, int]): Store count of
+                representation names.
+            repre_names (List[str]): All used representation names. For
+                logging purposes.
+
+        Returns:
+            Dict: Prepared base representation data.
+        """
 
         filenames = filepath_item["filenames"]
         _, ext = os.path.splitext(filenames[0])
-        ext = ext[1:]
         if len(filenames) == 1:
             filenames = filenames[0]
 
-        repres.append({
-            "ext": ext,
-            "name": ext,
+        repre_name = repre_ext = ext[1:]
+        if repre_name not in repre_names_counter:
+            repre_names_counter[repre_name] = 2
+        else:
+            counter = repre_names_counter[repre_name]
+            repre_names_counter[repre_name] += 1
+            repre_name = "{}_{}".format(repre_name, counter)
+        repre_names.append(repre_name)
+        return {
+            "ext": repre_ext,
+            "name": repre_name,
             "stagingDir": filepath_item["directory"],
-            "files": filenames
-        })
+            "files": filenames,
+            "tags": []
+        }
 
-        self.log.debug("Created Simple Settings instance {}".format(
-            instance.data
-        ))
+    def _calculate_source(self, filepaths):
+        cols, rems = clique.assemble(filepaths)
+        if cols:
+            source = cols[0].format("{head}{padding}{tail}")
+        elif rems:
+            source = rems[0]
+        return source
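The `_calculate_source` helper leans on `clique.assemble`, which groups numbered files into collections. A standalone sketch with made-up paths:

    import clique

    filepaths = {
        "/tmp/plate.1001.exr",
        "/tmp/plate.1002.exr",
    }
    cols, rems = clique.assemble(filepaths)
    if cols:
        # prints e.g. "/tmp/plate.%04d.exr"
        print(cols[0].format("{head}{padding}{tail}"))
    elif rems:
        # single files that did not form a sequence
        print(rems[0])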
@@ -3,8 +3,17 @@ import pyblish.api
 from openpype.pipeline import PublishValidationError
 
 
-class ValidateWorkfilePath(pyblish.api.InstancePlugin):
-    """Validate existence of workfile instance."""
+class ValidateFilePath(pyblish.api.InstancePlugin):
+    """Validate existence of source filepaths on instance.
+
+    Plugin looks into key 'sourceFilepaths' and validates that the paths
+    there actually exist on disk.
+
+    Also validates the case when the key is filled but empty; that also
+    raises, so do not fill the key if an unfilled value should not error.
+
+    This is primarily created for Simple Creator instances.
+    """
 
     label = "Validate Workfile"
     order = pyblish.api.ValidatorOrder - 0.49

@@ -14,12 +23,28 @@ class ValidateFilePath(pyblish.api.InstancePlugin):
     def process(self, instance):
         if "sourceFilepaths" not in instance.data:
             self.log.info((
-                "Can't validate source filepaths existence."
+                "Skipped validation of source filepaths existence."
                 " Instance does not have collected 'sourceFilepaths'"
             ))
             return
 
-        filepaths = instance.data.get("sourceFilepaths")
+        family = instance.data["family"]
+        label = instance.data["name"]
+        filepaths = instance.data["sourceFilepaths"]
         if not filepaths:
+            raise PublishValidationError(
+                (
+                    "Source filepaths of '{}' instance \"{}\" are not filled"
+                ).format(family, label),
+                "File not filled",
+                (
+                    "## Files were not filled"
+                    "\nThis means that you didn't enter any files into"
+                    " required file input."
+                    "\n- Please refresh publishing and check instance"
+                    " <b>{}</b>"
+                ).format(label)
+            )
 
         not_found_files = [
             filepath

@@ -34,11 +59,7 @@ class ValidateFilePath(pyblish.api.InstancePlugin):
             raise PublishValidationError(
                 (
                     "Filepath of '{}' instance \"{}\" does not exist:\n{}"
-                ).format(
-                    instance.data["family"],
-                    instance.data["name"],
-                    joined_paths
-                ),
+                ).format(family, label, joined_paths),
                 "File not found",
                 (
                     "## Files were not found\nFiles\n{}"
@@ -120,7 +120,6 @@ from .avalon_context import (
     is_latest,
     any_outdated,
     get_asset,
-    get_hierarchy,
     get_linked_assets,
     get_latest_version,
     get_system_general_anatomy_data,
@@ -292,7 +291,6 @@ __all__ = [
    "is_latest",
    "any_outdated",
    "get_asset",
-    "get_hierarchy",
    "get_linked_assets",
    "get_latest_version",
    "get_system_general_anatomy_data",
@@ -1,269 +1,33 @@
 # -*- coding: utf-8 -*-
-"""Collect render template.
+"""Content was moved to 'openpype.pipeline.publish.abstract_collect_render'.
 
-TODO: use @dataclass when times come.
+Please change your imports as soon as possible.
+
+File will be probably removed in OpenPype 3.14.*
 """
-from abc import abstractmethod
-
-import attr
-import six
-
-import pyblish.api
-
-from openpype.pipeline import legacy_io
-
-from .abstract_metaplugins import AbstractMetaContextPlugin
+import warnings
+from openpype.pipeline.publish import AbstractCollectRender, RenderInstance
 
 
-@attr.s
-class RenderInstance(object):
-    """Data collected by collectors.
-
-    This data class later on passed to collected instances.
-    Those attributes are required later on.
-
-    """
-
-    # metadata
-    version = attr.ib()  # instance version
-    time = attr.ib()  # time of instance creation (get_formatted_current_time)
-    source = attr.ib()  # path to source scene file
-    label = attr.ib()  # label to show in GUI
-    subset = attr.ib()  # subset name
-    task = attr.ib()  # task name
-    asset = attr.ib()  # asset name (AVALON_ASSET)
-    attachTo = attr.ib()  # subset name to attach render to
-    setMembers = attr.ib()  # list of nodes/members producing render output
-    publish = attr.ib()  # bool, True to publish instance
-    name = attr.ib()  # instance name
-
-    # format settings
-    resolutionWidth = attr.ib()  # resolution width (1920)
-    resolutionHeight = attr.ib()  # resolution height (1080)
-    pixelAspect = attr.ib()  # pixel aspect (1.0)
-
-    # time settings
-    frameStart = attr.ib()  # start frame
-    frameEnd = attr.ib()  # start end
-    frameStep = attr.ib()  # frame step
-
-    handleStart = attr.ib(default=None)  # start frame
-    handleEnd = attr.ib(default=None)  # start frame
-
-    # for software (like Harmony) where frame range cannot be set by DB
-    # handles need to be propagated if exist
-    ignoreFrameHandleCheck = attr.ib(default=False)
-
-    # --------------------
-    # With default values
-    # metadata
-    renderer = attr.ib(default="")  # renderer - can be used in Deadline
-    review = attr.ib(default=False)  # generate review from instance (bool)
-    priority = attr.ib(default=50)  # job priority on farm
-
-    family = attr.ib(default="renderlayer")
-    families = attr.ib(default=["renderlayer"])  # list of families
-
-    # format settings
-    multipartExr = attr.ib(default=False)  # flag for multipart exrs
-    convertToScanline = attr.ib(default=False)  # flag for exr conversion
-
-    tileRendering = attr.ib(default=False)  # bool: treat render as tiles
-    tilesX = attr.ib(default=0)  # number of tiles in X
-    tilesY = attr.ib(default=0)  # number of tiles in Y
-
-    # submit_publish_job
-    toBeRenderedOn = attr.ib(default=None)
-    deadlineSubmissionJob = attr.ib(default=None)
-    anatomyData = attr.ib(default=None)
-    outputDir = attr.ib(default=None)
-    context = attr.ib(default=None)
-
-    @frameStart.validator
-    def check_frame_start(self, _, value):
-        """Validate if frame start is not larger then end."""
-        if value > self.frameEnd:
-            raise ValueError("frameStart must be smaller "
-                             "or equal then frameEnd")
-
-    @frameEnd.validator
-    def check_frame_end(self, _, value):
-        """Validate if frame end is not less then start."""
-        if value < self.frameStart:
-            raise ValueError("frameEnd must be smaller "
-                             "or equal then frameStart")
-
-    @tilesX.validator
-    def check_tiles_x(self, _, value):
-        """Validate if tile x isn't less then 1."""
-        if not self.tileRendering:
-            return
-        if value < 1:
-            raise ValueError("tile X size cannot be less then 1")
-
-        if value == 1 and self.tilesY == 1:
-            raise ValueError("both tiles X a Y sizes are set to 1")
-
-    @tilesY.validator
-    def check_tiles_y(self, _, value):
-        """Validate if tile y isn't less then 1."""
-        if not self.tileRendering:
-            return
-        if value < 1:
-            raise ValueError("tile Y size cannot be less then 1")
-
-        if value == 1 and self.tilesX == 1:
-            raise ValueError("both tiles X a Y sizes are set to 1")
+class CollectRenderDeprecated(DeprecationWarning):
+    pass
 
 
-@six.add_metaclass(AbstractMetaContextPlugin)
-class AbstractCollectRender(pyblish.api.ContextPlugin):
-    """Gather all publishable render layers from renderSetup."""
+warnings.simplefilter("always", CollectRenderDeprecated)
+warnings.warn(
+    (
+        "Content of 'abstract_collect_render' was moved."
+        "\nUsing deprecated source of 'abstract_collect_render'. Content was"
+        " moved to 'openpype.pipeline.publish.abstract_collect_render'."
+        " Please change your imports as soon as possible."
+    ),
+    category=CollectRenderDeprecated,
+    stacklevel=4
+)
 
-    order = pyblish.api.CollectorOrder + 0.01
-    label = "Collect Render"
-    sync_workfile_version = False
-
-    def __init__(self, *args, **kwargs):
-        """Constructor."""
-        super(AbstractCollectRender, self).__init__(*args, **kwargs)
-        self._file_path = None
-        self._asset = legacy_io.Session["AVALON_ASSET"]
-        self._context = None
-
-    def process(self, context):
-        """Entry point to collector."""
-        self._context = context
-        for instance in context:
-            # make sure workfile instance publishing is enabled
-            try:
-                if "workfile" in instance.data["families"]:
-                    instance.data["publish"] = True
-                # TODO merge renderFarm and render.farm
-                if ("renderFarm" in instance.data["families"] or
-                        "render.farm" in instance.data["families"]):
-                    instance.data["remove"] = True
-            except KeyError:
-                # be tolerant if 'families' is missing.
-                pass
-
-        self._file_path = context.data["currentFile"].replace("\\", "/")
-
-        render_instances = self.get_instances(context)
-        for render_instance in render_instances:
-            exp_files = self.get_expected_files(render_instance)
-            assert exp_files, "no file names were generated, this is bug"
-
-            # if we want to attach render to subset, check if we have AOV's
-            # in expectedFiles. If so, raise error as we cannot attach AOV
-            # (considered to be subset on its own) to another subset
-            if render_instance.attachTo:
-                assert isinstance(exp_files, list), (
-                    "attaching multiple AOVs or renderable cameras to "
-                    "subset is not supported"
-                )
-
-            frame_start_render = int(render_instance.frameStart)
-            frame_end_render = int(render_instance.frameEnd)
-            if (render_instance.ignoreFrameHandleCheck or
-                    int(context.data['frameStartHandle']) == frame_start_render
-                    and int(context.data['frameEndHandle']) == frame_end_render):  # noqa: W503, E501
-
-                handle_start = context.data['handleStart']
-                handle_end = context.data['handleEnd']
-                frame_start = context.data['frameStart']
-                frame_end = context.data['frameEnd']
-                frame_start_handle = context.data['frameStartHandle']
-                frame_end_handle = context.data['frameEndHandle']
-            else:
-                handle_start = 0
-                handle_end = 0
-                frame_start = frame_start_render
-                frame_end = frame_end_render
-                frame_start_handle = frame_start_render
-                frame_end_handle = frame_end_render
-
-            data = {
-                "handleStart": handle_start,
-                "handleEnd": handle_end,
-                "frameStart": frame_start,
-                "frameEnd": frame_end,
-                "frameStartHandle": frame_start_handle,
-                "frameEndHandle": frame_end_handle,
-                "byFrameStep": int(render_instance.frameStep),
-
-                "author": context.data["user"],
-                # Add source to allow tracing back to the scene from
-                # which was submitted originally
-                "expectedFiles": exp_files,
-            }
-            if self.sync_workfile_version:
-                data["version"] = context.data["version"]
-
-            # add additional data
-            data = self.add_additional_data(data)
-            render_instance_dict = attr.asdict(render_instance)
-
-            instance = context.create_instance(render_instance.name)
-            instance.data["label"] = render_instance.label
-            instance.data.update(render_instance_dict)
-            instance.data.update(data)
-
-        self.post_collecting_action()
-
-    @abstractmethod
-    def get_instances(self, context):
-        """Get all renderable instances and their data.
-
-        Args:
-            context (pyblish.api.Context): Context object.
-
-        Returns:
-            list of :class:`RenderInstance`: All collected renderable
-                instances (like render layers, write nodes, etc.)
-
-        """
-        pass
-
-    @abstractmethod
-    def get_expected_files(self, render_instance):
-        """Get list of expected files.
-
-        Returns:
-            list: expected files. This can be either simple list of files with
-                their paths, or list of dictionaries, where key is name of AOV
-                for example and value is list of files for that AOV.
-
-        Example::
-
-            ['/path/to/file.001.exr', '/path/to/file.002.exr']
-
-            or as dictionary:
-
-            [
-                {
-                    "beauty": ['/path/to/beauty.001.exr', ...],
-                    "mask": ['/path/to/mask.001.exr']
-                }
-            ]
-
-        """
-        pass
-
-    def add_additional_data(self, data):
-        """Add additional data to collected instance.
-
-        This can be overridden by host implementation to add custom
-        additional data.
-
-        """
-        return data
-
-    def post_collecting_action(self):
-        """Execute some code after collection is done.
-
-        This is useful for example for restoring current render layer.
-
-        """
-        pass
+__all__ = (
+    "AbstractCollectRender",
+    "RenderInstance"
+)
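For code still importing from the old location, the shim keeps working but emits the warning above on import. The forward-compatible import is simply:

    # preferred import after this change
    from openpype.pipeline.publish import AbstractCollectRender, RenderInstance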
@@ -1,53 +1,32 @@
 # -*- coding: utf-8 -*-
-"""Abstract ExpectedFile class definition."""
-from abc import ABCMeta, abstractmethod
-import six
+"""Content was moved to 'openpype.pipeline.publish.abstract_expected_files'.
+
+Please change your imports as soon as possible.
+
+File will be probably removed in OpenPype 3.14.*
+"""
+
+import warnings
+from openpype.pipeline.publish import ExpectedFiles
 
 
-@six.add_metaclass(ABCMeta)
-class ExpectedFiles:
-    """Class grouping functionality for all supported renderers.
-
-    Attributes:
-        multipart (bool): Flag if multipart exrs are used.
-
-    """
-
-    multipart = False
-
-    @abstractmethod
-    def get(self, render_instance):
-        """Get expected files for given renderer and render layer.
-
-        This method should return dictionary of all files we are expecting
-        to be rendered from the host. Usually `render_instance` corresponds
-        to *render layer*. Result can be either flat list with the file
-        paths or it can be list of dictionaries. Each key corresponds to
-        for example AOV name or channel, etc.
-
-        Example::
-
-            ['/path/to/file.001.exr', '/path/to/file.002.exr']
-
-            or as dictionary:
-
-            [
-                {
-                    "beauty": ['/path/to/beauty.001.exr', ...],
-                    "mask": ['/path/to/mask.001.exr']
-                }
-            ]
+class ExpectedFilesDeprecated(DeprecationWarning):
+    pass
 
-
-        Args:
-            render_instance (:class:`RenderInstance`): Data passed from
-                collector to determine files. This should be instance of
-                :class:`abstract_collect_render.RenderInstance`
+warnings.simplefilter("always", ExpectedFilesDeprecated)
+warnings.warn(
+    (
+        "Content of 'abstract_expected_files' was moved."
+        "\nUsing deprecated source of 'abstract_expected_files'. Content was"
+        " moved to 'openpype.pipeline.publish.abstract_expected_files'."
+        " Please change your imports as soon as possible."
+    ),
+    category=ExpectedFilesDeprecated,
+    stacklevel=4
+)
 
-        Returns:
-            list: Full paths to expected rendered files.
-            list of dict: Path to expected rendered files categorized by
-                AOVs, etc.
-
-        """
-        raise NotImplementedError()
+__all__ = (
+    "ExpectedFiles",
+)
@@ -1,10 +1,35 @@
-from abc import ABCMeta
-from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin
+"""Content was moved to 'openpype.pipeline.publish.publish_plugins'.
+
+Please change your imports as soon as possible.
+
+File will be probably removed in OpenPype 3.14.*
+"""
+
+import warnings
+from openpype.pipeline.publish import (
+    AbstractMetaInstancePlugin,
+    AbstractMetaContextPlugin
+)
 
 
-class AbstractMetaInstancePlugin(ABCMeta, MetaPlugin):
+class MetaPluginsDeprecated(DeprecationWarning):
     pass
 
 
-class AbstractMetaContextPlugin(ABCMeta, ExplicitMetaPlugin):
-    pass
+warnings.simplefilter("always", MetaPluginsDeprecated)
+warnings.warn(
+    (
+        "Content of 'abstract_metaplugins' was moved."
+        "\nUsing deprecated source of 'abstract_metaplugins'. Content was"
+        " moved to 'openpype.pipeline.publish.publish_plugins'."
+        " Please change your imports as soon as possible."
+    ),
+    category=MetaPluginsDeprecated,
+    stacklevel=4
+)
+
+
+__all__ = (
+    "AbstractMetaInstancePlugin",
+    "AbstractMetaContextPlugin",
+)
@@ -11,6 +11,10 @@ from abc import ABCMeta, abstractmethod
 
 import six
 
+from openpype.client import (
+    get_project,
+    get_asset_by_name,
+)
 from openpype.settings import (
     get_system_settings,
     get_project_settings,
@@ -661,7 +665,11 @@ class ApplicationExecutable:
         if os.path.exists(plist_filepath):
             import plistlib
 
-            parsed_plist = plistlib.readPlist(plist_filepath)
+            if hasattr(plistlib, "load"):
+                with open(plist_filepath, "rb") as stream:
+                    parsed_plist = plistlib.load(stream)
+            else:
+                parsed_plist = plistlib.readPlist(plist_filepath)
             executable_filename = parsed_plist.get("CFBundleExecutable")
 
         if executable_filename:
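plistlib.readPlist was deprecated in Python 3.4 and removed in 3.9, so the hasattr branch keeps the code working on both old and new interpreters. The same fallback as a standalone sketch (the path argument is a placeholder):

    import plistlib

    def read_info_plist(path):
        if hasattr(plistlib, "load"):
            # modern API (Python 3.4+)
            with open(path, "rb") as stream:
                return plistlib.load(stream)
        # legacy API, removed in Python 3.9
        return plistlib.readPlist(path)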
@@ -1310,11 +1318,8 @@ def get_app_environments_for_context(
     dbcon.install()
 
     # Project document
-    project_doc = dbcon.find_one({"type": "project"})
-    asset_doc = dbcon.find_one({
-        "type": "asset",
-        "name": asset_name
-    })
+    project_doc = get_project(project_name)
+    asset_doc = get_asset_by_name(project_name, asset_name)
 
     if modules_manager is None:
         from openpype.modules import ModulesManager
@@ -14,6 +14,7 @@ class AbstractAttrDefMeta(ABCMeta):
 
     Each object of `AbtractAttrDef` must have defined 'key' attribute.
     """
+
     def __call__(self, *args, **kwargs):
         obj = super(AbstractAttrDefMeta, self).__call__(*args, **kwargs)
         init_class = getattr(obj, "__init__class__", None)
@@ -45,6 +46,7 @@ class AbtractAttrDef:
         is_label_horizontal(bool): UI specific argument. Specify if label is
             next to value input or ahead.
     """
+
     is_value_def = True
 
     def __init__(
@@ -77,6 +79,7 @@ class AbtractAttrDef:
         Convert passed value to a valid type. Use default if value can't be
         converted.
         """
+
         pass
 
 
@@ -113,6 +116,7 @@ class UnknownDef(AbtractAttrDef):
     This attribute can be used to keep existing data unchanged but does not
     have known definition of type.
     """
+
     def __init__(self, key, default=None, **kwargs):
         kwargs["default"] = default
         super(UnknownDef, self).__init__(key, **kwargs)
@@ -204,6 +208,7 @@ class TextDef(AbtractAttrDef):
         placeholder(str): UI placeholder for attribute.
         default(str, None): Default value. Empty string used when not defined.
     """
+
     def __init__(
         self, key, multiline=None, regex=None, placeholder=None, default=None,
         **kwargs
@@ -531,14 +536,15 @@ class FileDef(AbtractAttrDef):
     Args:
         single_item(bool): Allow only single path item.
         folders(bool): Allow folder paths.
-        extensions(list<str>): Allow files with extensions. Empty list will
+        extensions(List[str]): Allow files with extensions. Empty list will
             allow all extensions and None will disable files completely.
-        default(str, list<str>): Defautl value.
+        extensions_label(str): Custom label shown instead of extensions in UI.
+        default(str, List[str]): Default value.
     """
 
     def __init__(
         self, key, single_item=True, folders=None, extensions=None,
-        allow_sequences=True, default=None, **kwargs
+        allow_sequences=True, extensions_label=None, default=None, **kwargs
     ):
         if folders is None and extensions is None:
             folders = True
@@ -578,6 +584,7 @@ class FileDef(AbtractAttrDef):
         self.folders = folders
         self.extensions = set(extensions)
         self.allow_sequences = allow_sequences
+        self.extensions_label = extensions_label
        super(FileDef, self).__init__(key, default=default, **kwargs)
 
     def __eq__(self, other):
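A hedged sketch of constructing the definition with the new argument, mirroring the tray publisher creators earlier in this diff (the extension filter values are examples only):

    from openpype.lib import FileDef

    reviewable_def = FileDef(
        "reviewable",
        folders=False,
        extensions=[".mov", ".mp4"],  # example extension filter
        allow_sequences=True,
        single_item=True,
        label="Reviewable representations",
        extensions_label="Single reviewable item",
    )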
@@ -7,9 +7,21 @@ import platform
 import logging
 import collections
 import functools
+import warnings
 
 from bson.objectid import ObjectId
 
+from openpype.client import (
+    get_project,
+    get_assets,
+    get_asset_by_name,
+    get_subset_by_name,
+    get_subsets,
+    get_version_by_id,
+    get_last_versions,
+    get_last_version_by_subset_id,
+    get_representations,
+    get_representation_by_id,
+    get_workfile_info,
+)
 from openpype.settings import (
     get_project_settings,
     get_system_settings
@@ -35,6 +47,51 @@ PROJECT_NAME_REGEX = re.compile(
 )
 
 
+class AvalonContextDeprecatedWarning(DeprecationWarning):
+    pass
+
+
+def deprecated(new_destination):
+    """Mark functions as deprecated.
+
+    It will result in a warning being emitted when the function is used.
+    """
+
+    func = None
+    if callable(new_destination):
+        func = new_destination
+        new_destination = None
+
+    def _decorator(decorated_func):
+        if new_destination is None:
+            warning_message = (
+                " Please check content of deprecated function to figure out"
+                " possible replacement."
+            )
+        else:
+            warning_message = " Please replace your usage with '{}'.".format(
+                new_destination
+            )
+
+        @functools.wraps(decorated_func)
+        def wrapper(*args, **kwargs):
+            warnings.simplefilter("always", AvalonContextDeprecatedWarning)
+            warnings.warn(
+                (
+                    "Call to deprecated function '{}'"
+                    "\nFunction was moved or removed.{}"
+                ).format(decorated_func.__name__, warning_message),
+                category=AvalonContextDeprecatedWarning,
+                stacklevel=4
+            )
+            return decorated_func(*args, **kwargs)
+        return wrapper
+
+    if func is None:
+        return _decorator
+    return _decorator(func)
+
+
 def create_project(
     project_name, project_code, library_project=False, dbcon=None
 ):
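Usage of the decorator as it appears later in this diff; both forms work because `deprecated` accepts either a replacement path or the function itself:

    @deprecated("openpype.client.get_workfile_info")
    def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
        ...

    @deprecated
    def some_removed_helper():  # hypothetical bare usage, no replacement named
        ...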
@@ -64,6 +121,11 @@ def create_project(
     from openpype.pipeline import AvalonMongoDB
     from openpype.pipeline.schema import validate
 
+    if get_project(project_name, fields=["name"]):
+        raise ValueError("Project with name \"{}\" already exists".format(
+            project_name
+        ))
+
     if dbcon is None:
         dbcon = AvalonMongoDB()
 
@@ -73,15 +135,6 @@ def create_project(
     ).format(project_name))
 
     database = dbcon.database
-    project_doc = database[project_name].find_one(
-        {"type": "project"},
-        {"name": 1}
-    )
-    if project_doc:
-        raise ValueError("Project with name \"{}\" already exists".format(
-            project_name
-        ))
-
     project_doc = {
         "type": "project",
         "name": project_name,
@@ -104,7 +157,7 @@ def create_project(
         database[project_name].delete_one({"type": "project"})
         raise
 
-    project_doc = database[project_name].find_one({"type": "project"})
+    project_doc = get_project(project_name)
 
     try:
         # Validate created project document
@@ -136,23 +189,23 @@ def is_latest(representation):
 
     Returns:
         bool: Whether the representation is of latest version.
-
     """
 
-    version = legacy_io.find_one({"_id": representation['parent']})
+    project_name = legacy_io.active_project()
+    version = get_version_by_id(
+        project_name,
+        representation["parent"],
+        fields=["_id", "type", "parent"]
+    )
     if version["type"] == "hero_version":
         return True
 
     # Get highest version under the parent
-    highest_version = legacy_io.find_one({
-        "type": "version",
-        "parent": version["parent"]
-    }, sort=[("name", -1)], projection={"name": True})
+    last_version = get_last_version_by_subset_id(
+        project_name, version["parent"], fields=["_id"]
+    )
 
-    if version['name'] == highest_version['name']:
-        return True
-    else:
-        return False
+    return version["_id"] == last_version["_id"]
 
 
 @with_pipeline_io
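A sketch of calling the rewritten function. The representation id is a placeholder, and `legacy_io` must be installed so `active_project` resolves:

    repre_doc = get_representation_by_id(
        legacy_io.active_project(), "5f7e6a0000000000000000aa"  # placeholder id
    )
    if repre_doc and not is_latest(repre_doc):
        print("Container is outdated")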
@@ -160,6 +213,7 @@ def any_outdated():
     """Return whether the current scene has any outdated content"""
     from openpype.pipeline import registered_host
 
+    project_name = legacy_io.active_project()
     checked = set()
     host = registered_host()
     for container in host.ls():
@@ -167,12 +221,8 @@ def any_outdated():
         if representation in checked:
             continue
 
-        representation_doc = legacy_io.find_one(
-            {
-                "_id": ObjectId(representation),
-                "type": "representation"
-            },
-            projection={"parent": True}
+        representation_doc = get_representation_by_id(
+            project_name, representation, fields=["parent"]
         )
         if representation_doc and not is_latest(representation_doc):
             return True
@@ -190,81 +240,29 @@
 def get_asset(asset_name=None):
     """ Returning asset document from database by its name.
 
-        Doesn't count with duplicities on asset names!
+    Doesn't count with duplicities on asset names!
 
-        Args:
-            asset_name (str)
+    Args:
+        asset_name (str)
 
-        Returns:
-            (MongoDB document)
+    Returns:
+        (MongoDB document)
     """
+
+    project_name = legacy_io.active_project()
     if not asset_name:
         asset_name = legacy_io.Session["AVALON_ASSET"]
 
-    asset_document = legacy_io.find_one({
-        "name": asset_name,
-        "type": "asset"
-    })
-
+    asset_document = get_asset_by_name(project_name, asset_name)
     if not asset_document:
         raise TypeError("Entity \"{}\" was not found in DB".format(asset_name))
 
     return asset_document
 
 
-@with_pipeline_io
-def get_hierarchy(asset_name=None):
-    """
-    Obtain asset hierarchy path string from mongo db
-
-    Args:
-        asset_name (str)
-
-    Returns:
-        (string): asset hierarchy path
-
-    """
-    if not asset_name:
-        asset_name = legacy_io.Session.get(
-            "AVALON_ASSET",
-            os.environ["AVALON_ASSET"]
-        )
-
-    asset_entity = legacy_io.find_one({
-        "type": 'asset',
-        "name": asset_name
-    })
-
-    not_set = "PARENTS_NOT_SET"
-    entity_parents = asset_entity.get("data", {}).get("parents", not_set)
-
-    # If entity already have parents then just return joined
-    if entity_parents != not_set:
-        return "/".join(entity_parents)
-
-    # Else query parents through visualParents and store result to entity
-    hierarchy_items = []
-    entity = asset_entity
-    while True:
-        parent_id = entity.get("data", {}).get("visualParent")
-        if not parent_id:
-            break
-        entity = legacy_io.find_one({"_id": parent_id})
-        hierarchy_items.append(entity["name"])
-
-    # Add parents to entity data for next query
-    entity_data = asset_entity.get("data", {})
-    entity_data["parents"] = hierarchy_items
-    legacy_io.update_many(
-        {"_id": asset_entity["_id"]},
-        {"$set": {"data": entity_data}}
-    )
-
-    return "/".join(hierarchy_items)
-
-
-def get_system_general_anatomy_data():
-    system_settings = get_system_settings()
+def get_system_general_anatomy_data(system_settings=None):
+    if not system_settings:
+        system_settings = get_system_settings()
     studio_name = system_settings["general"]["studio_name"]
     studio_code = system_settings["general"]["studio_code"]
     return {
@@ -312,11 +310,13 @@ def get_linked_assets(asset_doc):
     Returns:
         (list) Asset documents of input links for passed asset doc.
     """
+
     link_ids = get_linked_asset_ids(asset_doc)
     if not link_ids:
         return []
 
-    return list(legacy_io.find({"_id": {"$in": link_ids}}))
+    project_name = legacy_io.active_project()
+    return list(get_assets(project_name, link_ids))
 
 
 @with_pipeline_io
@@ -338,20 +338,14 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
         dict: Last version document for entered .
     """
 
-    if not dbcon:
-        log.debug("Using `legacy_io` for query.")
-        dbcon = legacy_io
-        # Make sure is installed
-        dbcon.install()
+    if not project_name:
+        if not dbcon:
+            log.debug("Using `legacy_io` for query.")
+            dbcon = legacy_io
+            # Make sure is installed
+            dbcon.install()
 
-    if project_name and project_name != dbcon.Session.get("AVALON_PROJECT"):
-        # `legacy_io` has only `_database` attribute
-        # but `AvalonMongoDB` has `database`
-        database = getattr(dbcon, "database", dbcon._database)
-        collection = database[project_name]
-    else:
-        project_name = dbcon.Session.get("AVALON_PROJECT")
-        collection = dbcon
+        project_name = dbcon.active_project()
 
     log.debug((
         "Getting latest version for Project: \"{}\" Asset: \"{}\""
@@ -359,19 +353,15 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
     ).format(project_name, asset_name, subset_name))
 
     # Query asset document id by asset name
-    asset_doc = collection.find_one(
-        {"type": "asset", "name": asset_name},
-        {"_id": True}
-    )
+    asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
     if not asset_doc:
         log.info(
             "Asset \"{}\" was not found in Database.".format(asset_name)
         )
         return None
 
-    subset_doc = collection.find_one(
-        {"type": "subset", "name": subset_name, "parent": asset_doc["_id"]},
-        {"_id": True}
+    subset_doc = get_subset_by_name(
+        project_name, subset_name, asset_doc["_id"]
     )
     if not subset_doc:
         log.info(
@@ -379,9 +369,8 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
         )
         return None
 
-    version_doc = collection.find_one(
-        {"type": "version", "parent": subset_doc["_id"]},
-        sort=[("name", -1)],
+    version_doc = get_last_version_by_subset_id(
+        project_name, subset_doc["_id"]
     )
     if not version_doc:
         log.info(
@@ -419,28 +408,17 @@ def get_workfile_template_key_from_context(
         ValueError: When both 'dbcon' and 'project_name' were not
             passed.
     """
-    if not dbcon:
-        if not project_name:
+
+    if not project_name:
+        if not dbcon:
             raise ValueError((
                 "`get_workfile_template_key_from_context` requires to pass"
                 " one of 'dbcon' or 'project_name' arguments."
             ))
-        from openpype.pipeline import AvalonMongoDB
-
-        dbcon = AvalonMongoDB()
-        dbcon.Session["AVALON_PROJECT"] = project_name
+        project_name = dbcon.active_project()
 
-    elif not project_name:
-        project_name = dbcon.Session["AVALON_PROJECT"]
-
-    asset_doc = dbcon.find_one(
-        {
-            "type": "asset",
-            "name": asset_name
-        },
-        {
-            "data.tasks": 1
-        }
+    asset_doc = get_asset_by_name(
+        project_name, asset_name, fields=["data.tasks"]
     )
     asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
     task_info = asset_tasks.get(task_name) or {}
@@ -637,6 +615,7 @@ def get_workdir(
     Returns:
         TemplateResult: Workdir path.
     """
+
     if not anatomy:
         from openpype.pipeline import Anatomy
         anatomy = Anatomy(project_doc["name"])
@@ -665,15 +644,11 @@ def template_data_from_session(session=None):
         session = legacy_io.Session
 
     project_name = session["AVALON_PROJECT"]
-    project_doc = legacy_io.database[project_name].find_one({
-        "type": "project"
-    })
-    asset_doc = legacy_io.database[project_name].find_one({
-        "type": "asset",
-        "name": session["AVALON_ASSET"]
-    })
+    asset_name = session["AVALON_ASSET"]
     task_name = session["AVALON_TASK"]
     host_name = session["AVALON_APP"]
+    project_doc = get_project(project_name)
+    asset_doc = get_asset_by_name(project_name, asset_name)
     return get_workdir_data(project_doc, asset_doc, task_name, host_name)
 
 
@ -698,8 +673,8 @@ def compute_session_changes(
|
|||
|
||||
Returns:
|
||||
dict: The required changes in the Session dictionary.
|
||||
|
||||
"""
|
||||
|
||||
changes = dict()
|
||||
|
||||
# If no changes, return directly
|
||||
|
|
@ -717,12 +692,9 @@ def compute_session_changes(
|
|||
|
||||
if not asset_document or not asset_tasks:
|
||||
# Assume asset name
|
||||
asset_document = legacy_io.find_one(
|
||||
{
|
||||
"name": asset,
|
||||
"type": "asset"
|
||||
},
|
||||
{"data.tasks": True}
|
||||
project_name = session["AVALON_PROJECT"]
|
||||
asset_document = get_asset_by_name(
|
||||
project_name, asset, fields=["data.tasks"]
|
||||
)
|
||||
assert asset_document, "Asset must exist"
|
||||
|
||||
|
|
@ -819,6 +791,7 @@ def update_current_task(task=None, asset=None, app=None, template_key=None):
|
|||
|
||||
|
||||
@with_pipeline_io
|
||||
@deprecated("openpype.client.get_workfile_info")
|
||||
def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
|
||||
"""Return workfile document for entered context.
|
||||
|
||||
|
|
@ -835,16 +808,13 @@ def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
|
|||
Returns:
|
||||
dict: Workfile document or None.
|
||||
"""
|
||||
|
||||
# Use legacy_io if dbcon is not entered
|
||||
if not dbcon:
|
||||
dbcon = legacy_io
|
||||
|
||||
return dbcon.find_one({
|
||||
"type": "workfile",
|
||||
"parent": asset_id,
|
||||
"task_name": task_name,
|
||||
"filename": filename
|
||||
})
|
||||
project_name = dbcon.active_project()
|
||||
return get_workfile_info(project_name, asset_id, task_name, filename)
|
||||
|
||||
|
||||
@with_pipeline_io
|
||||
|
|
@ -879,12 +849,13 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
|
|||
doc_data = copy.deepcopy(doc_filter)
|
||||
|
||||
# Prepare project for workdir data
|
||||
project_doc = dbcon.find_one({"type": "project"})
|
||||
project_name = dbcon.active_project()
|
||||
project_doc = get_project(project_name)
|
||||
workdir_data = get_workdir_data(
|
||||
project_doc, asset_doc, task_name, dbcon.Session["AVALON_APP"]
|
||||
)
|
||||
# Prepare anatomy
|
||||
anatomy = Anatomy(project_doc["name"])
|
||||
anatomy = Anatomy(project_name)
|
||||
# Get workdir path (result is anatomy.TemplateResult)
|
||||
template_workdir = get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy
|
||||
|
|
@ -999,12 +970,11 @@ class BuildWorkfile:
|
|||
from openpype.pipeline import discover_loader_plugins
|
||||
|
||||
# Get current asset name and entity
|
||||
project_name = legacy_io.active_project()
|
||||
current_asset_name = legacy_io.Session["AVALON_ASSET"]
|
||||
current_asset_entity = legacy_io.find_one({
|
||||
"type": "asset",
|
||||
"name": current_asset_name
|
||||
})
|
||||
|
||||
current_asset_entity = get_asset_by_name(
|
||||
project_name, current_asset_name
|
||||
)
|
||||
# Skip if asset was not found
|
||||
if not current_asset_entity:
|
||||
print("Asset entity with name `{}` was not found".format(
|
||||
|
|
@ -1509,7 +1479,7 @@ class BuildWorkfile:
|
|||
return loaded_containers
|
||||
|
||||
@with_pipeline_io
|
||||
def _collect_last_version_repres(self, asset_entities):
|
||||
def _collect_last_version_repres(self, asset_docs):
|
||||
"""Collect subsets, versions and representations for asset_entities.
|
||||
|
||||
Args:
|
||||
|
|
@ -1542,64 +1512,56 @@ class BuildWorkfile:
|
|||
```
|
||||
"""
|
||||
|
||||
if not asset_entities:
|
||||
return {}
|
||||
output = {}
|
||||
if not asset_docs:
|
||||
return output
|
||||
|
||||
asset_entity_by_ids = {asset["_id"]: asset for asset in asset_entities}
|
||||
asset_docs_by_ids = {asset["_id"]: asset for asset in asset_docs}
|
||||
|
||||
subsets = list(legacy_io.find({
|
||||
"type": "subset",
|
||||
"parent": {"$in": list(asset_entity_by_ids.keys())}
|
||||
}))
|
||||
project_name = legacy_io.active_project()
|
||||
subsets = list(get_subsets(
|
||||
project_name, asset_ids=asset_docs_by_ids.keys()
|
||||
))
|
||||
subset_entity_by_ids = {subset["_id"]: subset for subset in subsets}
|
||||
|
||||
sorted_versions = list(legacy_io.find({
|
||||
"type": "version",
|
||||
"parent": {"$in": list(subset_entity_by_ids.keys())}
|
||||
}).sort("name", -1))
|
||||
last_version_by_subset_id = get_last_versions(
|
||||
project_name, subset_entity_by_ids.keys()
|
||||
)
|
||||
last_version_docs_by_id = {
|
||||
version["_id"]: version
|
||||
for version in last_version_by_subset_id.values()
|
||||
}
|
||||
repre_docs = get_representations(
|
||||
project_name, version_ids=last_version_docs_by_id.keys()
|
||||
)
|
||||
|
||||
subset_id_with_latest_version = []
|
||||
last_versions_by_id = {}
|
||||
for version in sorted_versions:
|
||||
subset_id = version["parent"]
|
||||
if subset_id in subset_id_with_latest_version:
|
||||
continue
|
||||
subset_id_with_latest_version.append(subset_id)
|
||||
last_versions_by_id[version["_id"]] = version
|
||||
for repre_doc in repre_docs:
|
||||
version_id = repre_doc["parent"]
|
||||
version_doc = last_version_docs_by_id[version_id]
|
||||
|
||||
repres = legacy_io.find({
|
||||
"type": "representation",
|
||||
"parent": {"$in": list(last_versions_by_id.keys())}
|
||||
})
|
||||
subset_id = version_doc["parent"]
|
||||
subset_doc = subset_entity_by_ids[subset_id]
|
||||
|
||||
output = {}
|
||||
for repre in repres:
|
||||
version_id = repre["parent"]
|
||||
version = last_versions_by_id[version_id]
|
||||
|
||||
subset_id = version["parent"]
|
||||
subset = subset_entity_by_ids[subset_id]
|
||||
|
||||
asset_id = subset["parent"]
|
||||
asset = asset_entity_by_ids[asset_id]
|
||||
asset_id = subset_doc["parent"]
|
||||
asset_doc = asset_docs_by_ids[asset_id]
|
||||
|
||||
if asset_id not in output:
|
||||
output[asset_id] = {
|
||||
"asset_entity": asset,
|
||||
"asset_entity": asset_doc,
|
||||
"subsets": {}
|
||||
}
|
||||
|
||||
if subset_id not in output[asset_id]["subsets"]:
|
||||
output[asset_id]["subsets"][subset_id] = {
|
||||
"subset_entity": subset,
|
||||
"subset_entity": subset_doc,
|
||||
"version": {
|
||||
"version_entity": version,
|
||||
"version_entity": version_doc,
|
||||
"repres": []
|
||||
}
|
||||
}
|
||||
|
||||
output[asset_id]["subsets"][subset_id]["version"]["repres"].append(
|
||||
repre
|
||||
repre_doc
|
||||
)
|
||||
|
||||
return output
|
||||
|
|
@ -1807,35 +1769,19 @@ def get_custom_workfile_template_by_string_context(
|
|||
context. (Existence of formatted path is not validated.)
|
||||
"""
|
||||
|
||||
if dbcon is None:
|
||||
from openpype.pipeline import AvalonMongoDB
|
||||
project_name = None
|
||||
if anatomy is not None:
|
||||
project_name = anatomy.project_name
|
||||
|
||||
dbcon = AvalonMongoDB()
|
||||
if not project_name and dbcon is not None:
|
||||
project_name = dbcon.active_project()
|
||||
|
||||
dbcon.install()
|
||||
if not project_name:
|
||||
raise ValueError("Can't determina project")
|
||||
|
||||
if dbcon.Session["AVALON_PROJECT"] != project_name:
|
||||
dbcon.Session["AVALON_PROJECT"] = project_name
|
||||
|
||||
project_doc = dbcon.find_one(
|
||||
{"type": "project"},
|
||||
# All we need is "name" and "data.code" keys
|
||||
{
|
||||
"name": 1,
|
||||
"data.code": 1
|
||||
}
|
||||
)
|
||||
asset_doc = dbcon.find_one(
|
||||
{
|
||||
"type": "asset",
|
||||
"name": asset_name
|
||||
},
|
||||
# All we need is "name" and "data.tasks" keys
|
||||
{
|
||||
"name": 1,
|
||||
"data.tasks": 1
|
||||
}
|
||||
)
|
||||
project_doc = get_project(project_name, fields=["name", "data.code"])
|
||||
asset_doc = get_asset_by_name(
|
||||
project_name, asset_name, fields=["name", "data.tasks"])
|
||||
|
||||
return get_custom_workfile_template_by_context(
|
||||
template_profiles, project_doc, asset_doc, task_name, anatomy
|
||||
|
|
|
|||
|
|
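The hunks above replace direct Mongo `find_one` calls with the `openpype.client` query functions. A minimal sketch of the resulting lookup chain, using only the functions the diff itself introduces:

```python
# Minimal sketch of the query-function pattern this refactor moves to,
# assuming only the openpype.client functions shown in the diff above.
from openpype.client import (
    get_asset_by_name,
    get_subset_by_name,
    get_last_version_by_subset_id,
)


def latest_version_sketch(project_name, asset_name, subset_name):
    """Resolve the last version document without touching Mongo directly."""
    asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
    if not asset_doc:
        return None
    subset_doc = get_subset_by_name(
        project_name, subset_name, asset_doc["_id"]
    )
    if not subset_doc:
        return None
    return get_last_version_by_subset_id(project_name, subset_doc["_id"])
```
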
@@ -11,6 +11,10 @@ except Exception:
    from openpype.lib.python_2_comp import WeakMethod


class MissingEventSystem(Exception):
    pass


class EventCallback(object):
    """Callback registered to a topic.

@@ -176,16 +180,20 @@ class Event(object):
        topic (str): Identifier of event.
        data (Any): Data specific for event. Dictionary is recommended.
        source (str): Identifier of source.
        event_system (EventSystem): Event system in which can be event
            triggered.
    """

    _data = {}

    def __init__(self, topic, data=None, source=None):
    def __init__(self, topic, data=None, source=None, event_system=None):
        self._id = str(uuid4())
        self._topic = topic
        if data is None:
            data = {}
        self._data = data
        self._source = source
        self._event_system = event_system

    def __getitem__(self, key):
        return self._data[key]

@@ -211,28 +219,118 @@ class Event(object):
    def emit(self):
        """Emit event and trigger callbacks."""
        StoredCallbacks.emit_event(self)
        if self._event_system is None:
            raise MissingEventSystem(
                "Can't emit event {}. Does not have set event system.".format(
                    str(repr(self))
                )
            )
        self._event_system.emit_event(self)


class StoredCallbacks:
    _registered_callbacks = []
class EventSystem(object):
    """Encapsulate event handling into an object.

    System wraps registered callbacks and triggered events into single object
    so it is possible to create multiple independent systems that have their
    topics and callbacks.
    """

    def __init__(self):
        self._registered_callbacks = []

    def add_callback(self, topic, callback):
        """Register callback in event system.

        Args:
            topic (str): Topic for EventCallback.
            callback (Callable): Function or method that will be called
                when topic is triggered.

        Returns:
            EventCallback: Created callback object which can be used to
                stop listening.
        """

    @classmethod
    def add_callback(cls, topic, callback):
        callback = EventCallback(topic, callback)
        cls._registered_callbacks.append(callback)
        self._registered_callbacks.append(callback)
        return callback

    @classmethod
    def emit_event(cls, event):
    def create_event(self, topic, data, source):
        """Create new event which is bound to event system.

        Args:
            topic (str): Event topic.
            data (dict): Data related to event.
            source (str): Source of event.

        Returns:
            Event: Object of event.
        """

        return Event(topic, data, source, self)

    def emit(self, topic, data, source):
        """Create event based on passed data and emit it.

        This is the easiest way to trigger an event in an event system.

        Args:
            topic (str): Event topic.
            data (dict): Data related to event.
            source (str): Source of event.

        Returns:
            Event: Created and emitted event.
        """

        event = self.create_event(topic, data, source)
        event.emit()
        return event

    def emit_event(self, event):
        """Emit event object.

        Args:
            event (Event): Prepared event with topic and data.
        """

        invalid_callbacks = []
        for callback in cls._registered_callbacks:
        for callback in self._registered_callbacks:
            callback.process_event(event)
            if not callback.is_ref_valid:
                invalid_callbacks.append(callback)

        for callback in invalid_callbacks:
            cls._registered_callbacks.remove(callback)
            self._registered_callbacks.remove(callback)


class GlobalEventSystem:
    """Event system living in global scope of process.

    This is primarily used in host implementation to trigger events
    related to DCC changes or changes of context in the host implementation.
    """

    _global_event_system = None

    @classmethod
    def get_global_event_system(cls):
        if cls._global_event_system is None:
            cls._global_event_system = EventSystem()
        return cls._global_event_system

    @classmethod
    def add_callback(cls, topic, callback):
        event_system = cls.get_global_event_system()
        return event_system.add_callback(topic, callback)

    @classmethod
    def emit(cls, topic, data, source):
        event_system = cls.get_global_event_system()
        return event_system.emit(topic, data, source)


def register_event_callback(topic, callback):

@@ -249,7 +347,8 @@ def register_event_callback(topic, callback):
        enable/disable listening to a topic or remove the callback from
        the topic completely.
    """
    return StoredCallbacks.add_callback(topic, callback)

    return GlobalEventSystem.add_callback(topic, callback)


def emit_event(topic, data=None, source=None):

@@ -263,6 +362,5 @@ def emit_event(topic, data=None, source=None):
    Returns:
        Event: Object of event that was emitted.
    """
    event = Event(topic, data, source)
    event.emit()
    return event

    return GlobalEventSystem.emit(topic, data, source)

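The new `EventSystem` above binds callbacks and events into one object instead of the old process-global `StoredCallbacks`. A short usage sketch, assuming the registered callback is invoked with the `Event` object and using an illustrative topic name:

```python
# Usage sketch for the EventSystem API added above; "example.topic" and
# the printed payload are illustrative only.
from openpype.lib.events import EventSystem

event_system = EventSystem()


def on_example(event):
    # Assumption: EventCallback passes the Event object to the callback.
    print(event["value"])


callback = event_system.add_callback("example.topic", on_example)
event_system.emit("example.topic", {"value": 1}, "docs-example")
```

Multiple `EventSystem` instances stay fully independent, while `GlobalEventSystem` keeps the previous process-wide behavior for `register_event_callback` and `emit_event`.
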
@@ -6,10 +6,10 @@ import logging
import re
import json

from .profiles_filtering import filter_profiles

from openpype.client import get_asset_by_id
from openpype.settings import get_project_settings

from .profiles_filtering import filter_profiles

log = logging.getLogger(__name__)

@@ -135,24 +135,17 @@ def get_subset_name(
    This is a legacy function and should be replaced with
    `get_subset_name_with_asset_doc` where asset document is expected.
    """
    if dbcon is None:
        from openpype.pipeline import AvalonMongoDB

        dbcon = AvalonMongoDB()
        dbcon.Session["AVALON_PROJECT"] = project_name
    if project_name is None:
        project_name = dbcon.project_name

        dbcon.install()

    asset_doc = dbcon.find_one(
        {"_id": asset_id},
        {"data.tasks": True}
    ) or {}
    asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"])

    return get_subset_name_with_asset_doc(
        family,
        variant,
        task_name,
        asset_doc,
        asset_doc or {},
        project_name,
        host_name,
        default_template,

@@ -24,7 +24,10 @@ from bson.json_util import (
    dumps,
    CANONICAL_JSON_OPTIONS
)

from openpype.client import (
    get_project,
    get_whole_project,
)
from openpype.pipeline import AvalonMongoDB

DOCUMENTS_FILE_NAME = "database"

@@ -50,14 +53,12 @@ def pack_project(project_name, destination_dir=None):
    Args:
        project_name(str): Project that should be packaged.
        destination_dir(str): Optinal path where zip will be stored. Project's
        destination_dir(str): Optional path where zip will be stored. Project's
            root is used if not passed.
    """
    print("Creating package of project \"{}\"".format(project_name))
    # Validate existence of project
    dbcon = AvalonMongoDB()
    dbcon.Session["AVALON_PROJECT"] = project_name
    project_doc = dbcon.find_one({"type": "project"})
    project_doc = get_project(project_name)
    if not project_doc:
        raise ValueError("Project \"{}\" was not found in database".format(
            project_name

@@ -118,7 +119,7 @@ def pack_project(project_name, destination_dir=None):
    temp_docs_json = s.name

    # Query all project documents and store them to temp json
    docs = list(dbcon.find({}))
    docs = list(get_whole_project(project_name))
    data = dumps(
        docs, json_options=CANONICAL_JSON_OPTIONS
    )

@@ -147,7 +148,7 @@ def pack_project(project_name, destination_dir=None):
    # Cleanup
    os.remove(temp_docs_json)
    os.remove(temp_metadata_json)
    dbcon.uninstall()

    print("*** Packing finished ***")

@@ -207,7 +208,7 @@ def unpack_project(path_to_zip, new_root=None):
        print("Using different root path {}".format(new_root))
        root_path = new_root

    project_doc = collection.find_one({"type": "project"})
    project_doc = get_project(project_name)
    roots = project_doc["config"]["roots"]
    key = tuple(roots.keys())[0]
    update_key = "config.roots.{}.{}".format(key, low_platform)

@@ -8,10 +8,8 @@ except ImportError:
    # Allow to fall back on Multiverse 6.3.0+ pxr usd library
    from mvpxr import Usd, UsdGeom, Sdf, Kind

from openpype.pipeline import (
    registered_root,
    legacy_io,
)
from openpype.client import get_project, get_asset_by_name
from openpype.pipeline import legacy_io, Anatomy

log = logging.getLogger(__name__)

@@ -128,7 +126,8 @@ def create_model(filename, asset, variant_subsets):
    """

    asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
    project_name = legacy_io.active_project()
    asset_doc = get_asset_by_name(project_name, asset)
    assert asset_doc, "Asset not found: %s" % asset

    variants = []

@@ -178,7 +177,8 @@ def create_shade(filename, asset, variant_subsets):
    """

    asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
    project_name = legacy_io.active_project()
    asset_doc = get_asset_by_name(project_name, asset)
    assert asset_doc, "Asset not found: %s" % asset

    variants = []

@@ -213,7 +213,8 @@ def create_shade_variation(filename, asset, model_variant, shade_variants):
    """

    asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
    project_name = legacy_io.active_project()
    asset_doc = get_asset_by_name(project_name, asset)
    assert asset_doc, "Asset not found: %s" % asset

    variants = []

@@ -313,21 +314,25 @@ def get_usd_master_path(asset, subset, representation):
    """

    project = legacy_io.find_one(
        {"type": "project"}, projection={"config.template.publish": True}
    project_name = legacy_io.active_project()
    anatomy = Anatomy(project_name)
    project_doc = get_project(
        project_name,
        fields=["name", "data.code"]
    )
    template = project["config"]["template"]["publish"]

    if isinstance(asset, dict) and "name" in asset:
        # Allow explicitly passing asset document
        asset_doc = asset
    else:
        asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
        asset_doc = get_asset_by_name(project_name, asset, fields=["name"])

    path = template.format(
        **{
            "root": registered_root(),
            "project": legacy_io.Session["AVALON_PROJECT"],
    formatted_result = anatomy.format(
        {
            "project": {
                "name": project_name,
                "code": project_doc.get("data", {}).get("code")
            },
            "asset": asset_doc["name"],
            "subset": subset,
            "representation": representation,

@@ -335,6 +340,7 @@ def get_usd_master_path(asset, subset, representation):
        }
    )

    path = formatted_result["publish"]["path"]
    # Remove the version folder
    subset_folder = os.path.dirname(os.path.dirname(path))
    master_folder = os.path.join(subset_folder, "master")

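`get_usd_master_path` now resolves the publish path through `Anatomy.format` instead of a raw template string. A hedged sketch of that call; the project name, code, and the extra `version`/`ext` keys are illustrative assumptions about what a typical publish template expects:

```python
# Sketch of resolving a publish path through Anatomy, following the
# pattern in the diff above; "my_project" and the extra keys are
# hypothetical and depend on the studio's anatomy templates.
from openpype.pipeline import Anatomy

anatomy = Anatomy("my_project")  # hypothetical project name
formatted_result = anatomy.format({
    "project": {"name": "my_project", "code": "mp"},
    "asset": "chair",
    "subset": "usdAsset",
    "representation": "usd",
    "version": 1,   # assumption: template uses a version key
    "ext": "usd",   # assumption: template uses an extension key
})
print(formatted_result["publish"]["path"])
```
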
@@ -15,7 +15,7 @@ import attr
import requests

import pyblish.api
from openpype.lib.abstract_metaplugins import AbstractMetaInstancePlugin
from openpype.pipeline.publish import AbstractMetaInstancePlugin


def requests_post(*args, **kwargs):

@@ -6,7 +6,10 @@ import collections
import ftrack_api

from openpype.lib import get_datetime_data
from openpype.api import get_project_settings
from openpype.settings.lib import (
    get_project_settings,
    get_default_project_settings
)
from openpype_modules.ftrack.lib import ServerAction

@@ -79,6 +82,35 @@ class CreateDailyReviewSessionServerAction(ServerAction):
            )
        return True

    def _calculate_next_cycle_delta(self):
        studio_default_settings = get_default_project_settings()
        action_settings = (
            studio_default_settings
            ["ftrack"]
            [self.settings_frack_subkey]
            [self.settings_key]
        )
        cycle_hour_start = action_settings.get("cycle_hour_start")
        if not cycle_hour_start:
            h = m = s = 0
        else:
            h, m, s = cycle_hour_start

        # Create threading timer which will trigger creation of report
        # at the 00:00:01 of next day
        # - callback will trigger another timer which will have 1 day offset
        now = datetime.datetime.now()
        # Create object of today morning
        expected_next_trigger = datetime.datetime(
            now.year, now.month, now.day, h, m, s
        )
        if expected_next_trigger > now:
            seconds = (expected_next_trigger - now).total_seconds()
        else:
            expected_next_trigger += self._day_delta
            seconds = (expected_next_trigger - now).total_seconds()
        return seconds, expected_next_trigger

    def register(self, *args, **kwargs):
        """Override register to be able to trigger the cycle timer."""
        # Register server action as would be normally

@@ -86,22 +118,14 @@ class CreateDailyReviewSessionServerAction(ServerAction):
            *args, **kwargs
        )

        # Create threading timer which will trigger creation of report
        # at the 00:00:01 of next day
        # - callback will trigger another timer which will have 1 day offset
        now = datetime.datetime.now()
        # Create object of today morning
        today_morning = datetime.datetime(
            now.year, now.month, now.day, 0, 0, 1
        )
        # Add a day delta (to calculate next day date)
        next_day_morning = today_morning + self._day_delta
        # Calculate first delta in seconds for first threading timer
        first_delta = (next_day_morning - now).total_seconds()
        seconds_delta, cycle_time = self._calculate_next_cycle_delta()

        # Store cycle time which will be used to create next timer
        self._last_cyle_time = next_day_morning
        self._last_cyle_time = cycle_time
        # Create timer thread
        self._cycle_timer = threading.Timer(first_delta, self._timer_callback)
        self._cycle_timer = threading.Timer(
            seconds_delta, self._timer_callback
        )
        self._cycle_timer.start()

        self._check_review_session()

@@ -111,13 +135,12 @@ class CreateDailyReviewSessionServerAction(ServerAction):
            self._cycle_timer is not None
            and self._last_cyle_time is not None
        ):
            now = datetime.datetime.now()
            while self._last_cyle_time < now:
                self._last_cyle_time = self._last_cyle_time + self._day_delta
            seconds_delta, cycle_time = self._calculate_next_cycle_delta()
            self._last_cyle_time = cycle_time

            delay = (self._last_cyle_time - now).total_seconds()

            self._cycle_timer = threading.Timer(delay, self._timer_callback)
            self._cycle_timer = threading.Timer(
                seconds_delta, self._timer_callback
            )
            self._cycle_timer.start()
            self._check_review_session()

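`_calculate_next_cycle_delta` above derives the seconds until the configured trigger hour instead of a hard-coded 00:00:01. A standalone sketch of the same arithmetic, with a hypothetical 16:30:00 trigger time:

```python
# Standalone sketch of the "seconds until next daily trigger" arithmetic
# used by _calculate_next_cycle_delta above; 16:30:00 is a hypothetical
# configured value of "cycle_hour_start".
import datetime

DAY_DELTA = datetime.timedelta(days=1)


def next_cycle_delta(hour=16, minute=30, second=0):
    now = datetime.datetime.now()
    trigger = datetime.datetime(
        now.year, now.month, now.day, hour, minute, second
    )
    if trigger <= now:
        # Today's trigger already passed, schedule for tomorrow
        trigger += DAY_DELTA
    return (trigger - now).total_seconds(), trigger
```
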
@@ -1,4 +1,5 @@
import json
import copy

from openpype.client import get_project
from openpype.api import ProjectSettings

@@ -373,6 +374,10 @@ class PrepareProjectServer(ServerAction):
            project_name, project_code
        ))
        create_project(project_name, project_code)
        self.trigger_event(
            "openpype.project.created",
            {"project_name": project_name}
        )

        project_settings = ProjectSettings(project_name)
        project_anatomy_settings = project_settings["project_anatomy"]

@@ -400,6 +405,10 @@ class PrepareProjectServer(ServerAction):
        self.log.debug("- Key \"{}\" set to \"{}\"".format(key, value))
        session.commit()

        event_data = copy.deepcopy(in_data)
        event_data["project_name"] = project_name
        self.trigger_event("openpype.project.prepared", event_data)

        return True

@@ -1,7 +1,8 @@
import time
import sys
import json
import traceback

import ftrack_api

from openpype_modules.ftrack.lib import ServerAction
from openpype_modules.ftrack.lib.avalon_sync import SyncEntitiesFactory

@@ -180,6 +181,13 @@ class SyncToAvalonServer(ServerAction):
            "* Total time: {}".format(time_7 - time_start)
        )

        if self.entities_factory.project_created:
            event = ftrack_api.event.base.Event(
                topic="openpype.project.created",
                data={"project_name": project_name}
            )
            self.session.event_hub.publish(event)

        report = self.entities_factory.report()
        if report and report.get("items"):
            default_title = "Synchronization report ({}):".format(

@@ -84,6 +84,11 @@ class CreateProjectFolders(BaseAction):
            create_project_folders(basic_paths, project_name)
            self.create_ftrack_entities(basic_paths, project_entity)

            self.trigger_event(
                "openpype.project.structure.created",
                {"project_name": project_name}
            )

        except Exception as exc:
            self.log.warning("Creating of structure crashed.", exc_info=True)
            session.rollback()

@@ -1,4 +1,5 @@
import json
import copy

from openpype.client import get_project
from openpype.api import ProjectSettings

@@ -399,6 +400,10 @@ class PrepareProjectLocal(BaseAction):
            project_name, project_code
        ))
        create_project(project_name, project_code)
        self.trigger_event(
            "openpype.project.created",
            {"project_name": project_name}
        )

        project_settings = ProjectSettings(project_name)
        project_anatomy_settings = project_settings["project_anatomy"]

@@ -433,6 +438,10 @@ class PrepareProjectLocal(BaseAction):
            self.process_identifier()
        )
        self.trigger_action(trigger_identifier, event)

        event_data = copy.deepcopy(in_data)
        event_data["project_name"] = project_name
        self.trigger_event("openpype.project.prepared", event_data)
        return True

@@ -1,7 +1,8 @@
import time
import sys
import json
import traceback

import ftrack_api

from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import SyncEntitiesFactory

@@ -184,6 +185,13 @@ class SyncToAvalonLocal(BaseAction):
            "* Total time: {}".format(time_7 - time_start)
        )

        if self.entities_factory.project_created:
            event = ftrack_api.event.base.Event(
                topic="openpype.project.created",
                data={"project_name": project_name}
            )
            self.session.event_hub.publish(event)

        report = self.entities_factory.report()
        if report and report.get("items"):
            default_title = "Synchronization report ({}):".format(

@@ -443,6 +443,7 @@ class SyncEntitiesFactory:
            }

        self.create_list = []
        self.project_created = False
        self.unarchive_list = []
        self.updates = collections.defaultdict(dict)

@@ -2214,6 +2215,7 @@ class SyncEntitiesFactory:
        self._avalon_ents_by_name[project_item["name"]] = str(new_id)

        self.create_list.append(project_item)
        self.project_created = True

        # store mongo id to ftrack entity
        entity = self.entities_dict[self.ft_project_id]["entity"]

@@ -535,7 +535,7 @@ class BaseHandler(object):
        )

    def trigger_event(
        self, topic, event_data={}, session=None, source=None,
        self, topic, event_data=None, session=None, source=None,
        event=None, on_error="ignore"
    ):
        if session is None:

@@ -543,6 +543,9 @@ class BaseHandler(object):
        if not source and event:
            source = event.get("source")

        if event_data is None:
            event_data = {}
        # Create and trigger event
        event = ftrack_api.event.base.Event(
            topic=topic,

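The `trigger_event` signature change from `event_data={}` to `event_data=None` avoids Python's shared-mutable-default pitfall: a dict default is created once at definition time and reused across calls. A minimal illustration:

```python
# Minimal illustration of the mutable-default pitfall the trigger_event
# change above avoids.
def buggy(data={}):
    data["count"] = data.get("count", 0) + 1
    return data


def fixed(data=None):
    if data is None:
        data = {}
    data["count"] = data.get("count", 0) + 1
    return data


print(buggy())  # {'count': 1}
print(buggy())  # {'count': 2}  <- state leaked between calls
print(fixed())  # {'count': 1}
print(fixed())  # {'count': 1}
```
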
@@ -116,6 +116,7 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
            "app_name": app_name,
            "app_label": app_label,
            "published_paths": "<br/>".join(sorted(published_paths)),
            "source": instance.data.get("source", '')
        }
        comment = template.format(**format_data)
        if not comment:

@@ -32,11 +32,17 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin):
        context.data["kitsu_project"] = kitsu_project
        self.log.debug("Collect kitsu project: {}".format(kitsu_project))

        kitsu_asset = gazu.asset.get_asset(zou_asset_data["id"])
        if not kitsu_asset:
            raise AssertionError("Asset not found in kitsu!")
        context.data["kitsu_asset"] = kitsu_asset
        self.log.debug("Collect kitsu asset: {}".format(kitsu_asset))
        entity_type = zou_asset_data["type"]
        if entity_type == "Shot":
            kitsu_entity = gazu.shot.get_shot(zou_asset_data["id"])
        else:
            kitsu_entity = gazu.asset.get_asset(zou_asset_data["id"])

        if not kitsu_entity:
            raise AssertionError(f"{entity_type} not found in kitsu!")

        context.data["kitsu_entity"] = kitsu_entity
        self.log.debug(f"Collect kitsu {entity_type}: {kitsu_entity}")

        if zou_task_data:
            kitsu_task = gazu.task.get_task(zou_task_data["id"])

@@ -57,7 +63,7 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin):
            )

            kitsu_task = gazu.task.get_task_by_name(
                kitsu_asset, kitsu_task_type
                kitsu_entity, kitsu_task_type
            )
            if not kitsu_task:
                raise AssertionError("Task not found in kitsu!")

@@ -165,10 +165,12 @@ class Listener:
        zou_ids_and_asset_docs[asset["project_id"]] = project_doc

        # Update
        asset_doc_id, asset_update = update_op_assets(
        update_op_result = update_op_assets(
            self.dbcon, project_doc, [asset], zou_ids_and_asset_docs
        )[0]
        self.dbcon.update_one({"_id": asset_doc_id}, asset_update)
        )
        if update_op_result:
            asset_doc_id, asset_update = update_op_result[0]
            self.dbcon.update_one({"_id": asset_doc_id}, asset_update)

    def _delete_asset(self, data):
        """Delete asset of OP DB."""

@@ -212,10 +214,12 @@ class Listener:
        zou_ids_and_asset_docs[episode["project_id"]] = project_doc

        # Update
        asset_doc_id, asset_update = update_op_assets(
        update_op_result = update_op_assets(
            self.dbcon, project_doc, [episode], zou_ids_and_asset_docs
        )[0]
        self.dbcon.update_one({"_id": asset_doc_id}, asset_update)
        )
        if update_op_result:
            asset_doc_id, asset_update = update_op_result[0]
            self.dbcon.update_one({"_id": asset_doc_id}, asset_update)

    def _delete_episode(self, data):
        """Delete episode of OP DB."""

@@ -260,10 +264,12 @@ class Listener:
        zou_ids_and_asset_docs[sequence["project_id"]] = project_doc

        # Update
        asset_doc_id, asset_update = update_op_assets(
        update_op_result = update_op_assets(
            self.dbcon, project_doc, [sequence], zou_ids_and_asset_docs
        )[0]
        self.dbcon.update_one({"_id": asset_doc_id}, asset_update)
        )
        if update_op_result:
            asset_doc_id, asset_update = update_op_result[0]
            self.dbcon.update_one({"_id": asset_doc_id}, asset_update)

    def _delete_sequence(self, data):
        """Delete sequence of OP DB."""

@@ -308,10 +314,12 @@ class Listener:
        zou_ids_and_asset_docs[shot["project_id"]] = project_doc

        # Update
        asset_doc_id, asset_update = update_op_assets(
        update_op_result = update_op_assets(
            self.dbcon, project_doc, [shot], zou_ids_and_asset_docs
        )[0]
        self.dbcon.update_one({"_id": asset_doc_id}, asset_update)
        )
        if update_op_result:
            asset_doc_id, asset_update = update_op_result[0]
            self.dbcon.update_one({"_id": asset_doc_id}, asset_update)

    def _delete_shot(self, data):
        """Delete shot of OP DB."""

@@ -82,22 +82,37 @@ def update_op_assets(
        item_data["zou"] = item

        # == Asset settings ==
        # Frame in, fallback on 0
        frame_in = int(item_data.get("frame_in") or 0)
        # Frame in, fallback to project's value or default value (1001)
        # TODO: get default from settings/project_anatomy/attributes.json
        try:
            frame_in = int(
                item_data.pop(
                    "frame_in", project_doc["data"].get("frameStart")
                )
            )
        except (TypeError, ValueError):
            frame_in = 1001
        item_data["frameStart"] = frame_in
        item_data.pop("frame_in", None)
        # Frame out, fallback on frame_in + duration
        frames_duration = int(item.get("nb_frames") or 1)
        frame_out = (
            item_data["frame_out"]
            if item_data.get("frame_out")
            else frame_in + frames_duration
        )
        item_data["frameEnd"] = int(frame_out)
        item_data.pop("frame_out", None)
        # Fps, fallback to project's value when entity fps is deleted
        if not item_data.get("fps") and item_doc["data"].get("fps"):
            item_data["fps"] = project_doc["data"]["fps"]
        # Frames duration, fallback on 0
        try:
            frames_duration = int(item_data.pop("nb_frames", 0))
        except (TypeError, ValueError):
            frames_duration = 0
        # Frame out, fallback on frame_in + duration or project's value or 1001
        frame_out = item_data.pop("frame_out", None)
        if not frame_out:
            frame_out = frame_in + frames_duration
        try:
            frame_out = int(frame_out)
        except (TypeError, ValueError):
            frame_out = 1001
        item_data["frameEnd"] = frame_out
        # Fps, fallback to project's value or default value (25.0)
        try:
            fps = float(item_data.get("fps", project_doc["data"].get("fps")))
        except (TypeError, ValueError):
            fps = 25.0
        item_data["fps"] = fps

        # Tasks
        tasks_list = []

@@ -106,9 +121,8 @@ def update_op_assets(
            tasks_list = all_tasks_for_asset(item)
        elif item_type == "Shot":
            tasks_list = all_tasks_for_shot(item)
            # TODO frame in and out
        item_data["tasks"] = {
            t["task_type_name"]: {"type": t["task_type_name"]}
            t["task_type_name"]: {"type": t["task_type_name"], "zou": t}
            for t in tasks_list
        }

@@ -229,9 +243,9 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
    project_data.update(
        {
            "code": project_code,
            "fps": project["fps"],
            "resolutionWidth": project["resolution"].split("x")[0],
            "resolutionHeight": project["resolution"].split("x")[1],
            "fps": float(project["fps"]),
            "resolutionWidth": int(project["resolution"].split("x")[0]),
            "resolutionHeight": int(project["resolution"].split("x")[1]),
            "zou_id": project["id"],
        }
    )

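The reworked frame logic above coerces Kitsu values with `try`/`except` fallbacks instead of trusting the raw data. A self-contained sketch of the same pattern; the sample dicts are hypothetical:

```python
# Self-contained sketch of the coercion-with-fallback pattern used for
# the Kitsu frame/fps values above; the sample dicts are hypothetical.
def to_int(value, fallback):
    try:
        return int(value)
    except (TypeError, ValueError):
        return fallback


item_data = {"frame_in": "1001", "frame_out": None, "nb_frames": "24"}
project_data = {"frameStart": 1001, "fps": 25.0}

frame_in = to_int(
    item_data.pop("frame_in", project_data.get("frameStart")), 1001
)
frames_duration = to_int(item_data.pop("nb_frames", 0), 0)
frame_out = item_data.pop("frame_out", None) or frame_in + frames_duration
print(frame_in, to_int(frame_out, 1001))  # 1001 1025
```
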
@@ -2,13 +2,13 @@ import os
import platform


from openpype.client import get_asset_by_name
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
    ITrayService,
    ILaunchHookPaths
)
from openpype.lib.events import register_event_callback
from openpype.pipeline import AvalonMongoDB

from .exceptions import InvalidContextError

@@ -197,22 +197,13 @@ class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths):
            " Project: \"{}\" Asset: \"{}\" Task: \"{}\""
        ).format(str(project_name), str(asset_name), str(task_name)))

        dbconn = AvalonMongoDB()
        dbconn.install()
        dbconn.Session["AVALON_PROJECT"] = project_name

        asset_doc = dbconn.find_one(
            {
                "type": "asset",
                "name": asset_name
            },
            {
                "data.tasks": True,
                "data.parents": True
            }
        asset_doc = get_asset_by_name(
            project_name,
            asset_name,
            fields=["_id", "name", "data.tasks", "data.parents"]
        )

        if not asset_doc:
            dbconn.uninstall()
            raise InvalidContextError((
                "Asset \"{}\" not found in project \"{}\""
            ).format(asset_name, project_name))

@@ -220,7 +211,6 @@ class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths):
        asset_data = asset_doc.get("data") or {}
        asset_tasks = asset_data.get("tasks") or {}
        if task_name not in asset_tasks:
            dbconn.uninstall()
            raise InvalidContextError((
                "Task \"{}\" not found on asset \"{}\" in project \"{}\""
            ).format(task_name, asset_name, project_name))

@@ -238,9 +228,10 @@ class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths):
        hierarchy_items = asset_data.get("parents") or []
        hierarchy_items.append(asset_name)

        dbconn.uninstall()
        return {
            "project_name": project_name,
            "asset_id": str(asset_doc["_id"]),
            "asset_name": asset_doc["name"],
            "task_name": task_name,
            "task_type": task_type,
            "hierarchy": hierarchy_items

@@ -29,6 +29,7 @@ UpdateData = collections.namedtuple("UpdateData", ["instance", "changes"])

class ImmutableKeyError(TypeError):
    """Accessed key is immutable so does not allow changes or removal."""

    def __init__(self, key, msg=None):
        self.immutable_key = key
        if not msg:

@@ -40,6 +41,7 @@ class ImmutableKeyError(TypeError):

class HostMissRequiredMethod(Exception):
    """Host does not have implemented required functions for creation."""

    def __init__(self, host, missing_methods):
        self.missing_methods = missing_methods
        self.host = host

@@ -66,6 +68,7 @@ class InstanceMember:
    TODO:
    Implement and use!
    """

    def __init__(self, instance, name):
        self.instance = instance

@@ -94,6 +97,7 @@ class AttributeValues:
        values(dict): Values after possible conversion.
        origin_data(dict): Values loaded from host before conversion.
    """

    def __init__(self, attr_defs, values, origin_data=None):
        from openpype.lib.attribute_definitions import UnknownDef

@@ -174,6 +178,10 @@ class AttributeValues:
        output = {}
        for key in self._data:
            output[key] = self[key]

        for key, attr_def in self._attr_defs_by_key.items():
            if key not in output:
                output[key] = attr_def.default
        return output

    @staticmethod

@@ -196,6 +204,7 @@ class CreatorAttributeValues(AttributeValues):
    Args:
        instance (CreatedInstance): Instance for which are values hold.
    """

    def __init__(self, instance, *args, **kwargs):
        self.instance = instance
        super(CreatorAttributeValues, self).__init__(*args, **kwargs)

@@ -211,6 +220,7 @@ class PublishAttributeValues(AttributeValues):
        publish_attributes(PublishAttributes): Wrapper for multiple publish
            attributes is used as parent object.
    """

    def __init__(self, publish_attributes, *args, **kwargs):
        self.publish_attributes = publish_attributes
        super(PublishAttributeValues, self).__init__(*args, **kwargs)

@@ -232,6 +242,7 @@ class PublishAttributes:
        attr_plugins(list): List of publish plugins that may have defined
            attribute definitions.
    """

    def __init__(self, parent, origin_data, attr_plugins=None):
        self.parent = parent
        self._origin_data = copy.deepcopy(origin_data)

@@ -270,6 +281,7 @@ class PublishAttributes:
            key(str): Plugin name.
            default: Default value if plugin was not found.
        """

        if key not in self._data:
            return default

@@ -287,11 +299,13 @@ class PublishAttributes:

    def plugin_names_order(self):
        """Plugin names order by their 'order' attribute."""

        for name in self._plugin_names_order:
            yield name

    def data_to_store(self):
        """Convert attribute values to "data to store"."""

        output = {}
        for key, attr_value in self._data.items():
            output[key] = attr_value.data_to_store()

@@ -299,6 +313,7 @@ class PublishAttributes:

    def changes(self):
        """Return changes per each key."""

        changes = {}
        for key, attr_val in self._data.items():
            attr_changes = attr_val.changes()

@@ -314,6 +329,7 @@ class PublishAttributes:

    def set_publish_plugins(self, attr_plugins):
        """Set publish plugins attribute definitions."""

        self._plugin_names_order = []
        self._missing_plugins = []
        self.attr_plugins = attr_plugins or []

@@ -365,6 +381,7 @@ class CreatedInstance:
            `openpype.pipeline.registered_host`.
        new(bool): Is instance new.
    """

    # Keys that can't be changed or removed from data after loading using
    # creator.
    # - 'creator_attributes' and 'publish_attributes' can change values of

@@ -496,6 +513,20 @@ class CreatedInstance:
    def subset_name(self):
        return self._data["subset"]

    @property
    def label(self):
        label = self._data.get("label")
        if not label:
            label = self.subset_name
        return label

    @property
    def group_label(self):
        label = self._data.get("group")
        if label:
            return label
        return self.creator.get_group_label()

    @property
    def creator_identifier(self):
        return self.creator.identifier

@@ -552,6 +583,7 @@ class CreatedInstance:
    @property
    def id(self):
        """Instance identifier."""

        return self._data["instance_id"]

    @property

@@ -560,10 +592,12 @@ class CreatedInstance:

        Access to data is needed to modify values.
        """

        return self

    def changes(self):
        """Calculate and return changes."""

        changes = {}
        new_keys = set()
        for key, new_value in self._data.items():

@@ -702,6 +736,7 @@ class CreateContext:
        self.manual_creators = {}

        self.publish_discover_result = None
        self.publish_plugins_mismatch_targets = []
        self.publish_plugins = []
        self.plugins_with_defs = []
        self._attr_plugins_by_family = {}

@@ -748,6 +783,10 @@ class CreateContext:
    def host_name(self):
        return os.environ["AVALON_APP"]

    @property
    def project_name(self):
        return self.dbcon.active_project()

    @property
    def log(self):
        """Dynamic access to logger."""

@@ -820,6 +859,7 @@ class CreateContext:
        discover_result = DiscoverResult()
        plugins_with_defs = []
        plugins_by_targets = []
        plugins_mismatch_targets = []
        if discover_publish_plugins:
            discover_result = publish_plugins_discover()
            publish_plugins = discover_result.plugins

@@ -829,19 +869,26 @@ class CreateContext:
            plugins_by_targets = pyblish.logic.plugins_by_targets(
                publish_plugins, list(targets)
            )

            # Collect plugins that can have attribute definitions
            for plugin in publish_plugins:
                if OpenPypePyblishPluginMixin in inspect.getmro(plugin):
                    plugins_with_defs.append(plugin)

            plugins_mismatch_targets = [
                plugin
                for plugin in publish_plugins
                if plugin not in plugins_by_targets
            ]

        self.publish_plugins_mismatch_targets = plugins_mismatch_targets
        self.publish_discover_result = discover_result
        self.publish_plugins = plugins_by_targets
        self.plugins_with_defs = plugins_with_defs

        # Prepare settings
        project_name = self.dbcon.Session["AVALON_PROJECT"]
        system_settings = get_system_settings()
        project_settings = get_project_settings(project_name)
        project_settings = get_project_settings(self.project_name)

        # Discover and prepare creators
        creators = {}

@@ -873,9 +920,9 @@ class CreateContext:
                continue

            creator = creator_class(
                self,
                system_settings,
                project_settings,
                system_settings,
                self,
                self.headless
            )
            creators[creator_identifier] = creator

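The new `publish_plugins_mismatch_targets` bookkeeping keeps plugins that publishing discovered but the pyblish target filter excluded, so they can still appear in the publish report. A small sketch of that filtering, with illustrative plugin classes:

```python
# Sketch of the target-mismatch bookkeeping added to CreateContext above;
# LocalPlugin/FarmPlugin are illustrative classes only.
import pyblish.api
import pyblish.logic


class LocalPlugin(pyblish.api.ContextPlugin):
    targets = ["local"]


class FarmPlugin(pyblish.api.ContextPlugin):
    targets = ["farm"]


publish_plugins = [LocalPlugin, FarmPlugin]
plugins_by_targets = pyblish.logic.plugins_by_targets(
    publish_plugins, ["local"]
)
mismatch = [p for p in publish_plugins if p not in plugins_by_targets]
print(mismatch)  # [FarmPlugin]  (kept for the publish report)
```
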
@@ -1,5 +1,4 @@
import copy
import logging

from abc import (
    ABCMeta,

@@ -47,6 +46,9 @@ class BaseCreator:

    # Label shown in UI
    label = None
    group_label = None
    # Cached group label after first call 'get_group_label'
    _cached_group_label = None

    # Variable to store logger
    _log = None

@@ -70,7 +72,7 @@ class BaseCreator:
    host_name = None

    def __init__(
        self, create_context, system_settings, project_settings, headless=False
        self, project_settings, system_settings, create_context, headless=False
    ):
        # Reference to CreateContext
        self.create_context = create_context

@@ -85,15 +87,54 @@ class BaseCreator:

        Default implementation returns plugin's family.
        """

        return self.family

    @abstractproperty
    def family(self):
        """Family that plugin represents."""

        pass

    @property
    def project_name(self):
        """Name of project in which the plugin currently creates."""

        return self.create_context.project_name

    @property
    def host(self):
        return self.create_context.host

    def get_group_label(self):
        """Group label under which are instances grouped in UI.

        Default implementation uses attributes in this order:
        - 'group_label' -> 'label' -> 'identifier'
        Keep in mind that 'identifier' uses 'family' by default.

        Returns:
            str: Group label that can be used for grouping of instances in UI.
                Group label can be overridden by instance itself.
        """

        if self._cached_group_label is None:
            label = self.identifier
            if self.group_label:
                label = self.group_label
            elif self.label:
                label = self.label
            self._cached_group_label = label
        return self._cached_group_label

    @property
    def log(self):
        """Logger of the plugin.

        Returns:
            logging.Logger: Logger with name of the plugin.
        """

        if self._log is None:
            from openpype.api import Logger

@@ -101,10 +142,30 @@ class BaseCreator:
        return self._log

    def _add_instance_to_context(self, instance):
        """Helper method to ad d"""
        """Helper method to add instance to create context.

        Instances should be stored to DCC workfile metadata to be able reload
        them and also stored to CreateContext in which is creator plugin
        existing at the moment to be able use it without refresh of
        CreateContext.

        Args:
            instance (CreatedInstance): New created instance.
        """

        self.create_context.creator_adds_instance(instance)

    def _remove_instance_from_context(self, instance):
        """Helper method to remove instance from create context.

        Instances must be removed from DCC workfile metadata and from create
        context in which plugin is existing at the moment of removal to
        propagate the change without restarting create context.

        Args:
            instance (CreatedInstance): Instance which should be removed.
        """

        self.create_context.creator_removed_instance(instance)

    @abstractmethod

@@ -115,6 +176,7 @@ class BaseCreator:
        - must expect all data that were passed to init in previous
            implementation
        """

        pass

    @abstractmethod

@@ -141,6 +203,7 @@ class BaseCreator:
        self._add_instance_to_context(instance)
        ```
        """

        pass

    @abstractmethod

@@ -148,9 +211,10 @@ class BaseCreator:
        """Store changes of existing instances so they can be recollected.

        Args:
            update_list(list<UpdateData>): Gets list of tuples. Each item
            update_list(List[UpdateData]): Gets list of tuples. Each item
                contain changed instance and it's changes.
        """

        pass

    @abstractmethod

@@ -161,9 +225,10 @@ class BaseCreator:
        'True' if did so.

        Args:
            instance(list<CreatedInstance>): Instance objects which should be
            instance(List[CreatedInstance]): Instance objects which should be
                removed.
        """

        pass

    def get_icon(self):

@@ -171,6 +236,7 @@ class BaseCreator:

        Can return path to image file or awesome icon name.
        """

        return self.icon

    def get_dynamic_data(

@@ -181,6 +247,7 @@ class BaseCreator:
        These may be dynamically created based on current context of
        workfile.
        """

        return {}

    def get_subset_name(

@@ -205,6 +272,7 @@ class BaseCreator:
            project_name(str): Project name.
            host_name(str): Which host creates subset.
        """

        dynamic_data = self.get_dynamic_data(
            variant, task_name, asset_doc, project_name, host_name
        )

@@ -231,9 +299,10 @@ class BaseCreator:
            keys/values when plugin attributes change.

        Returns:
            list<AbtractAttrDef>: Attribute definitions that can be tweaked for
            List[AbtractAttrDef]: Attribute definitions that can be tweaked for
                created instance.
        """

        return self.instance_attr_defs

@@ -291,6 +360,7 @@ class Creator(BaseCreator):
        Returns:
            str: Short description of family.
        """

        return self.description

    def get_detail_description(self):

@@ -301,6 +371,7 @@ class Creator(BaseCreator):
        Returns:
            str: Detailed description of family for artist.
        """

        return self.detailed_description

    def get_default_variants(self):

@@ -312,8 +383,9 @@ class Creator(BaseCreator):
        By default returns `default_variants` value.

        Returns:
            list<str>: Whisper variants for user input.
            List[str]: Whisper variants for user input.
        """

        return copy.deepcopy(self.default_variants)

    def get_default_variant(self):

@@ -332,11 +404,13 @@ class Creator(BaseCreator):
        """Plugin attribute definitions needed for creation.
        Attribute definitions of plugin that define how creation will work.
        Values of these definitions are passed to `create` method.
        NOTE:
        Convert method should be implemented which should care about updating
        keys/values when plugin attributes change.

        Note:
            Convert method should be implemented which should care about
                updating keys/values when plugin attributes change.

        Returns:
            list<AbtractAttrDef>: Attribute definitions that can be tweaked for
            List[AbtractAttrDef]: Attribute definitions that can be tweaked for
                created instance.
        """
        return self.pre_create_attr_defs

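A minimal creator against the refactored `BaseCreator` above; the import path, the "demo" family, and the no-op method bodies are assumptions for illustration only:

```python
# Minimal sketch of a creator plugin against the refactored BaseCreator;
# module path assumed from the diff context, "demo" family is hypothetical.
from openpype.pipeline.create.creator_plugins import BaseCreator


class DemoCreator(BaseCreator):
    family = "demo"
    label = "Demo"
    group_label = "Demo Group"  # first candidate used by get_group_label()

    def create(self, *args, **kwargs):
        pass  # illustrative no-op

    def collect_instances(self):
        pass

    def update_instances(self, update_list):
        pass

    def remove_instances(self, instances):
        pass
```

Note the constructor order change: creators are now instantiated as `creator_class(project_settings, system_settings, create_context, headless)`, matching the `CreateContext` hunk above.
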
@@ -208,10 +208,12 @@ def get_representation_context(representation):

    assert representation is not None, "This is a bug"

    if not isinstance(representation, dict):
        representation = get_representation_by_id(representation)

    project_name = legacy_io.active_project()
    if not isinstance(representation, dict):
        representation = get_representation_by_id(
            project_name, representation
        )

    version, subset, asset, project = get_representation_parents(
        project_name, representation
    )

@@ -394,7 +396,7 @@ def update_container(container, version=-1):
    assert current_representation is not None, "This is a bug"

    current_version = get_version_by_id(
        project_name, current_representation["_id"], fields=["parent"]
        project_name, current_representation["parent"], fields=["parent"]
    )
    if version == -1:
        new_version = get_last_version_by_subset_id(

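`get_representation_context` now accepts either a representation document or its id, resolving ids through the project-scoped query function. A condensed sketch of that branch, assuming it runs inside an installed `legacy_io` session:

```python
# Condensed sketch of the dual-input handling above; requires an
# installed legacy_io session so active_project() resolves.
from openpype.client import get_representation_by_id
from openpype.pipeline import legacy_io


def resolve_representation(representation):
    project_name = legacy_io.active_project()
    if not isinstance(representation, dict):
        # Caller passed an id; fetch the document in project scope.
        representation = get_representation_by_id(
            project_name, representation
        )
    return representation
```
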
@@ -1,4 +1,7 @@
from .publish_plugins import (
    AbstractMetaInstancePlugin,
    AbstractMetaContextPlugin,

    PublishValidationError,
    PublishXmlValidationError,
    KnownPublishError,

@@ -13,8 +16,17 @@ from .lib import (
    load_help_content_from_filepath,
)

from .abstract_expected_files import ExpectedFiles
from .abstract_collect_render import (
    RenderInstance,
    AbstractCollectRender,
)


__all__ = (
    "AbstractMetaInstancePlugin",
    "AbstractMetaContextPlugin",

    "PublishValidationError",
    "PublishXmlValidationError",
    "KnownPublishError",

@@ -25,4 +37,9 @@ __all__ = (
    "publish_plugins_discover",
    "load_help_content_from_plugin",
    "load_help_content_from_filepath",

    "ExpectedFiles",

    "RenderInstance",
    "AbstractCollectRender",
)

268 openpype/pipeline/publish/abstract_collect_render.py Normal file

@@ -0,0 +1,268 @@
# -*- coding: utf-8 -*-
"""Collect render template.

TODO: use @dataclass when times come.

"""
from abc import abstractmethod

import attr
import six

import pyblish.api

from openpype.pipeline import legacy_io
from .publish_plugins import AbstractMetaContextPlugin


@attr.s
class RenderInstance(object):
    """Data collected by collectors.

    This data class later on passed to collected instances.
    Those attributes are required later on.

    """

    # metadata
    version = attr.ib()  # instance version
    time = attr.ib()  # time of instance creation (get_formatted_current_time)
    source = attr.ib()  # path to source scene file
    label = attr.ib()  # label to show in GUI
    subset = attr.ib()  # subset name
    task = attr.ib()  # task name
    asset = attr.ib()  # asset name (AVALON_ASSET)
    attachTo = attr.ib()  # subset name to attach render to
    setMembers = attr.ib()  # list of nodes/members producing render output
    publish = attr.ib()  # bool, True to publish instance
    name = attr.ib()  # instance name

    # format settings
    resolutionWidth = attr.ib()  # resolution width (1920)
    resolutionHeight = attr.ib()  # resolution height (1080)
    pixelAspect = attr.ib()  # pixel aspect (1.0)

    # time settings
    frameStart = attr.ib()  # start frame
    frameEnd = attr.ib()  # end frame
    frameStep = attr.ib()  # frame step

    handleStart = attr.ib(default=None)  # start handle
    handleEnd = attr.ib(default=None)  # end handle

    # for software (like Harmony) where frame range cannot be set by DB
    # handles need to be propagated if exist
    ignoreFrameHandleCheck = attr.ib(default=False)

    # --------------------
    # With default values
    # metadata
    renderer = attr.ib(default="")  # renderer - can be used in Deadline
    review = attr.ib(default=False)  # generate review from instance (bool)
    priority = attr.ib(default=50)  # job priority on farm

    family = attr.ib(default="renderlayer")
    families = attr.ib(default=["renderlayer"])  # list of families

    # format settings
    multipartExr = attr.ib(default=False)  # flag for multipart exrs
    convertToScanline = attr.ib(default=False)  # flag for exr conversion

    tileRendering = attr.ib(default=False)  # bool: treat render as tiles
    tilesX = attr.ib(default=0)  # number of tiles in X
    tilesY = attr.ib(default=0)  # number of tiles in Y

    # submit_publish_job
    toBeRenderedOn = attr.ib(default=None)
    deadlineSubmissionJob = attr.ib(default=None)
    anatomyData = attr.ib(default=None)
    outputDir = attr.ib(default=None)
    context = attr.ib(default=None)

    @frameStart.validator
    def check_frame_start(self, _, value):
        """Validate that frame start is not larger than frame end."""
        if value > self.frameEnd:
            raise ValueError("frameStart must be smaller "
                             "or equal to frameEnd")

    @frameEnd.validator
    def check_frame_end(self, _, value):
        """Validate that frame end is not less than frame start."""
        if value < self.frameStart:
            raise ValueError("frameEnd must be larger "
                             "or equal to frameStart")

    @tilesX.validator
    def check_tiles_x(self, _, value):
        """Validate that tile X size isn't less than 1."""
        if not self.tileRendering:
            return
        if value < 1:
            raise ValueError("tile X size cannot be less than 1")

        if value == 1 and self.tilesY == 1:
            raise ValueError("both tiles X and Y sizes are set to 1")

    @tilesY.validator
    def check_tiles_y(self, _, value):
        """Validate that tile Y size isn't less than 1."""
        if not self.tileRendering:
            return
        if value < 1:
            raise ValueError("tile Y size cannot be less than 1")

        if value == 1 and self.tilesX == 1:
            raise ValueError("both tiles X and Y sizes are set to 1")

@six.add_metaclass(AbstractMetaContextPlugin)
|
||||
class AbstractCollectRender(pyblish.api.ContextPlugin):
|
||||
"""Gather all publishable render layers from renderSetup."""
|
||||
|
||||
order = pyblish.api.CollectorOrder + 0.01
|
||||
label = "Collect Render"
|
||||
sync_workfile_version = False
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
"""Constructor."""
|
||||
super(AbstractCollectRender, self).__init__(*args, **kwargs)
|
||||
self._file_path = None
|
||||
self._asset = legacy_io.Session["AVALON_ASSET"]
|
||||
self._context = None
|
||||
|
||||
def process(self, context):
|
||||
"""Entry point to collector."""
|
||||
self._context = context
|
||||
for instance in context:
|
||||
# make sure workfile instance publishing is enabled
|
||||
try:
|
||||
if "workfile" in instance.data["families"]:
|
||||
instance.data["publish"] = True
|
||||
# TODO merge renderFarm and render.farm
|
||||
if ("renderFarm" in instance.data["families"] or
|
||||
"render.farm" in instance.data["families"]):
|
||||
instance.data["remove"] = True
|
||||
except KeyError:
|
||||
# be tolerant if 'families' is missing.
|
||||
pass
|
||||
|
||||
self._file_path = context.data["currentFile"].replace("\\", "/")
|
||||
|
||||
render_instances = self.get_instances(context)
|
||||
for render_instance in render_instances:
|
||||
exp_files = self.get_expected_files(render_instance)
|
||||
assert exp_files, "no file names were generated, this is bug"
|
||||
|
||||
# if we want to attach render to subset, check if we have AOV's
|
||||
# in expectedFiles. If so, raise error as we cannot attach AOV
|
||||
# (considered to be subset on its own) to another subset
|
||||
if render_instance.attachTo:
|
||||
assert isinstance(exp_files, list), (
|
||||
"attaching multiple AOVs or renderable cameras to "
|
||||
"subset is not supported"
|
||||
)
|
||||
|
||||
frame_start_render = int(render_instance.frameStart)
|
||||
frame_end_render = int(render_instance.frameEnd)
|
||||
if (render_instance.ignoreFrameHandleCheck or
|
||||
int(context.data['frameStartHandle']) == frame_start_render
|
||||
and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
|
||||
|
||||
handle_start = context.data['handleStart']
|
||||
handle_end = context.data['handleEnd']
|
||||
frame_start = context.data['frameStart']
|
||||
frame_end = context.data['frameEnd']
|
||||
frame_start_handle = context.data['frameStartHandle']
|
||||
frame_end_handle = context.data['frameEndHandle']
|
||||
else:
|
||||
handle_start = 0
|
||||
handle_end = 0
|
||||
frame_start = frame_start_render
|
||||
frame_end = frame_end_render
|
||||
frame_start_handle = frame_start_render
|
||||
frame_end_handle = frame_end_render
|
||||
|
||||
data = {
|
||||
"handleStart": handle_start,
|
||||
"handleEnd": handle_end,
|
||||
"frameStart": frame_start,
|
||||
"frameEnd": frame_end,
|
||||
"frameStartHandle": frame_start_handle,
|
||||
"frameEndHandle": frame_end_handle,
|
||||
"byFrameStep": int(render_instance.frameStep),
|
||||
|
||||
"author": context.data["user"],
|
||||
# Add source to allow tracing back to the scene from
|
||||
# which was submitted originally
|
||||
"expectedFiles": exp_files,
|
||||
}
|
||||
if self.sync_workfile_version:
|
||||
data["version"] = context.data["version"]
|
||||
|
||||
# add additional data
|
||||
data = self.add_additional_data(data)
|
||||
render_instance_dict = attr.asdict(render_instance)
|
||||
|
||||
instance = context.create_instance(render_instance.name)
|
||||
instance.data["label"] = render_instance.label
|
||||
instance.data.update(render_instance_dict)
|
||||
instance.data.update(data)
|
||||
|
||||
self.post_collecting_action()
|
||||
|
||||
@abstractmethod
|
||||
def get_instances(self, context):
|
||||
"""Get all renderable instances and their data.
|
||||
|
||||
Args:
|
||||
context (pyblish.api.Context): Context object.
|
||||
|
||||
Returns:
|
||||
list of :class:`RenderInstance`: All collected renderable instances
|
||||
(like render layers, write nodes, etc.)
|
||||
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def get_expected_files(self, render_instance):
|
||||
"""Get list of expected files.
|
||||
|
||||
Returns:
|
||||
list: expected files. This can be either simple list of files with
|
||||
their paths, or list of dictionaries, where key is name of AOV
|
||||
for example and value is list of files for that AOV.
|
||||
|
||||
Example::
|
||||
|
||||
['/path/to/file.001.exr', '/path/to/file.002.exr']
|
||||
|
||||
or as dictionary:
|
||||
|
||||
[
|
||||
{
|
||||
"beauty": ['/path/to/beauty.001.exr', ...],
|
||||
"mask": ['/path/to/mask.001.exr']
|
||||
}
|
||||
]
|
||||
|
||||
"""
|
||||
pass
|
||||
|
||||
def add_additional_data(self, data):
|
||||
"""Add additional data to collected instance.
|
||||
|
||||
This can be overridden by host implementation to add custom
|
||||
additional data.
|
||||
|
||||
"""
|
||||
return data
|
||||
|
||||
def post_collecting_action(self):
|
||||
"""Execute some code after collection is done.
|
||||
|
||||
This is useful for example for restoring current render layer.
|
||||
|
||||
"""
|
||||
pass
|
||||
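A minimal sketch of a host-side subclass of the plugin above, with a single hard-coded render layer; the plugin name, the "myhost" host, the layer data and the "/renders" output path are all invented for illustration (a real implementation would query the host API):

    from openpype.pipeline.publish import (
        AbstractCollectRender,
        RenderInstance,
    )

    class CollectMyHostRender(AbstractCollectRender):
        hosts = ["myhost"]

        def get_instances(self, context):
            # A real collector would list render layers from the host here.
            layers = [{"name": "beautyMain", "start": 1001, "end": 1050}]
            instances = []
            for layer in layers:
                instances.append(RenderInstance(
                    version=1,
                    time="",
                    source=context.data["currentFile"],
                    label=layer["name"],
                    subset=layer["name"],
                    task="lighting",
                    asset="shot010",
                    attachTo=None,
                    setMembers=[layer["name"]],
                    publish=True,
                    name=layer["name"],
                    resolutionWidth=1920,
                    resolutionHeight=1080,
                    pixelAspect=1.0,
                    frameStart=layer["start"],
                    frameEnd=layer["end"],
                    frameStep=1,
                ))
            return instances

        def get_expected_files(self, render_instance):
            # One expected file per frame of the collected range.
            return [
                "/renders/{}.{:04d}.exr".format(render_instance.name, frame)
                for frame in range(
                    int(render_instance.frameStart),
                    int(render_instance.frameEnd) + 1
                )
            ]

Note that constructing the RenderInstance runs the attrs validators, so an inverted frame range would raise ValueError at collection time.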
53
openpype/pipeline/publish/abstract_expected_files.py
Normal file
@@ -0,0 +1,53 @@
# -*- coding: utf-8 -*-
"""Abstract ExpectedFile class definition."""
from abc import ABCMeta, abstractmethod
import six


@six.add_metaclass(ABCMeta)
class ExpectedFiles:
    """Class grouping functionality for all supported renderers.

    Attributes:
        multipart (bool): Flag if multipart exrs are used.

    """

    multipart = False

    @abstractmethod
    def get(self, render_instance):
        """Get expected files for given renderer and render layer.

        This method should return all files we are expecting to be
        rendered from the host. Usually `render_instance` corresponds
        to a *render layer*. The result can be either a flat list with
        the file paths or a list of dictionaries, where each key
        corresponds to, for example, an AOV name or channel, etc.

        Example::

            ['/path/to/file.001.exr', '/path/to/file.002.exr']

            or as dictionary:

            [
                {
                    "beauty": ['/path/to/beauty.001.exr', ...],
                    "mask": ['/path/to/mask.001.exr']
                }
            ]


        Args:
            render_instance (:class:`RenderInstance`): Data passed from
                collector to determine files. This should be instance of
                :class:`abstract_collect_render.RenderInstance`

        Returns:
            list: Full paths to expected rendered files.
            list of dict: Path to expected rendered files categorized by
                AOVs, etc.

        """
        raise NotImplementedError()
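A short illustrative sketch of a concrete subclass; the renderer name and the frame-padding scheme are made up for the example, and it assumes `outputDir` was filled on the render instance:

    class ExpectedFilesMyRenderer(ExpectedFiles):
        """Hypothetical expected-files implementation for one renderer."""

        multipart = False

        def get(self, render_instance):
            # Flat list, one file per frame; an AOV-keyed list of dicts
            # could be returned instead.
            return [
                "{}/{}.{:04d}.exr".format(
                    render_instance.outputDir,
                    render_instance.subset,
                    frame
                )
                for frame in range(
                    int(render_instance.frameStart),
                    int(render_instance.frameEnd) + 1
                )
            ]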
@@ -1,7 +1,17 @@
from abc import ABCMeta
from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin
from openpype.lib import BoolDef
from .lib import load_help_content_from_plugin


class AbstractMetaInstancePlugin(ABCMeta, MetaPlugin):
    pass


class AbstractMetaContextPlugin(ABCMeta, ExplicitMetaPlugin):
    pass


class PublishValidationError(Exception):
    """Validation error happened during publishing.

@@ -16,6 +26,7 @@ class PublishValidationError(Exception):
        description(str): Detailed description of an error. It is possible
            to use Markdown syntax.
    """

    def __init__(self, message, title=None, description=None, detail=None):
        self.message = message
        self.title = title or "< Missing title >"

@@ -49,6 +60,7 @@ class KnownPublishError(Exception):

    Message will be shown in UI for artist.
    """

    pass


@@ -92,6 +104,7 @@ class OpenPypePyblishPluginMixin:
        Returns:
            list<AbstractAttrDef>: Attribute definitions for plugin.
        """

        return []

    @classmethod

@@ -116,6 +129,7 @@ class OpenPypePyblishPluginMixin:
        Args:
            data(dict): Data from instance or context.
        """

        return (
            data
            .get("publish_attributes", {})
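Since the mixin's attribute definitions now ship alongside the `BoolDef` import added above, a plugin can expose an artist-facing toggle roughly like this; the plugin name and the "make_review" key are invented for the sketch:

    import pyblish.api

    from openpype.lib import BoolDef
    from openpype.pipeline.publish import OpenPypePyblishPluginMixin

    class CollectReviewToggle(
        pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
    ):
        label = "Collect Review Toggle"
        order = pyblish.api.CollectorOrder

        @classmethod
        def get_attribute_defs(cls):
            # Rendered as a checkbox in the publisher UI.
            return [
                BoolDef("make_review", label="Make review", default=True)
            ]

        def process(self, instance):
            # Values filled by the artist land under 'publish_attributes'
            # and are read back through the mixin helper shown above.
            attr_values = self.get_attr_values_from_data(instance.data)
            instance.data["review"] = attr_values.get("make_review", True)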
@@ -4,13 +4,14 @@ import uuid

import clique
from pymongo import UpdateOne
import ftrack_api
import qargparse
from Qt import QtWidgets, QtCore

from openpype.client import get_versions, get_representations
from openpype import style
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.lib import StringTemplate
from openpype.modules import ModulesManager


class DeleteOldVersions(load.SubsetLoaderPlugin):

@@ -197,18 +198,10 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
    def get_data(self, context, versions_count):
        subset = context["subset"]
        asset = context["asset"]
        anatomy = Anatomy(context["project"]["name"])
        project_name = context["project"]["name"]
        anatomy = Anatomy(project_name)

        self.dbcon = AvalonMongoDB()
        self.dbcon.Session["AVALON_PROJECT"] = context["project"]["name"]
        self.dbcon.install()

        versions = list(
            self.dbcon.find({
                "type": "version",
                "parent": {"$in": [subset["_id"]]}
            })
        )
        versions = list(get_versions(project_name, subset_ids=[subset["_id"]]))

        versions_by_parent = collections.defaultdict(list)
        for ent in versions:

@@ -267,10 +260,9 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
            print(msg)
            return

        repres = list(self.dbcon.find({
            "type": "representation",
            "parent": {"$in": version_ids}
        }))
        repres = list(get_representations(
            project_name, version_ids=version_ids
        ))

        self.log.debug(
            "Collected representations to remove ({})".format(len(repres))

@@ -329,7 +321,7 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):

        return data

    def main(self, data, remove_publish_folder):
    def main(self, project_name, data, remove_publish_folder):
        # Size of files.
        size = 0
        if not data:

@@ -366,30 +358,70 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
        ))

        if mongo_changes_bulk:
            self.dbcon.bulk_write(mongo_changes_bulk)
            dbcon = AvalonMongoDB()
            dbcon.Session["AVALON_PROJECT"] = project_name
            dbcon.install()
            dbcon.bulk_write(mongo_changes_bulk)
            dbcon.uninstall()

        self.dbcon.uninstall()
        self._ftrack_delete_versions(data)

        return size

    def _ftrack_delete_versions(self, data):
        """Delete version on ftrack.

        Handling of ftrack logic in this plugin is not ideal. But in OP3 it is
        almost impossible to solve the issue another way.

        Note:
            Asset versions on ftrack are not deleted but marked as
            "not published", which makes them invisible.

        Args:
            data (dict): Data sent to subset loader with full context.
        """

        # First check for ftrack id on asset document
        # - skip if there is none
        asset_ftrack_id = data["asset"]["data"].get("ftrackId")
        if not asset_ftrack_id:
            self.log.info((
                "Asset does not have filled ftrack id. Skipped delete"
                " of ftrack version."
            ))
            return

        # Check if ftrack module is enabled
        modules_manager = ModulesManager()
        ftrack_module = modules_manager.modules_by_name.get("ftrack")
        if not ftrack_module or not ftrack_module.enabled:
            return

        import ftrack_api

        session = ftrack_api.Session()
        subset_name = data["subset"]["name"]
        versions = {
            '"{}"'.format(version_doc["name"])
            for version_doc in data["versions"]
        }
        asset_versions = session.query(
            (
                "select id, is_published from AssetVersion where"
                " asset.parent.id is \"{}\""
                " and asset.name is \"{}\""
                " and version in ({})"
            ).format(
                asset_ftrack_id,
                subset_name,
                ",".join(versions)
            )
        ).all()

        # Set attribute `is_published` to `False` on ftrack AssetVersions
        session = ftrack_api.Session()
        query = (
            "AssetVersion where asset.parent.id is \"{}\""
            " and asset.name is \"{}\""
            " and version is \"{}\""
        )
        for v in data["versions"]:
            try:
                ftrack_version = session.query(
                    query.format(
                        data["asset"]["data"]["ftrackId"],
                        data["subset"]["name"],
                        v["name"]
                    )
                ).one()
            except ftrack_api.exception.NoResultFoundError:
                continue

            ftrack_version["is_published"] = False
        for asset_version in asset_versions:
            asset_version["is_published"] = False

        try:
            session.commit()

@@ -402,8 +434,6 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
            self.log.error(msg)
            self.message(msg)

        return size

    def load(self, contexts, name=None, namespace=None, options=None):
        try:
            size = 0

@@ -422,7 +452,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
                if not data:
                    continue

                size += self.main(data, remove_publish_folder)
                project_name = context["project"]["name"]
                size += self.main(project_name, data, remove_publish_folder)
                print("Progressing {}/{}".format(count + 1, len(contexts)))

            msg = "Total size of files: " + self.sizeof_fmt(size)

@@ -448,7 +479,7 @@ class CalculateOldVersions(DeleteOldVersions):
        )
    ]

    def main(self, data, remove_publish_folder):
    def main(self, project_name, data, remove_publish_folder):
        size = 0

        if not data:
@@ -3,8 +3,9 @@ from collections import defaultdict

from Qt import QtWidgets, QtCore, QtGui

from openpype.client import get_representations
from openpype.lib import config
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.pipeline import load, Anatomy
from openpype import resources, style

from openpype.lib.delivery import (

@@ -68,17 +69,13 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):

        self.setStyleSheet(style.load_stylesheet())

        project = contexts[0]["project"]["name"]
        self.anatomy = Anatomy(project)
        project_name = contexts[0]["project"]["name"]
        self.anatomy = Anatomy(project_name)
        self._representations = None
        self.log = log
        self.currently_uploaded = 0

        self.dbcon = AvalonMongoDB()
        self.dbcon.Session["AVALON_PROJECT"] = project
        self.dbcon.install()

        self._set_representations(contexts)
        self._set_representations(project_name, contexts)

        dropdown = QtWidgets.QComboBox()
        self.templates = self._get_templates(self.anatomy)

@@ -238,13 +235,12 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):

        return templates

    def _set_representations(self, contexts):
    def _set_representations(self, project_name, contexts):
        version_ids = [context["version"]["_id"] for context in contexts]

        repres = list(self.dbcon.find({
            "type": "representation",
            "parent": {"$in": version_ids}
        }))
        repres = list(get_representations(
            project_name, version_ids=version_ids
        ))

        self._representations = repres
@@ -27,6 +27,11 @@ import collections

import pyblish.api

from openpype.client import (
    get_assets,
    get_subsets,
    get_last_versions
)
from openpype.pipeline import legacy_io


@@ -44,13 +49,15 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
    def process(self, context):
        self.log.info("Collecting anatomy data for all instances.")

        self.fill_missing_asset_docs(context)
        self.fill_latest_versions(context)
        project_name = legacy_io.active_project()
        self.fill_missing_asset_docs(context, project_name)
        self.fill_instance_data_from_asset(context)
        self.fill_latest_versions(context, project_name)
        self.fill_anatomy_data(context)

        self.log.info("Anatomy Data collection finished.")

    def fill_missing_asset_docs(self, context):
    def fill_missing_asset_docs(self, context, project_name):
        self.log.debug("Querying asset documents for instances.")

        context_asset_doc = context.data.get("assetEntity")

@@ -84,10 +91,8 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
        self.log.debug("Querying asset documents with names: {}".format(
            ", ".join(["\"{}\"".format(name) for name in asset_names])
        ))
        asset_docs = legacy_io.find({
            "type": "asset",
            "name": {"$in": asset_names}
        })

        asset_docs = get_assets(project_name, asset_names=asset_names)
        asset_docs_by_name = {
            asset_doc["name"]: asset_doc
            for asset_doc in asset_docs

@@ -111,7 +116,24 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
            "Not found asset documents with names \"{}\"."
        ).format(joined_asset_names))

    def fill_latest_versions(self, context):
    def fill_instance_data_from_asset(self, context):
        for instance in context:
            asset_doc = instance.data.get("assetEntity")
            if not asset_doc:
                continue

            asset_data = asset_doc["data"]
            for key in (
                "fps",
                "frameStart",
                "frameEnd",
                "handleStart",
                "handleEnd",
            ):
                if key not in instance.data and key in asset_data:
                    instance.data[key] = asset_data[key]

    def fill_latest_versions(self, context, project_name):
        """Try to find latest version for each instance's subset.

        Key "latestVersion" is always set to latest version or `None`.

@@ -126,7 +148,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
        self.log.debug("Querying latest versions for instances.")

        hierarchy = {}
        subset_filters = []
        names_by_asset_ids = collections.defaultdict(set)
        for instance in context:
            # Make sure `"latestVersion"` key is set
            latest_version = instance.data.get("latestVersion")

@@ -147,67 +169,33 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
            if subset_name not in hierarchy[asset_id]:
                hierarchy[asset_id][subset_name] = []
            hierarchy[asset_id][subset_name].append(instance)
            subset_filters.append({
                "parent": asset_id,
                "name": subset_name
            })
            names_by_asset_ids[asset_id].add(subset_name)

        subset_docs = []
        if subset_filters:
            subset_docs = list(legacy_io.find({
                "type": "subset",
                "$or": subset_filters
            }))
        if names_by_asset_ids:
            subset_docs = list(get_subsets(
                project_name, names_by_asset_ids=names_by_asset_ids
            ))

        subset_ids = [
            subset_doc["_id"]
            for subset_doc in subset_docs
        ]

        last_version_by_subset_id = self._query_last_versions(subset_ids)
        last_version_docs_by_subset_id = get_last_versions(
            project_name, subset_ids, fields=["name"]
        )
        for subset_doc in subset_docs:
            subset_id = subset_doc["_id"]
            last_version = last_version_by_subset_id.get(subset_id)
            if last_version is None:
            last_version_doc = last_version_docs_by_subset_id.get(subset_id)
            if last_version_doc is None:
                continue

            asset_id = subset_doc["parent"]
            subset_name = subset_doc["name"]
            _instances = hierarchy[asset_id][subset_name]
            for _instance in _instances:
                _instance.data["latestVersion"] = last_version

    def _query_last_versions(self, subset_ids):
        """Retrieve all latest versions for entered subset_ids.

        Args:
            subset_ids (list): List of subset ids with type `ObjectId`.

        Returns:
            dict: Key is subset id and value is last version name.
        """
        _pipeline = [
            # Find all versions of those subsets
            {"$match": {
                "type": "version",
                "parent": {"$in": subset_ids}
            }},
            # Sorting versions all together
            {"$sort": {"name": 1}},
            # Group them by "parent", but only take the last
            {"$group": {
                "_id": "$parent",
                "_version_id": {"$last": "$_id"},
                "name": {"$last": "$name"}
            }}
        ]

        last_version_by_subset_id = {}
        for doc in legacy_io.aggregate(_pipeline):
            subset_id = doc["_id"]
            last_version_by_subset_id[subset_id] = doc["name"]

        return last_version_by_subset_id
                _instance.data["latestVersion"] = last_version_doc["name"]

    def fill_anatomy_data(self, context):
        self.log.debug("Storing anatomy data to instance data.")
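The removed aggregation pipeline is replaced by the `get_last_versions` query function; a minimal usage sketch with an invented project name (and assuming `get_subsets` with no extra filters returns all subsets of a project):

    from openpype.client import get_subsets, get_last_versions

    project_name = "demo_project"  # hypothetical project
    subset_docs = list(get_subsets(project_name))
    subset_ids = [subset_doc["_id"] for subset_doc in subset_docs]

    # Maps subset id -> last version document (only 'name' is fetched).
    last_versions = get_last_versions(
        project_name, subset_ids, fields=["name"]
    )
    for subset_id, version_doc in last_versions.items():
        print(subset_id, version_doc["name"])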
@@ -10,6 +10,7 @@ Provides:

import pyblish.api

from openpype.client import get_project, get_asset_by_name
from openpype.pipeline import legacy_io


@@ -25,10 +26,7 @@ class CollectAvalonEntities(pyblish.api.ContextPlugin):
        asset_name = legacy_io.Session["AVALON_ASSET"]
        task_name = legacy_io.Session["AVALON_TASK"]

        project_entity = legacy_io.find_one({
            "type": "project",
            "name": project_name
        })
        project_entity = get_project(project_name)
        assert project_entity, (
            "Project '{0}' was not found."
        ).format(project_name)

@@ -39,11 +37,8 @@ class CollectAvalonEntities(pyblish.api.ContextPlugin):
        if not asset_name:
            self.log.info("Context is not set. Can't collect global data.")
            return
        asset_entity = legacy_io.find_one({
            "type": "asset",
            "name": asset_name,
            "parent": project_entity["_id"]
        })

        asset_entity = get_asset_by_name(project_name, asset_name)
        assert asset_entity, (
            "No asset found by the name '{0}' in project '{1}'"
        ).format(asset_name, project_name)
@@ -14,7 +14,7 @@ class CollectCleanupKeys(pyblish.api.ContextPlugin):
    """Prepare keys for 'ExplicitCleanUp' plugin."""

    label = "Collect Cleanup Keys"
    order = pyblish.api.CollectorOrder
    order = pyblish.api.CollectorOrder - 0.5

    def process(self, context):
        context.data["cleanupFullPaths"] = []
@@ -47,12 +47,11 @@ class CollectFromCreateContext(pyblish.api.ContextPlugin):
            "label": subset,
            "name": subset,
            "family": in_data["family"],
            "families": instance_families
            "families": instance_families,
            "representations": []
        })
        for key, value in in_data.items():
            if key not in instance.data:
                instance.data[key] = value
        self.log.info("collected instance: {}".format(instance.data))
        self.log.info("parsing data: {}".format(in_data))

        instance.data["representations"] = list()
@@ -1,7 +1,6 @@
from bson.objectid import ObjectId

import pyblish.api

from openpype.client import get_representations
from openpype.pipeline import (
    registered_host,
    legacy_io,

@@ -39,23 +38,29 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
            return

        loaded_versions = []
        _containers = list(host.ls())
        _repr_ids = [ObjectId(c["representation"]) for c in _containers]
        repre_docs = legacy_io.find(
            {"_id": {"$in": _repr_ids}},
            projection={"_id": 1, "parent": 1}
        containers = list(host.ls())
        repre_ids = {
            container["representation"]
            for container in containers
        }

        project_name = legacy_io.active_project()
        repre_docs = get_representations(
            project_name,
            representation_ids=repre_ids,
            fields=["_id", "parent"]
        )
        version_by_repr = {
            str(doc["_id"]): doc["parent"]
        repre_doc_by_str_id = {
            str(doc["_id"]): doc
            for doc in repre_docs
        }

        # QUESTION should we add same representation id when loaded multiple
        # times?
        for con in _containers:
        for con in containers:
            repre_id = con["representation"]
            version_id = version_by_repr.get(repre_id)
            if version_id is None:
            repre_doc = repre_doc_by_str_id.get(repre_id)
            if repre_doc is None:
                self.log.warning((
                    "Skipping container,"
                    " did not find representation document. {}"

@@ -66,8 +71,8 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
            # may have more than one representation that are same version
            version = {
                "subsetName": con["name"],
                "representation": ObjectId(repre_id),
                "version": version_id,
                "representation": repre_doc["_id"],
                "version": repre_doc["parent"],
            }
            loaded_versions.append(version)

@@ -1,5 +1,11 @@
from copy import deepcopy
import pyblish.api
from openpype.client import (
    get_project,
    get_asset_by_id,
    get_asset_by_name,
    get_archived_assets
)
from openpype.pipeline import legacy_io


@@ -19,14 +25,14 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
        if not legacy_io.Session:
            legacy_io.install()

        project_name = legacy_io.active_project()
        hierarchy_context = self._get_active_assets(context)
        self.log.debug("__ hierarchy_context: {}".format(hierarchy_context))

        self.project = None
        self.import_to_avalon(hierarchy_context)
        self.import_to_avalon(project_name, hierarchy_context)

    def import_to_avalon(self, input_data, parent=None):
    def import_to_avalon(self, project_name, input_data, parent=None):
        for name in input_data:
            self.log.info("input_data[name]: {}".format(input_data[name]))
            entity_data = input_data[name]

@@ -62,7 +68,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
            update_data = True
            # Process project
            if entity_type.lower() == "project":
                entity = legacy_io.find_one({"type": "project"})
                entity = get_project(project_name)
                # TODO: should be in validator?
                assert (entity is not None), "Did not find project in DB"

@@ -79,7 +85,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
            )
            # Else process asset
            else:
                entity = legacy_io.find_one({"type": "asset", "name": name})
                entity = get_asset_by_name(project_name, name)
                if entity:
                    # Do not override data, only update
                    cur_entity_data = entity.get("data") or {}

@@ -103,10 +109,10 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
                    # Skip updating data
                    update_data = False

                    archived_entities = legacy_io.find({
                        "type": "archived_asset",
                        "name": name
                    })
                    archived_entities = get_archived_assets(
                        project_name,
                        asset_names=[name]
                    )
                    unarchive_entity = None
                    for archived_entity in archived_entities:
                        archived_parents = (

@@ -120,7 +126,9 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):

                    if unarchive_entity is None:
                        # Create entity if it doesn't exist
                        entity = self.create_avalon_asset(name, data)
                        entity = self.create_avalon_asset(
                            project_name, name, data
                        )
                    else:
                        # Unarchive if entity was archived
                        entity = self.unarchive_entity(unarchive_entity, data)

@@ -133,7 +141,9 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
            )

            if "childs" in entity_data:
                self.import_to_avalon(entity_data["childs"], entity)
                self.import_to_avalon(
                    project_name, entity_data["childs"], entity
                )

    def unarchive_entity(self, entity, data):
        # Unarchived asset should not use same data

@@ -151,7 +161,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
        )
        return new_entity

    def create_avalon_asset(self, name, data):
    def create_avalon_asset(self, project_name, name, data):
        item = {
            "schema": "openpype:asset-3.0",
            "name": name,

@@ -162,7 +172,7 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
        self.log.debug("Creating asset: {}".format(item))
        entity_id = legacy_io.insert_one(item).inserted_id

        return legacy_io.find_one({"_id": entity_id})
        return get_asset_by_id(project_name, entity_id)

    def _get_active_assets(self, context):
        """Returns only asset dictionary.
@@ -447,7 +447,22 @@ class ExtractReview(pyblish.api.InstancePlugin):

        input_is_sequence = self.input_is_sequence(repre)
        input_allow_bg = False
        first_sequence_frame = None
        if input_is_sequence and repre["files"]:
            # Calculate first frame that should be used
            cols, _ = clique.assemble(repre["files"])
            input_frames = list(sorted(cols[0].indexes))
            first_sequence_frame = input_frames[0]
            # WARNING: This is an issue as we don't know if the first frame
            # is with or without handles!
            # - handle start is added but we do not know if we should
            output_duration = (output_frame_end - output_frame_start) + 1
            if (
                without_handles
                and len(input_frames) - handle_start >= output_duration
            ):
                first_sequence_frame += handle_start

            ext = os.path.splitext(repre["files"][0])[1].replace(".", "")
            if ext in self.alpha_exts:
                input_allow_bg = True

@@ -467,6 +482,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
            "resolution_height": instance.data.get("resolutionHeight"),
            "origin_repre": repre,
            "input_is_sequence": input_is_sequence,
            "first_sequence_frame": first_sequence_frame,
            "input_allow_bg": input_allow_bg,
            "with_audio": with_audio,
            "without_handles": without_handles,

@@ -545,9 +561,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
        if temp_data["input_is_sequence"]:
            # Set start frame of input sequence (just frame in filename)
            # - definition of input filepath
            ffmpeg_input_args.append(
                "-start_number {}".format(temp_data["output_frame_start"])
            )
            ffmpeg_input_args.extend([
                "-start_number", str(temp_data["first_sequence_frame"])
            ])

        # TODO add fps mapping `{fps: fraction}` ?
        # - e.g.: {
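A small standalone sketch of the frame-number logic above, using the real `clique` API on an invented file list:

    import clique

    files = [
        "shot.0997.exr", "shot.0998.exr", "shot.0999.exr", "shot.1000.exr"
    ]
    # assemble() groups the files into one collection of frame indexes.
    cols, _ = clique.assemble(files)
    input_frames = sorted(cols[0].indexes)

    first_sequence_frame = input_frames[0]  # 997
    handle_start = 2
    # With handles trimmed off, the input would start two frames later,
    # which is the value passed to ffmpeg's -start_number.
    print(first_sequence_frame + handle_start)  # 999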
@@ -22,7 +22,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
        "imagesequence", "render", "render2d", "prerender",
        "source", "plate", "take"
    ]
    hosts = ["shell", "fusion", "resolve"]
    hosts = ["shell", "fusion", "resolve", "traypublisher"]
    enabled = False

    # presetable attribute

@@ -46,6 +46,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
            self.log.info("Skipping - no review set on instance.")
            return

        if self._already_has_thumbnail(instance):
            self.log.info("Thumbnail representation already present.")
            return

        filtered_repres = self._get_filtered_repres(instance)
        for repre in filtered_repres:
            repre_files = repre["files"]

@@ -71,18 +75,12 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
            if not is_oiio_supported():
                thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa
            else:
                # Check if the file can be read by OIIO
                oiio_tool_path = get_oiio_tools_path()
                args = [
                    oiio_tool_path, "--info", "-i", full_output_path
                ]
                returncode = execute(args, silent=True)
                # If the input can be read by OIIO then use OIIO method for
                # conversion otherwise use ffmpeg
                if returncode == 0:
                    self.log.info("Input can be read by OIIO, converting with oiiotool now.")  # noqa
                    thumbnail_created = self.create_thumbnail_oiio(full_input_path, full_output_path)  # noqa
                else:
                self.log.info("Trying to convert with OIIO")  # noqa
                thumbnail_created = self.create_thumbnail_oiio(full_input_path, full_output_path)  # noqa

            if not thumbnail_created:
                self.log.info("Converting with FFMPEG because input can't be read by OIIO.")  # noqa
                thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa

@@ -108,6 +106,14 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
            # There is no need to create more than one thumbnail
            break

    def _already_has_thumbnail(self, instance):
        for repre in instance.data.get("representations", []):
            self.log.info("repre {}".format(repre))
            if repre["name"] == "thumbnail":
                return True

        return False

    def _get_filtered_repres(self, instance):
        filtered_repres = []
        src_repres = instance.data.get("representations") or []
@@ -8,6 +8,12 @@ from bson.objectid import ObjectId
from pymongo import InsertOne, ReplaceOne
import pyblish.api

from openpype.client import (
    get_version_by_id,
    get_hero_version_by_subset_id,
    get_archived_representations,
    get_representations,
)
from openpype.lib import (
    create_hard_link,
    filter_profiles

@@ -85,9 +91,13 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
            hero_template
        ))

        self.integrate_instance(instance, template_key, hero_template)
        self.integrate_instance(
            instance, project_name, template_key, hero_template
        )

    def integrate_instance(self, instance, template_key, hero_template):
    def integrate_instance(
        self, instance, project_name, template_key, hero_template
    ):
        anatomy = instance.context.data["anatomy"]
        published_repres = instance.data["published_representations"]
        hero_publish_dir = self.get_publish_dir(instance, template_key)

@@ -118,8 +128,8 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
            "Published version entity was not sent in representation data."
            " Querying entity from database."
        ))
        src_version_entity = (
            self.version_from_representations(published_repres)
        src_version_entity = self.version_from_representations(
            project_name, published_repres
        )

        if not src_version_entity:

@@ -170,8 +180,8 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
            other_file_paths_mapping.append((file_path, dst_filepath))

        # Current version
        old_version, old_repres = (
            self.current_hero_ents(src_version_entity)
        old_version, old_repres = self.current_hero_ents(
            project_name, src_version_entity
        )

        old_repres_by_name = {

@@ -223,11 +233,11 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
        if old_repres_by_name:
            old_repres_to_delete = old_repres_by_name

        archived_repres = list(legacy_io.find({
        archived_repres = list(get_archived_representations(
            project_name,
            # Check what is type of archived representation
            "type": "archived_repsentation",
            "parent": new_version_id
        }))
            version_ids=[new_version_id]
        ))
        archived_repres_by_name = {}
        for repre in archived_repres:
            repre_name_low = repre["name"].lower()

@@ -586,25 +596,23 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):

        shutil.copy(src_path, dst_path)

    def version_from_representations(self, repres):
    def version_from_representations(self, project_name, repres):
        for repre in repres:
            version = legacy_io.find_one({"_id": repre["parent"]})
            version = get_version_by_id(project_name, repre["parent"])
            if version:
                return version

    def current_hero_ents(self, version):
        hero_version = legacy_io.find_one({
            "parent": version["parent"],
            "type": "hero_version"
        })
    def current_hero_ents(self, project_name, version):
        hero_version = get_hero_version_by_subset_id(
            project_name, version["parent"]
        )

        if not hero_version:
            return (None, [])

        hero_repres = list(legacy_io.find({
            "parent": hero_version["_id"],
            "type": "representation"
        }))
        hero_repres = list(get_representations(
            project_name, version_ids=[hero_version["_id"]]
        ))
        return (hero_version, hero_repres)

    def _update_path(self, anatomy, path, src_file, dst_file):
@@ -16,6 +16,15 @@ from pymongo import DeleteOne, InsertOne
import pyblish.api

import openpype.api
from openpype.client import (
    get_asset_by_name,
    get_subset_by_id,
    get_subset_by_name,
    get_version_by_id,
    get_version_by_name,
    get_representations,
    get_archived_representations,
)
from openpype.lib.profiles_filtering import filter_profiles
from openpype.lib import (
    prepare_template_data,

@@ -201,6 +210,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        context = instance.context

        project_entity = instance.data["projectEntity"]
        project_name = project_entity["name"]

        context_asset_name = None
        context_asset_doc = context.data.get("assetEntity")

@@ -210,11 +220,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        asset_name = instance.data["asset"]
        asset_entity = instance.data.get("assetEntity")
        if not asset_entity or asset_entity["name"] != context_asset_name:
            asset_entity = legacy_io.find_one({
                "type": "asset",
                "name": asset_name,
                "parent": project_entity["_id"]
            })
            asset_entity = get_asset_by_name(project_name, asset_name)
            assert asset_entity, (
                "No asset found by the name \"{0}\" in project \"{1}\""
            ).format(asset_name, project_entity["name"])

@@ -270,7 +276,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
            "Establishing staging directory @ {0}".format(stagingdir)
        )

        subset = self.get_subset(asset_entity, instance)
        subset = self.get_subset(project_name, asset_entity, instance)
        instance.data["subsetEntity"] = subset

        version_number = instance.data["version"]

@@ -297,11 +303,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
            for _repre in repres
        ]

        existing_version = legacy_io.find_one({
            'type': 'version',
            'parent': subset["_id"],
            'name': version_number
        })
        existing_version = get_version_by_name(
            project_name, version_number, subset["_id"]
        )

        if existing_version is None:
            version_id = legacy_io.insert_one(version).inserted_id

@@ -322,10 +326,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
            version_id = existing_version['_id']

            # Find representations of existing version and archive them
            current_repres = list(legacy_io.find({
                "type": "representation",
                "parent": version_id
            }))
            current_repres = list(get_representations(
                project_name, version_ids=[version_id]
            ))
            bulk_writes = []
            for repre in current_repres:
                if append_repres:

@@ -345,18 +348,17 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):

            # bulk updates
            if bulk_writes:
                project_name = legacy_io.Session["AVALON_PROJECT"]
                legacy_io.database[project_name].bulk_write(
                    bulk_writes
                )

        version = legacy_io.find_one({"_id": version_id})
        version = get_version_by_id(project_name, version_id)
        instance.data["versionEntity"] = version

        existing_repres = list(legacy_io.find({
            "parent": version_id,
            "type": "archived_representation"
        }))
        existing_repres = list(get_archived_representations(
            project_name,
            version_ids=[version_id]
        ))

        instance.data['version'] = version['name']

@@ -792,13 +794,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):

        create_hard_link(src, dst)

    def get_subset(self, asset, instance):
    def get_subset(self, project_name, asset, instance):
        subset_name = instance.data["subset"]
        subset = legacy_io.find_one({
            "type": "subset",
            "parent": asset["_id"],
            "name": subset_name
        })
        subset = get_subset_by_name(project_name, subset_name, asset["_id"])

        if subset is None:
            self.log.info("Subset '%s' not found, creating ..." % subset_name)

@@ -825,7 +823,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                "parent": asset["_id"]
            }).inserted_id

            subset = legacy_io.find_one({"_id": _id})
            subset = get_subset_by_id(project_name, _id)

        # QUESTION Why is changing of group and updating of its
        # families in 'get_subset'?
@@ -8,6 +8,7 @@ import six
import pyblish.api
from bson.objectid import ObjectId

from openpype.client import get_version_by_id
from openpype.pipeline import legacy_io


@@ -70,7 +71,7 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):

        thumbnail_template = anatomy.templates["publish"]["thumbnail"]

        version = legacy_io.find_one({"_id": thumb_repre["parent"]})
        version = get_version_by_id(project_name, thumb_repre["parent"])
        if not version:
            raise AssertionError(
                "Version with id {} does not exist".format(
@@ -3,6 +3,7 @@ from pprint import pformat
import pyblish.api

from openpype.pipeline import legacy_io
from openpype.client import get_assets


class ValidateEditorialAssetName(pyblish.api.ContextPlugin):

@@ -29,8 +30,10 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
        if not legacy_io.Session:
            legacy_io.install()

        db_assets = list(legacy_io.find(
            {"type": "asset"}, {"name": 1, "data.parents": 1}))
        project_name = legacy_io.active_project()
        db_assets = list(get_assets(
            project_name, fields=["name", "data.parents"]
        ))
        self.log.debug("__ db_assets: {}".format(db_assets))

        asset_db_docs = {
@@ -7,7 +7,7 @@ import time

from openpype.lib import PypeLogger
from openpype.api import get_app_environments_for_context
from openpype.lib.plugin_tools import parse_json, get_batch_asset_task_info
from openpype.lib.plugin_tools import get_batch_asset_task_info
from openpype.lib.remote_publish import (
    get_webpublish_conn,
    start_webpublish_log,
@@ -98,7 +98,7 @@
        ],
        "reel_group_name": "OpenPype_Reels",
        "reel_name": "Loaded",
        "clip_name_template": "{asset}_{subset}_{output}"
        "clip_name_template": "{asset}_{subset}<_{output}>"
    },
    "LoadClipBatch": {
        "enabled": true,

@@ -121,7 +121,7 @@
            "exr16fpdwaa"
        ],
        "reel_name": "OP_LoadedReel",
        "clip_name_template": "{asset}_{subset}_{output}"
        "clip_name_template": "{asset}_{subset}<_{output}>"
        }
    }
}
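The angle brackets in the new template mark an optional section that drops out when its key is missing. A rough illustration with OpenPype's StringTemplate; the exact formatting API is assumed here from the template convention, not confirmed by this diff:

    from openpype.lib import StringTemplate

    template = StringTemplate("{asset}_{subset}<_{output}>")

    # When 'output' is available the optional part is rendered...
    print(template.format(
        {"asset": "sh010", "subset": "plateMain", "output": "exr"}
    ))  # sh010_plateMain_exr

    # ...and when it is missing, the whole '<_{output}>' part is omitted.
    print(template.format({"asset": "sh010", "subset": "plateMain"}))
    # sh010_plateMain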
@@ -124,6 +124,11 @@
        "Project Manager"
    ],
    "cycle_enabled": false,
    "cycle_hour_start": [
        0,
        0,
        0
    ],
    "review_session_template": "{yy}{mm}{dd}"
    }
},
@@ -268,6 +273,49 @@
            }
        ]
    },
    {
        "hosts": [
            "traypublisher"
        ],
        "families": [],
        "task_types": [],
        "tasks": [],
        "add_ftrack_family": true,
        "advanced_filtering": []
    },
    {
        "hosts": [
            "traypublisher"
        ],
        "families": [
            "matchmove",
            "shot"
        ],
        "task_types": [],
        "tasks": [],
        "add_ftrack_family": false,
        "advanced_filtering": []
    },
    {
        "hosts": [
            "traypublisher"
        ],
        "families": [
            "plate"
        ],
        "task_types": [],
        "tasks": [],
        "add_ftrack_family": false,
        "advanced_filtering": [
            {
                "families": [
                    "clip",
                    "review"
                ],
                "add_ftrack_family": true
            }
        ]
    },
    {
        "hosts": [
            "maya"
@@ -51,5 +51,17 @@
            ]
        }
    },
    "filters": {}
    "filters": {},
    "scriptsmenu": {
        "name": "OpenPype Tools",
        "definition": [
            {
                "type": "action",
                "sourcetype": "python",
                "title": "OpenPype Docs",
                "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_hiero')",
                "tooltip": "Open the OpenPype Hiero user doc page"
            }
        ]
    }
}
@@ -204,7 +204,8 @@
    "ValidateFrameRange": {
        "enabled": true,
        "optional": true,
        "active": true
        "active": true,
        "exclude_families": ["model", "rig", "staticMesh"]
    },
    "ValidateShaderName": {
        "enabled": false,

@@ -496,11 +497,29 @@
        "override_viewport_options": true,
        "displayLights": "default",
        "textureMaxResolution": 1024,
        "multiSample": 4,
        "renderDepthOfField": true,
        "shadows": true,
        "textures": true,
        "twoSidedLighting": true,
        "ssaoEnable": true,
        "lineAAEnable": true,
        "multiSample": 8,
        "ssaoEnable": false,
        "ssaoAmount": 1,
        "ssaoRadius": 16,
        "ssaoFilterRadius": 16,
        "ssaoSamples": 16,
        "fogging": false,
        "hwFogFalloff": "0",
        "hwFogDensity": 0.0,
        "hwFogStart": 0,
        "hwFogEnd": 100,
        "hwFogAlpha": 0,
        "hwFogColorR": 1.0,
        "hwFogColorG": 1.0,
        "hwFogColorB": 1.0,
        "motionBlurEnable": false,
        "motionBlurSampleCount": 8,
        "motionBlurShutterOpenFraction": 0.2,
        "cameras": false,
        "clipGhosts": false,
        "controlVertices": false,
@@ -287,7 +287,11 @@
    "LoadClip": {
        "enabled": true,
        "_representations": [],
        "node_name_template": "{class_name}_{ext}"
        "node_name_template": "{class_name}_{ext}",
        "options_defaults": {
            "start_at_workfile": true,
            "add_retime": true
        }
    }
},
"workfile_builder": {
@ -8,9 +8,10 @@
|
|||
"default_variants": [
|
||||
"Main"
|
||||
],
|
||||
"description": "Publish workfile backup",
|
||||
"detailed_description": "",
|
||||
"allow_sequences": true,
|
||||
"description": "Backup of a working scene",
|
||||
"detailed_description": "Workfiles are full scenes from any application that are directly edited by artists. They represent a state of work on a task at a given point and are usually not directly referenced into other scenes.",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": false,
|
||||
"extensions": [
|
||||
".ma",
|
||||
".mb",
|
||||
|
|
@ -30,6 +31,216 @@
|
|||
".psb",
|
||||
".aep"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "model",
|
||||
"identifier": "",
|
||||
"label": "Model",
|
||||
"icon": "fa.cubes",
|
||||
"default_variants": [
|
||||
"Main",
|
||||
"Proxy",
|
||||
"Sculpt"
|
||||
],
|
||||
"description": "Clean models",
|
||||
"detailed_description": "Models should only contain geometry data, without any extras like cameras, locators or bones.\n\nKeep in mind that models published from tray publisher are not validated for correctness. ",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".ma",
|
||||
".mb",
|
||||
".obj",
|
||||
".abc",
|
||||
".fbx",
|
||||
".bgeo",
|
||||
".bgeogz",
|
||||
".bgeosc",
|
||||
".usd",
|
||||
".blend"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "pointcache",
|
||||
"identifier": "",
|
||||
"label": "Pointcache",
|
||||
"icon": "fa.gears",
|
||||
"default_variants": [
|
||||
"Main"
|
||||
],
|
||||
"description": "Geometry Caches",
|
||||
"detailed_description": "Alembic or bgeo cache of animated data",
|
||||
"allow_sequences": true,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".abc",
|
||||
".bgeo",
|
||||
".bgeogz",
|
||||
".bgeosc"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "plate",
|
||||
"identifier": "",
|
||||
"label": "Plate",
|
||||
"icon": "mdi.camera-image",
|
||||
"default_variants": [
|
||||
"Main",
|
||||
"BG",
|
||||
"Animatic",
|
||||
"Reference",
|
||||
"Offline"
|
||||
],
|
||||
"description": "Footage Plates",
|
||||
"detailed_description": "Any type of image seqeuence coming from outside of the studio. Usually camera footage, but could also be animatics used for reference.",
|
||||
"allow_sequences": true,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".exr",
|
||||
".png",
|
||||
".dpx",
|
||||
".jpg",
|
||||
".tiff",
|
||||
".tif",
|
||||
".mov",
|
||||
".mp4",
|
||||
".avi"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "render",
|
||||
"identifier": "",
|
||||
"label": "Render",
|
||||
"icon": "mdi.folder-multiple-image",
|
||||
"default_variants": [],
|
||||
"description": "Rendered images or video",
|
||||
"detailed_description": "Sequence or single file renders",
|
||||
"allow_sequences": true,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".exr",
|
||||
".png",
|
||||
".dpx",
|
||||
".jpg",
|
||||
".jpeg",
|
||||
".tiff",
|
||||
".tif",
|
||||
".mov",
|
||||
".mp4",
|
||||
".avi"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "camera",
|
||||
"identifier": "",
|
||||
"label": "Camera",
|
||||
"icon": "fa.video-camera",
|
||||
"default_variants": [],
|
||||
"description": "3d Camera",
|
||||
"detailed_description": "Ideally this should be only camera itself with baked animation, however, it can technically also include helper geometry.",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".abc",
|
||||
".ma",
|
||||
".hip",
|
||||
".blend",
|
||||
".fbx",
|
||||
".usd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "image",
|
||||
"identifier": "",
|
||||
"label": "Image",
|
||||
"icon": "fa.image",
|
||||
"default_variants": [
|
||||
"Reference",
|
||||
"Texture",
|
||||
"Concept",
|
||||
"Background"
|
||||
],
|
||||
"description": "Single image",
|
||||
"detailed_description": "Any image data can be published as image family. References, textures, concept art, matte paints. This is a fallback 2d family for everything that doesn't fit more specific family.",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".exr",
|
||||
".jpg",
|
||||
".jpeg",
|
||||
".dpx",
|
||||
".bmp",
|
||||
".tif",
|
||||
".tiff",
|
||||
".png",
|
||||
".psb",
|
||||
".psd"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "vdb",
|
||||
"identifier": "",
|
||||
"label": "VDB Volumes",
|
||||
"icon": "fa.cloud",
|
||||
"default_variants": [],
|
||||
"description": "Sparse volumetric data",
|
||||
"detailed_description": "Hierarchical data structure for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids",
|
||||
"allow_sequences": true,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": [
|
||||
".vdb"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "matchmove",
|
||||
"identifier": "",
|
||||
"label": "Matchmove",
|
||||
"icon": "fa.empire",
|
||||
"default_variants": [
|
||||
"Camera",
|
||||
"Object",
|
||||
"Mocap"
|
||||
],
|
||||
"description": "Matchmoving script",
|
||||
"detailed_description": "Script exported from matchmoving application to be later processed into a tracked camera with additional data",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": []
|
||||
},
|
||||
{
|
||||
"family": "rig",
|
||||
"identifier": "",
|
||||
"label": "Rig",
|
||||
"icon": "fa.wheelchair",
|
||||
"default_variants": [],
|
||||
"description": "CG rig file",
|
||||
"detailed_description": "CG rigged character or prop. Rig should be clean of any extra data and directly loadable into it's respective application\t",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": false,
|
||||
"extensions": [
|
||||
".ma",
|
||||
".blend",
|
||||
".hip",
|
||||
".hda"
|
||||
]
|
||||
},
|
||||
{
|
||||
"family": "simpleUnrealTexture",
|
||||
"identifier": "",
|
||||
"label": "Simple UE texture",
|
||||
"icon": "fa.image",
|
||||
"default_variants": [],
|
||||
"description": "Simple Unreal Engine texture",
|
||||
"detailed_description": "Texture files with Unreal Engine naming conventions",
|
||||
"allow_sequences": false,
|
||||
"allow_multiple_items": true,
|
||||
"extensions": []
|
||||
}
|
||||
]
|
||||
],
|
||||
"BatchMovieCreator": {
|
||||
"default_variants": ["Main"],
|
||||
"default_tasks": ["Compositing"],
|
||||
"extensions": [
|
||||
".mov"
|
||||
]
|
||||
}
|
||||
}
|
||||
|
|
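Each family definition above carries its own extension whitelist plus the allow_sequences / allow_multiple_items flags. As a quick orientation, a minimal sketch of how such a definition could validate files handed to a creator; filter_files is a hypothetical helper written for this note, not an actual ayon-core function:

import os

def filter_files(family_def, paths):
    """Keep only paths whose extension the family accepts.

    family_def is one entry of the creator list above; an empty
    "extensions" list is read as "accept anything".
    """
    allowed = {ext.lower() for ext in family_def["extensions"]}
    matching = [
        path for path in paths
        if not allowed or os.path.splitext(path)[1].lower() in allowed
    ]
    if not family_def["allow_multiple_items"] and len(matching) > 1:
        raise ValueError(
            "Family '{}' accepts only a single item".format(
                family_def["family"]))
    return matching

Under this reading, the "rig" definition above would reject a second .ma file, since its allow_multiple_items is false.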
@@ -169,6 +169,7 @@ class HostsEnumEntity(BaseEnumEntity):
"tvpaint",
"unreal",
"standalonepublisher",
"traypublisher",
"webpublisher"
]
@@ -410,7 +410,41 @@
{
"type": "boolean",
"key": "cycle_enabled",
"label": "Create daily review session"
"label": "Run automatically every day"
},
{
"type": "separator"
},
{
"type": "list-strict",
"key": "cycle_hour_start",
"label": "Create daily review session at",
"tooltip": "This may take effect on the next day",
"object_types": [
{
"label": "H:",
"type": "number",
"minimum": 0,
"maximum": 23,
"decimal": 0
}, {
"label": "M:",
"type": "number",
"minimum": 0,
"maximum": 59,
"decimal": 0
}, {
"label": "S:",
"type": "number",
"minimum": 0,
"maximum": 59,
"decimal": 0
}
]
},
{
"type": "label",
"label": "This can't be overridden per project and any change will take effect on the next day or on restart of the event server."
},
{
"type": "separator"
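The three numbers stored by cycle_hour_start are plain hour/minute/second values. A minimal sketch of how an event server loop could turn them into the next trigger time, which also explains why a change "may take effect on the next day"; next_review_session_trigger is illustrative and not code from this PR:

from datetime import datetime, timedelta

def next_review_session_trigger(cycle_hour_start, now=None):
    """Return the next datetime at which the daily review session
    should be created.

    cycle_hour_start is the [hours, minutes, seconds] list stored
    by the "list-strict" setting above.
    """
    hours, minutes, seconds = cycle_hour_start
    now = now or datetime.now()
    trigger = now.replace(
        hour=hours, minute=minutes, second=seconds, microsecond=0)
    if trigger <= now:
        # Today's trigger time already passed; schedule for tomorrow.
        trigger += timedelta(days=1)
    return trigger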
@@ -822,7 +856,7 @@
},
{
"type": "label",
"label": "Template may contain formatting keys <b>intent</b>, <b>comment</b>, <b>host_name</b>, <b>app_name</b>, <b>app_label</b> and <b>published_paths</b>."
"label": "Template may contain formatting keys <b>intent</b>, <b>comment</b>, <b>host_name</b>, <b>app_name</b>, <b>app_label</b>, <b>published_paths</b> and <b>source</b>."
},
{
"type": "text",
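The listed keys, including the newly added source, behave as ordinary str.format placeholders in the note template. A small example with made-up values; the real values are gathered during publish:

note_template = (
    "{intent}: {comment}\n"
    "Published from {app_label} ({host_name})\n"
    "Outputs: {published_paths}\n"
    "Source: {source}"
)
note_text = note_template.format(
    intent="For review",
    comment="Fixed flickering highlights",
    host_name="nuke",
    app_name="nuke/13-2",
    app_label="Nuke 13.2",
    published_paths="/projects/demo/sh010/review_v003.mov",
    source="/projects/demo/work/sh010/comp_v003.nk",
)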
@@ -206,6 +206,10 @@
{
"type": "schema",
"name": "schema_publish_gui_filter"
},
{
"type": "schema",
"name": "schema_scriptsmenu"
}
]
}
@@ -67,6 +67,11 @@
"label": "Allow sequences",
"type": "boolean"
},
{
"key": "allow_multiple_items",
"label": "Allow multiple items",
"type": "boolean"
},
{
"type": "list",
"key": "extensions",
@@ -78,6 +83,44 @@
}
]
}
},
{
"type": "dict",
"collapsible": true,
"key": "BatchMovieCreator",
"label": "Batch Movie Creator",
"collapsible_key": true,
"children": [
{
"type": "label",
"label": "Allows publishing multiple video files in one go. <br />Name of the matching asset is parsed from file names ('asset.mov', 'asset_v001.mov', 'my_asset_to_publish.mov')"
},
{
"type": "list",
"key": "default_variants",
"label": "Default variants",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "default_tasks",
"label": "Default tasks",
"object_type": {
"type": "text"
}
},
{
"type": "list",
"key": "extensions",
"label": "Extensions",
"use_label_wrap": true,
"collapsible_key": true,
"collapsed": false,
"object_type": "text"
}
]
}
]
}
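Per the label above, the target asset is parsed from the file name. One plausible matching strategy is a substring match against known asset names; the sketch below is a guess at that behavior (parse_asset_name is hypothetical, and the real creator's logic may differ):

import os

def parse_asset_name(filepath, asset_names):
    """Guess which existing asset a video file belongs to.

    Longest names are tried first so that e.g. "shot010_fx" wins
    over "shot010" when both are known assets.
    """
    file_stem = os.path.splitext(os.path.basename(filepath))[0].lower()
    for asset_name in sorted(asset_names, key=len, reverse=True):
        if asset_name.lower() in file_stem:
            return asset_name
    return None

# All three examples from the settings label resolve to "asset":
for name in ("asset.mov", "asset_v001.mov", "my_asset_to_publish.mov"):
    print(name, "->", parse_asset_name(name, ["asset", "shot010"]))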
@@ -202,12 +202,15 @@
"decimal": 0
},
{
"type": "number",
"key": "multiSample",
"label": "Anti Aliasing Samples",
"decimal": 0,
"minimum": 0,
"maximum": 32
"type": "splitter"
},
{
"type": "boolean",
"key": "renderDepthOfField",
"label": "Depth of Field"
},
{
"type": "splitter"
},
{
"type": "boolean",
@@ -224,11 +227,145 @@
"key": "twoSidedLighting",
"label": "Two Sided Lighting"
},
{
"type": "splitter"
},
{
"type": "boolean",
"key": "lineAAEnable",
"label": "Enable Anti-Aliasing"
},
{
"type": "number",
"key": "multiSample",
"label": "Anti Aliasing Samples",
"decimal": 0,
"minimum": 0,
"maximum": 32
},
{
"type": "splitter"
},
{
"type": "boolean",
"key": "ssaoEnable",
"label": "Screen Space Ambient Occlusion"
},
{
"type": "number",
"key": "ssaoAmount",
"label": "SSAO Amount"
},
{
"type": "number",
"key": "ssaoRadius",
"label": "SSAO Radius"
},
{
"type": "number",
"key": "ssaoFilterRadius",
"label": "SSAO Filter Radius",
"decimal": 0,
"minimum": 1,
"maximum": 32
},
{
"type": "number",
"key": "ssaoSamples",
"label": "SSAO Samples",
"decimal": 0,
"minimum": 8,
"maximum": 32
},
{
"type": "splitter"
},
{
"type": "boolean",
"key": "fogging",
"label": "Enable Hardware Fog"
},
{
"type": "enum",
"key": "hwFogFalloff",
"label": "Hardware Falloff",
"enum_items": [
{ "0": "Linear"},
{ "1": "Exponential"},
{ "2": "Exponential Squared"}
]
},
{
"type": "number",
"key": "hwFogDensity",
"label": "Fog Density",
"decimal": 2,
"minimum": 0,
"maximum": 1
},
{
"type": "number",
"key": "hwFogStart",
"label": "Fog Start"
},
{
"type": "number",
"key": "hwFogEnd",
"label": "Fog End"
},
{
"type": "number",
"key": "hwFogAlpha",
"label": "Fog Alpha"
},
{
"type": "number",
"key": "hwFogColorR",
"label": "Fog Color R",
"decimal": 2,
"minimum": 0,
"maximum": 1
},
{
"type": "number",
"key": "hwFogColorG",
"label": "Fog Color G",
"decimal": 2,
"minimum": 0,
"maximum": 1
},
{
"type": "number",
"key": "hwFogColorB",
"label": "Fog Color B",
"decimal": 2,
"minimum": 0,
"maximum": 1
},
{
"type": "splitter"
},
{
"type": "boolean",
"key": "motionBlurEnable",
"label": "Enable Motion Blur"
},
{
"type": "number",
"key": "motionBlurSampleCount",
"label": "Motion Blur Sample Count",
"decimal": 0,
"minimum": 8,
"maximum": 32
},
{
"type": "number",
"key": "motionBlurShutterOpenFraction",
"label": "Shutter Open Fraction",
"decimal": 3,
"minimum": 0.01,
"maximum": 32
},
{
"type": "splitter"
},
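Most of these keys mirror attribute names on Maya's hardwareRenderingGlobals node (the Viewport 2.0 settings), so applying them can be sketched as plain setAttr calls. This is an outline under the assumption that the settings arrive as a flat dict and that keys map 1:1 to attributes, which does not hold for every key here ("fogging" is a modelEditor display toggle, and "multiSample" corresponds to the multiSampleCount attribute), so the real code needs a remapping step:

from maya import cmds

# Hypothetical settings dict as it might come from the schema above.
viewport_settings = {
    "ssaoEnable": True,
    "ssaoSamples": 16,
    "motionBlurEnable": True,
    "hwFogDensity": 0.35,
}

for key, value in viewport_settings.items():
    # hardwareRenderingGlobals holds Maya's Viewport 2.0 options;
    # each key above exists on it as an attribute of the same name.
    cmds.setAttr("hardwareRenderingGlobals.{}".format(key), value)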
@@ -62,13 +62,36 @@
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"type": "dict",
"collapsible": true,
"key": "ValidateFrameRange",
"label": "Validate Frame Range",
"checkbox_key": "enabled",
"children": [
{
"key": "ValidateFrameRange",
"label": "Validate Frame Range"
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "optional",
"label": "Optional"
},
{
"type": "boolean",
"key": "active",
"label": "Active"
},
{
"type": "splitter"
},
{
"key": "exclude_families",
"label": "Families",
"type": "list",
"object_type": "text"
}
]
},
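The enabled, optional and active keys are the usual publish-plugin toggles, and settings like these are typically applied as class-attribute overrides on the plugin before publishing starts. An illustrative pyblish skeleton, assuming a validator of this shape; this is not the actual plugin body from the repository:

import pyblish.api

class ValidateFrameRange(pyblish.api.InstancePlugin):
    """Illustrative validator skeleton for the settings above."""

    order = pyblish.api.ValidatorOrder
    label = "Validate Frame Range"
    # Defaults that studio/project settings can override.
    enabled = True
    optional = True
    active = True
    exclude_families = []

    def process(self, instance):
        if instance.data["family"] in self.exclude_families:
            return
        # ... compare instance frame range against the asset's range ...

# Applying the settings is conceptually just attribute assignment:
settings = {"enabled": True, "optional": False,
            "exclude_families": ["review"]}
for key, value in settings.items():
    setattr(ValidateFrameRange, key, value)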
@@ -11,10 +11,52 @@
{
"key": "LoadImage",
"label": "Image Loader"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "LoadClip",
"label": "Clip Loader",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"key": "LoadClip",
"label": "Clip Loader"
"type": "list",
"key": "_representations",
"label": "Representations",
"object_type": "text"
},
{
"type": "text",
"key": "node_name_template",
"label": "Node name template"
},
{
"type": "splitter"
},
{
"type": "dict",
"collapsible": false,
"key": "options_defaults",
"label": "Loader option defaults",
"children": [
{
"type": "boolean",
"key": "start_at_workfile",
"label": "Start at workfile beginning"
},
{
"type": "boolean",
"key": "add_retime",
"label": "Add retime"
}
]
}
]
}
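A small sketch of what "Loader option defaults" amounts to: the configured defaults seed the clip loader's options, and anything the artist picks in the load dialog wins. get_load_options is a hypothetical helper written for this note, not an ayon-core API:

def get_load_options(settings, user_options=None):
    """Merge user-picked loader options over the configured defaults.

    settings is the "LoadClip" settings dict shaped by the schema
    above; the merge itself is generic dict layering.
    """
    options = dict(settings["options_defaults"])
    options.update(user_options or {})
    return options

clip_settings = {
    "options_defaults": {"start_at_workfile": True, "add_retime": False}
}
# The artist only toggled retime; start_at_workfile keeps its default.
print(get_load_options(clip_settings, {"add_retime": True}))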