Merge pull request #6 from pypeclub/bugfix/rever_poetry_lock_changes
Update and revert poetry lock changes
CHANGELOG.md
@@ -1,15 +1,58 @@
# Changelog

-## [3.11.0-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)

-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.10.0...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.0...3.11.1)

**🆕 New features**

- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)

**🚀 Enhancements**

- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)
- Enhancement: More control over thumbnail processing. [\#3259](https://github.com/pypeclub/OpenPype/pull/3259)

**🐛 Bug fixes**

- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
- Deadline: fixing misidentification of reviewables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)

**🔀 Refactored code**

- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)

## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)

### 📖 Documentation

- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299)
- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)

**🚀 Enhancements**

- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
- updated poetry installation source [\#3316](https://github.com/pypeclub/OpenPype/pull/3316)
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284)
- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267)
- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)

@@ -18,6 +61,15 @@

**🐛 Bug fixes**

- General: Handle empty source key on instance [\#3342](https://github.com/pypeclub/OpenPype/pull/3342)
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
- Maya: Fix swapped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300)
- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297)
- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279)

@@ -28,37 +80,26 @@

- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)
- Unreal: Fixed Camera loading in UE5 [\#3238](https://github.com/pypeclub/OpenPype/pull/3238)
- Flame: debugging [\#3224](https://github.com/pypeclub/OpenPype/pull/3224)
- add silent audio to slate [\#3162](https://github.com/pypeclub/OpenPype/pull/3162)

**🔀 Refactored code**

- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)

**Merged pull requests:**

- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
- Deadline: publishing of animation and pointcache on a farm [\#3225](https://github.com/pypeclub/OpenPype/pull/3225)
- Nuke: add pointcache and animation to loader [\#3186](https://github.com/pypeclub/OpenPype/pull/3186)
- Add a gizmo menu to nuke [\#3172](https://github.com/pypeclub/OpenPype/pull/3172)
- Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)
- Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274)
- Harmony: 21.1 fix [\#3248](https://github.com/pypeclub/OpenPype/pull/3248)

## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)

**🆕 New features**

- General: OpenPype modules publish plugins are registered in host [\#3180](https://github.com/pypeclub/OpenPype/pull/3180)
- General: Creator plugins from addons can be registered [\#3179](https://github.com/pypeclub/OpenPype/pull/3179)

**🚀 Enhancements**

- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
- Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
- Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
- Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
- Looks: add basic support for Renderman [\#3190](https://github.com/pypeclub/OpenPype/pull/3190)
- Maya: added clean\_import option to Import loader [\#3181](https://github.com/pypeclub/OpenPype/pull/3181)
- Add the scripts menu definition to nuke [\#3168](https://github.com/pypeclub/OpenPype/pull/3168)
- Maya: add maya 2023 to default applications [\#3167](https://github.com/pypeclub/OpenPype/pull/3167)

**🐛 Bug fixes**

@@ -67,56 +108,16 @@

- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
- Hiero: debugging frame range and other 3.10 [\#3222](https://github.com/pypeclub/OpenPype/pull/3222)
- Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
- Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
- Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
- Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)
- Photoshop: skip collector when automatic testing [\#3202](https://github.com/pypeclub/OpenPype/pull/3202)
- Nuke: render/workfile version sync doesn't work on farm [\#3185](https://github.com/pypeclub/OpenPype/pull/3185)
- Ftrack: Review image only if there are no mp4 reviews [\#3183](https://github.com/pypeclub/OpenPype/pull/3183)
- Ftrack: Locations deepcopy issue [\#3177](https://github.com/pypeclub/OpenPype/pull/3177)
- General: Avoid creating multiple thumbnails [\#3176](https://github.com/pypeclub/OpenPype/pull/3176)
- General/Hiero: better clip duration calculation [\#3169](https://github.com/pypeclub/OpenPype/pull/3169)
- General: Oiio conversion for ffmpeg checks for invalid characters [\#3166](https://github.com/pypeclub/OpenPype/pull/3166)
- Fix for attaching render to subset [\#3164](https://github.com/pypeclub/OpenPype/pull/3164)
- Harmony: fixed missing task name in render instance [\#3163](https://github.com/pypeclub/OpenPype/pull/3163)

**🔀 Refactored code**

- Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)

**Merged pull requests:**

- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
- Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
- Webpublisher: replace space by underscore in subset names [\#3160](https://github.com/pypeclub/OpenPype/pull/3160)

## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)

**🚀 Enhancements**

- nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
- Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
- Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)

**🐛 Bug fixes**

- Standalone Publisher: Always create new representation for thumbnail [\#3204](https://github.com/pypeclub/OpenPype/pull/3204)
- Nuke: render/workfile version sync doesn't work on farm [\#3184](https://github.com/pypeclub/OpenPype/pull/3184)
- Ftrack: Review image only if there are no mp4 reviews [\#3182](https://github.com/pypeclub/OpenPype/pull/3182)
- Ftrack: Locations deepcopy issue [\#3175](https://github.com/pypeclub/OpenPype/pull/3175)
- General: Avoid creating multiple thumbnails [\#3174](https://github.com/pypeclub/OpenPype/pull/3174)
- General: TemplateResult can be copied [\#3170](https://github.com/pypeclub/OpenPype/pull/3170)

**Merged pull requests:**

- hiero: otio p3 compatibility issue - metadata on effect use update [\#3194](https://github.com/pypeclub/OpenPype/pull/3194)

## [3.9.7](https://github.com/pypeclub/OpenPype/tree/3.9.7) (2022-05-11)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)
@@ -44,7 +44,6 @@ from . import resources
 from .plugin import (
     Extractor,
     Integrator,

     ValidatePipelineOrder,
     ValidateContentsOrder,

@@ -87,7 +86,6 @@ __all__ = [
     # plugin classes
     "Extractor",
     "Integrator",
     # ordering
     "ValidatePipelineOrder",
     "ValidateContentsOrder",
openpype/client/__init__.py (new file, +71 lines)

@@ -0,0 +1,71 @@
+from .entities import (
+    get_projects,
+    get_project,
+
+    get_asset_by_id,
+    get_asset_by_name,
+    get_assets,
+    get_asset_ids_with_subsets,
+
+    get_subset_by_id,
+    get_subset_by_name,
+    get_subsets,
+    get_subset_families,
+
+    get_version_by_id,
+    get_version_by_name,
+    get_versions,
+    get_hero_version_by_id,
+    get_hero_version_by_subset_id,
+    get_hero_versions,
+    get_last_versions,
+    get_last_version_by_subset_id,
+    get_last_version_by_subset_name,
+    get_output_link_versions,
+
+    get_representation_by_id,
+    get_representation_by_name,
+    get_representations,
+    get_representation_parents,
+    get_representations_parents,
+
+    get_thumbnail,
+    get_thumbnails,
+    get_thumbnail_id_from_source,
+)
+
+__all__ = (
+    "get_projects",
+    "get_project",
+
+    "get_asset_by_id",
+    "get_asset_by_name",
+    "get_assets",
+    "get_asset_ids_with_subsets",
+
+    "get_subset_by_id",
+    "get_subset_by_name",
+    "get_subsets",
+    "get_subset_families",
+
+    "get_version_by_id",
+    "get_version_by_name",
+    "get_versions",
+    "get_hero_version_by_id",
+    "get_hero_version_by_subset_id",
+    "get_hero_versions",
+    "get_last_versions",
+    "get_last_version_by_subset_id",
+    "get_last_version_by_subset_name",
+    "get_output_link_versions",
+
+    "get_representation_by_id",
+    "get_representation_by_name",
+    "get_representations",
+    "get_representation_parents",
+    "get_representations_parents",
+
+    "get_thumbnail",
+    "get_thumbnails",
+    "get_thumbnail_id_from_source",
+)
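The new `openpype.client` package gives plugins a single place to query project documents instead of ad-hoc `legacy_io.find_one` calls. A minimal usage sketch, modelled on the call sites this PR converts below (project name and version id are hypothetical placeholders):

```python
from openpype.client import get_asset_by_name, get_representation_by_name

project_name = "demo_project"        # hypothetical project name
asset_doc = get_asset_by_name(project_name, "sh010")  # returns None if absent

# Representation lookup by name under a version id, with a Mongo-style
# field projection; this mirrors the converted plugin call sites below.
version_id = "62af0c1f9f3a4d2b8c0e1a2b"  # hypothetical version ObjectId string
repre = get_representation_by_name(
    project_name, "blend", version_id, fields=["_id"]
)
if repre:
    print(repre["_id"])
```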
openpype/client/entities.py (new file, +1666 lines)

@@ -21,7 +21,7 @@ class AERenderInstance(RenderInstance):
     projectEntity = attr.ib(default=None)
     stagingDir = attr.ib(default=None)
     app_version = attr.ib(default=None)
-    publish_attributes = attr.ib(default=None)
+    publish_attributes = attr.ib(default={})
     file_name = attr.ib(default=None)

@@ -90,7 +90,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
         subset_name = inst.data["subset"]
         instance = AERenderInstance(
-            family=family,
+            family="render",
             families=inst.data.get("families", []),
             version=version,
             time="",

@@ -116,7 +116,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
             toBeRenderedOn='deadline',
             fps=fps,
             app_version=app_version,
-            publish_attributes=inst.data.get("publish_attributes"),
+            publish_attributes=inst.data.get("publish_attributes", {}),
             file_name=render_q.file_name
         )

@@ -54,7 +54,7 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin,
     order = pyblish.api.ValidatorOrder
     label = "Validate Scene Settings"
-    families = ["render.farm", "render"]
+    families = ["render.farm", "render.local", "render"]
     hosts = ["aftereffects"]
     optional = True

@@ -10,6 +10,7 @@ from . import ops
 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.pipeline import (
     schema,
     legacy_io,

@@ -83,11 +84,9 @@ def uninstall():

 def set_start_end_frames():
+    project_name = legacy_io.active_project()
     asset_name = legacy_io.Session["AVALON_ASSET"]
-    asset_doc = legacy_io.find_one({
-        "type": "asset",
-        "name": asset_name
-    })
+    asset_doc = get_asset_by_name(project_name, asset_name)

     scene = bpy.context.scene
@@ -1,13 +1,11 @@
 import os
 import json

-from bson.objectid import ObjectId
-
 import bpy
 import bpy_extras
 import bpy_extras.anim_utils

-from openpype.pipeline import legacy_io
+from openpype.client import get_representation_by_name
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 import openpype.api

@@ -131,43 +129,32 @@ class ExtractLayout(openpype.api.Extractor):
         fbx_count = 0

+        project_name = instance.context.data["projectEntity"]["name"]
         for asset in asset_group.children:
             metadata = asset.get(AVALON_PROPERTY)

-            parent = metadata["parent"]
+            version_id = metadata["parent"]
             family = metadata["family"]

-            self.log.debug("Parent: {}".format(parent))
+            self.log.debug("Parent: {}".format(version_id))
             # Get blend reference
-            blend = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "blend"
-                },
-                projection={"_id": True})
+            blend = get_representation_by_name(
+                project_name, "blend", version_id, fields=["_id"]
+            )
             blend_id = None
             if blend:
                 blend_id = blend["_id"]
             # Get fbx reference
-            fbx = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "fbx"
-                },
-                projection={"_id": True})
+            fbx = get_representation_by_name(
+                project_name, "fbx", version_id, fields=["_id"]
+            )
             fbx_id = None
             if fbx:
                 fbx_id = fbx["_id"]
             # Get abc reference
-            abc = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "abc"
-                },
-                projection={"_id": True})
+            abc = get_representation_by_name(
+                project_name, "abc", version_id, fields=["_id"]
+            )
             abc_id = None
             if abc:
                 abc_id = abc["_id"]
@@ -1,5 +1,6 @@
 import os
 import re
+import tempfile
 from pprint import pformat
 from copy import deepcopy

@@ -420,3 +421,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
                 "Path `{}` is containing more that one clip".format(path)
             )
         return clips[0]
+
+    def staging_dir(self, instance):
+        """Provide a temporary directory in which to store extracted files
+
+        Upon calling this method the staging directory is stored inside
+        the instance.data['stagingDir']
+        """
+        staging_dir = instance.data.get('stagingDir', None)
+        openpype_temp_dir = os.getenv("OPENPYPE_TEMP_DIR")
+
+        if not staging_dir:
+            if openpype_temp_dir and os.path.exists(openpype_temp_dir):
+                staging_dir = os.path.normpath(
+                    tempfile.mkdtemp(
+                        prefix="pyblish_tmp_",
+                        dir=openpype_temp_dir
+                    )
+                )
+            else:
+                staging_dir = os.path.normpath(
+                    tempfile.mkdtemp(prefix="pyblish_tmp_")
+                )
+            instance.data['stagingDir'] = staging_dir
+
+        instance.context.data["cleanupFullPaths"].append(staging_dir)
+
+        return staging_dir
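This `staging_dir` override (the "Flame: custom export temp folder" feature, #3346) honours an `OPENPYPE_TEMP_DIR` environment variable, so a studio can push Flame's extraction temp folders onto a volume of its choosing; anything created here is also queued in `cleanupFullPaths` for post-publish cleanup. A small sketch of the intended configuration (the path is hypothetical):

```python
import os
import tempfile

# Hypothetical: route staging dirs onto a shared scratch volume.
os.environ["OPENPYPE_TEMP_DIR"] = "/mnt/scratch/openpype_tmp"

# The method above would then create directories equivalent to:
staging_dir = tempfile.mkdtemp(
    prefix="pyblish_tmp_", dir=os.environ["OPENPYPE_TEMP_DIR"]
)
print(staging_dir)  # e.g. /mnt/scratch/openpype_tmp/pyblish_tmp_ab12cd
```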
@@ -47,6 +47,6 @@ class ValidateAudio(pyblish.api.InstancePlugin):
         formatting_data = {
             "audio_url": audio_path
         }
-        if os.path.isfile(audio_path):
+        if not os.path.isfile(audio_path):
             raise PublishXmlValidationError(self, msg,
                                             formatting_data=formatting_data)

@@ -132,7 +132,7 @@ def create_time_effects(otio_clip, track_item):
     otio_effect = otio.schema.TimeEffect()
     otio_effect.name = name
     otio_effect.effect_name = effect_name
-    otio_effect.metadata = metadata
+    otio_effect.metadata.update(metadata)

     # add otio effect to clip effects
     otio_clip.effects.append(otio_effect)
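The `metadata` change is the Python-3/OTIO compatibility issue referenced in #3314: on newer OpenTimelineIO builds, `metadata` behaves as a managed property that cannot simply be rebound, while updating it in place works on both old and new versions. A minimal sketch (exact behaviour depends on the installed OTIO version):

```python
import opentimelineio as otio

effect = otio.schema.TimeEffect()
# effect.metadata = {"speed": 2.0}  # may raise AttributeError on newer
#                                   # OTIO builds with a managed property
effect.metadata.update({"speed": 2.0})  # in-place update works everywhere
print(dict(effect.metadata))
```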
@@ -130,6 +130,8 @@ def get_output_parameter(node):
     elif node_type == "arnold":
         if node.evalParm("ar_ass_export_enable"):
             return node.parm("ar_ass_file")
+    elif node_type == "Redshift_Proxy_Output":
+        return node.parm("RS_archive_file")

     raise TypeError("Node type '%s' not supported" % node_type)

@@ -0,0 +1,48 @@
+from openpype.hosts.houdini.api import plugin
+
+
+class CreateRedshiftProxy(plugin.Creator):
+    """Redshift Proxy"""
+
+    label = "Redshift Proxy"
+    family = "redshiftproxy"
+    icon = "magic"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+
+        # Remove the active, we are checking the bypass flag of the nodes
+        self.data.pop("active", None)
+
+        # Redshift provides a `Redshift_Proxy_Output` node type which shows
+        # a limited set of parameters by default and is set to extract a
+        # Redshift Proxy. However when "imprinting" extra parameters needed
+        # for OpenPype it starts showing all its parameters again. It's
+        # unclear why this happens.
+        # TODO: Somehow enforce so that it only shows the original limited
+        # attributes of the Redshift_Proxy_Output node type
+        self.data.update({"node_type": "Redshift_Proxy_Output"})
+
+    def _process(self, instance):
+        """Creator main entry point.
+
+        Args:
+            instance (hou.Node): Created Houdini instance.
+
+        """
+        parms = {
+            "RS_archive_file": '$HIP/pyblish/`chs("subset")`.$F4.rs',
+        }
+
+        if self.nodes:
+            node = self.nodes[0]
+            path = node.path()
+            parms["RS_archive_sopPath"] = path
+
+        instance.setParms(parms)
+
+        # Lock some Avalon attributes
+        to_lock = ["family", "id"]
+        for name in to_lock:
+            parm = instance.parm(name)
+            parm.lock(True)
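The `RS_archive_file` value relies on Houdini's expression syntax: `$HIP` expands to the scene folder, the backtick expression `chs("subset")` pulls the instance's own `subset` parameter, and `$F4` is the frame number zero-padded to four digits. A hypothetical evaluation from inside a Houdini session (assumes the `hou` module and an already-created instance node):

```python
import hou  # only available inside Houdini

node = hou.node("/out/redshiftproxyMain")  # hypothetical instance node
print(node.evalParm("RS_archive_file"))
# e.g. /projects/shot010/pyblish/redshiftproxyMain.0001.rs
```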
@@ -102,7 +102,7 @@ class VdbLoader(load.LoaderPlugin):
         file_path = get_representation_path(representation)
         file_path = self.format_path(file_path)

-        file_node.setParms({"fileName": file_path})
+        file_node.setParms({"file": file_path})

         # Update attribute
         node.setParms({"representation": str(representation["_id"])})

@@ -20,7 +20,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
     order = pyblish.api.CollectorOrder
     label = "Collect Frames"
-    families = ["vdbcache", "imagesequence", "ass"]
+    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]

     def process(self, instance):
@@ -12,6 +12,7 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
         "imagesequence",
         "usd",
         "usdrender",
+        "redshiftproxy"
     ]

     hosts = ["houdini"]

@@ -54,6 +55,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
             else:
                 out_node = node.parm("loppath").evalAsNode()

+        elif node_type == "Redshift_Proxy_Output":
+            out_node = node.parm("RS_archive_sopPath").evalAsNode()
         else:
             raise ValueError(
                 "ROP node type '%s' is" " not supported." % node_type
@@ -0,0 +1,48 @@
+import os
+
+import pyblish.api
+import openpype.api
+from openpype.hosts.houdini.api.lib import render_rop
+
+
+class ExtractRedshiftProxy(openpype.api.Extractor):
+
+    order = pyblish.api.ExtractorOrder + 0.1
+    label = "Extract Redshift Proxy"
+    families = ["redshiftproxy"]
+    hosts = ["houdini"]
+
+    def process(self, instance):
+
+        ropnode = instance[0]
+
+        # Get the filename from the filename parameter
+        # `.evalParm(parameter)` will make sure all tokens are resolved
+        output = ropnode.evalParm("RS_archive_file")
+        staging_dir = os.path.normpath(os.path.dirname(output))
+        instance.data["stagingDir"] = staging_dir
+        file_name = os.path.basename(output)
+
+        self.log.info("Writing Redshift Proxy '%s' to '%s'" % (file_name,
+                                                               staging_dir))
+
+        render_rop(ropnode)
+
+        output = instance.data["frames"]
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            "name": "rs",
+            "ext": "rs",
+            "files": output,
+            "stagingDir": staging_dir,
+        }
+
+        # A single frame may also be rendered without start/end frame.
+        if "frameStart" in instance.data and "frameEnd" in instance.data:
+            representation["frameStart"] = instance.data["frameStart"]
+            representation["frameEnd"] = instance.data["frameEnd"]
+
+        instance.data["representations"].append(representation)
@@ -22,7 +22,8 @@ class CreateYetiCache(plugin.Creator):
         # Add animation data without step and handles
         anim_data = lib.collect_animation_data()
         anim_data.pop("step")
-        anim_data.pop("handles")
+        anim_data.pop("handleStart")
+        anim_data.pop("handleEnd")
         self.data.update(anim_data)

         # Add samples

@@ -10,7 +10,7 @@ from openpype.api import get_project_settings
 class GpuCacheLoader(load.LoaderPlugin):
     """Load Alembic as gpuCache"""

-    families = ["model"]
+    families = ["model", "animation", "pointcache"]
     representations = ["abc"]

     label = "Import Gpu Cache"
@@ -1,15 +1,13 @@
 import os
 import json
 import re
-import glob
 from collections import defaultdict
-from pprint import pprint

+import clique
 from maya import cmds

 from openpype.api import get_project_settings
 from openpype.pipeline import (
-    legacy_io,
     load,
     get_representation_path
 )

@@ -17,7 +15,15 @@ from openpype.hosts.maya.api import lib
 from openpype.hosts.maya.api.pipeline import containerise


+def set_attribute(node, attr, value):
+    """Wrapper of set attribute which ignores None values"""
+    if value is None:
+        return
+    lib.set_attribute(node, attr, value)
+
+
 class YetiCacheLoader(load.LoaderPlugin):
     """Load Yeti Cache with one or more Yeti nodes"""

     families = ["yeticache", "yetiRig"]
     representations = ["fur"]

@@ -28,6 +34,16 @@ class YetiCacheLoader(load.LoaderPlugin):
     color = "orange"

     def load(self, context, name=None, namespace=None, data=None):
+        """Loads a .fursettings file defining how to load .fur sequences
+
+        A single yeticache or yetiRig can have more than a single pgYetiMaya
+        nodes and thus load more than a single yeti.fur sequence.
+
+        The .fursettings file defines what the node names should be and also
+        what "cbId" attribute they should receive to match the original source
+        and allow published looks to also work for Yeti rigs and its caches.
+
+        """
+
         try:
             family = context["representation"]["context"]["family"]

@@ -43,22 +59,11 @@ class YetiCacheLoader(load.LoaderPlugin):
         if not cmds.pluginInfo("pgYetiMaya", query=True, loaded=True):
             cmds.loadPlugin("pgYetiMaya", quiet=True)

-        # Get JSON
-        fbase = re.search(r'^(.+)\.(\d+|#+)\.fur', self.fname)
-        if not fbase:
-            raise RuntimeError('Cannot determine fursettings file path')
-        settings_fname = "{}.fursettings".format(fbase.group(1))
-        with open(settings_fname, "r") as fp:
-            fursettings = json.load(fp)
-
-        # Check if resources map exists
-        # Get node name from JSON
-        if "nodes" not in fursettings:
-            raise RuntimeError("Encountered invalid data, expect 'nodes' in "
-                               "fursettings.")
-
-        node_data = fursettings["nodes"]
-        nodes = self.create_nodes(namespace, node_data)
+        # Create Yeti cache nodes according to settings
+        settings = self.read_settings(self.fname)
+        nodes = []
+        for node in settings["nodes"]:
+            nodes.extend(self.create_node(namespace, node))

         group_name = "{}:{}".format(namespace, name)
         group_node = cmds.group(nodes, name=group_name)

@@ -111,28 +116,14 @@ class YetiCacheLoader(load.LoaderPlugin):
     def update(self, container, representation):

-        legacy_io.install()
         namespace = container["namespace"]
         container_node = container["objectName"]

-        fur_settings = legacy_io.find_one(
-            {"parent": representation["parent"], "name": "fursettings"}
-        )
-
-        pprint({"parent": representation["parent"], "name": "fursettings"})
-        pprint(fur_settings)
-        assert fur_settings is not None, (
-            "cannot find fursettings representation"
-        )
-
-        settings_fname = get_representation_path(fur_settings)
         path = get_representation_path(representation)
+
         # Get all node data
-        with open(settings_fname, "r") as fp:
-            settings = json.load(fp)
+        settings = self.read_settings(path)

         # Collect scene information of asset
-        set_members = cmds.sets(container["objectName"], query=True)
+        set_members = lib.get_container_members(container)
         container_root = lib.get_container_transforms(container,
                                                       members=set_members,
                                                       root=True)

@@ -147,7 +138,7 @@ class YetiCacheLoader(load.LoaderPlugin):
         # Re-assemble metadata with cbId as keys
         meta_data_lookup = {n["cbId"]: n for n in settings["nodes"]}

-        # Compare look ups and get the nodes which ar not relevant any more
+        # Delete nodes by "cbId" that are not in the updated version
        to_delete_lookup = {cb_id for cb_id in scene_lookup.keys() if
                            cb_id not in meta_data_lookup}
        if to_delete_lookup:

@@ -163,25 +154,18 @@ class YetiCacheLoader(load.LoaderPlugin):
                                           fullPath=True) or []
                 to_remove.extend(shapes + transforms)

-                # Remove id from look uop
+                # Remove id from lookup
                 scene_lookup.pop(_id, None)

             cmds.delete(to_remove)

-        # replace frame in filename with %04d
-        RE_frame = re.compile(r"(\d+)(\.fur)$")
-        file_name = re.sub(RE_frame, r"%04d\g<2>", os.path.basename(path))
-        for cb_id, data in meta_data_lookup.items():
-
-            # Update cache file name
-            data["attrs"]["cacheFileName"] = os.path.join(
-                os.path.dirname(path), file_name)
+        for cb_id, node_settings in meta_data_lookup.items():

             if cb_id not in scene_lookup:
-
                 # Create new nodes
                 self.log.info("Creating new nodes ..")

-                new_nodes = self.create_nodes(namespace, [data])
+                new_nodes = self.create_node(namespace, node_settings)
                 cmds.sets(new_nodes, addElement=container_node)
                 cmds.parent(new_nodes, container_root)

@@ -218,14 +202,8 @@ class YetiCacheLoader(load.LoaderPlugin):
                                       children=True)
         yeti_node = yeti_nodes[0]

-        for attr, value in data["attrs"].items():
-            # handle empty attribute strings. Those are reported
-            # as None, so their type is NoneType and this is not
-            # supported on attributes in Maya. We change it to
-            # empty string.
-            if value is None:
-                value = ""
-            lib.set_attribute(attr, value, yeti_node)
+        for attr, value in node_settings["attrs"].items():
+            set_attribute(attr, value, yeti_node)

         cmds.setAttr("{}.representation".format(container_node),
                      str(representation["_id"]),

@@ -235,7 +213,6 @@ class YetiCacheLoader(load.LoaderPlugin):
         self.update(container, representation)

-    # helper functions
     def create_namespace(self, asset):
         """Create a unique namespace
         Args:

@@ -253,100 +230,122 @@ class YetiCacheLoader(load.LoaderPlugin):
         return namespace

-    def validate_cache(self, filename, pattern="%04d"):
-        """Check if the cache has more than 1 frame
+    def get_cache_node_filepath(self, root, node_name):
+        """Get the cache file path for one of the yeti nodes.

-        All caches with more than 1 frame need to be called with `%04d`
-        If the cache has only one frame we return that file name as we assume
+        All caches with more than 1 frame need cache file name set with `%04d`
+        If the cache has only one frame we return the file name as we assume
         it is a snapshot.

+        This expects the files to be named after the "node name" through
+        exports with <Name> in Yeti.
+
         Args:
-            filename(str)
-            pattern(str)
+            root(str): Folder containing cache files to search in.
+            node_name(str): Node name to search cache files for

         Returns:
-            str
+            str: Cache file path value needed for cacheFileName attribute

         """

-        glob_pattern = filename.replace(pattern, "*")
+        name = node_name.replace(":", "_")
+        pattern = r"^({name})(\.[0-4]+)?(\.fur)$".format(name=re.escape(name))

-        escaped = re.escape(filename)
-        re_pattern = escaped.replace(pattern, "-?[0-9]+")
-
-        files = glob.glob(glob_pattern)
-        files = [str(f) for f in files if re.match(re_pattern, f)]
+        files = [fname for fname in os.listdir(root) if re.match(pattern,
+                                                                 fname)]
+        if not files:
+            self.log.error("Could not find cache files for '{}' "
+                           "with pattern {}".format(node_name, pattern))
+            return

         if len(files) == 1:
-            return files[0]
-        elif len(files) == 0:
-            self.log.error("Could not find cache files for '%s'" % filename)
+            # Single file
+            return os.path.join(root, files[0])

-        return filename
+        # Get filename for the sequence with padding
+        collections, remainder = clique.assemble(files)
+        assert not remainder, "This is a bug"
+        assert len(collections) == 1, "This is a bug"
+        collection = collections[0]

+        # Formats name as {head}%d{tail} like cache.%04d.fur
+        fname = collection.format("{head}{padding}{tail}")
+        return os.path.join(root, fname)
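`clique` does the padded-sequence bookkeeping here: `assemble` groups the matched file names into a collection, and the `{padding}` token renders as the printf-style pattern Yeti expects for `cacheFileName`. A quick illustration with made-up file names:

```python
import clique

files = ["cache.0001.fur", "cache.0002.fur", "cache.0003.fur"]
collections, remainder = clique.assemble(files)
assert not remainder and len(collections) == 1

# The printf-style cache pattern Yeti expects for multi-frame caches:
print(collections[0].format("{head}{padding}{tail}"))  # cache.%04d.fur
```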
-    def create_nodes(self, namespace, settings):
+    def create_node(self, namespace, node_settings):
         """Create nodes with the correct namespace and settings

         Args:
             namespace(str): namespace
-            settings(list): list of dictionaries
+            node_settings(dict): Single "nodes" entry from .fursettings file.

         Returns:
-            list
+            list: Created nodes

         """

         nodes = []
-        for node_settings in settings:
-
-            # Create pgYetiMaya node
-            original_node = node_settings["name"]
-            node_name = "{}:{}".format(namespace, original_node)
-            yeti_node = cmds.createNode("pgYetiMaya", name=node_name)
+
+        # Get original names and ids
+        orig_transform_name = node_settings["transform"]["name"]
+        orig_shape_name = node_settings["name"]

-            # Create transform node
-            transform_node = node_name.rstrip("Shape")
+        # Add namespace
+        transform_name = "{}:{}".format(namespace, orig_transform_name)
+        shape_name = "{}:{}".format(namespace, orig_shape_name)

-            lib.set_id(transform_node, node_settings["transform"]["cbId"])
-            lib.set_id(yeti_node, node_settings["cbId"])
+        # Create pgYetiMaya node
+        transform_node = cmds.createNode("transform",
+                                         name=transform_name)
+        yeti_node = cmds.createNode("pgYetiMaya",
+                                    name=shape_name,
+                                    parent=transform_node)

-            nodes.extend([transform_node, yeti_node])
+        lib.set_id(transform_node, node_settings["transform"]["cbId"])
+        lib.set_id(yeti_node, node_settings["cbId"])

-            # Ensure the node has no namespace identifiers
-            attributes = node_settings["attrs"]
+        nodes.extend([transform_node, yeti_node])

-            # Check if cache file name is stored
+        # Update attributes with defaults
+        attributes = node_settings["attrs"]
+        attributes.update({
+            "viewportDensity": 0.1,
+            "verbosity": 2,
+            "fileMode": 1,

-            # get number of # in path and convert it to C prinf format
-            # like %04d expected by Yeti
-            fbase = re.search(r'^(.+)\.(\d+|#+)\.fur', self.fname)
-            if not fbase:
-                raise RuntimeError('Cannot determine file path')
-            padding = len(fbase.group(2))
-            if "cacheFileName" not in attributes:
-                cache = "{}.%0{}d.fur".format(fbase.group(1), padding)
+            # Fix render stats, like Yeti's own
+            # ../scripts/pgYetiNode.mel script
+            "visibleInReflections": True,
+            "visibleInRefractions": True
+        })

-            self.validate_cache(cache)
-            attributes["cacheFileName"] = cache
+        # Apply attributes to pgYetiMaya node
+        for attr, value in attributes.items():
+            set_attribute(attr, value, yeti_node)

-            # Update attributes with requirements
-            attributes.update({"viewportDensity": 0.1,
-                               "verbosity": 2,
-                               "fileMode": 1})
-
-            # Apply attributes to pgYetiMaya node
-            for attr, value in attributes.items():
-                if value is None:
-                    continue
-                lib.set_attribute(attr, value, yeti_node)
-
-            # Fix for : YETI-6
-            # Fixes the render stats (this is literally taken from Perigrene's
-            # ../scripts/pgYetiNode.mel script)
-            cmds.setAttr("{}.visibleInReflections".format(yeti_node), True)
-            cmds.setAttr("{}.visibleInRefractions".format(yeti_node), True)
-
-            # Connect to the time node
-            cmds.connectAttr("time1.outTime", "%s.currentTime" % yeti_node)
+        # Connect to the time node
+        cmds.connectAttr("time1.outTime", "%s.currentTime" % yeti_node)

         return nodes

+    def read_settings(self, path):
+        """Read .fursettings file and compute some additional attributes"""
+
+        with open(path, "r") as fp:
+            fur_settings = json.load(fp)
+
+        if "nodes" not in fur_settings:
+            raise RuntimeError("Encountered invalid data, "
+                               "expected 'nodes' in fursettings.")
+
+        # Compute the cache file name values we want to set for the nodes
+        root = os.path.dirname(path)
+        for node in fur_settings["nodes"]:
+            cache_filename = self.get_cache_node_filepath(
+                root=root, node_name=node["name"])
+
+            attrs = node.get("attrs", {})  # allow 'attrs' to not exist
+            attrs["cacheFileName"] = cache_filename
+            node["attrs"] = attrs
+
+        return fur_settings
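For orientation, a `.fursettings` document of the shape `read_settings` expects looks roughly like the sketch below; all values are hypothetical and only the keys this loader reads are shown (`cacheFileName` is computed and injected by `read_settings` itself):

```python
fur_settings = {
    "nodes": [
        {
            "name": "furShape",          # pgYetiMaya shape node name
            "cbId": "abc123:def456",     # id restored via lib.set_id
            "transform": {
                "name": "fur",           # transform node name
                "cbId": "abc123:789abc"
            },
            "attrs": {}                  # gets "cacheFileName" filled in
        }
    ]
}
```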
@@ -1,9 +1,50 @@
 from maya import cmds
+import maya.api.OpenMaya as om

 import pyblish.api
 import json


+def get_all_children(nodes):
+    """Return all children of `nodes` including each instanced child.
+    Using maya.cmds.listRelatives(allDescendents=True) includes only the first
+    instance. As such, this function acts as an optimal replacement with a
+    focus on a fast query.
+
+    """
+
+    sel = om.MSelectionList()
+    traversed = set()
+    iterator = om.MItDag(om.MItDag.kDepthFirst)
+    for node in nodes:
+
+        if node in traversed:
+            # Ignore if already processed as a child
+            # before
+            continue
+
+        sel.clear()
+        sel.add(node)
+        dag = sel.getDagPath(0)
+
+        iterator.reset(dag)
+        # ignore self
+        iterator.next()  # noqa: B305
+        while not iterator.isDone():
+
+            path = iterator.fullPathName()
+
+            if path in traversed:
+                iterator.prune()
+                iterator.next()  # noqa: B305
+                continue
+
+            traversed.add(path)
+            iterator.next()  # noqa: B305
+
+    return list(traversed)
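A quick way to see what the MItDag traversal buys over `listRelatives`: with instanced geometry each instance has its own DAG path, and the iterator above visits all of them. A minimal sketch, assuming a running Maya session and the `get_all_children` helper above:

```python
from maya import cmds

# One sphere parented under a group, plus an instance of that group.
sphere = cmds.polySphere(name="ball")[0]
group_a = cmds.group(sphere, name="grpA")
group_b = cmds.instance(group_a, name="grpB")[0]

flat = cmds.listRelatives([group_a, group_b],
                          allDescendents=True, fullPath=True) or []
deep = get_all_children([group_a, group_b])
# `deep` lists the instanced child once per DAG path (under |grpA and
# |grpB), which `allDescendents` alone may collapse to a single entry.
```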
+
+
 class CollectInstances(pyblish.api.ContextPlugin):
     """Gather instances by objectSet and pre-defined attribute

@@ -86,12 +127,8 @@ class CollectInstances(pyblish.api.ContextPlugin):
             # Collect members
             members = cmds.ls(members, long=True) or []

-            # `maya.cmds.listRelatives(noIntermediate=True)` only works when
-            # `shapes=True` argument is passed, since we also want to include
-            # transforms we filter afterwards.
-            children = cmds.listRelatives(members,
-                                          allDescendents=True,
-                                          fullPath=True) or []
+            dag_members = cmds.ls(members, type="dagNode", long=True)
+            children = get_all_children(dag_members)
             children = cmds.ls(children, noIntermediate=True, long=True)

             parents = []
@@ -603,6 +603,18 @@ class CollectLook(pyblish.api.InstancePlugin):
                                  source,
                                  computed_source))

+            # renderman allows nodes to have filename attribute empty while
+            # you can have another incoming connection from different node.
+            pxr_nodes = set()
+            if cmds.pluginInfo("RenderMan_for_Maya", query=True, loaded=True):
+                pxr_nodes = set(
+                    cmds.pluginInfo("RenderMan_for_Maya",
+                                    query=True,
+                                    dependNode=True)
+                )
+
+            if not source and cmds.nodeType(node) in pxr_nodes:
+                self.log.info("Renderman: source is empty, skipping...")
+                continue
             # We replace backslashes with forward slashes because V-Ray
             # can't handle the UDIM files with the backslashes in the
             # paths as the computed patterns

@@ -43,11 +43,12 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
         instance.data["resources"] = yeti_resources

-        # Force frame range for export
-        instance.data["frameStart"] = cmds.playbackOptions(
-            query=True, animationStartTime=True)
-        instance.data["frameEnd"] = cmds.playbackOptions(
-            query=True, animationStartTime=True)
+        # Force frame range for yeti cache export for the rig
+        start = cmds.playbackOptions(query=True, animationStartTime=True)
+        for key in ["frameStart", "frameEnd",
+                    "frameStartHandle", "frameEndHandle"]:
+            instance.data[key] = start
+        instance.data["preroll"] = 0

     def collect_input_connections(self, instance):
         """Collect the inputs for all nodes in the input_SET"""
@@ -25,13 +25,10 @@ class ExtractYetiCache(openpype.api.Extractor):
         # Define extract output file path
         dirname = self.staging_dir(instance)

-        # Yeti related staging dirs
-        data_file = os.path.join(dirname, "yeti.fursettings")
-
         # Collect information for writing cache
-        start_frame = instance.data.get("frameStartHandle")
-        end_frame = instance.data.get("frameEndHandle")
-        preroll = instance.data.get("preroll")
+        start_frame = instance.data["frameStartHandle"]
+        end_frame = instance.data["frameEndHandle"]
+        preroll = instance.data["preroll"]
         if preroll > 0:
             start_frame -= preroll

@@ -57,32 +54,35 @@ class ExtractYetiCache(openpype.api.Extractor):
         cache_files = [x for x in os.listdir(dirname) if x.endswith(".fur")]

         self.log.info("Writing metadata file")
-        settings = instance.data.get("fursettings", None)
-        if settings is not None:
-            with open(data_file, "w") as fp:
-                json.dump(settings, fp, ensure_ascii=False)
+        settings = instance.data["fursettings"]
+        fursettings_path = os.path.join(dirname, "yeti.fursettings")
+        with open(fursettings_path, "w") as fp:
+            json.dump(settings, fp, ensure_ascii=False)

         # build representations
         if "representations" not in instance.data:
             instance.data["representations"] = []

         self.log.info("cache files: {}".format(cache_files[0]))
         instance.data["representations"].append(
             {
                 'name': 'fur',
                 'ext': 'fur',
                 'files': cache_files[0] if len(cache_files) == 1 else cache_files,
                 'stagingDir': dirname,
                 'frameStart': int(start_frame),
                 'frameEnd': int(end_frame)
             }
         )

         # Workaround: We do not explicitly register these files with the
         # representation solely so that we can write multiple sequences
         # a single Subset without renaming - it's a bit of a hack
         # TODO: Implement better way to manage this sort of integration
         if 'transfers' not in instance.data:
             instance.data['transfers'] = []

         publish_dir = instance.data["publishDir"]
         for cache_filename in cache_files:
             src = os.path.join(dirname, cache_filename)
             dst = os.path.join(publish_dir, os.path.basename(cache_filename))
             instance.data['transfers'].append([src, dst])

         instance.data["representations"].append(
             {
-                'name': 'fursettings',
+                'name': 'fur',
                 'ext': 'fursettings',
-                'files': os.path.basename(data_file),
+                'files': os.path.basename(fursettings_path),
                 'stagingDir': dirname
             }
         )

@@ -124,8 +124,8 @@ class ExtractYetiRig(openpype.api.Extractor):
         settings_path = os.path.join(dirname, "yeti.rigsettings")

         # Yeti related staging dirs
-        maya_path = os.path.join(
-            dirname, "yeti_rig.{}".format(self.scene_type))
+        maya_path = os.path.join(dirname,
+                                 "yeti_rig.{}".format(self.scene_type))

         self.log.info("Writing metadata file")

@@ -157,7 +157,7 @@ class ExtractYetiRig(openpype.api.Extractor):
         input_set = next(i for i in instance if i == "input_SET")

         # Get all items
-        set_members = cmds.sets(input_set, query=True)
+        set_members = cmds.sets(input_set, query=True) or []
         set_members += cmds.listRelatives(set_members,
                                           allDescendents=True,
                                           fullPath=True) or []

@@ -167,7 +167,7 @@ class ExtractYetiRig(openpype.api.Extractor):
         resources = instance.data.get("resources", {})
         with disconnect_plugs(settings, members):
             with yetigraph_attribute_values(resources_dir, resources):
-                with maya.attribute_values(attr_value):
+                with lib.attribute_values(attr_value):
                     cmds.select(nodes, noExpand=True)
                     cmds.file(maya_path,
                               force=True,
@@ -539,7 +539,9 @@ def get_created_node_imageio_setting_legacy(nodeclass, creator, subset):
     imageio_nodes = get_nuke_imageio_settings()["nodes"]
     required_nodes = imageio_nodes["requiredNodes"]
-    override_nodes = imageio_nodes["overrideNodes"]
+
+    # HACK: for backward compatibility this needs to be optional
+    override_nodes = imageio_nodes.get("overrideNodes", [])

     imageio_node = None
     for node in required_nodes:

@@ -2,7 +2,20 @@ import nuke
 from openpype.hosts.nuke.api import plugin
 from openpype.hosts.nuke.api.lib import (
-    create_write_node, create_write_node_legacy)
+    create_write_node,
+    create_write_node_legacy,
+    get_created_node_imageio_setting_legacy
+)
+
+# HACK: just to disable still image on projects which
+# are not having anatomy imageio preset for CreateWriteStill
+# TODO: remove this code as soon as it will be obsolete
+imageio_writes = get_created_node_imageio_setting_legacy(
+    "Write",
+    "CreateWriteStill",
+    "stillMain"
+)
+print(imageio_writes["knobs"])


 class CreateWriteStill(plugin.AbstractWriteRender):

@@ -7,7 +7,7 @@ import clique
 class NukeRenderLocal(openpype.api.Extractor):
     # TODO: rewrite docstring to nuke
-    """Render the current Fusion composition locally.
+    """Render the current Nuke composition locally.

     Extract the result of savers by starting a comp render
     This will run the local render of Fusion.
@@ -23,9 +23,13 @@ class ExtractThumbnail(openpype.api.Extractor):
     families = ["review"]
     hosts = ["nuke"]

-    # presets
+    # settings
+    use_rendered = False
+    bake_viewer_process = True
+    bake_viewer_input_process = True
     nodes = {}

     def process(self, instance):
         if "render.farm" in instance.data["families"]:
             return

@@ -38,11 +42,17 @@ class ExtractThumbnail(openpype.api.Extractor):
         self.render_thumbnail(instance)

     def render_thumbnail(self, instance):
+        first_frame = instance.data["frameStartHandle"]
+        last_frame = instance.data["frameEndHandle"]
+
+        # find frame range and define middle thumb frame
+        mid_frame = int((last_frame - first_frame) / 2)
+
         node = instance[0]  # group node
         self.log.info("Creating staging dir...")

         if "representations" not in instance.data:
-            instance.data["representations"] = list()
+            instance.data["representations"] = []

         staging_dir = os.path.normpath(
             os.path.dirname(instance.data['path']))

@@ -53,7 +63,11 @@ class ExtractThumbnail(openpype.api.Extractor):
             "StagingDir `{0}`...".format(instance.data["stagingDir"]))

         temporary_nodes = []
+
+        # try to connect already rendered images
+        previous_node = node
         collection = instance.data.get("collection", None)
+        self.log.debug("__ collection: `{}`".format(collection))

         if collection:
             # get path

@@ -61,40 +75,45 @@ class ExtractThumbnail(openpype.api.Extractor):
                                             "{head}{padding}{tail}"))
             fhead = collection.format("{head}")

-            # get first and last frame
-            first_frame = min(collection.indexes)
-            last_frame = max(collection.indexes)
+            thumb_fname = list(collection)[mid_frame]
         else:
-            fname = os.path.basename(instance.data.get("path", None))
+            fname = thumb_fname = os.path.basename(
+                instance.data.get("path", None))
             fhead = os.path.splitext(fname)[0] + "."
-            first_frame = instance.data.get("frameStart", None)
-            last_frame = instance.data.get("frameEnd", None)
+
+        self.log.debug("__ fhead: `{}`".format(fhead))

         if "#" in fhead:
             fhead = fhead.replace("#", "")[:-1]

-        path_render = os.path.join(staging_dir, fname).replace("\\", "/")
-        # check if file exist otherwise connect to write node
-        if os.path.isfile(path_render):
+        path_render = os.path.join(
+            staging_dir, thumb_fname).replace("\\", "/")
+        self.log.debug("__ path_render: `{}`".format(path_render))
+
+        if self.use_rendered and os.path.isfile(path_render):
+            # check if file exist otherwise connect to write node
             rnode = nuke.createNode("Read")

             rnode["file"].setValue(path_render)

             rnode["first"].setValue(first_frame)
             rnode["origfirst"].setValue(first_frame)
             rnode["last"].setValue(last_frame)
             rnode["origlast"].setValue(last_frame)
+
+            # turn it raw if none of baking is ON
+            if all([
+                not self.bake_viewer_input_process,
+                not self.bake_viewer_process
+            ]):
+                rnode["raw"].setValue(True)
+
             temporary_nodes.append(rnode)
             previous_node = rnode
+        else:
+            previous_node = node

-        # get input process and connect it to baking
-        ipn = self.get_view_process_node()
-        if ipn is not None:
-            ipn.setInput(0, previous_node)
-            previous_node = ipn
-            temporary_nodes.append(ipn)
+        # bake viewer input look node into thumbnail image
+        if self.bake_viewer_input_process:
+            # get input process and connect it to baking
+            ipn = self.get_view_process_node()
+            if ipn is not None:
+                ipn.setInput(0, previous_node)
+                previous_node = ipn
+                temporary_nodes.append(ipn)

         reformat_node = nuke.createNode("Reformat")

@@ -110,10 +129,12 @@ class ExtractThumbnail(openpype.api.Extractor):
             previous_node = reformat_node
             temporary_nodes.append(reformat_node)

-        dag_node = nuke.createNode("OCIODisplay")
-        dag_node.setInput(0, previous_node)
-        previous_node = dag_node
-        temporary_nodes.append(dag_node)
+        # bake viewer colorspace into thumbnail image
+        if self.bake_viewer_process:
+            dag_node = nuke.createNode("OCIODisplay")
+            dag_node.setInput(0, previous_node)
+            previous_node = dag_node
+            temporary_nodes.append(dag_node)

         # create write node
         write_node = nuke.createNode("Write")

@@ -128,26 +149,18 @@ class ExtractThumbnail(openpype.api.Extractor):
         temporary_nodes.append(write_node)
         tags = ["thumbnail", "publish_on_farm"]

-        # retime for
-        mid_frame = int((int(last_frame) - int(first_frame)) / 2) \
-            + int(first_frame)
-        first_frame = int(last_frame) / 2
-        last_frame = int(last_frame) / 2
-
         repre = {
             'name': name,
             'ext': "jpg",
             "outputName": "thumb",
             'files': file,
             "stagingDir": staging_dir,
             "frameStart": first_frame,
             "frameEnd": last_frame,
             "tags": tags
         }
         instance.data["representations"].append(repre)

         # Render frames
-        nuke.execute(write_node.name(), int(mid_frame), int(mid_frame))
+        nuke.execute(write_node.name(), mid_frame, mid_frame)

         self.log.debug(
             "representations: {}".format(instance.data["representations"]))
@@ -151,15 +151,11 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
                 "resolutionWidth": resolution_width,
                 "resolutionHeight": resolution_height,
                 "pixelAspect": pixel_aspect,
-                "review": review
+                "review": review,
+                "representations": []
             })
             self.log.info("collected instance: {}".format(instance.data))
             instances.append(instance)

-        # create instances in context data if not are created yet
-        if not context.data.get("instances"):
-            context.data["instances"] = list()
-
         context.data["instances"].extend(instances)
         self.log.debug("context: {}".format(context))
@@ -17,7 +17,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
     label = "Pre-collect Workfile"
     hosts = ['nuke']

-    def process(self, context):
+    def process(self, context):  # sourcery skip: avoid-builtin-shadow
         root = nuke.root()

         current_file = os.path.normpath(nuke.root().name())

@@ -74,20 +74,6 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         }
         context.data.update(script_data)

-        # creating instance data
-        instance.data.update({
-            "subset": subset,
-            "label": base_name,
-            "name": base_name,
-            "publish": root.knob('publish').value(),
-            "family": family,
-            "families": [family],
-            "representations": list()
-        })
-
-        # adding basic script data
-        instance.data.update(script_data)
-
         # creating representation
         representation = {
             'name': 'nk',

@@ -96,12 +82,18 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
             "stagingDir": staging_dir,
         }

-        instance.data["representations"].append(representation)
+        # creating instance data
+        instance.data.update({
+            "subset": subset,
+            "label": base_name,
+            "name": base_name,
+            "publish": root.knob('publish').value(),
+            "family": family,
+            "families": [family],
+            "representations": [representation]
+        })
+
+        # adding basic script data
+        instance.data.update(script_data)

         self.log.info('Publishing script version')

         # create instances in context data if not are created yet
         if not context.data.get("instances"):
             context.data["instances"] = list()

         context.data["instances"].append(instance)
@@ -72,12 +72,12 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
        if "representations" not in instance.data:
            instance.data["representations"] = list()

-            representation = {
-                'name': ext,
-                'ext': ext,
-                "stagingDir": output_dir,
-                "tags": list()
-            }
+        representation = {
+            'name': ext,
+            'ext': ext,
+            "stagingDir": output_dir,
+            "tags": list()
+        }

        try:
            collected_frames = [f for f in os.listdir(output_dir)

@@ -175,6 +175,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
            "frameEndHandle": last_frame,
        })

+        # make sure rendered sequence on farm will
+        # be used for extract review
+        if not instance.data["review"]:
+            instance.data["useSequenceForReview"] = False
+
        # * Add audio to instance if exists.
        # Find latest versions document
        version_doc = pype.get_latest_version(
@@ -1 +0,0 @@
-import knob_scripter
(11 binary icon images removed, 1.2–2.7 KiB each)
@@ -1,4 +0,0 @@
-import nuke
-
-# default write mov
-nuke.knobDefault('Write.mov.colorspace', 'sRGB')
@@ -3,7 +3,7 @@ import json
import pyblish.api

from openpype.lib import get_subset_name_with_asset_doc
-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_name


class CollectBulkMovInstances(pyblish.api.InstancePlugin):

@@ -24,12 +24,9 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):

    def process(self, instance):
        context = instance.context
+        project_name = context.data["projectEntity"]["name"]
        asset_name = instance.data["asset"]

-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
+        asset_doc = get_asset_by_name(project_name, asset_name)
        if not asset_doc:
            raise AssertionError((
                "Couldn't find Asset document with name \"{}\""

@@ -52,7 +49,7 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
            self.subset_name_variant,
            task_name,
            asset_doc,
-            legacy_io.Session["AVALON_PROJECT"]
+            project_name
        )
        instance_name = f"{asset_name}_{subset_name}"
@@ -3,7 +3,7 @@ import re
from copy import deepcopy
import pyblish.api

-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_id


class CollectHierarchyInstance(pyblish.api.ContextPlugin):

@@ -61,27 +61,32 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
            **instance.data["anatomyData"])

    def create_hierarchy(self, instance):
-        parents = list()
-        hierarchy = list()
-        visual_hierarchy = [instance.context.data["assetEntity"]]
+        asset_doc = instance.context.data["assetEntity"]
+        project_doc = instance.context.data["projectEntity"]
+        project_name = project_doc["name"]
+        visual_hierarchy = [asset_doc]
+        current_doc = asset_doc
        while True:
-            visual_parent = legacy_io.find_one(
-                {"_id": visual_hierarchy[-1]["data"]["visualParent"]}
-            )
-            if visual_parent:
-                visual_hierarchy.append(visual_parent)
-            else:
-                visual_hierarchy.append(
-                    instance.context.data["projectEntity"])
+            visual_parent_id = current_doc["data"]["visualParent"]
+            visual_parent = None
+            if visual_parent_id:
+                visual_parent = get_asset_by_id(project_name, visual_parent_id)
+
+            if not visual_parent:
+                visual_hierarchy.append(project_doc)
                break
+            visual_hierarchy.append(visual_parent)
+            current_doc = visual_parent

        # add current selection context hierarchy from standalonepublisher
+        parents = list()
        for entity in reversed(visual_hierarchy):
            parents.append({
                "entity_type": entity["data"]["entityType"],
                "entity_name": entity["name"]
            })

+        hierarchy = list()
        if self.shot_add_hierarchy:
            parent_template_patern = re.compile(r"\{([a-z]*?)\}")
            # fill the parents parts from presets

@@ -129,9 +134,8 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
        self.log.debug(f"Hierarchy: {hierarchy}")
        self.log.debug(f"parents: {parents}")

+        tasks_to_add = dict()
        if self.shot_add_tasks:
-            tasks_to_add = dict()
-            project_doc = legacy_io.find_one({"type": "project"})
            project_tasks = project_doc["config"]["tasks"]
            for task_name, task_data in self.shot_add_tasks.items():
                _task_data = deepcopy(task_data)

@@ -150,9 +154,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
                        task_name,
                        list(project_tasks.keys())))

-            instance.data["tasks"] = tasks_to_add
-        else:
-            instance.data["tasks"] = dict()
+        instance.data["tasks"] = tasks_to_add

        # updating hierarchy data
        instance.data["anatomyData"].update({
@@ -4,7 +4,7 @@ import collections
import pyblish.api
from pprint import pformat

-from openpype.pipeline import legacy_io
+from openpype.client import get_assets


class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):

@@ -119,8 +119,9 @@ class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):

    def _asset_docs_by_parent_id(self, instance):
        # Query all assets for project and store them by parent's id to list
+        project_name = instance.context.data["projectEntity"]["name"]
        asset_docs_by_parent_id = collections.defaultdict(list)
-        for asset_doc in legacy_io.find({"type": "asset"}):
+        for asset_doc in get_assets(project_name):
            parent_id = asset_doc["data"]["visualParent"]
            asset_docs_by_parent_id[parent_id].append(asset_doc)
        return asset_docs_by_parent_id
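The hunk above is representative of this whole PR: raw `legacy_io` Mongo queries are swapped for `openpype.client` query functions. A hedged sketch of the grouping pattern the new call feeds (`get_assets` and the document layout are taken from the diff; the project name is a placeholder):

```python
import collections

from openpype.client import get_assets

# Hypothetical project name for illustration only.
project_name = "demo_project"

# get_assets(project_name) yields asset documents for the whole project.
asset_docs_by_parent_id = collections.defaultdict(list)
for asset_doc in get_assets(project_name):
    # Group each asset under its visual parent's id (None for top level).
    parent_id = asset_doc["data"]["visualParent"]
    asset_docs_by_parent_id[parent_id].append(asset_doc)
```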
@@ -1,9 +1,7 @@
import pyblish.api

-from openpype.pipeline import (
-    PublishXmlValidationError,
-    legacy_io,
-)
+from openpype.client import get_assets
+from openpype.pipeline import PublishXmlValidationError


class ValidateTaskExistence(pyblish.api.ContextPlugin):

@@ -20,15 +18,11 @@ class ValidateTaskExistence(pyblish.api.ContextPlugin):
        for instance in context:
            asset_names.add(instance.data["asset"])

-        asset_docs = legacy_io.find(
-            {
-                "type": "asset",
-                "name": {"$in": list(asset_names)}
-            },
-            {
-                "name": 1,
-                "data.tasks": 1
-            }
+        project_name = context.data["projectEntity"]["name"]
+        asset_docs = get_assets(
+            project_name,
+            asset_names=asset_names,
+            fields=["name", "data.tasks"]
        )
        tasks_by_asset_names = {}
        for asset_doc in asset_docs:
@@ -13,9 +13,13 @@ import tempfile
import math

import pyblish.api

+from openpype.client import (
+    get_asset_by_name,
+    get_last_version_by_subset_name
+)
from openpype.lib import (
    prepare_template_data,
-    get_asset,
    get_ffprobe_streams,
    convert_ffprobe_fps_value,
)

@@ -23,7 +27,6 @@ from openpype.lib.plugin_tools import (
    parse_json,
    get_subset_name_with_asset_doc
)
-from openpype.pipeline import legacy_io


class CollectPublishedFiles(pyblish.api.ContextPlugin):

@@ -56,8 +59,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):

        self.log.info("task_sub:: {}".format(task_subfolders))

+        project_name = context.data["project_name"]
        asset_name = context.data["asset"]
-        asset_doc = get_asset()
+        asset_doc = get_asset_by_name(project_name, asset_name)
        task_name = context.data["task"]
        task_type = context.data["taskType"]
-        project_name = context.data["project_name"]

@@ -80,7 +84,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
            family, variant, task_name, asset_doc,
            project_name=project_name, host_name="webpublisher"
        )
-        version = self._get_last_version(asset_name, subset_name) + 1
+        version = self._get_next_version(
+            project_name, asset_doc, subset_name
+        )

        instance = context.create_instance(subset_name)
        instance.data["asset"] = asset_name

@@ -219,55 +225,19 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
            config["families"],
            config["tags"])

-    def _get_last_version(self, asset_name, subset_name):
-        """Returns version number or 0 for 'asset' and 'subset'"""
-        query = [
-            {
-                "$match": {"type": "asset", "name": asset_name}
-            },
-            {
-                "$lookup":
-                    {
-                        "from": os.environ["AVALON_PROJECT"],
-                        "localField": "_id",
-                        "foreignField": "parent",
-                        "as": "subsets"
-                    }
-            },
-            {
-                "$unwind": "$subsets"
-            },
-            {
-                "$match": {"subsets.type": "subset",
-                           "subsets.name": subset_name}},
-            {
-                "$lookup":
-                    {
-                        "from": os.environ["AVALON_PROJECT"],
-                        "localField": "subsets._id",
-                        "foreignField": "parent",
-                        "as": "versions"
-                    }
-            },
-            {
-                "$unwind": "$versions"
-            },
-            {
-                "$group": {
-                    "_id": {
-                        "asset_name": "$name",
-                        "subset_name": "$subsets.name"
-                    },
-                    'version': {'$max': "$versions.name"}
-                }
-            }
-        ]
-        version = list(legacy_io.aggregate(query))
-
-        if version:
-            return version[0].get("version") or 0
-        else:
-            return 0
+    def _get_next_version(self, project_name, asset_doc, subset_name):
+        """Return next version number for 'asset' and 'subset' (1 if none)."""
+        version_doc = get_last_version_by_subset_name(
+            project_name,
+            subset_name,
+            asset_doc["_id"],
+            fields=["name"]
+        )
+        version = 1
+        if version_doc:
+            version += int(version_doc["name"])
+        return version

    def _get_number_of_frames(self, file_url):
        """Return duration in frames"""
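The `_get_next_version` rewrite above replaces a multi-stage Mongo aggregation with a single client call. A minimal usage sketch, assuming an asset document is already at hand (the project, asset, and subset names are illustrative only):

```python
from openpype.client import get_asset_by_name, get_last_version_by_subset_name

# Hypothetical identifiers for illustration.
project_name = "demo_project"
asset_doc = get_asset_by_name(project_name, "sh010")

version_doc = get_last_version_by_subset_name(
    project_name, "renderMain", asset_doc["_id"], fields=["name"]
)
# Version documents store the version number in "name"; start at 1 if none.
next_version = 1
if version_doc:
    next_version += int(version_doc["name"])
```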
@@ -2,11 +2,15 @@
import os
import json
import datetime
-from bson.objectid import ObjectId
import collections
-from aiohttp.web_response import Response
import subprocess

+from bson.objectid import ObjectId
+from aiohttp.web_response import Response
+
+from openpype.client import (
+    get_projects,
+    get_assets,
+)
from openpype.lib import (
    OpenPypeMongoConnection,
    PypeLogger,
@@ -16,30 +20,29 @@ from openpype.lib.remote_publish import (
    ERROR_STATUS,
    REPROCESS_STATUS
)
-from openpype.pipeline import AvalonMongoDB
-from openpype_modules.avalon_apps.rest_api import _RestApiEndpoint
+from openpype.settings import get_project_settings
from openpype_modules.webserver.base_routes import RestApiEndpoint

-log = PypeLogger.get_logger("WebServer")
+log = PypeLogger.get_logger("WebpublishRoutes")


-class RestApiResource:
-    """Resource carrying needed info and Avalon DB connection for publish."""
-    def __init__(self, server_manager, executable, upload_dir,
-                 studio_task_queue=None):
-        self.server_manager = server_manager
-        self.upload_dir = upload_dir
-        self.executable = executable
-
-        if studio_task_queue is None:
-            studio_task_queue = collections.deque()
-        self.studio_task_queue = studio_task_queue
-
-        self.dbcon = AvalonMongoDB()
-        self.dbcon.install()
+class ResourceRestApiEndpoint(RestApiEndpoint):
+    def __init__(self, resource):
+        self.resource = resource
+        super(ResourceRestApiEndpoint, self).__init__()
+
+
+class WebpublishApiEndpoint(ResourceRestApiEndpoint):
+    @property
+    def dbcon(self):
+        return self.resource.dbcon
+
+
+class JsonApiResource:
+    """Resource for json manipulation.
+
+    All resources handling sending output to REST should inherit from
+    this class.
+    """
    @staticmethod
    def json_dump_handler(value):
        if isinstance(value, datetime.datetime):
@@ -59,19 +62,33 @@ class RestApiResource:
        ).encode("utf-8")


-class OpenPypeRestApiResource(RestApiResource):
+class RestApiResource(JsonApiResource):
+    """Resource carrying needed info and Avalon DB connection for publish."""
+    def __init__(self, server_manager, executable, upload_dir,
+                 studio_task_queue=None):
+        self.server_manager = server_manager
+        self.upload_dir = upload_dir
+        self.executable = executable
+
+        if studio_task_queue is None:
+            studio_task_queue = collections.deque()
+        self.studio_task_queue = studio_task_queue
+
+
+class WebpublishRestApiResource(JsonApiResource):
    """Resource carrying OP DB connection for storing batch info into DB."""
-    def __init__(self, ):
+
+    def __init__(self):
        mongo_client = OpenPypeMongoConnection.get_mongo_client()
        database_name = os.environ["OPENPYPE_DATABASE_NAME"]
        self.dbcon = mongo_client[database_name]["webpublishes"]


-class ProjectsEndpoint(_RestApiEndpoint):
+class ProjectsEndpoint(ResourceRestApiEndpoint):
    """Returns list of dict with project info (id, name)."""
    async def get(self) -> Response:
        output = []
-        for project_doc in self.dbcon.projects():
+        for project_doc in get_projects():
            ret_val = {
                "id": project_doc["_id"],
                "name": project_doc["name"]
@@ -84,7 +101,7 @@ class ProjectsEndpoint(_RestApiEndpoint):
        )


-class HiearchyEndpoint(_RestApiEndpoint):
+class HiearchyEndpoint(ResourceRestApiEndpoint):
    """Returns dictionary with context tree from assets."""
    async def get(self, project_name) -> Response:
        query_projection = {
@@ -96,10 +113,7 @@ class HiearchyEndpoint(_RestApiEndpoint):
            "type": 1,
        }

-        asset_docs = self.dbcon.database[project_name].find(
-            {"type": "asset"},
-            query_projection
-        )
+        asset_docs = get_assets(project_name, fields=query_projection.keys())
        asset_docs_by_id = {
            asset_doc["_id"]: asset_doc
            for asset_doc in asset_docs
@@ -183,7 +197,7 @@ class TaskNode(Node):
        self["attributes"] = {}


-class BatchPublishEndpoint(_RestApiEndpoint):
+class BatchPublishEndpoint(WebpublishApiEndpoint):
    """Triggers headless publishing of batch."""
    async def post(self, request) -> Response:
        # Validate existence of openpype executable
@@ -288,7 +302,7 @@ class BatchPublishEndpoint(_RestApiEndpoint):
        )


-class TaskPublishEndpoint(_RestApiEndpoint):
+class TaskPublishEndpoint(WebpublishApiEndpoint):
    """Prepared endpoint triggered after each task - for future development."""
    async def post(self, request) -> Response:
        return Response(
@@ -298,8 +312,12 @@ class TaskPublishEndpoint(_RestApiEndpoint):
        )


-class BatchStatusEndpoint(_RestApiEndpoint):
-    """Returns dict with info for batch_id."""
+class BatchStatusEndpoint(WebpublishApiEndpoint):
+    """Returns dict with info for batch_id.
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
    async def get(self, batch_id) -> Response:
        output = self.dbcon.find_one({"batch_id": batch_id})
@@ -318,8 +336,12 @@ class BatchStatusEndpoint(_RestApiEndpoint):
        )


-class UserReportEndpoint(_RestApiEndpoint):
-    """Returns list of dict with batch info for user (email address)."""
+class UserReportEndpoint(WebpublishApiEndpoint):
+    """Returns list of dict with batch info for user (email address).
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
    async def get(self, user) -> Response:
        output = list(self.dbcon.find({"user": user},
                                      projection={"log": False}))
@@ -338,7 +360,7 @@ class UserReportEndpoint(_RestApiEndpoint):
        )


-class ConfiguredExtensionsEndpoint(_RestApiEndpoint):
+class ConfiguredExtensionsEndpoint(WebpublishApiEndpoint):
    """Returns dict of extensions which have mapping to family.

    Returns:
@@ -378,8 +400,12 @@ class ConfiguredExtensionsEndpoint(_RestApiEndpoint):
        )


-class BatchReprocessEndpoint(_RestApiEndpoint):
-    """Marks latest 'batch_id' for reprocessing, returns 404 if not found."""
+class BatchReprocessEndpoint(WebpublishApiEndpoint):
+    """Marks latest 'batch_id' for reprocessing, returns 404 if not found.
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
    async def post(self, batch_id) -> Response:
        batches = self.dbcon.find({"batch_id": batch_id,
                                   "status": ERROR_STATUS}).sort("_id", -1)
@@ -10,7 +10,7 @@ from openpype.lib import PypeLogger

from .webpublish_routes import (
    RestApiResource,
-    OpenPypeRestApiResource,
+    WebpublishRestApiResource,
    HiearchyEndpoint,
    ProjectsEndpoint,
    ConfiguredExtensionsEndpoint,
@@ -27,7 +27,7 @@ from openpype.lib.remote_publish import (
)


-log = PypeLogger().get_logger("webserver_gui")
+log = PypeLogger.get_logger("webserver_gui")


def run_webserver(*args, **kwargs):
@@ -69,16 +69,14 @@ def run_webserver(*args, **kwargs):
    )

    # triggers publish
-    webpublisher_task_publish_endpoint = \
-        BatchPublishEndpoint(resource)
+    webpublisher_task_publish_endpoint = BatchPublishEndpoint(resource)
    server_manager.add_route(
        "POST",
        "/api/webpublish/batch",
        webpublisher_task_publish_endpoint.dispatch
    )

-    webpublisher_batch_publish_endpoint = \
-        TaskPublishEndpoint(resource)
+    webpublisher_batch_publish_endpoint = TaskPublishEndpoint(resource)
    server_manager.add_route(
        "POST",
        "/api/webpublish/task",
@@ -86,27 +84,26 @@ def run_webserver(*args, **kwargs):
    )

    # reporting
-    openpype_resource = OpenPypeRestApiResource()
-    batch_status_endpoint = BatchStatusEndpoint(openpype_resource)
+    webpublish_resource = WebpublishRestApiResource()
+    batch_status_endpoint = BatchStatusEndpoint(webpublish_resource)
    server_manager.add_route(
        "GET",
        "/api/batch_status/{batch_id}",
        batch_status_endpoint.dispatch
    )

-    user_status_endpoint = UserReportEndpoint(openpype_resource)
+    user_status_endpoint = UserReportEndpoint(webpublish_resource)
    server_manager.add_route(
        "GET",
        "/api/publishes/{user}",
        user_status_endpoint.dispatch
    )

-    webpublisher_batch_reprocess_endpoint = \
-        BatchReprocessEndpoint(openpype_resource)
+    batch_reprocess_endpoint = BatchReprocessEndpoint(webpublish_resource)
    server_manager.add_route(
        "POST",
        "/api/webpublish/reprocess/{batch_id}",
-        webpublisher_batch_reprocess_endpoint.dispatch
+        batch_reprocess_endpoint.dispatch
    )

    server_manager.start_server()
@@ -7,7 +7,6 @@ import platform
import logging
import collections
import functools
-import getpass

from bson.objectid import ObjectId
@@ -19,6 +18,7 @@ from .anatomy import Anatomy
from .profiles_filtering import filter_profiles
from .events import emit_event
from .path_templates import StringTemplate
+from .local_settings import get_openpype_username

legacy_io = None
@@ -550,7 +550,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
        "asset": asset_doc["name"],
        "parent": parent_name,
        "app": host_name,
-        "user": getpass.getuser(),
+        "user": get_openpype_username(),
        "hierarchy": hierarchy,
    }
@@ -797,8 +797,14 @@ def update_current_task(task=None, asset=None, app=None, template_key=None):
        else:
            os.environ[key] = value

+    data = changes.copy()
+    # Convert env keys to human readable keys
+    data["project_name"] = legacy_io.Session["AVALON_PROJECT"]
+    data["asset_name"] = legacy_io.Session["AVALON_ASSET"]
+    data["task_name"] = legacy_io.Session["AVALON_TASK"]
+
    # Emit session change
-    emit_event("taskChanged", changes.copy())
+    emit_event("taskChanged", data)

    return changes
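The hunk above enriches the "taskChanged" payload with human-readable keys before emitting it. A hedged sketch of a consumer, using the `register_event_callback` helper that appears later in this diff (the callback body is illustrative):

```python
from openpype.lib.events import register_event_callback


def _on_task_change(event):
    # Keys match the payload built in update_current_task() above.
    print(event["project_name"], event["asset_name"], event["task_name"])


register_event_callback("taskChanged", _on_task_change)
```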
@@ -533,7 +533,7 @@ def convert_input_paths_for_ffmpeg(
    output_dir,
    logger=None
):
-    """Contert source file to format supported in ffmpeg.
+    """Convert source file to format supported in ffmpeg.

    Currently can convert only exrs. The input filepaths should be files
    with same type. Information about input is loaded only from first found
@@ -463,6 +463,25 @@ class OpenPypeModule:

        pass

+    def on_host_install(self, host, host_name, project_name):
+        """Host was installed which gives option to handle in-host logic.
+
+        It is a good option to register in-host event callbacks which are
+        specific for the module. The module is kept in memory for rest of
+        the process.
+
+        Arguments may change in future. E.g. 'host_name' should be possible
+        to receive from 'host' object.
+
+        Args:
+            host (ModuleType): Access to installed/registered host object.
+            host_name (str): Name of host.
+            project_name (str): Project name which is main part of host
+                context.
+        """
+
+        pass
+
    def cli(self, module_click_group):
        """Add commands to click group.
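A minimal sketch of a module overriding the new hook; the pattern mirrors the TimersManager change later in this diff. The module name is hypothetical and the import paths are assumptions that may vary between OpenPype versions:

```python
from openpype.modules import OpenPypeModule  # assumed import path


class MyModule(OpenPypeModule):
    name = "my_module"  # hypothetical module name

    def initialize(self, module_settings):
        self.enabled = True

    def on_host_install(self, host, host_name, project_name):
        # Register in-host event callbacks; the module instance stays
        # in memory for the rest of the process, so callbacks stay alive.
        from openpype.lib.events import register_event_callback
        register_event_callback("taskChanged", self._on_task_change)

    def _on_task_change(self, event):
        self.log.debug("Task changed: {}".format(event["task_name"]))
```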
@@ -33,6 +33,7 @@ class AfterEffectsSubmitDeadline(
    hosts = ["aftereffects"]
    families = ["render.farm"]  # cannot be "render" as that is integrated
    use_published = True
+    targets = ["local"]

    priority = 50
    chunk_size = 1000000
@@ -238,6 +238,7 @@ class HarmonySubmitDeadline(
    order = pyblish.api.IntegratorOrder + 0.1
    hosts = ["harmony"]
    families = ["render.farm"]
+    targets = ["local"]

    optional = True
    use_published = False
@@ -287,6 +287,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
    order = pyblish.api.IntegratorOrder + 0.1
    hosts = ["maya"]
    families = ["renderlayer"]
+    targets = ["local"]

    use_published = True
    tile_assembler_plugin = "OpenPypeTileAssembler"
@@ -10,7 +10,7 @@ import openpype.api
import pyblish.api


-class MayaSubmitRemotePublishDeadline(openpype.api.Integrator):
+class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
    """Submit Maya scene to perform a local publish in Deadline.

    Publishing in Deadline can be helpful for scenes that publish very slow.
@@ -31,6 +31,7 @@ class MayaSubmitRemotePublishDeadline(openpype.api.Integrator):
    order = pyblish.api.IntegratorOrder
    hosts = ["maya"]
    families = ["publish.farm"]
+    targets = ["local"]

    def process(self, instance):
        settings = get_project_settings(os.getenv("AVALON_PROJECT"))
@@ -23,6 +23,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
    hosts = ["nuke", "nukestudio"]
    families = ["render.farm", "prerender.farm"]
    optional = True
+    targets = ["local"]

    # presets
    priority = 50
@@ -54,8 +55,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
        self._ver = re.search(r"\d+\.\d+", context.data.get("hostVersion"))
        self._deadline_user = context.data.get(
            "deadlineUser", getpass.getuser())
-        self._frame_start = int(instance.data["frameStartHandle"])
-        self._frame_end = int(instance.data["frameEndHandle"])
+        submit_frame_start = int(instance.data["frameStartHandle"])
+        submit_frame_end = int(instance.data["frameEndHandle"])

        # get output path
        render_path = instance.data['path']
@@ -81,13 +82,16 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):

        # exception for slate workflow
        if "slate" in instance.data["families"]:
-            self._frame_start -= 1
+            submit_frame_start -= 1

-        response = self.payload_submit(instance,
-                                       script_path,
-                                       render_path,
-                                       node.name()
-                                       )
+        response = self.payload_submit(
+            instance,
+            script_path,
+            render_path,
+            node.name(),
+            submit_frame_start,
+            submit_frame_end
+        )
        # Store output dir for unified publisher (filesequence)
        instance.data["deadlineSubmissionJob"] = response.json()
        instance.data["outputDir"] = os.path.dirname(
@@ -95,20 +99,22 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
        instance.data["publishJobState"] = "Suspended"

        if instance.data.get("bakingNukeScripts"):
+            # exception for slate workflow
+            if "slate" in instance.data["families"]:
+                submit_frame_start += 1
+
            for baking_script in instance.data["bakingNukeScripts"]:
                render_path = baking_script["bakeRenderPath"]
                script_path = baking_script["bakeScriptPath"]
                exe_node_name = baking_script["bakeWriteNodeName"]

-                # exception for slate workflow
-                if "slate" in instance.data["families"]:
-                    self._frame_start += 1
-
                resp = self.payload_submit(
                    instance,
                    script_path,
                    render_path,
                    exe_node_name,
+                    submit_frame_start,
+                    submit_frame_end,
                    response.json()
                )
@@ -125,13 +131,16 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
            families.insert(0, "prerender")
        instance.data["families"] = families

-    def payload_submit(self,
-                       instance,
-                       script_path,
-                       render_path,
-                       exe_node_name,
-                       responce_data=None
-                       ):
+    def payload_submit(
+        self,
+        instance,
+        script_path,
+        render_path,
+        exe_node_name,
+        start_frame,
+        end_frame,
+        responce_data=None
+    ):
        render_dir = os.path.normpath(os.path.dirname(render_path))
        script_name = os.path.basename(script_path)
        jobname = "%s - %s" % (script_name, instance.name)
@@ -191,8 +200,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):

            "Plugin": "Nuke",
            "Frames": "{start}-{end}".format(
-                start=self._frame_start,
-                end=self._frame_end
+                start=start_frame,
+                end=end_frame
            ),
            "Comment": self._comment,
@@ -292,7 +301,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
        self.log.info(json.dumps(payload, indent=4, sort_keys=True))

        # adding expected files to instance.data
-        self.expected_files(instance, render_path)
+        self.expected_files(
+            instance,
+            render_path,
+            start_frame,
+            end_frame
+        )

        self.log.debug("__ expectedFiles: `{}`".format(
            instance.data["expectedFiles"]))
        response = requests.post(self.deadline_url, json=payload, timeout=10)
@@ -338,9 +353,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
        self.log.debug("_ path: `{}`".format(path))
        return path

-    def expected_files(self,
-                       instance,
-                       path):
+    def expected_files(
+        self,
+        instance,
+        path,
+        start_frame,
+        end_frame
+    ):
        """ Create expected files in instance data
        """
        if not instance.data.get("expectedFiles"):
@@ -358,7 +377,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
            instance.data["expectedFiles"].append(path)
            return

-        for i in range(self._frame_start, (self._frame_end + 1)):
+        for i in range(start_frame, (end_frame + 1)):
            instance.data["expectedFiles"].append(
                os.path.join(dir, (file % i)).replace("\\", "/"))
@@ -103,6 +103,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
    order = pyblish.api.IntegratorOrder + 0.2
    icon = "tractor"
    deadline_plugin = "OpenPype"
+    targets = ["local"]

    hosts = ["fusion", "maya", "nuke", "celaction", "aftereffects", "harmony"]
@@ -128,7 +129,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
        "OPENPYPE_LOG_NO_COLORS",
        "OPENPYPE_USERNAME",
        "OPENPYPE_RENDER_JOB",
-        "OPENPYPE_PUBLISH_JOB"
+        "OPENPYPE_PUBLISH_JOB",
+        "OPENPYPE_MONGO"
    ]

    # custom deadline attributes
@@ -640,6 +642,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):

    def _solve_families(self, instance, preview=False):
        families = instance.get("families")

        # if we have one representation with preview tag
        # flag whole instance for review and for ftrack
        if preview:
@@ -719,10 +722,17 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
                " This may cause issues."
            ).format(source))

-        families = ["render"]
+        family = "render"
+        if "prerender" in instance.data["families"]:
+            family = "prerender"
+        families = [family]
+
        # pass review to families if marked as review
        if data.get("review"):
            families.append("review")

        instance_skeleton_data = {
-            "family": "render",
+            "family": family,
            "subset": subset,
            "families": families,
            "asset": asset,
@@ -744,11 +754,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
            "useSequenceForReview": data.get("useSequenceForReview", True)
        }

-        if "prerender" in instance.data["families"]:
-            instance_skeleton_data.update({
-                "family": "prerender",
-                "families": []})
-
        # skip locking version if we are creating v01
        instance_version = instance.data.get("version")  # take this if exists
        if instance_version != 1:
@@ -0,0 +1,288 @@
import threading
import datetime
import copy
import collections

import ftrack_api

from openpype.lib import get_datetime_data
from openpype.api import get_project_settings
from openpype_modules.ftrack.lib import ServerAction


class CreateDailyReviewSessionServerAction(ServerAction):
    """Create daily review session object per project.

    Action creates review sessions based on settings. Settings define if the
    action is enabled and what the template for a review session name is.
    Logic works in a way that if a review session with the name already
    exists the process is skipped. If the review session for the current day
    does not exist but yesterday's review exists and is empty, yesterday's
    session is renamed; otherwise a new review session is created.

    Also contains cycle creation of dailies which is triggered each morning.
    This option must be enabled in project settings. Cycle creation is also
    checked on registration of the action.
    """

    identifier = "create.daily.review.session"
    #: Action label.
    label = "OpenPype Admin"
    variant = "- Create Daily Review Session (Server)"
    #: Action description.
    description = "Manually create daily review session"
    role_list = {"Pypeclub", "Administrator", "Project Manager"}

    settings_key = "create_daily_review_session"
    default_template = "{yy}{mm}{dd}"

    def __init__(self, *args, **kwargs):
        super(CreateDailyReviewSessionServerAction, self).__init__(
            *args, **kwargs
        )

        self._cycle_timer = None
        self._last_cyle_time = None
        self._day_delta = datetime.timedelta(days=1)

    def discover(self, session, entities, event):
        """Show action only on AssetVersions."""

        valid_selection = False
        for ent in event["data"]["selection"]:
            # Ignore entities that are not tasks or projects
            if ent["entityType"].lower() in (
                "show", "task", "reviewsession", "assetversion"
            ):
                valid_selection = True
                break

        if not valid_selection:
            return False
        return self.valid_roles(session, entities, event)

    def launch(self, session, entities, event):
        project_entity = self.get_project_from_entity(entities[0], session)
        project_name = project_entity["full_name"]
        project_settings = self.get_project_settings_from_event(
            event, project_name
        )
        action_settings = self._extract_action_settings(project_settings)
        project_name_by_id = {
            project_entity["id"]: project_name
        }
        settings_by_project_id = {
            project_entity["id"]: action_settings
        }
        self._process_review_session(
            session, settings_by_project_id, project_name_by_id
        )
        return True

    def register(self, *args, **kwargs):
        """Override register to be able to trigger the creation cycle."""
        # Register server action as would be normally
        super(CreateDailyReviewSessionServerAction, self).register(
            *args, **kwargs
        )

        # Create threading timer which will trigger creation of report
        # at the 00:00:01 of next day
        # - callback will trigger another timer which will have 1 day offset
        now = datetime.datetime.now()
        # Create object of today morning
        today_morning = datetime.datetime(
            now.year, now.month, now.day, 0, 0, 1
        )
        # Add a day delta (to calculate next day date)
        next_day_morning = today_morning + self._day_delta
        # Calculate first delta in seconds for first threading timer
        first_delta = (next_day_morning - now).total_seconds()
        # Store cycle time which will be used to create next timer
        self._last_cyle_time = next_day_morning
        # Create timer thread
        self._cycle_timer = threading.Timer(first_delta, self._timer_callback)
        self._cycle_timer.start()

        self._check_review_session()

    def _timer_callback(self):
        if (
            self._cycle_timer is not None
            and self._last_cyle_time is not None
        ):
            now = datetime.datetime.now()
            while self._last_cyle_time < now:
                self._last_cyle_time = self._last_cyle_time + self._day_delta

            delay = (self._last_cyle_time - now).total_seconds()

            self._cycle_timer = threading.Timer(delay, self._timer_callback)
            self._cycle_timer.start()
            self._check_review_session()

    def _check_review_session(self):
        session = ftrack_api.Session(
            server_url=self.session.server_url,
            api_key=self.session.api_key,
            api_user=self.session.api_user,
            auto_connect_event_hub=False
        )
        project_entities = session.query(
            "select id, full_name from Project"
        ).all()
        project_names_by_id = {
            project_entity["id"]: project_entity["full_name"]
            for project_entity in project_entities
        }

        action_settings_by_project_id = self._get_action_settings(
            project_names_by_id
        )
        enabled_action_settings_by_project_id = {}
        for item in action_settings_by_project_id.items():
            project_id, action_settings = item
            if action_settings.get("cycle_enabled"):
                enabled_action_settings_by_project_id[project_id] = (
                    action_settings
                )

        if not enabled_action_settings_by_project_id:
            self.log.info((
                "There are no projects that have enabled"
                " cycle review session creation"
            ))

        else:
            self._process_review_session(
                session,
                enabled_action_settings_by_project_id,
                project_names_by_id
            )

        session.close()

    def _process_review_session(
        self, session, settings_by_project_id, project_names_by_id
    ):
        review_sessions = session.query((
            "select id, name, project_id"
            " from ReviewSession where project_id in ({})"
        ).format(self.join_query_keys(settings_by_project_id))).all()

        review_sessions_by_project_id = collections.defaultdict(list)
        for review_session in review_sessions:
            project_id = review_session["project_id"]
            review_sessions_by_project_id[project_id].append(review_session)

        # Prepare fill data for today's and yesterday's review sessions
        now = datetime.datetime.now()
        today_obj = datetime.datetime(
            now.year, now.month, now.day, 0, 0, 0
        )
        yesterday_obj = today_obj - self._day_delta

        today_fill_data = get_datetime_data(today_obj)
        yesterday_fill_data = get_datetime_data(yesterday_obj)

        # Loop through projects and try to create daily reviews
        for project_id, action_settings in settings_by_project_id.items():
            review_session_template = (
                action_settings["review_session_template"]
            ).strip() or self.default_template

            today_project_fill_data = copy.deepcopy(today_fill_data)
            yesterday_project_fill_data = copy.deepcopy(yesterday_fill_data)
            project_name = project_names_by_id[project_id]
            today_project_fill_data["project_name"] = project_name
            yesterday_project_fill_data["project_name"] = project_name

            today_session_name = self._fill_review_template(
                review_session_template, today_project_fill_data
            )
            yesterday_session_name = self._fill_review_template(
                review_session_template, yesterday_project_fill_data
            )
            # Skip if today's session name could not be filled
            if not today_session_name:
                continue

            # Find matching review session
            project_review_sessions = review_sessions_by_project_id[project_id]
            todays_session = None
            yesterdays_session = None
            for review_session in project_review_sessions:
                session_name = review_session["name"]
                if session_name == today_session_name:
                    todays_session = review_session
                    break
                elif session_name == yesterday_session_name:
                    yesterdays_session = review_session

            # Skip if today's session already exists
            if todays_session is not None:
                self.log.debug((
                    "Todays ReviewSession \"{}\""
                    " in project \"{}\" already exists"
                ).format(today_session_name, project_name))
                continue

            # Check if there is yesterday's session and it is empty
            # - in that case just rename it
            if (
                yesterdays_session is not None
                and len(yesterdays_session["review_session_objects"]) == 0
            ):
                self.log.debug((
                    "Renaming yesterdays empty review session \"{}\" to \"{}\""
                    " in project \"{}\""
                ).format(
                    yesterday_session_name, today_session_name, project_name
                ))
                yesterdays_session["name"] = today_session_name
                session.commit()
                continue

            # Create new review session with new name
            self.log.debug((
                "Creating new review session \"{}\" in project \"{}\""
            ).format(today_session_name, project_name))
            session.create("ReviewSession", {
                "project_id": project_id,
                "name": today_session_name
            })
            session.commit()

    def _get_action_settings(self, project_names_by_id):
        settings_by_project_id = {}
        for project_id, project_name in project_names_by_id.items():
            project_settings = get_project_settings(project_name)
            action_settings = self._extract_action_settings(project_settings)
            settings_by_project_id[project_id] = action_settings
        return settings_by_project_id

    def _extract_action_settings(self, project_settings):
        return (
            project_settings
            .get("ftrack", {})
            .get(self.settings_frack_subkey, {})
            .get(self.settings_key)
        ) or {}

    def _fill_review_template(self, template, data):
        output = None
        try:
            output = template.format(**data)
        except Exception:
            self.log.warning(
                (
                    "Failed to fill review session template {} with data {}"
                ).format(template, data),
                exc_info=True
            )
        return output


def register(session):
    '''Register plugin. Called when used as a plugin.'''
    CreateDailyReviewSessionServerAction(session).register()
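The `register()` override above schedules the first cycle for 00:00:01 of the next day and reschedules from the callback. A self-contained sketch of just that delay computation (the no-op callback is a placeholder):

```python
import datetime
import threading


def seconds_until_next_day_start():
    # Mirrors the computation in register(): target 00:00:01 tomorrow.
    now = datetime.datetime.now()
    today_morning = datetime.datetime(now.year, now.month, now.day, 0, 0, 1)
    next_day_morning = today_morning + datetime.timedelta(days=1)
    return (next_day_morning - now).total_seconds()


# Fire a placeholder callback at the start of the next day.
timer = threading.Timer(seconds_until_next_day_start(), lambda: None)
timer.daemon = True
timer.start()
```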
@@ -140,9 +140,9 @@ class CustomAttributes(BaseAction):
    identifier = 'create.update.attributes'
    #: Action label.
    label = "OpenPype Admin"
-    variant = '- Create/Update Avalon Attributes'
+    variant = '- Create/Update Custom Attributes'
    #: Action description.
-    description = 'Creates Avalon/Mongo ID for double check'
+    description = 'Creates required custom attributes in ftrack'
    icon = statics_icon("ftrack", "action_icons", "OpenPypeAdmin.svg")
    settings_key = "create_update_attributes"
@@ -2,6 +2,7 @@ import os
import time
import datetime
import threading

from Qt import QtCore, QtWidgets, QtGui

import ftrack_api
@@ -48,6 +49,9 @@ class FtrackTrayWrapper:
        self.widget_login.activateWindow()
        self.widget_login.raise_()

+    def show_ftrack_browser(self):
+        QtGui.QDesktopServices.openUrl(self.module.ftrack_url)
+
    def validate(self):
        validation = False
        cred = credentials.get_credentials()
@@ -284,6 +288,13 @@ class FtrackTrayWrapper:
        tray_server_menu.addAction(self.action_server_stop)

        self.tray_server_menu = tray_server_menu

+        # Ftrack Browser
+        browser_open = QtWidgets.QAction("Open Ftrack...", tray_menu)
+        browser_open.triggered.connect(self.show_ftrack_browser)
+        tray_menu.addAction(browser_open)
+        self.browser_open = browser_open
+
        self.bool_logged = False
        self.set_menu_visibility()
@@ -85,7 +85,7 @@ def update_op_assets(
        # Frame in, fallback on 0
        frame_in = int(item_data.get("frame_in") or 0)
        item_data["frameStart"] = frame_in
-        item_data.pop("frame_in")
+        item_data.pop("frame_in", None)
        # Frame out, fallback on frame_in + duration
        frames_duration = int(item.get("nb_frames") or 1)
        frame_out = (

@@ -94,7 +94,7 @@ def update_op_assets(
            else frame_in + frames_duration
        )
        item_data["frameEnd"] = int(frame_out)
-        item_data.pop("frame_out")
+        item_data.pop("frame_out", None)
        # Fps, fallback to project's value when entity fps is deleted
        if not item_data.get("fps") and item_doc["data"].get("fps"):
            item_data["fps"] = project_doc["data"]["fps"]
(new binary file: openpype/modules/sync_server/resources/disabled.png, 2.3 KiB)
@@ -280,14 +280,13 @@ class SyncServerThread(threading.Thread):
            while self.is_running and not self.module.is_paused():
                try:
-                    import time
-                    start_time = None
+                    start_time = time.time()
                    self.module.set_sync_project_settings()  # clean cache
-                    for collection, preset in self.module.sync_project_settings.\
-                            items():
-                        if collection not in self.module.get_enabled_projects():
-                            continue
+                    collection = None
+                    enabled_projects = self.module.get_enabled_projects()
+                    for collection in enabled_projects:
+                        preset = self.module.sync_project_settings[collection]

-                        start_time = time.time()
                        local_site, remote_site = self._working_sites(collection)
                        if not all([local_site, remote_site]):
                            continue
@@ -926,9 +926,22 @@ class SyncServerModule(OpenPypeModule, ITrayModule):

        return enabled_projects

-    def is_project_enabled(self, project_name):
+    def is_project_enabled(self, project_name, single=False):
+        """Checks if 'project_name' is enabled for syncing.
+
+        'get_sync_project_setting' is potentially expensive operation (pulls
+        settings for all projects if cached version is not available), using
+        project_settings for specific project should be faster.
+
+        Args:
+            project_name (str)
+            single (bool): use 'get_project_settings' method
+        """
        if self.enabled:
-            project_settings = self.get_sync_project_setting(project_name)
+            if single:
+                project_settings = get_project_settings(project_name)
+                project_settings = \
+                    self._parse_sync_settings_from_settings(project_settings)
+            else:
+                project_settings = self.get_sync_project_setting(project_name)
            if project_settings and project_settings.get("enabled"):
                return True
        return False
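Hedged usage sketch for the extended signature above; `module` stands for an already-constructed `SyncServerModule` instance (an assumption of this sketch):

```python
def check_enabled(module, project_name):
    # Default: cached, bulk lookup (may pull settings for every project).
    bulk = module.is_project_enabled(project_name)
    # Cheaper single-project lookup via get_project_settings.
    single = module.is_project_enabled(project_name, single=True)
    return bulk, single
```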
@@ -1026,21 +1039,13 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
        """
        self.server_init()

        from .tray.app import SyncServerWindow
        self.widget = SyncServerWindow(self)

    def server_init(self):
        """Actual initialization of Sync Server."""
-        # import only in tray or Python3, because of Python2 hosts
-        from .sync_server import SyncServerThread
-
        if not self.enabled:
            return

-        enabled_projects = self.get_enabled_projects()
-        if not enabled_projects:
-            self.enabled = False
-            return
+        # import only in tray or Python3, because of Python2 hosts
+        from .sync_server import SyncServerThread

        self.lock = threading.Lock()
@@ -1060,7 +1065,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
        self.server_start()

    def server_start(self):
-        if self.sync_project_settings and self.enabled:
+        if self.enabled:
            self.sync_server_thread.start()
        else:
            log.info("No presets or active providers. " +
@@ -1851,6 +1856,9 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
        Returns:
            (int): in seconds
        """
+        if not project_name:
+            return 60
+
        ld = self.sync_project_settings[project_name]["config"]["loop_delay"]
        return int(ld)
@@ -46,6 +46,14 @@ class SyncServerWindow(QtWidgets.QDialog):

        left_column_layout.addWidget(self.pause_btn)

+        checkbox = QtWidgets.QCheckBox("Show only enabled", self)
+        checkbox.setStyleSheet("QCheckBox{spacing: 5px;"
+                               "padding:5px 5px 5px 5px;}")
+        checkbox.setChecked(True)
+        self.show_only_enabled_chk = checkbox
+
+        left_column_layout.addWidget(self.show_only_enabled_chk)
+
        repres = SyncRepresentationSummaryWidget(
            sync_server,
            project=self.projects.current_project,
@@ -86,15 +94,27 @@ class SyncServerWindow(QtWidgets.QDialog):
        repres.message_generated.connect(self._update_message)
        self.projects.message_generated.connect(self._update_message)

+        self.show_only_enabled_chk.stateChanged.connect(
+            self._on_enabled_change
+        )
+
        self.representationWidget = repres

+    def showEvent(self, event):
+        self.representationWidget.set_project(self.projects.current_project)
+        self.projects.refresh()
+        self._set_running(True)
+        super().showEvent(event)
+
+    def closeEvent(self, event):
+        self._set_running(False)
+        super().closeEvent(event)
+
    def _on_project_change(self):
        if self.projects.current_project is None:
            return

-        self.representationWidget.table_view.model().set_project(
-            self.projects.current_project
-        )
+        self.representationWidget.set_project(self.projects.current_project)

        project_name = self.projects.current_project
        if not self.sync_server.get_sync_project_setting(project_name):
@@ -103,16 +123,12 @@ class SyncServerWindow(QtWidgets.QDialog):
            self.projects.refresh()
            return

-    def showEvent(self, event):
-        self.representationWidget.model.set_project(
-            self.projects.current_project)
-        self.projects.refresh()
-        self._set_running(True)
-        super().showEvent(event)
-
-    def closeEvent(self, event):
-        self._set_running(False)
-        super().closeEvent(event)
+    def _on_enabled_change(self):
+        """Called when enabled projects only checkbox is toggled."""
+        self.projects.show_only_enabled = \
+            self.show_only_enabled_chk.isChecked()
+        self.projects.refresh()
+        self.representationWidget.set_project(None)

    def _set_running(self, running):
        self.representationWidget.model.is_running = running
@@ -52,7 +52,8 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):

        All queries should go through this (because of collection).
        """
-        return self.sync_server.connection.database[self.project]
+        if self.project:
+            return self.sync_server.connection.database[self.project]

    @property
    def project(self):
@@ -150,6 +151,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
    @property
    def can_edit(self):
        """Returns true if some site is user local site, eg. could edit"""
+        if not self.project:
+            return False
+
        return get_local_site_id() in (self.active_site, self.remote_site)

    def get_column(self, index):
@@ -190,7 +194,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
            actually queried (scrolled a couple of times to list more
            than single page of records)
        """
-        if self.is_editing or not self.is_running:
+        if self.is_editing or not self.is_running or not self.project:
            return
        self.refresh_started.emit()
        self.beginResetModel()
@@ -232,6 +236,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
            more records in DB than loaded.
        """
        log.debug("fetchMore")
+        if not self.dbcon:
+            return
+
        items_to_fetch = min(self._total_records - self._rec_loaded,
                             self.PAGE_SIZE)
        self.query = self.get_query(self._rec_loaded)
@@ -286,9 +293,10 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
        #     replace('False', 'false').\
        #     replace('True', 'true').replace('None', 'null'))

-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)
+        if self.dbcon:
+            representations = self.dbcon.aggregate(pipeline=self.query,
+                                                   allowDiskUse=True)
+            self.refresh(representations)

    def set_word_filter(self, word_filter):
        """
@@ -378,9 +386,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
            project (str): name of project
        """
        self._project = project
-        self.sync_server.set_sync_project_settings()
        # project might have been deactivated in the meantime
        if not self.sync_server.get_sync_project_setting(project):
+            self._data = {}
            return

        self.active_site = self.sync_server.get_active_site(self.project)
@@ -509,25 +517,23 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):

        self._word_filter = None

-        if not self._project or self._project == lib.DUMMY_PROJECT:
-            return
-
        self.sync_server = sync_server
        # TODO think about admin mode
+        self.sort_criteria = self.DEFAULT_SORT
+
+        self.timer = QtCore.QTimer()
+        if not self._project or self._project == lib.DUMMY_PROJECT:
+            self.active_site = sync_server.DEFAULT_SITE
+            self.remote_site = sync_server.DEFAULT_SITE
+            return

        # this is for regular user, always only single local and single remote
        self.active_site = self.sync_server.get_active_site(self.project)
        self.remote_site = self.sync_server.get_remote_site(self.project)

-        self.sort_criteria = self.DEFAULT_SORT
-
        self.query = self.get_query()
        self.default_query = list(self.get_query())

-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)
-
-        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.tick)
        self.timer.start(self.REFRESH_SEC)
@@ -1003,9 +1009,6 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
        self.sort_criteria = self.DEFAULT_SORT

        self.query = self.get_query()
-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)

        self.timer = QtCore.QTimer()
        self.timer.timeout.connect(self.tick)
@@ -47,6 +47,7 @@ class SyncProjectListWidget(QtWidgets.QWidget):
    message_generated = QtCore.Signal(str)

    refresh_msec = 10000
+    show_only_enabled = True

    def __init__(self, sync_server, parent):
        super(SyncProjectListWidget, self).__init__(parent)
@@ -122,11 +123,15 @@ class SyncProjectListWidget(QtWidgets.QWidget):
        self._model_reset = False

        selected_item = None
-        for project_name in self.sync_server.sync_project_settings.\
-                keys():
+        sync_settings = self.sync_server.sync_project_settings
+        for project_name in sync_settings.keys():
            if self.sync_server.is_paused() or \
                    self.sync_server.is_project_paused(project_name):
                icon = self._get_icon("paused")
+            elif not sync_settings[project_name]["enabled"]:
+                if self.show_only_enabled:
+                    continue
+                icon = self._get_icon("disabled")
            else:
                icon = self._get_icon("synced")
@@ -139,12 +144,12 @@ class SyncProjectListWidget(QtWidgets.QWidget):
            if self.current_project == project_name:
                selected_item = item

+        if model.item(0) is None:
+            return
+
        if selected_item:
            selected_index = model.indexFromItem(selected_item)

-        if len(self.sync_server.sync_project_settings.keys()) == 0:
-            model.appendRow(QtGui.QStandardItem(lib.DUMMY_PROJECT))
-
        if not self.current_project:
            self.current_project = model.item(0).data(QtCore.Qt.DisplayRole)
@ -248,6 +253,9 @@ class _SyncRepresentationWidget(QtWidgets.QWidget):
|
|||
active_changed = QtCore.Signal() # active index changed
|
||||
message_generated = QtCore.Signal(str)
|
||||
|
||||
def set_project(self, project):
|
||||
self.model.set_project(project)
|
||||
|
||||
def _selection_changed(self, _new_selected, _all_selected):
|
||||
idxs = self.selection_model.selectedRows()
|
||||
self._selected_ids = set()
|
||||
|
|
@ -581,7 +589,6 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
|
|||
super(SyncRepresentationSummaryWidget, self).__init__(parent)
|
||||
|
||||
self.sync_server = sync_server
|
||||
|
||||
self._selected_ids = set() # keep last selected _id
|
||||
|
||||
txt_filter = QtWidgets.QLineEdit()
|
||||
|
|
@ -625,7 +632,6 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
|
|||
column = table_view.model().get_header_index("priority")
|
||||
priority_delegate = delegates.PriorityDelegate(self)
|
||||
table_view.setItemDelegateForColumn(column, priority_delegate)
|
||||
|
||||
layout = QtWidgets.QVBoxLayout(self)
|
||||
layout.setContentsMargins(0, 0, 0, 0)
|
||||
layout.addLayout(top_bar_layout)
|
||||
|
|
@ -633,21 +639,16 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
|
|||
|
||||
self.table_view = table_view
|
||||
self.model = model
|
||||
|
||||
horizontal_header = HorizontalHeader(self)
|
||||
|
||||
table_view.setHorizontalHeader(horizontal_header)
|
||||
table_view.setSortingEnabled(True)
|
||||
|
||||
for column_name, width in self.default_widths:
|
||||
idx = model.get_header_index(column_name)
|
||||
table_view.setColumnWidth(idx, width)
|
||||
|
||||
table_view.doubleClicked.connect(self._double_clicked)
|
||||
self.txt_filter.textChanged.connect(lambda: model.set_word_filter(
|
||||
self.txt_filter.text()))
|
||||
table_view.customContextMenuRequested.connect(self._on_context_menu)
|
||||
|
||||
model.refresh_started.connect(self._save_scrollbar)
|
||||
model.refresh_finished.connect(self._set_scrollbar)
|
||||
model.modelReset.connect(self._set_selection)
|
||||
|
|
@ -963,7 +964,6 @@ class HorizontalHeader(QtWidgets.QHeaderView):
|
|||
super(HorizontalHeader, self).__init__(QtCore.Qt.Horizontal, parent)
|
||||
self._parent = parent
|
||||
self.checked_values = {}
|
||||
|
||||
self.setModel(self._parent.model)
|
||||
|
||||
self.setSectionsClickable(True)
|
||||
|
|
|
|||
|
|
@ -7,6 +7,7 @@ from openpype_interfaces import (
|
|||
ITrayService,
|
||||
ILaunchHookPaths
|
||||
)
|
||||
from openpype.lib.events import register_event_callback
|
||||
from openpype.pipeline import AvalonMongoDB
|
||||
|
||||
from .exceptions import InvalidContextError
|
||||
|
|
@ -422,3 +423,20 @@ class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths):
|
|||
}
|
||||
|
||||
return requests.post(rest_api_url, json=data)
|
||||
|
||||
def on_host_install(self, host, host_name, project_name):
|
||||
self.log.debug("Installing task changed callback")
|
||||
register_event_callback("taskChanged", self._on_host_task_change)
|
||||
|
||||
def _on_host_task_change(self, event):
|
||||
project_name = event["project_name"]
|
||||
asset_name = event["asset_name"]
|
||||
task_name = event["task_name"]
|
||||
self.log.debug((
|
||||
"Sending message that timer should change to"
|
||||
" Project: {} Asset: {} Task: {}"
|
||||
).format(project_name, asset_name, task_name))
|
||||
|
||||
self.start_timer_with_webserver(
|
||||
project_name, asset_name, task_name, self.log
|
||||
)
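The hunk above is the consumer side of the new host-install hook: TimersManager registers a callback for the "taskChanged" topic and restarts its timer whenever a host emits that event. A minimal, self-contained sketch of the registry pattern (the names here are stand-ins; the real implementation lives in `openpype.lib.events`):

```python
# Minimal sketch of the event-callback pattern (hypothetical stand-ins).
_callbacks = {}

def register_event_callback(topic, callback):
    # Store callbacks per topic so emit_event can fan out to all of them.
    _callbacks.setdefault(topic, []).append(callback)

def emit_event(topic, data):
    # Call every callback registered for the topic with the event payload.
    for callback in _callbacks.get(topic, []):
        callback(data)

def _on_host_task_change(event):
    print("Timer should change to {}/{}/{}".format(
        event["project_name"], event["asset_name"], event["task_name"]))

register_event_callback("taskChanged", _on_host_task_change)
emit_event("taskChanged", {
    "project_name": "demo", "asset_name": "sh010", "task_name": "anim"})
```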

@@ -16,9 +16,7 @@ from openpype.modules import load_modules, ModulesManager
from openpype.settings import get_project_settings
from openpype.lib import (
    Anatomy,
    register_event_callback,
    filter_pyblish_plugins,
    change_timer_to_current_context,
)

from . import (
@@ -33,6 +31,9 @@ from . import (
_is_installed = False
_registered_root = {"_": ""}
_registered_host = {"_": None}
# Keep modules manager (and its modules) in memory
# - that gives option to register modules' callbacks
_modules_manager = None

log = logging.getLogger(__name__)

@@ -44,6 +45,23 @@ PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")


def _get_modules_manager():
    """Get or create modules manager for host installation.

    This is not meant for public usage. The reason is to keep modules in
    the memory of the process to be able to trigger their event callbacks
    if they need any.

    Returns:
        ModulesManager: Manager wrapping discovered modules.
    """

    global _modules_manager
    if _modules_manager is None:
        _modules_manager = ModulesManager()
    return _modules_manager


def register_root(path):
    """Register currently active root"""
    log.info("Registering root: %s" % path)
@@ -74,6 +92,7 @@ def install_host(host):
    _is_installed = True

    legacy_io.install()
    modules_manager = _get_modules_manager()

    missing = list()
    for key in ("AVALON_PROJECT", "AVALON_ASSET"):
@@ -95,8 +114,6 @@ def install_host(host):

    register_host(host)

    register_event_callback("taskChanged", _on_task_change)

    def modified_emit(obj, record):
        """Method replacing `emit` in Pyblish's MessageHandler."""
        record.msg = record.getMessage()
@@ -112,7 +129,14 @@ def install_host(host):
    else:
        pyblish.api.register_target("local")

    install_openpype_plugins()
    project_name = os.environ.get("AVALON_PROJECT")
    host_name = os.environ.get("AVALON_APP")

    # Give option to handle host installation
    for module in modules_manager.get_enabled_modules():
        module.on_host_install(host, host_name, project_name)

    install_openpype_plugins(project_name, host_name)


def install_openpype_plugins(project_name=None, host_name=None):
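After targets are registered, `install_host` now resolves the project and host names from the environment and gives every enabled module a chance to react through `on_host_install`. A hedged sketch of that hook loop with illustrative module classes:

```python
# Sketch of the "host install hook" loop; class and host names are
# illustrative, not the real OpenPype module interfaces.
class BaseModule:
    enabled = True

    def on_host_install(self, host, host_name, project_name):
        # Default is a no-op; modules override this when they care.
        pass

class TimersModule(BaseModule):
    def on_host_install(self, host, host_name, project_name):
        print("Timers module hooked into {} / {}".format(
            host_name, project_name))

modules = [BaseModule(), TimersModule()]
for module in modules:
    if module.enabled:
        module.on_host_install(object(), "nuke", "demo_project")
```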
@@ -124,7 +148,7 @@ def install_openpype_plugins(project_name=None, host_name=None):
    pyblish.api.register_discovery_filter(filter_pyblish_plugins)
    register_loader_plugin_path(LOAD_PATH)

    modules_manager = ModulesManager()
    modules_manager = _get_modules_manager()
    publish_plugin_dirs = modules_manager.collect_plugin_paths()["publish"]
    for path in publish_plugin_dirs:
        pyblish.api.register_plugin_path(path)
@@ -168,10 +192,6 @@ def install_openpype_plugins(project_name=None, host_name=None):
        register_inventory_action(path)


def _on_task_change():
    change_timer_to_current_context()


def uninstall_host():
    """Undo all of what `install()` did"""
    host = registered_host()

@@ -829,9 +829,10 @@ class CreateContext:
        discover_result = publish_plugins_discover()
        publish_plugins = discover_result.plugins

        targets = pyblish.logic.registered_targets() or ["default"]
        targets = set(pyblish.logic.registered_targets())
        targets.add("default")
        plugins_by_targets = pyblish.logic.plugins_by_targets(
            publish_plugins, targets
            publish_plugins, list(targets)
        )
        # Collect plugins that can have attribute definitions
        for plugin in publish_plugins:
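The CreateContext change is subtle: `registered_targets() or ["default"]` dropped "default" entirely as soon as any target was registered, while the new set-based code always keeps it. Isolated in plain Python:

```python
# Behavioural difference between the old and new target expressions:
# a single registered target ("farm") silently dropped "default" before.
registered = ["farm"]

old_targets = registered or ["default"]  # -> ["farm"]
new_targets = set(registered)
new_targets.add("default")               # -> {"farm", "default"}

print(old_targets, sorted(new_targets))
```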

@@ -144,3 +144,12 @@ def parenthood(*args, **kwargs):
@requires_install
def bulk_write(*args, **kwargs):
    return _connection_object.bulk_write(*args, **kwargs)


@requires_install
def active_project(*args, **kwargs):
    return _connection_object.active_project(*args, **kwargs)


def current_project(*args, **kwargs):
    return Session.get("AVALON_PROJECT")

@@ -199,6 +199,10 @@ class AvalonMongoDB:
        """Return the name of the active project"""
        return self.Session["AVALON_PROJECT"]

    def current_project(self):
        """Currently set project in Session without triggering installation."""
        return self.Session.get("AVALON_PROJECT")

    @requires_install
    @auto_reconnect
    def projects(self, projection=None, only_active=True):
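`current_project` deliberately bypasses the `requires_install` guard that `active_project` carries, so UI code can peek at the session before a connection exists. A simplified stand-in showing why that distinction matters:

```python
# Simplified stand-in for the requires_install decorator guarding
# connected-only calls (not the real AvalonMongoDB implementation).
import functools

def requires_install(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        if not self.installed:
            raise RuntimeError(
                "'{}' requires install()".format(func.__name__))
        return func(self, *args, **kwargs)
    return wrapper

class Connection:
    def __init__(self):
        self.installed = False
        self.session = {"AVALON_PROJECT": "demo"}

    @requires_install
    def active_project(self):
        return self.session["AVALON_PROJECT"]

    def current_project(self):
        # Safe to call any time; only reads the in-memory session.
        return self.session.get("AVALON_PROJECT")

conn = Connection()
print(conn.current_project())  # "demo"
# conn.active_project() would raise until install() sets installed = True.
```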

@@ -18,16 +18,6 @@ class InstancePlugin(pyblish.api.InstancePlugin):
        super(InstancePlugin, cls).process(cls, *args, **kwargs)


class Integrator(InstancePlugin):
    """Integrator base class.

    Wraps pyblish instance plugin. Targets set to "local" which means all
    integrators should run on "local" publishes, by default.
    "remote" targets could be used for integrators that should run externally.
    """
    targets = ["local"]


class Extractor(InstancePlugin):
    """Extractor base class.

@@ -38,8 +28,6 @@ class Extractor(InstancePlugin):

    """

    targets = ["local"]

    order = 2.0

    def staging_dir(self, instance):
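Removing the Integrator base class, and the `targets = ["local"]` attribute on Extractor, ties into the targets change above: pyblish only runs plugins whose targets intersect the registered ones. A small demonstration (assumes pyblish-base is installed; the plugin classes are illustrative):

```python
# Filtering plugins by target with pyblish's own logic module.
import pyblish.api
import pyblish.logic

class LocalOnly(pyblish.api.InstancePlugin):
    targets = ["local"]

class Anywhere(pyblish.api.InstancePlugin):
    targets = ["default"]  # pyblish's implicit default target

plugins = [LocalOnly, Anywhere]
# Only plugins whose targets intersect the given list are returned.
print(pyblish.logic.plugins_by_targets(plugins, ["local", "default"]))
```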

@@ -1,5 +1,3 @@
import os
import getpass
import pyblish.api
from openpype.lib import get_openpype_username

@@ -3,22 +3,20 @@ import os
import pyblish.api
from openpype.lib import (
    get_ffmpeg_tool_path,
    get_oiio_tools_path,
    is_oiio_supported,

    run_subprocess,
    path_to_subprocess_arg,

    get_transcode_temp_directory,
    convert_input_paths_for_ffmpeg,
    should_convert_for_ffmpeg
    execute,
)

import shutil


class ExtractJpegEXR(pyblish.api.InstancePlugin):
class ExtractThumbnail(pyblish.api.InstancePlugin):
    """Create jpg thumbnail from sequence using ffmpeg"""

    label = "Extract Jpeg EXR"
    label = "Extract Thumbnail"
    order = pyblish.api.ExtractorOrder
    families = [
        "imagesequence", "render", "render2d",
@@ -49,7 +47,6 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
            return

        filtered_repres = self._get_filtered_repres(instance)

        for repre in filtered_repres:
            repre_files = repre["files"]
            if not isinstance(repre_files, (list, tuple)):
@@ -62,78 +59,37 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):

            full_input_path = os.path.join(stagingdir, input_file)
            self.log.info("input {}".format(full_input_path))

            do_convert = should_convert_for_ffmpeg(full_input_path)
            # If result is None the requirement of conversion can't be
            # determined
            if do_convert is None:
                self.log.info((
                    "Can't determine if representation requires conversion."
                    " Skipped."
                ))
                continue

            # Do conversion if needed
            # - change staging dir of source representation
            # - must be set back after output definitions processing
            convert_dir = None
            if do_convert:
                convert_dir = get_transcode_temp_directory()
                filename = os.path.basename(full_input_path)
                convert_input_paths_for_ffmpeg(
                    [full_input_path],
                    convert_dir,
                    self.log
                )
                full_input_path = os.path.join(convert_dir, filename)

            filename = os.path.splitext(input_file)[0]
            if not filename.endswith('.'):
                filename += "."
            jpeg_file = filename + "jpg"
            full_output_path = os.path.join(stagingdir, jpeg_file)

            self.log.info("output {}".format(full_output_path))
            thumbnail_created = False
            # Try to use FFMPEG if OIIO is not supported (for cases when
            # oiiotool isn't available)
            if not is_oiio_supported():
                thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa
            else:
                # Check if the file can be read by OIIO
                oiio_tool_path = get_oiio_tools_path()
                args = [
                    oiio_tool_path, "--info", "-i", full_input_path
                ]
                returncode = execute(args, silent=True)
                # If the input can be read by OIIO then use the OIIO method
                # for conversion, otherwise use ffmpeg
                if returncode == 0:
                    self.log.info("Input can be read by OIIO, converting with oiiotool now.")  # noqa
                    thumbnail_created = self.create_thumbnail_oiio(full_input_path, full_output_path)  # noqa
                else:
                    self.log.info("Converting with FFMPEG because input can't be read by OIIO.")  # noqa
                    thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa

            ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
            ffmpeg_args = self.ffmpeg_args or {}

            jpeg_items = []
            jpeg_items.append(path_to_subprocess_arg(ffmpeg_path))
            # override file if already exists
            jpeg_items.append("-y")
            # use same input args like with mov
            jpeg_items.extend(ffmpeg_args.get("input") or [])
            # input file
            jpeg_items.append("-i {}".format(
                path_to_subprocess_arg(full_input_path)
            ))
            # output arguments from presets
            jpeg_items.extend(ffmpeg_args.get("output") or [])

            # If it's a movie file, we just want one frame.
            if repre["ext"] == "mov":
                jpeg_items.append("-vframes 1")

            # output file
            jpeg_items.append(path_to_subprocess_arg(full_output_path))

            subprocess_command = " ".join(jpeg_items)

            # run subprocess
            self.log.debug("{}".format(subprocess_command))
            try:  # temporary until oiiotool is supported cross platform
                run_subprocess(
                    subprocess_command, shell=True, logger=self.log
                )
            except RuntimeError as exp:
                if "Compression" in str(exp):
                    self.log.debug(
                        "Unsupported compression on input files. Skipping!!!"
                    )
                    return
                self.log.warning("Conversion crashed", exc_info=True)
                raise
            # Skip the rest of the process if the thumbnail wasn't created
            if not thumbnail_created:
                self.log.warning("Thumbnail has not been created.")
                return

            new_repre = {
                "name": "thumbnail",
@@ -145,16 +101,11 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
            }

            # adding representation
            self.log.debug("Adding: {}".format(new_repre))
            self.log.debug(
                "Adding thumbnail representation: {}".format(new_repre)
            )
            instance.data["representations"].append(new_repre)

            # Cleanup temp folder
            if convert_dir is not None and os.path.exists(convert_dir):
                shutil.rmtree(convert_dir)

            # Create only one representation with name 'thumbnail'
            # TODO maybe handle a way to decide from which representation
            # the thumbnail will be created
            # There is no need to create more than one thumbnail
            break

    def _get_filtered_repres(self, instance):
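The rewritten plugin probes the input with `oiiotool --info` and only converts with OIIO when the probe succeeds, otherwise it falls back to ffmpeg. A condensed sketch of that strategy, assuming both tools are on PATH (OpenPype resolves the real binaries via `get_oiio_tools_path` and `get_ffmpeg_tool_path`):

```python
# Probe-then-fallback thumbnail creation; "oiiotool"/"ffmpeg" on PATH is
# an assumption for this sketch.
import subprocess

def probe_with_oiio(path):
    # `oiiotool --info -i <file>` exits 0 only when the file is readable.
    proc = subprocess.run(
        ["oiiotool", "--info", "-i", path],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL
    )
    return proc.returncode == 0

def create_thumbnail(src, dst):
    if probe_with_oiio(src):
        # OIIO reads the input natively (e.g. multi-part EXR).
        args = ["oiiotool", "-a", src, "-o", dst]
    else:
        # Fallback: grab a single frame with ffmpeg.
        args = ["ffmpeg", "-y", "-i", src, "-vframes", "1", dst]
    return subprocess.run(args).returncode == 0
```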
@@ -175,3 +126,61 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):

            filtered_repres.append(repre)
        return filtered_repres

    def create_thumbnail_oiio(self, src_path, dst_path):
        self.log.info("outputting {}".format(dst_path))
        oiio_tool_path = get_oiio_tools_path()
        oiio_cmd = [oiio_tool_path, "-a",
                    src_path, "-o",
                    dst_path
                    ]
        subprocess_exr = " ".join(oiio_cmd)
        self.log.info(f"running: {subprocess_exr}")
        try:
            run_subprocess(oiio_cmd, logger=self.log)
            return True
        except Exception:
            self.log.warning(
                "Failed to create thumbnail using oiiotool",
                exc_info=True
            )
            return False

    def create_thumbnail_ffmpeg(self, src_path, dst_path):
        self.log.info("outputting {}".format(dst_path))

        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
        ffmpeg_args = self.ffmpeg_args or {}

        jpeg_items = []
        jpeg_items.append(path_to_subprocess_arg(ffmpeg_path))
        # override file if already exists
        jpeg_items.append("-y")
        # flag for large file sizes
        max_int = 2147483647
        jpeg_items.append("-analyzeduration {}".format(max_int))
        jpeg_items.append("-probesize {}".format(max_int))
        # use same input args like with mov
        jpeg_items.extend(ffmpeg_args.get("input") or [])
        # input file
        jpeg_items.append("-i {}".format(
            path_to_subprocess_arg(src_path)
        ))
        # output arguments from presets
        jpeg_items.extend(ffmpeg_args.get("output") or [])
        # we just want one frame from movie files
        jpeg_items.append("-vframes 1")
        # output file
        jpeg_items.append(path_to_subprocess_arg(dst_path))
        subprocess_command = " ".join(jpeg_items)
        try:
            run_subprocess(
                subprocess_command, shell=True, logger=self.log
            )
            return True
        except Exception:
            self.log.warning(
                "Failed to create thumbnail using ffmpeg",
                exc_info=True
            )
            return False
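For reference, the command string `create_thumbnail_ffmpeg` assembles boils down to roughly the following; the paths are illustrative and the real ones come from the tool resolver and staging directory:

```python
# Approximate shape of the assembled ffmpeg command (illustrative paths).
src = "/tmp/render/shot010.0001.exr"  # hypothetical input
dst = "/tmp/render/shot010.jpg"       # hypothetical output
max_int = 2147483647                  # large analyzeduration/probesize

cmd = (
    '"ffmpeg" -y'
    " -analyzeduration {m} -probesize {m}"
    ' -i "{src}" -vframes 1 "{dst}"'
).format(m=max_int, src=src, dst=dst)

# The plugin runs this with shell=True through run_subprocess; a plain
# equivalent would be: subprocess.run(cmd, shell=True, check=True)
print(cmd)
```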

@@ -940,9 +940,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        families += current_families

        # create relative source path for DB
        if "source" in instance.data:
            source = instance.data["source"]
        else:
        source = instance.data.get("source")
        if not source:
            source = context.data["currentFile"]
        anatomy = instance.context.data["anatomy"]
        source = self.get_rootless_path(anatomy, source)

@@ -83,9 +83,6 @@
        "maya": [
            ".*([Bb]eauty).*"
        ],
        "nuke": [
            ".*"
        ],
        "aftereffects": [
            ".*"
        ],
@@ -98,4 +95,4 @@
            }
        }
    }
}
}

@@ -116,6 +116,15 @@
                "Administrator",
                "Project manager"
            ]
        },
        "create_daily_review_session": {
            "enabled": true,
            "role_list": [
                "Administrator",
                "Project Manager"
            ],
            "cycle_enabled": false,
            "review_session_template": "{yy}{mm}{dd}"
        }
    },
    "user_handlers": {

@@ -33,7 +33,7 @@
        "enabled": false,
        "profiles": []
    },
    "ExtractJpegEXR": {
    "ExtractThumbnail": {
        "enabled": true,
        "ffmpeg_args": {
            "input": [

@@ -166,6 +166,9 @@
    },
    "ExtractThumbnail": {
        "enabled": true,
        "use_rendered": true,
        "bake_viewer_process": true,
        "bake_viewer_input_process": true,
        "nodes": {
            "Reformat": [
                [

@@ -59,13 +59,11 @@
    "applications": {
        "write_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ],
        "read_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ]
    }
},
@@ -73,25 +71,21 @@
    "tools_env": {
        "write_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ],
        "read_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ]
    },
    "avalon_mongo_id": {
        "write_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ],
        "read_security_roles": [
            "API",
            "Administrator",
            "Pypeclub"
            "Administrator"
        ]
    },
    "fps": {

@@ -388,6 +388,44 @@
                "object_type": "text"
            }
        ]
    },
    {
        "key": "create_daily_review_session",
        "label": "Create daily review session",
        "type": "dict",
        "is_group": true,
        "checkbox_key": "enabled",
        "children": [
            {
                "type": "boolean",
                "key": "enabled"
            },
            {
                "type": "list",
                "key": "role_list",
                "label": "Roles",
                "object_type": "text",
                "use_label_wrap": true
            },
            {
                "type": "boolean",
                "key": "cycle_enabled",
                "label": "Create daily review session"
            },
            {
                "type": "separator"
            },
            {
                "type": "text",
                "key": "review_session_template",
                "label": "ReviewSession template",
                "placeholder": "Default: {yy}{mm}{dd}"
            },
            {
                "type": "label",
                "label": "Possible formatting keys in template:<br/>- \"project_name\" - <Name of project><br/>- \"d\" - <Day of month number> in shortest possible way.<br/>- \"dd\" - <Day of month number> with 2 digits.<br/>- \"ddd\" - <Week day name> shortened week day. e.g.: `Mon`, ...<br/>- \"dddd\" - <Week day name> full name of week day. e.g.: `Monday`, ...<br/>- \"m\" - <Month number> in shortest possible way. e.g.: `1` if January<br/>- \"mm\" - <Month number> with 2 digits.<br/>- \"mmm\" - <Month name> shortened month name. e.g.: `Jan`, ...<br/>- \"mmmm\" - <Month name> full month name. e.g.: `January`, ...<br/>- \"yy\" - <Year number> shortened year. e.g.: `19`, `20`, ...<br/>- \"yyyy\" - <Year number> full year. e.g.: `2019`, `2020`, ..."
            }
        ]
    }
]
},
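The `review_session_template` presumably resolves with Python's `str.format` over the documented keys; an illustrative fill-in for 2022-06-20:

```python
# Illustrative resolution of the review session name template.
import datetime

now = datetime.datetime(2022, 6, 20)
data = {
    "project_name": "demo",
    "yy": now.strftime("%y"), "yyyy": now.strftime("%Y"),
    "mm": now.strftime("%m"), "mmm": now.strftime("%b"),
    "dd": now.strftime("%d"), "ddd": now.strftime("%a"),
}
template = "{yy}{mm}{dd}"
print(template.format(**data))  # -> "220620"
```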

@@ -126,8 +126,8 @@
    "type": "dict",
    "collapsible": true,
    "checkbox_key": "enabled",
    "key": "ExtractJpegEXR",
    "label": "ExtractJpegEXR",
    "key": "ExtractThumbnail",
    "label": "ExtractThumbnail",
    "is_group": true,
    "children": [
        {
@@ -135,9 +135,31 @@
            "label": "Enabled"
        },
        {
            "type": "raw-json",
            "key": "nodes",
            "label": "Nodes"
            "type": "boolean",
            "key": "use_rendered",
            "label": "Use rendered images"
        },
        {
            "type": "boolean",
            "key": "bake_viewer_process",
            "label": "Bake viewer process"
        },
        {
            "type": "boolean",
            "key": "bake_viewer_input_process",
            "label": "Bake viewer input process"
        },
        {
            "type": "collapsible-wrap",
            "label": "Nodes",
            "collapsible": true,
            "children": [
                {
                    "type": "raw-json",
                    "key": "nodes",
                    "label": "Nodes"
                }
            ]
        }
    ]
},

@@ -1,10 +1,16 @@
import collections
from openpype.client import (
    get_versions,
    get_subsets,
    get_assets,
    get_output_link_versions,
)

from Qt import QtWidgets


class SimpleLinkView(QtWidgets.QWidget):

    def __init__(self, dbcon, parent=None):
    def __init__(self, dbcon, parent):
        super(SimpleLinkView, self).__init__(parent=parent)
        self.dbcon = dbcon

@@ -24,6 +30,11 @@ class SimpleLinkView(QtWidgets.QWidget):

        self._in_view = in_view
        self._out_view = out_view
        self._version_doc_to_process = None

    @property
    def project_name(self):
        return self.dbcon.current_project()

    def clear(self):
        self._in_view.clear()
@@ -31,60 +42,114 @@ class SimpleLinkView(QtWidgets.QWidget):

    def set_version(self, version_doc):
        self.clear()
        if not version_doc or not self.isVisible():
            return
        self._version_doc_to_process = version_doc
        if version_doc and self.isVisible():
            self._fill_values()

        # inputs
        #
    def showEvent(self, event):
        super(SimpleLinkView, self).showEvent(event)
        self._fill_values()

    def _fill_values(self):
        if self._version_doc_to_process is None:
            return
        version_doc = self._version_doc_to_process
        self._version_doc_to_process = None
        self._fill_inputs(version_doc)
        self._fill_outputs(version_doc)

    def _fill_inputs(self, version_doc):
        version_ids = set()
        for link in version_doc["data"].get("inputLinks", []):
            # Backwards compatibility for "input" key used as "id"
            if "id" not in link:
                link_id = link["input"]
            else:
                link_id = link["id"]
            version = self.dbcon.find_one(
                {"_id": link_id, "type": "version"},
                projection={"name": 1, "parent": 1}
            )
            if not version:
                continue
            subset = self.dbcon.find_one(
                {"_id": version["parent"], "type": "subset"},
                projection={"name": 1, "parent": 1}
            )
            if not subset:
                continue
            asset = self.dbcon.find_one(
                {"_id": subset["parent"], "type": "asset"},
                projection={"name": 1}
            )
            version_ids.add(link_id)

            self._in_view.addItem("{asset} {subset} v{version:0>3}".format(
                asset=asset["name"],
                subset=subset["name"],
                version=version["name"],
        version_docs = list(get_versions(
            self.project_name,
            version_ids=version_ids,
            fields=["name", "parent"]
        ))

        versions_by_subset_id = collections.defaultdict(list)
        for version_doc in version_docs:
            subset_id = version_doc["parent"]
            versions_by_subset_id[subset_id].append(version_doc)

        subset_docs = []
        if versions_by_subset_id:
            subset_docs = list(get_subsets(
                self.project_name,
                subset_ids=versions_by_subset_id.keys(),
                fields=["_id", "name", "parent"]
            ))

        # outputs
        #
        outputs = self.dbcon.find(
            {"type": "version", "data.inputLinks.input": version_doc["_id"]},
            projection={"name": 1, "parent": 1}
        )
        for version in outputs or []:
            subset = self.dbcon.find_one(
                {"_id": version["parent"], "type": "subset"},
                projection={"name": 1, "parent": 1}
            )
            if not subset:
                continue
            asset = self.dbcon.find_one(
                {"_id": subset["parent"], "type": "asset"},
                projection={"name": 1}
            )
        asset_docs = []
        subsets_by_asset_id = collections.defaultdict(list)
        if subset_docs:
            for subset_doc in subset_docs:
                asset_id = subset_doc["parent"]
                subsets_by_asset_id[asset_id].append(subset_doc)

            self._out_view.addItem("{asset} {subset} v{version:0>3}".format(
                asset=asset["name"],
                subset=subset["name"],
                version=version["name"],
            asset_docs = list(get_assets(
                self.project_name,
                asset_ids=subsets_by_asset_id.keys(),
                fields=["_id", "name"]
            ))

        for asset_doc in asset_docs:
            asset_id = asset_doc["_id"]
            for subset_doc in subsets_by_asset_id[asset_id]:
                subset_id = subset_doc["_id"]
                for version_doc in versions_by_subset_id[subset_id]:
                    self._in_view.addItem("{} {} v{:0>3}".format(
                        asset_doc["name"],
                        subset_doc["name"],
                        version_doc["name"],
                    ))

    def _fill_outputs(self, version_doc):
        version_docs = list(get_output_link_versions(
            self.project_name,
            version_doc["_id"],
            fields=["name", "parent"]
        ))
        versions_by_subset_id = collections.defaultdict(list)
        for version_doc in version_docs:
            subset_id = version_doc["parent"]
            versions_by_subset_id[subset_id].append(version_doc)

        subset_docs = []
        if versions_by_subset_id:
            subset_docs = list(get_subsets(
                self.project_name,
                subset_ids=versions_by_subset_id.keys(),
                fields=["_id", "name", "parent"]
            ))

        asset_docs = []
        subsets_by_asset_id = collections.defaultdict(list)
        if subset_docs:
            for subset_doc in subset_docs:
                asset_id = subset_doc["parent"]
                subsets_by_asset_id[asset_id].append(subset_doc)

            asset_docs = list(get_assets(
                self.project_name,
                asset_ids=subsets_by_asset_id.keys(),
                fields=["_id", "name"]
            ))

        for asset_doc in asset_docs:
            asset_id = asset_doc["_id"]
            for subset_doc in subsets_by_asset_id[asset_id]:
                subset_id = subset_doc["_id"]
                for version_doc in versions_by_subset_id[subset_id]:
                    self._out_view.addItem("{} {} v{:0>3}".format(
                        asset_doc["name"],
                        subset_doc["name"],
                        version_doc["name"],
                    ))
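The SimpleLinkView rewrite replaces a `find_one` per link (version, then subset, then asset) with one bulk query per level, joined in memory. The shape of that refactor in plain Python with pretend query results:

```python
# Bulk-fetch-then-join pattern replacing N+1 queries; the lists below
# stand in for get_versions/get_subsets/get_assets results.
import collections

versions = [
    {"_id": 1, "name": 3, "parent": "subsetA"},
    {"_id": 2, "name": 7, "parent": "subsetA"},
]
subsets = [{"_id": "subsetA", "name": "modelMain", "parent": "asset1"}]
assets = [{"_id": "asset1", "name": "hero"}]

# Group children by their parent id once, instead of querying per item.
versions_by_subset_id = collections.defaultdict(list)
for version in versions:
    versions_by_subset_id[version["parent"]].append(version)

subsets_by_asset_id = collections.defaultdict(list)
for subset in subsets:
    subsets_by_asset_id[subset["parent"]].append(subset)

for asset in assets:
    for subset in subsets_by_asset_id[asset["_id"]]:
        for version in versions_by_subset_id[subset["_id"]]:
            print("{} {} v{:0>3}".format(
                asset["name"], subset["name"], version["name"]))
```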

@@ -4,6 +4,7 @@ import re

from Qt import QtWidgets, QtCore

from openpype.client import get_asset_by_name, get_subsets
from openpype import style
from openpype.api import get_current_project_settings
from openpype.tools.utils.lib import qt_app_context
@@ -215,12 +216,12 @@ class CreatorWindow(QtWidgets.QDialog):
            self._set_valid_state(False)
            return

        project_name = legacy_io.active_project()
        asset_doc = None
        if creator_plugin:
            # Get the asset from the database which matches the name
            asset_doc = legacy_io.find_one(
                {"name": asset_name, "type": "asset"},
                projection={"_id": 1}
            asset_doc = get_asset_by_name(
                project_name, asset_name, fields=["_id"]
            )

        # Get plugin
@@ -235,7 +236,6 @@ class CreatorWindow(QtWidgets.QDialog):
            self._set_valid_state(False)
            return

        project_name = legacy_io.Session["AVALON_PROJECT"]
        asset_id = asset_doc["_id"]
        task_name = legacy_io.Session["AVALON_TASK"]

@@ -269,14 +269,13 @@ class CreatorWindow(QtWidgets.QDialog):
        self._subset_name_input.setText(subset_name)

        # Get all subsets of the current asset
        subset_docs = legacy_io.find(
            {
                "type": "subset",
                "parent": asset_id
            },
            {"name": 1}
        subset_docs = get_subsets(
            project_name, asset_ids=[asset_id], fields=["name"]
        )
        existing_subset_names = set(subset_docs.distinct("name"))
        existing_subset_names = {
            subset_doc["name"]
            for subset_doc in subset_docs
        }
        existing_subset_names_low = set(
            _name.lower()
            for _name in existing_subset_names
        )

@@ -9,6 +9,10 @@ import appdirs
from Qt import QtCore, QtGui
import qtawesome

from openpype.client import (
    get_project,
    get_assets,
)
from openpype.lib import JSONSettingRegistry
from openpype.lib.applications import (
    CUSTOM_LAUNCH_APP_GROUPS,
@@ -81,13 +85,11 @@ class ActionModel(QtGui.QStandardItemModel):

    def get_application_actions(self):
        actions = []
        if not self.dbcon.Session.get("AVALON_PROJECT"):
        if not self.dbcon.current_project():
            return actions

        project_doc = self.dbcon.find_one(
            {"type": "project"},
            {"config.apps": True}
        )
        project_name = self.dbcon.active_project()
        project_doc = get_project(project_name, fields=["config.apps"])
        if not project_doc:
            return actions

@@ -448,7 +450,7 @@ class LauncherModel(QtCore.QObject):
    @property
    def project_name(self):
        """Current project name."""
        return self._dbcon.Session.get("AVALON_PROJECT")
        return self._dbcon.current_project()

    @property
    def refreshing_assets(self):
@@ -649,9 +651,8 @@ class LauncherModel(QtCore.QObject):
        self._asset_refresh_thread = None

    def _refresh_assets(self):
        asset_docs = list(self._dbcon.find(
            {"type": "asset"},
            self._asset_projection
        asset_docs = list(get_assets(
            self._last_project_name, fields=self._asset_projection.keys()
        ))
        if not self._refreshing_assets:
            return
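A pattern repeated across these hunks: the old raw Mongo projection dicts are handed to the new client helpers as `fields=projection.keys()`. A hypothetical stand-in for how such a helper can translate the field list back into a projection:

```python
# Hypothetical stand-in for a client helper such as openpype.client
# .get_assets, showing the fields-to-projection bridge only.
asset_projection = {"name": 1, "data.visualParent": 1}  # old-style dict

def get_assets(project_name, asset_ids=None, fields=None):
    # Rebuild the same projection the old code passed to dbcon.find().
    projection = {field: 1 for field in fields} if fields else None
    print("find({{'type': 'asset'}}, projection={})".format(projection))
    return []

get_assets("demo", fields=asset_projection.keys())
```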

@@ -3,6 +3,7 @@ import sys
from Qt import QtWidgets, QtCore, QtGui

from openpype import style
from openpype.client import get_project
from openpype.pipeline import AvalonMongoDB
from openpype.tools.utils import lib as tools_lib
from openpype.tools.loader.widgets import (
@@ -303,14 +304,26 @@ class LibraryLoaderWindow(QtWidgets.QDialog):
        families = self._subsets_widget.get_subsets_families()
        self._families_filter_view.set_enabled_families(families)

    def set_context(self, context, refresh=True):
        self.echo("Setting context: {}".format(context))
        lib.schedule(
            lambda: self._set_context(context, refresh=refresh),
            50, channel="mongo"
        )

    # ------------------------------
    def set_context(self, context, refresh=True):
        """Set the selection in the interface using a context.

        The context must contain `asset` data by name.

        Args:
            context (dict): The context to apply.

        Returns:
            None
        """

        asset_name = context.get("asset", None)
        if asset_name is None:
            return

        if refresh:
            self._refresh_assets()

        self._assets_widget.select_asset_by_name(asset_name)

    def _on_family_filter_change(self, families):
        self._subsets_widget.set_family_filters(families)

@@ -323,10 +336,7 @@ class LibraryLoaderWindow(QtWidgets.QDialog):
        """Load assets from database"""
        if self.current_project is not None:
            # Ensure a project is loaded
            project_doc = self.dbcon.find_one(
                {"type": "project"},
                {"type": 1}
            )
            project_doc = get_project(self.current_project, fields=["_id"])
            assert project_doc, "This is a bug"

        self._families_filter_view.set_enabled_families(set())
@@ -371,7 +381,7 @@ class LibraryLoaderWindow(QtWidgets.QDialog):

        # Clear the version information on asset change
        self._version_info_widget.set_version(None)
        self._thumbnail_widget.set_thumbnail(asset_ids)
        self._thumbnail_widget.set_thumbnail("asset", asset_ids)

        self.data["state"]["assetIds"] = asset_ids

@@ -426,34 +436,17 @@ class LibraryLoaderWindow(QtWidgets.QDialog):
            version_doc["_id"]
            for version_doc in version_docs
        ]
        src_type = "version"
        if not thumbnail_src_ids:
            src_type = "asset"
            thumbnail_src_ids = self._assets_widget.get_selected_asset_ids()

        self._thumbnail_widget.set_thumbnail(thumbnail_src_ids)
        self._thumbnail_widget.set_thumbnail(src_type, thumbnail_src_ids)

        version_ids = [doc["_id"] for doc in version_docs or []]
        if self._repres_widget:
            self._repres_widget.set_version_ids(version_ids)

    def _set_context(self, context, refresh=True):
        """Set the selection in the interface using a context.

        The context must contain `asset` data by name.

        Args:
            context (dict): The context to apply.

        Returns:
            None
        """

        asset_name = context.get("asset", None)
        if asset_name is None:
            return

        if refresh:
            self._refresh_assets()

        self._assets_widget.select_asset_by_name(asset_name)

    def _on_message_timeout(self):
        self._message_label.setText("")
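`set_thumbnail` now takes an explicit source type, preferring version thumbnails and falling back to the selected assets. The fallback rule, isolated as an illustrative stand-in:

```python
# Illustrative thumbnail-source fallback used by the loader windows.
def pick_thumbnail_source(version_ids, asset_ids):
    if version_ids:
        return "version", version_ids
    return "asset", asset_ids

print(pick_thumbnail_source([], ["asset1"]))       # ('asset', ['asset1'])
print(pick_thumbnail_source(["v12"], ["asset1"]))  # ('version', ['v12'])
```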
|
||||
|
||||
|
|
|
|||
|
|
@ -3,6 +3,7 @@ import traceback
|
|||
|
||||
from Qt import QtWidgets, QtCore
|
||||
|
||||
from openpype.client import get_projects, get_project
|
||||
from openpype import style
|
||||
from openpype.lib import register_event_callback
|
||||
from openpype.pipeline import (
|
||||
|
|
@ -39,7 +40,7 @@ class LoaderWindow(QtWidgets.QDialog):
|
|||
def __init__(self, parent=None):
|
||||
super(LoaderWindow, self).__init__(parent)
|
||||
title = "Asset Loader 2.1"
|
||||
project_name = legacy_io.Session.get("AVALON_PROJECT")
|
||||
project_name = legacy_io.active_project()
|
||||
if project_name:
|
||||
title += " - {}".format(project_name)
|
||||
self.setWindowTitle(title)
|
||||
|
|
@ -274,8 +275,9 @@ class LoaderWindow(QtWidgets.QDialog):
|
|||
"""Load assets from database"""
|
||||
|
||||
# Ensure a project is loaded
|
||||
project = legacy_io.find_one({"type": "project"}, {"type": 1})
|
||||
assert project, "Project was not found! This is a bug"
|
||||
project_name = legacy_io.active_project()
|
||||
project_doc = get_project(project_name, fields=["_id"])
|
||||
assert project_doc, "Project was not found! This is a bug"
|
||||
|
||||
self._assets_widget.refresh()
|
||||
self._assets_widget.setFocus()
|
||||
|
|
@ -314,7 +316,7 @@ class LoaderWindow(QtWidgets.QDialog):
|
|||
)
|
||||
|
||||
# Clear the version information on asset change
|
||||
self._thumbnail_widget.set_thumbnail(asset_ids)
|
||||
self._thumbnail_widget.set_thumbnail("asset", asset_ids)
|
||||
self._version_info_widget.set_version(None)
|
||||
|
||||
self.data["state"]["assetIds"] = asset_ids
|
||||
|
|
@ -371,10 +373,12 @@ class LoaderWindow(QtWidgets.QDialog):
|
|||
version_doc["_id"]
|
||||
for version_doc in version_docs
|
||||
]
|
||||
source_type = "version"
|
||||
if not thumbnail_src_ids:
|
||||
source_type = "asset"
|
||||
thumbnail_src_ids = self._assets_widget.get_selected_asset_ids()
|
||||
|
||||
self._thumbnail_widget.set_thumbnail(thumbnail_src_ids)
|
||||
self._thumbnail_widget.set_thumbnail(source_type, thumbnail_src_ids)
|
||||
|
||||
if self._repres_widget is not None:
|
||||
version_ids = [doc["_id"] for doc in version_docs]
|
||||
|
|
@ -576,8 +580,7 @@ def show(debug=False, parent=None, use_context=False):
|
|||
legacy_io.install()
|
||||
|
||||
any_project = next(
|
||||
project for project in legacy_io.projects()
|
||||
if project.get("active", True) is not False
|
||||
project for project in get_projects(fields=["name"])
|
||||
)
|
||||
|
||||
legacy_io.Session["AVALON_PROJECT"] = any_project["name"]
|
||||
|
|
|
|||
|
|
@ -7,6 +7,15 @@ from uuid import uuid4
|
|||
from Qt import QtCore, QtGui
|
||||
import qtawesome
|
||||
|
||||
from openpype.client import (
|
||||
get_assets,
|
||||
get_subsets,
|
||||
get_last_versions,
|
||||
get_versions,
|
||||
get_hero_versions,
|
||||
get_version_by_name,
|
||||
get_representations
|
||||
)
|
||||
from openpype.pipeline import (
|
||||
HeroVersionType,
|
||||
schema,
|
||||
|
|
@ -56,7 +65,7 @@ class BaseRepresentationModel(object):
|
|||
remote_site = remote_provider = None
|
||||
|
||||
if not project_name:
|
||||
project_name = self.dbcon.Session.get("AVALON_PROJECT")
|
||||
project_name = self.dbcon.active_project()
|
||||
else:
|
||||
self.dbcon.Session["AVALON_PROJECT"] = project_name
|
||||
|
||||
|
|
@ -89,7 +98,7 @@ class BaseRepresentationModel(object):
|
|||
self._last_manager_cache = now_time
|
||||
|
||||
sync_server = self._modules_manager.modules_by_name["sync_server"]
|
||||
if sync_server.is_project_enabled(project_name):
|
||||
if sync_server.is_project_enabled(project_name, single=True):
|
||||
active_site = sync_server.get_active_site(project_name)
|
||||
active_provider = sync_server.get_provider_for_site(
|
||||
project_name, active_site)
|
||||
|
|
@ -197,9 +206,6 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
if subset_doc_projection:
|
||||
self.subset_doc_projection = subset_doc_projection
|
||||
|
||||
self.asset_doc_projection = asset_doc_projection
|
||||
self.subset_doc_projection = subset_doc_projection
|
||||
|
||||
self.repre_icons = {}
|
||||
self.sync_server = None
|
||||
self.active_site = self.active_provider = None
|
||||
|
|
@ -225,7 +231,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
self._doc_fetching_stop = False
|
||||
self._doc_payload = {}
|
||||
|
||||
self.doc_fetched.connect(self.on_doc_fetched)
|
||||
self.doc_fetched.connect(self._on_doc_fetched)
|
||||
|
||||
self.refresh()
|
||||
|
||||
|
|
@ -244,7 +250,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
|
||||
def set_grouping(self, state):
|
||||
self._grouping = state
|
||||
self.on_doc_fetched()
|
||||
self._on_doc_fetched()
|
||||
|
||||
def get_subsets_families(self):
|
||||
return self._doc_payload.get("subset_families") or set()
|
||||
|
|
@ -254,57 +260,61 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
# because it also updates the information in other columns
|
||||
if index.column() == self.columns_index["version"]:
|
||||
item = index.internalPointer()
|
||||
parent = item["_id"]
|
||||
subset_id = item["_id"]
|
||||
if isinstance(value, HeroVersionType):
|
||||
versions = list(self.dbcon.find({
|
||||
"type": {"$in": ["version", "hero_version"]},
|
||||
"parent": parent
|
||||
}, sort=[("name", -1)]))
|
||||
|
||||
version = None
|
||||
last_version = None
|
||||
for __version in versions:
|
||||
if __version["type"] == "hero_version":
|
||||
version = __version
|
||||
elif last_version is None:
|
||||
last_version = __version
|
||||
|
||||
if version is not None and last_version is not None:
|
||||
break
|
||||
|
||||
_version = None
|
||||
for __version in versions:
|
||||
if __version["_id"] == version["version_id"]:
|
||||
_version = __version
|
||||
break
|
||||
|
||||
version["data"] = _version["data"]
|
||||
version["name"] = _version["name"]
|
||||
version["is_from_latest"] = (
|
||||
last_version["_id"] == _version["_id"]
|
||||
)
|
||||
version_doc = self._get_hero_version(subset_id)
|
||||
|
||||
else:
|
||||
version = self.dbcon.find_one({
|
||||
"name": value,
|
||||
"type": "version",
|
||||
"parent": parent
|
||||
})
|
||||
project_name = self.dbcon.active_project()
|
||||
version_doc = get_version_by_name(
|
||||
project_name, value, subset_id
|
||||
)
|
||||
|
||||
# update availability on active site when version changes
|
||||
if self.sync_server.enabled and version:
|
||||
query = self._repre_per_version_pipeline([version["_id"]],
|
||||
self.active_site,
|
||||
self.remote_site)
|
||||
if self.sync_server.enabled and version_doc:
|
||||
query = self._repre_per_version_pipeline(
|
||||
[version_doc["_id"]],
|
||||
self.active_site,
|
||||
self.remote_site
|
||||
)
|
||||
docs = list(self.dbcon.aggregate(query))
|
||||
if docs:
|
||||
repre = docs.pop()
|
||||
version["data"].update(self._get_repre_dict(repre))
|
||||
version_doc["data"].update(self._get_repre_dict(repre))
|
||||
|
||||
self.set_version(index, version)
|
||||
self.set_version(index, version_doc)
|
||||
|
||||
return super(SubsetsModel, self).setData(index, value, role)
|
||||
|
||||
def _get_hero_version(self, subset_id):
|
||||
project_name = self.dbcon.active_project()
|
||||
version_docs = get_versions(
|
||||
project_name, subset_ids=[subset_id], hero=True
|
||||
)
|
||||
standard_versions = []
|
||||
hero_version_doc = None
|
||||
for version_doc in version_docs:
|
||||
if version_doc["type"] == "hero_version":
|
||||
hero_version_doc = version_doc
|
||||
continue
|
||||
standard_versions.append(version_doc)
|
||||
|
||||
src_version_id = hero_version_doc["version_id"]
|
||||
src_version = None
|
||||
is_from_latest = True
|
||||
for version_doc in reversed(sorted(
|
||||
standard_versions, key=lambda item: item["name"]
|
||||
)):
|
||||
if version_doc["_id"] == src_version_id:
|
||||
src_version = version_doc
|
||||
break
|
||||
is_from_latest = False
|
||||
|
||||
hero_version_doc["data"] = src_version["data"]
|
||||
hero_version_doc["name"] = src_version["name"]
|
||||
hero_version_doc["is_from_latest"] = is_from_latest
|
||||
return hero_version_doc
|
||||
|
||||
def set_version(self, index, version):
|
||||
"""Update the version data of the given index.
|
||||
|
||||
|
|
@ -391,26 +401,25 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
item["repre_info"] = repre_info
|
||||
|
||||
def _fetch(self):
|
||||
asset_docs = self.dbcon.find(
|
||||
{
|
||||
"type": "asset",
|
||||
"_id": {"$in": self._asset_ids}
|
||||
},
|
||||
self.asset_doc_projection
|
||||
project_name = self.dbcon.active_project()
|
||||
asset_docs = get_assets(
|
||||
project_name,
|
||||
asset_ids=self._asset_ids,
|
||||
fields=self.asset_doc_projection.keys()
|
||||
)
|
||||
|
||||
asset_docs_by_id = {
|
||||
asset_doc["_id"]: asset_doc
|
||||
for asset_doc in asset_docs
|
||||
}
|
||||
|
||||
subset_docs_by_id = {}
|
||||
subset_docs = self.dbcon.find(
|
||||
{
|
||||
"type": "subset",
|
||||
"parent": {"$in": self._asset_ids}
|
||||
},
|
||||
self.subset_doc_projection
|
||||
subset_docs = get_subsets(
|
||||
project_name,
|
||||
asset_ids=self._asset_ids,
|
||||
fields=self.subset_doc_projection.keys()
|
||||
)
|
||||
|
||||
subset_families = set()
|
||||
for subset_doc in subset_docs:
|
||||
if self._doc_fetching_stop:
|
||||
|
|
@ -423,37 +432,13 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
subset_docs_by_id[subset_doc["_id"]] = subset_doc
|
||||
|
||||
subset_ids = list(subset_docs_by_id.keys())
|
||||
_pipeline = [
|
||||
# Find all versions of those subsets
|
||||
{"$match": {
|
||||
"type": "version",
|
||||
"parent": {"$in": subset_ids}
|
||||
}},
|
||||
# Sorting versions all together
|
||||
{"$sort": {"name": 1}},
|
||||
# Group them by "parent", but only take the last
|
||||
{"$group": {
|
||||
"_id": "$parent",
|
||||
"_version_id": {"$last": "$_id"},
|
||||
"name": {"$last": "$name"},
|
||||
"type": {"$last": "$type"},
|
||||
"data": {"$last": "$data"},
|
||||
"locations": {"$last": "$locations"},
|
||||
"schema": {"$last": "$schema"}
|
||||
}}
|
||||
]
|
||||
last_versions_by_subset_id = dict()
|
||||
for doc in self.dbcon.aggregate(_pipeline):
|
||||
if self._doc_fetching_stop:
|
||||
return
|
||||
doc["parent"] = doc["_id"]
|
||||
doc["_id"] = doc.pop("_version_id")
|
||||
last_versions_by_subset_id[doc["parent"]] = doc
|
||||
last_versions_by_subset_id = get_last_versions(
|
||||
project_name,
|
||||
subset_ids,
|
||||
fields=["_id", "parent", "name", "type", "data", "schema"]
|
||||
)
|
||||
|
||||
hero_versions = self.dbcon.find({
|
||||
"type": "hero_version",
|
||||
"parent": {"$in": subset_ids}
|
||||
})
|
||||
hero_versions = get_hero_versions(project_name, subset_ids=subset_ids)
|
||||
missing_versions = []
|
||||
for hero_version in hero_versions:
|
||||
version_id = hero_version["version_id"]
|
||||
|
|
@ -462,10 +447,9 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
|
||||
missing_versions_by_id = {}
|
||||
if missing_versions:
|
||||
missing_version_docs = self.dbcon.find({
|
||||
"type": "version",
|
||||
"_id": {"$in": missing_versions}
|
||||
})
|
||||
missing_version_docs = get_versions(
|
||||
project_name, version_ids=missing_versions
|
||||
)
|
||||
missing_versions_by_id = {
|
||||
missing_version_doc["_id"]: missing_version_doc
|
||||
for missing_version_doc in missing_version_docs
|
||||
|
|
@ -488,23 +472,16 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
|
||||
last_versions_by_subset_id[subset_id] = hero_version
|
||||
|
||||
self._doc_payload = {
|
||||
"asset_docs_by_id": asset_docs_by_id,
|
||||
"subset_docs_by_id": subset_docs_by_id,
|
||||
"subset_families": subset_families,
|
||||
"last_versions_by_subset_id": last_versions_by_subset_id
|
||||
}
|
||||
|
||||
repre_info = {}
|
||||
if self.sync_server.enabled:
|
||||
version_ids = set()
|
||||
for _subset_id, doc in last_versions_by_subset_id.items():
|
||||
version_ids.add(doc["_id"])
|
||||
|
||||
query = self._repre_per_version_pipeline(list(version_ids),
|
||||
self.active_site,
|
||||
self.remote_site)
|
||||
query = self._repre_per_version_pipeline(
|
||||
list(version_ids), self.active_site, self.remote_site
|
||||
)
|
||||
|
||||
repre_info = {}
|
||||
for doc in self.dbcon.aggregate(query):
|
||||
if self._doc_fetching_stop:
|
||||
return
|
||||
|
|
@ -512,7 +489,13 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
doc["remote_provider"] = self.remote_provider
|
||||
repre_info[doc["_id"]] = doc
|
||||
|
||||
self._doc_payload["repre_info_by_version_id"] = repre_info
|
||||
self._doc_payload = {
|
||||
"asset_docs_by_id": asset_docs_by_id,
|
||||
"subset_docs_by_id": subset_docs_by_id,
|
||||
"subset_families": subset_families,
|
||||
"last_versions_by_subset_id": last_versions_by_subset_id,
|
||||
"repre_info_by_version_id": repre_info
|
||||
}
|
||||
|
||||
self.doc_fetched.emit()
|
||||
|
||||
|
|
@ -545,7 +528,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
|
|||
|
||||
self.fetch_subset_and_version()
|
||||
|
||||
def on_doc_fetched(self):
|
||||
def _on_doc_fetched(self):
|
||||
self.clear()
|
||||
self._items_by_id = {}
|
||||
self.beginResetModel()
|
||||
|
|
@ -1036,7 +1019,6 @@ class RepresentationSortProxyModel(GroupMemberFilterProxyModel):
|
|||
|
||||
|
||||
class RepresentationModel(TreeModel, BaseRepresentationModel):
|
||||
|
||||
doc_fetched = QtCore.Signal()
|
||||
refreshed = QtCore.Signal(bool)
|
||||
|
||||
|
|
@ -1062,33 +1044,43 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
"remote_site": "Remote"
|
||||
}
|
||||
|
||||
def __init__(self, dbcon, header, version_ids):
|
||||
repre_projection = {
|
||||
"_id": 1,
|
||||
"name": 1,
|
||||
"context.subset": 1,
|
||||
"context.asset": 1,
|
||||
"context.version": 1,
|
||||
"context.representation": 1,
|
||||
'files.sites': 1
|
||||
}
|
||||
|
||||
def __init__(self, dbcon, header):
|
||||
super(RepresentationModel, self).__init__()
|
||||
self.dbcon = dbcon
|
||||
self._data = []
|
||||
self._header = header
|
||||
self.version_ids = version_ids
|
||||
self._version_ids = []
|
||||
|
||||
manager = ModulesManager()
|
||||
sync_server = active_site = remote_site = None
|
||||
active_provider = remote_provider = None
|
||||
|
||||
project = dbcon.Session["AVALON_PROJECT"]
|
||||
if project:
|
||||
project_name = dbcon.current_project()
|
||||
if project_name:
|
||||
sync_server = manager.modules_by_name["sync_server"]
|
||||
active_site = sync_server.get_active_site(project)
|
||||
remote_site = sync_server.get_remote_site(project)
|
||||
active_site = sync_server.get_active_site(project_name)
|
||||
remote_site = sync_server.get_remote_site(project_name)
|
||||
|
||||
# TODO refactor
|
||||
active_provider = \
|
||||
sync_server.get_provider_for_site(project,
|
||||
active_site)
|
||||
active_provider = sync_server.get_provider_for_site(
|
||||
project_name, active_site
|
||||
)
|
||||
if active_site == 'studio':
|
||||
active_provider = 'studio'
|
||||
|
||||
remote_provider = \
|
||||
sync_server.get_provider_for_site(project,
|
||||
remote_site)
|
||||
remote_provider = sync_server.get_provider_for_site(
|
||||
project_name, remote_site
|
||||
)
|
||||
|
||||
if remote_site == 'studio':
|
||||
remote_provider = 'studio'
|
||||
|
|
@ -1099,7 +1091,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
self.remote_site = remote_site
|
||||
self.remote_provider = remote_provider
|
||||
|
||||
self.doc_fetched.connect(self.on_doc_fetched)
|
||||
self.doc_fetched.connect(self._on_doc_fetched)
|
||||
|
||||
self._docs = {}
|
||||
self._icons = lib.get_repre_icons()
|
||||
|
|
@ -1110,7 +1102,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
self._items_by_id = {}
|
||||
|
||||
def set_version_ids(self, version_ids):
|
||||
self.version_ids = version_ids
|
||||
self._version_ids = version_ids
|
||||
self.refresh()
|
||||
|
||||
def data(self, index, role):
|
||||
|
|
@ -1127,8 +1119,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
if index.column() == self.Columns.index("name"):
|
||||
if item.get("isMerged"):
|
||||
return item["icon"]
|
||||
else:
|
||||
return self._icons["repre"]
|
||||
return self._icons["repre"]
|
||||
|
||||
active_index = self.Columns.index("active_site")
|
||||
remote_index = self.Columns.index("remote_site")
|
||||
|
|
@ -1144,12 +1135,12 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
# site added, sync in progress
|
||||
progress_str = "not avail."
|
||||
if progress >= 0:
|
||||
# progress == 0 for isMerged is unavailable
|
||||
if progress == 0 and item.get("isMerged"):
|
||||
progress_str = "not avail."
|
||||
else:
|
||||
progress_str = "{}% {}".format(int(progress * 100),
|
||||
label)
|
||||
progress_str = "{}% {}".format(
|
||||
int(progress * 100), label
|
||||
)
|
||||
|
||||
return progress_str
|
||||
|
||||
|
|
@ -1179,7 +1170,7 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
|
||||
return super(RepresentationModel, self).data(index, role)
|
||||
|
||||
def on_doc_fetched(self):
|
||||
def _on_doc_fetched(self):
|
||||
self.clear()
|
||||
self.beginResetModel()
|
||||
subsets = set()
|
||||
|
|
@ -1189,10 +1180,9 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
group = None
|
||||
self._items_by_id = {}
|
||||
for doc in self._docs:
|
||||
if len(self.version_ids) > 1:
|
||||
if len(self._version_ids) > 1:
|
||||
group = repre_groups.get(doc["name"])
|
||||
if not group:
|
||||
|
||||
group_item = Item()
|
||||
item_id = str(uuid4())
|
||||
group_item.update({
|
||||
|
|
@ -1213,9 +1203,9 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
repre_groups_items[doc["name"]] = 0
|
||||
group = group_item
|
||||
|
||||
progress = lib.get_progress_for_repre(doc,
|
||||
self.active_site,
|
||||
self.remote_site)
|
||||
progress = lib.get_progress_for_repre(
|
||||
doc, self.active_site, self.remote_site
|
||||
)
|
||||
|
||||
active_site_icon = self._icons.get(self.active_provider)
|
||||
remote_site_icon = self._icons.get(self.remote_provider)
|
||||
|
|
@ -1248,9 +1238,9 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
'remote_site_progress': progress[self.remote_site]
|
||||
}
|
||||
if group:
|
||||
group = self._sum_group_progress(doc["name"], group,
|
||||
current_progress,
|
||||
repre_groups_items)
|
||||
group = self._sum_group_progress(
|
||||
doc["name"], group, current_progress, repre_groups_items
|
||||
)
|
||||
|
||||
self.add_child(item, group)
|
||||
|
||||
|
|
@ -1269,47 +1259,39 @@ class RepresentationModel(TreeModel, BaseRepresentationModel):
|
|||
return self._items_by_id.get(item_id)
|
||||
|
||||
def refresh(self):
|
||||
docs = []
|
||||
session_project = self.dbcon.Session['AVALON_PROJECT']
|
||||
if not session_project:
|
||||
project_name = self.dbcon.current_project()
|
||||
if not project_name:
|
||||
return
|
||||
|
||||
if self.version_ids:
|
||||
repre_docs = []
|
||||
if self._version_ids:
|
||||
# Simple find here for now, expected to receive lower number of
|
||||
# representations and logic could be in Python
|
||||
docs = list(self.dbcon.find(
|
||||
{"type": "representation", "parent": {"$in": self.version_ids},
|
||||
"files.sites.name": {"$exists": 1}}, self.projection()))
|
||||
self._docs = docs
|
||||
repre_docs = list(get_representations(
|
||||
project_name,
|
||||
version_ids=self._version_ids,
|
||||
fields=self.repre_projection.keys()
|
||||
))
|
||||
|
||||
self._docs = repre_docs
|
||||
|
||||
self.doc_fetched.emit()
|
||||
|
||||
@classmethod
|
||||
def projection(cls):
|
||||
return {
|
||||
"_id": 1,
|
||||
"name": 1,
|
||||
"context.subset": 1,
|
||||
"context.asset": 1,
|
||||
"context.version": 1,
|
||||
"context.representation": 1,
|
||||
'files.sites': 1
|
||||
}
|
||||
def _sum_group_progress(
|
||||
self, repre_name, group, current_item_progress, repre_groups_items
|
||||
):
|
||||
"""Update final group progress
|
||||
|
||||
def _sum_group_progress(self, repre_name, group, current_item_progress,
|
||||
repre_groups_items):
|
||||
"""
|
||||
Update final group progress
|
||||
Called after every item in group is added
|
||||
Called after every item in group is added
|
||||
|
||||
Args:
|
||||
repre_name(string)
|
||||
group(dict): info about group of selected items
|
||||
current_item_progress(dict): {'active_site_progress': XX,
|
||||
'remote_site_progress': YY}
|
||||
repre_groups_items(dict)
|
||||
Returns:
|
||||
(dict): updated group info
|
||||
Args:
|
||||
repre_name(string)
|
||||
group(dict): info about group of selected items
|
||||
current_item_progress(dict): {'active_site_progress': XX,
|
||||
'remote_site_progress': YY}
|
||||
repre_groups_items(dict)
|
||||
Returns:
|
||||
(dict): updated group info
|
||||
"""
|
||||
repre_groups_items[repre_name] += 1
|
||||
|
||||
|
|
|
|||
|
|
@@ -7,6 +7,16 @@ import collections
 
 from Qt import QtWidgets, QtCore, QtGui
 
+from openpype.client import (
+    get_subset_families,
+    get_subset_by_id,
+    get_subsets,
+    get_version_by_id,
+    get_versions,
+    get_representations,
+    get_thumbnail_id_from_source,
+    get_thumbnail,
+)
 from openpype.api import Anatomy
 from openpype.pipeline import HeroVersionType
 from openpype.pipeline.thumbnail import get_thumbnail_binary
@@ -237,8 +247,7 @@ class SubsetWidget(QtWidgets.QWidget):
         self.model = model
         self.view = view
 
-        actual_project = dbcon.Session["AVALON_PROJECT"]
-        self.on_project_change(actual_project)
+        self.on_project_change(dbcon.current_project())
 
         view.customContextMenuRequested.connect(self.on_context_menu)
 
@@ -302,33 +311,23 @@ class SubsetWidget(QtWidgets.QWidget):
                 item["version_document"]
             )
 
-        subset_docs = list(self.dbcon.find(
-            {
-                "_id": {"$in": list(version_docs_by_subset_id.keys())},
-                "type": "subset"
-            },
-            {
-                "schema": 1,
-                "data.families": 1
-            }
+        project_name = self.dbcon.active_project()
+        subset_docs = list(get_subsets(
+            project_name,
+            subset_ids=version_docs_by_subset_id.keys(),
+            fields=["schema", "data.families"]
         ))
         subset_docs_by_id = {
             subset_doc["_id"]: subset_doc
             for subset_doc in subset_docs
         }
         version_ids = list(version_docs_by_id.keys())
-        repre_docs = self.dbcon.find(
-            # Query all representations for selected versions at once
-            {
-                "type": "representation",
-                "parent": {"$in": version_ids}
-            },
-            # Query only name and parent from representation
-            {
-                "name": 1,
-                "parent": 1
-            }
+        repre_docs = get_representations(
+            project_name,
+            version_ids=version_ids,
+            fields=["name", "parent"]
         )
 
         repre_docs_by_version_id = {
             version_id: []
             for version_id in version_ids
@@ -356,9 +355,10 @@ class SubsetWidget(QtWidgets.QWidget):
         enabled = False
         if project_name:
             self.model.reset_sync_server(project_name)
-            if self.model.sync_server:
-                enabled_proj = self.model.sync_server.get_enabled_projects()
-                enabled = project_name in enabled_proj
+            sync_server = self.model.sync_server
+            if sync_server:
+                enabled = sync_server.is_project_enabled(project_name,
+                                                         single=True)
 
         lib.change_visibility(self.model, self.view, "repre_info", enabled)
 
@@ -566,28 +566,42 @@ class SubsetWidget(QtWidgets.QWidget):
         # same representation available
 
         # Trigger
-        repre_ids = []
+        project_name = self.dbcon.active_project()
+        subset_names_by_version_id = collections.defaultdict(set)
         for item in items:
-            representation = self.dbcon.find_one(
-                {
-                    "type": "representation",
-                    "name": representation_name,
-                    "parent": item["version_document"]["_id"]
-                },
-                {"_id": 1}
-            )
-            if not representation:
-                self.echo("Subset '{}' has no representation '{}'".format(
-                    item["subset"], representation_name
-                ))
-                continue
-            repre_ids.append(representation["_id"])
+            version_id = item["version_document"]["_id"]
+            subset_names_by_version_id[version_id].add(item["subset"])
+
+        version_ids = set(subset_names_by_version_id.keys())
+        repre_docs = get_representations(
+            project_name,
+            representation_names=[representation_name],
+            version_ids=version_ids,
+            fields=["_id", "parent"]
+        )
+
+        repre_ids = []
+        for repre_doc in repre_docs:
+            repre_ids.append(repre_doc["_id"])
+
+            version_id = repre_doc["parent"]
+            if version_id in version_ids:
+                version_ids.remove(version_id)
+
+        for version_id in version_ids:
+            joined_subset_names = ", ".join([
+                '"{}"'.format(subset)
+                for subset in subset_names_by_version_id[version_id]
+            ])
+            self.echo("Subsets {} don't have representation '{}'".format(
+                joined_subset_names, representation_name
+            ))
 
         # get contexts only for selected menu option
         repre_contexts = get_repres_contexts(repre_ids, self.dbcon)
-        options = lib.get_options(action, loader, self,
-                                  list(repre_contexts.values()))
-
+        options = lib.get_options(
+            action, loader, self, list(repre_contexts.values())
+        )
         error_info = _load_representations_by_loader(
             loader, repre_contexts, options=options
         )
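The new code above replaces one `find_one` per selected item with a single bulk query, then reports the versions that came back empty. A runnable, self-contained sketch of that batching pattern (the shapes of `items` and `repre_docs` are hypothetical stand-ins for the documents used in the diff):

```python
import collections


def find_versions_without_repre(items, repre_docs):
    """Group subset names per version id, match one bulk query result
    against them, and return whatever version had no representation."""
    subset_names_by_version_id = collections.defaultdict(set)
    for item in items:
        version_id = item["version_document"]["_id"]
        subset_names_by_version_id[version_id].add(item["subset"])

    leftover_ids = set(subset_names_by_version_id)
    for repre_doc in repre_docs:
        # discard() is a no-op when the id was already removed
        leftover_ids.discard(repre_doc["parent"])

    return {
        version_id: subset_names_by_version_id[version_id]
        for version_id in leftover_ids
    }


items = [{"version_document": {"_id": 1}, "subset": "modelMain"}]
print(find_versions_without_repre(items, repre_docs=[]))  # {1: {'modelMain'}}
```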
@@ -661,27 +675,21 @@ class VersionTextEdit(QtWidgets.QTextEdit):
 
         print("Querying..")
 
+        project_name = self.dbcon.active_project()
         if not version_doc:
-            version_doc = self.dbcon.find_one({
-                "_id": version_id,
-                "type": {"$in": ["version", "hero_version"]}
-            })
+            version_doc = get_version_by_id(project_name, version_id)
             assert version_doc, "Not a valid version id"
 
         if version_doc["type"] == "hero_version":
-            _version_doc = self.dbcon.find_one({
-                "_id": version_doc["version_id"],
-                "type": "version"
-            })
+            _version_doc = get_version_by_id(
+                project_name, version_doc["version_id"]
+            )
             version_doc["data"] = _version_doc["data"]
             version_doc["name"] = HeroVersionType(
                 _version_doc["name"]
            )
 
-        subset = self.dbcon.find_one({
-            "_id": version_doc["parent"],
-            "type": "subset"
-        })
+        subset = get_subset_by_id(project_name, version_doc["parent"])
         assert subset, "No valid subset parent for version"
 
         # Define readable creation timestamp
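A hero version document references its source version through `version_id`, which is why the hunk above resolves it with a second `get_version_by_id` call. A short sketch of that two-step lookup, assuming an OpenPype environment (ids are placeholders):

```python
from openpype.client import get_version_by_id
from openpype.pipeline import HeroVersionType

project_name = "my_project"
version_doc = get_version_by_id(project_name, "<version ObjectId>")
if version_doc and version_doc["type"] == "hero_version":
    # Copy data and wrap the name so the UI can tell hero versions apart.
    source_doc = get_version_by_id(project_name, version_doc["version_id"])
    version_doc["data"] = source_doc["data"]
    version_doc["name"] = HeroVersionType(source_doc["name"])
```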
@@ -752,7 +760,7 @@ class VersionTextEdit(QtWidgets.QTextEdit):
         if not source:
             return
 
-        project_name = self.dbcon.Session["AVALON_PROJECT"]
+        project_name = self.dbcon.current_project()
         if self._anatomy is None or self._anatomy.project_name != project_name:
             self._anatomy = Anatomy(project_name)
 
@@ -833,24 +841,19 @@ class ThumbnailWidget(QtWidgets.QLabel):
             QtCore.Qt.SmoothTransformation
         )
 
-    def set_thumbnail(self, doc_id=None):
-        if not doc_id:
+    def set_thumbnail(self, src_type, doc_ids):
+        if not doc_ids:
             self.set_pixmap()
             return
 
-        if isinstance(doc_id, (list, tuple)):
-            if len(doc_id) < 1:
-                self.set_pixmap()
-                return
-            doc_id = doc_id[0]
+        src_id = doc_ids[0]
 
-        doc = self.dbcon.find_one(
-            {"_id": doc_id},
-            {"data.thumbnail_id"}
+        project_name = self.dbcon.active_project()
+        thumbnail_id = get_thumbnail_id_from_source(
+            project_name,
+            src_type,
+            src_id,
         )
-        thumbnail_id = None
-        if doc:
-            thumbnail_id = doc.get("data", {}).get("thumbnail_id")
         if thumbnail_id == self.current_thumb_id:
             if self.current_thumbnail is None:
                 self.set_pixmap()
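The reworked `set_thumbnail` resolves a thumbnail in two steps: the source document type and id are mapped to a thumbnail id, and the thumbnail document is then fetched on its own. A sketch of that chain, assuming an OpenPype environment (values are placeholders):

```python
from openpype.client import get_thumbnail, get_thumbnail_id_from_source

project_name = "my_project"
# "version" is one possible source type; the widget receives it as src_type.
thumbnail_id = get_thumbnail_id_from_source(
    project_name,
    "version",
    "<source document ObjectId>",
)
if thumbnail_id:
    thumbnail_doc = get_thumbnail(project_name, thumbnail_id)
```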
@@ -861,9 +864,7 @@ class ThumbnailWidget(QtWidgets.QLabel):
             self.set_pixmap()
             return
 
-        thumbnail_ent = self.dbcon.find_one(
-            {"type": "thumbnail", "_id": thumbnail_id}
-        )
+        thumbnail_ent = get_thumbnail(project_name, thumbnail_id)
         if not thumbnail_ent:
             return
 
@@ -917,21 +918,9 @@ class FamilyModel(QtGui.QStandardItemModel):
 
     def refresh(self):
         families = set()
-        if self.dbcon.Session.get("AVALON_PROJECT"):
-            result = list(self.dbcon.aggregate([
-                {"$match": {
-                    "type": "subset"
-                }},
-                {"$project": {
-                    "family": {"$arrayElemAt": ["$data.families", 0]}
-                }},
-                {"$group": {
-                    "_id": "family_group",
-                    "families": {"$addToSet": "$family"}
-                }}
-            ]))
-            if result:
-                families = set(result[0]["families"])
+        project_name = self.dbcon.current_project()
+        if project_name:
+            families = get_subset_families(project_name)
 
         root_item = self.invisibleRootItem()
 
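The `$project`/`$group` aggregation removed above now lives behind `get_subset_families`, which returns the collected primary families. Usage is a one-liner, assuming an OpenPype environment (the project name is a placeholder):

```python
from openpype.client import get_subset_families

# Replaces the hand-written aggregation pipeline; the result is set-like,
# matching the `families = set()` default it is assigned into.
families = get_subset_families("my_project")
print(sorted(families))
```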
@@ -1176,7 +1165,7 @@ class RepresentationWidget(QtWidgets.QWidget):
 
         headers = [item[0] for item in self.default_widths]
 
-        model = RepresentationModel(self.dbcon, headers, [])
+        model = RepresentationModel(self.dbcon, headers)
 
         proxy_model = RepresentationSortProxyModel(self)
         proxy_model.setSourceModel(model)
@@ -1213,8 +1202,8 @@ class RepresentationWidget(QtWidgets.QWidget):
         self.proxy_model = proxy_model
 
         self.sync_server_enabled = False
-        actual_project = dbcon.Session["AVALON_PROJECT"]
-        self.on_project_change(actual_project)
+
+        self.on_project_change(dbcon.current_project())
 
         self.model.refresh()
 
@@ -1228,9 +1217,10 @@ class RepresentationWidget(QtWidgets.QWidget):
         enabled = False
         if project_name:
             self.model.reset_sync_server(project_name)
-            if self.model.sync_server:
-                enabled_proj = self.model.sync_server.get_enabled_projects()
-                enabled = project_name in enabled_proj
+            sync_server = self.model.sync_server
+            if sync_server:
+                enabled = sync_server.is_project_enabled(project_name,
+                                                         single=True)
 
         self.sync_server_enabled = enabled
         lib.change_visibility(self.model, self.tree_view,
@@ -1243,23 +1233,18 @@ class RepresentationWidget(QtWidgets.QWidget):
         for item in items:
             repre_ids.append(item["_id"])
 
-        repre_docs = list(self.dbcon.find(
-            {
-                "type": "representation",
-                "_id": {"$in": repre_ids}
-            },
-            {
-                "name": 1,
-                "parent": 1
-            }
+        project_name = self.dbcon.active_project()
+        repre_docs = list(get_representations(
+            project_name,
+            representation_ids=repre_ids,
+            fields=["name", "parent"]
         ))
 
         version_ids = [
             repre_doc["parent"]
             for repre_doc in repre_docs
         ]
-        version_docs = self.dbcon.find({
-            "_id": {"$in": version_ids}
-        })
+        version_docs = get_versions(project_name, version_ids=version_ids)
 
         version_docs_by_id = {}
         version_docs_by_subset_id = collections.defaultdict(list)
@@ -1269,15 +1254,10 @@ class RepresentationWidget(QtWidgets.QWidget):
             version_docs_by_id[version_id] = version_doc
             version_docs_by_subset_id[subset_id].append(version_doc)
 
-        subset_docs = list(self.dbcon.find(
-            {
-                "_id": {"$in": list(version_docs_by_subset_id.keys())},
-                "type": "subset"
-            },
-            {
-                "schema": 1,
-                "data.families": 1
-            }
+        subset_docs = list(get_subsets(
+            project_name,
+            subset_ids=version_docs_by_subset_id.keys(),
+            fields=["schema", "data.families"]
         ))
         subset_docs_by_id = {
             subset_doc["_id"]: subset_doc
@@ -1446,13 +1426,12 @@ class RepresentationWidget(QtWidgets.QWidget):
             self._process_action(items, menu, point)
 
     def _process_action(self, items, menu, point):
-        """
-        Show the context action menu and process selected
+        """Show the context action menu and process selected
 
         Args:
             items(dict): menu items
             menu(OptionalMenu)
             point(PointIndex)
         """
         global_point = self.tree_view.mapToGlobal(point)
         action = menu.exec_(global_point)
@@ -1468,21 +1447,23 @@ class RepresentationWidget(QtWidgets.QWidget):
         data_by_repre_id = {}
         selected_side = action_representation.get("selected_side")
 
+        is_sync_loader = tools_lib.is_sync_loader(loader)
         for item in items:
-            if tools_lib.is_sync_loader(loader):
-                site_name = "{}_site_name".format(selected_side)
-                data = {
-                    "_id": item.get("_id"),
-                    "site_name": item.get(site_name),
-                    "project_name": self.dbcon.Session["AVALON_PROJECT"]
-                }
+            item_id = item.get("_id")
+            repre_ids.append(item_id)
+            if not is_sync_loader:
+                continue
 
-                if not data["site_name"]:
-                    continue
+            site_name = "{}_site_name".format(selected_side)
+            data_site_name = item.get(site_name)
+            if not data_site_name:
+                continue
 
-                data_by_repre_id[data["_id"]] = data
-
-            repre_ids.append(item.get("_id"))
+            data_by_repre_id[item_id] = {
+                "_id": item_id,
+                "site_name": data_site_name,
+                "project_name": self.dbcon.active_project()
+            }
 
         repre_contexts = get_repres_contexts(repre_ids, self.dbcon)
         options = lib.get_options(action, loader, self,
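The rewrite above also hoists `tools_lib.is_sync_loader(loader)` out of the loop: the check does not depend on the loop variable, so it is evaluated once and non-sync loaders only pay for a cheap early `continue`. A runnable, stand-alone sketch of the same loop shape (the dict shapes are hypothetical stand-ins):

```python
def collect_sync_data(items, is_sync_loader, selected_side):
    """Hoisted-check version of the per-item collection loop."""
    repre_ids = []
    data_by_repre_id = {}
    site_key = "{}_site_name".format(selected_side)
    for item in items:
        item_id = item.get("_id")
        repre_ids.append(item_id)
        if not is_sync_loader:
            continue

        site_name = item.get(site_key)
        if not site_name:
            continue

        data_by_repre_id[item_id] = {
            "_id": item_id,
            "site_name": site_name,
        }
    return repre_ids, data_by_repre_id


items = [{"_id": "a", "active_site_name": "studio"}]
print(collect_sync_data(items, True, "active"))
```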
@@ -4,6 +4,7 @@ import logging
 
 from Qt import QtWidgets, QtCore
 
+from openpype.client import get_last_version_by_subset_id
 from openpype import style
 from openpype.pipeline import legacy_io
 from openpype.tools.utils.lib import qt_app_context
@@ -211,6 +212,7 @@ class MayaLookAssignerWindow(QtWidgets.QWidget):
         selection = self.assign_selected.isChecked()
         asset_nodes = self.asset_outliner.get_nodes(selection=selection)
 
+        project_name = legacy_io.active_project()
         start = time.time()
         for i, (asset, item) in enumerate(asset_nodes.items()):
 
@@ -222,23 +224,20 @@ class MayaLookAssignerWindow(QtWidgets.QWidget):
             assign_look = next((subset for subset in item["looks"]
                                 if subset["name"] in looks), None)
             if not assign_look:
-                self.echo("{} No matching selected "
-                          "look for {}".format(prefix, asset))
+                self.echo(
+                    "{} No matching selected look for {}".format(prefix, asset)
+                )
                 continue
 
             # Get the latest version of this asset's look subset
-            version = legacy_io.find_one(
-                {
-                    "type": "version",
-                    "parent": assign_look["_id"]
-                },
-                sort=[("name", -1)]
+            version = get_last_version_by_subset_id(
+                project_name, assign_look["_id"], fields=["_id"]
             )
 
             subset_name = assign_look["name"]
-            self.echo("{} Assigning {} to {}\t".format(prefix,
-                                                       subset_name,
-                                                       asset))
+            self.echo("{} Assigning {} to {}\t".format(
+                prefix, subset_name, asset
+            ))
             nodes = item["nodes"]
 
             if cmds.pluginInfo('vrayformaya', query=True, loaded=True):
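The sort-descending `find_one` above becomes a single `get_last_version_by_subset_id` call. A minimal sketch, assuming an OpenPype environment (ids are placeholders); requesting only `"_id"` keeps the query light when, as here, just the version reference is needed:

```python
from openpype.client import get_last_version_by_subset_id

version = get_last_version_by_subset_id(
    "my_project", "<subset ObjectId>", fields=["_id"]
)
if version:
    print(version["_id"])
```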
@@ -2,9 +2,9 @@ from collections import defaultdict
 import logging
 import os
 
-from bson.objectid import ObjectId
 import maya.cmds as cmds
 
+from openpype.client import get_asset_by_id
 from openpype.pipeline import (
     legacy_io,
     remove_container,
@@ -159,11 +159,9 @@ def create_items_from_nodes(nodes):
         log.warning("No id hashes")
         return asset_view_items
 
+    project_name = legacy_io.active_project()
     for _id, id_nodes in id_hashes.items():
-        asset = legacy_io.find_one(
-            {"_id": ObjectId(_id)},
-            projection={"name": True}
-        )
+        asset = get_asset_by_id(project_name, _id, fields=["name"])
 
         # Skip if asset id is not found
         if not asset:
@@ -180,10 +178,12 @@ def create_items_from_nodes(nodes):
             namespace = get_namespace_from_node(node)
             namespaces.add(namespace)
 
-        asset_view_items.append({"label": asset["name"],
-                                 "asset": asset,
-                                 "looks": looks,
-                                 "namespaces": namespaces})
+        asset_view_items.append({
+            "label": asset["name"],
+            "asset": asset,
+            "looks": looks,
+            "namespaces": namespaces
+        })
 
     return asset_view_items
 
@@ -6,11 +6,14 @@ import logging
 import json
 
 import six
-from bson.objectid import ObjectId
 
 import alembic.Abc
 from maya import cmds
 
+from openpype.client import (
+    get_representation_by_name,
+    get_last_version_by_subset_name,
+)
 from openpype.pipeline import (
     legacy_io,
     load_container,
@@ -155,13 +158,12 @@ def get_look_relationships(version_id):
 
     Returns:
         dict: Dictionary of relations.
 
     """
-    json_representation = legacy_io.find_one({
-        "type": "representation",
-        "parent": version_id,
-        "name": "json"
-    })
+    project_name = legacy_io.active_project()
+    json_representation = get_representation_by_name(
+        project_name, representation_name="json", version_id=version_id
+    )
 
     # Load relationships
     shader_relation = get_representation_path(json_representation)
@@ -184,12 +186,12 @@ def load_look(version_id):
         list of shader nodes.
 
     """
+    project_name = legacy_io.active_project()
     # Get representations of shader file and relationships
-    look_representation = legacy_io.find_one({
-        "type": "representation",
-        "parent": version_id,
-        "name": "ma"
-    })
+    look_representation = get_representation_by_name(
+        project_name, representation_name="ma", version_id=version_id
+    )
 
     # See if representation is already loaded, if so reuse it.
     host = registered_host()
@@ -220,42 +222,6 @@ def load_look(version_id):
     return shader_nodes
 
 
-def get_latest_version(asset_id, subset):
-    # type: (str, str) -> dict
-    """Get latest version of subset.
-
-    Args:
-        asset_id (str): Asset ID
-        subset (str): Subset name.
-
-    Returns:
-        Latest version
-
-    Throws:
-        RuntimeError: When subset or version doesn't exist.
-
-    """
-    subset = legacy_io.find_one({
-        "name": subset,
-        "parent": ObjectId(asset_id),
-        "type": "subset"
-    })
-    if not subset:
-        raise RuntimeError("Subset does not exist: %s" % subset)
-
-    version = legacy_io.find_one(
-        {
-            "type": "version",
-            "parent": subset["_id"]
-        },
-        sort=[("name", -1)]
-    )
-    if not version:
-        raise RuntimeError("Version does not exist.")
-
-    return version
-
-
 def vrayproxy_assign_look(vrayproxy, subset="lookDefault"):
     # type: (str, str) -> None
     """Assign look to vray proxy.
@@ -281,13 +247,20 @@ def vrayproxy_assign_look(vrayproxy, subset="lookDefault"):
         asset_id = node_id.split(":", 1)[0]
         node_ids_by_asset_id[asset_id].add(node_id)
 
+    project_name = legacy_io.active_project()
     for asset_id, node_ids in node_ids_by_asset_id.items():
 
         # Get latest look version
-        try:
-            version = get_latest_version(asset_id, subset=subset)
-        except RuntimeError as exc:
-            print(exc)
+        version = get_last_version_by_subset_name(
+            project_name,
+            subset_name=subset,
+            asset_id=asset_id,
+            fields=["_id"]
+        )
+        if not version:
+            print("Didn't find last version for subset name {}".format(
+                subset
+            ))
             continue
 
         relationships = get_look_relationships(version["_id"])
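Note the error-handling change above: the removed local `get_latest_version` raised `RuntimeError` when nothing matched, while `get_last_version_by_subset_name` returns `None`, so callers branch instead of wrapping the call in `try`/`except`. A sketch of the new calling convention, assuming an OpenPype environment (names and ids are placeholders):

```python
from openpype.client import get_last_version_by_subset_name

version = get_last_version_by_subset_name(
    "my_project",
    subset_name="lookDefault",
    asset_id="<asset ObjectId>",
    fields=["_id"]
)
if version is None:
    print("No published look version found")
```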
@@ -205,3 +205,9 @@ class ToolsDelegate(QtWidgets.QStyledItemDelegate):
 
     def setModelData(self, editor, model, index):
         model.setData(index, editor.value(), QtCore.Qt.EditRole)
+
+    def displayText(self, value, locale):
+        if value:
+            return ", ".join(value)
+        else:
+            return
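The added `displayText` override joins list values for display; falsy values fall through to the bare `return`, i.e. `None`, which Qt renders as an empty cell. A quick, runnable illustration of the same expression outside Qt:

```python
# For a list value the cell renders the joined names; empty or missing
# values render as an empty cell because None is returned.
for value in (["ma", "abc"], [], None):
    print(", ".join(value) if value else None)
# -> ma, abc / None / None
```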