Merge branch 'develop' into feature/OP-3407_Use-query-functions-in-ftrack
89 changed files
CHANGELOG.md
@@ -1,8 +1,42 @@
 # Changelog
 
-## [3.11.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
 
-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.10.0...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.0...3.11.1)
 
+**🆕 New features**
+
+- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
+- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)
+
+**🚀 Enhancements**
+
+- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
+- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
+- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
+- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)
+- Enhancement: More control over thumbnail processing. [\#3259](https://github.com/pypeclub/OpenPype/pull/3259)
+
+**🐛 Bug fixes**
+
+- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
+- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
+- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
+- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
+- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
+- deadline: fixing misidentification of revieables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
+- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
+- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
+- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
+- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)
+
+**🔀 Refactored code**
+
+- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
+
+## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
+
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
+
 ### 📖 Documentation
@@ -11,7 +45,9 @@
 
 **🚀 Enhancements**
 
+- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
+- updated poetry installation source [\#3316](https://github.com/pypeclub/OpenPype/pull/3316)
 - Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
 - TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
 - Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
 - Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
@@ -22,13 +58,13 @@
 - Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
 - TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
 - Nuke: Change default icon path in settings [\#3247](https://github.com/pypeclub/OpenPype/pull/3247)
 - Maya: publishing of animation and pointcache on a farm [\#3225](https://github.com/pypeclub/OpenPype/pull/3225)
 - Maya: Look assigner UI improvements [\#3208](https://github.com/pypeclub/OpenPype/pull/3208)
 - Nuke: add pointcache and animation to loader [\#3186](https://github.com/pypeclub/OpenPype/pull/3186)
 
 **🐛 Bug fixes**
 
 - General: Handle empty source key on instance [\#3342](https://github.com/pypeclub/OpenPype/pull/3342)
 - Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
 - Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
 - hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
 - General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
 - Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
 - Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
@@ -44,12 +80,17 @@
 - Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
 - Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
 - Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)
 - Unreal: Fixed Camera loading in UE5 [\#3238](https://github.com/pypeclub/OpenPype/pull/3238)
 - Flame: debugging [\#3224](https://github.com/pypeclub/OpenPype/pull/3224)
 
+**🔀 Refactored code**
+
+- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
+- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)
+
 **Merged pull requests:**
 
 - Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)
 - Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274)
 - Harmony: 21.1 fix [\#3248](https://github.com/pypeclub/OpenPype/pull/3248)
 
 ## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
@@ -59,9 +100,6 @@
 
 - Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
 - General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
 - Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
 - Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
 - Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
 
 **🐛 Bug fixes**
@@ -70,47 +108,16 @@
 - Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
 - Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
 - Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
 - TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
 - Hiero: debugging frame range and other 3.10 [\#3222](https://github.com/pypeclub/OpenPype/pull/3222)
 - Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
 - Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
 - Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
 - Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)
 - Photoshop: skip collector when automatic testing [\#3202](https://github.com/pypeclub/OpenPype/pull/3202)
 - Nuke: render/workfile version sync doesn't work on farm [\#3185](https://github.com/pypeclub/OpenPype/pull/3185)
 - Ftrack: Review image only if there are no mp4 reviews [\#3183](https://github.com/pypeclub/OpenPype/pull/3183)
 
 **🔀 Refactored code**
 
 - Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)
 
 **Merged pull requests:**
 
 - Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
 - Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
 - Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
 
 ## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)
 
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)
 
 **🚀 Enhancements**
 
 - nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
 - Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
 - Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)
 - Looks: add basic support for Renderman [\#3190](https://github.com/pypeclub/OpenPype/pull/3190)
 
 **🐛 Bug fixes**
 
 - Standalone Publisher: Always create new representation for thumbnail [\#3204](https://github.com/pypeclub/OpenPype/pull/3204)
 - Nuke: render/workfile version sync doesn't work on farm [\#3184](https://github.com/pypeclub/OpenPype/pull/3184)
 - Ftrack: Review image only if there are no mp4 reviews [\#3182](https://github.com/pypeclub/OpenPype/pull/3182)
 
 **Merged pull requests:**
 
 - hiero: otio p3 compatibility issue - metadata on effect use update [\#3194](https://github.com/pypeclub/OpenPype/pull/3194)
 
 ## [3.9.7](https://github.com/pypeclub/OpenPype/tree/3.9.7) (2022-05-11)
 
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)
@@ -44,7 +44,6 @@ from . import resources
 from .plugin import (
     Extractor,
-    Integrator,
     ValidatePipelineOrder,
     ValidateContentsOrder,
@@ -87,7 +86,6 @@ __all__ = [
     # plugin classes
     "Extractor",
-    "Integrator",
     # ordering
     "ValidatePipelineOrder",
     "ValidateContentsOrder",
@@ -23,7 +23,7 @@ def _get_project_connection(project_name=None):
     return mongodb
 
 
-def _prepare_fields(fields):
+def _prepare_fields(fields, required_fields=None):
     if not fields:
         return None
@@ -33,6 +33,10 @@ def _prepare_fields(fields):
     }
     if "_id" not in output:
         output["_id"] = True
+
+    if required_fields:
+        for key in required_fields:
+            output[key] = True
     return output
@@ -747,9 +751,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
         doc["_version_id"]
         for doc in conn.aggregate(_pipeline)
     ]
-    fields = _prepare_fields(fields)
-    if fields and "parent" not in fields:
-        fields.append("parent")
+    fields = _prepare_fields(fields, ["parent"])
 
     version_docs = get_versions(
        project_name, version_ids=version_ids, fields=fields
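A quick sanity check of the new projection helper. This is a standalone sketch, not part of the diff: the body of `_prepare_fields` is re-stated from the two hunks above, with the first lines paraphrased as a dict comprehension.

```python
# Sketch of _prepare_fields as shown in the hunks above.
def _prepare_fields(fields, required_fields=None):
    """Convert a list of field names into a MongoDB projection dict."""
    if not fields:
        return None
    output = {field: True for field in fields}
    if "_id" not in output:
        output["_id"] = True
    if required_fields:
        for key in required_fields:
            output[key] = True
    return output

print(_prepare_fields(["name"], required_fields=["parent"]))
# {'name': True, '_id': True, 'parent': True}
```

Note that the removed code called `fields.append("parent")` on the dict this helper returns, which raises `AttributeError`; threading `required_fields` through in one call is what the "Fix last version function" entry (#3345) in the changelog above appears to address.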
@@ -21,7 +21,7 @@ class AERenderInstance(RenderInstance):
     projectEntity = attr.ib(default=None)
     stagingDir = attr.ib(default=None)
     app_version = attr.ib(default=None)
-    publish_attributes = attr.ib(default=None)
+    publish_attributes = attr.ib(default={})
     file_name = attr.ib(default=None)
@@ -90,7 +90,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
 
         subset_name = inst.data["subset"]
         instance = AERenderInstance(
-            family=family,
+            family="render",
             families=inst.data.get("families", []),
             version=version,
             time="",
@@ -116,7 +116,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
             toBeRenderedOn='deadline',
             fps=fps,
             app_version=app_version,
-            publish_attributes=inst.data.get("publish_attributes"),
+            publish_attributes=inst.data.get("publish_attributes", {}),
             file_name=render_q.file_name
         )
@@ -54,7 +54,7 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin,
 
     order = pyblish.api.ValidatorOrder
     label = "Validate Scene Settings"
-    families = ["render.farm", "render"]
+    families = ["render.farm", "render.local", "render"]
     hosts = ["aftereffects"]
     optional = True
@@ -10,6 +10,7 @@ from . import ops
 
 import pyblish.api
 
+from openpype.client import get_asset_by_name
 from openpype.pipeline import (
     schema,
     legacy_io,
@@ -83,11 +84,9 @@ def uninstall():
 
 
 def set_start_end_frames():
+    project_name = legacy_io.active_project()
     asset_name = legacy_io.Session["AVALON_ASSET"]
-    asset_doc = legacy_io.find_one({
-        "type": "asset",
-        "name": asset_name
-    })
+    asset_doc = get_asset_by_name(project_name, asset_name)
 
     scene = bpy.context.scene
@@ -1,13 +1,11 @@
 import os
 import json
 
-from bson.objectid import ObjectId
-
 import bpy
-import bpy_extras
 import bpy_extras.anim_utils
 
 from openpype.pipeline import legacy_io
+from openpype.client import get_representation_by_name
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 import openpype.api
@@ -131,43 +129,32 @@ class ExtractLayout(openpype.api.Extractor):
 
         fbx_count = 0
 
+        project_name = instance.context.data["projectEntity"]["name"]
         for asset in asset_group.children:
             metadata = asset.get(AVALON_PROPERTY)
 
-            parent = metadata["parent"]
+            version_id = metadata["parent"]
             family = metadata["family"]
 
-            self.log.debug("Parent: {}".format(parent))
+            self.log.debug("Parent: {}".format(version_id))
             # Get blend reference
-            blend = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "blend"
-                },
-                projection={"_id": True})
+            blend = get_representation_by_name(
+                project_name, "blend", version_id, fields=["_id"]
+            )
             blend_id = None
             if blend:
                 blend_id = blend["_id"]
             # Get fbx reference
-            fbx = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "fbx"
-                },
-                projection={"_id": True})
+            fbx = get_representation_by_name(
                project_name, "fbx", version_id, fields=["_id"]
+            )
             fbx_id = None
             if fbx:
                 fbx_id = fbx["_id"]
             # Get abc reference
-            abc = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": ObjectId(parent),
-                    "name": "abc"
-                },
-                projection={"_id": True})
+            abc = get_representation_by_name(
+                project_name, "abc", version_id, fields=["_id"]
+            )
             abc_id = None
             if abc:
                 abc_id = abc["_id"]
@@ -1,5 +1,6 @@
 import os
 import re
+import tempfile
 from pprint import pformat
 from copy import deepcopy
@@ -420,3 +421,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
             "Path `{}` is containing more that one clip".format(path)
         )
         return clips[0]
+
+    def staging_dir(self, instance):
+        """Provide a temporary directory in which to store extracted files
+
+        Upon calling this method the staging directory is stored inside
+        the instance.data['stagingDir']
+        """
+        staging_dir = instance.data.get('stagingDir', None)
+        openpype_temp_dir = os.getenv("OPENPYPE_TEMP_DIR")
+
+        if not staging_dir:
+            if openpype_temp_dir and os.path.exists(openpype_temp_dir):
+                staging_dir = os.path.normpath(
+                    tempfile.mkdtemp(
+                        prefix="pyblish_tmp_",
+                        dir=openpype_temp_dir
+                    )
+                )
+            else:
+                staging_dir = os.path.normpath(
+                    tempfile.mkdtemp(prefix="pyblish_tmp_")
+                )
+            instance.data['stagingDir'] = staging_dir
+
+        instance.context.data["cleanupFullPaths"].append(staging_dir)
+
+        return staging_dir
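As a usage note for the new Flame staging override: a minimal sketch of what it changes in practice. Only the `OPENPYPE_TEMP_DIR` variable name and the fallback logic come from the hunk above; the directory path is invented.

```python
import os
import tempfile

# Point extractions at a pipeline-managed temp root instead of the OS default:
os.environ["OPENPYPE_TEMP_DIR"] = "/mnt/prod/tmp"  # hypothetical location

openpype_temp_dir = os.getenv("OPENPYPE_TEMP_DIR")
if openpype_temp_dir and os.path.exists(openpype_temp_dir):
    staging_dir = os.path.normpath(
        tempfile.mkdtemp(prefix="pyblish_tmp_", dir=openpype_temp_dir))
else:
    # Falls back to the system temp root when the env var is unset/missing.
    staging_dir = os.path.normpath(tempfile.mkdtemp(prefix="pyblish_tmp_"))

print(staging_dir)  # e.g. /mnt/prod/tmp/pyblish_tmp_k2x9q1
```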
@@ -47,6 +47,6 @@ class ValidateAudio(pyblish.api.InstancePlugin):
         formatting_data = {
             "audio_url": audio_path
         }
-        if os.path.isfile(audio_path):
+        if not os.path.isfile(audio_path):
             raise PublishXmlValidationError(self, msg,
                                             formatting_data=formatting_data)
@@ -130,6 +130,8 @@ def get_output_parameter(node):
     elif node_type == "arnold":
         if node.evalParm("ar_ass_export_enable"):
             return node.parm("ar_ass_file")
+    elif node_type == "Redshift_Proxy_Output":
+        return node.parm("RS_archive_file")
 
     raise TypeError("Node type '%s' not supported" % node_type)
@@ -0,0 +1,48 @@
+from openpype.hosts.houdini.api import plugin
+
+
+class CreateRedshiftProxy(plugin.Creator):
+    """Redshift Proxy"""
+
+    label = "Redshift Proxy"
+    family = "redshiftproxy"
+    icon = "magic"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+
+        # Remove the active, we are checking the bypass flag of the nodes
+        self.data.pop("active", None)
+
+        # Redshift provides a `Redshift_Proxy_Output` node type which shows
+        # a limited set of parameters by default and is set to extract a
+        # Redshift Proxy. However when "imprinting" extra parameters needed
+        # for OpenPype it starts showing all its parameters again. It's unclear
+        # why this happens.
+        # TODO: Somehow enforce so that it only shows the original limited
+        #       attributes of the Redshift_Proxy_Output node type
+        self.data.update({"node_type": "Redshift_Proxy_Output"})
+
+    def _process(self, instance):
+        """Creator main entry point.
+
+        Args:
+            instance (hou.Node): Created Houdini instance.
+
+        """
+        parms = {
+            "RS_archive_file": '$HIP/pyblish/`chs("subset")`.$F4.rs',
+        }
+
+        if self.nodes:
+            node = self.nodes[0]
+            path = node.path()
+            parms["RS_archive_sopPath"] = path
+
+        instance.setParms(parms)
+
+        # Lock some Avalon attributes
+        to_lock = ["family", "id"]
+        for name in to_lock:
+            parm = instance.parm(name)
+            parm.lock(True)
@@ -20,7 +20,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
 
     order = pyblish.api.CollectorOrder
     label = "Collect Frames"
-    families = ["vdbcache", "imagesequence", "ass"]
+    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]
 
     def process(self, instance):
@@ -12,6 +12,7 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
         "imagesequence",
         "usd",
         "usdrender",
+        "redshiftproxy"
     ]
 
     hosts = ["houdini"]
@@ -54,6 +55,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
         else:
             out_node = node.parm("loppath").evalAsNode()
 
+    elif node_type == "Redshift_Proxy_Output":
+        out_node = node.parm("RS_archive_sopPath").evalAsNode()
     else:
         raise ValueError(
             "ROP node type '%s' is" " not supported." % node_type
@@ -0,0 +1,48 @@
+import os
+
+import pyblish.api
+import openpype.api
+from openpype.hosts.houdini.api.lib import render_rop
+
+
+class ExtractRedshiftProxy(openpype.api.Extractor):
+
+    order = pyblish.api.ExtractorOrder + 0.1
+    label = "Extract Redshift Proxy"
+    families = ["redshiftproxy"]
+    hosts = ["houdini"]
+
+    def process(self, instance):
+
+        ropnode = instance[0]
+
+        # Get the filename from the filename parameter
+        # `.evalParm(parameter)` will make sure all tokens are resolved
+        output = ropnode.evalParm("RS_archive_file")
+        staging_dir = os.path.normpath(os.path.dirname(output))
+        instance.data["stagingDir"] = staging_dir
+        file_name = os.path.basename(output)
+
+        self.log.info("Writing Redshift Proxy '%s' to '%s'" % (file_name,
+                                                               staging_dir))
+
+        render_rop(ropnode)
+
+        output = instance.data["frames"]
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            "name": "rs",
+            "ext": "rs",
+            "files": output,
+            "stagingDir": staging_dir,
+        }
+
+        # A single frame may also be rendered without start/end frame.
+        if "frameStart" in instance.data and "frameEnd" in instance.data:
+            representation["frameStart"] = instance.data["frameStart"]
+            representation["frameEnd"] = instance.data["frameEnd"]
+
+        instance.data["representations"].append(representation)
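A small sketch of how the creator and extractor above fit together. It assumes a live Houdini session (`hou` is only importable inside Houdini) with Redshift installed; the node path and resolved output are invented, and `chs("subset")` presumes the imprinted "subset" spare parm exists on the ROP.

```python
# Runnable only inside a Houdini session with Redshift available.
import hou

rop = hou.node("/out/Redshift_Proxy_Output1")  # hypothetical ROP path

# The creator pre-fills this parm; backtick expressions read other parms
# and $F4 expands to the zero-padded current frame at evaluation time:
rop.parm("RS_archive_file").set('$HIP/pyblish/`chs("subset")`.$F4.rs')

# The extractor later resolves every token in one call:
print(rop.evalParm("RS_archive_file"))
# e.g. /projects/demo/pyblish/pointcacheMain.1001.rs
```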
@@ -22,7 +22,8 @@ class CreateYetiCache(plugin.Creator):
         # Add animation data without step and handles
         anim_data = lib.collect_animation_data()
         anim_data.pop("step")
-        anim_data.pop("handles")
+        anim_data.pop("handleStart")
+        anim_data.pop("handleEnd")
         self.data.update(anim_data)
 
         # Add samples
@@ -1,15 +1,13 @@
 import os
 import json
 import re
-import glob
 from collections import defaultdict
-from pprint import pprint
 
+import clique
 from maya import cmds
 
 from openpype.api import get_project_settings
 from openpype.pipeline import (
-    legacy_io,
     load,
     get_representation_path
 )
@@ -17,7 +15,15 @@ from openpype.hosts.maya.api import lib
 from openpype.hosts.maya.api.pipeline import containerise
 
 
+def set_attribute(node, attr, value):
+    """Wrapper of set attribute which ignores None values"""
+    if value is None:
+        return
+    lib.set_attribute(node, attr, value)
+
+
 class YetiCacheLoader(load.LoaderPlugin):
     """Load Yeti Cache with one or more Yeti nodes"""
 
     families = ["yeticache", "yetiRig"]
     representations = ["fur"]
@@ -28,6 +34,16 @@ class YetiCacheLoader(load.LoaderPlugin):
     color = "orange"
 
     def load(self, context, name=None, namespace=None, data=None):
+        """Loads a .fursettings file defining how to load .fur sequences
+
+        A single yeticache or yetiRig can have more than a single pgYetiMaya
+        nodes and thus load more than a single yeti.fur sequence.
+
+        The .fursettings file defines what the node names should be and also
+        what "cbId" attribute they should receive to match the original source
+        and allow published looks to also work for Yeti rigs and its caches.
+
+        """
 
         try:
             family = context["representation"]["context"]["family"]
@@ -43,22 +59,11 @@ class YetiCacheLoader(load.LoaderPlugin):
         if not cmds.pluginInfo("pgYetiMaya", query=True, loaded=True):
             cmds.loadPlugin("pgYetiMaya", quiet=True)
 
-        # Get JSON
-        fbase = re.search(r'^(.+)\.(\d+|#+)\.fur', self.fname)
-        if not fbase:
-            raise RuntimeError('Cannot determine fursettings file path')
-        settings_fname = "{}.fursettings".format(fbase.group(1))
-        with open(settings_fname, "r") as fp:
-            fursettings = json.load(fp)
-
-        # Check if resources map exists
-        # Get node name from JSON
-        if "nodes" not in fursettings:
-            raise RuntimeError("Encountered invalid data, expect 'nodes' in "
-                               "fursettings.")
-
-        node_data = fursettings["nodes"]
-        nodes = self.create_nodes(namespace, node_data)
+        # Create Yeti cache nodes according to settings
+        settings = self.read_settings(self.fname)
+        nodes = []
+        for node in settings["nodes"]:
+            nodes.extend(self.create_node(namespace, node))
 
         group_name = "{}:{}".format(namespace, name)
         group_node = cmds.group(nodes, name=group_name)
@@ -111,28 +116,14 @@ class YetiCacheLoader(load.LoaderPlugin):
 
     def update(self, container, representation):
 
-        legacy_io.install()
         namespace = container["namespace"]
         container_node = container["objectName"]
 
-        fur_settings = legacy_io.find_one(
-            {"parent": representation["parent"], "name": "fursettings"}
-        )
-
-        pprint({"parent": representation["parent"], "name": "fursettings"})
-        pprint(fur_settings)
-        assert fur_settings is not None, (
-            "cannot find fursettings representation"
-        )
-
-        settings_fname = get_representation_path(fur_settings)
         path = get_representation_path(representation)
+
         # Get all node data
-        with open(settings_fname, "r") as fp:
-            settings = json.load(fp)
+        settings = self.read_settings(path)
 
         # Collect scene information of asset
-        set_members = cmds.sets(container["objectName"], query=True)
+        set_members = lib.get_container_members(container)
         container_root = lib.get_container_transforms(container,
                                                       members=set_members,
                                                       root=True)
@@ -147,7 +138,7 @@ class YetiCacheLoader(load.LoaderPlugin):
         # Re-assemble metadata with cbId as keys
         meta_data_lookup = {n["cbId"]: n for n in settings["nodes"]}
 
-        # Compare look ups and get the nodes which ar not relevant any more
+        # Delete nodes by "cbId" that are not in the updated version
         to_delete_lookup = {cb_id for cb_id in scene_lookup.keys() if
                             cb_id not in meta_data_lookup}
         if to_delete_lookup:
@@ -163,25 +154,18 @@ class YetiCacheLoader(load.LoaderPlugin):
                                            fullPath=True) or []
                 to_remove.extend(shapes + transforms)
 
-                # Remove id from look uop
+                # Remove id from lookup
                 scene_lookup.pop(_id, None)
 
             cmds.delete(to_remove)
 
-        # replace frame in filename with %04d
-        RE_frame = re.compile(r"(\d+)(\.fur)$")
-        file_name = re.sub(RE_frame, r"%04d\g<2>", os.path.basename(path))
-        for cb_id, data in meta_data_lookup.items():
-
-            # Update cache file name
-            data["attrs"]["cacheFileName"] = os.path.join(
-                os.path.dirname(path), file_name)
+        for cb_id, node_settings in meta_data_lookup.items():
 
             if cb_id not in scene_lookup:
-
                 # Create new nodes
                 self.log.info("Creating new nodes ..")
 
-                new_nodes = self.create_nodes(namespace, [data])
+                new_nodes = self.create_node(namespace, node_settings)
                 cmds.sets(new_nodes, addElement=container_node)
                 cmds.parent(new_nodes, container_root)
@@ -218,14 +202,8 @@ class YetiCacheLoader(load.LoaderPlugin):
                                       children=True)
             yeti_node = yeti_nodes[0]
 
-            for attr, value in data["attrs"].items():
-                # handle empty attribute strings. Those are reported
-                # as None, so their type is NoneType and this is not
-                # supported on attributes in Maya. We change it to
-                # empty string.
-                if value is None:
-                    value = ""
-                lib.set_attribute(attr, value, yeti_node)
+            for attr, value in node_settings["attrs"].items():
+                set_attribute(attr, value, yeti_node)
 
         cmds.setAttr("{}.representation".format(container_node),
                      str(representation["_id"]),
@@ -235,7 +213,6 @@ class YetiCacheLoader(load.LoaderPlugin):
         self.update(container, representation)
 
     # helper functions
-
     def create_namespace(self, asset):
         """Create a unique namespace
         Args:
@@ -253,100 +230,122 @@ class YetiCacheLoader(load.LoaderPlugin):
 
         return namespace
 
-    def validate_cache(self, filename, pattern="%04d"):
-        """Check if the cache has more than 1 frame
+    def get_cache_node_filepath(self, root, node_name):
+        """Get the cache file path for one of the yeti nodes.
 
-        All caches with more than 1 frame need to be called with `%04d`
-        If the cache has only one frame we return that file name as we assume
+        All caches with more than 1 frame need cache file name set with `%04d`
+        If the cache has only one frame we return the file name as we assume
         it is a snapshot.
 
+        This expects the files to be named after the "node name" through
+        exports with <Name> in Yeti.
+
         Args:
-            filename(str)
-            pattern(str)
+            root(str): Folder containing cache files to search in.
+            node_name(str): Node name to search cache files for
 
         Returns:
-            str
+            str: Cache file path value needed for cacheFileName attribute
 
         """
 
-        glob_pattern = filename.replace(pattern, "*")
+        name = node_name.replace(":", "_")
+        pattern = r"^({name})(\.[0-4]+)?(\.fur)$".format(name=re.escape(name))
 
-        escaped = re.escape(filename)
-        re_pattern = escaped.replace(pattern, "-?[0-9]+")
-
-        files = glob.glob(glob_pattern)
-        files = [str(f) for f in files if re.match(re_pattern, f)]
+        files = [fname for fname in os.listdir(root) if re.match(pattern,
+                                                                 fname)]
+        if not files:
+            self.log.error("Could not find cache files for '{}' "
+                           "with pattern {}".format(node_name, pattern))
+            return
 
         if len(files) == 1:
-            return files[0]
-        elif len(files) == 0:
-            self.log.error("Could not find cache files for '%s'" % filename)
+            # Single file
+            return os.path.join(root, files[0])
 
-        return filename
+        # Get filename for the sequence with padding
+        collections, remainder = clique.assemble(files)
+        assert not remainder, "This is a bug"
+        assert len(collections) == 1, "This is a bug"
+        collection = collections[0]
 
-    def create_nodes(self, namespace, settings):
+        # Formats name as {head}%d{tail} like cache.%04d.fur
+        fname = collection.format("{head}{padding}{tail}")
+        return os.path.join(root, fname)
+
+    def create_node(self, namespace, node_settings):
         """Create nodes with the correct namespace and settings
 
         Args:
             namespace(str): namespace
-            settings(list): list of dictionaries
+            node_settings(dict): Single "nodes" entry from .fursettings file.
 
         Returns:
-            list
+            list: Created nodes
 
         """
 
         nodes = []
-        for node_settings in settings:
 
-            # Create pgYetiMaya node
-            original_node = node_settings["name"]
-            node_name = "{}:{}".format(namespace, original_node)
-            yeti_node = cmds.createNode("pgYetiMaya", name=node_name)
+        # Get original names and ids
+        orig_transform_name = node_settings["transform"]["name"]
+        orig_shape_name = node_settings["name"]
 
-            # Create transform node
-            transform_node = node_name.rstrip("Shape")
+        # Add namespace
+        transform_name = "{}:{}".format(namespace, orig_transform_name)
+        shape_name = "{}:{}".format(namespace, orig_shape_name)
 
-            lib.set_id(transform_node, node_settings["transform"]["cbId"])
-            lib.set_id(yeti_node, node_settings["cbId"])
+        # Create pgYetiMaya node
+        transform_node = cmds.createNode("transform",
+                                         name=transform_name)
+        yeti_node = cmds.createNode("pgYetiMaya",
+                                    name=shape_name,
+                                    parent=transform_node)
 
-            nodes.extend([transform_node, yeti_node])
+        lib.set_id(transform_node, node_settings["transform"]["cbId"])
+        lib.set_id(yeti_node, node_settings["cbId"])
 
-            # Ensure the node has no namespace identifiers
-            attributes = node_settings["attrs"]
+        nodes.extend([transform_node, yeti_node])
 
-            # Check if cache file name is stored
+        # Update attributes with defaults
+        attributes = node_settings["attrs"]
+        attributes.update({
+            "viewportDensity": 0.1,
+            "verbosity": 2,
+            "fileMode": 1,
 
-            # get number of # in path and convert it to C prinf format
-            # like %04d expected by Yeti
-            fbase = re.search(r'^(.+)\.(\d+|#+)\.fur', self.fname)
-            if not fbase:
-                raise RuntimeError('Cannot determine file path')
-            padding = len(fbase.group(2))
-            if "cacheFileName" not in attributes:
-                cache = "{}.%0{}d.fur".format(fbase.group(1), padding)
+            # Fix render stats, like Yeti's own
+            # ../scripts/pgYetiNode.mel script
+            "visibleInReflections": True,
+            "visibleInRefractions": True
+        })
 
-                self.validate_cache(cache)
-                attributes["cacheFileName"] = cache
+        # Apply attributes to pgYetiMaya node
+        for attr, value in attributes.items():
+            set_attribute(attr, value, yeti_node)
 
-            # Update attributes with requirements
-            attributes.update({"viewportDensity": 0.1,
-                               "verbosity": 2,
-                               "fileMode": 1})
-
-            # Apply attributes to pgYetiMaya node
-            for attr, value in attributes.items():
-                if value is None:
-                    continue
-                lib.set_attribute(attr, value, yeti_node)
-
-            # Fix for : YETI-6
-            # Fixes the render stats (this is literally taken from Perigrene's
-            # ../scripts/pgYetiNode.mel script)
-            cmds.setAttr("{}.visibleInReflections".format(yeti_node), True)
-            cmds.setAttr("{}.visibleInRefractions".format(yeti_node), True)
-
-            # Connect to the time node
-            cmds.connectAttr("time1.outTime", "%s.currentTime" % yeti_node)
+        # Connect to the time node
+        cmds.connectAttr("time1.outTime", "%s.currentTime" % yeti_node)
 
         return nodes
+
+    def read_settings(self, path):
+        """Read .fursettings file and compute some additional attributes"""
+
+        with open(path, "r") as fp:
+            fur_settings = json.load(fp)
+
+        if "nodes" not in fur_settings:
+            raise RuntimeError("Encountered invalid data, "
+                               "expected 'nodes' in fursettings.")
+
+        # Compute the cache file name values we want to set for the nodes
+        root = os.path.dirname(path)
+        for node in fur_settings["nodes"]:
+            cache_filename = self.get_cache_node_filepath(
+                root=root, node_name=node["name"])
+
+            attrs = node.get("attrs", {})  # allow 'attrs' to not exist
+            attrs["cacheFileName"] = cache_filename
+            node["attrs"] = attrs
+
+        return fur_settings
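The padded-sequence formatting in `get_cache_node_filepath` leans on `clique`; a standalone check of that behaviour, with invented file names:

```python
import clique

# Zero-padded frame numbers let clique infer the padding width.
files = ["cache.0001.fur", "cache.0002.fur", "cache.0003.fur"]
collections, remainder = clique.assemble(files)
assert not remainder and len(collections) == 1

# "{padding}" renders as a printf-style token, which is the form Yeti
# expects in its cacheFileName attribute:
print(collections[0].format("{head}{padding}{tail}"))  # cache.%04d.fur
```

The two `assert ... "This is a bug"` lines in the hunk encode the same expectation: one node name should map to exactly one frame collection with no stray files.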
@@ -28,18 +28,19 @@ def get_all_children(nodes):
         dag = sel.getDagPath(0)
 
         iterator.reset(dag)
-        next(iterator)  # ignore self
+        # ignore self
+        iterator.next()  # noqa: B305
         while not iterator.isDone():
 
             path = iterator.fullPathName()
 
             if path in traversed:
                 iterator.prune()
-                next(iterator)
+                iterator.next()  # noqa: B305
                 continue
 
             traversed.add(path)
-            next(iterator)
+            iterator.next()  # noqa: B305
 
     return list(traversed)
@@ -43,11 +43,12 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
 
         instance.data["resources"] = yeti_resources
 
-        # Force frame range for export
-        instance.data["frameStart"] = cmds.playbackOptions(
-            query=True, animationStartTime=True)
-        instance.data["frameEnd"] = cmds.playbackOptions(
-            query=True, animationStartTime=True)
+        # Force frame range for yeti cache export for the rig
+        start = cmds.playbackOptions(query=True, animationStartTime=True)
+        for key in ["frameStart", "frameEnd",
+                    "frameStartHandle", "frameEndHandle"]:
+            instance.data[key] = start
+        instance.data["preroll"] = 0
 
     def collect_input_connections(self, instance):
         """Collect the inputs for all nodes in the input_SET"""
@@ -25,13 +25,10 @@ class ExtractYetiCache(openpype.api.Extractor):
         # Define extract output file path
         dirname = self.staging_dir(instance)
 
-        # Yeti related staging dirs
-        data_file = os.path.join(dirname, "yeti.fursettings")
-
         # Collect information for writing cache
-        start_frame = instance.data.get("frameStartHandle")
-        end_frame = instance.data.get("frameEndHandle")
-        preroll = instance.data.get("preroll")
+        start_frame = instance.data["frameStartHandle"]
+        end_frame = instance.data["frameEndHandle"]
+        preroll = instance.data["preroll"]
         if preroll > 0:
             start_frame -= preroll
@@ -57,32 +54,35 @@ class ExtractYetiCache(openpype.api.Extractor):
         cache_files = [x for x in os.listdir(dirname) if x.endswith(".fur")]
 
         self.log.info("Writing metadata file")
-        settings = instance.data.get("fursettings", None)
-        if settings is not None:
-            with open(data_file, "w") as fp:
-                json.dump(settings, fp, ensure_ascii=False)
+        settings = instance.data["fursettings"]
+        fursettings_path = os.path.join(dirname, "yeti.fursettings")
+        with open(fursettings_path, "w") as fp:
+            json.dump(settings, fp, ensure_ascii=False)
 
         # build representations
         if "representations" not in instance.data:
             instance.data["representations"] = []
 
         self.log.info("cache files: {}".format(cache_files[0]))
-        instance.data["representations"].append(
-            {
-                'name': 'fur',
-                'ext': 'fur',
-                'files': cache_files[0] if len(cache_files) == 1 else cache_files,
-                'stagingDir': dirname,
-                'frameStart': int(start_frame),
-                'frameEnd': int(end_frame)
-            }
-        )
+
+        # Workaround: We do not explicitly register these files with the
+        # representation solely so that we can write multiple sequences
+        # a single Subset without renaming - it's a bit of a hack
+        # TODO: Implement better way to manage this sort of integration
+        if 'transfers' not in instance.data:
+            instance.data['transfers'] = []
+
+        publish_dir = instance.data["publishDir"]
+        for cache_filename in cache_files:
+            src = os.path.join(dirname, cache_filename)
+            dst = os.path.join(publish_dir, os.path.basename(cache_filename))
+            instance.data['transfers'].append([src, dst])
 
         instance.data["representations"].append(
             {
-                'name': 'fursettings',
+                'name': 'fur',
                 'ext': 'fursettings',
-                'files': os.path.basename(data_file),
+                'files': os.path.basename(fursettings_path),
                 'stagingDir': dirname
             }
         )
@@ -124,8 +124,8 @@ class ExtractYetiRig(openpype.api.Extractor):
         settings_path = os.path.join(dirname, "yeti.rigsettings")
 
         # Yeti related staging dirs
-        maya_path = os.path.join(
-            dirname, "yeti_rig.{}".format(self.scene_type))
+        maya_path = os.path.join(dirname,
+                                 "yeti_rig.{}".format(self.scene_type))
 
         self.log.info("Writing metadata file")
@@ -157,7 +157,7 @@ class ExtractYetiRig(openpype.api.Extractor):
         input_set = next(i for i in instance if i == "input_SET")
 
         # Get all items
-        set_members = cmds.sets(input_set, query=True)
+        set_members = cmds.sets(input_set, query=True) or []
         set_members += cmds.listRelatives(set_members,
                                           allDescendents=True,
                                           fullPath=True) or []
@@ -167,7 +167,7 @@ class ExtractYetiRig(openpype.api.Extractor):
         resources = instance.data.get("resources", {})
         with disconnect_plugs(settings, members):
             with yetigraph_attribute_values(resources_dir, resources):
-                with maya.attribute_values(attr_value):
+                with lib.attribute_values(attr_value):
                     cmds.select(nodes, noExpand=True)
                     cmds.file(maya_path,
                               force=True,
@@ -7,7 +7,7 @@ import clique
 
 class NukeRenderLocal(openpype.api.Extractor):
     # TODO: rewrite docstring to nuke
-    """Render the current Fusion composition locally.
+    """Render the current Nuke composition locally.
 
     Extract the result of savers by starting a comp render
     This will run the local render of Fusion.
@@ -23,9 +23,13 @@ class ExtractThumbnail(openpype.api.Extractor):
     families = ["review"]
     hosts = ["nuke"]
 
-    # presets
+    # settings
+    use_rendered = False
+    bake_viewer_process = True
+    bake_viewer_input_process = True
     nodes = {}
 
     def process(self, instance):
         if "render.farm" in instance.data["families"]:
             return
@@ -38,11 +42,17 @@ class ExtractThumbnail(openpype.api.Extractor):
         self.render_thumbnail(instance)
 
     def render_thumbnail(self, instance):
+        first_frame = instance.data["frameStartHandle"]
+        last_frame = instance.data["frameEndHandle"]
+
+        # find frame range and define middle thumb frame
+        mid_frame = int((last_frame - first_frame) / 2)
+
         node = instance[0]  # group node
         self.log.info("Creating staging dir...")
 
         if "representations" not in instance.data:
-            instance.data["representations"] = list()
+            instance.data["representations"] = []
 
         staging_dir = os.path.normpath(
             os.path.dirname(instance.data['path']))
@@ -53,7 +63,11 @@ class ExtractThumbnail(openpype.api.Extractor):
             "StagingDir `{0}`...".format(instance.data["stagingDir"]))
 
         temporary_nodes = []
 
+        # try to connect already rendered images
+        previous_node = node
         collection = instance.data.get("collection", None)
         self.log.debug("__ collection: `{}`".format(collection))
 
         if collection:
             # get path
@@ -61,40 +75,45 @@ class ExtractThumbnail(openpype.api.Extractor):
                                             "{head}{padding}{tail}"))
             fhead = collection.format("{head}")
 
-            # get first and last frame
-            first_frame = min(collection.indexes)
-            last_frame = max(collection.indexes)
+            thumb_fname = list(collection)[mid_frame]
         else:
-            fname = os.path.basename(instance.data.get("path", None))
+            fname = thumb_fname = os.path.basename(
+                instance.data.get("path", None))
             fhead = os.path.splitext(fname)[0] + "."
-            first_frame = instance.data.get("frameStart", None)
-            last_frame = instance.data.get("frameEnd", None)
+
+        self.log.debug("__ fhead: `{}`".format(fhead))
 
         if "#" in fhead:
             fhead = fhead.replace("#", "")[:-1]
 
-        path_render = os.path.join(staging_dir, fname).replace("\\", "/")
-        # check if file exist otherwise connect to write node
-        if os.path.isfile(path_render):
+        path_render = os.path.join(
+            staging_dir, thumb_fname).replace("\\", "/")
+        self.log.debug("__ path_render: `{}`".format(path_render))
+
+        if self.use_rendered and os.path.isfile(path_render):
+            # check if file exist otherwise connect to write node
             rnode = nuke.createNode("Read")
 
             rnode["file"].setValue(path_render)
 
             rnode["first"].setValue(first_frame)
             rnode["origfirst"].setValue(first_frame)
             rnode["last"].setValue(last_frame)
             rnode["origlast"].setValue(last_frame)
+
+            # turn it raw if none of baking is ON
+            if all([
+                not self.bake_viewer_input_process,
+                not self.bake_viewer_process
+            ]):
+                rnode["raw"].setValue(True)
 
             temporary_nodes.append(rnode)
             previous_node = rnode
-        else:
-            previous_node = node
 
-        # get input process and connect it to baking
-        ipn = self.get_view_process_node()
-        if ipn is not None:
-            ipn.setInput(0, previous_node)
-            previous_node = ipn
-            temporary_nodes.append(ipn)
+        # bake viewer input look node into thumbnail image
+        if self.bake_viewer_input_process:
+            # get input process and connect it to baking
+            ipn = self.get_view_process_node()
+            if ipn is not None:
+                ipn.setInput(0, previous_node)
+                previous_node = ipn
+                temporary_nodes.append(ipn)
 
         reformat_node = nuke.createNode("Reformat")
@@ -110,10 +129,12 @@ class ExtractThumbnail(openpype.api.Extractor):
         previous_node = reformat_node
         temporary_nodes.append(reformat_node)
 
-        dag_node = nuke.createNode("OCIODisplay")
-        dag_node.setInput(0, previous_node)
-        previous_node = dag_node
-        temporary_nodes.append(dag_node)
+        # bake viewer colorspace into thumbnail image
+        if self.bake_viewer_process:
+            dag_node = nuke.createNode("OCIODisplay")
+            dag_node.setInput(0, previous_node)
+            previous_node = dag_node
+            temporary_nodes.append(dag_node)
 
         # create write node
         write_node = nuke.createNode("Write")
@@ -128,26 +149,18 @@ class ExtractThumbnail(openpype.api.Extractor):
         temporary_nodes.append(write_node)
         tags = ["thumbnail", "publish_on_farm"]
 
-        # retime for
-        mid_frame = int((int(last_frame) - int(first_frame)) / 2) \
-            + int(first_frame)
-        first_frame = int(last_frame) / 2
-        last_frame = int(last_frame) / 2
-
         repre = {
             'name': name,
             'ext': "jpg",
             "outputName": "thumb",
             'files': file,
             "stagingDir": staging_dir,
-            "frameStart": first_frame,
-            "frameEnd": last_frame,
             "tags": tags
         }
         instance.data["representations"].append(repre)
 
         # Render frames
-        nuke.execute(write_node.name(), int(mid_frame), int(mid_frame))
+        nuke.execute(write_node.name(), mid_frame, mid_frame)
 
         self.log.debug(
             "representations: {}".format(instance.data["representations"]))
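Worked through with concrete numbers, the retime removal means the thumbnail frame is now derived once, up front. The frame values below are invented:

```python
# Standalone arithmetic check of the new mid-frame logic.
first_frame = 1001  # instance.data["frameStartHandle"]
last_frame = 1101   # instance.data["frameEndHandle"]

mid_frame = int((last_frame - first_frame) / 2)
print(mid_frame)  # 50

# 50 is an offset into the rendered sequence, not an absolute frame:
# list(collection)[mid_frame] picks the middle file, and nuke.execute()
# renders only that single frame through the temporary Write node.
```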
@@ -151,15 +151,11 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
                 "resolutionWidth": resolution_width,
                 "resolutionHeight": resolution_height,
                 "pixelAspect": pixel_aspect,
-                "review": review
+                "review": review,
+                "representations": []
             })
-            self.log.info("collected instance: {}".format(instance.data))
             instances.append(instance)
 
-        # create instances in context data if not are created yet
-        if not context.data.get("instances"):
-            context.data["instances"] = list()
-
         context.data["instances"].extend(instances)
         self.log.debug("context: {}".format(context))
@@ -17,7 +17,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
     label = "Pre-collect Workfile"
     hosts = ['nuke']
 
-    def process(self, context):
+    def process(self, context):  # sourcery skip: avoid-builtin-shadow
         root = nuke.root()
 
         current_file = os.path.normpath(nuke.root().name())
@@ -74,20 +74,6 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         }
         context.data.update(script_data)
 
-        # creating instance data
-        instance.data.update({
-            "subset": subset,
-            "label": base_name,
-            "name": base_name,
-            "publish": root.knob('publish').value(),
-            "family": family,
-            "families": [family],
-            "representations": list()
-        })
-
-        # adding basic script data
-        instance.data.update(script_data)
-
         # creating representation
         representation = {
             'name': 'nk',
@@ -96,12 +82,18 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
             "stagingDir": staging_dir,
         }
 
-        instance.data["representations"].append(representation)
+        # creating instance data
+        instance.data.update({
+            "subset": subset,
+            "label": base_name,
+            "name": base_name,
+            "publish": root.knob('publish').value(),
+            "family": family,
+            "families": [family],
+            "representations": [representation]
+        })
+
+        # adding basic script data
+        instance.data.update(script_data)
 
         self.log.info('Publishing script version')
 
         # create instances in context data if not are created yet
         if not context.data.get("instances"):
             context.data["instances"] = list()
 
         context.data["instances"].append(instance)
@@ -72,12 +72,12 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         if "representations" not in instance.data:
             instance.data["representations"] = list()
 
-            representation = {
-                'name': ext,
-                'ext': ext,
-                "stagingDir": output_dir,
-                "tags": list()
-            }
+        representation = {
+            'name': ext,
+            'ext': ext,
+            "stagingDir": output_dir,
+            "tags": list()
+        }
 
         try:
             collected_frames = [f for f in os.listdir(output_dir)
@@ -175,6 +175,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
             "frameEndHandle": last_frame,
         })
 
+        # make sure rendered sequence on farm will
+        # be used for exctract review
+        if not instance.data["review"]:
+            instance.data["useSequenceForReview"] = False
+
         # * Add audio to instance if exists.
         # Find latest versions document
         version_doc = pype.get_latest_version(
@@ -1 +0,0 @@
-import knob_scripter
[11 binary image files deleted; the page showed only "Before" width/height/size rows, 1.2 KiB to 2.7 KiB each.]
@@ -1,4 +0,0 @@
-import nuke
-
-# default write mov
-nuke.knobDefault('Write.mov.colorspace', 'sRGB')
@@ -3,7 +3,7 @@ import json
 import pyblish.api
 
 from openpype.lib import get_subset_name_with_asset_doc
-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_name
 
 
 class CollectBulkMovInstances(pyblish.api.InstancePlugin):
@@ -24,12 +24,9 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
 
     def process(self, instance):
         context = instance.context
+        project_name = context.data["projectEntity"]["name"]
         asset_name = instance.data["asset"]
 
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
+        asset_doc = get_asset_by_name(project_name, asset_name)
         if not asset_doc:
             raise AssertionError((
                 "Couldn't find Asset document with name \"{}\""
@@ -52,7 +49,7 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
             self.subset_name_variant,
             task_name,
             asset_doc,
-            legacy_io.Session["AVALON_PROJECT"]
+            project_name
         )
         instance_name = f"{asset_name}_{subset_name}"
@@ -3,7 +3,7 @@ import re
 from copy import deepcopy
 import pyblish.api
 
-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_id
 
 
 class CollectHierarchyInstance(pyblish.api.ContextPlugin):
@@ -61,27 +61,32 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
             **instance.data["anatomyData"])
 
     def create_hierarchy(self, instance):
-        parents = list()
-        hierarchy = list()
-        visual_hierarchy = [instance.context.data["assetEntity"]]
+        asset_doc = instance.context.data["assetEntity"]
+        project_doc = instance.context.data["projectEntity"]
+        project_name = project_doc["name"]
+        visual_hierarchy = [asset_doc]
+        current_doc = asset_doc
         while True:
-            visual_parent = legacy_io.find_one(
-                {"_id": visual_hierarchy[-1]["data"]["visualParent"]}
-            )
-            if visual_parent:
-                visual_hierarchy.append(visual_parent)
-            else:
-                visual_hierarchy.append(
-                    instance.context.data["projectEntity"])
+            visual_parent_id = current_doc["data"]["visualParent"]
+            visual_parent = None
+            if visual_parent_id:
+                visual_parent = get_asset_by_id(project_name, visual_parent_id)
+
+            if not visual_parent:
+                visual_hierarchy.append(project_doc)
                 break
+            visual_hierarchy.append(visual_parent)
+            current_doc = visual_parent
 
         # add current selection context hierarchy from standalonepublisher
+        parents = list()
         for entity in reversed(visual_hierarchy):
             parents.append({
                 "entity_type": entity["data"]["entityType"],
                 "entity_name": entity["name"]
             })
 
+        hierarchy = list()
         if self.shot_add_hierarchy:
             parent_template_patern = re.compile(r"\{([a-z]*?)\}")
             # fill the parents parts from presets
@@ -129,9 +134,8 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
         self.log.debug(f"Hierarchy: {hierarchy}")
         self.log.debug(f"parents: {parents}")
 
+        tasks_to_add = dict()
         if self.shot_add_tasks:
-            tasks_to_add = dict()
-            project_doc = legacy_io.find_one({"type": "project"})
             project_tasks = project_doc["config"]["tasks"]
             for task_name, task_data in self.shot_add_tasks.items():
                 _task_data = deepcopy(task_data)
@@ -150,9 +154,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
                         task_name,
                         list(project_tasks.keys())))
 
-            instance.data["tasks"] = tasks_to_add
-        else:
-            instance.data["tasks"] = dict()
+        instance.data["tasks"] = tasks_to_add
 
         # updating hierarchy data
         instance.data["anatomyData"].update({
@@ -4,7 +4,7 @@ import collections
 import pyblish.api
 from pprint import pformat
 
-from openpype.pipeline import legacy_io
+from openpype.client import get_assets
 
 
 class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):
@@ -119,8 +119,9 @@ class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):
 
     def _asset_docs_by_parent_id(self, instance):
         # Query all assets for project and store them by parent's id to list
+        project_name = instance.context.data["projectEntity"]["name"]
         asset_docs_by_parent_id = collections.defaultdict(list)
-        for asset_doc in legacy_io.find({"type": "asset"}):
+        for asset_doc in get_assets(project_name):
             parent_id = asset_doc["data"]["visualParent"]
             asset_docs_by_parent_id[parent_id].append(asset_doc)
         return asset_docs_by_parent_id
@@ -1,9 +1,7 @@
 import pyblish.api
 
-from openpype.pipeline import (
-    PublishXmlValidationError,
-    legacy_io,
-)
+from openpype.client import get_assets
+from openpype.pipeline import PublishXmlValidationError
 
 
 class ValidateTaskExistence(pyblish.api.ContextPlugin):
@ -20,15 +18,11 @@ class ValidateTaskExistence(pyblish.api.ContextPlugin):
|
|||
for instance in context:
|
||||
asset_names.add(instance.data["asset"])
|
||||
|
||||
asset_docs = legacy_io.find(
|
||||
{
|
||||
"type": "asset",
|
||||
"name": {"$in": list(asset_names)}
|
||||
},
|
||||
{
|
||||
"name": 1,
|
||||
"data.tasks": 1
|
||||
}
|
||||
project_name = context.data["projectEntity"]["name"]
|
||||
asset_docs = get_assets(
|
||||
project_name,
|
||||
asset_names=asset_names,
|
||||
fields=["name", "data.tasks"]
|
||||
)
|
||||
tasks_by_asset_names = {}
|
||||
for asset_doc in asset_docs:
|
||||
|
|
|
|||
|
|
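The hunk above shows the recurring shape of this refactor: a raw `legacy_io.find` with hand-built filter and projection dictionaries becomes a single `get_assets` call with `asset_names` and `fields` arguments. A minimal sketch of what such a wrapper could look like under the hood — the Mongo URL, database name and connection handling here are assumptions for illustration, not the actual `openpype.client` internals:

```python
from pymongo import MongoClient


def get_assets(project_name, asset_names=None, fields=None):
    # Hypothetical stand-in for openpype.client.get_assets; the real
    # function resolves its connection from OpenPype's configured Mongo URL.
    client = MongoClient("mongodb://localhost:27017")  # assumed URL
    query = {"type": "asset"}
    if asset_names:
        query["name"] = {"$in": list(asset_names)}
    # No 'fields' means full documents, mirroring the call sites above.
    projection = {field: 1 for field in fields} if fields else None
    return client["avalon"][project_name].find(query, projection)
```
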
@@ -13,9 +13,13 @@ import tempfile
 import math

 import pyblish.api

+from openpype.client import (
+    get_asset_by_name,
+    get_last_version_by_subset_name
+)
 from openpype.lib import (
     prepare_template_data,
-    get_asset,
     get_ffprobe_streams,
     convert_ffprobe_fps_value,
 )

@@ -23,7 +27,6 @@ from openpype.lib.plugin_tools import (
     parse_json,
     get_subset_name_with_asset_doc
 )
-from openpype.pipeline import legacy_io


 class CollectPublishedFiles(pyblish.api.ContextPlugin):

@@ -56,8 +59,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
         self.log.info("task_sub:: {}".format(task_subfolders))

+        project_name = context.data["project_name"]
         asset_name = context.data["asset"]
-        asset_doc = get_asset()
+        asset_doc = get_asset_by_name(project_name, asset_name)
         task_name = context.data["task"]
         task_type = context.data["taskType"]
-        project_name = context.data["project_name"]

@@ -80,7 +84,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
             family, variant, task_name, asset_doc,
             project_name=project_name, host_name="webpublisher"
         )
-        version = self._get_last_version(asset_name, subset_name) + 1
+        version = self._get_next_version(
+            project_name, asset_doc, subset_name
+        )

         instance = context.create_instance(subset_name)
         instance.data["asset"] = asset_name

@@ -219,55 +225,19 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
                 config["families"],
                 config["tags"])

-    def _get_last_version(self, asset_name, subset_name):
-        """Returns version number or 0 for 'asset' and 'subset'"""
-        query = [
-            {
-                "$match": {"type": "asset", "name": asset_name}
-            },
-            {
-                "$lookup":
-                    {
-                        "from": os.environ["AVALON_PROJECT"],
-                        "localField": "_id",
-                        "foreignField": "parent",
-                        "as": "subsets"
-                    }
-            },
-            {
-                "$unwind": "$subsets"
-            },
-            {
-                "$match": {"subsets.type": "subset",
-                           "subsets.name": subset_name}},
-            {
-                "$lookup":
-                    {
-                        "from": os.environ["AVALON_PROJECT"],
-                        "localField": "subsets._id",
-                        "foreignField": "parent",
-                        "as": "versions"
-                    }
-            },
-            {
-                "$unwind": "$versions"
-            },
-            {
-                "$group": {
-                    "_id": {
-                        "asset_name": "$name",
-                        "subset_name": "$subsets.name"
-                    },
-                    'version': {'$max': "$versions.name"}
-                }
-            }
-        ]
-        version = list(legacy_io.aggregate(query))
-
-        if version:
-            return version[0].get("version") or 0
-        else:
-            return 0
+    def _get_next_version(self, project_name, asset_doc, subset_name):
+        """Returns version number or 1 for 'asset' and 'subset'"""
+
+        version_doc = get_last_version_by_subset_name(
+            project_name,
+            subset_name,
+            asset_doc["_id"],
+            fields=["name"]
+        )
+        version = 1
+        if version_doc:
+            version += int(version_doc["name"])
+        return version

     def _get_number_of_frames(self, file_url):
         """Return duration in frames"""

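The removed `_get_last_version` needed a four-stage aggregation with two `$lookup`s just to find the highest existing version; the new `_get_next_version` delegates that to `get_last_version_by_subset_name` and only does the `+ 1` locally. The calling pattern, restated as a standalone sketch — it assumes, as the diff does, that a version document stores its number in the `name` field:

```python
def next_version_number(project_name, asset_doc, subset_name):
    # get_last_version_by_subset_name returns None when nothing was
    # published yet, so the first publish of a subset becomes version 1.
    version_doc = get_last_version_by_subset_name(
        project_name, subset_name, asset_doc["_id"], fields=["name"]
    )
    if version_doc is None:
        return 1
    return int(version_doc["name"]) + 1
```
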
@@ -2,11 +2,15 @@
 import os
 import json
 import datetime
-from bson.objectid import ObjectId
 import collections
-from aiohttp.web_response import Response
 import subprocess
+from bson.objectid import ObjectId
+from aiohttp.web_response import Response

+from openpype.client import (
+    get_projects,
+    get_assets,
+)
 from openpype.lib import (
     OpenPypeMongoConnection,
     PypeLogger,

@@ -16,30 +20,29 @@ from openpype.lib.remote_publish import (
     ERROR_STATUS,
     REPROCESS_STATUS
 )
-from openpype.pipeline import AvalonMongoDB
-from openpype_modules.avalon_apps.rest_api import _RestApiEndpoint
+from openpype.settings import get_project_settings
 from openpype_modules.webserver.base_routes import RestApiEndpoint

-log = PypeLogger.get_logger("WebServer")
+log = PypeLogger.get_logger("WebpublishRoutes")


-class RestApiResource:
-    """Resource carrying needed info and Avalon DB connection for publish."""
-    def __init__(self, server_manager, executable, upload_dir,
-                 studio_task_queue=None):
-        self.server_manager = server_manager
-        self.upload_dir = upload_dir
-        self.executable = executable
-
-        if studio_task_queue is None:
-            studio_task_queue = collections.deque()
-        self.studio_task_queue = studio_task_queue
-
-        self.dbcon = AvalonMongoDB()
-        self.dbcon.install()
+class ResourceRestApiEndpoint(RestApiEndpoint):
+    def __init__(self, resource):
+        self.resource = resource
+        super(ResourceRestApiEndpoint, self).__init__()
+
+
+class WebpublishApiEndpoint(ResourceRestApiEndpoint):
+    @property
+    def dbcon(self):
+        return self.resource.dbcon


+class JsonApiResource:
+    """Resource for json manipulation.
+
+    All resources handling sending output to REST should inherit from
+    """
     @staticmethod
     def json_dump_handler(value):
         if isinstance(value, datetime.datetime):

@@ -59,19 +62,33 @@ class RestApiResource:
         ).encode("utf-8")


-class OpenPypeRestApiResource(RestApiResource):
+class RestApiResource(JsonApiResource):
+    """Resource carrying needed info and Avalon DB connection for publish."""
+    def __init__(self, server_manager, executable, upload_dir,
+                 studio_task_queue=None):
+        self.server_manager = server_manager
+        self.upload_dir = upload_dir
+        self.executable = executable
+
+        if studio_task_queue is None:
+            studio_task_queue = collections.deque()
+        self.studio_task_queue = studio_task_queue
+
+
+class WebpublishRestApiResource(JsonApiResource):
     """Resource carrying OP DB connection for storing batch info into DB."""
-    def __init__(self, ):
+
+    def __init__(self):
         mongo_client = OpenPypeMongoConnection.get_mongo_client()
         database_name = os.environ["OPENPYPE_DATABASE_NAME"]
         self.dbcon = mongo_client[database_name]["webpublishes"]


-class ProjectsEndpoint(_RestApiEndpoint):
+class ProjectsEndpoint(ResourceRestApiEndpoint):
     """Returns list of dict with project info (id, name)."""
     async def get(self) -> Response:
         output = []
-        for project_doc in self.dbcon.projects():
+        for project_doc in get_projects():
             ret_val = {
                 "id": project_doc["_id"],
                 "name": project_doc["name"]

@@ -84,7 +101,7 @@ class ProjectsEndpoint(_RestApiEndpoint):
         )


-class HiearchyEndpoint(_RestApiEndpoint):
+class HiearchyEndpoint(ResourceRestApiEndpoint):
     """Returns dictionary with context tree from assets."""
     async def get(self, project_name) -> Response:
         query_projection = {

@@ -96,10 +113,7 @@ class HiearchyEndpoint(_RestApiEndpoint):
             "type": 1,
         }

-        asset_docs = self.dbcon.database[project_name].find(
-            {"type": "asset"},
-            query_projection
-        )
+        asset_docs = get_assets(project_name, fields=query_projection.keys())
         asset_docs_by_id = {
             asset_doc["_id"]: asset_doc
             for asset_doc in asset_docs

@@ -183,7 +197,7 @@ class TaskNode(Node):
         self["attributes"] = {}


-class BatchPublishEndpoint(_RestApiEndpoint):
+class BatchPublishEndpoint(WebpublishApiEndpoint):
     """Triggers headless publishing of batch."""
     async def post(self, request) -> Response:
         # Validate existence of openpype executable

@@ -288,7 +302,7 @@ class BatchPublishEndpoint(_RestApiEndpoint):
         )


-class TaskPublishEndpoint(_RestApiEndpoint):
+class TaskPublishEndpoint(WebpublishApiEndpoint):
     """Prepared endpoint triggered after each task - for future development."""
     async def post(self, request) -> Response:
         return Response(

@@ -298,8 +312,12 @@ class TaskPublishEndpoint(_RestApiEndpoint):
         )


-class BatchStatusEndpoint(_RestApiEndpoint):
-    """Returns dict with info for batch_id."""
+class BatchStatusEndpoint(WebpublishApiEndpoint):
+    """Returns dict with info for batch_id.
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
     async def get(self, batch_id) -> Response:
         output = self.dbcon.find_one({"batch_id": batch_id})

@@ -318,8 +336,12 @@ class BatchStatusEndpoint(_RestApiEndpoint):
         )


-class UserReportEndpoint(_RestApiEndpoint):
-    """Returns list of dict with batch info for user (email address)."""
+class UserReportEndpoint(WebpublishApiEndpoint):
+    """Returns list of dict with batch info for user (email address).
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
     async def get(self, user) -> Response:
         output = list(self.dbcon.find({"user": user},
                                       projection={"log": False}))

@@ -338,7 +360,7 @@ class UserReportEndpoint(_RestApiEndpoint):
         )


-class ConfiguredExtensionsEndpoint(_RestApiEndpoint):
+class ConfiguredExtensionsEndpoint(WebpublishApiEndpoint):
     """Returns dict of extensions which have mapping to family.

     Returns:

@@ -378,8 +400,12 @@ class ConfiguredExtensionsEndpoint(_RestApiEndpoint):
         )


-class BatchReprocessEndpoint(_RestApiEndpoint):
-    """Marks latest 'batch_id' for reprocessing, returns 404 if not found."""
+class BatchReprocessEndpoint(WebpublishApiEndpoint):
+    """Marks latest 'batch_id' for reprocessing, returns 404 if not found.
+
+    Uses 'WebpublishRestApiResource'.
+    """
+
     async def post(self, batch_id) -> Response:
         batches = self.dbcon.find({"batch_id": batch_id,
                                    "status": ERROR_STATUS}).sort("_id", -1)

@@ -10,7 +10,7 @@ from openpype.lib import PypeLogger

 from .webpublish_routes import (
     RestApiResource,
-    OpenPypeRestApiResource,
+    WebpublishRestApiResource,
     HiearchyEndpoint,
     ProjectsEndpoint,
     ConfiguredExtensionsEndpoint,

@@ -27,7 +27,7 @@ from openpype.lib.remote_publish import (
 )


-log = PypeLogger().get_logger("webserver_gui")
+log = PypeLogger.get_logger("webserver_gui")


 def run_webserver(*args, **kwargs):

@@ -69,16 +69,14 @@ def run_webserver(*args, **kwargs):
     )

     # triggers publish
-    webpublisher_task_publish_endpoint = \
-        BatchPublishEndpoint(resource)
+    webpublisher_task_publish_endpoint = BatchPublishEndpoint(resource)
     server_manager.add_route(
         "POST",
         "/api/webpublish/batch",
         webpublisher_task_publish_endpoint.dispatch
     )

-    webpublisher_batch_publish_endpoint = \
-        TaskPublishEndpoint(resource)
+    webpublisher_batch_publish_endpoint = TaskPublishEndpoint(resource)
     server_manager.add_route(
         "POST",
         "/api/webpublish/task",

@@ -86,27 +84,26 @@ def run_webserver(*args, **kwargs):
     )

     # reporting
-    openpype_resource = OpenPypeRestApiResource()
-    batch_status_endpoint = BatchStatusEndpoint(openpype_resource)
+    webpublish_resource = WebpublishRestApiResource()
+    batch_status_endpoint = BatchStatusEndpoint(webpublish_resource)
     server_manager.add_route(
         "GET",
         "/api/batch_status/{batch_id}",
         batch_status_endpoint.dispatch
     )

-    user_status_endpoint = UserReportEndpoint(openpype_resource)
+    user_status_endpoint = UserReportEndpoint(webpublish_resource)
     server_manager.add_route(
         "GET",
         "/api/publishes/{user}",
         user_status_endpoint.dispatch
     )

-    webpublisher_batch_reprocess_endpoint = \
-        BatchReprocessEndpoint(openpype_resource)
+    batch_reprocess_endpoint = BatchReprocessEndpoint(webpublish_resource)
     server_manager.add_route(
         "POST",
         "/api/webpublish/reprocess/{batch_id}",
-        webpublisher_batch_reprocess_endpoint.dispatch
+        batch_reprocess_endpoint.dispatch
     )

     server_manager.start_server()

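Every endpoint registration above repeats the same three steps: construct a resource, wrap it in an endpoint, and register the endpoint's `dispatch` handler under a method and path. A condensed sketch of that wiring for the reporting routes — the helper function itself is illustrative; only the classes, paths and `add_route` call are taken from the diff:

```python
def register_reporting_routes(server_manager):
    # One shared resource holds the webpublish DB collection; every
    # reporting endpoint reads it through the WebpublishApiEndpoint
    # 'dbcon' property instead of opening its own connection.
    webpublish_resource = WebpublishRestApiResource()

    for method, path, endpoint_cls in (
        ("GET", "/api/batch_status/{batch_id}", BatchStatusEndpoint),
        ("GET", "/api/publishes/{user}", UserReportEndpoint),
        ("POST", "/api/webpublish/reprocess/{batch_id}", BatchReprocessEndpoint),
    ):
        endpoint = endpoint_cls(webpublish_resource)
        server_manager.add_route(method, path, endpoint.dispatch)
```
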
@@ -533,7 +533,7 @@ def convert_input_paths_for_ffmpeg(
     output_dir,
     logger=None
 ):
-    """Contert source file to format supported in ffmpeg.
+    """Convert source file to format supported in ffmpeg.

     Currently can convert only exrs. The input filepaths should be files
     with same type. Information about input is loaded only from first found

@@ -33,6 +33,7 @@ class AfterEffectsSubmitDeadline(
     hosts = ["aftereffects"]
     families = ["render.farm"]  # cannot be "render" as that is integrated
     use_published = True
+    targets = ["local"]

     priority = 50
     chunk_size = 1000000

@@ -238,6 +238,7 @@ class HarmonySubmitDeadline(
     order = pyblish.api.IntegratorOrder + 0.1
     hosts = ["harmony"]
     families = ["render.farm"]
+    targets = ["local"]

     optional = True
     use_published = False

@@ -287,6 +287,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
     order = pyblish.api.IntegratorOrder + 0.1
     hosts = ["maya"]
     families = ["renderlayer"]
+    targets = ["local"]

     use_published = True
     tile_assembler_plugin = "OpenPypeTileAssembler"

@@ -10,7 +10,7 @@ import openpype.api
 import pyblish.api


-class MayaSubmitRemotePublishDeadline(openpype.api.Integrator):
+class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
     """Submit Maya scene to perform a local publish in Deadline.

     Publishing in Deadline can be helpful for scenes that publish very slow.

@@ -31,6 +31,7 @@ class MayaSubmitRemotePublishDeadline(openpype.api.Integrator):
     order = pyblish.api.IntegratorOrder
     hosts = ["maya"]
     families = ["publish.farm"]
+    targets = ["local"]

     def process(self, instance):
         settings = get_project_settings(os.getenv("AVALON_PROJECT"))

@@ -23,6 +23,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
     hosts = ["nuke", "nukestudio"]
     families = ["render.farm", "prerender.farm"]
     optional = True
+    targets = ["local"]

     # presets
     priority = 50

@@ -54,8 +55,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
         self._ver = re.search(r"\d+\.\d+", context.data.get("hostVersion"))
         self._deadline_user = context.data.get(
             "deadlineUser", getpass.getuser())
-        self._frame_start = int(instance.data["frameStartHandle"])
-        self._frame_end = int(instance.data["frameEndHandle"])
+        submit_frame_start = int(instance.data["frameStartHandle"])
+        submit_frame_end = int(instance.data["frameEndHandle"])

         # get output path
         render_path = instance.data['path']

@@ -81,13 +82,16 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):

         # exception for slate workflow
         if "slate" in instance.data["families"]:
-            self._frame_start -= 1
+            submit_frame_start -= 1

-        response = self.payload_submit(instance,
-                                       script_path,
-                                       render_path,
-                                       node.name()
-                                       )
+        response = self.payload_submit(
+            instance,
+            script_path,
+            render_path,
+            node.name(),
+            submit_frame_start,
+            submit_frame_end
+        )
         # Store output dir for unified publisher (filesequence)
         instance.data["deadlineSubmissionJob"] = response.json()
         instance.data["outputDir"] = os.path.dirname(

@@ -95,20 +99,22 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
         instance.data["publishJobState"] = "Suspended"

         if instance.data.get("bakingNukeScripts"):
+            # exception for slate workflow
+            if "slate" in instance.data["families"]:
+                submit_frame_start += 1
+
             for baking_script in instance.data["bakingNukeScripts"]:
                 render_path = baking_script["bakeRenderPath"]
                 script_path = baking_script["bakeScriptPath"]
                 exe_node_name = baking_script["bakeWriteNodeName"]

-                # exception for slate workflow
-                if "slate" in instance.data["families"]:
-                    self._frame_start += 1
-
                 resp = self.payload_submit(
                     instance,
                     script_path,
                     render_path,
                     exe_node_name,
+                    submit_frame_start,
+                    submit_frame_end,
                     response.json()
                 )

@@ -125,13 +131,16 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
             families.insert(0, "prerender")
         instance.data["families"] = families

-    def payload_submit(self,
-                       instance,
-                       script_path,
-                       render_path,
-                       exe_node_name,
-                       responce_data=None
-                       ):
+    def payload_submit(
+        self,
+        instance,
+        script_path,
+        render_path,
+        exe_node_name,
+        start_frame,
+        end_frame,
+        responce_data=None
+    ):
         render_dir = os.path.normpath(os.path.dirname(render_path))
         script_name = os.path.basename(script_path)
         jobname = "%s - %s" % (script_name, instance.name)

@@ -191,8 +200,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):

             "Plugin": "Nuke",
             "Frames": "{start}-{end}".format(
-                start=self._frame_start,
-                end=self._frame_end
+                start=start_frame,
+                end=end_frame
             ),
             "Comment": self._comment,

@@ -292,7 +301,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
         self.log.info(json.dumps(payload, indent=4, sort_keys=True))

         # adding expected files to instance.data
-        self.expected_files(instance, render_path)
+        self.expected_files(
+            instance,
+            render_path,
+            start_frame,
+            end_frame
+        )

         self.log.debug("__ expectedFiles: `{}`".format(
             instance.data["expectedFiles"]))
         response = requests.post(self.deadline_url, json=payload, timeout=10)

@@ -338,9 +353,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
         self.log.debug("_ path: `{}`".format(path))
         return path

-    def expected_files(self,
-                       instance,
-                       path):
+    def expected_files(
+        self,
+        instance,
+        path,
+        start_frame,
+        end_frame
+    ):
         """ Create expected files in instance data
         """
         if not instance.data.get("expectedFiles"):

@@ -358,7 +377,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
             instance.data["expectedFiles"].append(path)
             return

-        for i in range(self._frame_start, (self._frame_end + 1)):
+        for i in range(start_frame, (end_frame + 1)):
             instance.data["expectedFiles"].append(
                 os.path.join(dir, (file % i)).replace("\\", "/"))

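With `start_frame`/`end_frame` passed explicitly, the slate offset applied to the main render no longer leaks into the baking submissions through `self._frame_start`. The expansion in `expected_files` itself is a plain frame-range fill; a self-contained sketch of the same logic, with the paths and printf-style pattern purely illustrative:

```python
import os


def expand_expected_files(path, start_frame, end_frame):
    """Expand 'render.%04d.exr' style paths into one path per frame."""
    dir_name, file_name = os.path.split(path)
    if "%" not in file_name:
        # Single file (e.g. a mov) - nothing to expand.
        return [path.replace("\\", "/")]
    return [
        os.path.join(dir_name, file_name % frame).replace("\\", "/")
        for frame in range(start_frame, end_frame + 1)
    ]


# expand_expected_files("renders/shot.%04d.exr", 1001, 1003)
# -> ['renders/shot.1001.exr', 'renders/shot.1002.exr', 'renders/shot.1003.exr']
```
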
@@ -103,6 +103,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
     order = pyblish.api.IntegratorOrder + 0.2
     icon = "tractor"
     deadline_plugin = "OpenPype"
+    targets = ["local"]

     hosts = ["fusion", "maya", "nuke", "celaction", "aftereffects", "harmony"]

@@ -128,7 +129,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
         "OPENPYPE_LOG_NO_COLORS",
         "OPENPYPE_USERNAME",
         "OPENPYPE_RENDER_JOB",
-        "OPENPYPE_PUBLISH_JOB"
+        "OPENPYPE_PUBLISH_JOB",
+        "OPENPYPE_MONGO"
     ]

     # custom deadline attributes

@@ -640,6 +642,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):

     def _solve_families(self, instance, preview=False):
         families = instance.get("families")
+
         # if we have one representation with preview tag
         # flag whole instance for review and for ftrack
         if preview:

@@ -719,10 +722,17 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
                 " This may cause issues."
             ).format(source))

-        families = ["render"]
+        family = "render"
+        if "prerender" in instance.data["families"]:
+            family = "prerender"
+        families = [family]

         # pass review to families if marked as review
         if data.get("review"):
             families.append("review")

         instance_skeleton_data = {
-            "family": "render",
+            "family": family,
             "subset": subset,
             "families": families,
             "asset": asset,

@@ -744,11 +754,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
             "useSequenceForReview": data.get("useSequenceForReview", True)
         }

-        if "prerender" in instance.data["families"]:
-            instance_skeleton_data.update({
-                "family": "prerender",
-                "families": []})
-
         # skip locking version if we are creating v01
         instance_version = instance.data.get("version")  # take this if exists
         if instance_version != 1:

@@ -140,9 +140,9 @@ class CustomAttributes(BaseAction):
     identifier = 'create.update.attributes'
     #: Action label.
     label = "OpenPype Admin"
-    variant = '- Create/Update Avalon Attributes'
+    variant = '- Create/Update Custom Attributes'
     #: Action description.
-    description = 'Creates Avalon/Mongo ID for double check'
+    description = 'Creates required custom attributes in ftrack'
     icon = statics_icon("ftrack", "action_icons", "OpenPypeAdmin.svg")
     settings_key = "create_update_attributes"

@@ -2,6 +2,7 @@ import os
 import time
 import datetime
+import threading

 from Qt import QtCore, QtWidgets, QtGui

 import ftrack_api

@@ -48,6 +49,9 @@ class FtrackTrayWrapper:
         self.widget_login.activateWindow()
         self.widget_login.raise_()

+    def show_ftrack_browser(self):
+        QtGui.QDesktopServices.openUrl(self.module.ftrack_url)
+
     def validate(self):
         validation = False
         cred = credentials.get_credentials()

@@ -284,6 +288,13 @@ class FtrackTrayWrapper:
         tray_server_menu.addAction(self.action_server_stop)

         self.tray_server_menu = tray_server_menu

+        # Ftrack Browser
+        browser_open = QtWidgets.QAction("Open Ftrack...", tray_menu)
+        browser_open.triggered.connect(self.show_ftrack_browser)
+        tray_menu.addAction(browser_open)
+        self.browser_open = browser_open
+
         self.bool_logged = False
         self.set_menu_visibility()

@@ -85,7 +85,7 @@ def update_op_assets(
         # Frame in, fallback on 0
         frame_in = int(item_data.get("frame_in") or 0)
         item_data["frameStart"] = frame_in
-        item_data.pop("frame_in")
+        item_data.pop("frame_in", None)
        # Frame out, fallback on frame_in + duration
         frames_duration = int(item.get("nb_frames") or 1)
         frame_out = (

@@ -94,7 +94,7 @@ def update_op_assets(
             else frame_in + frames_duration
         )
         item_data["frameEnd"] = int(frame_out)
-        item_data.pop("frame_out")
+        item_data.pop("frame_out", None)
         # Fps, fallback to project's value when entity fps is deleted
         if not item_data.get("fps") and item_doc["data"].get("fps"):
             item_data["fps"] = project_doc["data"]["fps"]

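Both hunks change only the second argument of `dict.pop`: without a default, the call raises `KeyError` when Kitsu omits the key; with `None` it degrades to a no-op. In short:

```python
item_data = {"frame_in": 1001}

item_data.pop("frame_in", None)   # removes the key, returns 1001
item_data.pop("frame_out", None)  # key absent: returns None, no error

# The unguarded form would raise here:
# item_data.pop("frame_out")  -> KeyError: 'frame_out'
```
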
BIN openpype/modules/sync_server/resources/disabled.png (new file, 2.3 KiB)

@@ -280,14 +280,13 @@ class SyncServerThread(threading.Thread):
             while self.is_running and not self.module.is_paused():
                 try:
-                    import time
-                    start_time = None
-                    start_time = time.time()
                     self.module.set_sync_project_settings()  # clean cache
-                    for collection, preset in self.module.sync_project_settings.\
-                            items():
-                        if collection not in self.module.get_enabled_projects():
-                            continue
+                    collection = None
+                    enabled_projects = self.module.get_enabled_projects()
+                    for collection in enabled_projects:
+                        preset = self.module.sync_project_settings[collection]

+                        start_time = time.time()
                         local_site, remote_site = self._working_sites(collection)
                         if not all([local_site, remote_site]):
                             continue

@@ -926,9 +926,22 @@ class SyncServerModule(OpenPypeModule, ITrayModule):

         return enabled_projects

-    def is_project_enabled(self, project_name):
+    def is_project_enabled(self, project_name, single=False):
+        """Checks if 'project_name' is enabled for syncing.
+
+        'get_sync_project_setting' is potentially expensive operation (pulls
+        settings for all projects if cached version is not available), using
+        project_settings for specific project should be faster.
+
+        Args:
+            project_name (str)
+            single (bool): use 'get_project_settings' method
+        """
         if self.enabled:
-            project_settings = self.get_sync_project_setting(project_name)
+            if single:
+                project_settings = get_project_settings(project_name)
+                project_settings = \
+                    self._parse_sync_settings_from_settings(project_settings)
+            else:
+                project_settings = self.get_sync_project_setting(project_name)
             if project_settings and project_settings.get("enabled"):
                 return True
         return False

@@ -1026,21 +1039,13 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
         """
         self.server_init()

         from .tray.app import SyncServerWindow
         self.widget = SyncServerWindow(self)

     def server_init(self):
         """Actual initialization of Sync Server."""
-        # import only in tray or Python3, because of Python2 hosts
-        from .sync_server import SyncServerThread
-
         if not self.enabled:
             return

-        enabled_projects = self.get_enabled_projects()
-        if not enabled_projects:
-            self.enabled = False
-            return
+        from .sync_server import SyncServerThread

         self.lock = threading.Lock()

@@ -1060,7 +1065,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
         self.server_start()

     def server_start(self):
-        if self.sync_project_settings and self.enabled:
+        if self.enabled:
             self.sync_server_thread.start()
         else:
             log.info("No presets or active providers. " +

@@ -1851,6 +1856,9 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
         Returns:
             (int): in seconds
         """
+        if not project_name:
+            return 60
+
         ld = self.sync_project_settings[project_name]["config"]["loop_delay"]
         return int(ld)

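The new `single=True` flag trades the all-projects settings cache for a one-project read through `get_project_settings`, which is what the loader widgets later in this changeset switch to. Intended usage, with the project name purely illustrative:

```python
# Cached path: the first call may pull sync settings for every project,
# later calls are cheap because the cache is reused.
enabled = sync_server.is_project_enabled("MyProject")

# Single-project path: reads settings only for "MyProject", which is
# faster when the cache is cold and only one project matters
# (e.g. refreshing a single widget).
enabled = sync_server.is_project_enabled("MyProject", single=True)
```
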
@@ -46,6 +46,14 @@ class SyncServerWindow(QtWidgets.QDialog):

         left_column_layout.addWidget(self.pause_btn)

+        checkbox = QtWidgets.QCheckBox("Show only enabled", self)
+        checkbox.setStyleSheet("QCheckBox{spacing: 5px;"
+                               "padding:5px 5px 5px 5px;}")
+        checkbox.setChecked(True)
+        self.show_only_enabled_chk = checkbox
+
+        left_column_layout.addWidget(self.show_only_enabled_chk)
+
         repres = SyncRepresentationSummaryWidget(
             sync_server,
             project=self.projects.current_project,

@@ -86,15 +94,27 @@ class SyncServerWindow(QtWidgets.QDialog):
         repres.message_generated.connect(self._update_message)
         self.projects.message_generated.connect(self._update_message)

+        self.show_only_enabled_chk.stateChanged.connect(
+            self._on_enabled_change
+        )
+
         self.representationWidget = repres

+    def showEvent(self, event):
+        self.representationWidget.set_project(self.projects.current_project)
+        self.projects.refresh()
+        self._set_running(True)
+        super().showEvent(event)
+
+    def closeEvent(self, event):
+        self._set_running(False)
+        super().closeEvent(event)
+
     def _on_project_change(self):
         if self.projects.current_project is None:
             return

-        self.representationWidget.set_project(self.projects.current_project)
+        self.representationWidget.table_view.model().set_project(
+            self.projects.current_project
+        )

         project_name = self.projects.current_project
         if not self.sync_server.get_sync_project_setting(project_name):

@@ -103,16 +123,12 @@ class SyncServerWindow(QtWidgets.QDialog):
             self.projects.refresh()
             return

-    def showEvent(self, event):
-        self.representationWidget.model.set_project(
-            self.projects.current_project)
+    def _on_enabled_change(self):
+        """Called when enabled projects only checkbox is toggled."""
+        self.projects.show_only_enabled = \
+            self.show_only_enabled_chk.isChecked()
         self.projects.refresh()
-        self._set_running(True)
-        super().showEvent(event)
-
-    def closeEvent(self, event):
-        self._set_running(False)
-        super().closeEvent(event)
+        self.representationWidget.set_project(None)

     def _set_running(self, running):
         self.representationWidget.model.is_running = running

@@ -52,7 +52,8 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):

         All queries should go through this (because of collection).
         """
-        return self.sync_server.connection.database[self.project]
+        if self.project:
+            return self.sync_server.connection.database[self.project]

     @property
     def project(self):

@@ -150,6 +151,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
     @property
     def can_edit(self):
         """Returns true if some site is user local site, eg. could edit"""
+        if not self.project:
+            return False
+
         return get_local_site_id() in (self.active_site, self.remote_site)

     def get_column(self, index):

@@ -190,7 +194,7 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
             actually queried (scrolled a couple of times to list more
             than single page of records)
         """
-        if self.is_editing or not self.is_running:
+        if self.is_editing or not self.is_running or not self.project:
             return
         self.refresh_started.emit()
         self.beginResetModel()

@@ -232,6 +236,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
             more records in DB than loaded.
         """
         log.debug("fetchMore")
+        if not self.dbcon:
+            return
+
         items_to_fetch = min(self._total_records - self._rec_loaded,
                              self.PAGE_SIZE)
         self.query = self.get_query(self._rec_loaded)

@@ -286,9 +293,10 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
         #    replace('False', 'false').\
         #    replace('True', 'true').replace('None', 'null'))

-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)
+        if self.dbcon:
+            representations = self.dbcon.aggregate(pipeline=self.query,
+                                                   allowDiskUse=True)
+            self.refresh(representations)

     def set_word_filter(self, word_filter):
         """

@@ -378,9 +386,9 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
             project (str): name of project
         """
         self._project = project
-        self.sync_server.set_sync_project_settings()
+        # project might have been deactivated in the meantime
+        if not self.sync_server.get_sync_project_setting(project):
+            self._data = {}
+            return

         self.active_site = self.sync_server.get_active_site(self.project)

@@ -509,25 +517,23 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):

         self._word_filter = None

-        if not self._project or self._project == lib.DUMMY_PROJECT:
-            return
-
         self.sync_server = sync_server
         # TODO think about admin mode
+        self.sort_criteria = self.DEFAULT_SORT
+
+        self.timer = QtCore.QTimer()
+        if not self._project or self._project == lib.DUMMY_PROJECT:
+            self.active_site = sync_server.DEFAULT_SITE
+            self.remote_site = sync_server.DEFAULT_SITE
+            return
+
         # this is for regular user, always only single local and single remote
         self.active_site = self.sync_server.get_active_site(self.project)
         self.remote_site = self.sync_server.get_remote_site(self.project)

-        self.sort_criteria = self.DEFAULT_SORT
-
         self.query = self.get_query()
         self.default_query = list(self.get_query())

-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)
-
-        self.timer = QtCore.QTimer()
         self.timer.timeout.connect(self.tick)
         self.timer.start(self.REFRESH_SEC)

@@ -1003,9 +1009,6 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
         self.sort_criteria = self.DEFAULT_SORT

         self.query = self.get_query()
-        representations = self.dbcon.aggregate(pipeline=self.query,
-                                               allowDiskUse=True)
-        self.refresh(representations)

         self.timer = QtCore.QTimer()
         self.timer.timeout.connect(self.tick)

@@ -47,6 +47,7 @@ class SyncProjectListWidget(QtWidgets.QWidget):
     message_generated = QtCore.Signal(str)

     refresh_msec = 10000
+    show_only_enabled = True

     def __init__(self, sync_server, parent):
         super(SyncProjectListWidget, self).__init__(parent)

@@ -122,11 +123,15 @@ class SyncProjectListWidget(QtWidgets.QWidget):
         self._model_reset = False

         selected_item = None
-        for project_name in self.sync_server.sync_project_settings.\
-                keys():
+        sync_settings = self.sync_server.sync_project_settings
+        for project_name in sync_settings.keys():
             if self.sync_server.is_paused() or \
                     self.sync_server.is_project_paused(project_name):
                 icon = self._get_icon("paused")
+            elif not sync_settings[project_name]["enabled"]:
+                if self.show_only_enabled:
+                    continue
+                icon = self._get_icon("disabled")
             else:
                 icon = self._get_icon("synced")

@@ -139,12 +144,12 @@ class SyncProjectListWidget(QtWidgets.QWidget):
             if self.current_project == project_name:
                 selected_item = item

+        if model.item(0) is None:
+            return
+
         if selected_item:
             selected_index = model.indexFromItem(selected_item)

-        if len(self.sync_server.sync_project_settings.keys()) == 0:
-            model.appendRow(QtGui.QStandardItem(lib.DUMMY_PROJECT))
-
         if not self.current_project:
             self.current_project = model.item(0).data(QtCore.Qt.DisplayRole)

@@ -248,6 +253,9 @@ class _SyncRepresentationWidget(QtWidgets.QWidget):
     active_changed = QtCore.Signal()  # active index changed
     message_generated = QtCore.Signal(str)

+    def set_project(self, project):
+        self.model.set_project(project)
+
     def _selection_changed(self, _new_selected, _all_selected):
         idxs = self.selection_model.selectedRows()
         self._selected_ids = set()

@@ -581,7 +589,6 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
         super(SyncRepresentationSummaryWidget, self).__init__(parent)

         self.sync_server = sync_server
-
         self._selected_ids = set()  # keep last selected _id

         txt_filter = QtWidgets.QLineEdit()

@@ -625,7 +632,6 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):
         column = table_view.model().get_header_index("priority")
         priority_delegate = delegates.PriorityDelegate(self)
         table_view.setItemDelegateForColumn(column, priority_delegate)
-
         layout = QtWidgets.QVBoxLayout(self)
         layout.setContentsMargins(0, 0, 0, 0)
         layout.addLayout(top_bar_layout)

@@ -633,21 +639,16 @@ class SyncRepresentationSummaryWidget(_SyncRepresentationWidget):

         self.table_view = table_view
         self.model = model
-
         horizontal_header = HorizontalHeader(self)
-
         table_view.setHorizontalHeader(horizontal_header)
         table_view.setSortingEnabled(True)
-
         for column_name, width in self.default_widths:
             idx = model.get_header_index(column_name)
             table_view.setColumnWidth(idx, width)
-
         table_view.doubleClicked.connect(self._double_clicked)
         self.txt_filter.textChanged.connect(lambda: model.set_word_filter(
             self.txt_filter.text()))
         table_view.customContextMenuRequested.connect(self._on_context_menu)
-
         model.refresh_started.connect(self._save_scrollbar)
         model.refresh_finished.connect(self._set_scrollbar)
         model.modelReset.connect(self._set_selection)

@@ -963,7 +964,6 @@ class HorizontalHeader(QtWidgets.QHeaderView):
         super(HorizontalHeader, self).__init__(QtCore.Qt.Horizontal, parent)
         self._parent = parent
         self.checked_values = {}
-
         self.setModel(self._parent.model)

         self.setSectionsClickable(True)

@@ -829,9 +829,10 @@ class CreateContext:
         discover_result = publish_plugins_discover()
         publish_plugins = discover_result.plugins

-        targets = pyblish.logic.registered_targets() or ["default"]
+        targets = set(pyblish.logic.registered_targets())
+        targets.add("default")
         plugins_by_targets = pyblish.logic.plugins_by_targets(
-            publish_plugins, targets
+            publish_plugins, list(targets)
         )
         # Collect plugins that can have attribute definitions
         for plugin in publish_plugins:

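The old expression `registered_targets() or ["default"]` dropped the implicit "default" target as soon as any explicit target was registered; building a set and always adding "default" keeps both. The difference in isolation (target names are illustrative):

```python
registered = ["farm"]  # pretend pyblish.logic.registered_targets() result

old_targets = registered or ["default"]  # -> ['farm'] only

new_targets = set(registered)
new_targets.add("default")               # -> {'farm', 'default'}

# Plugins targeted at "default" now still run alongside "farm" plugins.
```
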
@@ -18,16 +18,6 @@ class InstancePlugin(pyblish.api.InstancePlugin):
         super(InstancePlugin, cls).process(cls, *args, **kwargs)


-class Integrator(InstancePlugin):
-    """Integrator base class.
-
-    Wraps pyblish instance plugin. Targets set to "local" which means all
-    integrators should run on "local" publishes, by default.
-    "remote" targets could be used for integrators that should run externally.
-    """
-    targets = ["local"]
-
-
 class Extractor(InstancePlugin):
     """Extractor base class.

@@ -38,8 +28,6 @@ class Extractor(InstancePlugin):

     """

-    targets = ["local"]
-
     order = 2.0

     def staging_dir(self, instance):

@@ -3,22 +3,20 @@ import os
 import pyblish.api
 from openpype.lib import (
     get_ffmpeg_tool_path,
+    get_oiio_tools_path,
+    is_oiio_supported,

     run_subprocess,
     path_to_subprocess_arg,

-    get_transcode_temp_directory,
-    convert_input_paths_for_ffmpeg,
-    should_convert_for_ffmpeg
+    execute,
 )

-import shutil
-

-class ExtractJpegEXR(pyblish.api.InstancePlugin):
+class ExtractThumbnail(pyblish.api.InstancePlugin):
     """Create jpg thumbnail from sequence using ffmpeg"""

-    label = "Extract Jpeg EXR"
+    label = "Extract Thumbnail"
     order = pyblish.api.ExtractorOrder
     families = [
         "imagesequence", "render", "render2d",

@@ -49,7 +47,6 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
             return

         filtered_repres = self._get_filtered_repres(instance)
-
         for repre in filtered_repres:
             repre_files = repre["files"]
             if not isinstance(repre_files, (list, tuple)):

@@ -62,78 +59,37 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):

             full_input_path = os.path.join(stagingdir, input_file)
             self.log.info("input {}".format(full_input_path))

-            do_convert = should_convert_for_ffmpeg(full_input_path)
-            # If result is None the requirement of conversion can't be
-            # determined
-            if do_convert is None:
-                self.log.info((
-                    "Can't determine if representation requires conversion."
-                    " Skipped."
-                ))
-                continue
-
-            # Do conversion if needed
-            # - change staging dir of source representation
-            # - must be set back after output definitions processing
-            convert_dir = None
-            if do_convert:
-                convert_dir = get_transcode_temp_directory()
-                filename = os.path.basename(full_input_path)
-                convert_input_paths_for_ffmpeg(
-                    [full_input_path],
-                    convert_dir,
-                    self.log
-                )
-                full_input_path = os.path.join(convert_dir, filename)
-
             filename = os.path.splitext(input_file)[0]
             if not filename.endswith('.'):
                 filename += "."
             jpeg_file = filename + "jpg"
             full_output_path = os.path.join(stagingdir, jpeg_file)

             self.log.info("output {}".format(full_output_path))
+            thumbnail_created = False
+            # Try to use FFMPEG if OIIO is not supported (for cases when
+            # oiiotool isn't available)
+            if not is_oiio_supported():
+                thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa
+            else:
+                # Check if the file can be read by OIIO
+                oiio_tool_path = get_oiio_tools_path()
+                args = [
+                    oiio_tool_path, "--info", "-i", full_input_path
+                ]
+                returncode = execute(args, silent=True)
+                # If the input can be read by OIIO then use OIIO method for
+                # conversion otherwise use ffmpeg
+                if returncode == 0:
+                    self.log.info("Input can be read by OIIO, converting with oiiotool now.")  # noqa
+                    thumbnail_created = self.create_thumbnail_oiio(full_input_path, full_output_path)  # noqa
+                else:
+                    self.log.info("Converting with FFMPEG because input can't be read by OIIO.")  # noqa
+                    thumbnail_created = self.create_thumbnail_ffmpeg(full_input_path, full_output_path)  # noqa

-            ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
-            ffmpeg_args = self.ffmpeg_args or {}
-
-            jpeg_items = []
-            jpeg_items.append(path_to_subprocess_arg(ffmpeg_path))
-            # override file if already exists
-            jpeg_items.append("-y")
-            # use same input args like with mov
-            jpeg_items.extend(ffmpeg_args.get("input") or [])
-            # input file
-            jpeg_items.append("-i {}".format(
-                path_to_subprocess_arg(full_input_path)
-            ))
-            # output arguments from presets
-            jpeg_items.extend(ffmpeg_args.get("output") or [])
-
-            # If its a movie file, we just want one frame.
-            if repre["ext"] == "mov":
-                jpeg_items.append("-vframes 1")
-
-            # output file
-            jpeg_items.append(path_to_subprocess_arg(full_output_path))
-
-            subprocess_command = " ".join(jpeg_items)
-
-            # run subprocess
-            self.log.debug("{}".format(subprocess_command))
-            try:  # temporary until oiiotool is supported cross platform
-                run_subprocess(
-                    subprocess_command, shell=True, logger=self.log
-                )
-            except RuntimeError as exp:
-                if "Compression" in str(exp):
-                    self.log.debug(
-                        "Unsupported compression on input files. Skipping!!!"
-                    )
-                    return
-                self.log.warning("Conversion crashed", exc_info=True)
-                raise
+            # Skip the rest of the process if the thumbnail wasn't created
+            if not thumbnail_created:
+                self.log.warning("Thumbnail has not been created.")
+                return

             new_repre = {
                 "name": "thumbnail",

@@ -145,16 +101,11 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):
             }

             # adding representation
-            self.log.debug("Adding: {}".format(new_repre))
+            self.log.debug(
+                "Adding thumbnail representation: {}".format(new_repre)
+            )
             instance.data["representations"].append(new_repre)
-
-            # Cleanup temp folder
-            if convert_dir is not None and os.path.exists(convert_dir):
-                shutil.rmtree(convert_dir)
-
-            # Create only one representation with name 'thumbnail'
-            # TODO maybe handle way how to decide from which representation
-            # will be thumbnail created
+            # There is no need to create more than one thumbnail
             break

     def _get_filtered_repres(self, instance):

@@ -175,3 +126,61 @@ class ExtractJpegEXR(pyblish.api.InstancePlugin):

             filtered_repres.append(repre)
         return filtered_repres
+
+    def create_thumbnail_oiio(self, src_path, dst_path):
+        self.log.info("outputting {}".format(dst_path))
+        oiio_tool_path = get_oiio_tools_path()
+        oiio_cmd = [oiio_tool_path, "-a",
+                    src_path, "-o",
+                    dst_path
+                    ]
+        subprocess_exr = " ".join(oiio_cmd)
+        self.log.info(f"running: {subprocess_exr}")
+        try:
+            run_subprocess(oiio_cmd, logger=self.log)
+            return True
+        except Exception:
+            self.log.warning(
+                "Failed to create thumbnail using oiiotool",
+                exc_info=True
+            )
+            return False
+
+    def create_thumbnail_ffmpeg(self, src_path, dst_path):
+        self.log.info("outputting {}".format(dst_path))
+
+        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
+        ffmpeg_args = self.ffmpeg_args or {}
+
+        jpeg_items = []
+        jpeg_items.append(path_to_subprocess_arg(ffmpeg_path))
+        # override file if already exists
+        jpeg_items.append("-y")
+        # flag for large file sizes
+        max_int = 2147483647
+        jpeg_items.append("-analyzeduration {}".format(max_int))
+        jpeg_items.append("-probesize {}".format(max_int))
+        # use same input args like with mov
+        jpeg_items.extend(ffmpeg_args.get("input") or [])
+        # input file
+        jpeg_items.append("-i {}".format(
+            path_to_subprocess_arg(src_path)
+        ))
+        # output arguments from presets
+        jpeg_items.extend(ffmpeg_args.get("output") or [])
+        # we just want one frame from movie files
+        jpeg_items.append("-vframes 1")
+        # output file
+        jpeg_items.append(path_to_subprocess_arg(dst_path))
+        subprocess_command = " ".join(jpeg_items)
+        try:
+            run_subprocess(
+                subprocess_command, shell=True, logger=self.log
+            )
+            return True
+        except Exception:
+            self.log.warning(
+                "Failed to create thumbnail using ffmpeg",
+                exc_info=True
+            )
+            return False

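The renamed plugin decides between oiiotool and ffmpeg by probing the input with `oiiotool --info` instead of pre-converting EXRs. The decision chain, compressed into one hedged sketch built from the pieces in the diff above (`is_oiio_supported`, `get_oiio_tools_path`, `execute` and the two `create_thumbnail_*` methods); the wrapper method itself is illustrative:

```python
def create_thumbnail(self, src_path, dst_path):
    """Try oiiotool first, fall back to ffmpeg; returns True on success."""
    if not is_oiio_supported():
        return self.create_thumbnail_ffmpeg(src_path, dst_path)

    # A zero return code from 'oiiotool --info' means OIIO can decode
    # the file, so the oiiotool conversion path is safe to use.
    probe = [get_oiio_tools_path(), "--info", "-i", src_path]
    if execute(probe, silent=True) == 0:
        return self.create_thumbnail_oiio(src_path, dst_path)
    return self.create_thumbnail_ffmpeg(src_path, dst_path)
```
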
@@ -940,9 +940,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
         families += current_families

         # create relative source path for DB
-        if "source" in instance.data:
-            source = instance.data["source"]
-        else:
+        source = instance.data.get("source")
+        if not source:
             source = context.data["currentFile"]
             anatomy = instance.context.data["anatomy"]
             source = self.get_rootless_path(anatomy, source)

@@ -83,9 +83,6 @@
         "maya": [
             ".*([Bb]eauty).*"
         ],
-        "nuke": [
-            ".*"
-        ],
         "aftereffects": [
             ".*"
         ],

@@ -98,4 +95,4 @@
             }
         }
     }
 }

@@ -33,7 +33,7 @@
         "enabled": false,
         "profiles": []
     },
-    "ExtractJpegEXR": {
+    "ExtractThumbnail": {
         "enabled": true,
         "ffmpeg_args": {
             "input": [

|
|||
|
|
@ -166,6 +166,9 @@
|
|||
},
|
||||
"ExtractThumbnail": {
|
||||
"enabled": true,
|
||||
"use_rendered": true,
|
||||
"bake_viewer_process": true,
|
||||
"bake_viewer_input_process": true,
|
||||
"nodes": {
|
||||
"Reformat": [
|
||||
[
|
||||
|
|
|
|||
|
|
@ -59,13 +59,11 @@
|
|||
"applications": {
|
||||
"write_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
],
|
||||
"read_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
]
|
||||
}
|
||||
},
|
||||
|
|
@ -73,25 +71,21 @@
|
|||
"tools_env": {
|
||||
"write_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
],
|
||||
"read_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
]
|
||||
},
|
||||
"avalon_mongo_id": {
|
||||
"write_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
],
|
||||
"read_security_roles": [
|
||||
"API",
|
||||
"Administrator",
|
||||
"Pypeclub"
|
||||
"Administrator"
|
||||
]
|
||||
},
|
||||
"fps": {
|
||||
|
|
|
|||
|
|
@ -126,8 +126,8 @@
|
|||
"type": "dict",
|
||||
"collapsible": true,
|
||||
"checkbox_key": "enabled",
|
||||
"key": "ExtractJpegEXR",
|
||||
"label": "ExtractJpegEXR",
|
||||
"key": "ExtractThumbnail",
|
||||
"label": "ExtractThumbnail",
|
||||
"is_group": true,
|
||||
"children": [
|
||||
{
|
||||
|
|
|
|||
|
|
@ -135,9 +135,31 @@
|
|||
"label": "Enabled"
|
||||
},
|
||||
{
|
||||
"type": "raw-json",
|
||||
"key": "nodes",
|
||||
"label": "Nodes"
|
||||
"type": "boolean",
|
||||
"key": "use_rendered",
|
||||
"label": "Use rendered images"
|
||||
},
|
||||
{
|
||||
"type": "boolean",
|
||||
"key": "bake_viewer_process",
|
||||
"label": "Bake viewer process"
|
||||
},
|
||||
{
|
||||
"type": "boolean",
|
||||
"key": "bake_viewer_input_process",
|
||||
"label": "Bake viewer input process"
|
||||
},
|
||||
{
|
||||
"type": "collapsible-wrap",
|
||||
"label": "Nodes",
|
||||
"collapsible": true,
|
||||
"children": [
|
||||
{
|
||||
"type": "raw-json",
|
||||
"key": "nodes",
|
||||
"label": "Nodes"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
|
|
|
|||
|
|
@ -98,7 +98,7 @@ class BaseRepresentationModel(object):
|
|||
self._last_manager_cache = now_time
|
||||
|
||||
sync_server = self._modules_manager.modules_by_name["sync_server"]
|
||||
if sync_server.is_project_enabled(project_name):
|
||||
if sync_server.is_project_enabled(project_name, single=True):
|
||||
active_site = sync_server.get_active_site(project_name)
|
||||
active_provider = sync_server.get_provider_for_site(
|
||||
project_name, active_site)
|
||||
|
|
|
|||
|
|
@ -355,9 +355,10 @@ class SubsetWidget(QtWidgets.QWidget):
|
|||
enabled = False
|
||||
if project_name:
|
||||
self.model.reset_sync_server(project_name)
|
||||
if self.model.sync_server:
|
||||
enabled_proj = self.model.sync_server.get_enabled_projects()
|
||||
enabled = project_name in enabled_proj
|
||||
sync_server = self.model.sync_server
|
||||
if sync_server:
|
||||
enabled = sync_server.is_project_enabled(project_name,
|
||||
single=True)
|
||||
|
||||
lib.change_visibility(self.model, self.view, "repre_info", enabled)
|
||||
|
||||
|
|
@ -1216,9 +1217,10 @@ class RepresentationWidget(QtWidgets.QWidget):
|
|||
enabled = False
|
||||
if project_name:
|
||||
self.model.reset_sync_server(project_name)
|
||||
if self.model.sync_server:
|
||||
enabled_proj = self.model.sync_server.get_enabled_projects()
|
||||
enabled = project_name in enabled_proj
|
||||
sync_server = self.model.sync_server
|
||||
if sync_server:
|
||||
enabled = sync_server.is_project_enabled(project_name,
|
||||
single=True)
|
||||
|
||||
self.sync_server_enabled = enabled
|
||||
lib.change_visibility(self.model, self.tree_view,
|
||||
|
|
|
|||
|
|
@ -468,10 +468,8 @@ class Window(QtWidgets.QDialog):
|
|||
current_page == "terminal"
|
||||
)
|
||||
|
||||
self.state = {
|
||||
"is_closing": False,
|
||||
"current_page": current_page
|
||||
}
|
||||
self._current_page = current_page
|
||||
self._hidden_for_plugin_process = False
|
||||
|
||||
self.tabs[current_page].setChecked(True)
|
||||
|
||||
|
|
@ -590,14 +588,14 @@ class Window(QtWidgets.QDialog):
|
|||
target_page = page
|
||||
if direction is None:
|
||||
direction = -1
|
||||
elif name == self.state["current_page"]:
|
||||
elif name == self._current_page:
|
||||
previous_page = page
|
||||
if direction is None:
|
||||
direction = 1
|
||||
else:
|
||||
page.setVisible(False)
|
||||
|
||||
self.state["current_page"] = target
|
||||
self._current_page = target
|
||||
self.slide_page(previous_page, target_page, direction)
|
||||
|
||||
def slide_page(self, previous_page, target_page, direction):
|
||||
|
|
@ -684,7 +682,7 @@ class Window(QtWidgets.QDialog):
|
|||
comment_visible=None,
|
||||
terminal_filters_visibile=None
|
||||
):
|
||||
target = self.state["current_page"]
|
||||
target = self._current_page
|
||||
comment_visibility = (
|
||||
not self.perspective_widget.isVisible()
|
||||
and not target == "terminal"
|
||||
|
|
@ -845,7 +843,7 @@ class Window(QtWidgets.QDialog):
|
|||
|
||||
def apply_log_suspend_value(self, value):
|
||||
self._suspend_logs = value
|
||||
if self.state["current_page"] == "terminal":
|
||||
if self._current_page == "terminal":
|
||||
self.tabs["overview"].setChecked(True)
|
||||
|
||||
self.tabs["terminal"].setVisible(not self._suspend_logs)
|
||||
|
|
@ -882,9 +880,21 @@ class Window(QtWidgets.QDialog):
|
|||
visibility = True
|
||||
if hasattr(plugin, "hide_ui_on_process") and plugin.hide_ui_on_process:
|
||||
visibility = False
|
||||
self._hidden_for_plugin_process = not visibility
|
||||
|
||||
if self.isVisible() != visibility:
|
||||
self.setVisible(visibility)
|
||||
self._ensure_visible(visibility)
|
||||
|
||||
def _ensure_visible(self, visible):
|
||||
if self.isVisible() == visible:
|
||||
return
|
||||
|
||||
if not visible:
|
||||
self.setVisible(visible)
|
||||
else:
|
||||
self.show()
|
||||
self.raise_()
|
||||
self.activateWindow()
|
||||
self.showNormal()
|
||||
|
||||
def on_plugin_action_menu_requested(self, pos):
|
||||
"""The user right-clicked on a plug-in
|
||||
|
|
@ -955,7 +965,7 @@ class Window(QtWidgets.QDialog):
|
|||
self.intent_box.setEnabled(True)
|
||||
|
||||
# Refresh tab
|
||||
self.on_tab_changed(self.state["current_page"])
|
||||
self.on_tab_changed(self._current_page)
|
||||
self.update_compatibility()
|
||||
|
||||
self.button_suspend_logs.setEnabled(False)
|
||||
|
|
@ -1027,8 +1037,9 @@ class Window(QtWidgets.QDialog):
|
|||
|
||||
self._update_state()
|
||||
|
||||
if not self.isVisible():
|
||||
self.setVisible(True)
|
||||
if self._hidden_for_plugin_process:
|
||||
self._hidden_for_plugin_process = False
|
||||
self._ensure_visible(True)
|
||||
|
||||
def on_was_skipped(self, plugin):
|
||||
plugin_item = self.plugin_model.plugin_items[plugin.id]
|
||||
|
|
@ -1103,8 +1114,9 @@ class Window(QtWidgets.QDialog):
|
|||
plugin_item, instance_item
|
||||
)
|
||||
|
||||
if not self.isVisible():
|
||||
self.setVisible(True)
|
||||
if self._hidden_for_plugin_process:
|
||||
self._hidden_for_plugin_process = False
|
||||
self._ensure_visible(True)
|
||||
|
||||
# -------------------------------------------------------------------------
|
||||
#
|
||||
|
|
@ -1223,53 +1235,20 @@ class Window(QtWidgets.QDialog):
|
|||
|
||||
"""
|
||||
|
||||
# Make it snappy, but take care to clean it all up.
|
||||
# TODO(marcus): Enable GUI to return on problem, such
|
||||
# as asking whether or not the user really wants to quit
|
||||
# given there are things currently running.
|
||||
self.hide()
|
||||
self.info(self.tr("Closing.."))
|
||||
|
||||
if self.state["is_closing"]:
|
||||
if self.controller.is_running:
|
||||
self.info(self.tr("..as soon as processing is finished.."))
|
||||
self.controller.stop()
|
||||
|
||||
# Explicitly clear potentially referenced data
|
||||
self.info(self.tr("Cleaning up models.."))
|
||||
self.intent_model.deleteLater()
|
||||
self.plugin_model.deleteLater()
|
||||
self.terminal_model.deleteLater()
|
||||
self.terminal_proxy.deleteLater()
|
||||
self.plugin_proxy.deleteLater()
|
||||
self.info(self.tr("Cleaning up controller.."))
|
||||
self.controller.cleanup()
|
||||
|
||||
self.overview_instance_view.setModel(None)
|
||||
self.overview_plugin_view.setModel(None)
|
||||
self.terminal_view.setModel(None)
|
||||
|
||||
self.info(self.tr("Cleaning up controller.."))
|
||||
self.controller.cleanup()
|
||||
|
||||
self.info(self.tr("All clean!"))
|
||||
self.info(self.tr("Good bye"))
|
||||
return super(Window, self).closeEvent(event)
|
||||
|
||||
self.info(self.tr("Closing.."))
|
||||
|
||||
def on_problem():
|
||||
self.heads_up(
|
||||
"Warning", "Had trouble closing down. "
|
||||
"Please tell someone and try again."
|
||||
)
|
||||
self.show()
|
||||
|
||||
if self.controller.is_running:
|
||||
self.info(self.tr("..as soon as processing is finished.."))
|
||||
self.controller.stop()
|
||||
self.finished.connect(self.close)
|
||||
util.defer(200, on_problem)
|
||||
return event.ignore()
|
||||
|
||||
self.state["is_closing"] = True
|
||||
|
||||
util.defer(200, self.close)
|
||||
return event.ignore()
|
||||
event.accept()
|
||||
|
||||
def reject(self):
|
||||
"""Handle ESC key"""
|
||||
|
|
|
|||
|
|
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.11.0-nightly.2"
+__version__ = "3.11.1"

@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "OpenPype"
-version = "3.11.0-nightly.2" # OpenPype
+version = "3.11.1" # OpenPype
 description = "Open VFX and Animation pipeline with support."
 authors = ["OpenPype Team <info@openpype.io>"]
 license = "MIT License"

@@ -26,7 +26,7 @@ You can only use our Ftrack Actions and publish to Ftrack if each artist is logged in.
 ### Custom Attributes
 After successfully connecting OpenPype with your Ftrack, you can right click on any project in Ftrack and you should see a bunch of actions available. The most important one is called `OpenPype Admin` and contains multiple options inside.

-To prepare Ftrack for working with OpenPype you'll need to run [OpenPype Admin - Create/Update Avalon Attributes](manager_ftrack_actions.md#create-update-avalon-attributes), which creates and sets the Custom Attributes necessary for OpenPype to function.
+To prepare Ftrack for working with OpenPype you'll need to run [OpenPype Admin - Create/Update Custom Attributes](manager_ftrack_actions.md#create-update-avalon-attributes), which creates and sets the Custom Attributes necessary for OpenPype to function.