mirror of https://github.com/ynput/ayon-core.git
synced 2025-12-24 12:54:40 +01:00

Merge branch 'develop' into release/3.15.x

This commit is contained in:
commit 5a4852703b
83 changed files with 4496 additions and 441 deletions
53	CHANGELOG.md
@@ -1,5 +1,58 @@
# Changelog

## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...HEAD)

**🆕 New features**

- Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
- Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
- Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
- Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
- Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)

**🚀 Enhancements**

- General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
- Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
- General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
- Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
- Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)

**🐛 Bug fixes**

- Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
- Ftrack: Fix receive of host ip on MacOs [\#4288](https://github.com/ynput/OpenPype/pull/4288)
- SiteSync: sftp connection failing when shouldnt be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
- Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
- Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
- Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
- hiero: creator gui with min max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
- NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
- Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
- Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
- Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
- bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
- Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)

**🔀 Refactored code**

- Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
- Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
- Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
- Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
- Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
- Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
- Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
- General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)

**Merged pull requests:**

- Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
- Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)

## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9)
52	HISTORY.md
@@ -1,5 +1,57 @@
# Changelog

## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...3.14.10)

(The remaining 3.14.10 entries in this hunk mirror the CHANGELOG.md hunk above verbatim, down to the following heading.)

## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)
@@ -76,6 +76,18 @@ class HostBase(object):
        pass

    def install(self):
        """Install host specific functionality.

        This is where the menu with tools should be added, callbacks
        registered and other host integration initialized.

        It is called automatically when 'openpype.pipeline.install_host'
        is triggered.
        """

        pass

    @property
    def log(self):
        if self._log is None:
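A concrete host integration overrides `install()` from the hunk above. A minimal sketch outside of OpenPype — `HostBase` here is a stand-in for the real base class, and `MyHost` with its menu flag and callback list is entirely hypothetical:

```python
class HostBase(object):
    """Stand-in for openpype.host.HostBase (assumption: mirrors the diff above)."""

    def install(self):
        # Default implementation does nothing; hosts override it.
        pass


class MyHost(HostBase):
    """Hypothetical DCC integration overriding install()."""

    def __init__(self):
        self.menu_created = False
        self.callbacks = []

    def install(self):
        # Build the tools menu and register callbacks at startup.
        self.menu_created = True
        self.callbacks.append("on_save")


host = MyHost()
host.install()  # normally triggered via openpype.pipeline.install_host
```

In the real pipeline the call is made for you by `install_host`, so host authors only implement the body.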
@@ -44,3 +44,6 @@ class CreateAnimation(plugin.Creator):
        # Default to not send to farm.
        self.data["farm"] = False
        self.data["priority"] = 50

        # Default to write normals.
        self.data["writeNormals"] = True
@@ -6,7 +6,7 @@ class CreateMultiverseUsd(plugin.Creator):
    name = "mvUsdMain"
    label = "Multiverse USD Asset"
    family = "mvUsd"
    family = "usd"
    icon = "cubes"

    def __init__(self, *args, **kwargs):
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os

from openpype.pipeline import (
    load,

@@ -11,12 +13,13 @@ from openpype.hosts.maya.api.lib import (
    unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id


class MultiverseUsdLoader(load.LoaderPlugin):
    """Read USD data in a Multiverse Compound"""

    families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
    families = ["model", "usd", "mvUsdComposition", "mvUsdOverride",
                "pointcache", "animation"]
    representations = ["usd", "usda", "usdc", "usdz", "abc"]
@@ -26,7 +29,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
    color = "orange"

    def load(self, context, name=None, namespace=None, options=None):

        asset = context['asset']['name']
        namespace = namespace or unique_namespace(
            asset + "_",
@@ -34,22 +36,20 @@ class MultiverseUsdLoader(load.LoaderPlugin):
            suffix="_",
        )

        # Create the shape
        # Make sure we can load the plugin
        cmds.loadPlugin("MultiverseForMaya", quiet=True)
        import multiverse

        # Create the shape
        shape = None
        transform = None
        with maintained_selection():
            cmds.namespace(addNamespace=namespace)
            with namespaced(namespace, new=False):
                import multiverse
                shape = multiverse.CreateUsdCompound(self.fname)
                transform = cmds.listRelatives(
                    shape, parent=True, fullPath=True)[0]

        # Lock the shape node so the user cannot delete it.
        cmds.lockNode(shape, lock=True)

        nodes = [transform, shape]
        self[:] = nodes
@@ -70,15 +70,34 @@ class MultiverseUsdLoader(load.LoaderPlugin):
        shapes = cmds.ls(members, type="mvUsdCompoundShape")
        assert shapes, "Cannot find mvUsdCompoundShape in container"

        path = get_representation_path(representation)
        project_name = representation["context"]["project"]["name"]
        prev_representation_id = cmds.getAttr("{}.representation".format(node))
        prev_representation = get_representation_by_id(project_name,
                                                       prev_representation_id)
        prev_path = os.path.normpath(prev_representation["data"]["path"])

        # Make sure we can load the plugin
        cmds.loadPlugin("MultiverseForMaya", quiet=True)
        import multiverse

        for shape in shapes:
            multiverse.SetUsdCompoundAssetPaths(shape, [path])

            asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
            asset_paths = [os.path.normpath(p) for p in asset_paths]

            assert asset_paths.count(prev_path) == 1, \
                "Couldn't find matching path (or too many)"
            prev_path_idx = asset_paths.index(prev_path)

            path = get_representation_path(representation)
            asset_paths[prev_path_idx] = path

            multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)

        cmds.setAttr("{}.representation".format(node),
                     str(representation["_id"]),
                     type="string")
        mel.eval('refreshEditorTemplates;')

    def switch(self, container, representation):
        self.update(container, representation)
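The new update logic above boils down to list surgery: normalize the compound's asset paths, locate the single entry matching the previous representation's path, and swap in the new one. That core step can be exercised without Maya or Multiverse — `swap_asset_path` and the sample paths are illustrative, not part of the plugin:

```python
import os


def swap_asset_path(asset_paths, prev_path, new_path):
    """Replace the one entry equal to prev_path; error if absent or ambiguous."""
    normalized = [os.path.normpath(p) for p in asset_paths]
    prev_path = os.path.normpath(prev_path)
    # Same guard as the loader: exactly one matching layer is expected.
    assert normalized.count(prev_path) == 1, \
        "Couldn't find matching path (or too many)"
    idx = normalized.index(prev_path)
    result = list(asset_paths)
    result[idx] = new_path
    return result


paths = ["/proj/asset_v001.usd", "/proj/look_v002.usd"]
updated = swap_asset_path(paths, "/proj/asset_v001.usd", "/proj/asset_v002.usd")
```

The `normpath` on both sides is what makes the comparison robust to mixed separators, which matters once paths round-trip through Multiverse.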
132	openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py (new file)
@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os

import qargparse

from openpype.pipeline import (
    load,
    get_representation_path
)
from openpype.hosts.maya.api.lib import (
    maintained_selection
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id


class MultiverseUsdOverLoader(load.LoaderPlugin):
    """Reference file"""

    families = ["mvUsdOverride"]
    representations = ["usda", "usd", "usdz"]

    label = "Load Usd Override into Compound"
    order = -10
    icon = "code-fork"
    color = "orange"

    options = [
        qargparse.String(
            "Which Compound",
            label="Compound",
            help="Select which compound to add this as a layer to."
        )
    ]

    def load(self, context, name=None, namespace=None, options=None):
        current_usd = cmds.ls(selection=True,
                              type="mvUsdCompoundShape",
                              dag=True,
                              long=True)
        if len(current_usd) != 1:
            self.log.error("Current selection invalid: '{}', "
                           "must contain exactly 1 mvUsdCompoundShape."
                           "".format(current_usd))
            return

        # Make sure we can load the plugin
        cmds.loadPlugin("MultiverseForMaya", quiet=True)
        import multiverse

        nodes = current_usd
        with maintained_selection():
            multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname)

        namespace = current_usd[0].split("|")[1].split(":")[0]

        container = containerise(
            name=name,
            namespace=namespace,
            nodes=nodes,
            context=context,
            loader=self.__class__.__name__)

        cmds.addAttr(container, longName="mvUsdCompoundShape",
                     niceName="mvUsdCompoundShape", dataType="string")
        cmds.setAttr(container + ".mvUsdCompoundShape",
                     current_usd[0], type="string")

        return container

    def update(self, container, representation):
        # type: (dict, dict) -> None
        """Update container with specified representation."""
        cmds.loadPlugin("MultiverseForMaya", quiet=True)
        import multiverse

        node = container['objectName']
        assert cmds.objExists(node), "Missing container"

        members = cmds.sets(node, query=True) or []
        shapes = cmds.ls(members, type="mvUsdCompoundShape")
        assert shapes, "Cannot find mvUsdCompoundShape in container"

        mvShape = container['mvUsdCompoundShape']
        assert mvShape, "Missing mv source"

        project_name = representation["context"]["project"]["name"]
        prev_representation_id = cmds.getAttr("{}.representation".format(node))
        prev_representation = get_representation_by_id(project_name,
                                                       prev_representation_id)
        prev_path = os.path.normpath(prev_representation["data"]["path"])

        path = get_representation_path(representation)

        for shape in shapes:
            asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
            asset_paths = [os.path.normpath(p) for p in asset_paths]

            assert asset_paths.count(prev_path) == 1, \
                "Couldn't find matching path (or too many)"
            prev_path_idx = asset_paths.index(prev_path)
            asset_paths[prev_path_idx] = path
            multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)

        cmds.setAttr("{}.representation".format(node),
                     str(representation["_id"]),
                     type="string")
        mel.eval('refreshEditorTemplates;')

    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        # type: (dict) -> None
        """Remove loaded container."""
        # Delete container and its contents
        if cmds.objExists(container['objectName']):
            members = cmds.sets(container['objectName'], query=True) or []
            cmds.delete([container['objectName']] + members)

        # Remove the namespace, if empty
        namespace = container['namespace']
        if cmds.namespace(exists=namespace):
            members = cmds.namespaceInfo(namespace, listNamespace=True)
            if not members:
                cmds.namespace(removeNamespace=namespace)
            else:
                self.log.warning("Namespace not deleted because it "
                                 "still has members: %s", namespace)
@@ -26,7 +26,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                "rig",
                "camerarig",
                "xgen",
                "staticMesh"]
                "staticMesh",
                "mvLook"]
    representations = ["ma", "abc", "fbx", "mb"]

    label = "Reference"
@@ -74,13 +74,6 @@ class CollectInstances(pyblish.api.ContextPlugin):
        objectset = cmds.ls("*.id", long=True, type="objectSet",
                            recursive=True, objectsOnly=True)

        ctx_frame_start = context.data['frameStart']
        ctx_frame_end = context.data['frameEnd']
        ctx_handle_start = context.data['handleStart']
        ctx_handle_end = context.data['handleEnd']
        ctx_frame_start_handle = context.data['frameStartHandle']
        ctx_frame_end_handle = context.data['frameEndHandle']

        context.data['objectsets'] = objectset
        for objset in objectset:
@@ -156,31 +149,20 @@ class CollectInstances(pyblish.api.ContextPlugin):
            # Append start frame and end frame to label if present
            if "frameStart" and "frameEnd" in data:

                # if frame range on maya set is the same as full shot range
                # adjust the values to match the asset data
                if (ctx_frame_start_handle == data["frameStart"]
                        and ctx_frame_end_handle == data["frameEnd"]):  # noqa: W503, E501
                    data["frameStartHandle"] = ctx_frame_start_handle
                    data["frameEndHandle"] = ctx_frame_end_handle
                    data["frameStart"] = ctx_frame_start
                    data["frameEnd"] = ctx_frame_end
                    data["handleStart"] = ctx_handle_start
                    data["handleEnd"] = ctx_handle_end

                # if there are user values on start and end frame not matching
                # the asset, use them

                else:
                    if "handles" in data:
                        data["handleStart"] = data["handles"]
                        data["handleEnd"] = data["handles"]

                    data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # noqa: E501
                    data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]  # noqa: E501

                # Backwards compatibility for 'handles' data
                if "handles" in data:
                    data["handleStart"] = data["handles"]
                    data["handleEnd"] = data["handles"]
                    data.pop('handles')

                # Take handles from context if not set locally on the instance
                for key in ["handleStart", "handleEnd"]:
                    if key not in data:
                        data[key] = context.data[key]

                data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # noqa: E501
                data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]  # noqa: E501

                label += " [{0}-{1}]".format(int(data["frameStartHandle"]),
                                             int(data["frameEndHandle"]))
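The replacement logic in this hunk reduces to three steps: normalize legacy `handles` into `handleStart`/`handleEnd`, fall back to context values for any handle still missing, then derive the handle-inclusive range. A standalone sketch of those steps — the function name and the sample dicts are illustrative, not OpenPype API:

```python
def resolve_frame_range(data, context_data):
    """Apply handle defaults and compute the handle-inclusive frame range."""
    data = dict(data)

    # Backwards compatibility for 'handles' data
    if "handles" in data:
        data["handleStart"] = data["handles"]
        data["handleEnd"] = data["handles"]
        data.pop("handles")

    # Take handles from context if not set locally on the instance
    for key in ("handleStart", "handleEnd"):
        if key not in data:
            data[key] = context_data[key]

    data["frameStartHandle"] = data["frameStart"] - data["handleStart"]
    data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]
    return data


resolved = resolve_frame_range(
    {"frameStart": 1001, "frameEnd": 1050, "handles": 5},
    {"handleStart": 10, "handleEnd": 10})
# Instance-local handles (5) win over the context handles (10),
# giving a padded range of 996-1055.
```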
@@ -440,7 +440,8 @@ class CollectLook(pyblish.api.InstancePlugin):
            for res in self.collect_resources(n):
                instance.data["resources"].append(res)

        self.log.info("Collected resources: {}".format(instance.data["resources"]))
        self.log.info("Collected resources: {}".format(
            instance.data["resources"]))

        # Log warning when no relevant sets were retrieved for the look.
        if (

@@ -548,6 +549,11 @@ class CollectLook(pyblish.api.InstancePlugin):
            if not cmds.attributeQuery(attr, node=node, exists=True):
                continue
            attribute = "{}.{}".format(node, attr)
            # We don't support mixed-type attributes yet.
            if cmds.attributeQuery(attr, node=node, multi=True):
                self.log.warning("Attribute '{}' is mixed-type and is "
                                 "not supported yet.".format(attribute))
                continue
            if cmds.getAttr(attribute, type=True) == "message":
                continue
            node_attributes[attr] = cmds.getAttr(attribute)
@@ -21,37 +21,68 @@ COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']


def get_look_attrs(node):
    """Returns attributes of a node that are important for the look.
class _NodeTypeAttrib(object):
    """docstring for _NodeType"""

    These are the "changed" attributes (those that have edits applied
    in the current scene).
    def __init__(self, name, fname, computed_fname=None, colour_space=None):
        self.name = name
        self.fname = fname
        self.computed_fname = computed_fname or fname
        self.colour_space = colour_space or "colorSpace"

    Returns:
        list: Attribute names to extract
    def get_fname(self, node):
        return "{}.{}".format(node, self.fname)

    def get_computed_fname(self, node):
        return "{}.{}".format(node, self.computed_fname)

    def get_colour_space(self, node):
        return "{}.{}".format(node, self.colour_space)

    def __str__(self):
        return ("_NodeTypeAttrib(name={}, fname={}, "
                "computed_fname={}, colour_space={})".format(
                    self.name, self.fname, self.computed_fname,
                    self.colour_space))


NODETYPES = {
    "file": [_NodeTypeAttrib("file", "fileTextureName",
                             "computedFileTextureNamePattern")],
    "aiImage": [_NodeTypeAttrib("aiImage", "filename")],
    "RedshiftNormalMap": [_NodeTypeAttrib("RedshiftNormalMap", "tex0")],
    "dlTexture": [_NodeTypeAttrib("dlTexture", "textureFile",
                                  None, "textureFile_meta_colorspace")],
    "dlTriplanar": [_NodeTypeAttrib("dlTriplanar", "colorTexture",
                                    None, "colorTexture_meta_colorspace"),
                    _NodeTypeAttrib("dlTriplanar", "floatTexture",
                                    None, "floatTexture_meta_colorspace"),
                    _NodeTypeAttrib("dlTriplanar", "heightTexture",
                                    None, "heightTexture_meta_colorspace")]
}


def get_file_paths_for_node(node):
    """Gets all the file paths in this node.

    Returns all filepaths that this node references. Some node types only
    reference one, but others, like dlTriplanar, can reference 3.

    Args:
        node (str): Name of the Maya node

    Returns
        list(str): A list with all evaluated maya attributes for filepaths.
    """
    # When referenced get only attributes that are "changed since file open"
    # which includes any reference edits, otherwise take *all* user defined
    # attributes
    is_referenced = cmds.referenceQuery(node, isNodeReferenced=True)
    result = cmds.listAttr(node, userDefined=True,
                           changedSinceFileOpen=is_referenced) or []

    # `cbId` is added when a scene is saved, ignore by default
    if "cbId" in result:
        result.remove("cbId")
    node_type = cmds.nodeType(node)
    if node_type not in NODETYPES:
        return []

    # For shapes allow render stat changes
    if cmds.objectType(node, isAType="shape"):
        attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
        for attr in attrs:
            if attr in SHAPE_ATTRS:
                result.append(attr)
            elif attr.startswith('ai'):
                result.append(attr)

    return result
    paths = []
    for node_type_attr in NODETYPES[node_type]:
        fname = cmds.getAttr("{}.{}".format(node, node_type_attr.fname))
        paths.append(fname)
    return paths


def node_uses_image_sequence(node):
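The `NODETYPES` table introduced above maps each Maya node type to the attribute(s) holding its file path, computed pattern, and colour space. Because `_NodeTypeAttrib` is plain Python, the lookup can be exercised without Maya — this is a trimmed copy of the class from the diff, with a made-up node name:

```python
class _NodeTypeAttrib(object):
    """Describes where a node type stores its file path and colour space."""

    def __init__(self, name, fname, computed_fname=None, colour_space=None):
        self.name = name
        self.fname = fname
        # Fall back to the plain attribute when no computed variant exists.
        self.computed_fname = computed_fname or fname
        self.colour_space = colour_space or "colorSpace"

    def get_fname(self, node):
        return "{}.{}".format(node, self.fname)

    def get_computed_fname(self, node):
        return "{}.{}".format(node, self.computed_fname)

    def get_colour_space(self, node):
        return "{}.{}".format(node, self.colour_space)


# Two entries from the diff's NODETYPES table.
NODETYPES = {
    "file": [_NodeTypeAttrib("file", "fileTextureName",
                             "computedFileTextureNamePattern")],
    "aiImage": [_NodeTypeAttrib("aiImage", "filename")],
}

attr = NODETYPES["file"][0]
full_attr = attr.get_fname("myTexture1")  # "myTexture1.fileTextureName"
```

The `computed_fname or fname` / `colour_space or "colorSpace"` defaults are what let simple node types like `aiImage` declare only one attribute.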
@@ -69,13 +100,29 @@ def node_uses_image_sequence(node):
    """

    # useFrameExtension indicates an explicit image sequence
    node_path = get_file_node_path(node).lower()
    paths = get_file_node_paths(node)
    paths = [path.lower() for path in paths]

    # The following tokens imply a sequence
    patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]

    return (cmds.getAttr('%s.useFrameExtension' % node) or
            any(pattern in node_path for pattern in patterns))
    def pattern_in_paths(patterns, paths):
        """Helper function for checking to see if a pattern is contained
        in the list of paths"""
        for pattern in patterns:
            for path in paths:
                if pattern in path:
                    return True
        return False

    node_type = cmds.nodeType(node)
    if node_type == 'dlTexture':
        return (cmds.getAttr('{}.useImageSequence'.format(node)) or
                pattern_in_paths(patterns, paths))
    elif node_type == "file":
        return (cmds.getAttr('{}.useFrameExtension'.format(node)) or
                pattern_in_paths(patterns, paths))
    return False


def seq_to_glob(path):
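The new `pattern_in_paths` helper is pure Python, so the sequence-token detection can be checked in isolation — the token list is copied from the hunk above, while the sample texture paths are made up:

```python
def pattern_in_paths(patterns, paths):
    """Return True if any sequence token occurs in any of the paths."""
    for pattern in patterns:
        for path in paths:
            if pattern in path:
                return True
    return False


# Tokens that imply an image sequence (from the collector above).
# Callers lowercase the paths first, so the tokens are lowercase too.
PATTERNS = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]

hit = pattern_in_paths(PATTERNS, ["/tex/diffuse.<udim>.tx"])    # True
miss = pattern_in_paths(PATTERNS, ["/tex/diffuse.1001.tx"])     # False
```

An explicit UDIM resolved to a frame number (the `miss` case) is deliberately not treated as a sequence here; that case is caught by the node's `useFrameExtension`/`useImageSequence` attribute instead.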
@@ -132,7 +179,7 @@ def seq_to_glob(path):
    return path


def get_file_node_path(node):
def get_file_node_paths(node):
    """Get the file path used by a Maya file node.

    Args:

@@ -158,15 +205,9 @@ def get_file_node_path(node):
                    "<uvtile>"]
        lower = texture_pattern.lower()
        if any(pattern in lower for pattern in patterns):
            return texture_pattern
            return [texture_pattern]

    if cmds.nodeType(node) == 'aiImage':
        return cmds.getAttr('{0}.filename'.format(node))
    if cmds.nodeType(node) == 'RedshiftNormalMap':
        return cmds.getAttr('{}.tex0'.format(node))

    # otherwise use fileTextureName
    return cmds.getAttr('{0}.fileTextureName'.format(node))
    return get_file_paths_for_node(node)


def get_file_node_files(node):
@@ -181,15 +222,15 @@ def get_file_node_files(node):

    """

    path = get_file_node_path(node)
    path = cmds.workspace(expandName=path)
    paths = get_file_node_paths(node)
    paths = [cmds.workspace(expandName=path) for path in paths]
    if node_uses_image_sequence(node):
        glob_pattern = seq_to_glob(path)
        return glob.glob(glob_pattern)
    elif os.path.exists(path):
        return [path]
        globs = []
        for path in paths:
            globs += glob.glob(seq_to_glob(path))
        return globs
    else:
        return []
        return list(filter(lambda x: os.path.exists(x), paths))


def get_mipmap(fname):
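The reworked `get_file_node_files` now globs every path when the node uses a sequence and otherwise keeps only the paths that exist on disk. The branch logic can be exercised with plain files — `resolve_files` is a stand-in for the function above (with `seq_to_glob` assumed to have already expanded tokens into a glob pattern), and the temp directory stands in for the Maya workspace:

```python
import glob
import os
import tempfile


def resolve_files(paths, is_sequence):
    """Glob sequence patterns; otherwise filter paths by existence."""
    if is_sequence:
        globs = []
        for path in paths:
            # In the real code seq_to_glob() turns e.g. <udim> into a glob first.
            globs += glob.glob(path)
        return globs
    return list(filter(lambda x: os.path.exists(x), paths))


tmp = tempfile.mkdtemp()
existing = os.path.join(tmp, "tex_1001.exr")
open(existing, "w").close()
missing = os.path.join(tmp, "tex_1002.exr")

non_seq = resolve_files([existing, missing], is_sequence=False)      # [existing]
seq = resolve_files([os.path.join(tmp, "tex_*.exr")], is_sequence=True)
```

Note the behavior change versus the old single-path version: missing non-sequence paths are silently dropped rather than returning an empty list for the whole node.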
@@ -211,6 +252,11 @@ def is_mipmap(fname):
class CollectMultiverseLookData(pyblish.api.InstancePlugin):
    """Collect Multiverse Look

    Searches through the overrides finding all material overrides. From there
    it extracts the shading group and then finds all texture files in the
    shading group network. It also checks for mipmap versions of texture files
    and adds them to the resources to get published.

    """

    order = pyblish.api.CollectorOrder + 0.2
@@ -258,12 +304,20 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
                        shadingGroup), "members": list()}

                    # The SG may reference files, add those too!
                    history = cmds.listHistory(shadingGroup)
                    files = cmds.ls(history, type="file", long=True)
                    history = cmds.listHistory(
                        shadingGroup, allConnections=True)

                    # We need to iterate over node_types since `cmds.ls` may
                    # error out if we don't have the appropriate plugin loaded.
                    files = []
                    for node_type in NODETYPES.keys():
                        files += cmds.ls(history,
                                         type=node_type,
                                         long=True)

                    for f in files:
                        resources = self.collect_resource(f, publishMipMap)
                        instance.data["resources"].append(resources)
                        instance.data["resources"] += resources

                elif isinstance(matOver, multiverse.MaterialSourceUsdPath):
                    # TODO: Handle this later.
@@ -284,69 +338,63 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
            dict
        """

        self.log.debug("processing: {}".format(node))
        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
            self.log.error(
                "Unsupported file node: {}".format(cmds.nodeType(node)))
        node_type = cmds.nodeType(node)
        self.log.debug("processing: {}/{}".format(node, node_type))

        if node_type not in NODETYPES:
            self.log.error("Unsupported file node: {}".format(node_type))
            raise AssertionError("Unsupported file node")

        if cmds.nodeType(node) == 'file':
            self.log.debug(" - file node")
            attribute = "{}.fileTextureName".format(node)
            computed_attribute = "{}.computedFileTextureNamePattern".format(
                node)
        elif cmds.nodeType(node) == 'aiImage':
            self.log.debug("aiImage node")
            attribute = "{}.filename".format(node)
            computed_attribute = attribute
        elif cmds.nodeType(node) == 'RedshiftNormalMap':
            self.log.debug("RedshiftNormalMap node")
            attribute = "{}.tex0".format(node)
            computed_attribute = attribute
        resources = []
        for node_type_attr in NODETYPES[node_type]:
            fname_attrib = node_type_attr.get_fname(node)
            computed_fname_attrib = node_type_attr.get_computed_fname(node)
            colour_space_attrib = node_type_attr.get_colour_space(node)

        source = cmds.getAttr(attribute)
        self.log.info("  - file source: {}".format(source))
        color_space_attr = "{}.colorSpace".format(node)
        try:
            color_space = cmds.getAttr(color_space_attr)
        except ValueError:
            # node doesn't have colorspace attribute
            source = cmds.getAttr(fname_attrib)
            color_space = "Raw"
        # Compare with the computed file path, e.g. the one with the <UDIM>
        # pattern in it, to generate some logging information about this
        # difference
        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
        computed_source = cmds.getAttr(computed_attribute)
        if source != computed_source:
            self.log.debug("Detected computed file pattern difference "
                           "from original pattern: {0} "
                           "({1} -> {2})".format(node,
                                                 source,
                                                 computed_source))
            try:
                color_space = cmds.getAttr(colour_space_attrib)
            except ValueError:
                # node doesn't have colorspace attribute, use "Raw" from before
                pass
            # Compare with the computed file path, e.g. the one with the <UDIM>
            # pattern in it, to generate some logging information about this
            # difference
            # computed_attribute = "{}.computedFileTextureNamePattern".format(node)  # noqa
            computed_source = cmds.getAttr(computed_fname_attrib)
            if source != computed_source:
                self.log.debug("Detected computed file pattern difference "
                               "from original pattern: {0} "
                               "({1} -> {2})".format(node,
                                                     source,
                                                     computed_source))

            # We replace backslashes with forward slashes because V-Ray
            # can't handle the UDIM files with the backslashes in the
            # paths as the computed patterns
            source = source.replace("\\", "/")
            # We replace backslashes with forward slashes because V-Ray
            # can't handle the UDIM files with the backslashes in the
            # paths as the computed patterns
            source = source.replace("\\", "/")

        files = get_file_node_files(node)
        files = self.handle_files(files, publishMipMap)
        if len(files) == 0:
            self.log.error("No valid files found from node `%s`" % node)
            files = get_file_node_files(node)
            files = self.handle_files(files, publishMipMap)
            if len(files) == 0:
                self.log.error("No valid files found from node `%s`" % node)

        self.log.info("collection of resource done:")
        self.log.info("  - node: {}".format(node))
        self.log.info("  - attribute: {}".format(attribute))
        self.log.info("  - source: {}".format(source))
        self.log.info("  - file: {}".format(files))
        self.log.info("  - color space: {}".format(color_space))
            self.log.info("collection of resource done:")
            self.log.info("  - node: {}".format(node))
            self.log.info("  - attribute: {}".format(fname_attrib))
            self.log.info("  - source: {}".format(source))
            self.log.info("  - file: {}".format(files))
            self.log.info("  - color space: {}".format(color_space))

        # Define the resource
        return {"node": node,
                "attribute": attribute,
                "source": source,  # required for resources
                "files": files,
                "color_space": color_space}  # required for resources
            # Define the resource
            resource = {"node": node,
                        "attribute": fname_attrib,
                        "source": source,  # required for resources
                        "files": files,
                        "color_space": color_space}  # required for resources
|
||||
resources.append(resource)
|
||||
return resources
|
||||
|
||||
def handle_files(self, files, publishMipMap):
|
||||
"""This will go through all the files and make sure that they are
|
||||
|
|
|
152  openpype/hosts/maya/plugins/publish/extract_import_reference.py  Normal file
@@ -0,0 +1,152 @@
import os
import sys
import tempfile

from maya import cmds

import pyblish.api

from openpype.lib import run_subprocess
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib


class ExtractImportReference(publish.Extractor):
    """Extract the scene with imported references.

    The temp scene with imported references is published for rendering
    if this extractor is activated.
    """

    label = "Extract Import Reference"
    order = pyblish.api.ExtractorOrder - 0.48
    hosts = ["maya"]
    families = ["renderlayer", "workfile"]
    optional = True
    tmp_format = "_tmp"

    @classmethod
    def apply_settings(cls, project_setting, system_settings):
        cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"]  # noqa

    def process(self, instance):
        ext_mapping = (
            instance.context.data["project_settings"]["maya"]["ext_mapping"]
        )
        if ext_mapping:
            self.log.info("Looking in settings for scene type ...")
            # use extension mapping for first family found
            for family in self.families:
                try:
                    self.scene_type = ext_mapping[family]
                    self.log.info(
                        "Using {} as scene type".format(self.scene_type))
                    break
                except KeyError:
                    # set scene type to ma
                    self.scene_type = "ma"

        _scene_type = ("mayaAscii"
                       if self.scene_type == "ma"
                       else "mayaBinary")

        dir_path = self.staging_dir(instance)
        # name the file with the imported reference
        if instance.name == "Main":
            return
        tmp_name = instance.name + self.tmp_format
        current_name = cmds.file(query=True, sceneName=True)
        ref_scene_name = "{0}.{1}".format(tmp_name, self.scene_type)

        reference_path = os.path.join(dir_path, ref_scene_name)
        tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name

        self.log.info("Performing extraction..")

        # This generates a script for mayapy to take care of reference
        # importing outside the current session. It is passing the current
        # scene name and the destination scene name.
        script = ("""
# -*- coding: utf-8 -*-
'''Script to import references in a given scene.'''
import maya.standalone
maya.standalone.initialize()
from maya import cmds

# scene names filled by caller
current_name = "{current_name}"
ref_scene_name = "{ref_scene_name}"
print(">>> Opening {{}} ...".format(current_name))
cmds.file(current_name, open=True, force=True)
print(">>> Processing references")
all_reference = cmds.file(q=True, reference=True) or []
for ref in all_reference:
    if cmds.referenceQuery(ref, il=True):
        cmds.file(ref, importReference=True)

        nested_ref = cmds.file(q=True, reference=True)
        if nested_ref:
            for new_ref in nested_ref:
                if new_ref not in all_reference:
                    all_reference.append(new_ref)

print(">>> Finished importing references")
print(">>> Saving scene as {{}}".format(ref_scene_name))

cmds.file(rename=ref_scene_name)
cmds.file(save=True, force=True)
print("*** Done")
""").format(current_name=current_name, ref_scene_name=tmp_path)

        mayapy_exe = os.path.join(os.getenv("MAYA_LOCATION"), "bin", "mayapy")
        if sys.platform == "win32":
            mayapy_exe += ".exe"
        mayapy_exe = os.path.normpath(mayapy_exe)
        # can't use NamedTemporaryFile as that can't be opened in another
        # process until handles are closed by the context manager.
        with tempfile.TemporaryDirectory() as tmp_dir_name:
            tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
            self.log.info("Using script file: {}".format(tmp_script_path))
            with open(tmp_script_path, "wt") as tmp:
                tmp.write(script)

            try:
                run_subprocess([mayapy_exe, tmp_script_path])
            except Exception:
                self.log.error("Import reference failed", exc_info=True)
                raise

        with lib.maintained_selection():
            cmds.select(all=True, noExpand=True)
            cmds.file(reference_path,
                      force=True,
                      typ=_scene_type,
                      exportSelected=True,
                      channels=True,
                      constraints=True,
                      shader=True,
                      expressions=True,
                      constructionHistory=True)

        instance.context.data["currentFile"] = tmp_path

        if "files" not in instance.data:
            instance.data["files"] = []
        instance.data["files"].append(ref_scene_name)

        if instance.data.get("representations") is None:
            instance.data["representations"] = []

        ref_representation = {
            "name": self.scene_type,
            "ext": self.scene_type,
            "files": ref_scene_name,
            "stagingDir": os.path.dirname(current_name),
            "outputName": "imported"
        }
        self.log.info("%s" % ref_representation)

        instance.data["representations"].append(ref_representation)

        self.log.info("Extracted instance '%s' to: '%s'" % (ref_scene_name,
                                                            reference_path))
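The extractor above formats a Python script as a string (doubling the braces that must survive `str.format`), writes it into a temporary directory, and runs it with `mayapy`. A minimal, Maya-free sketch of the same pattern using `sys.executable` instead of mayapy; `run_headless` and its script template are hypothetical names for illustration:

```python
import os
import subprocess
import sys
import tempfile

# Template with {placeholders}; literal braces are doubled so that
# str.format leaves them for the generated script's own .format call.
SCRIPT = """
scene = "{scene}"
print(">>> Opening {{}} ...".format(scene))
"""


def run_headless(scene_name):
    script = SCRIPT.format(scene=scene_name)
    with tempfile.TemporaryDirectory() as tmp_dir:
        # NamedTemporaryFile can't be reopened by a child process on
        # Windows while still open, hence a plain file in a temp directory.
        script_path = os.path.join(tmp_dir, "job.py")
        with open(script_path, "wt") as f:
            f.write(script)
        out = subprocess.check_output([sys.executable, script_path])
    return out.decode().strip()


print(run_headless("shot010.ma"))  # >>> Opening shot010.ma ...
```

The same temp-directory trick is what the extractor relies on: the file handle is closed before the child process opens the script.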
@@ -73,12 +73,12 @@ class ExtractMultiverseLook(publish.Extractor):
            "writeAll": False,
            "writeTransforms": False,
            "writeVisibility": False,
-           "writeAttributes": False,
+           "writeAttributes": True,
            "writeMaterials": True,
            "writeVariants": False,
            "writeVariantsDefinition": False,
            "writeActiveState": False,
-           "writeNamespaces": False,
+           "writeNamespaces": True,
            "numTimeSamples": 1,
            "timeSamplesSpan": 0.0
        }
@@ -2,7 +2,9 @@ import os
import six

from maya import cmds
+from maya import mel

import pyblish.api
from openpype.pipeline import publish
+from openpype.hosts.maya.api.lib import maintained_selection


@@ -26,7 +28,7 @@ class ExtractMultiverseUsd(publish.Extractor):

    label = "Extract Multiverse USD Asset"
    hosts = ["maya"]
-   families = ["mvUsd"]
+   families = ["usd"]
    scene_type = "usd"
    file_formats = ["usd", "usda", "usdz"]
@@ -87,7 +89,7 @@ class ExtractMultiverseUsd(publish.Extractor):
        return {
            "stripNamespaces": False,
            "mergeTransformAndShape": False,
-           "writeAncestors": True,
+           "writeAncestors": False,
            "flattenParentXforms": False,
            "writeSparseOverrides": False,
            "useMetaPrimPath": False,
@@ -147,7 +149,15 @@ class ExtractMultiverseUsd(publish.Extractor):

        return options

+   def get_default_options(self):
+       self.log.info("ExtractMultiverseUsd get_default_options")
+       return self.default_options
+
+   def filter_members(self, members):
+       return members
+
    def process(self, instance):

        # Load plugin first
        cmds.loadPlugin("MultiverseForMaya", quiet=True)


@@ -161,7 +171,7 @@ class ExtractMultiverseUsd(publish.Extractor):
        file_path = file_path.replace('\\', '/')

        # Parse export options
-       options = self.default_options
+       options = self.get_default_options()
        options = self.parse_overrides(instance, options)
        self.log.info("Export options: {0}".format(options))
@@ -170,27 +180,35 @@ class ExtractMultiverseUsd(publish.Extractor):

        with maintained_selection():
            members = instance.data("setMembers")
-           self.log.info('Collected object {}'.format(members))
+           self.log.info('Collected objects: {}'.format(members))
+           members = self.filter_members(members)
+           if not members:
+               self.log.error('No members!')
+               return
+           self.log.info(' - filtered: {}'.format(members))

            import multiverse

            time_opts = None
            frame_start = instance.data['frameStart']
            frame_end = instance.data['frameEnd']
-           handle_start = instance.data['handleStart']
-           handle_end = instance.data['handleEnd']
-           step = instance.data['step']
-           fps = instance.data['fps']
            if frame_end != frame_start:
                time_opts = multiverse.TimeOptions()

                time_opts.writeTimeRange = True
+
+               handle_start = instance.data['handleStart']
+               handle_end = instance.data['handleEnd']
+
                time_opts.frameRange = (
                    frame_start - handle_start, frame_end + handle_end)
-               time_opts.frameIncrement = step
-               time_opts.numTimeSamples = instance.data["numTimeSamples"]
-               time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
-               time_opts.framePerSecond = fps
+               time_opts.frameIncrement = instance.data['step']
+               time_opts.numTimeSamples = instance.data.get(
+                   'numTimeSamples', options['numTimeSamples'])
+               time_opts.timeSamplesSpan = instance.data.get(
+                   'timeSamplesSpan', options['timeSamplesSpan'])
+               time_opts.framePerSecond = instance.data.get(
+                   'fps', mel.eval('currentTimeUnitToFPS()'))

            asset_write_opts = multiverse.AssetWriteOptions(time_opts)
            options_discard_keys = {
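The hunk above only writes a time range when the end frame differs from the start frame, and pads the range by the handles. A standalone sketch of that range computation (a hypothetical helper, not the Multiverse API):

```python
def compute_frame_range(frame_start, frame_end, handle_start, handle_end):
    """Return the (start, end) export range padded by handles,
    or None when a single-frame asset needs no time range."""
    if frame_end == frame_start:
        # static asset: nothing animated to export over time
        return None
    return (frame_start - handle_start, frame_end + handle_end)


print(compute_frame_range(1001, 1050, 10, 10))  # (991, 1060)
```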
@@ -203,11 +221,15 @@ class ExtractMultiverseUsd(publish.Extractor):
            'step',
            'fps'
        }
+       self.log.debug("Write Options:")
        for key, value in options.items():
            if key in options_discard_keys:
                continue

+           self.log.debug(" - {}={}".format(key, value))
            setattr(asset_write_opts, key, value)

+       self.log.info('WriteAsset: {} / {}'.format(file_path, members))
        multiverse.WriteAsset(file_path, members, asset_write_opts)

        if "representations" not in instance.data:
@@ -223,3 +245,33 @@ class ExtractMultiverseUsd(publish.Extractor):

        self.log.info("Extracted instance {} to {}".format(
            instance.name, file_path))


class ExtractMultiverseUsdAnim(ExtractMultiverseUsd):
    """Extractor for Multiverse USD Animation Sparse Cache data.

    This will extract the sparse cache data from the scene and generate a
    USD file with all the animation data.

    Upon publish a .usd sparse cache will be written.
    """
    label = "Extract Multiverse USD Animation Sparse Cache"
    families = ["animation", "usd"]
    match = pyblish.api.Subset

    def get_default_options(self):
        anim_options = self.default_options
        anim_options["writeSparseOverrides"] = True
        anim_options["writeUsdAttributes"] = True
        anim_options["stripNamespaces"] = True
        return anim_options

    def filter_members(self, members):
        out_set = next((i for i in members if i.endswith("out_SET")), None)

        if out_set is None:
            self.log.warning("Expecting out_SET")
            return None

        members = cmds.ls(cmds.sets(out_set, query=True), long=True)
        return members
@@ -80,13 +80,14 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
    def is_or_has_mipmap(self, fname, files):
        ext = os.path.splitext(fname)[1][1:]
        if ext in MIPMAP_EXTENSIONS:
-           self.log.debug("Is a mipmap '{}'".format(fname))
+           self.log.debug(" - Is a mipmap '{}'".format(fname))
            return True

        for colour_space in COLOUR_SPACES:
            for mipmap_ext in MIPMAP_EXTENSIONS:
                mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
                if mipmap_fname in files:
-                   self.log.debug("Has a mipmap '{}'".format(fname))
+                   self.log.debug(
+                       " - Has a mipmap '{}'".format(mipmap_fname))
                    return True
        return False
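The validator above accepts a texture either when it already carries a mip-mapped extension or when a sibling file named `<file>.<colourspace>.<ext>` exists in the published set. A self-contained sketch of that lookup; the extension and colour-space sets here are illustrative, not the plugin's exact constants:

```python
import os

MIPMAP_EXTENSIONS = {"tdl", "tx", "tex"}   # illustrative set
COLOUR_SPACES = {"sRGB", "Raw", "linear"}  # illustrative set


def is_or_has_mipmap(fname, files):
    """True when fname is a mipmap, or a generated mipmap sibling exists."""
    ext = os.path.splitext(fname)[1][1:]
    if ext in MIPMAP_EXTENSIONS:
        return True
    # look for a pre-generated "<file>.<colourspace>.<mip ext>" sibling
    for colour_space in COLOUR_SPACES:
        for mipmap_ext in MIPMAP_EXTENSIONS:
            if ".".join([fname, colour_space, mipmap_ext]) in files:
                return True
    return False


print(is_or_has_mipmap("wood.jpg", {"wood.jpg.sRGB.tx"}))  # True
```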
@@ -21,6 +21,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
    - nurbsSurface: _NRB
    - locator: _LOC
    - null/group: _GRP
+   Suffices can also be overridden by project settings.

    .. warning::
        This grabs the first child shape as a reference and doesn't use the


@@ -44,6 +45,13 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):

    ALLOW_IF_NOT_IN_SUFFIX_TABLE = True

+   @classmethod
+   def get_table_for_invalid(cls):
+       ss = []
+       for k, v in cls.SUFFIX_NAMING_TABLE.items():
+           ss.append(" - {}: {}".format(k, ", ".join(v)))
+       return "\n".join(ss)
+
    @staticmethod
    def is_valid_name(node_name, shape_type,
                      SUFFIX_NAMING_TABLE, ALLOW_IF_NOT_IN_SUFFIX_TABLE):


@@ -106,5 +114,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
        """
        invalid = self.get_invalid(instance)
        if invalid:
+           valid = self.get_table_for_invalid()
            raise ValueError("Incorrectly named geometry "
-                            "transforms: {0}".format(invalid))
+                            "transforms: {0}, accepted suffixes are: "
+                            "\n{1}".format(invalid, valid))
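The new `get_table_for_invalid` helper folds the suffix table into the error message, one shape type per line. A standalone sketch with an illustrative subset of the validator's table:

```python
SUFFIX_NAMING_TABLE = {  # illustrative subset of the validator's table
    "mesh": ["_GEO", "_GES"],
    "nurbsCurve": ["_CRV"],
    "locator": ["_LOC"],
}


def get_table_for_invalid(table):
    """Format the allowed suffixes as one ' - type: suffixes' line each."""
    return "\n".join(
        " - {}: {}".format(k, ", ".join(v)) for k, v in table.items()
    )


print(get_table_for_invalid(SUFFIX_NAMING_TABLE))
```

Embedding this table in the `ValueError` gives artists the accepted suffixes right next to the list of offending transforms.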
@@ -1,7 +1,7 @@
import os
import sys

-from Qt import QtWidgets, QtCore
+from qtpy import QtWidgets, QtCore

from openpype.tools.utils import host_tools


@@ -2,7 +2,7 @@ import re
import uuid

import qargparse
-from Qt import QtWidgets, QtCore
+from qtpy import QtWidgets, QtCore

from openpype.settings import get_current_project_settings
from openpype.pipeline.context_tools import get_current_project_asset
@@ -171,7 +176,6 @@ class ShotMetadataSolver:
                _index == 0
                and parents[-1]["entity_name"] == parent_name
            ):
-               self.log.debug(f" skipping : {parent_name}")
                continue

            # in case first parent is project then start parents from start


@@ -179,7 +178,6 @@ class ShotMetadataSolver:
                _index == 0
                and parent_token_type == "Project"
            ):
-               self.log.debug("rebuilding parents from scratch")
                project_parent = parents[0]
                parents = [project_parent]
                continue


@@ -189,8 +187,6 @@ class ShotMetadataSolver:
                "entity_name": parent_name
            })

-       self.log.debug(f"__ parents: {parents}")
-
        return parents

    def _create_hierarchy_path(self, parents):


@@ -297,7 +293,6 @@ class ShotMetadataSolver:
        Returns:
            (str, dict): shot name and hierarchy data
        """
-       self.log.info(f"_ source_data: {source_data}")

        tasks = {}
        asset_doc = source_data["selected_asset_doc"]
@@ -1,6 +1,5 @@
import os
from copy import deepcopy
from pprint import pformat
import opentimelineio as otio
from openpype.client import (
    get_asset_by_name,


@@ -13,9 +12,7 @@ from openpype.hosts.traypublisher.api.plugin import (
from openpype.hosts.traypublisher.api.editorial import (
    ShotMetadataSolver
)
from openpype.pipeline import CreatedInstance
from openpype.lib import (
    get_ffprobe_data,
    convert_ffprobe_fps_value,
@@ -33,14 +30,14 @@ from openpype.lib import (
CLIP_ATTR_DEFS = [
    EnumDef(
        "fps",
-       items={
-           "from_selection": "From selection",
-           23.997: "23.976",
-           24: "24",
-           25: "25",
-           29.97: "29.97",
-           30: "30"
-       },
+       items=[
+           {"value": "from_selection", "label": "From selection"},
+           {"value": 23.997, "label": "23.976"},
+           {"value": 24, "label": "24"},
+           {"value": 25, "label": "25"},
+           {"value": 29.97, "label": "29.97"},
+           {"value": 30, "label": "30"}
+       ],
        label="FPS"
    ),
    NumberDef(
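The fps enum above moves from a `{value: label}` dict to a list of `{"value": ..., "label": ...}` entries, which keeps ordering explicit and lets non-string values (floats, ints) sit next to their display labels. A sketch converting the old form to the new one (hypothetical helper name):

```python
def dict_items_to_list(items):
    """Convert {value: label} enum items to [{"value": v, "label": l}]."""
    return [
        {"value": value, "label": label}
        for value, label in items.items()
    ]


old_items = {"from_selection": "From selection", 24: "24", 25: "25"}
print(dict_items_to_list(old_items))
```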
@@ -70,14 +67,12 @@ class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
    host_name = "traypublisher"

    def create(self, instance_data, source_data=None):
-       self.log.info(f"instance_data: {instance_data}")
        subset_name = instance_data["subset"]

        # Create new instance
        new_instance = CreatedInstance(
            self.family, subset_name, instance_data, self
        )
-       self.log.info(f"instance_data: {pformat(new_instance.data)}")

        self._store_new_instance(new_instance)
@@ -223,8 +218,6 @@ or updating already created. Publishing will create OTIO file.
        asset_name = instance_data["asset"]
        asset_doc = get_asset_by_name(self.project_name, asset_name)

-       self.log.info(pre_create_data["fps"])
-
        if pre_create_data["fps"] == "from_selection":
            # get asset doc data attributes
            fps = asset_doc["data"]["fps"]
@@ -239,34 +232,43 @@ or updating already created. Publishing will create OTIO file.
        sequence_path_data = pre_create_data["sequence_filepath_data"]
        media_path_data = pre_create_data["media_filepaths_data"]

-       sequence_path = self._get_path_from_file_data(sequence_path_data)
+       sequence_paths = self._get_path_from_file_data(
+           sequence_path_data, multi=True)
        media_path = self._get_path_from_file_data(media_path_data)

-       # get otio timeline
-       otio_timeline = self._create_otio_timeline(
-           sequence_path, fps)
+       first_otio_timeline = None
+       for seq_path in sequence_paths:
+           # get otio timeline
+           otio_timeline = self._create_otio_timeline(
+               seq_path, fps)

-       # Create all clip instances
-       clip_instance_properties.update({
-           "fps": fps,
-           "parent_asset_name": asset_name,
-           "variant": instance_data["variant"]
-       })
+           # Create all clip instances
+           clip_instance_properties.update({
+               "fps": fps,
+               "parent_asset_name": asset_name,
+               "variant": instance_data["variant"]
+           })

-       # create clip instances
-       self._get_clip_instances(
-           otio_timeline,
-           media_path,
-           clip_instance_properties,
-           family_presets=allowed_family_presets
-       )
+           # create clip instances
+           self._get_clip_instances(
+               otio_timeline,
+               media_path,
+               clip_instance_properties,
+               allowed_family_presets,
+               os.path.basename(seq_path),
+               first_otio_timeline
+           )
+           if not first_otio_timeline:
+               # assign otio timeline for multi file workflow
+               first_otio_timeline = otio_timeline

        # create otio editorial instance
        self._create_otio_instance(
-           subset_name, instance_data,
-           sequence_path, media_path,
-           otio_timeline
+           subset_name,
+           instance_data,
+           seq_path, media_path,
+           first_otio_timeline
        )
@@ -317,14 +319,14 @@ or updating already created. Publishing will create OTIO file.
        kwargs["rate"] = fps
        kwargs["ignore_timecode_mismatch"] = True

        self.log.info(f"kwargs: {kwargs}")
        return otio.adapters.read_from_file(sequence_path, **kwargs)

-   def _get_path_from_file_data(self, file_path_data):
+   def _get_path_from_file_data(self, file_path_data, multi=False):
        """Convert creator path data to a single path string.

        Args:
            file_path_data (FileDefItem): creator path data inputs
+           multi (bool): switch to multiple files mode

        Raises:
            FileExistsError: in case nothing had been set


@@ -332,23 +334,29 @@ or updating already created. Publishing will create OTIO file.
        Returns:
            str: path string
        """
-       # TODO: just temporarily solving only one media file
-       if isinstance(file_path_data, list):
-           file_path_data = file_path_data.pop()
+       return_path_list = []

-       if len(file_path_data["filenames"]) == 0:
+       if isinstance(file_path_data, list):
+           return_path_list = [
+               os.path.join(f["directory"], f["filenames"][0])
+               for f in file_path_data
+           ]

+       if not return_path_list:
            raise FileExistsError(
                f"File path was not added: {file_path_data}")

-       return os.path.join(
-           file_path_data["directory"], file_path_data["filenames"][0])
+       return return_path_list if multi else return_path_list[0]

    def _get_clip_instances(
        self,
        otio_timeline,
        media_path,
        instance_data,
-       family_presets
+       family_presets,
+       sequence_file_name,
+       first_otio_timeline=None
    ):
        """Helper function for creating clip instances
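`_get_path_from_file_data` now always builds a list of resolved paths and returns either the whole list (`multi=True`) or its first entry. A minimal sketch of that resolution over plain dicts shaped like the creator's file data (illustrative, not the real `FileDefItem` class):

```python
import os


def get_paths(file_path_data, multi=False):
    """Resolve creator file inputs to one path or a list of paths."""
    if not isinstance(file_path_data, list):
        file_path_data = [file_path_data]
    paths = [
        os.path.join(item["directory"], item["filenames"][0])
        for item in file_path_data
    ]
    if not paths:
        # mirrors the creator's behaviour when nothing was added
        raise FileExistsError("File path was not added")
    return paths if multi else paths[0]


items = [{"directory": "/edits", "filenames": ["ep01.edl"]},
         {"directory": "/edits", "filenames": ["ep02.edl"]}]
print(get_paths(items, multi=True))
```

With `multi=False` the same call degrades gracefully to the old single-path behaviour, so existing callers need no change.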
@@ -368,17 +376,15 @@ or updating already created. Publishing will create OTIO file.
        media_data = self._get_media_source_metadata(media_path)

        for track in tracks:
-           self.log.debug(f"track.name: {track.name}")
+           track.name = f"{sequence_file_name} - {otio_timeline.name}"
            try:
                track_start_frame = (
                    abs(track.source_range.start_time.value)
                )
-               self.log.debug(f"track_start_frame: {track_start_frame}")
+               track_start_frame -= self.timeline_frame_start
            except AttributeError:
                track_start_frame = 0

-           self.log.debug(f"track_start_frame: {track_start_frame}")
-
            for clip in track.each_child():
                if not self._validate_clip_for_processing(clip):
@@ -400,10 +406,6 @@ or updating already created. Publishing will create OTIO file.
                    "instance_label": None,
                    "instance_id": None
                }
-               self.log.info((
-                   "Creating subsets from presets: \n"
-                   f"{pformat(family_presets)}"
-               ))

                for _fpreset in family_presets:
                    # exclude audio family if no audio stream
@@ -419,7 +421,10 @@ or updating already created. Publishing will create OTIO file.
                        deepcopy(base_instance_data),
                        parenting_data
                    )
-                   self.log.debug(f"{pformat(dict(instance.data))}")
+
+           # add track to first otio timeline if it is in input args
+           if first_otio_timeline:
+               first_otio_timeline.tracks.append(deepcopy(track))

    def _restore_otio_source_range(self, otio_clip):
        """Infusing source range.
@@ -460,7 +465,6 @@ or updating already created. Publishing will create OTIO file.
            target_url=media_path,
            available_range=available_range
        )

        otio_clip.media_reference = media_reference

    def _get_media_source_metadata(self, path):
@@ -481,7 +485,6 @@ or updating already created. Publishing will create OTIO file.
        media_data = get_ffprobe_data(
            path, self.log
        )
-       self.log.debug(f"__ media_data: {pformat(media_data)}")

        # get video stream data
        video_stream = media_data["streams"][0]
@@ -589,9 +592,6 @@ or updating already created. Publishing will create OTIO file.
        # get variant name from preset or from inheritance
        _variant_name = preset.get("variant") or variant_name

-       self.log.debug(f"__ family: {family}")
-       self.log.debug(f"__ preset: {preset}")

        # subset name
        subset_name = "{}{}".format(
            family, _variant_name.capitalize()
@@ -722,17 +722,13 @@ or updating already created. Publishing will create OTIO file.
        clip_in += track_start_frame
        clip_out = otio_clip.range_in_parent().end_time_inclusive().value
        clip_out += track_start_frame
-       self.log.info(f"clip_in: {clip_in} | clip_out: {clip_out}")

        # add offset in case there is any
-       self.log.debug(f"__ timeline_offset: {timeline_offset}")
        if timeline_offset:
            clip_in += timeline_offset
            clip_out += timeline_offset

        clip_duration = otio_clip.duration().value
-       self.log.info(f"clip duration: {clip_duration}")

        source_in = otio_clip.trimmed_range().start_time.value
        source_out = source_in + clip_duration
@@ -762,7 +758,6 @@ or updating already created. Publishing will create OTIO file.
        Returns:
            list: list of dicts with preset items
        """
-       self.log.debug(f"__ pre_create_data: {pre_create_data}")
        return [
            {"family": "shot"},
            *[
@@ -833,7 +828,7 @@ or updating already created. Publishing will create OTIO file.
                ".fcpxml"
            ],
            allow_sequences=False,
-           single_item=True,
+           single_item=False,
            label="Sequence file",
        ),
        FileDef(
@@ -7,8 +7,8 @@ exists under selected asset.
"""
from pathlib import Path

-from openpype.client import get_subset_by_name, get_asset_by_name
-from openpype.lib.attribute_definitions import FileDef
+# from openpype.client import get_subset_by_name, get_asset_by_name
+from openpype.lib.attribute_definitions import FileDef, BoolDef
from openpype.pipeline import (
    CreatedInstance,
    CreatorError


@@ -23,7 +23,8 @@ class OnlineCreator(TrayPublishCreator):
    label = "Online"
    family = "online"
    description = "Publish file retaining its original file name"
-   extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg"]
+   extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg", ".exr",
+                 ".dpx", ".tif", ".png", ".jpg"]

    def get_detail_description(self):
        return """# Create file retaining its original file name.
@@ -49,13 +50,17 @@ class OnlineCreator(TrayPublishCreator):

        origin_basename = Path(files[0]).stem

+       # disable check for existing subset with the same name
+       """
        asset = get_asset_by_name(
            self.project_name, instance_data["asset"], fields=["_id"])

        if get_subset_by_name(
                self.project_name, origin_basename, asset["_id"],
                fields=["_id"]):
            raise CreatorError(f"subset with {origin_basename} already "
                               "exists in selected asset")
+       """

        instance_data["originalBasename"] = origin_basename
        subset_name = origin_basename
@@ -69,15 +74,29 @@ class OnlineCreator(TrayPublishCreator):
            instance_data, self)
        self._store_new_instance(new_instance)

+   def get_instance_attr_defs(self):
+       return [
+           BoolDef(
+               "add_review_family",
+               default=True,
+               label="Review"
+           )
+       ]
+
    def get_pre_create_attr_defs(self):
        return [
            FileDef(
                "representation_file",
                folders=False,
                extensions=self.extensions,
-               allow_sequences=False,
+               allow_sequences=True,
                single_item=True,
                label="Representation",
            ),
+           BoolDef(
+               "add_review_family",
+               default=True,
+               label="Review"
+           )
        ]
@@ -12,12 +12,18 @@ class CollectOnlineFile(pyblish.api.InstancePlugin):

    def process(self, instance):
        file = Path(instance.data["creator_attributes"]["path"])
+       review = instance.data["creator_attributes"]["add_review_family"]
+       instance.data["review"] = review
+       if "review" not in instance.data["families"]:
+           instance.data["families"].append("review")
+       self.log.info(f"Adding review: {review}")

        instance.data["representations"].append(
            {
                "name": file.suffix.lstrip("."),
                "ext": file.suffix.lstrip("."),
                "files": file.name,
-               "stagingDir": file.parent.as_posix()
+               "stagingDir": file.parent.as_posix(),
+               "tags": ["review"] if review else []
            }
        )
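The collector above derives the representation entirely from the file path (suffix, name, parent directory) and tags it for review when the creator attribute is on. A standalone sketch of that dict construction (a hypothetical helper, not the plugin itself):

```python
from pathlib import Path


def build_representation(path_str, review):
    """Build a publish representation dict from a single file path."""
    file = Path(path_str)
    return {
        "name": file.suffix.lstrip("."),
        "ext": file.suffix.lstrip("."),
        "files": file.name,
        # as_posix() keeps forward slashes regardless of platform
        "stagingDir": file.parent.as_posix(),
        "tags": ["review"] if review else []
    }


print(build_representation("/footage/sh010_online.mov", True))
```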
@@ -33,8 +33,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
    ]

    def process(self, instance):
-       self.log.debug(pformat(instance.data))
-
        creator_identifier = instance.data["creator_identifier"]
        if "editorial" not in creator_identifier:
            return


@@ -82,7 +80,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
        ]

        otio_clip = clips.pop()
-       self.log.debug(f"__ otioclip.parent: {otio_clip.parent}")

        return otio_clip


@@ -172,7 +169,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
        }

        parents = instance.data.get('parents', [])
-       self.log.debug(f"parents: {pformat(parents)}")

        actual = {name: in_info}


@@ -190,7 +186,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):

        # adding hierarchy context to instance
        context.data["hierarchyContext"] = final_context
-       self.log.debug(pformat(final_context))

    def _update_dict(self, ex_dict, new_dict):
        """ Recursion function
@@ -20,6 +20,8 @@ class ValidateOnlineFile(OptionalPyblishPluginMixin,
    optional = True

    def process(self, instance):
+       if not self.is_active(instance.data):
+           return
        project_name = instance.context.data["projectName"]
        asset_id = instance.data["assetEntity"]["_id"]
        subset = get_subset_by_name(
@@ -302,8 +302,9 @@ private:
    std::string websocket_url;
    // Should the avalon plugin be available?
    // - this may change during processing if the websocket url is not set or the server is down
-   bool use_avalon;
+   bool server_available;
public:
-   Communicator();
+   Communicator(std::string url);
    websocket_endpoint endpoint;
    bool is_connected();
@ -314,43 +315,45 @@ public:
|
|||
void call_notification(std::string method_name, nlohmann::json params);
|
||||
};
|
||||
|
||||
Communicator::Communicator() {
|
||||
|
||||
Communicator::Communicator(std::string url) {
|
||||
// URL to websocket server
|
||||
websocket_url = std::getenv("WEBSOCKET_URL");
|
||||
websocket_url = url;
|
||||
// Should be avalon plugin available?
|
||||
// - this may change during processing if websocketet url is not set or server is down
|
||||
if (websocket_url == "") {
|
||||
use_avalon = false;
|
||||
if (url == "") {
|
||||
server_available = false;
|
||||
} else {
|
||||
use_avalon = true;
|
||||
server_available = true;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
bool Communicator::is_connected(){
|
||||
return endpoint.connected();
|
||||
}
|
||||
|
||||
bool Communicator::is_usable(){
|
||||
return use_avalon;
|
||||
return server_available;
|
||||
}
|
||||
|
||||
void Communicator::connect()
|
||||
{
|
||||
if (!use_avalon) {
|
||||
if (!server_available) {
|
||||
return;
|
||||
}
|
||||
int con_result;
|
||||
con_result = endpoint.connect(websocket_url);
|
||||
if (con_result == -1)
|
||||
{
|
||||
use_avalon = false;
|
||||
server_available = false;
|
||||
} else {
|
||||
use_avalon = true;
|
||||
server_available = true;
|
||||
}
|
||||
}
|
||||
|
||||
void Communicator::call_notification(std::string method_name, nlohmann::json params) {
|
||||
if (!use_avalon || !is_connected()) {return;}
|
||||
if (!server_available || !is_connected()) {return;}
|
||||
|
||||
jsonrpcpp::Notification notification = {method_name, params};
|
||||
endpoint.send_notification(¬ification);
|
||||
|
|
@ -358,7 +361,7 @@ void Communicator::call_notification(std::string method_name, nlohmann::json par
|
|||
|
||||
jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann::json params) {
|
||||
jsonrpcpp::Response response;
|
||||
if (!use_avalon || !is_connected())
|
||||
if (!server_available || !is_connected())
|
||||
{
|
||||
return response;
|
||||
}
|
||||
|
|
@ -382,7 +385,7 @@ jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann:
|
|||
}
|
||||
|
||||
void Communicator::process_requests() {
|
||||
if (!use_avalon || !is_connected() || Data.messages.empty()) {return;}
|
||||
if (!server_available || !is_connected() || Data.messages.empty()) {return;}
|
||||
|
||||
std::string msg = Data.messages.front();
|
||||
Data.messages.pop();
|
||||
|
|
@ -458,7 +461,7 @@ void register_callbacks(){
|
|||
parser.register_request_callback("execute_george", execute_george);
|
||||
}
|
||||
|
||||
Communicator communication;
|
||||
Communicator* communication = nullptr;
|
||||
|
||||
////////////////////////////////////////////////////////////////////////////////////////
|
||||
|
||||
|
|
@ -484,7 +487,7 @@ static char* GetLocalString( PIFilter* iFilter, int iNum, char* iDefault )
|
|||
// in the localized file (or the localized file doesn't exist).
|
||||
std::string label_from_evn()
|
||||
{
|
||||
std::string _plugin_label = "Avalon";
|
||||
std::string _plugin_label = "OpenPype";
|
||||
if (std::getenv("AVALON_LABEL") && std::getenv("AVALON_LABEL") != "")
|
||||
{
|
||||
_plugin_label = std::getenv("AVALON_LABEL");
|
||||
|
|
@ -540,9 +543,12 @@ int FAR PASCAL PI_Open( PIFilter* iFilter )
|
|||
{
|
||||
PI_Parameters( iFilter, NULL ); // NULL as iArg means "open the requester"
|
||||
}
|
||||
|
||||
communication.connect();
|
||||
register_callbacks();
|
||||
char *env_value = std::getenv("WEBSOCKET_URL");
|
||||
if (env_value != NULL) {
|
||||
communication = new Communicator(env_value);
|
||||
communication->connect();
|
||||
register_callbacks();
|
||||
}
|
||||
return 1; // OK
|
||||
}
|
||||
|
||||
|
|
@ -560,7 +566,10 @@ void FAR PASCAL PI_Close( PIFilter* iFilter )
|
|||
{
|
||||
TVCloseReq( iFilter, Data.mReq );
|
||||
}
|
||||
communication.endpoint.close_connection();
|
||||
if (communication != nullptr) {
|
||||
communication->endpoint.close_connection();
|
||||
delete communication;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
|
|
@ -709,7 +718,7 @@ int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iA
|
|||
if (Data.menuItemsById.contains(button_up_item_id_str))
|
||||
{
|
||||
std::string callback_name = Data.menuItemsById[button_up_item_id_str].get<std::string>();
|
||||
communication.call_method(callback_name, nlohmann::json::array());
|
||||
communication->call_method(callback_name, nlohmann::json::array());
|
||||
}
|
||||
TVExecute( iFilter );
|
||||
break;
|
||||
|
|
@ -737,7 +746,9 @@ int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iA
|
|||
{
|
||||
newMenuItemsProcess(iFilter);
|
||||
}
|
||||
communication.process_requests();
|
||||
if (communication != nullptr) {
|
||||
communication->process_requests();
|
||||
}
|
||||
}
|
||||
|
||||
return 1;
|
||||
|
|
|
|||
Binary file not shown.
Binary file not shown.
@@ -18,6 +18,7 @@ from .pipeline import (
    show_tools_popup,
    instantiate,
    UnrealHost,
    maintained_selection
)

__all__ = [

@@ -36,4 +37,5 @@ __all__ = [
    "show_tools_popup",
    "instantiate",
    "UnrealHost",
    "maintained_selection"
]
@@ -2,6 +2,7 @@
import os
import logging
from typing import List
from contextlib import contextmanager
import semver

import pyblish.api

@@ -447,3 +448,16 @@ def get_subsequences(sequence: unreal.LevelSequence):
    if subscene_track is not None and subscene_track.get_sections():
        return subscene_track.get_sections()
    return []


@contextmanager
def maintained_selection():
    """Stub to be either implemented or replaced.

    This is needed for old publisher implementation, but
    it is not supported (yet) in UE.
    """
    try:
        yield
    finally:
        pass
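The `maintained_selection` stub added to `pipeline.py` above is the standard no-op context-manager pattern: a `try`/`finally` around a bare `yield`, so callers can always write `with maintained_selection():` regardless of whether the host actually saves and restores a selection. A minimal standalone sketch, no Unreal required:

```python
from contextlib import contextmanager


@contextmanager
def maintained_selection():
    """No-op placeholder: saves and restores nothing, but keeps the
    host API uniform so callers can always use the `with` form."""
    try:
        yield
    finally:
        # A real implementation would restore the saved selection here.
        pass


# Caller code stays identical whether or not the host implements it:
with maintained_selection():
    result = "publish step runs here"
```

Because the stub still runs its `finally` block, a future implementation only has to fill in the save/restore steps without changing any call site.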
61
openpype/hosts/unreal/plugins/create/create_uasset.py
Normal file
@@ -0,0 +1,61 @@
"""Create UAsset."""
from pathlib import Path

import unreal

from openpype.hosts.unreal.api import pipeline
from openpype.pipeline import LegacyCreator


class CreateUAsset(LegacyCreator):
    """UAsset."""

    name = "UAsset"
    label = "UAsset"
    family = "uasset"
    icon = "cube"

    root = "/Game/OpenPype"
    suffix = "_INS"

    def __init__(self, *args, **kwargs):
        super(CreateUAsset, self).__init__(*args, **kwargs)

    def process(self):
        ar = unreal.AssetRegistryHelpers.get_asset_registry()

        subset = self.data["subset"]
        path = f"{self.root}/PublishInstances/"

        unreal.EditorAssetLibrary.make_directory(path)

        selection = []
        if (self.options or {}).get("useSelection"):
            sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
            selection = [a.get_path_name() for a in sel_objects]

        if len(selection) != 1:
            raise RuntimeError("Please select only one object.")

        obj = selection[0]

        asset = ar.get_asset_by_object_path(obj).get_asset()
        sys_path = unreal.SystemLibrary.get_system_path(asset)

        if not sys_path:
            raise RuntimeError(
                f"{Path(obj).name} is not on the disk. Likely it needs to"
                "be saved first.")

        if Path(sys_path).suffix != ".uasset":
            raise RuntimeError(f"{Path(sys_path).name} is not a UAsset.")

        unreal.log("selection: {}".format(selection))
        container_name = f"{subset}{self.suffix}"
        pipeline.create_publish_instance(
            instance=container_name, path=path)

        data = self.data.copy()
        data["members"] = selection

        pipeline.imprint(f"{path}/{container_name}", data)
145
openpype/hosts/unreal/plugins/load/load_uasset.py
Normal file
@@ -0,0 +1,145 @@
# -*- coding: utf-8 -*-
"""Load UAsset."""
from pathlib import Path
import shutil

from openpype.pipeline import (
    get_representation_path,
    AVALON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
import unreal  # noqa


class UAssetLoader(plugin.Loader):
    """Load UAsset."""

    families = ["uasset"]
    label = "Load UAsset"
    representations = ["uasset"]
    icon = "cube"
    color = "orange"

    def load(self, context, name, namespace, options):
        """Load and containerise representation into Content Browser.

        Args:
            context (dict): application context
            name (str): subset name
            namespace (str): in Unreal this is basically path to container.
                This is not passed here, so namespace is set
                by `containerise()` because only then we know
                real path.
            options (dict): Those would be data to be imprinted. This is not
                used now, data are imprinted by `containerise()`.

        Returns:
            list(str): list of container content
        """

        # Create directory for asset and OpenPype container
        root = "/Game/OpenPype/Assets"
        asset = context.get('asset').get('name')
        suffix = "_CON"
        if asset:
            asset_name = "{}_{}".format(asset, name)
        else:
            asset_name = "{}".format(name)

        tools = unreal.AssetToolsHelpers().get_asset_tools()
        asset_dir, container_name = tools.create_unique_asset_name(
            "{}/{}/{}".format(root, asset, name), suffix="")

        container_name += suffix

        unreal.EditorAssetLibrary.make_directory(asset_dir)

        destination_path = asset_dir.replace(
            "/Game",
            Path(unreal.Paths.project_content_dir()).as_posix(),
            1)

        shutil.copy(self.fname, f"{destination_path}/{name}.uasset")

        # Create Asset Container
        unreal_pipeline.create_container(
            container=container_name, path=asset_dir)

        data = {
            "schema": "openpype:container-2.0",
            "id": AVALON_CONTAINER_ID,
            "asset": asset,
            "namespace": asset_dir,
            "container_name": container_name,
            "asset_name": asset_name,
            "loader": str(self.__class__.__name__),
            "representation": context["representation"]["_id"],
            "parent": context["representation"]["parent"],
            "family": context["representation"]["context"]["family"]
        }
        unreal_pipeline.imprint(
            "{}/{}".format(asset_dir, container_name), data)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
        )

        for a in asset_content:
            unreal.EditorAssetLibrary.save_asset(a)

        return asset_content

    def update(self, container, representation):
        ar = unreal.AssetRegistryHelpers.get_asset_registry()

        asset_dir = container["namespace"]
        name = representation["context"]["subset"]

        destination_path = asset_dir.replace(
            "/Game",
            Path(unreal.Paths.project_content_dir()).as_posix(),
            1)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=False, include_folder=True
        )

        for asset in asset_content:
            obj = ar.get_asset_by_object_path(asset).get_asset()
            if not obj.get_class().get_name() == 'AssetContainer':
                unreal.EditorAssetLibrary.delete_asset(asset)

        update_filepath = get_representation_path(representation)

        shutil.copy(update_filepath, f"{destination_path}/{name}.uasset")

        container_path = "{}/{}".format(container["namespace"],
                                        container["objectName"])
        # update metadata
        unreal_pipeline.imprint(
            container_path,
            {
                "representation": str(representation["_id"]),
                "parent": str(representation["parent"])
            })

        asset_content = unreal.EditorAssetLibrary.list_assets(
            asset_dir, recursive=True, include_folder=True
        )

        for a in asset_content:
            unreal.EditorAssetLibrary.save_asset(a)

    def remove(self, container):
        path = container["namespace"]
        parent_path = Path(path).parent.as_posix()

        unreal.EditorAssetLibrary.delete_directory(path)

        asset_content = unreal.EditorAssetLibrary.list_assets(
            parent_path, recursive=False
        )

        if len(asset_content) == 0:
            unreal.EditorAssetLibrary.delete_directory(parent_path)
@@ -25,9 +25,13 @@ class CollectInstances(pyblish.api.ContextPlugin):
    def process(self, context):

        ar = unreal.AssetRegistryHelpers.get_asset_registry()
        class_name = ["/Script/OpenPype",
                      "AssetContainer"] if UNREAL_VERSION.major == 5 and \
            UNREAL_VERSION.minor > 0 else "OpenPypePublishInstance"  # noqa
        class_name = [
            "/Script/OpenPype",
            "OpenPypePublishInstance"
        ] if (
            UNREAL_VERSION.major == 5
            and UNREAL_VERSION.minor > 0
        ) else "OpenPypePublishInstance"  # noqa
        instance_containers = ar.get_assets_by_class(class_name, True)

        for container_data in instance_containers:
42
openpype/hosts/unreal/plugins/publish/extract_uasset.py
Normal file
@@ -0,0 +1,42 @@
from pathlib import Path
import shutil

import unreal

from openpype.pipeline import publish


class ExtractUAsset(publish.Extractor):
    """Extract a UAsset."""

    label = "Extract UAsset"
    hosts = ["unreal"]
    families = ["uasset"]
    optional = True

    def process(self, instance):
        ar = unreal.AssetRegistryHelpers.get_asset_registry()

        self.log.info("Performing extraction..")

        staging_dir = self.staging_dir(instance)
        filename = "{}.uasset".format(instance.name)

        obj = instance[0]

        asset = ar.get_asset_by_object_path(obj).get_asset()
        sys_path = unreal.SystemLibrary.get_system_path(asset)
        filename = Path(sys_path).name

        shutil.copy(sys_path, staging_dir)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'uasset',
            'ext': 'uasset',
            'files': filename,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)
@@ -0,0 +1,41 @@
import unreal

import pyblish.api


class ValidateNoDependencies(pyblish.api.InstancePlugin):
    """Ensure that the uasset has no dependencies

    The uasset is checked for dependencies. If there are any, the instance
    cannot be published.
    """

    order = pyblish.api.ValidatorOrder
    label = "Check no dependencies"
    families = ["uasset"]
    hosts = ["unreal"]
    optional = True

    def process(self, instance):
        ar = unreal.AssetRegistryHelpers.get_asset_registry()
        all_dependencies = []

        for obj in instance[:]:
            asset = ar.get_asset_by_object_path(obj)
            dependencies = ar.get_dependencies(
                asset.package_name,
                unreal.AssetRegistryDependencyOptions(
                    include_soft_package_references=False,
                    include_hard_package_references=True,
                    include_searchable_names=False,
                    include_soft_management_references=False,
                    include_hard_management_references=False
                ))
            if dependencies:
                for dep in dependencies:
                    if str(dep).startswith("/Game/"):
                        all_dependencies.append(str(dep))

        if all_dependencies:
            raise RuntimeError(
                f"Dependencies found: {all_dependencies}")
@@ -3,6 +3,7 @@ import re
import collections
import uuid
import json
import copy
from abc import ABCMeta, abstractmethod, abstractproperty

import six

@@ -418,9 +419,8 @@ class EnumDef(AbtractAttrDef):
    """Enumeration of single item from items.

    Args:
        items: Items definition that can be coverted to
            `collections.OrderedDict`. Dictionary represent {value: label}
            relation.
        items: Items definition that can be coverted using
            'prepare_enum_items'.
        default: Default value. Must be one key(value) from passed items.
    """

@@ -433,38 +433,95 @@ class EnumDef(AbtractAttrDef):
                " defined values on initialization."
            ).format(self.__class__.__name__))

        items = collections.OrderedDict(items)
        if default not in items:
            for _key in items.keys():
                default = _key
        items = self.prepare_enum_items(items)
        item_values = [item["value"] for item in items]
        if default not in item_values:
            for value in item_values:
                default = value
                break

        super(EnumDef, self).__init__(key, default=default, **kwargs)

        self.items = items
        self._item_values = set(item_values)

    def __eq__(self, other):
        if not super(EnumDef, self).__eq__(other):
            return False

        if set(self.items.keys()) != set(other.items.keys()):
            return False

        for key, label in self.items.items():
            if other.items[key] != label:
                return False
        return True
        return self.items == other.items

    def convert_value(self, value):
        if value in self.items:
        if value in self._item_values:
            return value
        return self.default

    def serialize(self):
        data = super(TextDef, self).serialize()
        data["items"] = list(self.items)
        data["items"] = copy.deepcopy(self.items)
        return data

    @staticmethod
    def prepare_enum_items(items):
        """Convert items to unified structure.

        Output is a list where each item is dictionary with 'value'
        and 'label'.

        ```python
        # Example output
        [
            {"label": "Option 1", "value": 1},
            {"label": "Option 2", "value": 2},
            {"label": "Option 3", "value": 3}
        ]
        ```

        Args:
            items (Union[Dict[str, Any], List[Any], List[Dict[str, Any]]): The
                items to convert.

        Returns:
            List[Dict[str, Any]]: Unified structure of items.
        """

        output = []
        if isinstance(items, dict):
            for value, label in items.items():
                output.append({"label": label, "value": value})

        elif isinstance(items, (tuple, list, set)):
            for item in items:
                if isinstance(item, dict):
                    # Validate if 'value' is available
                    if "value" not in item:
                        raise KeyError("Item does not contain 'value' key.")

                    if "label" not in item:
                        item["label"] = str(item["value"])
                elif isinstance(item, (list, tuple)):
                    if len(item) == 2:
                        value, label = item
                    elif len(item) == 1:
                        value = item[0]
                        label = str(value)
                    else:
                        raise ValueError((
                            "Invalid items count {}."
                            " Expected 1 or 2. Value: {}"
                        ).format(len(item), str(item)))

                    item = {"label": label, "value": value}
                else:
                    item = {"label": str(item), "value": item}
                output.append(item)

        else:
            raise TypeError(
                "Unknown type for enum items '{}'".format(type(items))
            )

        return output


class BoolDef(AbtractAttrDef):
    """Boolean representation.
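The new `prepare_enum_items` above normalizes three accepted input shapes (a `{value: label}` dict, a list of pairs, a list of scalars or dicts) into one list of `{"label": ..., "value": ...}` dicts. A standalone sketch of that normalization, trimmed of the class context:

```python
def prepare_enum_items(items):
    """Normalize enum items to [{"label": ..., "value": ...}, ...]."""
    output = []
    if isinstance(items, dict):
        # {value: label} mapping
        for value, label in items.items():
            output.append({"label": label, "value": value})
    elif isinstance(items, (tuple, list, set)):
        for item in items:
            if isinstance(item, dict):
                # Already unified; only 'value' is mandatory
                if "value" not in item:
                    raise KeyError("Item does not contain 'value' key.")
                if "label" not in item:
                    item["label"] = str(item["value"])
            elif isinstance(item, (list, tuple)):
                # (value, label) pair or single-element (value,)
                if len(item) == 2:
                    value, label = item
                elif len(item) == 1:
                    value = item[0]
                    label = str(value)
                else:
                    raise ValueError("Expected 1 or 2 values per item.")
                item = {"label": label, "value": value}
            else:
                # Plain scalar: label is its string form
                item = {"label": str(item), "value": item}
            output.append(item)
    else:
        raise TypeError("Unknown type for enum items.")
    return output


# All three shapes produce the same unified structure:
as_dict = prepare_enum_items({1: "One"})
as_pairs = prepare_enum_items([(1, "One")])
as_scalar = prepare_enum_items(["a"])
```

Unifying on entry is what lets the rest of `EnumDef` keep a simple `set` of values for membership checks instead of special-casing each input shape.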
@@ -92,7 +92,9 @@ class FileTransaction(object):
    def process(self):
        # Backup any existing files
        for dst, (src, _) in self._transfers.items():
            if dst == src or not os.path.exists(dst):
            self.log.debug("Checking file ... {} -> {}".format(src, dst))
            path_same = self._same_paths(src, dst)
            if path_same or not os.path.exists(dst):
                continue

            # Backup original file

@@ -105,7 +107,8 @@ class FileTransaction(object):

        # Copy the files to transfer
        for dst, (src, opts) in self._transfers.items():
            if dst == src:
            path_same = self._same_paths(src, dst)
            if path_same:
                self.log.debug(
                    "Source and destionation are same files {} -> {}".format(
                        src, dst))

@@ -182,3 +185,10 @@ class FileTransaction(object):
        else:
            self.log.critical("An unexpected error occurred.")
            six.reraise(*sys.exc_info())

    def _same_paths(self, src, dst):
        # handles same paths but with C:/project vs c:/project
        if os.path.exists(src) and os.path.exists(dst):
            return os.path.samefile(src, dst)

        return src == dst
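The new `_same_paths` helper compares paths through the filesystem when both exist (so `C:/project` and `c:/project` match on a case-insensitive filesystem, and symlinked duplicates are caught too) and falls back to plain string equality otherwise. A standalone sketch:

```python
import os


def same_paths(src, dst):
    # When both paths exist, os.path.samefile resolves them through the
    # filesystem (by device and inode), so case differences and symlinks
    # that point at the same file compare equal.
    if os.path.exists(src) and os.path.exists(dst):
        return os.path.samefile(src, dst)
    # Otherwise fall back to a literal string comparison.
    return src == dst
```

The fallback matters because the transaction may compare destination paths that do not exist yet; for those, only the literal spelling is available.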
@@ -400,6 +400,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):

    label = "Submit to Deadline"
    order = pyblish.api.IntegratorOrder + 0.1
    import_reference = False
    use_published = True
    asset_dependencies = False

@@ -424,7 +425,11 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):

        file_path = None
        if self.use_published:
            file_path = self.from_published_scene()
            if not self.import_reference:
                file_path = self.from_published_scene()
            else:
                self.log.info("use the scene with imported reference for rendering")  # noqa
                file_path = context.data["currentFile"]

        # fallback if nothing was set
        if not file_path:

@@ -516,7 +521,6 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
        published.

        """

        instance = self._instance
        workfile_instance = self._get_workfile_instance(instance.context)
        if workfile_instance is None:

@@ -524,7 +528,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):

        # determine published path from Anatomy.
        template_data = workfile_instance.data.get("anatomyData")
        rep = workfile_instance.data.get("representations")[0]
        rep = workfile_instance.data["representations"][0]
        template_data["representation"] = rep.get("name")
        template_data["ext"] = rep.get("ext")
        template_data["comment"] = None
@@ -973,7 +973,7 @@ class SyncToAvalonEvent(BaseEvent):
        except Exception:
            # TODO logging
            # TODO report
            self.process_session.rolback()
            self.process_session.rollback()
            ent_path_items = [self.cur_project["full_name"]]
            ent_path_items.extend([
                par for par in avalon_entity["data"]["parents"]

@@ -1016,7 +1016,7 @@ class SyncToAvalonEvent(BaseEvent):
        except Exception:
            # TODO logging
            # TODO report
            self.process_session.rolback()
            self.process_session.rollback()
            error_msg = (
                "Couldn't update custom attributes after recreation"
                " of entity in Ftrack"

@@ -1338,7 +1338,7 @@ class SyncToAvalonEvent(BaseEvent):
        try:
            self.process_session.commit()
        except Exception:
            self.process_session.rolback()
            self.process_session.rollback()
            # TODO logging
            # TODO report
            error_msg = (
@@ -129,8 +129,8 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
            if not note_text.solved:
                self.log.warning((
                    "Note template require more keys then can be provided."
                    "\nTemplate: {}\nData: {}"
                ).format(template, format_data))
                    "\nTemplate: {}\nMissing values for keys:{}\nData: {}"
                ).format(template, note_text.missing_keys, format_data))
                continue

            if not note_text:
@@ -328,10 +328,6 @@ class GDriveHandler(AbstractProvider):
        last_tick = status = response = None
        status_val = 0
        while response is None:
            if server.is_representation_paused(representation['_id'],
                                               check_parents=True,
                                               project_name=project_name):
                raise ValueError("Paused during process, please redo.")
            if status:
                status_val = float(status.progress())
            if not last_tick or \

@@ -346,6 +342,13 @@ class GDriveHandler(AbstractProvider):
                    site=site,
                    progress=status_val
                )
            if server.is_representation_paused(
                project_name,
                representation['_id'],
                site,
                check_parents=True
            ):
                raise ValueError("Paused during process, please redo.")
            status, response = request.next_chunk()

        except errors.HttpError as ex:

@@ -415,10 +418,6 @@ class GDriveHandler(AbstractProvider):
        last_tick = status = response = None
        status_val = 0
        while response is None:
            if server.is_representation_paused(representation['_id'],
                                               check_parents=True,
                                               project_name=project_name):
                raise ValueError("Paused during process, please redo.")
            if status:
                status_val = float(status.progress())
            if not last_tick or \

@@ -433,6 +432,13 @@ class GDriveHandler(AbstractProvider):
                    site=site,
                    progress=status_val
                )
            if server.is_representation_paused(
                project_name,
                representation['_id'],
                site,
                check_parents=True
            ):
                raise ValueError("Paused during process, please redo.")
            status, response = downloader.next_chunk()

        return target_name
@@ -284,9 +284,6 @@ class SyncServerThread(threading.Thread):
                # building folder tree structure in memory
                # call only if needed, eg. DO_UPLOAD or DO_DOWNLOAD
                for sync in sync_repres:
                    if self.module.\
                            is_representation_paused(sync['_id']):
                        continue
                    if limit <= 0:
                        continue
                    files = sync.get("files") or []
@@ -124,7 +124,6 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
        self.action_show_widget = None
        self._paused = False
        self._paused_projects = set()
        self._paused_representations = set()
        self._anatomies = {}

        self._connection = None

@@ -475,7 +474,6 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
            site_name (string): 'gdrive', 'studio' etc.
        """
        self.log.info("Pausing SyncServer for {}".format(representation_id))
        self._paused_representations.add(representation_id)
        self.reset_site_on_representation(project_name, representation_id,
                                          site_name=site_name, pause=True)

@@ -492,35 +490,47 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
            site_name (string): 'gdrive', 'studio' etc.
        """
        self.log.info("Unpausing SyncServer for {}".format(representation_id))
        try:
            self._paused_representations.remove(representation_id)
        except KeyError:
            pass
        # self.paused_representations is not persistent
        self.reset_site_on_representation(project_name, representation_id,
                                          site_name=site_name, pause=False)

    def is_representation_paused(self, representation_id,
                                 check_parents=False, project_name=None):
    def is_representation_paused(self, project_name, representation_id,
                                 site_name, check_parents=False):
        """
        Returns if 'representation_id' is paused or not.
        Returns if 'representation_id' is paused or not for site.

        Args:
            representation_id (string): MongoDB objectId value
            project_name (str): project to check if paused
            representation_id (str): MongoDB objectId value
            site (str): site to check representation is paused for
            check_parents (bool): check if parent project or server itself
                are not paused
            project_name (string): project to check if paused

            if 'check_parents', 'project_name' should be set too
        Returns:
            (bool)
        """
        condition = representation_id in self._paused_representations
        if check_parents and project_name:
            condition = condition or \
                self.is_project_paused(project_name) or \
                self.is_paused()
        return condition
        # Check parents are paused
        if check_parents and (
            self.is_project_paused(project_name)
            or self.is_paused()
        ):
            return True

        # Get representation
        representation = get_representation_by_id(project_name,
                                                  representation_id,
                                                  fields=["files.sites"])
        if not representation:
            return False

        # Check if representation is paused
        for file_info in representation.get("files", []):
            for site in file_info.get("sites", []):
                if site["name"] != site_name:
                    continue

                return site.get("paused", False)

        return False

    def pause_project(self, project_name):
        """

@@ -1484,7 +1494,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
                    "$elemMatch": {
                        "name": {"$in": [remote_site]},
                        "created_dt": {"$exists": False},
                        "tries": {"$in": retries_arr}
                        "tries": {"$in": retries_arr},
                        "paused": {"$exists": False}
                    }
                }
            }]},

@@ -1494,7 +1505,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
                    "$elemMatch": {
                        "name": active_site,
                        "created_dt": {"$exists": False},
                        "tries": {"$in": retries_arr}
                        "tries": {"$in": retries_arr},
                        "paused": {"$exists": False}
                    }
                }}, {
                "files.sites": {
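The reworked `is_representation_paused` above stops relying on the non-persistent in-memory set and reads the pause state from the representation document itself: each entry in `files` carries a list of sites, and a site may carry an optional `paused` flag. The per-site lookup can be sketched on a plain dict (the document shape is assumed from the diff):

```python
def is_site_paused(representation, site_name):
    # Walk files -> sites and return the optional "paused" flag of the
    # first site whose name matches; a missing flag means "not paused".
    for file_info in representation.get("files", []):
        for site in file_info.get("sites", []):
            if site["name"] != site_name:
                continue
            return site.get("paused", False)
    return False


representation = {
    "files": [
        {"sites": [{"name": "gdrive", "paused": True}, {"name": "studio"}]}
    ]
}
```

Storing the flag on the site document is what makes the pause survive restarts, which the removed `self._paused_representations` set could not.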
@@ -1112,17 +1112,21 @@ class RootItem(FormatObject):
        result = False
        output = str(path)

        root_paths = list(self.cleaned_data.values())
        mod_path = self.clean_path(path)
        for root_path in root_paths:
        for root_os, root_path in self.cleaned_data.items():
            # Skip empty paths
            if not root_path:
                continue

            if mod_path.startswith(root_path):
            _mod_path = mod_path  # reset to original cleaned value
            if root_os == "windows":
                root_path = root_path.lower()
                _mod_path = _mod_path.lower()

            if _mod_path.startswith(root_path):
                result = True
                replacement = "{" + self.full_key() + "}"
                output = replacement + mod_path[len(root_path):]
                output = replacement + _mod_path[len(root_path):]
                break

        return (result, output)
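The `RootItem` change above makes root matching case-insensitive for the `windows` root only, by lowering both the root and a working copy of the path before the prefix test. The core of that logic, sketched standalone (the root keys and the `{root}` placeholder are illustrative, not the real template key):

```python
def replace_root(path, roots):
    """Try to swap a known root prefix for a placeholder.

    `roots` maps platform name to root path, e.g.
    {"windows": "C:/projects", "linux": "/mnt/projects"}.
    """
    for root_os, root_path in roots.items():
        # Skip empty paths
        if not root_path:
            continue

        candidate = path  # reset to the original value for each root
        if root_os == "windows":
            # Windows filesystems compare paths case-insensitively
            root_path = root_path.lower()
            candidate = candidate.lower()

        if candidate.startswith(root_path):
            return True, "{root}" + candidate[len(root_path):]
    return False, path


matched, out = replace_root("c:/Projects/shots/sh010",
                            {"windows": "C:/projects"})
```

Resetting the working copy per iteration matters: lowering the original `mod_path` in place would leak the lowercased form into comparisons against non-Windows roots.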
@@ -6,6 +6,7 @@ from .constants import (

from .subset_name import (
    TaskNotSetError,
    get_subset_name_template,
    get_subset_name,
)

@@ -46,6 +47,7 @@ __all__ = (
    "PRE_CREATE_THUMBNAIL_KEY",

    "TaskNotSetError",
    "get_subset_name_template",
    "get_subset_name",

    "CreatorError",
@@ -20,6 +20,7 @@ class LegacyCreator(object):
    family = None
    defaults = None
    maintain_selection = True
    enabled = True

    dynamic_subset_keys = []

@@ -76,11 +77,10 @@ class LegacyCreator(object):
            print(">>> We have preset for {}".format(plugin_name))
            for option, value in plugin_settings.items():
                if option == "enabled" and value is False:
                    setattr(cls, "active", False)
                    print(" - is disabled by preset")
                else:
                    setattr(cls, option, value)
                    print(" - setting `{}`: `{}`".format(option, value))
                setattr(cls, option, value)

    def process(self):
        pass
@@ -14,6 +14,53 @@ class TaskNotSetError(KeyError):
         super(TaskNotSetError, self).__init__(msg)


+def get_subset_name_template(
+    project_name,
+    family,
+    task_name,
+    task_type,
+    host_name,
+    default_template=None,
+    project_settings=None
+):
+    """Get subset name template based on passed context.
+
+    Args:
+        project_name (str): Project on which the context lives.
+        family (str): Family (subset type) for which the subset name is
+            calculated.
+        host_name (str): Name of host in which the subset name is calculated.
+        task_name (str): Name of task in which context the subset is created.
+        task_type (str): Type of task in which context the subset is created.
+        default_template (Union[str, None]): Default template which is used if
+            settings won't find any matching possibility. Constant
+            'DEFAULT_SUBSET_TEMPLATE' is used if not defined.
+        project_settings (Union[Dict[str, Any], None]): Prepared settings for
+            project. Settings are queried if not passed.
+    """
+
+    if project_settings is None:
+        project_settings = get_project_settings(project_name)
+    tools_settings = project_settings["global"]["tools"]
+    profiles = tools_settings["creator"]["subset_name_profiles"]
+    filtering_criteria = {
+        "families": family,
+        "hosts": host_name,
+        "tasks": task_name,
+        "task_types": task_type
+    }
+
+    matching_profile = filter_profiles(profiles, filtering_criteria)
+    template = None
+    if matching_profile:
+        template = matching_profile["template"]
+
+    # Make sure template is set (matching may have empty string)
+    if not template:
+        template = default_template or DEFAULT_SUBSET_TEMPLATE
+    return template
+
+
 def get_subset_name(
     family,
     variant,

@@ -37,9 +84,9 @@ def get_subset_name(

     Args:
         family (str): Instance family.
-        variant (str): In most of cases it is user input during creation.
+        variant (str): In most of the cases it is user input during creation.
         task_name (str): Task name on which context is instance created.
-        asset_doc (dict): Queried asset document with it's tasks in data.
+        asset_doc (dict): Queried asset document with its tasks in data.
             Used to get task type.
         project_name (str): Name of project on which is instance created.
             Important for project settings that are loaded.

@@ -50,15 +97,15 @@ def get_subset_name(
             is not passed.
         dynamic_data (dict): Dynamic data specific for a creator which creates
             instance.
         dbcon (AvalonMongoDB): Mongo connection to be able query asset document
             if 'asset_doc' is not passed.
         project_settings (Union[Dict[str, Any], None]): Prepared settings for
             project. Settings are queried if not passed.
     """

     if not family:
         return ""

     if not host_name:
-        host_name = os.environ["AVALON_APP"]
+        host_name = os.environ.get("AVALON_APP")

     # Use only last part of class family value split by dot (`.`)
     family = family.rsplit(".", 1)[-1]

@@ -70,27 +117,15 @@ def get_subset_name(
     task_info = asset_tasks.get(task_name) or {}
     task_type = task_info.get("type")

-    # Get settings
-    if not project_settings:
-        project_settings = get_project_settings(project_name)
-    tools_settings = project_settings["global"]["tools"]
-    profiles = tools_settings["creator"]["subset_name_profiles"]
-    filtering_criteria = {
-        "families": family,
-        "hosts": host_name,
-        "tasks": task_name,
-        "task_types": task_type
-    }
-
-    matching_profile = filter_profiles(profiles, filtering_criteria)
-    template = None
-    if matching_profile:
-        template = matching_profile["template"]
-
-    # Make sure template is set (matching may have empty string)
-    if not template:
-        template = default_template or DEFAULT_SUBSET_TEMPLATE
-
+    template = get_subset_name_template(
+        project_name,
+        family,
+        task_name,
+        task_type,
+        host_name,
+        default_template=default_template,
+        project_settings=project_settings
+    )
     # Simple check of task name existence for template with {task} in
     # - missing task should be possible only in Standalone publisher
     if not task_name and "{task" in template.lower():
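The extracted `get_subset_name_template` above filters settings profiles by context and falls back to a default template. A hedged, self-contained sketch of that lookup flow; `filter_profiles` here is a simplified stand-in (the real `openpype.lib` implementation also supports wildcards and match scoring):

```python
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"


def filter_profiles(profiles, criteria):
    # Simplified stand-in: a profile matches when every criteria value is
    # either unconstrained (empty list) or explicitly listed.
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in criteria.items()
        ):
            return profile
    return None


def get_template(profiles, family, host_name, task_name, task_type,
                 default_template=None):
    matching_profile = filter_profiles(profiles, {
        "families": family,
        "hosts": host_name,
        "tasks": task_name,
        "task_types": task_type,
    })
    template = matching_profile["template"] if matching_profile else None
    # A matching profile may carry an empty template string
    return template or default_template or DEFAULT_SUBSET_TEMPLATE
```

A "render in maya" context would pick a render-specific profile, while an unmatched family falls back to `DEFAULT_SUBSET_TEMPLATE`.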
@@ -1,6 +1,7 @@
 from .utils import (
     HeroVersionType,
+    LoadError,
     IncompatibleLoaderError,
     InvalidRepresentationContext,
@@ -30,6 +30,7 @@ class LoaderPlugin(list):
     representations = list()
     order = 0
     is_multiple_contexts_compatible = False
+    enabled = True

     options = []

@@ -73,11 +74,10 @@ class LoaderPlugin(list):
         print(">>> We have preset for {}".format(plugin_name))
         for option, value in plugin_settings.items():
             if option == "enabled" and value is False:
                 setattr(cls, "active", False)
                 print(" - is disabled by preset")
             else:
-                setattr(cls, option, value)
                 print(" - setting `{}`: `{}`".format(option, value))
+            setattr(cls, option, value)

     @classmethod
     def get_representations(cls):
@@ -60,6 +60,16 @@ class HeroVersionType(object):
         return self.version.__format__(format_spec)


+class LoadError(Exception):
+    """Known error that happened during loading.
+
+    A message is shown to user (without traceback). Make sure an artist can
+    understand the problem.
+    """
+
+    pass
+
+
 class IncompatibleLoaderError(ValueError):
     """Error when Loader is incompatible with a representation."""
     pass
@@ -120,6 +120,8 @@ class BuildWorkfile:
         # Prepare available loaders
         loaders_by_name = {}
         for loader in discover_loader_plugins():
+            if not loader.enabled:
+                continue
             loader_name = loader.__name__
             if loader_name in loaders_by_name:
                 raise KeyError(
@@ -239,6 +239,8 @@ class AbstractTemplateBuilder(object):
         if self._creators_by_name is None:
             self._creators_by_name = {}
             for creator in discover_legacy_creator_plugins():
+                if not creator.enabled:
+                    continue
                 creator_name = creator.__name__
                 if creator_name in self._creators_by_name:
                     raise KeyError(

@@ -1147,11 +1149,11 @@ class PlaceholderLoadMixin(object):

         loaders_by_name = self.builder.get_loaders_by_name()
         loader_items = [
-            (loader_name, loader.label or loader_name)
+            {"value": loader_name, "label": loader.label or loader_name}
             for loader_name, loader in loaders_by_name.items()
         ]

-        loader_items = list(sorted(loader_items, key=lambda i: i[1]))
+        loader_items = list(sorted(loader_items, key=lambda i: i["label"]))
         options = options or {}
         return [
             attribute_definitions.UISeparatorDef(),

@@ -1163,9 +1165,9 @@ class PlaceholderLoadMixin(object):
                 label="Asset Builder Type",
                 default=options.get("builder_type"),
                 items=[
-                    ("context_asset", "Current asset"),
-                    ("linked_asset", "Linked assets"),
-                    ("all_assets", "All assets")
+                    {"label": "Current asset", "value": "context_asset"},
+                    {"label": "Linked assets", "value": "linked_asset"},
+                    {"label": "All assets", "value": "all_assets"},
                 ],
                 tooltip=(
                     "Asset Builder Type\n"
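The placeholder hunks above migrate enum items from `(value, label)` tuples to `{"value", "label"}` dicts, with sorting keyed on the label. A plain-Python illustration of the new shape (loader names here are hypothetical, no Qt involved):

```python
# Hypothetical loader names standing in for discovered loader plugins
loaders_by_name = {
    "FBXLoader": "Load FBX",
    "AbcLoader": "Load Alembic",
}
loader_items = [
    {"value": loader_name, "label": label or loader_name}
    for loader_name, label in loaders_by_name.items()
]
# Sorting now keys on the "label" field instead of the tuple index
loader_items = list(sorted(loader_items, key=lambda i: i["label"]))
```

The dict form lets consumers (such as `EnumAttrWidget` further down) read `item["label"]` and `item["value"]` explicitly instead of relying on tuple positions.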
52 openpype/plugins/load/push_to_library.py Normal file
@@ -0,0 +1,52 @@
+import os
+
+from openpype import PACKAGE_DIR
+from openpype.lib import get_openpype_execute_args, run_detached_process
+from openpype.pipeline import load
+from openpype.pipeline.load import LoadError
+
+
+class PushToLibraryProject(load.SubsetLoaderPlugin):
+    """Export selected versions to folder structure from Template"""
+
+    is_multiple_contexts_compatible = True
+
+    representations = ["*"]
+    families = ["*"]
+
+    label = "Push to Library project"
+    order = 35
+    icon = "send"
+    color = "#d8d8d8"
+
+    def load(self, contexts, name=None, namespace=None, options=None):
+        filtered_contexts = [
+            context
+            for context in contexts
+            if context.get("project") and context.get("version")
+        ]
+        if not filtered_contexts:
+            raise LoadError("Nothing to push for your selection")
+
+        if len(filtered_contexts) > 1:
+            raise LoadError("Please select only one item")
+
+        context = tuple(filtered_contexts)[0]
+        push_tool_script_path = os.path.join(
+            PACKAGE_DIR,
+            "tools",
+            "push_to_project",
+            "app.py"
+        )
+        project_doc = context["project"]
+        version_doc = context["version"]
+        project_name = project_doc["name"]
+        version_id = str(version_doc["_id"])
+
+        args = get_openpype_execute_args(
+            "run",
+            push_tool_script_path,
+            "--project", project_name,
+            "--version", version_id
+        )
+        run_detached_process(args)
@@ -4,8 +4,6 @@ import os
 import shutil
 import pyblish.api

-from openpype.pipeline import legacy_io
-

 class CleanUpFarm(pyblish.api.ContextPlugin):
     """Cleans up the staging directory after a successful publish.

@@ -23,8 +21,8 @@ class CleanUpFarm(pyblish.api.ContextPlugin):

     def process(self, context):
         # Get source host from which farm publishing was started
-        src_host_name = legacy_io.Session.get("AVALON_APP")
-        self.log.debug("Host name from session is {}".format(src_host_name))
+        src_host_name = context.data["hostName"]
+        self.log.debug("Host name from context is {}".format(src_host_name))
         # Skip process if is not in list of source hosts in which this
         # plugin should run
         if src_host_name not in self.allowed_hosts:
@@ -32,7 +32,6 @@ from openpype.client import (
     get_subsets,
     get_last_versions
 )
-from openpype.pipeline import legacy_io


 class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):

@@ -49,7 +48,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
     def process(self, context):
         self.log.info("Collecting anatomy data for all instances.")

-        project_name = legacy_io.active_project()
+        project_name = context.data["projectName"]
         self.fill_missing_asset_docs(context, project_name)
         self.fill_instance_data_from_asset(context)
         self.fill_latest_versions(context, project_name)
@@ -73,7 +73,9 @@ class CollectComment(
     """

     label = "Collect Instance Comment"
-    order = pyblish.api.CollectorOrder + 0.49
+    # TODO change to CollectorOrder after Pyblish is purged
+    # Pyblish allows modifying comment after collect phase
+    order = pyblish.api.ExtractorOrder - 0.49

     def process(self, context):
         context_comment = self.cleanup_comment(context.data.get("comment"))
@@ -15,7 +15,11 @@ import pyblish.api


 class CollectResourcesPath(pyblish.api.InstancePlugin):
-    """Generate directory path where the files and resources will be stored"""
+    """Generate directory path where the files and resources will be stored.
+
+    Collects folder name and file name from files, if exists, for in-situ
+    publishing.
+    """

     label = "Collect Resources Path"
     order = pyblish.api.CollectorOrder + 0.495
@@ -1,10 +1,7 @@
 import pyblish.api

 from openpype.client import get_representations
-from openpype.pipeline import (
-    registered_host,
-    legacy_io,
-)
+from openpype.pipeline import registered_host


 class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):

@@ -44,7 +41,7 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
             for container in containers
         }

-        project_name = legacy_io.active_project()
+        project_name = context.data["projectName"]
         repre_docs = get_representations(
             project_name,
             representation_ids=repre_ids,
42 openpype/plugins/publish/collect_source_for_source.py Normal file
@@ -0,0 +1,42 @@
+"""
+Requires:
+    instance -> currentFile
+    instance -> source
+
+Provides:
+    instance -> originalBasename
+    instance -> originalDirname
+"""
+
+import os
+
+import pyblish.api
+
+
+class CollectSourceForSource(pyblish.api.InstancePlugin):
+    """Collects source location of file for instance.
+
+    Used for 'source' template name which handles in place publishing.
+    For this kind of publishing files are present with correct file name
+    pattern and correct location.
+    """
+
+    label = "Collect Source"
+    order = pyblish.api.CollectorOrder + 0.495
+
+    def process(self, instance):
+        # parse folder name and file name for online and source templates
+        # currentFile comes from hosts workfiles
+        # source comes from Publisher
+        current_file = instance.data.get("currentFile")
+        source = instance.data.get("source")
+        source_file = current_file or source
+        if source_file:
+            self.log.debug("Parsing paths for {}".format(source_file))
+            if not instance.data.get("originalBasename"):
+                instance.data["originalBasename"] = \
+                    os.path.basename(source_file)
+
+            if not instance.data.get("originalDirname"):
+                instance.data["originalDirname"] = \
+                    os.path.dirname(source_file)
@@ -350,6 +350,7 @@ class ExtractOTIOReview(publish.Extractor):
         # start command list
         command = [ffmpeg_path]

+        input_extension = None
         if sequence:
             input_dir, collection = sequence
             in_frame_start = min(collection.indexes)

@@ -357,6 +358,7 @@ class ExtractOTIOReview(publish.Extractor):
             # converting image sequence to image sequence
             input_file = collection.format("{head}{padding}{tail}")
             input_path = os.path.join(input_dir, input_file)
+            input_extension = os.path.splitext(input_path)[-1]

             # form command for rendering gap files
             command.extend([

@@ -373,6 +375,7 @@ class ExtractOTIOReview(publish.Extractor):
             sec_duration = frames_to_seconds(
                 frame_duration, input_fps
             )
+            input_extension = os.path.splitext(video_path)[-1]

             # form command for rendering gap files
             command.extend([

@@ -397,9 +400,21 @@ class ExtractOTIOReview(publish.Extractor):

         # add output attributes
         command.extend([
-            "-start_number", str(out_frame_start),
-            output_path
+            "-start_number", str(out_frame_start)
         ])

+        # add copying if extensions are matching
+        if (
+            input_extension
+            and self.output_ext == input_extension
+        ):
+            command.extend([
+                "-c", "copy"
+            ])
+
+        # add output path at the end
+        command.append(output_path)
+
         # execute
         self.log.debug("Executing: {}".format(" ".join(command)))
         output = run_subprocess(
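The ffmpeg hunks above move the output path out of the `-start_number` extend so that `-c copy` can be appended before it; ffmpeg only applies output options that precede the output path. A minimal sketch of the argument ordering (function and parameter names are illustrative; no process is actually spawned):

```python
import os


def build_command(ffmpeg_path, input_path, output_path, out_frame_start,
                  output_ext):
    command = [ffmpeg_path, "-i", input_path]

    # add output attributes
    command.extend(["-start_number", str(out_frame_start)])

    # add stream copying if extensions are matching (skip re-encode)
    input_extension = os.path.splitext(input_path)[-1]
    if input_extension and output_ext == input_extension:
        command.extend(["-c", "copy"])

    # the output path must come last, after all output options
    command.append(output_path)
    return command
```

With matching extensions the command gains `-c copy` before the output path; with a sequence input (e.g. `.exr` to `.mp4`) the copy flag is omitted and ffmpeg re-encodes.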
@ -1038,6 +1038,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
# Set audio duration
|
||||
audio_in_args.append("-to {:0.10f}".format(audio_duration))
|
||||
|
||||
# Ignore video data from audio input
|
||||
audio_in_args.append("-vn")
|
||||
|
||||
# Add audio input path
|
||||
audio_in_args.append("-i {}".format(
|
||||
path_to_subprocess_arg(audio["filename"])
|
||||
|
|
|
|||
|
|
@@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
     order = pyblish.api.ExtractorOrder
     families = [
         "imagesequence", "render", "render2d", "prerender",
-        "source", "clip", "take"
+        "source", "clip", "take", "online"
     ]
     hosts = ["shell", "fusion", "resolve", "traypublisher"]
     enabled = False

@@ -91,7 +91,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         full_input_path = os.path.join(src_staging, input_file)
         self.log.info("input {}".format(full_input_path))
         filename = os.path.splitext(input_file)[0]
-        jpeg_file = filename + ".jpg"
+        jpeg_file = filename + "_thumb.jpg"
         full_output_path = os.path.join(dst_staging, jpeg_file)

         if oiio_supported:
@@ -100,7 +100,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):

         self.log.info("Thumbnail source: {}".format(thumbnail_source))
         src_basename = os.path.basename(thumbnail_source)
-        dst_filename = os.path.splitext(src_basename)[0] + ".jpg"
+        dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg"
         full_output_path = os.path.join(dst_staging, dst_filename)

         if oiio_supported:
31 openpype/plugins/publish/help/validate_publish_dir.xml Normal file
@@ -0,0 +1,31 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<root>
+<error id="main">
+<title>Source directory not collected</title>
+<description>
+## Source directory not collected
+
+Instance is marked for in place publishing. Its 'originalDirname' must be collected. Contact OP developer to modify collector.
+
+</description>
+<detail>
+### __Detailed Info__ (optional)
+
+In place publishing uses source directory and file name in resulting path and file name of published item. For this instance
+all required metadata weren't filled. This is not recoverable error, unless instance itself is removed.
+Collector for this instance must be updated for instance to be published.
+</detail>
+</error>
+<error id="not_in_dir">
+<title>Source file not in project dir</title>
+<description>
+## Source file not in project dir
+
+Path '{original_dirname}' not in project folder. Please publish from inside of project folder.
+
+### How to repair?
+
+Restart publish after you moved source file into project directory.
+</description>
+</error>
+</root>
@@ -25,7 +25,6 @@ from openpype.client import (
 )
 from openpype.lib import source_hash
 from openpype.lib.file_transaction import FileTransaction
-from openpype.pipeline import legacy_io
 from openpype.pipeline.publish import (
     KnownPublishError,
     get_publish_template_name,

@@ -132,7 +131,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         "mvUsdComposition",
         "mvUsdOverride",
         "simpleUnrealTexture",
-        "online"
+        "online",
+        "uasset"
     ]

     default_template_name = "publish"

@@ -244,7 +244,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         return filtered_repres

     def register(self, instance, file_transactions, filtered_repres):
-        project_name = legacy_io.active_project()
+        project_name = instance.context.data["projectName"]

         instance_stagingdir = instance.data.get("stagingDir")
         if not instance_stagingdir:

@@ -270,6 +270,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         )
         instance.data["versionEntity"] = version

+        anatomy = instance.context.data["anatomy"]
+
         # Get existing representations (if any)
         existing_repres_by_name = {
             repre_doc["name"].lower(): repre_doc

@@ -303,13 +305,17 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         # .ma representation. Those destination paths are pre-defined, etc.
         # todo: should we move or simplify this logic?
         resource_destinations = set()
-        for src, dst in instance.data.get("transfers", []):
-            file_transactions.add(src, dst, mode=FileTransaction.MODE_COPY)
-            resource_destinations.add(os.path.abspath(dst))
-
-        for src, dst in instance.data.get("hardlinks", []):
-            file_transactions.add(src, dst, mode=FileTransaction.MODE_HARDLINK)
-            resource_destinations.add(os.path.abspath(dst))
+        file_copy_modes = [
+            ("transfers", FileTransaction.MODE_COPY),
+            ("hardlinks", FileTransaction.MODE_HARDLINK)
+        ]
+        for files_type, copy_mode in file_copy_modes:
+            for src, dst in instance.data.get(files_type, []):
+                self._validate_path_in_project_roots(anatomy, dst)
+
+                file_transactions.add(src, dst, mode=copy_mode)
+                resource_destinations.add(os.path.abspath(dst))

         # Bulk write to the database
         # We write the subset and version to the database before the File

@@ -342,7 +348,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         # Compute the resource file infos once (files belonging to the
         # version instance instead of an individual representation) so
         # we can re-use those file infos per representation
-        anatomy = instance.context.data["anatomy"]
         resource_file_infos = self.get_files_info(resource_destinations,
                                                   sites=sites,
                                                   anatomy=anatomy)

@@ -529,6 +534,20 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         template_data["representation"] = repre["name"]
         template_data["ext"] = repre["ext"]

+        stagingdir = repre.get("stagingDir")
+        if not stagingdir:
+            # Fall back to instance staging dir if not explicitly
+            # set for representation in the instance
+            self.log.debug((
+                "Representation uses instance staging dir: {}"
+            ).format(instance_stagingdir))
+            stagingdir = instance_stagingdir
+
+        if not stagingdir:
+            raise KnownPublishError(
+                "No staging directory set for representation: {}".format(repre)
+            )
+
         # optionals
         # retrieve additional anatomy data from representation if exists
         for key, anatomy_key in {

@@ -548,20 +567,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             if value is not None:
                 template_data[anatomy_key] = value

-        stagingdir = repre.get("stagingDir")
-        if not stagingdir:
-            # Fall back to instance staging dir if not explicitly
-            # set for representation in the instance
-            self.log.debug((
-                "Representation uses instance staging dir: {}"
-            ).format(instance_stagingdir))
-            stagingdir = instance_stagingdir
-
-        if not stagingdir:
-            raise KnownPublishError(
-                "No staging directory set for representation: {}".format(repre)
-            )
-
         self.log.debug("Anatomy template name: {}".format(template_name))
         anatomy = instance.context.data["anatomy"]
         publish_template_category = anatomy.templates[template_name]

@@ -569,6 +574,25 @@ class IntegrateAsset(pyblish.api.InstancePlugin):

         is_udim = bool(repre.get("udim"))

+        # handle publish in place
+        if "originalDirname" in template:
+            # store as originalDirname only original value without project root
+            # if instance collected originalDirname is present, it should be
+            # used for all represe
+            # from temp to final
+            original_directory = (
+                instance.data.get("originalDirname") or instance_stagingdir)
+
+            _rootless = self.get_rootless_path(anatomy, original_directory)
+            if _rootless == original_directory:
+                raise KnownPublishError((
+                    "Destination path '{}' ".format(original_directory) +
+                    "must be in project dir"
+                ))
+            relative_path_start = _rootless.rfind('}') + 2
+            without_root = _rootless[relative_path_start:]
+            template_data["originalDirname"] = without_root
+
         is_sequence_representation = isinstance(files, (list, tuple))
         if is_sequence_representation:
             # Collection of files (sequence)

@@ -587,6 +611,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             ))

             src_collection = src_collections[0]
+            template_data["originalBasename"] = src_collection.head[:-1]
             destination_indexes = list(src_collection.indexes)
             # Use last frame for minimum padding
             # - that should cover both 'udim' and 'frame' minimum padding

@@ -671,12 +696,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
                 raise KnownPublishError(
                     "This is a bug. Representation file name is full path"
                 )

+            template_data["originalBasename"], _ = os.path.splitext(fname)
             # Manage anatomy template data
             template_data.pop("frame", None)
             if is_udim:
                 template_data["udim"] = repre["udim"][0]

             # Construct destination filepath from template
             anatomy_filled = anatomy.format(template_data)
             template_filled = anatomy_filled[template_name]["path"]

@@ -805,11 +829,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         """Return anatomy template name to use for integration"""

         # Anatomy data is pre-filled by Collectors

-        project_name = legacy_io.active_project()
+        context = instance.context
+        project_name = context.data["projectName"]

         # Task can be optional in anatomy data
-        host_name = instance.context.data["hostName"]
+        host_name = context.data["hostName"]
         anatomy_data = instance.data["anatomyData"]
         family = anatomy_data["family"]
         task_info = anatomy_data.get("task") or {}

@@ -820,7 +844,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             family,
             task_name=task_info.get("name"),
             task_type=task_info.get("type"),
-            project_settings=instance.context.data["project_settings"],
+            project_settings=context.data["project_settings"],
             logger=self.log
         )

@@ -890,3 +914,21 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             "hash": source_hash(path),
             "sites": sites
         }

+    def _validate_path_in_project_roots(self, anatomy, file_path):
+        """Checks if 'file_path' starts with any of the roots.
+
+        Used to check that published path belongs to project, eg. we are not
+        trying to publish to local only folder.
+        Args:
+            anatomy (Anatomy)
+            file_path (str)
+        Raises
+            (KnownPublishError)
+        """
+        path = self.get_rootless_path(anatomy, file_path)
+        if not path:
+            raise KnownPublishError((
+                "Destination path '{}' ".format(file_path) +
+                "must be in project dir"
+            ))
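The in-place publishing hunk above stores a project-relative `originalDirname` by stripping a rootless path of its `{root[...]}` placeholder: `rfind('}') + 2` skips past the closing brace and the path separator after it. A standalone sketch of that slicing, using a hypothetical rootless string:

```python
def strip_root_placeholder(rootless_path):
    """Return the project-relative part of a rootless path.

    A rootless path looks like "{root[work]}/proj/asset/publish"; everything
    after the root placeholder and the following separator is kept.
    """
    # Index just past the last "}" plus the path separator that follows it
    relative_path_start = rootless_path.rfind("}") + 2
    return rootless_path[relative_path_start:]
```

Note this relies on the root placeholder being the only (or last) braced token in the path, which holds for the rootless strings `get_rootless_path` produces.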
@@ -123,7 +123,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        "staticMesh",
        "skeletalMesh",
        "mvLook",
        "mvUsd",
        "mvUsdComposition",
        "mvUsdOverride",
        "simpleUnrealTexture"
@@ -2,7 +2,6 @@ from pprint import pformat

 import pyblish.api

-from openpype.pipeline import legacy_io
 from openpype.client import get_assets


@@ -28,10 +27,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
         asset_and_parents = self.get_parents(context)
         self.log.debug("__ asset_and_parents: {}".format(asset_and_parents))

-        if not legacy_io.Session:
-            legacy_io.install()
-
-        project_name = legacy_io.active_project()
+        project_name = context.data["projectName"]
         db_assets = list(get_assets(
             project_name, fields=["name", "data.parents"]
         ))
74 openpype/plugins/publish/validate_publish_dir.py Normal file
@@ -0,0 +1,74 @@
+import pyblish.api
+from openpype.pipeline.publish import ValidateContentsOrder
+from openpype.pipeline.publish import (
+    PublishXmlValidationError,
+    get_publish_template_name,
+)
+
+
+class ValidatePublishDir(pyblish.api.InstancePlugin):
+    """Validates if 'publishDir' is a project directory
+
+    'publishDir' is collected based on publish templates. In specific cases
+    ('source' template) source folder of items is used as a 'publishDir', this
+    validates if it is inside any project dir for the project.
+    (eg. files are not published from local folder, inaccessible for studio)
+
+    """
+
+    order = ValidateContentsOrder
+    label = "Validate publish dir"
+
+    checked_template_names = ["source"]
+    # validate instances might have interim family, needs to be mapped to final
+    family_mapping = {
+        "renderLayer": "render",
+        "renderLocal": "render"
+    }
+
+    def process(self, instance):
+
+        template_name = self._get_template_name_from_instance(instance)
+
+        if template_name not in self.checked_template_names:
+            return
+
+        original_dirname = instance.data.get("originalDirname")
+        if not original_dirname:
+            raise PublishXmlValidationError(
+                self,
+                "Instance meant for in place publishing."
+                " Its 'originalDirname' must be collected."
+                " Contact OP developer to modify collector."
+            )
+
+        anatomy = instance.context.data["anatomy"]
+
+        success, _ = anatomy.find_root_template_from_path(original_dirname)
+
+        formatting_data = {
+            "original_dirname": original_dirname,
+        }
+        msg = "Path '{}' not in project folder.".format(original_dirname) + \
+            " Please publish from inside of project folder."
+        if not success:
+            raise PublishXmlValidationError(self, msg, key="not_in_dir",
+                                            formatting_data=formatting_data)
+
+    def _get_template_name_from_instance(self, instance):
+        project_name = instance.context.data["projectName"]
+        host_name = instance.context.data["hostName"]
+        anatomy_data = instance.data["anatomyData"]
+        family = anatomy_data["family"]
+        family = self.family_mapping.get(family) or family
+        task_info = anatomy_data.get("task") or {}
+
+        return get_publish_template_name(
+            project_name,
+            host_name,
+            family,
+            task_name=task_info.get("name"),
+            task_type=task_info.get("type"),
+            project_settings=instance.context.data["project_settings"],
+            logger=self.log
+        )
@@ -53,11 +53,17 @@
             "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
             "path": "{@folder}/{@file}"
         },
+        "source": {
+            "folder": "{root[work]}/{originalDirname}",
+            "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
+            "path": "{@folder}/{@file}"
+        },
         "__dynamic_keys_labels__": {
             "maya2unreal": "Maya to Unreal",
             "simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
             "simpleUnrealTexture": "Simple Unreal Texture",
-            "online": "online"
+            "online": "online",
+            "source": "source"
         }
     }
 }
@@ -25,6 +25,7 @@
         "active": true,
         "tile_assembler_plugin": "OpenPypeTileAssembler",
         "use_published": true,
+        "import_reference": false,
         "asset_dependencies": true,
         "priority": 50,
         "tile_priority": 50,
@@ -130,6 +130,11 @@
        "key": "use_published",
        "label": "Use Published scene"
    },
    {
        "type": "boolean",
        "key": "import_reference",
        "label": "Use Scene with Imported Reference"
    },
    {
        "type": "boolean",
        "key": "asset_dependencies",
@@ -148,6 +148,10 @@ QPushButton::menu-indicator {
    padding-right: 5px;
}

QPushButton[state="error"] {
    background: {color:publisher:error};
}

QToolButton {
    border: 0px solid transparent;
    background: {color:bg-buttons};
@@ -1416,6 +1420,13 @@ CreateNextPageOverlay {
}

/* Globally used names */
#ValidatedLineEdit[state="valid"], #ValidatedLineEdit[state="valid"]:focus, #ValidatedLineEdit[state="valid"]:hover {
    border-color: {color:publisher:success};
}
#ValidatedLineEdit[state="invalid"], #ValidatedLineEdit[state="invalid"]:focus, #ValidatedLineEdit[state="invalid"]:hover {
    border-color: {color:publisher:error};
}

#Separator {
    background: {color:bg-menu-separator};
}
@@ -401,9 +401,8 @@ class EnumAttrWidget(_BaseAttrDefWidget):
        if self.attr_def.tooltip:
            input_widget.setToolTip(self.attr_def.tooltip)

        items = self.attr_def.items
        for key, label in items.items():
            input_widget.addItem(label, key)
        for item in self.attr_def.items:
            input_widget.addItem(item["label"], item["value"])

        idx = input_widget.findData(self.attr_def.default)
        if idx >= 0:
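The hunk above switches enum items from a `{value: label}` mapping to a list of `{"value": ..., "label": ...}` dicts, which keeps an explicit item order. A plain-Python sketch of converting the old shape into the new one (the helper name is made up):

```python
# Hypothetical helper converting the old {value: label} mapping into the
# new ordered list-of-dicts shape used for enum items.
def items_to_list(items):
    if isinstance(items, dict):
        return [
            {"value": value, "label": label}
            for value, label in items.items()
        ]
    return list(items)


old_shape = {"png": "PNG", "exr": "EXR"}
new_shape = items_to_list(old_shape)
print(new_shape)
# [{'value': 'png', 'label': 'PNG'}, {'value': 'exr', 'label': 'EXR'}]
```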
@@ -23,6 +23,8 @@ class CreatorsModel(QtGui.QStandardItemModel):
        items = []
        creators = discover_legacy_creator_plugins()
        for creator in creators:
            if not creator.enabled:
                continue
            item_id = str(uuid.uuid4())
            self._creators_by_id[item_id] = creator
@@ -29,6 +29,7 @@ from openpype.pipeline.load import (
    load_with_repre_context,
    load_with_subset_context,
    load_with_subset_contexts,
    LoadError,
    IncompatibleLoaderError,
)
from openpype.tools.utils import (

@@ -458,6 +459,8 @@ class SubsetWidget(QtWidgets.QWidget):
        repre_loaders = []
        subset_loaders = []
        for loader in available_loaders:
            if not loader.enabled:
                continue
            # Skip if its a SubsetLoader.
            if issubclass(loader, SubsetLoaderPlugin):
                subset_loaders.append(loader)

@@ -1364,6 +1367,8 @@ class RepresentationWidget(QtWidgets.QWidget):

        filtered_loaders = []
        for loader in available_loaders:
            if not loader.enabled:
                continue
            # Skip subset loaders
            if issubclass(loader, SubsetLoaderPlugin):
                continue

@@ -1589,6 +1594,7 @@ def _load_representations_by_loader(loader, repre_contexts,
                repre_context,
                options=options
            )

        except IncompatibleLoaderError as exc:
            print(exc)
            error_info.append((

@@ -1600,10 +1606,13 @@ def _load_representations_by_loader(loader, repre_contexts,
            ))

        except Exception as exc:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            formatted_traceback = "".join(traceback.format_exception(
                exc_type, exc_value, exc_traceback
            ))
            formatted_traceback = None
            if not isinstance(exc, LoadError):
                exc_type, exc_value, exc_traceback = sys.exc_info()
                formatted_traceback = "".join(traceback.format_exception(
                    exc_type, exc_value, exc_traceback
                ))

            error_info.append((
                str(exc),
                formatted_traceback,

@@ -1628,7 +1637,7 @@ def _load_subsets_by_loader(loader, subset_contexts, options,
    error_info = []

    if options is None:  # not load when cancelled
        return
        return error_info

    if loader.is_multiple_contexts_compatible:
        subset_names = []

@@ -1643,13 +1652,14 @@ def _load_subsets_by_loader(loader, subset_contexts, options,
                subset_contexts,
                options=options
            )

        except Exception as exc:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            formatted_traceback = "".join(
                traceback.format_exception(
            formatted_traceback = None
            if not isinstance(exc, LoadError):
                exc_type, exc_value, exc_traceback = sys.exc_info()
                formatted_traceback = "".join(traceback.format_exception(
                    exc_type, exc_value, exc_traceback
                )
            )
                ))
            error_info.append((
                str(exc),
                formatted_traceback,

@@ -1669,13 +1679,15 @@ def _load_subsets_by_loader(loader, subset_contexts, options,
                subset_context,
                options=options
            )

        except Exception as exc:
            exc_type, exc_value, exc_traceback = sys.exc_info()
            formatted_traceback = "\n".join(
                traceback.format_exception(
            formatted_traceback = None
            if not isinstance(exc, LoadError):
                exc_type, exc_value, exc_traceback = sys.exc_info()
                formatted_traceback = "".join(traceback.format_exception(
                    exc_type, exc_value, exc_traceback
                )
            )
                ))

            error_info.append((
                str(exc),
                formatted_traceback,
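The loader hunks above change error handling so a full traceback is formatted only for unexpected exceptions, while a `LoadError` keeps just its message. A standalone sketch of that pattern, using a local `LoadError` stand-in for the one imported from `openpype.pipeline.load`:

```python
import sys
import traceback


class LoadError(Exception):
    """Local stand-in for openpype.pipeline.load.LoadError."""


def collect_error(exc):
    # Keep only the message for expected load errors; attach a full
    # traceback for anything unexpected.
    formatted_traceback = None
    if not isinstance(exc, LoadError):
        exc_type, exc_value, exc_traceback = sys.exc_info()
        formatted_traceback = "".join(traceback.format_exception(
            exc_type, exc_value, exc_traceback
        ))
    return str(exc), formatted_traceback


try:
    raise LoadError("Missing representation file")
except Exception as exc:
    message, tb = collect_error(exc)
    print(message)     # Missing representation file
    print(tb is None)  # True
```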
0     openpype/tools/push_to_project/__init__.py    Normal file
41    openpype/tools/push_to_project/app.py    Normal file
@@ -0,0 +1,41 @@
import click
from qtpy import QtWidgets, QtCore

from openpype.tools.push_to_project.window import PushToContextSelectWindow


@click.command()
@click.option("--project", help="Source project name")
@click.option("--version", help="Source version id")
def main(project, version):
    """Run PushToProject tool to integrate version in different project.

    Args:
        project (str): Source project name.
        version (str): Version id.
    """

    app = QtWidgets.QApplication.instance()
    if not app:
        # 'AA_EnableHighDpiScaling' must be set before app instance creation
        high_dpi_scale_attr = getattr(
            QtCore.Qt, "AA_EnableHighDpiScaling", None
        )
        if high_dpi_scale_attr is not None:
            QtWidgets.QApplication.setAttribute(high_dpi_scale_attr)

        app = QtWidgets.QApplication([])

    attr = getattr(QtCore.Qt, "AA_UseHighDpiPixmaps", None)
    if attr is not None:
        app.setAttribute(attr)

    window = PushToContextSelectWindow()
    window.show()
    window.controller.set_source(project, version)

    app.exec_()


if __name__ == "__main__":
    main()
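`app.py` above guards the Qt attributes with `getattr(..., None)` so the tool keeps working on Qt builds where a given flag does not exist. The same defensive pattern, shown here without Qt (class and flag names are made up):

```python
# Hypothetical API namespace standing in for a Qt build that lacks
# a newer flag such as AA_EnableHighDpiScaling.
class OldApi:
    EXISTING_FLAG = 1


applied = []
for flag_name in ("EXISTING_FLAG", "AA_EnableHighDpiScaling"):
    flag = getattr(OldApi, flag_name, None)
    if flag is not None:  # skip flags this build does not provide
        applied.append(flag_name)

print(applied)  # ['EXISTING_FLAG']
```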
678    openpype/tools/push_to_project/control_context.py    Normal file
@@ -0,0 +1,678 @@
import re
import collections
import threading

from openpype.client import (
    get_projects,
    get_assets,
    get_asset_by_id,
    get_subset_by_id,
    get_version_by_id,
    get_representations,
)
from openpype.settings import get_project_settings
from openpype.lib import prepare_template_data
from openpype.lib.events import EventSystem
from openpype.pipeline.create import (
    SUBSET_NAME_ALLOWED_SYMBOLS,
    get_subset_name_template,
)

from .control_integrate import (
    ProjectPushItem,
    ProjectPushItemProcess,
    ProjectPushItemStatus,
)


class AssetItem:
    def __init__(
        self,
        entity_id,
        name,
        icon_name,
        icon_color,
        parent_id,
        has_children
    ):
        self.id = entity_id
        self.name = name
        self.icon_name = icon_name
        self.icon_color = icon_color
        self.parent_id = parent_id
        self.has_children = has_children

    @classmethod
    def from_doc(cls, asset_doc, has_children=True):
        parent_id = asset_doc["data"].get("visualParent")
        if parent_id is not None:
            parent_id = str(parent_id)
        return cls(
            str(asset_doc["_id"]),
            asset_doc["name"],
            asset_doc["data"].get("icon"),
            asset_doc["data"].get("color"),
            parent_id,
            has_children
        )


class TaskItem:
    def __init__(self, asset_id, name, task_type, short_name):
        self.asset_id = asset_id
        self.name = name
        self.task_type = task_type
        self.short_name = short_name

    @classmethod
    def from_asset_doc(cls, asset_doc, project_doc):
        asset_tasks = asset_doc["data"].get("tasks") or {}
        project_task_types = project_doc["config"]["tasks"]
        output = []
        for task_name, task_info in asset_tasks.items():
            task_type = task_info.get("type")
            task_type_info = project_task_types.get(task_type) or {}
            output.append(cls(
                asset_doc["_id"],
                task_name,
                task_type,
                task_type_info.get("short_name")
            ))
        return output


class EntitiesModel:
    def __init__(self, event_system):
        self._event_system = event_system
        self._project_names = None
        self._project_docs_by_name = {}
        self._assets_by_project = {}
        self._tasks_by_asset_id = collections.defaultdict(dict)

    def has_cached_projects(self):
        return self._project_names is None

    def has_cached_assets(self, project_name):
        if not project_name:
            return True
        return project_name in self._assets_by_project

    def has_cached_tasks(self, project_name):
        return self.has_cached_assets(project_name)

    def get_projects(self):
        if self._project_names is None:
            self.refresh_projects()
        return list(self._project_names)

    def get_assets(self, project_name):
        if project_name not in self._assets_by_project:
            self.refresh_assets(project_name)
        return dict(self._assets_by_project[project_name])

    def get_asset_by_id(self, project_name, asset_id):
        return self._assets_by_project[project_name].get(asset_id)

    def get_tasks(self, project_name, asset_id):
        if not project_name or not asset_id:
            return []

        if project_name not in self._tasks_by_asset_id:
            self.refresh_assets(project_name)

        all_task_items = self._tasks_by_asset_id[project_name]
        asset_task_items = all_task_items.get(asset_id)
        if not asset_task_items:
            return []
        return list(asset_task_items)

    def refresh_projects(self, force=False):
        self._event_system.emit(
            "projects.refresh.started", {}, "entities.model"
        )
        if force or self._project_names is None:
            project_names = []
            project_docs_by_name = {}
            for project_doc in get_projects():
                library_project = project_doc["data"].get("library_project")
                if not library_project:
                    continue
                project_name = project_doc["name"]
                project_names.append(project_name)
                project_docs_by_name[project_name] = project_doc
            self._project_names = project_names
            self._project_docs_by_name = project_docs_by_name
        self._event_system.emit(
            "projects.refresh.finished", {}, "entities.model"
        )

    def _refresh_assets(self, project_name):
        asset_items_by_id = {}
        task_items_by_asset_id = {}
        self._assets_by_project[project_name] = asset_items_by_id
        self._tasks_by_asset_id[project_name] = task_items_by_asset_id
        if not project_name:
            return

        project_doc = self._project_docs_by_name[project_name]
        asset_docs_by_parent_id = collections.defaultdict(list)
        for asset_doc in get_assets(project_name):
            parent_id = asset_doc["data"].get("visualParent")
            asset_docs_by_parent_id[parent_id].append(asset_doc)

        hierarchy_queue = collections.deque()
        for asset_doc in asset_docs_by_parent_id[None]:
            hierarchy_queue.append(asset_doc)

        while hierarchy_queue:
            asset_doc = hierarchy_queue.popleft()
            children = asset_docs_by_parent_id[asset_doc["_id"]]
            asset_item = AssetItem.from_doc(asset_doc, len(children) > 0)
            asset_items_by_id[asset_item.id] = asset_item
            task_items_by_asset_id[asset_item.id] = (
                TaskItem.from_asset_doc(asset_doc, project_doc)
            )
            for child in children:
                hierarchy_queue.append(child)

    def refresh_assets(self, project_name, force=False):
        self._event_system.emit(
            "assets.refresh.started",
            {"project_name": project_name},
            "entities.model"
        )

        if force or project_name not in self._assets_by_project:
            self._refresh_assets(project_name)

        self._event_system.emit(
            "assets.refresh.finished",
            {"project_name": project_name},
            "entities.model"
        )


class SelectionModel:
    def __init__(self, event_system):
        self._event_system = event_system

        self.project_name = None
        self.asset_id = None
        self.task_name = None

    def select_project(self, project_name):
        if self.project_name == project_name:
            return

        self.project_name = project_name
        self._event_system.emit(
            "project.changed",
            {"project_name": project_name},
            "selection.model"
        )

    def select_asset(self, asset_id):
        if self.asset_id == asset_id:
            return
        self.asset_id = asset_id
        self._event_system.emit(
            "asset.changed",
            {
                "project_name": self.project_name,
                "asset_id": asset_id
            },
            "selection.model"
        )

    def select_task(self, task_name):
        if self.task_name == task_name:
            return
        self.task_name = task_name
        self._event_system.emit(
            "task.changed",
            {
                "project_name": self.project_name,
                "asset_id": self.asset_id,
                "task_name": task_name
            },
            "selection.model"
        )


class UserPublishValues:
    """Helper object to validate values required for push to different project.

    Args:
        event_system (EventSystem): Event system to catch and emit events.
        new_asset_name (str): Name of new asset name.
        variant (str): Variant for new subset name in new project.
    """

    asset_name_regex = re.compile("^[a-zA-Z0-9_.]+$")
    variant_regex = re.compile("^[{}]+$".format(SUBSET_NAME_ALLOWED_SYMBOLS))

    def __init__(self, event_system):
        self._event_system = event_system
        self._new_asset_name = None
        self._variant = None
        self._comment = None
        self._is_variant_valid = False
        self._is_new_asset_name_valid = False

        self.set_new_asset("")
        self.set_variant("")
        self.set_comment("")

    @property
    def new_asset_name(self):
        return self._new_asset_name

    @property
    def variant(self):
        return self._variant

    @property
    def comment(self):
        return self._comment

    @property
    def is_variant_valid(self):
        return self._is_variant_valid

    @property
    def is_new_asset_name_valid(self):
        return self._is_new_asset_name_valid

    @property
    def is_valid(self):
        return self.is_variant_valid and self.is_new_asset_name_valid

    def set_variant(self, variant):
        if variant == self._variant:
            return

        old_variant = self._variant
        old_is_valid = self._is_variant_valid

        self._variant = variant
        is_valid = False
        if variant:
            is_valid = self.variant_regex.match(variant) is not None
        self._is_variant_valid = is_valid

        changes = {
            key: {"new": new, "old": old}
            for key, old, new in (
                ("variant", old_variant, variant),
                ("is_valid", old_is_valid, is_valid)
            )
        }

        self._event_system.emit(
            "variant.changed",
            {
                "variant": variant,
                "is_valid": self._is_variant_valid,
                "changes": changes
            },
            "user_values"
        )

    def set_new_asset(self, asset_name):
        if self._new_asset_name == asset_name:
            return
        old_asset_name = self._new_asset_name
        old_is_valid = self._is_new_asset_name_valid
        self._new_asset_name = asset_name
        is_valid = True
        if asset_name:
            is_valid = (
                self.asset_name_regex.match(asset_name) is not None
            )
        self._is_new_asset_name_valid = is_valid
        changes = {
            key: {"new": new, "old": old}
            for key, old, new in (
                ("new_asset_name", old_asset_name, asset_name),
                ("is_valid", old_is_valid, is_valid)
            )
        }

        self._event_system.emit(
            "new_asset_name.changed",
            {
                "new_asset_name": self._new_asset_name,
                "is_valid": self._is_new_asset_name_valid,
                "changes": changes
            },
            "user_values"
        )

    def set_comment(self, comment):
        if comment == self._comment:
            return
        old_comment = self._comment
        self._comment = comment
        self._event_system.emit(
            "comment.changed",
            {
                "comment": comment,
                "changes": {
                    "comment": {"new": comment, "old": old_comment}
                }
            },
            "user_values"
        )


class PushToContextController:
    def __init__(self, project_name=None, version_id=None):
        self._src_project_name = None
        self._src_version_id = None
        self._src_asset_doc = None
        self._src_subset_doc = None
        self._src_version_doc = None

        event_system = EventSystem()
        entities_model = EntitiesModel(event_system)
        selection_model = SelectionModel(event_system)
        user_values = UserPublishValues(event_system)

        self._event_system = event_system
        self._entities_model = entities_model
        self._selection_model = selection_model
        self._user_values = user_values

        event_system.add_callback("project.changed", self._on_project_change)
        event_system.add_callback("asset.changed", self._invalidate)
        event_system.add_callback("variant.changed", self._invalidate)
        event_system.add_callback("new_asset_name.changed", self._invalidate)

        self._submission_enabled = False
        self._process_thread = None
        self._process_item = None

        self.set_source(project_name, version_id)

    def _get_task_info_from_repre_docs(self, asset_doc, repre_docs):
        asset_tasks = asset_doc["data"].get("tasks") or {}
        found_comb = []
        for repre_doc in repre_docs:
            context = repre_doc["context"]
            task_info = context.get("task")
            if task_info is None:
                continue

            task_name = None
            task_type = None
            if isinstance(task_info, str):
                task_name = task_info
                asset_task_info = asset_tasks.get(task_info) or {}
                task_type = asset_task_info.get("type")

            elif isinstance(task_info, dict):
                task_name = task_info.get("name")
                task_type = task_info.get("type")

            if task_name and task_type:
                return task_name, task_type

            if task_name:
                found_comb.append((task_name, task_type))

        for task_name, task_type in found_comb:
            return task_name, task_type
        return None, None

    def _get_src_variant(self):
        project_name = self._src_project_name
        version_doc = self._src_version_doc
        asset_doc = self._src_asset_doc
        repre_docs = get_representations(
            project_name, version_ids=[version_doc["_id"]]
        )
        task_name, task_type = self._get_task_info_from_repre_docs(
            asset_doc, repre_docs
        )

        project_settings = get_project_settings(project_name)
        subset_doc = self.src_subset_doc
        family = subset_doc["data"].get("family")
        if not family:
            family = subset_doc["data"]["families"][0]
        template = get_subset_name_template(
            self._src_project_name,
            family,
            task_name,
            task_type,
            None,
            project_settings=project_settings
        )
        template_low = template.lower()
        variant_placeholder = "{variant}"
        if (
            variant_placeholder not in template_low
            or (not task_name and "{task" in template_low)
        ):
            return ""

        idx = template_low.index(variant_placeholder)
        template_s = template[:idx]
        template_e = template[idx + len(variant_placeholder):]
        fill_data = prepare_template_data({
            "family": family,
            "task": task_name
        })
        try:
            subset_s = template_s.format(**fill_data)
            subset_e = template_e.format(**fill_data)
        except Exception as exc:
            print("Failed format", exc)
            return ""

        subset_name = self.src_subset_doc["name"]
        if (
            (subset_s and not subset_name.startswith(subset_s))
            or (subset_e and not subset_name.endswith(subset_e))
        ):
            return ""

        if subset_s:
            subset_name = subset_name[len(subset_s):]
        if subset_e:
            subset_name = subset_name[:len(subset_e)]
        return subset_name

    def set_source(self, project_name, version_id):
        if (
            project_name == self._src_project_name
            and version_id == self._src_version_id
        ):
            return

        self._src_project_name = project_name
        self._src_version_id = version_id
        asset_doc = None
        subset_doc = None
        version_doc = None
        if project_name and version_id:
            version_doc = get_version_by_id(project_name, version_id)

        if version_doc:
            subset_doc = get_subset_by_id(project_name, version_doc["parent"])

        if subset_doc:
            asset_doc = get_asset_by_id(project_name, subset_doc["parent"])

        self._src_asset_doc = asset_doc
        self._src_subset_doc = subset_doc
        self._src_version_doc = version_doc
        if asset_doc:
            self.user_values.set_new_asset(asset_doc["name"])
            variant = self._get_src_variant()
            if variant:
                self.user_values.set_variant(variant)

            comment = version_doc["data"].get("comment")
            if comment:
                self.user_values.set_comment(comment)

        self._event_system.emit(
            "source.changed", {
                "project_name": project_name,
                "version_id": version_id
            },
            "controller"
        )

    @property
    def src_project_name(self):
        return self._src_project_name

    @property
    def src_version_id(self):
        return self._src_version_id

    @property
    def src_label(self):
        if not self._src_project_name or not self._src_version_id:
            return "Source is not defined"

        asset_doc = self.src_asset_doc
        if not asset_doc:
            return "Source is invalid"

        asset_path_parts = list(asset_doc["data"]["parents"])
        asset_path_parts.append(asset_doc["name"])
        asset_path = "/".join(asset_path_parts)
        subset_doc = self.src_subset_doc
        version_doc = self.src_version_doc
        return "Source: {}/{}/{}/v{:0>3}".format(
            self._src_project_name,
            asset_path,
            subset_doc["name"],
            version_doc["name"]
        )

    @property
    def src_version_doc(self):
        return self._src_version_doc

    @property
    def src_subset_doc(self):
        return self._src_subset_doc

    @property
    def src_asset_doc(self):
        return self._src_asset_doc

    @property
    def event_system(self):
        return self._event_system

    @property
    def model(self):
        return self._entities_model

    @property
    def selection_model(self):
        return self._selection_model

    @property
    def user_values(self):
        return self._user_values

    @property
    def submission_enabled(self):
        return self._submission_enabled

    def _on_project_change(self, event):
        project_name = event["project_name"]
        self.model.refresh_assets(project_name)
        self._invalidate()

    def _invalidate(self):
        submission_enabled = self._check_submit_validations()
        if submission_enabled == self._submission_enabled:
            return
        self._submission_enabled = submission_enabled
        self._event_system.emit(
            "submission.enabled.changed",
            {"enabled": submission_enabled},
            "controller"
        )

    def _check_submit_validations(self):
        if not self._user_values.is_valid:
            return False

        if not self.selection_model.project_name:
            return False

        if (
            not self._user_values.new_asset_name
            and not self.selection_model.asset_id
        ):
            return False

        return True

    def get_selected_asset_name(self):
        project_name = self._selection_model.project_name
        asset_id = self._selection_model.asset_id
        if not project_name or not asset_id:
            return None
        asset_item = self._entities_model.get_asset_by_id(
            project_name, asset_id
        )
        if asset_item:
            return asset_item.name
        return None

    def submit(self, wait=True):
        if not self.submission_enabled:
            return

        if self._process_thread is not None:
            return

        item = ProjectPushItem(
            self.src_project_name,
            self.src_version_id,
            self.selection_model.project_name,
            self.selection_model.asset_id,
            self.selection_model.task_name,
            self.user_values.variant,
            comment=self.user_values.comment,
            new_asset_name=self.user_values.new_asset_name,
            dst_version=1
        )

        status_item = ProjectPushItemStatus(event_system=self._event_system)
        process_item = ProjectPushItemProcess(item, status_item)
        self._process_item = process_item
        self._event_system.emit("submit.started", {}, "controller")
        if wait:
            self._submit_callback()
            self._process_item = None
            return process_item

        thread = threading.Thread(target=self._submit_callback)
        self._process_thread = thread
        thread.start()
        return process_item

    def wait_for_process_thread(self):
        if self._process_thread is None:
            return
        self._process_thread.join()
        self._process_thread = None

    def _submit_callback(self):
        process_item = self._process_item
        if process_item is None:
            return
        process_item.process()
        self._event_system.emit("submit.finished", {}, "controller")
        if process_item is self._process_item:
            self._process_item = None
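`control_context.py` wires its models together through `EventSystem.emit(topic, data, source)` and `add_callback`. A tiny stand-in event system (not OpenPype's implementation, which has more features) showing the same decoupling:

```python
import collections


class TinyEventSystem:
    """Minimal emit/add_callback sketch of the EventSystem pattern."""

    def __init__(self):
        self._callbacks = collections.defaultdict(list)

    def add_callback(self, topic, callback):
        self._callbacks[topic].append(callback)

    def emit(self, topic, data, source):
        # Deliver a plain dict event; OpenPype wraps this in an Event object.
        event = {"topic": topic, "source": source}
        event.update(data)
        for callback in self._callbacks[topic]:
            callback(event)


received = []
events = TinyEventSystem()
events.add_callback("project.changed", received.append)
events.emit("project.changed", {"project_name": "demo"}, "selection.model")
print(received[0]["project_name"])  # demo
```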
1184    openpype/tools/push_to_project/control_integrate.py    Normal file
File diff suppressed because it is too large
826     openpype/tools/push_to_project/window.py    Normal file
@ -0,0 +1,826 @@
|
|||
import collections
|
||||
|
||||
from qtpy import QtWidgets, QtGui, QtCore
|
||||
|
||||
from openpype.style import load_stylesheet, get_app_icon_path
|
||||
from openpype.tools.utils import (
|
||||
PlaceholderLineEdit,
|
||||
SeparatorWidget,
|
||||
get_asset_icon_by_name,
|
||||
set_style_property,
|
||||
)
|
||||
from openpype.tools.utils.views import DeselectableTreeView
|
||||
|
||||
from .control_context import PushToContextController
|
||||
|
||||
PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 1
|
||||
ASSET_NAME_ROLE = QtCore.Qt.UserRole + 2
|
||||
ASSET_ID_ROLE = QtCore.Qt.UserRole + 3
|
||||
TASK_NAME_ROLE = QtCore.Qt.UserRole + 4
|
||||
TASK_TYPE_ROLE = QtCore.Qt.UserRole + 5
|
||||
|
||||
|
||||
class ProjectsModel(QtGui.QStandardItemModel):
|
||||
empty_text = "< Empty >"
|
||||
refreshing_text = "< Refreshing >"
|
||||
select_project_text = "< Select Project >"
|
||||
|
||||
refreshed = QtCore.Signal()
|
||||
|
||||
def __init__(self, controller):
|
||||
super(ProjectsModel, self).__init__()
|
||||
self._controller = controller
|
||||
|
||||
self.event_system.add_callback(
|
||||
"projects.refresh.finished", self._on_refresh_finish
|
||||
)
|
||||
|
||||
placeholder_item = QtGui.QStandardItem(self.empty_text)
|
||||
|
||||
root_item = self.invisibleRootItem()
|
||||
root_item.appendRows([placeholder_item])
|
||||
items = {None: placeholder_item}
|
||||
|
||||
self._placeholder_item = placeholder_item
|
||||
self._items = items
|
||||
|
||||
@property
|
||||
def event_system(self):
|
||||
return self._controller.event_system
|
||||
|
||||
def _on_refresh_finish(self):
|
||||
root_item = self.invisibleRootItem()
|
||||
project_names = self._controller.model.get_projects()
|
||||
|
||||
if not project_names:
|
||||
placeholder_text = self.empty_text
|
||||
else:
|
||||
placeholder_text = self.select_project_text
|
||||
self._placeholder_item.setData(placeholder_text, QtCore.Qt.DisplayRole)
|
||||
|
||||
new_items = []
|
||||
if None not in self._items:
|
||||
new_items.append(self._placeholder_item)
|
||||
|
||||
current_project_names = set(self._items.keys())
|
||||
for project_name in current_project_names - set(project_names):
|
||||
if project_name is None:
|
||||
continue
|
||||
item = self._items.pop(project_name)
|
||||
root_item.takeRow(item.row())
|
||||
|
||||
for project_name in project_names:
|
||||
if project_name in self._items:
|
||||
continue
|
||||
item = QtGui.QStandardItem(project_name)
|
||||
item.setData(project_name, PROJECT_NAME_ROLE)
|
||||
new_items.append(item)
|
||||
|
||||
if new_items:
|
||||
root_item.appendRows(new_items)
|
||||
self.refreshed.emit()
|
||||
|
||||
|
||||
class ProjectProxyModel(QtCore.QSortFilterProxyModel):
|
||||
def __init__(self):
|
||||
super(ProjectProxyModel, self).__init__()
|
||||
self._filter_empty_projects = False
|
||||
|
||||
def set_filter_empty_project(self, filter_empty_projects):
|
||||
if filter_empty_projects == self._filter_empty_projects:
|
||||
return
|
||||
self._filter_empty_projects = filter_empty_projects
|
||||
self.invalidate()
|
||||
|
||||
def filterAcceptsRow(self, row, parent):
|
||||
if not self._filter_empty_projects:
|
||||
return True
|
||||
model = self.sourceModel()
|
||||
source_index = model.index(row, self.filterKeyColumn(), parent)
|
||||
if model.data(source_index, PROJECT_NAME_ROLE) is None:
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
class AssetsModel(QtGui.QStandardItemModel):
    items_changed = QtCore.Signal()
    empty_text = "< Empty >"

    def __init__(self, controller):
        super(AssetsModel, self).__init__()
        self._controller = controller

        placeholder_item = QtGui.QStandardItem(self.empty_text)
        placeholder_item.setFlags(QtCore.Qt.ItemIsEnabled)

        root_item = self.invisibleRootItem()
        root_item.appendRows([placeholder_item])

        self.event_system.add_callback(
            "project.changed", self._on_project_change
        )
        self.event_system.add_callback(
            "assets.refresh.started", self._on_refresh_start
        )
        self.event_system.add_callback(
            "assets.refresh.finished", self._on_refresh_finish
        )

        self._items = {None: placeholder_item}

        self._placeholder_item = placeholder_item
        self._last_project = None

    @property
    def event_system(self):
        return self._controller.event_system

    def _clear(self):
        placeholder_in = False
        root_item = self.invisibleRootItem()
        for row in reversed(range(root_item.rowCount())):
            item = root_item.child(row)
            asset_id = item.data(ASSET_ID_ROLE)
            if asset_id is None:
                placeholder_in = True
                continue
            root_item.removeRow(item.row())

        for key in tuple(self._items.keys()):
            if key is not None:
                self._items.pop(key)

        if not placeholder_in:
            root_item.appendRows([self._placeholder_item])
        self._items[None] = self._placeholder_item

    def _on_project_change(self, event):
        project_name = event["project_name"]
        if project_name == self._last_project:
            return

        self._last_project = project_name
        self._clear()
        self.items_changed.emit()

    def _on_refresh_start(self, event):
        pass

    def _on_refresh_finish(self, event):
        event_project_name = event["project_name"]
        project_name = self._controller.selection_model.project_name
        if event_project_name != project_name:
            return

        self._last_project = event["project_name"]
        if project_name is None:
            if None not in self._items:
                self._clear()
                self.items_changed.emit()
            return

        asset_items_by_id = self._controller.model.get_assets(project_name)
        if not asset_items_by_id:
            self._clear()
            self.items_changed.emit()
            return

        assets_by_parent_id = collections.defaultdict(list)
        for asset_item in asset_items_by_id.values():
            assets_by_parent_id[asset_item.parent_id].append(asset_item)

        root_item = self.invisibleRootItem()
        if None in self._items:
            self._items.pop(None)
            root_item.takeRow(self._placeholder_item.row())

        items_to_remove = set(self._items) - set(asset_items_by_id.keys())
        hierarchy_queue = collections.deque()
        hierarchy_queue.append((None, root_item))
        while hierarchy_queue:
            parent_id, parent_item = hierarchy_queue.popleft()
            new_items = []
            for asset_item in assets_by_parent_id[parent_id]:
                item = self._items.get(asset_item.id)
                if item is None:
                    item = QtGui.QStandardItem()
                    item.setFlags(
                        QtCore.Qt.ItemIsSelectable
                        | QtCore.Qt.ItemIsEnabled
                    )
                    new_items.append(item)
                    self._items[asset_item.id] = item

                elif item.parent() is not parent_item:
                    new_items.append(item)

                icon = get_asset_icon_by_name(
                    asset_item.icon_name, asset_item.icon_color
                )
                item.setData(asset_item.name, QtCore.Qt.DisplayRole)
                item.setData(icon, QtCore.Qt.DecorationRole)
                item.setData(asset_item.id, ASSET_ID_ROLE)

                hierarchy_queue.append((asset_item.id, item))

            if new_items:
                parent_item.appendRows(new_items)

        for item_id in items_to_remove:
            item = self._items.pop(item_id, None)
            if item is None:
                continue
            parent = item.parent()
            if parent is not None:
                parent.takeRow(item.row())

        self.items_changed.emit()


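The refresh logic above fills the Qt model breadth-first from a flat `{parent_id: [children]}` mapping built out of the asset documents. Stripped of Qt, the traversal pattern looks like the sketch below; the `assets` records and `build_hierarchy` helper are hypothetical stand-ins, not part of the repository.

```python
import collections

# Hypothetical flat asset records: id -> (name, parent_id); None = top level.
assets = {
    "a": ("Characters", None),
    "b": ("Hero", "a"),
    "c": ("Villain", "a"),
    "d": ("Props", None),
}


def build_hierarchy(assets_by_id):
    """Group assets by parent and walk them breadth-first from the root,
    mirroring the deque-based loop in AssetsModel._on_refresh_finish."""
    children = collections.defaultdict(list)
    for asset_id, (_name, parent_id) in assets_by_id.items():
        children[parent_id].append(asset_id)

    order = []
    queue = collections.deque([None])  # None plays the invisible root item
    while queue:
        parent_id = queue.popleft()
        for child_id in sorted(children[parent_id]):
            order.append(child_id)
            queue.append(child_id)
    return children, order
```

Breadth-first order means every parent item already exists when its children are appended, which is exactly what `parent_item.appendRows(new_items)` relies on.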
class TasksModel(QtGui.QStandardItemModel):
    items_changed = QtCore.Signal()
    empty_text = "< Empty >"

    def __init__(self, controller):
        super(TasksModel, self).__init__()
        self._controller = controller

        placeholder_item = QtGui.QStandardItem(self.empty_text)
        placeholder_item.setFlags(QtCore.Qt.ItemIsEnabled)

        root_item = self.invisibleRootItem()
        root_item.appendRows([placeholder_item])

        self.event_system.add_callback(
            "project.changed", self._on_project_change
        )
        self.event_system.add_callback(
            "assets.refresh.finished", self._on_asset_refresh_finish
        )
        self.event_system.add_callback(
            "asset.changed", self._on_asset_change
        )

        self._items = {None: placeholder_item}

        self._placeholder_item = placeholder_item
        self._last_project = None

    @property
    def event_system(self):
        return self._controller.event_system

    def _clear(self):
        placeholder_in = False
        root_item = self.invisibleRootItem()
        for row in reversed(range(root_item.rowCount())):
            item = root_item.child(row)
            task_name = item.data(TASK_NAME_ROLE)
            if task_name is None:
                placeholder_in = True
                continue
            root_item.removeRow(item.row())

        for key in tuple(self._items.keys()):
            if key is not None:
                self._items.pop(key)

        if not placeholder_in:
            root_item.appendRows([self._placeholder_item])
        self._items[None] = self._placeholder_item

    def _on_project_change(self, event):
        project_name = event["project_name"]
        if project_name == self._last_project:
            return

        self._last_project = project_name
        self._clear()
        self.items_changed.emit()

    def _on_asset_refresh_finish(self, event):
        self._refresh(event["project_name"])

    def _on_asset_change(self, event):
        self._refresh(event["project_name"])

    def _refresh(self, new_project_name):
        project_name = self._controller.selection_model.project_name
        if new_project_name != project_name:
            return

        self._last_project = project_name
        if project_name is None:
            if None not in self._items:
                self._clear()
                self.items_changed.emit()
            return

        asset_id = self._controller.selection_model.asset_id
        task_items = self._controller.model.get_tasks(
            project_name, asset_id
        )
        if not task_items:
            self._clear()
            self.items_changed.emit()
            return

        root_item = self.invisibleRootItem()
        if None in self._items:
            self._items.pop(None)
            root_item.takeRow(self._placeholder_item.row())

        new_items = []
        task_names = set()
        for task_item in task_items:
            task_name = task_item.name
            # Collect current task names so stale items can be removed below
            task_names.add(task_name)
            item = self._items.get(task_name)
            if item is None:
                item = QtGui.QStandardItem()
                item.setFlags(
                    QtCore.Qt.ItemIsSelectable
                    | QtCore.Qt.ItemIsEnabled
                )
                new_items.append(item)
                self._items[task_name] = item

            item.setData(task_name, QtCore.Qt.DisplayRole)
            item.setData(task_name, TASK_NAME_ROLE)
            item.setData(task_item.task_type, TASK_TYPE_ROLE)

        if new_items:
            root_item.appendRows(new_items)

        items_to_remove = set(self._items) - task_names
        for item_id in items_to_remove:
            item = self._items.pop(item_id, None)
            if item is None:
                continue
            parent = item.parent()
            if parent is not None:
                parent.removeRow(item.row())

        self.items_changed.emit()


||||
class PushToContextSelectWindow(QtWidgets.QWidget):
|
||||
def __init__(self, controller=None):
|
||||
super(PushToContextSelectWindow, self).__init__()
|
||||
if controller is None:
|
||||
controller = PushToContextController()
|
||||
self._controller = controller
|
||||
|
||||
self.setWindowTitle("Push to project (select context)")
|
||||
self.setWindowIcon(QtGui.QIcon(get_app_icon_path()))
|
||||
|
||||
main_context_widget = QtWidgets.QWidget(self)
|
||||
|
||||
header_widget = QtWidgets.QWidget(main_context_widget)
|
||||
|
||||
header_label = QtWidgets.QLabel(controller.src_label, header_widget)
|
||||
|
||||
header_layout = QtWidgets.QHBoxLayout(header_widget)
|
||||
header_layout.setContentsMargins(0, 0, 0, 0)
|
||||
header_layout.addWidget(header_label)
|
||||
|
||||
main_splitter = QtWidgets.QSplitter(
|
||||
QtCore.Qt.Horizontal, main_context_widget
|
||||
)
|
||||
|
||||
context_widget = QtWidgets.QWidget(main_splitter)
|
||||
|
||||
project_combobox = QtWidgets.QComboBox(context_widget)
|
||||
project_model = ProjectsModel(controller)
|
||||
project_proxy = ProjectProxyModel()
|
||||
project_proxy.setSourceModel(project_model)
|
||||
project_proxy.setDynamicSortFilter(True)
|
||||
project_delegate = QtWidgets.QStyledItemDelegate()
|
||||
project_combobox.setItemDelegate(project_delegate)
|
||||
project_combobox.setModel(project_proxy)
|
||||
|
||||
asset_task_splitter = QtWidgets.QSplitter(
|
||||
QtCore.Qt.Vertical, context_widget
|
||||
)
|
||||
|
||||
asset_view = DeselectableTreeView(asset_task_splitter)
|
||||
asset_view.setHeaderHidden(True)
|
||||
asset_model = AssetsModel(controller)
|
||||
asset_proxy = QtCore.QSortFilterProxyModel()
|
||||
asset_proxy.setSourceModel(asset_model)
|
||||
asset_proxy.setDynamicSortFilter(True)
|
||||
asset_view.setModel(asset_proxy)
|
||||
|
||||
task_view = QtWidgets.QListView(asset_task_splitter)
|
||||
task_proxy = QtCore.QSortFilterProxyModel()
|
||||
task_model = TasksModel(controller)
|
||||
task_proxy.setSourceModel(task_model)
|
||||
task_proxy.setDynamicSortFilter(True)
|
||||
task_view.setModel(task_proxy)
|
||||
|
||||
asset_task_splitter.addWidget(asset_view)
|
||||
asset_task_splitter.addWidget(task_view)
|
||||
|
||||
context_layout = QtWidgets.QVBoxLayout(context_widget)
|
||||
context_layout.setContentsMargins(0, 0, 0, 0)
|
||||
context_layout.addWidget(project_combobox, 0)
|
||||
context_layout.addWidget(asset_task_splitter, 1)
|
||||
|
||||
# --- Inputs widget ---
|
||||
inputs_widget = QtWidgets.QWidget(main_splitter)
|
||||
|
||||
asset_name_input = PlaceholderLineEdit(inputs_widget)
|
||||
asset_name_input.setPlaceholderText("< Name of new asset >")
|
||||
asset_name_input.setObjectName("ValidatedLineEdit")
|
||||
|
||||
variant_input = PlaceholderLineEdit(inputs_widget)
|
||||
variant_input.setPlaceholderText("< Variant >")
|
||||
variant_input.setObjectName("ValidatedLineEdit")
|
||||
|
||||
comment_input = PlaceholderLineEdit(inputs_widget)
|
||||
comment_input.setPlaceholderText("< Publish comment >")
|
||||
|
||||
inputs_layout = QtWidgets.QFormLayout(inputs_widget)
|
||||
inputs_layout.setContentsMargins(0, 0, 0, 0)
|
||||
inputs_layout.addRow("New asset name", asset_name_input)
|
||||
inputs_layout.addRow("Variant", variant_input)
|
||||
inputs_layout.addRow("Comment", comment_input)
|
||||
|
||||
main_splitter.addWidget(context_widget)
|
||||
main_splitter.addWidget(inputs_widget)
|
||||
|
||||
# --- Buttons widget ---
|
||||
btns_widget = QtWidgets.QWidget(self)
|
||||
cancel_btn = QtWidgets.QPushButton("Cancel", btns_widget)
|
||||
publish_btn = QtWidgets.QPushButton("Publish", btns_widget)
|
||||
|
||||
btns_layout = QtWidgets.QHBoxLayout(btns_widget)
|
||||
btns_layout.setContentsMargins(0, 0, 0, 0)
|
||||
btns_layout.addStretch(1)
|
||||
btns_layout.addWidget(cancel_btn, 0)
|
||||
btns_layout.addWidget(publish_btn, 0)
|
||||
|
||||
sep_1 = SeparatorWidget(parent=main_context_widget)
|
||||
sep_2 = SeparatorWidget(parent=main_context_widget)
|
||||
main_context_layout = QtWidgets.QVBoxLayout(main_context_widget)
|
||||
main_context_layout.addWidget(header_widget, 0)
|
||||
main_context_layout.addWidget(sep_1, 0)
|
||||
main_context_layout.addWidget(main_splitter, 1)
|
||||
main_context_layout.addWidget(sep_2, 0)
|
||||
main_context_layout.addWidget(btns_widget, 0)
|
||||
|
||||
# NOTE This was added in hurry
|
||||
# - should be reorganized and changed styles
|
||||
overlay_widget = QtWidgets.QFrame(self)
|
||||
overlay_widget.setObjectName("OverlayFrame")
|
||||
|
||||
overlay_label = QtWidgets.QLabel(overlay_widget)
|
||||
overlay_label.setAlignment(QtCore.Qt.AlignCenter)
|
||||
|
||||
overlay_btns_widget = QtWidgets.QWidget(overlay_widget)
|
||||
overlay_btns_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground)
|
||||
|
||||
# Add try again button (requires changes in controller)
|
||||
overlay_try_btn = QtWidgets.QPushButton(
|
||||
"Try again", overlay_btns_widget
|
||||
)
|
||||
overlay_close_btn = QtWidgets.QPushButton(
|
||||
"Close", overlay_btns_widget
|
||||
)
|
||||
|
||||
overlay_btns_layout = QtWidgets.QHBoxLayout(overlay_btns_widget)
|
||||
overlay_btns_layout.addStretch(1)
|
||||
overlay_btns_layout.addWidget(overlay_try_btn, 0)
|
||||
overlay_btns_layout.addWidget(overlay_close_btn, 0)
|
||||
overlay_btns_layout.addStretch(1)
|
||||
|
||||
overlay_layout = QtWidgets.QVBoxLayout(overlay_widget)
|
||||
overlay_layout.addWidget(overlay_label, 0)
|
||||
overlay_layout.addWidget(overlay_btns_widget, 0)
|
||||
overlay_layout.setAlignment(QtCore.Qt.AlignCenter)
|
||||
|
||||
main_layout = QtWidgets.QStackedLayout(self)
|
||||
main_layout.setContentsMargins(0, 0, 0, 0)
|
||||
main_layout.addWidget(main_context_widget)
|
||||
main_layout.addWidget(overlay_widget)
|
||||
main_layout.setStackingMode(QtWidgets.QStackedLayout.StackAll)
|
||||
main_layout.setCurrentWidget(main_context_widget)
|
||||
|
||||
show_timer = QtCore.QTimer()
|
||||
show_timer.setInterval(1)
|
||||
|
||||
main_thread_timer = QtCore.QTimer()
|
||||
main_thread_timer.setInterval(10)
|
||||
|
||||
user_input_changed_timer = QtCore.QTimer()
|
||||
user_input_changed_timer.setInterval(200)
|
||||
user_input_changed_timer.setSingleShot(True)
|
||||
|
||||
main_thread_timer.timeout.connect(self._on_main_thread_timer)
|
||||
show_timer.timeout.connect(self._on_show_timer)
|
||||
user_input_changed_timer.timeout.connect(self._on_user_input_timer)
|
||||
asset_name_input.textChanged.connect(self._on_new_asset_change)
|
||||
variant_input.textChanged.connect(self._on_variant_change)
|
||||
comment_input.textChanged.connect(self._on_comment_change)
|
||||
project_model.refreshed.connect(self._on_projects_refresh)
|
||||
project_combobox.currentIndexChanged.connect(self._on_project_change)
|
||||
asset_view.selectionModel().selectionChanged.connect(
|
||||
self._on_asset_change
|
||||
)
|
||||
asset_model.items_changed.connect(self._on_asset_model_change)
|
||||
task_view.selectionModel().selectionChanged.connect(
|
||||
self._on_task_change
|
||||
)
|
||||
task_model.items_changed.connect(self._on_task_model_change)
|
||||
publish_btn.clicked.connect(self._on_select_click)
|
||||
cancel_btn.clicked.connect(self._on_close_click)
|
||||
overlay_close_btn.clicked.connect(self._on_close_click)
|
||||
overlay_try_btn.clicked.connect(self._on_try_again_click)
|
||||
|
||||
controller.event_system.add_callback(
|
||||
"new_asset_name.changed", self._on_controller_new_asset_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"variant.changed", self._on_controller_variant_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"comment.changed", self._on_controller_comment_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"submission.enabled.changed", self._on_submission_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"source.changed", self._on_controller_source_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"submit.started", self._on_controller_submit_start
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"submit.finished", self._on_controller_submit_end
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"push.message.added", self._on_push_message
|
||||
)
|
||||
|
||||
self._main_layout = main_layout
|
||||
|
||||
self._main_context_widget = main_context_widget
|
||||
|
||||
self._header_label = header_label
|
||||
self._main_splitter = main_splitter
|
||||
|
||||
self._project_combobox = project_combobox
|
||||
self._project_model = project_model
|
||||
self._project_proxy = project_proxy
|
||||
self._project_delegate = project_delegate
|
||||
|
||||
self._asset_view = asset_view
|
||||
self._asset_model = asset_model
|
||||
self._asset_proxy_model = asset_proxy
|
||||
|
||||
self._task_view = task_view
|
||||
self._task_proxy_model = task_proxy
|
||||
|
||||
self._variant_input = variant_input
|
||||
self._asset_name_input = asset_name_input
|
||||
self._comment_input = comment_input
|
||||
|
||||
self._publish_btn = publish_btn
|
||||
|
||||
self._overlay_widget = overlay_widget
|
||||
self._overlay_close_btn = overlay_close_btn
|
||||
self._overlay_try_btn = overlay_try_btn
|
||||
self._overlay_label = overlay_label
|
||||
|
||||
self._user_input_changed_timer = user_input_changed_timer
|
||||
# Store current value on input text change
|
||||
# The value is unset when is passed to controller
|
||||
# The goal is to have controll over changes happened during user change
|
||||
# in UI and controller auto-changes
|
||||
self._variant_input_text = None
|
||||
self._new_asset_name_input_text = None
|
||||
self._comment_input_text = None
|
||||
self._show_timer = show_timer
|
||||
self._show_counter = 2
|
||||
self._first_show = True
|
||||
|
||||
self._main_thread_timer = main_thread_timer
|
||||
self._main_thread_timer_can_stop = True
|
||||
self._last_submit_message = None
|
||||
self._process_item = None
|
||||
|
||||
publish_btn.setEnabled(False)
|
||||
overlay_close_btn.setVisible(False)
|
||||
overlay_try_btn.setVisible(False)
|
||||
|
||||
if controller.user_values.new_asset_name:
|
||||
asset_name_input.setText(controller.user_values.new_asset_name)
|
||||
if controller.user_values.variant:
|
||||
variant_input.setText(controller.user_values.variant)
|
||||
self._invalidate_variant()
|
||||
self._invalidate_new_asset_name()
|
||||
|
||||
    @property
    def controller(self):
        return self._controller

    def showEvent(self, event):
        super(PushToContextSelectWindow, self).showEvent(event)
        if self._first_show:
            self._first_show = False
            self.setStyleSheet(load_stylesheet())
            self._invalidate_variant()
            self._show_timer.start()

    def _on_show_timer(self):
        if self._show_counter == 0:
            self._show_timer.stop()
            return

        self._show_counter -= 1
        if self._show_counter == 1:
            width = 740
            height = 640
            inputs_width = 360
            self.resize(width, height)
            self._main_splitter.setSizes([width - inputs_width, inputs_width])

        if self._show_counter > 0:
            return

        self._controller.model.refresh_projects()

    def _on_new_asset_change(self, text):
        self._new_asset_name_input_text = text
        self._user_input_changed_timer.start()

    def _on_variant_change(self, text):
        self._variant_input_text = text
        self._user_input_changed_timer.start()

    def _on_comment_change(self, text):
        self._comment_input_text = text
        self._user_input_changed_timer.start()

    def _on_user_input_timer(self):
        asset_name = self._new_asset_name_input_text
        if asset_name is not None:
            self._new_asset_name_input_text = None
            self._controller.user_values.set_new_asset(asset_name)

        variant = self._variant_input_text
        if variant is not None:
            self._variant_input_text = None
            self._controller.user_values.set_variant(variant)

        comment = self._comment_input_text
        if comment is not None:
            self._comment_input_text = None
            self._controller.user_values.set_comment(comment)

    def _on_controller_new_asset_change(self, event):
        asset_name = event["changes"]["new_asset_name"]["new"]
        if (
            self._new_asset_name_input_text is None
            and asset_name != self._asset_name_input.text()
        ):
            self._asset_name_input.setText(asset_name)

        self._invalidate_new_asset_name()

    def _on_controller_variant_change(self, event):
        is_valid_changes = event["changes"]["is_valid"]
        variant = event["changes"]["variant"]["new"]
        if (
            self._variant_input_text is None
            and variant != self._variant_input.text()
        ):
            self._variant_input.setText(variant)

        if is_valid_changes["old"] != is_valid_changes["new"]:
            self._invalidate_variant()

    def _on_controller_comment_change(self, event):
        comment = event["comment"]
        if (
            self._comment_input_text is None
            and comment != self._comment_input.text()
        ):
            self._comment_input.setText(comment)

    def _on_controller_source_change(self):
        self._header_label.setText(self._controller.src_label)

    def _invalidate_new_asset_name(self):
        asset_name = self._controller.user_values.new_asset_name
        self._task_view.setVisible(not asset_name)

        valid = None
        if asset_name:
            valid = self._controller.user_values.is_new_asset_name_valid

        state = ""
        if valid is True:
            state = "valid"
        elif valid is False:
            state = "invalid"
        set_style_property(self._asset_name_input, "state", state)

    def _invalidate_variant(self):
        valid = self._controller.user_values.is_variant_valid
        state = "invalid"
        if valid is True:
            state = "valid"
        set_style_property(self._variant_input, "state", state)

    def _on_projects_refresh(self):
        self._project_proxy.sort(0, QtCore.Qt.AscendingOrder)

    def _on_project_change(self):
        idx = self._project_combobox.currentIndex()
        if idx < 0:
            self._project_proxy.set_filter_empty_project(False)
            return

        project_name = self._project_combobox.itemData(idx, PROJECT_NAME_ROLE)
        self._project_proxy.set_filter_empty_project(project_name is not None)
        self._controller.selection_model.select_project(project_name)

    def _on_asset_change(self):
        indexes = self._asset_view.selectedIndexes()
        index = next(iter(indexes), None)
        asset_id = None
        if index is not None:
            model = self._asset_view.model()
            asset_id = model.data(index, ASSET_ID_ROLE)
        self._controller.selection_model.select_asset(asset_id)

    def _on_asset_model_change(self):
        self._asset_proxy_model.sort(0, QtCore.Qt.AscendingOrder)

    def _on_task_model_change(self):
        self._task_proxy_model.sort(0, QtCore.Qt.AscendingOrder)

    def _on_task_change(self):
        indexes = self._task_view.selectedIndexes()
        index = next(iter(indexes), None)
        task_name = None
        if index is not None:
            model = self._task_view.model()
            task_name = model.data(index, TASK_NAME_ROLE)
        self._controller.selection_model.select_task(task_name)

    def _on_submission_change(self, event):
        self._publish_btn.setEnabled(event["enabled"])

    def _on_close_click(self):
        self.close()

    def _on_select_click(self):
        self._process_item = self._controller.submit(wait=False)

    def _on_try_again_click(self):
        self._process_item = None
        self._last_submit_message = None

        self._overlay_close_btn.setVisible(False)
        self._overlay_try_btn.setVisible(False)
        self._main_layout.setCurrentWidget(self._main_context_widget)

    def _on_main_thread_timer(self):
        if self._last_submit_message:
            self._overlay_label.setText(self._last_submit_message)
            self._last_submit_message = None

        process_status = self._process_item.status
        push_failed = process_status.failed
        fail_traceback = process_status.traceback
        if self._main_thread_timer_can_stop:
            self._main_thread_timer.stop()
            self._overlay_close_btn.setVisible(True)
            if push_failed and not fail_traceback:
                self._overlay_try_btn.setVisible(True)

        if push_failed:
            message = "Push Failed:\n{}".format(process_status.fail_reason)
            if fail_traceback:
                message += "\n{}".format(fail_traceback)
            self._overlay_label.setText(message)
            set_style_property(self._overlay_close_btn, "state", "error")

        if self._main_thread_timer_can_stop:
            # Join thread in controller
            self._controller.wait_for_process_thread()
            # Reset process item to None
            self._process_item = None

    def _on_controller_submit_start(self):
        self._main_thread_timer_can_stop = False
        self._main_thread_timer.start()
        self._main_layout.setCurrentWidget(self._overlay_widget)
        self._overlay_label.setText("Submission started")

    def _on_controller_submit_end(self):
        self._main_thread_timer_can_stop = True

    def _on_push_message(self, event):
        self._last_submit_message = event["message"]

@@ -20,6 +20,9 @@ from .lib import (
     DynamicQThread,
     qt_app_context,
     get_asset_icon,
+    get_asset_icon_by_name,
+    get_asset_icon_name_from_doc,
+    get_asset_icon_color_from_doc,
 )
 
 from .models import (
@@ -53,6 +56,9 @@ __all__ = (
     "DynamicQThread",
     "qt_app_context",
     "get_asset_icon",
+    "get_asset_icon_by_name",
+    "get_asset_icon_name_from_doc",
+    "get_asset_icon_color_from_doc",
 
     "RecursiveSortFilterProxyModel",
@@ -202,20 +202,52 @@ def get_project_icon(project_doc):
 
 
 def get_asset_icon_name(asset_doc, has_children=True):
-    icon_name = asset_doc["data"].get("icon")
+    icon_name = get_asset_icon_name_from_doc(asset_doc)
     if icon_name:
         return icon_name
     return get_default_asset_icon_name(has_children)
 
 
+def get_asset_icon_color(asset_doc):
+    icon_color = get_asset_icon_color_from_doc(asset_doc)
+    if icon_color:
+        return icon_color
+    return get_default_entity_icon_color()
+
+
 def get_default_asset_icon_name(has_children):
     if has_children:
         return "fa.folder"
     return "fa.folder-o"
 
 
-def get_asset_icon_color(asset_doc):
-    icon_color = asset_doc["data"].get("color")
+def get_asset_icon_name_from_doc(asset_doc):
+    if asset_doc:
+        return asset_doc["data"].get("icon")
+    return None
+
+
+def get_asset_icon_color_from_doc(asset_doc):
+    if asset_doc:
+        return asset_doc["data"].get("color")
+    return None
+
+
+def get_asset_icon_by_name(icon_name, icon_color, has_children=False):
+    if not icon_name:
+        icon_name = get_default_asset_icon_name(has_children)
+
     if icon_color:
-        return icon_color
-    return get_default_entity_icon_color()
+        icon_color = QtGui.QColor(icon_color)
+    else:
+        icon_color = get_default_entity_icon_color()
+    icon = get_qta_icon_by_name_and_color(icon_name, icon_color)
+    if icon is not None:
+        return icon
+    return get_qta_icon_by_name_and_color(
+        get_default_asset_icon_name(has_children),
+        icon_color
+    )
 
 
 def get_asset_icon(asset_doc, has_children=False):
@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.14.10-nightly.7"
+__version__ = "3.14.11-nightly.2"
@@ -43,4 +43,5 @@ $openpype_root = (Get-Item $script_dir).parent.FullName
 
 Set-Location $openpype_root/website
 
-& yarn run start
+& yarn install
+& yarn start