Merge branch 'release/3.15.x' into bugfix/OP-3845_nuke-input-process-node-sourcing

This commit is contained in:
Jakub Jezek 2023-01-18 13:01:58 +01:00
commit 8fb7494d20
No known key found for this signature in database
GPG key ID: 730D7C02726179A7
284 changed files with 10484 additions and 2861 deletions

View file

@@ -1,5 +1,58 @@
# Changelog
## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...HEAD)
**🆕 New features**
- Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
- Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
- Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
- Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
- Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)
**🚀 Enhancements**
- General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
- Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
- General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
- Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
- Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)
**🐛 Bug fixes**
- Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
- Ftrack: Fix retrieval of host IP on macOS [\#4288](https://github.com/ynput/OpenPype/pull/4288)
- SiteSync: SFTP connection failing when it shouldn't be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
- Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
- Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
- Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
- Hiero: creator GUI with min/max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
- NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
- Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
- Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
- Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
- bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
- Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)
**🔀 Refactored code**
- Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
- Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
- Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
- Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
- Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
- Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
- Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
- General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)
**Merged pull requests:**
- Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
- Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)
## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9)


View file

@@ -76,6 +76,18 @@ class HostBase(object):
pass
def install(self):
"""Install host specific functionality.
This is where should be added menu with tools, registered callbacks
and other host integration initialization.
It is called automatically when 'openpype.pipeline.install_host' is
triggered.
"""
pass
@property
def log(self):
if self._log is None:
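A minimal sketch of how a host integration might override this hook (the class name and log message are illustrative, not from this commit; it assumes HostBase is importable from 'openpype.host'); 'openpype.pipeline.install_host' invokes 'install' automatically:

from openpype.host import HostBase

class ExampleHost(HostBase):
    name = "examplehost"

    def install(self):
        # Build the tools menu and register callbacks for the DCC here.
        self.log.info("Installing ExampleHost integration")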

View file

@@ -40,6 +40,7 @@ class Server(threading.Thread):
# Create a TCP/IP socket
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Bind the socket to the port
server_address = ("127.0.0.1", port)
@@ -91,7 +92,13 @@ class Server(threading.Thread):
self.log.info("wait ttt")
# Receive the data in small chunks and retransmit it
request = None
header = self.connection.recv(10)
try:
header = self.connection.recv(10)
except OSError:
# could happen on MacOS
self.log.info("")
break
if len(header) == 0:
# null data received, socket is closing.
self.log.info(f"[{self.timestamp()}] Connection closing.")
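The fix above guards the blocking 'recv' call; a self-contained sketch of the same pattern (helper name is hypothetical):

def read_header(connection):
    try:
        header = connection.recv(10)
    except OSError:
        # The peer may drop the socket abruptly (observed on macOS);
        # treat it like a clean shutdown instead of crashing the thread.
        return None
    if len(header) == 0:
        # Null data received, the socket is closing.
        return None
    return header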

View file

@@ -44,3 +44,6 @@ class CreateAnimation(plugin.Creator):
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
# Default to write normals.
self.data["writeNormals"] = True

View file

@@ -6,7 +6,7 @@ class CreateMultiverseUsd(plugin.Creator):
name = "mvUsdMain"
label = "Multiverse USD Asset"
family = "mvUsd"
family = "usd"
icon = "cubes"
def __init__(self, *args, **kwargs):

View file

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os
from openpype.pipeline import (
load,
@@ -11,12 +13,13 @@ from openpype.hosts.maya.api.lib import (
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id
class MultiverseUsdLoader(load.LoaderPlugin):
"""Read USD data in a Multiverse Compound"""
families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
families = ["model", "usd", "mvUsdComposition", "mvUsdOverride",
"pointcache", "animation"]
representations = ["usd", "usda", "usdc", "usdz", "abc"]
@@ -26,7 +29,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
@@ -34,22 +36,20 @@ class MultiverseUsdLoader(load.LoaderPlugin):
suffix="_",
)
# Create the shape
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
# Create the shape
shape = None
transform = None
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
import multiverse
shape = multiverse.CreateUsdCompound(self.fname)
transform = cmds.listRelatives(
shape, parent=True, fullPath=True)[0]
# Lock the shape node so the user cannot delete it.
cmds.lockNode(shape, lock=True)
nodes = [transform, shape]
self[:] = nodes
@@ -70,15 +70,34 @@ class MultiverseUsdLoader(load.LoaderPlugin):
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
path = get_representation_path(representation)
project_name = representation["context"]["project"]["name"]
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["data"]["path"])
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
for shape in shapes:
multiverse.SetUsdCompoundAssetPaths(shape, [path])
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
path = get_representation_path(representation)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
str(representation["_id"]),
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, representation):
self.update(container, representation)
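The update logic above swaps exactly one normalized path in the compound's path list; a standalone sketch of that step (the helper name is hypothetical):

import os

def swap_compound_path(asset_paths, prev_path, new_path):
    normalized = [os.path.normpath(p) for p in asset_paths]
    prev_path = os.path.normpath(prev_path)
    assert normalized.count(prev_path) == 1, \
        "Couldn't find matching path (or too many)"
    asset_paths[normalized.index(prev_path)] = new_path
    return asset_paths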

View file

@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os
import qargparse
from openpype.pipeline import (
load,
get_representation_path
)
from openpype.hosts.maya.api.lib import (
maintained_selection
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id
class MultiverseUsdOverLoader(load.LoaderPlugin):
"""Reference file"""
families = ["mvUsdOverride"]
representations = ["usda", "usd", "udsz"]
label = "Load Usd Override into Compound"
order = -10
icon = "code-fork"
color = "orange"
options = [
qargparse.String(
"Which Compound",
label="Compound",
help="Select which compound to add this as a layer to."
)
]
def load(self, context, name=None, namespace=None, options=None):
current_usd = cmds.ls(selection=True,
type="mvUsdCompoundShape",
dag=True,
long=True)
if len(current_usd) != 1:
self.log.error("Current selection invalid: '{}', "
"must contain exactly 1 mvUsdCompoundShape."
"".format(current_usd))
return
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
nodes = current_usd
with maintained_selection():
multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname)
namespace = current_usd[0].split("|")[1].split(":")[0]
container = containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
cmds.addAttr(container, longName="mvUsdCompoundShape",
niceName="mvUsdCompoundShape", dataType="string")
cmds.setAttr(container + ".mvUsdCompoundShape",
current_usd[0], type="string")
return container
def update(self, container, representation):
# type: (dict, dict) -> None
"""Update container with specified representation."""
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
mvShape = container['mvUsdCompoundShape']
assert mvShape, "Missing mv source"
project_name = representation["context"]["project"]["name"]
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["data"]["path"])
path = get_representation_path(representation)
for shape in shapes:
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
str(representation["_id"]),
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)

View file

@@ -26,7 +26,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"rig",
"camerarig",
"xgen",
"staticMesh"]
"staticMesh",
"mvLook"]
representations = ["ma", "abc", "fbx", "mb"]
label = "Reference"

View file

@@ -74,13 +74,6 @@ class CollectInstances(pyblish.api.ContextPlugin):
objectset = cmds.ls("*.id", long=True, type="objectSet",
recursive=True, objectsOnly=True)
ctx_frame_start = context.data['frameStart']
ctx_frame_end = context.data['frameEnd']
ctx_handle_start = context.data['handleStart']
ctx_handle_end = context.data['handleEnd']
ctx_frame_start_handle = context.data['frameStartHandle']
ctx_frame_end_handle = context.data['frameEndHandle']
context.data['objectsets'] = objectset
for objset in objectset:
@@ -156,31 +149,20 @@ class CollectInstances(pyblish.api.ContextPlugin):
# Append start frame and end frame to label if present
if "frameStart" and "frameEnd" in data:
# if frame range on maya set is the same as full shot range
# adjust the values to match the asset data
if (ctx_frame_start_handle == data["frameStart"]
and ctx_frame_end_handle == data["frameEnd"]): # noqa: W503, E501
data["frameStartHandle"] = ctx_frame_start_handle
data["frameEndHandle"] = ctx_frame_end_handle
data["frameStart"] = ctx_frame_start
data["frameEnd"] = ctx_frame_end
data["handleStart"] = ctx_handle_start
data["handleEnd"] = ctx_handle_end
# if there are user values on start and end frame not matching
# the asset, use them
else:
if "handles" in data:
data["handleStart"] = data["handles"]
data["handleEnd"] = data["handles"]
data["frameStartHandle"] = data["frameStart"] - data["handleStart"] # noqa: E501
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"] # noqa: E501
# Backwards compatibility for 'handles' data
if "handles" in data:
data["handleStart"] = data["handles"]
data["handleEnd"] = data["handles"]
data.pop('handles')
# Take handles from context if not set locally on the instance
for key in ["handleStart", "handleEnd"]:
if key not in data:
data[key] = context.data[key]
data["frameStartHandle"] = data["frameStart"] - data["handleStart"] # noqa: E501
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"] # noqa: E501
label += " [{0}-{1}]".format(int(data["frameStartHandle"]),
int(data["frameEndHandle"]))
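A worked example of the new handle fallback (values are hypothetical):

data = {"frameStart": 1001, "frameEnd": 1100, "handles": 5}
context_data = {"handleStart": 10, "handleEnd": 10}

# Backwards compatibility for 'handles' data
if "handles" in data:
    data["handleStart"] = data["handles"]
    data["handleEnd"] = data["handles"]
    data.pop("handles")

# Take handles from context if not set locally on the instance
for key in ["handleStart", "handleEnd"]:
    if key not in data:
        data[key] = context_data[key]

data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # 996
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]  # 1105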

View file

@@ -440,7 +440,8 @@ class CollectLook(pyblish.api.InstancePlugin):
for res in self.collect_resources(n):
instance.data["resources"].append(res)
self.log.info("Collected resources: {}".format(instance.data["resources"]))
self.log.info("Collected resources: {}".format(
instance.data["resources"]))
# Log warning when no relevant sets were retrieved for the look.
if (
@@ -548,6 +549,11 @@ class CollectLook(pyblish.api.InstancePlugin):
if not cmds.attributeQuery(attr, node=node, exists=True):
continue
attribute = "{}.{}".format(node, attr)
# We don't support mixed-type attributes yet.
if cmds.attributeQuery(attr, node=node, multi=True):
self.log.warning("Attribute '{}' is mixed-type and is "
"not supported yet.".format(attribute))
continue
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)

View file

@@ -21,37 +21,68 @@ COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']
def get_look_attrs(node):
    """Returns attributes of a node that are important for the look.

    These are the "changed" attributes (those that have edits applied
    in the current scene).

    Returns:
        list: Attribute names to extract

class _NodeTypeAttrib(object):
    """docstring for _NodeType"""

    def __init__(self, name, fname, computed_fname=None, colour_space=None):
        self.name = name
        self.fname = fname
        self.computed_fname = computed_fname or fname
        self.colour_space = colour_space or "colorSpace"

    def get_fname(self, node):
        return "{}.{}".format(node, self.fname)

    def get_computed_fname(self, node):
        return "{}.{}".format(node, self.computed_fname)

    def get_colour_space(self, node):
        return "{}.{}".format(node, self.colour_space)

    def __str__(self):
        return "_NodeTypeAttrib(name={}, fname={}, " \
            "computed_fname={}, colour_space={})".format(
                self.name, self.fname, self.computed_fname,
                self.colour_space)
NODETYPES = {
"file": [_NodeTypeAttrib("file", "fileTextureName",
"computedFileTextureNamePattern")],
"aiImage": [_NodeTypeAttrib("aiImage", "filename")],
"RedshiftNormalMap": [_NodeTypeAttrib("RedshiftNormalMap", "tex0")],
"dlTexture": [_NodeTypeAttrib("dlTexture", "textureFile",
None, "textureFile_meta_colorspace")],
"dlTriplanar": [_NodeTypeAttrib("dlTriplanar", "colorTexture",
None, "colorTexture_meta_colorspace"),
_NodeTypeAttrib("dlTriplanar", "floatTexture",
None, "floatTexture_meta_colorspace"),
_NodeTypeAttrib("dlTriplanar", "heightTexture",
None, "heightTexture_meta_colorspace")]
}
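A short usage sketch for the mapping above (the node name is hypothetical):

file_attr = NODETYPES["file"][0]
file_attr.get_fname("myFile1")           # "myFile1.fileTextureName"
file_attr.get_computed_fname("myFile1")  # "myFile1.computedFileTextureNamePattern"
file_attr.get_colour_space("myFile1")    # "myFile1.colorSpace"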
def get_file_paths_for_node(node):
    """Gets all the file paths in this node.

    Returns all filepaths that this node references. Some node types only
    reference one, but others, like dlTriplanar, can reference 3.

    Args:
        node (str): Name of the Maya node

    Returns:
        list(str): A list with all evaluated maya attributes for filepaths.
    """
    node_type = cmds.nodeType(node)
    if node_type not in NODETYPES:
        return []

    paths = []
    for node_type_attr in NODETYPES[node_type]:
        fname = cmds.getAttr("{}.{}".format(node, node_type_attr.fname))
        paths.append(fname)
    return paths

# When referenced get only attributes that are "changed since file open"
# which includes any reference edits, otherwise take *all* user defined
# attributes
is_referenced = cmds.referenceQuery(node, isNodeReferenced=True)
result = cmds.listAttr(node, userDefined=True,
                       changedSinceFileOpen=is_referenced) or []
# `cbId` is added when a scene is saved, ignore by default
if "cbId" in result:
    result.remove("cbId")
# For shapes allow render stat changes
if cmds.objectType(node, isAType="shape"):
    attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
    for attr in attrs:
        if attr in SHAPE_ATTRS:
            result.append(attr)
        elif attr.startswith('ai'):
            result.append(attr)
return result
def node_uses_image_sequence(node):
@@ -69,13 +100,29 @@ def node_uses_image_sequence(node):
"""
# useFrameExtension indicates an explicit image sequence
node_path = get_file_node_path(node).lower()
paths = get_file_node_paths(node)
paths = [path.lower() for path in paths]
# The following tokens imply a sequence
patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
return (cmds.getAttr('%s.useFrameExtension' % node) or
any(pattern in node_path for pattern in patterns))
def pattern_in_paths(patterns, paths):
"""Helper function for checking to see if a pattern is contained
in the list of paths"""
for pattern in patterns:
for path in paths:
if pattern in path:
return True
return False
node_type = cmds.nodeType(node)
if node_type == 'dlTexture':
return (cmds.getAttr('{}.useImageSequence'.format(node)) or
pattern_in_paths(patterns, paths))
elif node_type == "file":
return (cmds.getAttr('{}.useFrameExtension'.format(node)) or
pattern_in_paths(patterns, paths))
return False
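A usage sketch of the helper with a UDIM-style path (the path is hypothetical):

patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
paths = ["/textures/diffuse.<udim>.tif"]
pattern_in_paths(patterns, [p.lower() for p in paths])  # True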
def seq_to_glob(path):
@@ -132,7 +179,7 @@ def seq_to_glob(path):
return path
def get_file_node_path(node):
def get_file_node_paths(node):
"""Get the file path used by a Maya file node.
Args:
@@ -158,15 +205,9 @@ def get_file_node_path(node):
"<uvtile>"]
lower = texture_pattern.lower()
if any(pattern in lower for pattern in patterns):
return texture_pattern
return [texture_pattern]
if cmds.nodeType(node) == 'aiImage':
return cmds.getAttr('{0}.filename'.format(node))
if cmds.nodeType(node) == 'RedshiftNormalMap':
return cmds.getAttr('{}.tex0'.format(node))
# otherwise use fileTextureName
return cmds.getAttr('{0}.fileTextureName'.format(node))
return get_file_paths_for_node(node)
def get_file_node_files(node):
@@ -181,15 +222,15 @@ def get_file_node_files(node):
"""
path = get_file_node_path(node)
path = cmds.workspace(expandName=path)
paths = get_file_node_paths(node)
paths = [cmds.workspace(expandName=path) for path in paths]
if node_uses_image_sequence(node):
glob_pattern = seq_to_glob(path)
return glob.glob(glob_pattern)
elif os.path.exists(path):
return [path]
globs = []
for path in paths:
globs += glob.glob(seq_to_glob(path))
return globs
else:
return []
return list(filter(lambda x: os.path.exists(x), paths))
def get_mipmap(fname):
@@ -211,6 +252,11 @@ def is_mipmap(fname):
class CollectMultiverseLookData(pyblish.api.InstancePlugin):
"""Collect Multiverse Look
Searches through the overrides finding all material overrides. From there
it extracts the shading group and then finds all texture files in the
shading group network. It also checks for mipmap versions of texture files
and adds them to the resources to get published.
"""
order = pyblish.api.CollectorOrder + 0.2
@@ -258,12 +304,20 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
shadingGroup), "members": list()}
# The SG may reference files, add those too!
history = cmds.listHistory(shadingGroup)
files = cmds.ls(history, type="file", long=True)
history = cmds.listHistory(
shadingGroup, allConnections=True)
# We need to iterate over node_types since `cmds.ls` may
# error out if we don't have the appropriate plugin loaded.
files = []
for node_type in NODETYPES.keys():
files += cmds.ls(history,
type=node_type,
long=True)
for f in files:
resources = self.collect_resource(f, publishMipMap)
instance.data["resources"].append(resources)
instance.data["resources"] += resources
elif isinstance(matOver, multiverse.MaterialSourceUsdPath):
# TODO: Handle this later.
@@ -284,69 +338,63 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
dict
"""
self.log.debug("processing: {}".format(node))
if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
self.log.error(
"Unsupported file node: {}".format(cmds.nodeType(node)))
node_type = cmds.nodeType(node)
self.log.debug("processing: {}/{}".format(node, node_type))
if node_type not in NODETYPES:
self.log.error("Unsupported file node: {}".format(node_type))
raise AssertionError("Unsupported file node")
if cmds.nodeType(node) == 'file':
self.log.debug(" - file node")
attribute = "{}.fileTextureName".format(node)
computed_attribute = "{}.computedFileTextureNamePattern".format(
node)
elif cmds.nodeType(node) == 'aiImage':
self.log.debug("aiImage node")
attribute = "{}.filename".format(node)
computed_attribute = attribute
elif cmds.nodeType(node) == 'RedshiftNormalMap':
self.log.debug("RedshiftNormalMap node")
attribute = "{}.tex0".format(node)
computed_attribute = attribute
resources = []
for node_type_attr in NODETYPES[node_type]:
fname_attrib = node_type_attr.get_fname(node)
computed_fname_attrib = node_type_attr.get_computed_fname(node)
colour_space_attrib = node_type_attr.get_colour_space(node)
source = cmds.getAttr(attribute)
self.log.info(" - file source: {}".format(source))
color_space_attr = "{}.colorSpace".format(node)
try:
color_space = cmds.getAttr(color_space_attr)
except ValueError:
# node doesn't have colorspace attribute
source = cmds.getAttr(fname_attrib)
color_space = "Raw"
# Compare with the computed file path, e.g. the one with the <UDIM>
# pattern in it, to generate some logging information about this
# difference
# computed_attribute = "{}.computedFileTextureNamePattern".format(node)
computed_source = cmds.getAttr(computed_attribute)
if source != computed_source:
self.log.debug("Detected computed file pattern difference "
"from original pattern: {0} "
"({1} -> {2})".format(node,
source,
computed_source))
try:
color_space = cmds.getAttr(colour_space_attrib)
except ValueError:
# node doesn't have colorspace attribute, use "Raw" from before
pass
# Compare with the computed file path, e.g. the one with the <UDIM>
# pattern in it, to generate some logging information about this
# difference
# computed_attribute = "{}.computedFileTextureNamePattern".format(node) # noqa
computed_source = cmds.getAttr(computed_fname_attrib)
if source != computed_source:
self.log.debug("Detected computed file pattern difference "
"from original pattern: {0} "
"({1} -> {2})".format(node,
source,
computed_source))
# We replace backslashes with forward slashes because V-Ray
# can't handle the UDIM files with the backslashes in the
# paths as the computed patterns
source = source.replace("\\", "/")
# We replace backslashes with forward slashes because V-Ray
# can't handle the UDIM files with the backslashes in the
# paths as the computed patterns
source = source.replace("\\", "/")
files = get_file_node_files(node)
files = self.handle_files(files, publishMipMap)
if len(files) == 0:
self.log.error("No valid files found from node `%s`" % node)
files = get_file_node_files(node)
files = self.handle_files(files, publishMipMap)
if len(files) == 0:
self.log.error("No valid files found from node `%s`" % node)
self.log.info("collection of resource done:")
self.log.info(" - node: {}".format(node))
self.log.info(" - attribute: {}".format(attribute))
self.log.info(" - source: {}".format(source))
self.log.info(" - file: {}".format(files))
self.log.info(" - color space: {}".format(color_space))
self.log.info("collection of resource done:")
self.log.info(" - node: {}".format(node))
self.log.info(" - attribute: {}".format(fname_attrib))
self.log.info(" - source: {}".format(source))
self.log.info(" - file: {}".format(files))
self.log.info(" - color space: {}".format(color_space))
# Define the resource
return {"node": node,
"attribute": attribute,
"source": source, # required for resources
"files": files,
"color_space": color_space} # required for resources
# Define the resource
resource = {"node": node,
"attribute": fname_attrib,
"source": source, # required for resources
"files": files,
"color_space": color_space} # required for resources
resources.append(resource)
return resources
def handle_files(self, files, publishMipMap):
"""This will go through all the files and make sure that they are

View file

@@ -0,0 +1,152 @@
import os
import sys
from maya import cmds
import pyblish.api
import tempfile
from openpype.lib import run_subprocess
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractImportReference(publish.Extractor):
"""
Extract the scene with imported reference.
The temp scene with imported reference is
published for rendering if this extractor is activated
"""
label = "Extract Import Reference"
order = pyblish.api.ExtractorOrder - 0.48
hosts = ["maya"]
families = ["renderlayer", "workfile"]
optional = True
tmp_format = "_tmp"
@classmethod
def apply_settings(cls, project_setting, system_settings):
cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
def process(self, instance):
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
self.scene_type = ext_mapping[family]
self.log.info(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
# set scene type to ma
self.scene_type = "ma"
_scene_type = ("mayaAscii"
if self.scene_type == "ma"
else "mayaBinary")
dir_path = self.staging_dir(instance)
# named the file with imported reference
if instance.name == "Main":
return
tmp_name = instance.name + self.tmp_format
current_name = cmds.file(query=True, sceneName=True)
ref_scene_name = "{0}.{1}".format(tmp_name, self.scene_type)
reference_path = os.path.join(dir_path, ref_scene_name)
tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name
self.log.info("Performing extraction..")
# This generates script for mayapy to take care of reference
# importing outside current session. It is passing current scene
# name and destination scene name.
script = ("""
# -*- coding: utf-8 -*-
'''Script to import references to given scene.'''
import maya.standalone
maya.standalone.initialize()
# make 'cmds' available to the generated script
from maya import cmds
# scene names filled by caller
current_name = "{current_name}"
ref_scene_name = "{ref_scene_name}"
print(">>> Opening {{}} ...".format(current_name))
cmds.file(current_name, open=True, force=True)
print(">>> Processing references")
all_reference = cmds.file(q=True, reference=True) or []
for ref in all_reference:
if cmds.referenceQuery(ref, il=True):
cmds.file(ref, importReference=True)
nested_ref = cmds.file(q=True, reference=True)
if nested_ref:
for new_ref in nested_ref:
if new_ref not in all_reference:
all_reference.append(new_ref)
print(">>> Finish importing references")
print(">>> Saving scene as {{}}".format(ref_scene_name))
cmds.file(rename=ref_scene_name)
cmds.file(save=True, force=True)
print("*** Done")
""").format(current_name=current_name, ref_scene_name=tmp_path)
mayapy_exe = os.path.join(os.getenv("MAYA_LOCATION"), "bin", "mayapy")
if sys.platform == "win32":
mayapy_exe += ".exe"
mayapy_exe = os.path.normpath(mayapy_exe)
# can't use TemporaryNamedFile as that can't be opened in another
# process until handles are closed by context manager.
with tempfile.TemporaryDirectory() as tmp_dir_name:
tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
self.log.info("Using script file: {}".format(tmp_script_path))
with open(tmp_script_path, "wt") as tmp:
tmp.write(script)
try:
run_subprocess([mayapy_exe, tmp_script_path])
except Exception:
self.log.error("Import reference failed", exc_info=True)
raise
with lib.maintained_selection():
cmds.select(all=True, noExpand=True)
cmds.file(reference_path,
force=True,
typ=_scene_type,
exportSelected=True,
channels=True,
constraints=True,
shader=True,
expressions=True,
constructionHistory=True)
instance.context.data["currentFile"] = tmp_path
if "files" not in instance.data:
instance.data["files"] = []
instance.data["files"].append(ref_scene_name)
if instance.data.get("representations") is None:
instance.data["representations"] = []
ref_representation = {
"name": self.scene_type,
"ext": self.scene_type,
"files": ref_scene_name,
"stagingDir": os.path.dirname(current_name),
"outputName": "imported"
}
self.log.info("%s" % ref_representation)
instance.data["representations"].append(ref_representation)
self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name,
reference_path))

View file

@@ -73,12 +73,12 @@ class ExtractMultiverseLook(publish.Extractor):
"writeAll": False,
"writeTransforms": False,
"writeVisibility": False,
"writeAttributes": False,
"writeAttributes": True,
"writeMaterials": True,
"writeVariants": False,
"writeVariantsDefinition": False,
"writeActiveState": False,
"writeNamespaces": False,
"writeNamespaces": True,
"numTimeSamples": 1,
"timeSamplesSpan": 0.0
}

View file

@@ -2,7 +2,9 @@ import os
import six
from maya import cmds
from maya import mel
import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
@@ -26,7 +28,7 @@ class ExtractMultiverseUsd(publish.Extractor):
label = "Extract Multiverse USD Asset"
hosts = ["maya"]
families = ["mvUsd"]
families = ["usd"]
scene_type = "usd"
file_formats = ["usd", "usda", "usdz"]
@@ -87,7 +89,7 @@ class ExtractMultiverseUsd(publish.Extractor):
return {
"stripNamespaces": False,
"mergeTransformAndShape": False,
"writeAncestors": True,
"writeAncestors": False,
"flattenParentXforms": False,
"writeSparseOverrides": False,
"useMetaPrimPath": False,
@@ -147,7 +149,15 @@ class ExtractMultiverseUsd(publish.Extractor):
return options
def get_default_options(self):
self.log.info("ExtractMultiverseUsd get_default_options")
return self.default_options
def filter_members(self, members):
return members
def process(self, instance):
# Load plugin first
cmds.loadPlugin("MultiverseForMaya", quiet=True)
@@ -161,7 +171,7 @@ class ExtractMultiverseUsd(publish.Extractor):
file_path = file_path.replace('\\', '/')
# Parse export options
options = self.default_options
options = self.get_default_options()
options = self.parse_overrides(instance, options)
self.log.info("Export options: {0}".format(options))
@@ -170,27 +180,35 @@ class ExtractMultiverseUsd(publish.Extractor):
with maintained_selection():
members = instance.data("setMembers")
self.log.info('Collected object {}'.format(members))
self.log.info('Collected objects: {}'.format(members))
members = self.filter_members(members)
if not members:
self.log.error('No members!')
return
self.log.info(' - filtered: {}'.format(members))
import multiverse
time_opts = None
frame_start = instance.data['frameStart']
frame_end = instance.data['frameEnd']
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
step = instance.data['step']
fps = instance.data['fps']
if frame_end != frame_start:
time_opts = multiverse.TimeOptions()
time_opts.writeTimeRange = True
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
time_opts.frameRange = (
frame_start - handle_start, frame_end + handle_end)
time_opts.frameIncrement = step
time_opts.numTimeSamples = instance.data["numTimeSamples"]
time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
time_opts.framePerSecond = fps
time_opts.frameIncrement = instance.data['step']
time_opts.numTimeSamples = instance.data.get(
'numTimeSamples', options['numTimeSamples'])
time_opts.timeSamplesSpan = instance.data.get(
'timeSamplesSpan', options['timeSamplesSpan'])
time_opts.framePerSecond = instance.data.get(
'fps', mel.eval('currentTimeUnitToFPS()'))
asset_write_opts = multiverse.AssetWriteOptions(time_opts)
options_discard_keys = {
@@ -203,11 +221,15 @@ class ExtractMultiverseUsd(publish.Extractor):
'step',
'fps'
}
self.log.debug("Write Options:")
for key, value in options.items():
if key in options_discard_keys:
continue
self.log.debug(" - {}={}".format(key, value))
setattr(asset_write_opts, key, value)
self.log.info('WriteAsset: {} / {}'.format(file_path, members))
multiverse.WriteAsset(file_path, members, asset_write_opts)
if "representations" not in instance.data:
@@ -223,3 +245,33 @@ class ExtractMultiverseUsd(publish.Extractor):
self.log.info("Extracted instance {} to {}".format(
instance.name, file_path))
class ExtractMultiverseUsdAnim(ExtractMultiverseUsd):
"""Extractor for Multiverse USD Animation Sparse Cache data.
This will extract the sparse cache data from the scene and generate a
USD file with all the animation data.
Upon publish a .usd sparse cache will be written.
"""
label = "Extract Multiverse USD Animation Sparse Cache"
families = ["animation", "usd"]
match = pyblish.api.Subset
def get_default_options(self):
anim_options = self.default_options
anim_options["writeSparseOverrides"] = True
anim_options["writeUsdAttributes"] = True
anim_options["stripNamespaces"] = True
return anim_options
def filter_members(self, members):
out_set = next((i for i in members if i.endswith("out_SET")), None)
if out_set is None:
self.log.warning("Expecting out_SET")
return None
members = cmds.ls(cmds.sets(out_set, query=True), long=True)
return members
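A sketch of what 'filter_members' does with typical set members (the names are hypothetical):

members = ["|char_GRP", "rig:out_SET"]
out_set = next((i for i in members if i.endswith("out_SET")), None)
# out_set == "rig:out_SET"; its contents are then expanded via cmds.sets/cmds.ls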

View file

@@ -93,12 +93,12 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
from openpype.hosts.maya.api import lib
# Store namespace in variable, cosmetics thingy
messagebox = QtWidgets.QMessageBox
mode = messagebox.StandardButton.Ok | messagebox.StandardButton.Cancel
choice = messagebox.warning(None,
"Matrix reset",
cls.prompt_message,
mode)
choice = QtWidgets.QMessageBox.warning(
None,
"Matrix reset",
cls.prompt_message,
QtWidgets.QMessageBox.Ok | QtWidgets.QMessageBox.Cancel
)
invalid = cls.get_invalid(instance)
if not invalid:

View file

@@ -80,13 +80,14 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
def is_or_has_mipmap(self, fname, files):
ext = os.path.splitext(fname)[1][1:]
if ext in MIPMAP_EXTENSIONS:
self.log.debug("Is a mipmap '{}'".format(fname))
self.log.debug(" - Is a mipmap '{}'".format(fname))
return True
for colour_space in COLOUR_SPACES:
for mipmap_ext in MIPMAP_EXTENSIONS:
mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
if mipmap_fname in files:
self.log.debug("Has a mipmap '{}'".format(fname))
self.log.debug(
" - Has a mipmap '{}'".format(mipmap_fname))
return True
return False

View file

@@ -21,6 +21,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
- nurbsSurface: _NRB
- locator: _LOC
- null/group: _GRP
Suffixes can also be overridden by project settings.
.. warning::
This grabs the first child shape as a reference and doesn't use the
@@ -44,6 +45,13 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
ALLOW_IF_NOT_IN_SUFFIX_TABLE = True
@classmethod
def get_table_for_invalid(cls):
ss = []
for k, v in cls.SUFFIX_NAMING_TABLE.items():
ss.append(" - {}: {}".format(k, ", ".join(v)))
return "\n".join(ss)
@staticmethod
def is_valid_name(node_name, shape_type,
SUFFIX_NAMING_TABLE, ALLOW_IF_NOT_IN_SUFFIX_TABLE):
@@ -106,5 +114,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
"""
invalid = self.get_invalid(instance)
if invalid:
valid = self.get_table_for_invalid()
raise ValueError("Incorrectly named geometry "
"transforms: {0}".format(invalid))
"transforms: {0}, accepted suffixes are: "
"\n{1}".format(invalid, valid))

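A sketch of the table 'get_table_for_invalid' renders (the settings values are hypothetical):

SUFFIX_NAMING_TABLE = {
    "mesh": ["_GEO"],
    "nurbsCurve": ["_CRV"],
    "locator": ["_LOC"],
}
print("\n".join(
    " - {}: {}".format(k, ", ".join(v))
    for k, v in SUFFIX_NAMING_TABLE.items()
))
# - mesh: _GEO
# - nurbsCurve: _CRV
# - locator: _LOC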
View file

@@ -1,7 +1,7 @@
import os
import sys
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype.tools.utils import host_tools

View file

@@ -2,7 +2,7 @@ import re
import uuid
import qargparse
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype.settings import get_current_project_settings
from openpype.pipeline.context_tools import get_current_project_asset

View file

@@ -171,7 +171,6 @@ class ShotMetadataSolver:
_index == 0
and parents[-1]["entity_name"] == parent_name
):
self.log.debug(f" skipping : {parent_name}")
continue
# in case first parent is project then start parents from start
@@ -179,7 +178,6 @@ class ShotMetadataSolver:
_index == 0
and parent_token_type == "Project"
):
self.log.debug("rebuilding parents from scratch")
project_parent = parents[0]
parents = [project_parent]
continue
@@ -189,8 +187,6 @@ class ShotMetadataSolver:
"entity_name": parent_name
})
self.log.debug(f"__ parents: {parents}")
return parents
def _create_hierarchy_path(self, parents):
@@ -297,7 +293,6 @@ class ShotMetadataSolver:
Returns:
(str, dict): shot name and hierarchy data
"""
self.log.info(f"_ source_data: {source_data}")
tasks = {}
asset_doc = source_data["selected_asset_doc"]

View file

@@ -1,6 +1,5 @@
import os
from copy import deepcopy
from pprint import pformat
import opentimelineio as otio
from openpype.client import (
get_asset_by_name,
@@ -13,9 +12,7 @@ from openpype.hosts.traypublisher.api.plugin import (
from openpype.hosts.traypublisher.api.editorial import (
ShotMetadataSolver
)
from openpype.pipeline import CreatedInstance
from openpype.lib import (
get_ffprobe_data,
convert_ffprobe_fps_value,
@@ -33,14 +30,14 @@ from openpype.lib import (
CLIP_ATTR_DEFS = [
EnumDef(
"fps",
items={
"from_selection": "From selection",
23.997: "23.976",
24: "24",
25: "25",
29.97: "29.97",
30: "30"
},
items=[
{"value": "from_selection", "label": "From selection"},
{"value": 23.997, "label": "23.976"},
{"value": 24, "label": "24"},
{"value": 25, "label": "25"},
{"value": 29.97, "label": "29.97"},
{"value": 30, "label": "30"}
],
label="FPS"
),
NumberDef(
@@ -70,14 +67,12 @@ class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
host_name = "traypublisher"
def create(self, instance_data, source_data=None):
self.log.info(f"instance_data: {instance_data}")
subset_name = instance_data["subset"]
# Create new instance
new_instance = CreatedInstance(
self.family, subset_name, instance_data, self
)
self.log.info(f"instance_data: {pformat(new_instance.data)}")
self._store_new_instance(new_instance)
@@ -223,8 +218,6 @@ or updating already created. Publishing will create OTIO file.
asset_name = instance_data["asset"]
asset_doc = get_asset_by_name(self.project_name, asset_name)
self.log.info(pre_create_data["fps"])
if pre_create_data["fps"] == "from_selection":
# get asset doc data attributes
fps = asset_doc["data"]["fps"]
@@ -239,34 +232,43 @@ or updating already created. Publishing will create OTIO file.
sequence_path_data = pre_create_data["sequence_filepath_data"]
media_path_data = pre_create_data["media_filepaths_data"]
sequence_path = self._get_path_from_file_data(sequence_path_data)
sequence_paths = self._get_path_from_file_data(
sequence_path_data, multi=True)
media_path = self._get_path_from_file_data(media_path_data)
# get otio timeline
otio_timeline = self._create_otio_timeline(
sequence_path, fps)
first_otio_timeline = None
for seq_path in sequence_paths:
# get otio timeline
otio_timeline = self._create_otio_timeline(
seq_path, fps)
# Create all clip instances
clip_instance_properties.update({
"fps": fps,
"parent_asset_name": asset_name,
"variant": instance_data["variant"]
})
# Create all clip instances
clip_instance_properties.update({
"fps": fps,
"parent_asset_name": asset_name,
"variant": instance_data["variant"]
})
# create clip instances
self._get_clip_instances(
otio_timeline,
media_path,
clip_instance_properties,
family_presets=allowed_family_presets
# create clip instances
self._get_clip_instances(
otio_timeline,
media_path,
clip_instance_properties,
allowed_family_presets,
os.path.basename(seq_path),
first_otio_timeline
)
)
if not first_otio_timeline:
# assign otio timeline for multi-file layering
first_otio_timeline = otio_timeline
# create otio editorial instance
self._create_otio_instance(
subset_name, instance_data,
sequence_path, media_path,
otio_timeline
subset_name,
instance_data,
seq_path, media_path,
first_otio_timeline
)
def _create_otio_instance(
@@ -317,14 +319,14 @@ or updating already created. Publishing will create OTIO file.
kwargs["rate"] = fps
kwargs["ignore_timecode_mismatch"] = True
self.log.info(f"kwargs: {kwargs}")
return otio.adapters.read_from_file(sequence_path, **kwargs)
def _get_path_from_file_data(self, file_path_data):
def _get_path_from_file_data(self, file_path_data, multi=False):
"""Converting creator path data to single path string
Args:
file_path_data (FileDefItem): creator path data inputs
multi (bool): switch to multiple files mode
Raises:
FileExistsError: in case nothing had been set
@@ -332,23 +334,29 @@ or updating already created. Publishing will create OTIO file.
Returns:
str: path string
"""
# TODO: just temporarily solving only one media file
if isinstance(file_path_data, list):
file_path_data = file_path_data.pop()
return_path_list = []
if len(file_path_data["filenames"]) == 0:
if isinstance(file_path_data, list):
return_path_list = [
os.path.join(f["directory"], f["filenames"][0])
for f in file_path_data
]
if not return_path_list:
raise FileExistsError(
f"File path was not added: {file_path_data}")
return os.path.join(
file_path_data["directory"], file_path_data["filenames"][0])
return return_path_list if multi else return_path_list[0]
def _get_clip_instances(
self,
otio_timeline,
media_path,
instance_data,
family_presets
family_presets,
sequence_file_name,
first_otio_timeline=None
):
"""Helper function for creating clip instances
@@ -368,17 +376,15 @@ or updating already created. Publishing will create OTIO file.
media_data = self._get_media_source_metadata(media_path)
for track in tracks:
self.log.debug(f"track.name: {track.name}")
track.name = f"{sequence_file_name} - {otio_timeline.name}"
try:
track_start_frame = (
abs(track.source_range.start_time.value)
)
self.log.debug(f"track_start_frame: {track_start_frame}")
track_start_frame -= self.timeline_frame_start
except AttributeError:
track_start_frame = 0
self.log.debug(f"track_start_frame: {track_start_frame}")
for clip in track.each_child():
if not self._validate_clip_for_processing(clip):
@@ -400,10 +406,6 @@ or updating already created. Publishing will create OTIO file.
"instance_label": None,
"instance_id": None
}
self.log.info((
"Creating subsets from presets: \n"
f"{pformat(family_presets)}"
))
for _fpreset in family_presets:
# exclude audio family if no audio stream
@@ -419,7 +421,10 @@ or updating already created. Publishing will create OTIO file.
deepcopy(base_instance_data),
parenting_data
)
self.log.debug(f"{pformat(dict(instance.data))}")
# add track to first otioTimeline if it is in input args
if first_otio_timeline:
first_otio_timeline.tracks.append(deepcopy(track))
def _restore_otio_source_range(self, otio_clip):
"""Infusing source range.
@@ -460,7 +465,6 @@ or updating already created. Publishing will create OTIO file.
target_url=media_path,
available_range=available_range
)
otio_clip.media_reference = media_reference
def _get_media_source_metadata(self, path):
@@ -481,7 +485,6 @@ or updating already created. Publishing will create OTIO file.
media_data = get_ffprobe_data(
path, self.log
)
self.log.debug(f"__ media_data: {pformat(media_data)}")
# get video stream data
video_stream = media_data["streams"][0]
@@ -589,9 +592,6 @@ or updating already created. Publishing will create OTIO file.
# get variant name from preset or from inheritance
_variant_name = preset.get("variant") or variant_name
self.log.debug(f"__ family: {family}")
self.log.debug(f"__ preset: {preset}")
# subset name
subset_name = "{}{}".format(
family, _variant_name.capitalize()
@@ -722,17 +722,13 @@ or updating already created. Publishing will create OTIO file.
clip_in += track_start_frame
clip_out = otio_clip.range_in_parent().end_time_inclusive().value
clip_out += track_start_frame
self.log.info(f"clip_in: {clip_in} | clip_out: {clip_out}")
# add offset in case there is any
self.log.debug(f"__ timeline_offset: {timeline_offset}")
if timeline_offset:
clip_in += timeline_offset
clip_out += timeline_offset
clip_duration = otio_clip.duration().value
self.log.info(f"clip duration: {clip_duration}")
source_in = otio_clip.trimmed_range().start_time.value
source_out = source_in + clip_duration
@@ -762,7 +758,6 @@ or updating already created. Publishing will create OTIO file.
Returns:
list: lit of dict with preset items
"""
self.log.debug(f"__ pre_create_data: {pre_create_data}")
return [
{"family": "shot"},
*[
@@ -833,7 +828,7 @@ or updating already created. Publishing will create OTIO file.
".fcpxml"
],
allow_sequences=False,
single_item=True,
single_item=False,
label="Sequence file",
),
FileDef(

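A standalone sketch of the new multi-file path resolution in '_get_path_from_file_data' (the input dicts mimic FileDefItem data and are hypothetical):

import os

file_path_data = [
    {"directory": "/edit/day01", "filenames": ["sc010.edl"]},
    {"directory": "/edit/day02", "filenames": ["sc020.edl"]},
]
paths = [
    os.path.join(f["directory"], f["filenames"][0])
    for f in file_path_data
]
# multi=True returns both paths; multi=False returns only paths[0]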
View file

@@ -7,8 +7,8 @@ exists under selected asset.
"""
from pathlib import Path
from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.lib.attribute_definitions import FileDef
# from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.lib.attribute_definitions import FileDef, BoolDef
from openpype.pipeline import (
CreatedInstance,
CreatorError
@@ -23,7 +23,8 @@ class OnlineCreator(TrayPublishCreator):
label = "Online"
family = "online"
description = "Publish file retaining its original file name"
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg"]
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg", ".exr",
".dpx", ".tif", ".png", ".jpg"]
def get_detail_description(self):
return """# Create file retaining its original file name.
@@ -49,13 +50,17 @@ class OnlineCreator(TrayPublishCreator):
origin_basename = Path(files[0]).stem
# disable check for existing subset with the same name
"""
asset = get_asset_by_name(
self.project_name, instance_data["asset"], fields=["_id"])
if get_subset_by_name(
self.project_name, origin_basename, asset["_id"],
fields=["_id"]):
raise CreatorError(f"subset with {origin_basename} already "
"exists in selected asset")
"""
instance_data["originalBasename"] = origin_basename
subset_name = origin_basename
@@ -69,15 +74,29 @@ class OnlineCreator(TrayPublishCreator):
instance_data, self)
self._store_new_instance(new_instance)
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_pre_create_attr_defs(self):
return [
FileDef(
"representation_file",
folders=False,
extensions=self.extensions,
allow_sequences=False,
allow_sequences=True,
single_item=True,
label="Representation",
),
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]

View file

@@ -12,12 +12,18 @@ class CollectOnlineFile(pyblish.api.InstancePlugin):
def process(self, instance):
file = Path(instance.data["creator_attributes"]["path"])
review = instance.data["creator_attributes"]["add_review_family"]
instance.data["review"] = review
if "review" not in instance.data["families"]:
instance.data["families"].append("review")
self.log.info(f"Adding review: {review}")
instance.data["representations"].append(
{
"name": file.suffix.lstrip("."),
"ext": file.suffix.lstrip("."),
"files": file.name,
"stagingDir": file.parent.as_posix()
"stagingDir": file.parent.as_posix(),
"tags": ["review"] if review else []
}
)
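A worked example of the representation this collector now builds (the file path is hypothetical):

from pathlib import Path

file = Path("/footage/shot010/plate_v001.mov")
review = True
representation = {
    "name": file.suffix.lstrip("."),       # "mov"
    "ext": file.suffix.lstrip("."),        # "mov"
    "files": file.name,                    # "plate_v001.mov"
    "stagingDir": file.parent.as_posix(),  # "/footage/shot010"
    "tags": ["review"] if review else [],
}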

View file

@@ -33,8 +33,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
]
def process(self, instance):
self.log.debug(pformat(instance.data))
creator_identifier = instance.data["creator_identifier"]
if "editorial" not in creator_identifier:
return
@@ -82,7 +80,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
]
otio_clip = clips.pop()
self.log.debug(f"__ otioclip.parent: {otio_clip.parent}")
return otio_clip
@@ -172,7 +169,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
}
parents = instance.data.get('parents', [])
self.log.debug(f"parents: {pformat(parents)}")
actual = {name: in_info}
@@ -190,7 +186,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
# adding hierarchy context to instance
context.data["hierarchyContext"] = final_context
self.log.debug(pformat(final_context))
def _update_dict(self, ex_dict, new_dict):
""" Recursion function

View file

@@ -20,6 +20,8 @@ class ValidateOnlineFile(OptionalPyblishPluginMixin,
optional = True
def process(self, instance):
if not self.is_active(instance.data):
return
project_name = instance.context.data["projectName"]
asset_id = instance.data["assetEntity"]["_id"]
subset = get_subset_by_name(

View file

@@ -302,8 +302,9 @@ private:
std::string websocket_url;
// Should the Avalon plugin be available?
// - this may change during processing if websocket url is not set or server is down
bool use_avalon;
bool server_available;
public:
Communicator(std::string url);
Communicator();
websocket_endpoint endpoint;
bool is_connected();
@@ -314,43 +315,45 @@ public:
void call_notification(std::string method_name, nlohmann::json params);
};
Communicator::Communicator() {
Communicator::Communicator(std::string url) {
// URL to websocket server
websocket_url = std::getenv("WEBSOCKET_URL");
websocket_url = url;
// Should the Avalon plugin be available?
// - this may change during processing if websocket url is not set or server is down
if (websocket_url == "") {
use_avalon = false;
if (url == "") {
server_available = false;
} else {
use_avalon = true;
server_available = true;
}
}
bool Communicator::is_connected(){
return endpoint.connected();
}
bool Communicator::is_usable(){
return use_avalon;
return server_available;
}
void Communicator::connect()
{
if (!use_avalon) {
if (!server_available) {
return;
}
int con_result;
con_result = endpoint.connect(websocket_url);
if (con_result == -1)
{
use_avalon = false;
server_available = false;
} else {
use_avalon = true;
server_available = true;
}
}
void Communicator::call_notification(std::string method_name, nlohmann::json params) {
if (!use_avalon || !is_connected()) {return;}
if (!server_available || !is_connected()) {return;}
jsonrpcpp::Notification notification = {method_name, params};
endpoint.send_notification(&notification);
@@ -358,7 +361,7 @@ void Communicator::call_notification(std::string method_name, nlohmann::json par
jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann::json params) {
jsonrpcpp::Response response;
if (!use_avalon || !is_connected())
if (!server_available || !is_connected())
{
return response;
}
@@ -382,7 +385,7 @@ jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann:
}
void Communicator::process_requests() {
if (!use_avalon || !is_connected() || Data.messages.empty()) {return;}
if (!server_available || !is_connected() || Data.messages.empty()) {return;}
std::string msg = Data.messages.front();
Data.messages.pop();
@@ -458,7 +461,7 @@ void register_callbacks(){
parser.register_request_callback("execute_george", execute_george);
}
Communicator communication;
Communicator* communication = nullptr;
////////////////////////////////////////////////////////////////////////////////////////
@@ -484,7 +487,7 @@ static char* GetLocalString( PIFilter* iFilter, int iNum, char* iDefault )
// in the localized file (or the localized file doesn't exist).
std::string label_from_evn()
{
std::string _plugin_label = "Avalon";
std::string _plugin_label = "OpenPype";
if (std::getenv("AVALON_LABEL") && std::getenv("AVALON_LABEL") != "")
{
_plugin_label = std::getenv("AVALON_LABEL");
@@ -540,9 +543,12 @@ int FAR PASCAL PI_Open( PIFilter* iFilter )
{
PI_Parameters( iFilter, NULL ); // NULL as iArg means "open the requester"
}
communication.connect();
register_callbacks();
char *env_value = std::getenv("WEBSOCKET_URL");
if (env_value != NULL) {
communication = new Communicator(env_value);
communication->connect();
register_callbacks();
}
return 1; // OK
}
@@ -560,7 +566,10 @@ void FAR PASCAL PI_Close( PIFilter* iFilter )
{
TVCloseReq( iFilter, Data.mReq );
}
communication.endpoint.close_connection();
if (communication != nullptr) {
communication->endpoint.close_connection();
delete communication;
}
}
@@ -709,7 +718,7 @@ int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iA
if (Data.menuItemsById.contains(button_up_item_id_str))
{
std::string callback_name = Data.menuItemsById[button_up_item_id_str].get<std::string>();
communication.call_method(callback_name, nlohmann::json::array());
communication->call_method(callback_name, nlohmann::json::array());
}
TVExecute( iFilter );
break;
@ -737,7 +746,9 @@ int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iA
{
newMenuItemsProcess(iFilter);
}
communication.process_requests();
if (communication != nullptr) {
communication->process_requests();
}
}
return 1;


@ -18,6 +18,7 @@ from .pipeline import (
show_tools_popup,
instantiate,
UnrealHost,
maintained_selection
)
__all__ = [
@ -36,4 +37,5 @@ __all__ = [
"show_tools_popup",
"instantiate",
"UnrealHost",
"maintained_selection"
]


@ -2,6 +2,7 @@
import os
import logging
from typing import List
from contextlib import contextmanager
import semver
import pyblish.api
@ -447,3 +448,16 @@ def get_subsequences(sequence: unreal.LevelSequence):
if subscene_track is not None and subscene_track.get_sections():
return subscene_track.get_sections()
return []
@contextmanager
def maintained_selection():
"""Stub to be either implemented or replaced.
This is needed for the old publisher implementation, but
it is not supported (yet) in UE.
"""
try:
yield
finally:
pass
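A minimal usage sketch of the stub above, assuming the import path exported in the host's `__init__.py`; the context manager currently only yields, so it is safe to wrap any selection-changing call:

```python
from openpype.hosts.unreal.api import maintained_selection

# The stub yields immediately and restores nothing (yet), but callers
# can already rely on the context-manager protocol.
with maintained_selection():
    pass  # any operation that may change the editor selection
```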


@ -0,0 +1,61 @@
"""Create UAsset."""
from pathlib import Path
import unreal
from openpype.hosts.unreal.api import pipeline
from openpype.pipeline import LegacyCreator
class CreateUAsset(LegacyCreator):
"""UAsset."""
name = "UAsset"
label = "UAsset"
family = "uasset"
icon = "cube"
root = "/Game/OpenPype"
suffix = "_INS"
def __init__(self, *args, **kwargs):
super(CreateUAsset, self).__init__(*args, **kwargs)
def process(self):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
subset = self.data["subset"]
path = f"{self.root}/PublishInstances/"
unreal.EditorAssetLibrary.make_directory(path)
selection = []
if (self.options or {}).get("useSelection"):
sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
selection = [a.get_path_name() for a in sel_objects]
if len(selection) != 1:
raise RuntimeError("Please select only one object.")
obj = selection[0]
asset = ar.get_asset_by_object_path(obj).get_asset()
sys_path = unreal.SystemLibrary.get_system_path(asset)
if not sys_path:
raise RuntimeError(
f"{Path(obj).name} is not on the disk. Likely it needs to"
"be saved first.")
if Path(sys_path).suffix != ".uasset":
raise RuntimeError(f"{Path(sys_path).name} is not a UAsset.")
unreal.log("selection: {}".format(selection))
container_name = f"{subset}{self.suffix}"
pipeline.create_publish_instance(
instance=container_name, path=path)
data = self.data.copy()
data["members"] = selection
pipeline.imprint(f"{path}/{container_name}", data)


@ -0,0 +1,145 @@
# -*- coding: utf-8 -*-
"""Load UAsset."""
from pathlib import Path
import shutil
from openpype.pipeline import (
get_representation_path,
AVALON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
import unreal # noqa
class UAssetLoader(plugin.Loader):
"""Load UAsset."""
families = ["uasset"]
label = "Load UAsset"
representations = ["uasset"]
icon = "cube"
color = "orange"
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
Args:
context (dict): application context
name (str): subset name
namespace (str): in Unreal this is basically path to container.
This is not passed here, so namespace is set
by `containerise()` because only then we know
real path.
options (dict): Those would be data to be imprinted. This is not
used now, data are imprinted by `containerise()`.
Returns:
list(str): list of container content
"""
# Create directory for asset and OpenPype container
root = "/Game/OpenPype/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
destination_path = asset_dir.replace(
"/Game",
Path(unreal.Paths.project_content_dir()).as_posix(),
1)
shutil.copy(self.fname, f"{destination_path}/{name}.uasset")
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
)
for a in asset_content:
unreal.EditorAssetLibrary.save_asset(a)
return asset_content
def update(self, container, representation):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
asset_dir = container["namespace"]
name = representation["context"]["subset"]
destination_path = asset_dir.replace(
"/Game",
Path(unreal.Paths.project_content_dir()).as_posix(),
1)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=False, include_folder=True
)
for asset in asset_content:
obj = ar.get_asset_by_object_path(asset).get_asset()
if not obj.get_class().get_name() == 'AssetContainer':
unreal.EditorAssetLibrary.delete_asset(asset)
update_filepath = get_representation_path(representation)
shutil.copy(update_filepath, f"{destination_path}/{name}.uasset")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
)
for a in asset_content:
unreal.EditorAssetLibrary.save_asset(a)
def remove(self, container):
path = container["namespace"]
parent_path = Path(path).parent.as_posix()
unreal.EditorAssetLibrary.delete_directory(path)
asset_content = unreal.EditorAssetLibrary.list_assets(
parent_path, recursive=False
)
if len(asset_content) == 0:
unreal.EditorAssetLibrary.delete_directory(parent_path)


@ -25,9 +25,13 @@ class CollectInstances(pyblish.api.ContextPlugin):
def process(self, context):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
class_name = ["/Script/OpenPype",
"AssetContainer"] if UNREAL_VERSION.major == 5 and \
UNREAL_VERSION.minor > 0 else "OpenPypePublishInstance" # noqa
class_name = [
"/Script/OpenPype",
"OpenPypePublishInstance"
] if (
UNREAL_VERSION.major == 5
and UNREAL_VERSION.minor > 0
) else "OpenPypePublishInstance" # noqa
instance_containers = ar.get_assets_by_class(class_name, True)
for container_data in instance_containers:


@ -0,0 +1,42 @@
from pathlib import Path
import shutil
import unreal
from openpype.pipeline import publish
class ExtractUAsset(publish.Extractor):
"""Extract a UAsset."""
label = "Extract UAsset"
hosts = ["unreal"]
families = ["uasset"]
optional = True
def process(self, instance):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
self.log.info("Performing extraction..")
staging_dir = self.staging_dir(instance)
filename = "{}.uasset".format(instance.name)
obj = instance[0]
asset = ar.get_asset_by_object_path(obj).get_asset()
sys_path = unreal.SystemLibrary.get_system_path(asset)
filename = Path(sys_path).name
shutil.copy(sys_path, staging_dir)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'uasset',
'ext': 'uasset',
'files': filename,
"stagingDir": staging_dir,
}
instance.data["representations"].append(representation)


@ -0,0 +1,41 @@
import unreal
import pyblish.api
class ValidateNoDependencies(pyblish.api.InstancePlugin):
"""Ensure that the uasset has no dependencies
The uasset is checked for dependencies. If there are any, the instance
cannot be published.
"""
order = pyblish.api.ValidatorOrder
label = "Check no dependencies"
families = ["uasset"]
hosts = ["unreal"]
optional = True
def process(self, instance):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
all_dependencies = []
for obj in instance[:]:
asset = ar.get_asset_by_object_path(obj)
dependencies = ar.get_dependencies(
asset.package_name,
unreal.AssetRegistryDependencyOptions(
include_soft_package_references=False,
include_hard_package_references=True,
include_searchable_names=False,
include_soft_management_references=False,
include_hard_management_references=False
))
if dependencies:
for dep in dependencies:
if str(dep).startswith("/Game/"):
all_dependencies.append(str(dep))
if all_dependencies:
raise RuntimeError(
f"Dependencies found: {all_dependencies}")


@ -908,24 +908,25 @@ class ApplicationLaunchContext:
self.launch_args.extend(self.data.pop("app_args"))
# Handle launch environments
env = self.data.pop("env", None)
if env is not None and not isinstance(env, dict):
src_env = self.data.pop("env", None)
if src_env is not None and not isinstance(src_env, dict):
self.log.warning((
"Passed `env` kwarg has invalid type: {}. Expected: `dict`."
" Using `os.environ` instead."
).format(str(type(env))))
env = None
).format(str(type(src_env))))
src_env = None
if env is None:
env = os.environ
if src_env is None:
src_env = os.environ
# subprocess.Popen keyword arguments
self.kwargs = {
"env": {
key: str(value)
for key, value in env.items()
}
ignored_env = {"QT_API", }
env = {
key: str(value)
for key, value in src_env.items()
if key not in ignored_env
}
# subprocess.Popen keyword arguments
self.kwargs = {"env": env}
if platform.system().lower() == "windows":
# Detach new process from currently running process on Windows
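A standalone sketch of the environment handling above, with a hypothetical executable name; the source environment is copied, ignored keys are dropped, and all values are stringified before reaching `subprocess.Popen`:

```python
import os
import subprocess

ignored_env = {"QT_API"}  # keys that must not leak into the launched host
env = {
    key: str(value)
    for key, value in os.environ.items()
    if key not in ignored_env
}
# "my_host_app" is a placeholder executable name
subprocess.Popen(["my_host_app"], env=env)
```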


@ -3,6 +3,7 @@ import re
import collections
import uuid
import json
import copy
from abc import ABCMeta, abstractmethod, abstractproperty
import six
@ -418,9 +419,8 @@ class EnumDef(AbtractAttrDef):
"""Enumeration of single item from items.
Args:
items: Items definition that can be converted to
`collections.OrderedDict`. Dictionary represents {value: label}
relation.
items: Items definition that can be converted using
'prepare_enum_items'.
default: Default value. Must be one key(value) from passed items.
"""
@ -433,38 +433,95 @@ class EnumDef(AbtractAttrDef):
" defined values on initialization."
).format(self.__class__.__name__))
items = collections.OrderedDict(items)
if default not in items:
for _key in items.keys():
default = _key
items = self.prepare_enum_items(items)
item_values = [item["value"] for item in items]
if default not in item_values:
for value in item_values:
default = value
break
super(EnumDef, self).__init__(key, default=default, **kwargs)
self.items = items
self._item_values = set(item_values)
def __eq__(self, other):
if not super(EnumDef, self).__eq__(other):
return False
if set(self.items.keys()) != set(other.items.keys()):
return False
for key, label in self.items.items():
if other.items[key] != label:
return False
return True
return self.items == other.items
def convert_value(self, value):
if value in self.items:
if value in self._item_values:
return value
return self.default
def serialize(self):
data = super(EnumDef, self).serialize()
data["items"] = list(self.items)
data["items"] = copy.deepcopy(self.items)
return data
@staticmethod
def prepare_enum_items(items):
"""Convert items to unified structure.
Output is a list where each item is dictionary with 'value'
and 'label'.
```python
# Example output
[
{"label": "Option 1", "value": 1},
{"label": "Option 2", "value": 2},
{"label": "Option 3", "value": 3}
]
```
Args:
items (Union[Dict[str, Any], List[Any], List[Dict[str, Any]]]): The
items to convert.
Returns:
List[Dict[str, Any]]: Unified structure of items.
"""
output = []
if isinstance(items, dict):
for value, label in items.items():
output.append({"label": label, "value": value})
elif isinstance(items, (tuple, list, set)):
for item in items:
if isinstance(item, dict):
# Validate if 'value' is available
if "value" not in item:
raise KeyError("Item does not contain 'value' key.")
if "label" not in item:
item["label"] = str(item["value"])
elif isinstance(item, (list, tuple)):
if len(item) == 2:
value, label = item
elif len(item) == 1:
value = item[0]
label = str(value)
else:
raise ValueError((
"Invalid items count {}."
" Expected 1 or 2. Value: {}"
).format(len(item), str(item)))
item = {"label": label, "value": value}
else:
item = {"label": str(item), "value": item}
output.append(item)
else:
raise TypeError(
"Unknown type for enum items '{}'".format(type(items))
)
return output
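A quick illustration of the normalization above (assuming `EnumDef` lives in `openpype.lib.attribute_definitions`, as the tool widgets import it); every supported input shape ends up as a list of label/value dictionaries:

```python
from openpype.lib.attribute_definitions import EnumDef

print(EnumDef.prepare_enum_items({1: "Option 1", 2: "Option 2"}))
# [{"label": "Option 1", "value": 1}, {"label": "Option 2", "value": 2}]

print(EnumDef.prepare_enum_items(["a", ("b", "Label B")]))
# [{"label": "a", "value": "a"}, {"label": "Label B", "value": "b"}]
```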
class BoolDef(AbtractAttrDef):
"""Boolean representation.


@ -92,7 +92,9 @@ class FileTransaction(object):
def process(self):
# Backup any existing files
for dst, (src, _) in self._transfers.items():
if dst == src or not os.path.exists(dst):
self.log.debug("Checking file ... {} -> {}".format(src, dst))
path_same = self._same_paths(src, dst)
if path_same or not os.path.exists(dst):
continue
# Backup original file
@ -105,7 +107,8 @@ class FileTransaction(object):
# Copy the files to transfer
for dst, (src, opts) in self._transfers.items():
if dst == src:
path_same = self._same_paths(src, dst)
if path_same:
self.log.debug(
"Source and destionation are same files {} -> {}".format(
src, dst))
@ -182,3 +185,10 @@ class FileTransaction(object):
else:
self.log.critical("An unexpected error occurred.")
six.reraise(*sys.exc_info())
def _same_paths(self, src, dst):
# handles paths that differ only in case, e.g. C:/project vs c:/project
if os.path.exists(src) and os.path.exists(dst):
return os.path.samefile(src, dst)
return src == dst
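A self-contained version of the check above; `os.path.samefile` resolves case-only differences (and symlinks) but requires both paths to exist, hence the string-equality fallback:

```python
import os

def same_paths(src, dst):
    # Compare on disk when possible (handles C:/project vs c:/project),
    # otherwise fall back to a plain string comparison.
    if os.path.exists(src) and os.path.exists(dst):
        return os.path.samefile(src, dst)
    return src == dst
```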


@ -400,6 +400,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
label = "Submit to Deadline"
order = pyblish.api.IntegratorOrder + 0.1
import_reference = False
use_published = True
asset_dependencies = False
@ -424,7 +425,11 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
file_path = None
if self.use_published:
file_path = self.from_published_scene()
if not self.import_reference:
file_path = self.from_published_scene()
else:
self.log.info("use the scene with imported reference for rendering") # noqa
file_path = context.data["currentFile"]
# fallback if nothing was set
if not file_path:
@ -516,7 +521,6 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
published.
"""
instance = self._instance
workfile_instance = self._get_workfile_instance(instance.context)
if workfile_instance is None:
@ -524,7 +528,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
# determine published path from Anatomy.
template_data = workfile_instance.data.get("anatomyData")
rep = workfile_instance.data.get("representations")[0]
rep = workfile_instance.data["representations"][0]
template_data["representation"] = rep.get("name")
template_data["ext"] = rep.get("ext")
template_data["comment"] = None


@ -973,7 +973,7 @@ class SyncToAvalonEvent(BaseEvent):
except Exception:
# TODO logging
# TODO report
self.process_session.rolback()
self.process_session.rollback()
ent_path_items = [self.cur_project["full_name"]]
ent_path_items.extend([
par for par in avalon_entity["data"]["parents"]
@ -1016,7 +1016,7 @@ class SyncToAvalonEvent(BaseEvent):
except Exception:
# TODO logging
# TODO report
self.process_session.rolback()
self.process_session.rollback()
error_msg = (
"Couldn't update custom attributes after recreation"
" of entity in Ftrack"
@ -1338,7 +1338,7 @@ class SyncToAvalonEvent(BaseEvent):
try:
self.process_session.commit()
except Exception:
self.process_session.rolback()
self.process_session.rollback()
# TODO logging
# TODO report
error_msg = (


@ -129,8 +129,8 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
if not note_text.solved:
self.log.warning((
"Note template require more keys then can be provided."
"\nTemplate: {}\nData: {}"
).format(template, format_data))
"\nTemplate: {}\nMissing values for keys:{}\nData: {}"
).format(template, note_text.missing_keys, format_data))
continue
if not note_text:


@ -11,7 +11,7 @@ class SearchComboBox(QtWidgets.QComboBox):
super(SearchComboBox, self).__init__(parent)
self.setEditable(True)
self.setInsertPolicy(self.NoInsert)
self.setInsertPolicy(QtWidgets.QComboBox.NoInsert)
self.lineEdit().setPlaceholderText(placeholder)
# Apply completer settings


@ -39,6 +39,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
token = instance.data["slack_token"]
if additional_message:
message = "{} \n".format(additional_message)
users = groups = None
for message_profile in instance.data["slack_channel_message_profiles"]:
message += self._get_filled_message(message_profile["message"],
instance,
@ -60,8 +61,18 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
else:
client = SlackPython3Operations(token, self.log)
users, groups = client.get_users_and_groups()
message = self._translate_users(message, users, groups)
if "@" in message:
cache_key = "__cache_slack_ids"
slack_ids = instance.context.data.get(cache_key, None)
if slack_ids is None:
users, groups = client.get_users_and_groups()
instance.context.data[cache_key] = {}
instance.context.data[cache_key]["users"] = users
instance.context.data[cache_key]["groups"] = groups
else:
users = slack_ids["users"]
groups = slack_ids["groups"]
message = self._translate_users(message, users, groups)
msg_id, file_ids = client.send_message(channel,
message,
@ -212,7 +223,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
def _translate_users(self, message, users, groups):
"""Replace all occurences of @mentions with proper <@name> format."""
matches = re.findall(r"(?<!<)@[^ ]+", message)
matches = re.findall(r"(?<!<)@\S+", message)
in_quotes = re.findall(r"(?<!<)(['\"])(@[^'\"]+)", message)
for item in in_quotes:
matches.append(item[1])
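The switch from `@[^ ]+` to `@\S+` widens matching to any non-whitespace run, so mentions separated by tabs or line breaks are caught too, while already-translated `<@id>` forms stay excluded by the lookbehind; a small check:

```python
import re

message = "please review @john.doe but not <@U123>"
print(re.findall(r"(?<!<)@\S+", message))
# ['@john.doe']
```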


@ -328,10 +328,6 @@ class GDriveHandler(AbstractProvider):
last_tick = status = response = None
status_val = 0
while response is None:
if server.is_representation_paused(representation['_id'],
check_parents=True,
project_name=project_name):
raise ValueError("Paused during process, please redo.")
if status:
status_val = float(status.progress())
if not last_tick or \
@ -346,6 +342,13 @@ class GDriveHandler(AbstractProvider):
site=site,
progress=status_val
)
if server.is_representation_paused(
project_name,
representation['_id'],
site,
check_parents=True
):
raise ValueError("Paused during process, please redo.")
status, response = request.next_chunk()
except errors.HttpError as ex:
@ -415,10 +418,6 @@ class GDriveHandler(AbstractProvider):
last_tick = status = response = None
status_val = 0
while response is None:
if server.is_representation_paused(representation['_id'],
check_parents=True,
project_name=project_name):
raise ValueError("Paused during process, please redo.")
if status:
status_val = float(status.progress())
if not last_tick or \
@ -433,6 +432,13 @@ class GDriveHandler(AbstractProvider):
site=site,
progress=status_val
)
if server.is_representation_paused(
project_name,
representation['_id'],
site,
check_parents=True
):
raise ValueError("Paused during process, please redo.")
status, response = downloader.next_chunk()
return target_name


@ -284,9 +284,6 @@ class SyncServerThread(threading.Thread):
# building folder tree structure in memory
# call only if needed, eg. DO_UPLOAD or DO_DOWNLOAD
for sync in sync_repres:
if self.module.\
is_representation_paused(sync['_id']):
continue
if limit <= 0:
continue
files = sync.get("files") or []


@ -124,7 +124,6 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
self.action_show_widget = None
self._paused = False
self._paused_projects = set()
self._paused_representations = set()
self._anatomies = {}
self._connection = None
@ -475,7 +474,6 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
site_name (string): 'gdrive', 'studio' etc.
"""
self.log.info("Pausing SyncServer for {}".format(representation_id))
self._paused_representations.add(representation_id)
self.reset_site_on_representation(project_name, representation_id,
site_name=site_name, pause=True)
@ -492,35 +490,47 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
site_name (string): 'gdrive', 'studio' etc.
"""
self.log.info("Unpausing SyncServer for {}".format(representation_id))
try:
self._paused_representations.remove(representation_id)
except KeyError:
pass
# self.paused_representations is not persistent
self.reset_site_on_representation(project_name, representation_id,
site_name=site_name, pause=False)
def is_representation_paused(self, representation_id,
check_parents=False, project_name=None):
def is_representation_paused(self, project_name, representation_id,
site_name, check_parents=False):
"""
Returns if 'representation_id' is paused or not.
Returns whether 'representation_id' is paused for the given site.
Args:
representation_id (string): MongoDB objectId value
project_name (str): project to check if paused
representation_id (str): MongoDB objectId value
site_name (str): site to check the representation is paused for
check_parents (bool): check if parent project or server itself
are not paused
project_name (string): project to check if paused
if 'check_parents', 'project_name' should be set too
Returns:
(bool)
"""
condition = representation_id in self._paused_representations
if check_parents and project_name:
condition = condition or \
self.is_project_paused(project_name) or \
self.is_paused()
return condition
# Check parents are paused
if check_parents and (
self.is_project_paused(project_name)
or self.is_paused()
):
return True
# Get representation
representation = get_representation_by_id(project_name,
representation_id,
fields=["files.sites"])
if not representation:
return False
# Check if representation is paused
for file_info in representation.get("files", []):
for site in file_info.get("sites", []):
if site["name"] != site_name:
continue
return site.get("paused", False)
return False
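A hedged usage sketch of the new signature; pause state now lives on the representation's site documents in the database rather than in an in-memory set, so the project and site must be passed explicitly:

```python
def abort_if_paused(module, project_name, representation_id, site_name):
    # 'module' is the running SyncServerModule instance (hypothetical caller)
    if module.is_representation_paused(
        project_name, representation_id, site_name, check_parents=True
    ):
        raise ValueError("Paused during process, please redo.")
```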
def pause_project(self, project_name):
"""
@ -1484,7 +1494,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
"$elemMatch": {
"name": {"$in": [remote_site]},
"created_dt": {"$exists": False},
"tries": {"$in": retries_arr}
"tries": {"$in": retries_arr},
"paused": {"$exists": False}
}
}
}]},
@ -1494,7 +1505,8 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
"$elemMatch": {
"name": active_site,
"created_dt": {"$exists": False},
"tries": {"$in": retries_arr}
"tries": {"$in": retries_arr},
"paused": {"$exists": False}
}
}}, {
"files.sites": {


@ -156,8 +156,10 @@ class SyncProjectListWidget(QtWidgets.QWidget):
if selected_index and \
selected_index.isValid() and \
not self._selection_changed:
mode = QtCore.QItemSelectionModel.Select | \
QtCore.QItemSelectionModel.Rows
mode = (
QtCore.QItemSelectionModel.Select
| QtCore.QItemSelectionModel.Rows
)
self.project_list.selectionModel().select(selected_index, mode)
if self.current_project:
@ -271,8 +273,10 @@ class _SyncRepresentationWidget(QtWidgets.QWidget):
for selected_id in self._selected_ids:
index = self.model.get_index(selected_id)
if index and index.isValid():
mode = QtCore.QItemSelectionModel.Select | \
QtCore.QItemSelectionModel.Rows
mode = (
QtCore.QItemSelectionModel.Select
| QtCore.QItemSelectionModel.Rows
)
self.selection_model.select(index, mode)
existing_ids.add(selected_id)


@ -1112,17 +1112,21 @@ class RootItem(FormatObject):
result = False
output = str(path)
root_paths = list(self.cleaned_data.values())
mod_path = self.clean_path(path)
for root_path in root_paths:
for root_os, root_path in self.cleaned_data.items():
# Skip empty paths
if not root_path:
continue
if mod_path.startswith(root_path):
_mod_path = mod_path # reset to original cleaned value
if root_os == "windows":
root_path = root_path.lower()
_mod_path = _mod_path.lower()
if _mod_path.startswith(root_path):
result = True
replacement = "{" + self.full_key() + "}"
output = replacement + mod_path[len(root_path):]
output = replacement + _mod_path[len(root_path):]
break
return (result, output)
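A standalone sketch of the comparison, assuming hypothetical root values; only the 'windows' root is lowered before matching, since Windows filesystems are case-insensitive while the other platforms keep exact matching:

```python
roots = {"windows": "c:/projects", "linux": "/mnt/projects"}

def contains_root(path):
    for root_os, root_path in roots.items():
        checked = path
        if root_os == "windows":  # Windows paths are case-insensitive
            root_path = root_path.lower()
            checked = checked.lower()
        if checked.startswith(root_path):
            return True
    return False

print(contains_root("C:/Projects/shots/sh010"))    # True
print(contains_root("/mnt/projects/shots/sh010"))  # True
```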


@ -6,6 +6,7 @@ from .constants import (
from .subset_name import (
TaskNotSetError,
get_subset_name_template,
get_subset_name,
)
@ -46,6 +47,7 @@ __all__ = (
"PRE_CREATE_THUMBNAIL_KEY",
"TaskNotSetError",
"get_subset_name_template",
"get_subset_name",
"CreatorError",


@ -20,6 +20,7 @@ class LegacyCreator(object):
family = None
defaults = None
maintain_selection = True
enabled = True
dynamic_subset_keys = []
@ -76,11 +77,10 @@ class LegacyCreator(object):
print(">>> We have preset for {}".format(plugin_name))
for option, value in plugin_settings.items():
if option == "enabled" and value is False:
setattr(cls, "active", False)
print(" - is disabled by preset")
else:
setattr(cls, option, value)
print(" - setting `{}`: `{}`".format(option, value))
setattr(cls, option, value)
def process(self):
pass


@ -14,6 +14,53 @@ class TaskNotSetError(KeyError):
super(TaskNotSetError, self).__init__(msg)
def get_subset_name_template(
project_name,
family,
task_name,
task_type,
host_name,
default_template=None,
project_settings=None
):
"""Get subset name template based on passed context.
Args:
project_name (str): Project on which the context lives.
family (str): Family (subset type) for which the subset name is
calculated.
host_name (str): Name of host in which the subset name is calculated.
task_name (str): Name of task in which context the subset is created.
task_type (str): Type of task in which context the subset is created.
default_template (Union[str, None]): Default template which is used if
settings don't find any matching possibility. Constant
'DEFAULT_SUBSET_TEMPLATE' is used if not defined.
project_settings (Union[Dict[str, Any], None]): Prepared settings for
project. Settings are queried if not passed.
"""
if project_settings is None:
project_settings = get_project_settings(project_name)
tools_settings = project_settings["global"]["tools"]
profiles = tools_settings["creator"]["subset_name_profiles"]
filtering_criteria = {
"families": family,
"hosts": host_name,
"tasks": task_name,
"task_types": task_type
}
matching_profile = filter_profiles(profiles, filtering_criteria)
template = None
if matching_profile:
template = matching_profile["template"]
# Make sure template is set (matching may have empty string)
if not template:
template = default_template or DEFAULT_SUBSET_TEMPLATE
return template
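A hedged call example with hypothetical context values; profiles come from the project settings and the template falls back to `DEFAULT_SUBSET_TEMPLATE` when no profile matches:

```python
from openpype.pipeline.create import get_subset_name_template

template = get_subset_name_template(
    "my_project",    # hypothetical project name
    "render",        # family
    "compositing",   # task name
    "Compositing",   # task type
    "nuke",          # host name
)
# e.g. "{family}{Task}{Variant}" depending on studio profiles
```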
def get_subset_name(
family,
variant,
@ -37,9 +84,9 @@ def get_subset_name(
Args:
family (str): Instance family.
variant (str): In most of cases it is user input during creation.
variant (str): In most of the cases it is user input during creation.
task_name (str): Task name on which context is instance created.
asset_doc (dict): Queried asset document with it's tasks in data.
asset_doc (dict): Queried asset document with its tasks in data.
Used to get task type.
project_name (str): Name of project on which is instance created.
Important for project settings that are loaded.
@ -50,15 +97,15 @@ def get_subset_name(
is not passed.
dynamic_data (dict): Dynamic data specific for a creator which creates
instance.
dbcon (AvalonMongoDB): Mongo connection to be able query asset document
if 'asset_doc' is not passed.
project_settings (Union[Dict[str, Any], None]): Prepared settings for
project. Settings are queried if not passed.
"""
if not family:
return ""
if not host_name:
host_name = os.environ["AVALON_APP"]
host_name = os.environ.get("AVALON_APP")
# Use only last part of class family value split by dot (`.`)
family = family.rsplit(".", 1)[-1]
@ -70,27 +117,15 @@ def get_subset_name(
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
# Get settings
if not project_settings:
project_settings = get_project_settings(project_name)
tools_settings = project_settings["global"]["tools"]
profiles = tools_settings["creator"]["subset_name_profiles"]
filtering_criteria = {
"families": family,
"hosts": host_name,
"tasks": task_name,
"task_types": task_type
}
matching_profile = filter_profiles(profiles, filtering_criteria)
template = None
if matching_profile:
template = matching_profile["template"]
# Make sure template is set (matching may have empty string)
if not template:
template = default_template or DEFAULT_SUBSET_TEMPLATE
template = get_subset_name_template(
project_name,
family,
task_name,
task_type,
host_name,
default_template=default_template,
project_settings=project_settings
)
# Simple check of task name existence for template with {task} in
# - missing task should be possible only in Standalone publisher
if not task_name and "{task" in template.lower():


@ -1,6 +1,7 @@
from .utils import (
HeroVersionType,
LoadError,
IncompatibleLoaderError,
InvalidRepresentationContext,


@ -30,6 +30,7 @@ class LoaderPlugin(list):
representations = list()
order = 0
is_multiple_contexts_compatible = False
enabled = True
options = []
@ -73,11 +74,10 @@ class LoaderPlugin(list):
print(">>> We have preset for {}".format(plugin_name))
for option, value in plugin_settings.items():
if option == "enabled" and value is False:
setattr(cls, "active", False)
print(" - is disabled by preset")
else:
setattr(cls, option, value)
print(" - setting `{}`: `{}`".format(option, value))
setattr(cls, option, value)
@classmethod
def get_representations(cls):


@ -60,6 +60,16 @@ class HeroVersionType(object):
return self.version.__format__(format_spec)
class LoadError(Exception):
"""Known error that happened during loading.
A message is shown to the user (without traceback). Make sure an artist can
understand the problem.
"""
pass
class IncompatibleLoaderError(ValueError):
"""Error when Loader is incompatible with a representation."""
pass
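A minimal sketch of how a loader might use the new exception, with a hypothetical plugin; the message is presented to the artist without a traceback:

```python
from openpype.pipeline import load
from openpype.pipeline.load import LoadError

class ExampleLoader(load.LoaderPlugin):  # hypothetical loader
    families = ["*"]
    representations = ["*"]

    def load(self, context, name, namespace, options):
        if not context.get("version"):
            raise LoadError("Nothing to load for your selection")
```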


@ -120,6 +120,8 @@ class BuildWorkfile:
# Prepare available loaders
loaders_by_name = {}
for loader in discover_loader_plugins():
if not loader.enabled:
continue
loader_name = loader.__name__
if loader_name in loaders_by_name:
raise KeyError(


@ -239,6 +239,8 @@ class AbstractTemplateBuilder(object):
if self._creators_by_name is None:
self._creators_by_name = {}
for creator in discover_legacy_creator_plugins():
if not creator.enabled:
continue
creator_name = creator.__name__
if creator_name in self._creators_by_name:
raise KeyError(
@ -1147,11 +1149,11 @@ class PlaceholderLoadMixin(object):
loaders_by_name = self.builder.get_loaders_by_name()
loader_items = [
(loader_name, loader.label or loader_name)
{"value": loader_name, "label": loader.label or loader_name}
for loader_name, loader in loaders_by_name.items()
]
loader_items = list(sorted(loader_items, key=lambda i: i[1]))
loader_items = list(sorted(loader_items, key=lambda i: i["label"]))
options = options or {}
return [
attribute_definitions.UISeparatorDef(),
@ -1163,9 +1165,9 @@ class PlaceholderLoadMixin(object):
label="Asset Builder Type",
default=options.get("builder_type"),
items=[
("context_asset", "Current asset"),
("linked_asset", "Linked assets"),
("all_assets", "All assets")
{"label": "Current asset", "value": "context_asset"},
{"label": "Linked assets", "value": "linked_asset"},
{"label": "All assets", "value": "all_assets"},
],
tooltip=(
"Asset Builder Type\n"


@ -19,7 +19,7 @@ class CopyFile(load.LoaderPlugin):
@staticmethod
def copy_file_to_clipboard(path):
from Qt import QtCore, QtWidgets
from qtpy import QtCore, QtWidgets
clipboard = QtWidgets.QApplication.clipboard()
assert clipboard, "Must have running QApplication instance"


@ -19,7 +19,7 @@ class CopyFilePath(load.LoaderPlugin):
@staticmethod
def copy_path_to_clipboard(path):
from Qt import QtWidgets
from qtpy import QtWidgets
clipboard = QtWidgets.QApplication.clipboard()
assert clipboard, "Must have running QApplication instance"


@ -5,7 +5,7 @@ import uuid
import clique
from pymongo import UpdateOne
import qargparse
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype import style
from openpype.client import get_versions, get_representations


@ -1,7 +1,7 @@
import copy
from collections import defaultdict
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
from openpype.client import get_representations
from openpype.pipeline import load, Anatomy


@ -0,0 +1,52 @@
import os
from openpype import PACKAGE_DIR
from openpype.lib import get_openpype_execute_args, run_detached_process
from openpype.pipeline import load
from openpype.pipeline.load import LoadError
class PushToLibraryProject(load.SubsetLoaderPlugin):
"""Export selected versions to folder structure from Template"""
is_multiple_contexts_compatible = True
representations = ["*"]
families = ["*"]
label = "Push to Library project"
order = 35
icon = "send"
color = "#d8d8d8"
def load(self, contexts, name=None, namespace=None, options=None):
filtered_contexts = [
context
for context in contexts
if context.get("project") and context.get("version")
]
if not filtered_contexts:
raise LoadError("Nothing to push for your selection")
if len(filtered_contexts) > 1:
raise LoadError("Please select only one item")
context = tuple(filtered_contexts)[0]
push_tool_script_path = os.path.join(
PACKAGE_DIR,
"tools",
"push_to_project",
"app.py"
)
project_doc = context["project"]
version_doc = context["version"]
project_name = project_doc["name"]
version_id = str(version_doc["_id"])
args = get_openpype_execute_args(
"run",
push_tool_script_path,
"--project", project_name,
"--version", version_id
)
run_detached_process(args)


@ -4,8 +4,6 @@ import os
import shutil
import pyblish.api
from openpype.pipeline import legacy_io
class CleanUpFarm(pyblish.api.ContextPlugin):
"""Cleans up the staging directory after a successful publish.
@ -23,8 +21,8 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
def process(self, context):
# Get source host from which farm publishing was started
src_host_name = legacy_io.Session.get("AVALON_APP")
self.log.debug("Host name from session is {}".format(src_host_name))
src_host_name = context.data["hostName"]
self.log.debug("Host name from context is {}".format(src_host_name))
# Skip process if is not in list of source hosts in which this
# plugin should run
if src_host_name not in self.allowed_hosts:


@ -32,7 +32,6 @@ from openpype.client import (
get_subsets,
get_last_versions
)
from openpype.pipeline import legacy_io
class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
@ -49,7 +48,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
def process(self, context):
self.log.info("Collecting anatomy data for all instances.")
project_name = legacy_io.active_project()
project_name = context.data["projectName"]
self.fill_missing_asset_docs(context, project_name)
self.fill_instance_data_from_asset(context)
self.fill_latest_versions(context, project_name)


@ -73,7 +73,9 @@ class CollectComment(
"""
label = "Collect Instance Comment"
order = pyblish.api.CollectorOrder + 0.49
# TODO change to CollectorOrder after Pyblish is purged
# Pyblish allows modifying comment after collect phase
order = pyblish.api.ExtractorOrder - 0.49
def process(self, context):
context_comment = self.cleanup_comment(context.data.get("comment"))


@ -15,7 +15,11 @@ import pyblish.api
class CollectResourcesPath(pyblish.api.InstancePlugin):
"""Generate directory path where the files and resources will be stored"""
"""Generate directory path where the files and resources will be stored.
Collects folder name and file name from files, if they exist, for in-situ
publishing.
"""
label = "Collect Resources Path"
order = pyblish.api.CollectorOrder + 0.495


@ -1,10 +1,7 @@
import pyblish.api
from openpype.client import get_representations
from openpype.pipeline import (
registered_host,
legacy_io,
)
from openpype.pipeline import registered_host
class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
@ -44,7 +41,7 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
for container in containers
}
project_name = legacy_io.active_project()
project_name = context.data["projectName"]
repre_docs = get_representations(
project_name,
representation_ids=repre_ids,


@ -0,0 +1,42 @@
"""
Requires:
instance -> currentFile
instance -> source
Provides:
instance -> originalBasename
instance -> originalDirname
"""
import os
import pyblish.api
class CollectSourceForSource(pyblish.api.InstancePlugin):
"""Collects source location of file for instance.
Used for the 'source' template name, which handles in-place publishing.
For this kind of publishing, files are already present with the correct
file name pattern and in the correct location.
"""
label = "Collect Source"
order = pyblish.api.CollectorOrder + 0.495
def process(self, instance):
# parse folder name and file name for online and source templates
# currentFile comes from hosts workfiles
# source comes from Publisher
current_file = instance.data.get("currentFile")
source = instance.data.get("source")
source_file = current_file or source
if source_file:
self.log.debug("Parsing paths for {}".format(source_file))
if not instance.data.get("originalBasename"):
instance.data["originalBasename"] = \
os.path.basename(source_file)
if not instance.data.get("originalDirname"):
instance.data["originalDirname"] = \
os.path.dirname(source_file)


@ -350,6 +350,7 @@ class ExtractOTIOReview(publish.Extractor):
# start command list
command = [ffmpeg_path]
input_extension = None
if sequence:
input_dir, collection = sequence
in_frame_start = min(collection.indexes)
@ -357,6 +358,7 @@ class ExtractOTIOReview(publish.Extractor):
# converting image sequence to image sequence
input_file = collection.format("{head}{padding}{tail}")
input_path = os.path.join(input_dir, input_file)
input_extension = os.path.splitext(input_path)[-1]
# form command for rendering gap files
command.extend([
@ -373,6 +375,7 @@ class ExtractOTIOReview(publish.Extractor):
sec_duration = frames_to_seconds(
frame_duration, input_fps
)
input_extension = os.path.splitext(video_path)[-1]
# form command for rendering gap files
command.extend([
@ -397,9 +400,21 @@ class ExtractOTIOReview(publish.Extractor):
# add output attributes
command.extend([
"-start_number", str(out_frame_start),
output_path
"-start_number", str(out_frame_start)
])
# add copying if extensions are matching
if (
input_extension
and self.output_ext == input_extension
):
command.extend([
"-c", "copy"
])
# add output path at the end
command.append(output_path)
# execute
self.log.debug("Executing: {}".format(" ".join(command)))
output = run_subprocess(
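When the gap source and the review output share an extension, the frames are stream-copied instead of re-encoded; an illustrative shape of the resulting command (paths and frame numbers are hypothetical):

```python
command = [
    "ffmpeg",
    "-start_number", "1001",
    "-i", "/tmp/review/input.%04d.jpg",
    "-start_number", "1001",
    "-c", "copy",  # extensions match, so no re-encode
    "/tmp/review/output.%04d.jpg",
]
```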


@ -1038,6 +1038,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
# Set audio duration
audio_in_args.append("-to {:0.10f}".format(audio_duration))
# Ignore video data from audio input
audio_in_args.append("-vn")
# Add audio input path
audio_in_args.append("-i {}".format(
path_to_subprocess_arg(audio["filename"])


@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
order = pyblish.api.ExtractorOrder
families = [
"imagesequence", "render", "render2d", "prerender",
"source", "clip", "take"
"source", "clip", "take", "online"
]
hosts = ["shell", "fusion", "resolve", "traypublisher"]
enabled = False
@ -91,7 +91,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
full_input_path = os.path.join(src_staging, input_file)
self.log.info("input {}".format(full_input_path))
filename = os.path.splitext(input_file)[0]
jpeg_file = filename + ".jpg"
jpeg_file = filename + "_thumb.jpg"
full_output_path = os.path.join(dst_staging, jpeg_file)
if oiio_supported:


@ -100,7 +100,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
self.log.info("Thumbnail source: {}".format(thumbnail_source))
src_basename = os.path.basename(thumbnail_source)
dst_filename = os.path.splitext(src_basename)[0] + ".jpg"
dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg"
full_output_path = os.path.join(dst_staging, dst_filename)
if oiio_supported:


@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Source directory not collected</title>
<description>
## Source directory not collected
Instance is marked for in-place publishing. Its 'originalDirname' must be collected. Contact an OP developer to modify the collector.
</description>
<detail>
### __Detailed Info__ (optional)
In-place publishing uses the source directory and file name in the resulting path and file name of the published item. For this instance
not all of the required metadata was filled in. This is not a recoverable error unless the instance itself is removed.
The collector for this instance must be updated for the instance to be published.
</detail>
</error>
<error id="not_in_dir">
<title>Source file not in project dir</title>
<description>
## Source file not in project dir
Path '{original_dirname}' is not in the project folder. Please publish from inside the project folder.
### How to repair?
Restart the publish after you have moved the source file into the project directory.
</description>
</error>
</root>


@ -25,7 +25,6 @@ from openpype.client import (
)
from openpype.lib import source_hash
from openpype.lib.file_transaction import FileTransaction
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import (
KnownPublishError,
get_publish_template_name,
@ -132,7 +131,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
"mvUsdComposition",
"mvUsdOverride",
"simpleUnrealTexture",
"online"
"online",
"uasset"
]
default_template_name = "publish"
@ -244,7 +244,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
return filtered_repres
def register(self, instance, file_transactions, filtered_repres):
project_name = legacy_io.active_project()
project_name = instance.context.data["projectName"]
instance_stagingdir = instance.data.get("stagingDir")
if not instance_stagingdir:
@ -270,6 +270,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
)
instance.data["versionEntity"] = version
anatomy = instance.context.data["anatomy"]
# Get existing representations (if any)
existing_repres_by_name = {
repre_doc["name"].lower(): repre_doc
@ -303,13 +305,17 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
# .ma representation. Those destination paths are pre-defined, etc.
# todo: should we move or simplify this logic?
resource_destinations = set()
for src, dst in instance.data.get("transfers", []):
file_transactions.add(src, dst, mode=FileTransaction.MODE_COPY)
resource_destinations.add(os.path.abspath(dst))
for src, dst in instance.data.get("hardlinks", []):
file_transactions.add(src, dst, mode=FileTransaction.MODE_HARDLINK)
resource_destinations.add(os.path.abspath(dst))
file_copy_modes = [
("transfers", FileTransaction.MODE_COPY),
("hardlinks", FileTransaction.MODE_HARDLINK)
]
for files_type, copy_mode in file_copy_modes:
for src, dst in instance.data.get(files_type, []):
self._validate_path_in_project_roots(anatomy, dst)
file_transactions.add(src, dst, mode=copy_mode)
resource_destinations.add(os.path.abspath(dst))
# Bulk write to the database
# We write the subset and version to the database before the File
@ -342,7 +348,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
# Compute the resource file infos once (files belonging to the
# version instance instead of an individual representation) so
# we can re-use those file infos per representation
anatomy = instance.context.data["anatomy"]
resource_file_infos = self.get_files_info(resource_destinations,
sites=sites,
anatomy=anatomy)
@ -529,6 +534,20 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
template_data["representation"] = repre["name"]
template_data["ext"] = repre["ext"]
stagingdir = repre.get("stagingDir")
if not stagingdir:
# Fall back to instance staging dir if not explicitly
# set for representation in the instance
self.log.debug((
"Representation uses instance staging dir: {}"
).format(instance_stagingdir))
stagingdir = instance_stagingdir
if not stagingdir:
raise KnownPublishError(
"No staging directory set for representation: {}".format(repre)
)
# optionals
# retrieve additional anatomy data from representation if exists
for key, anatomy_key in {
@ -548,20 +567,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
if value is not None:
template_data[anatomy_key] = value
stagingdir = repre.get("stagingDir")
if not stagingdir:
# Fall back to instance staging dir if not explicitly
# set for representation in the instance
self.log.debug((
"Representation uses instance staging dir: {}"
).format(instance_stagingdir))
stagingdir = instance_stagingdir
if not stagingdir:
raise KnownPublishError(
"No staging directory set for representation: {}".format(repre)
)
self.log.debug("Anatomy template name: {}".format(template_name))
anatomy = instance.context.data["anatomy"]
publish_template_category = anatomy.templates[template_name]
@ -569,6 +574,25 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
is_udim = bool(repre.get("udim"))
# handle publish in place
if "originalDirname" in template:
# store in originalDirname only the original value without the project root
# if the instance collected an originalDirname, it should be used for all
# representations, from temp to final
original_directory = (
instance.data.get("originalDirname") or instance_stagingdir)
_rootless = self.get_rootless_path(anatomy, original_directory)
if _rootless == original_directory:
raise KnownPublishError((
"Destination path '{}' ".format(original_directory) +
"must be in project dir"
))
relative_path_start = _rootless.rfind('}') + 2
without_root = _rootless[relative_path_start:]
template_data["originalDirname"] = without_root
is_sequence_representation = isinstance(files, (list, tuple))
if is_sequence_representation:
# Collection of files (sequence)
@ -587,6 +611,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
))
src_collection = src_collections[0]
template_data["originalBasename"] = src_collection.head[:-1]
destination_indexes = list(src_collection.indexes)
# Use last frame for minimum padding
# - that should cover both 'udim' and 'frame' minimum padding
@ -671,12 +696,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
raise KnownPublishError(
"This is a bug. Representation file name is full path"
)
template_data["originalBasename"], _ = os.path.splitext(fname)
# Manage anatomy template data
template_data.pop("frame", None)
if is_udim:
template_data["udim"] = repre["udim"][0]
# Construct destination filepath from template
anatomy_filled = anatomy.format(template_data)
template_filled = anatomy_filled[template_name]["path"]
@ -805,11 +829,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
"""Return anatomy template name to use for integration"""
# Anatomy data is pre-filled by Collectors
project_name = legacy_io.active_project()
context = instance.context
project_name = context.data["projectName"]
# Task can be optional in anatomy data
host_name = instance.context.data["hostName"]
host_name = context.data["hostName"]
anatomy_data = instance.data["anatomyData"]
family = anatomy_data["family"]
task_info = anatomy_data.get("task") or {}
@ -820,7 +844,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
family,
task_name=task_info.get("name"),
task_type=task_info.get("type"),
project_settings=instance.context.data["project_settings"],
project_settings=context.data["project_settings"],
logger=self.log
)
@ -890,3 +914,21 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
"hash": source_hash(path),
"sites": sites
}
def _validate_path_in_project_roots(self, anatomy, file_path):
"""Checks if 'file_path' starts with any of the roots.
Used to check that the published path belongs to the project, e.g. that
we are not trying to publish to a local-only folder.
Args:
anatomy (Anatomy)
file_path (str)
Raises
(KnownPublishError)
"""
path = self.get_rootless_path(anatomy, file_path)
if not path:
raise KnownPublishError((
"Destination path '{}' ".format(file_path) +
"must be in project dir"
))
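The `rfind('}') + 2` slicing above strips the root token and its trailing slash from the rootless path; a small illustration with a hypothetical value:

```python
_rootless = "{root[work]}/proj/shots/sh010/publish"
relative_path_start = _rootless.rfind('}') + 2  # skip '}' and the '/'
print(_rootless[relative_path_start:])
# proj/shots/sh010/publish
```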


@ -123,7 +123,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"staticMesh",
"skeletalMesh",
"mvLook",
"mvUsd",
"mvUsdComposition",
"mvUsdOverride",
"simpleUnrealTexture"


@ -2,7 +2,6 @@ from pprint import pformat
import pyblish.api
from openpype.pipeline import legacy_io
from openpype.client import get_assets
@ -28,10 +27,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
asset_and_parents = self.get_parents(context)
self.log.debug("__ asset_and_parents: {}".format(asset_and_parents))
if not legacy_io.Session:
legacy_io.install()
project_name = legacy_io.active_project()
project_name = context.data["projectName"]
db_assets = list(get_assets(
project_name, fields=["name", "data.parents"]
))


@ -0,0 +1,74 @@
import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
PublishXmlValidationError,
get_publish_template_name,
)
class ValidatePublishDir(pyblish.api.InstancePlugin):
"""Validates if 'publishDir' is a project directory
'publishDir' is collected based on publish templates. In specific cases
('source' template) source folder of items is used as a 'publishDir', this
validates if it is inside any project dir for the project.
(eg. files are not published from local folder, unaccessible for studio'
"""
order = ValidateContentsOrder
label = "Validate publish dir"
checked_template_names = ["source"]
# validated instances might have an interim family which needs to be mapped to the final one
family_mapping = {
"renderLayer": "render",
"renderLocal": "render"
}
def process(self, instance):
template_name = self._get_template_name_from_instance(instance)
if template_name not in self.checked_template_names:
return
original_dirname = instance.data.get("originalDirname")
if not original_dirname:
raise PublishXmlValidationError(
self,
"Instance meant for in place publishing."
" Its 'originalDirname' must be collected."
" Contact OP developer to modify collector."
)
anatomy = instance.context.data["anatomy"]
success, _ = anatomy.find_root_template_from_path(original_dirname)
formatting_data = {
"original_dirname": original_dirname,
}
msg = "Path '{}' not in project folder.".format(original_dirname) + \
" Please publish from inside of project folder."
if not success:
raise PublishXmlValidationError(self, msg, key="not_in_dir",
formatting_data=formatting_data)
def _get_template_name_from_instance(self, instance):
project_name = instance.context.data["projectName"]
host_name = instance.context.data["hostName"]
anatomy_data = instance.data["anatomyData"]
family = anatomy_data["family"]
family = self.family_mapping.get(family) or family
task_info = anatomy_data.get("task") or {}
return get_publish_template_name(
project_name,
host_name,
family,
task_name=task_info.get("name"),
task_type=task_info.get("type"),
project_settings=instance.context.data["project_settings"],
logger=self.log
)


@ -14,7 +14,7 @@ CURRENT_FILE = os.path.abspath(__file__)
def show_error_messagebox(title, message, detail_message=None):
"""Function will show message and process ends after closing it."""
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype import style
app = QtWidgets.QApplication([])


@ -53,11 +53,17 @@
"file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
"path": "{@folder}/{@file}"
},
"source": {
"folder": "{root[work]}/{originalDirname}",
"file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
"path": "{@folder}/{@file}"
},
"__dynamic_keys_labels__": {
"maya2unreal": "Maya to Unreal",
"simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
"simpleUnrealTexture": "Simple Unreal Texture",
"online": "online"
"online": "online",
"source": "source"
}
}
}
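With the new 'source' template, an in-place publish keeps the collected folder and base name; a hedged illustration of the template data and the resulting path (the root value is hypothetical):

```python
template_data = {
    "root": {"work": "/mnt/work"},
    "originalDirname": "proj/editorial/day01",
    "originalBasename": "plate_v001",
    "ext": "mov",
}
# folder: /mnt/work/proj/editorial/day01
# file:   plate_v001.mov  (optional <frame>/<udim> parts collapse when unset)
```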


@ -25,6 +25,7 @@
"active": true,
"tile_assembler_plugin": "OpenPypeTileAssembler",
"use_published": true,
"import_reference": false,
"asset_dependencies": true,
"priority": 50,
"tile_priority": 50,


@ -130,6 +130,11 @@
"key": "use_published",
"label": "Use Published scene"
},
{
"type": "boolean",
"key": "import_reference",
"label": "Use Scene with Imported Reference"
},
{
"type": "boolean",
"key": "asset_dependencies",


@ -157,7 +157,7 @@ def _load_stylesheet():
def _load_font():
"""Load and register fonts into Qt application."""
from Qt import QtGui
from qtpy import QtGui
# Check if font ids are still loaded
if _Cache.font_ids is not None:


@ -47,7 +47,7 @@ def create_qcolor(*args):
*args (tuple): It is possible to pass initialization arguments for
Qcolor.
"""
from Qt import QtGui
from qtpy import QtGui
return QtGui.QColor(*args)

File diff suppressed because it is too large


@ -1,11 +1,13 @@
import Qt
import qtpy
initialized = False
resources = None
if Qt.__binding__ == "PySide2":
if qtpy.API == "pyside6":
from . import pyside6_resources as resources
elif qtpy.API == "pyside2":
from . import pyside2_resources as resources
elif Qt.__binding__ == "PyQt5":
elif qtpy.API == "pyqt5":
from . import pyqt5_resources as resources


@ -148,6 +148,10 @@ QPushButton::menu-indicator {
padding-right: 5px;
}
QPushButton[state="error"] {
background: {color:publisher:error};
}
QToolButton {
border: 0px solid transparent;
background: {color:bg-buttons};
@ -1416,6 +1420,13 @@ CreateNextPageOverlay {
}
/* Globally used names */
#ValidatedLineEdit[state="valid"], #ValidatedLineEdit[state="valid"]:focus, #ValidatedLineEdit[state="valid"]:hover {
border-color: {color:publisher:success};
}
#ValidatedLineEdit[state="invalid"], #ValidatedLineEdit[state="invalid"]:focus, #ValidatedLineEdit[state="invalid"]:hover {
border-color: {color:publisher:error};
}
#Separator {
background: {color:bg-menu-separator};
}


@ -6,7 +6,7 @@ from openpype.client import (
get_output_link_versions,
)
from Qt import QtWidgets
from qtpy import QtWidgets
class SimpleLinkView(QtWidgets.QWidget):


@ -1,4 +1,4 @@
from Qt import QtWidgets
from qtpy import QtWidgets
from .widgets import AttributeDefinitionsWidget


@ -3,7 +3,7 @@ import collections
import uuid
import json
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
from openpype.lib import FileDefItem
from openpype.tools.utils import (
@ -599,14 +599,14 @@ class FilesView(QtWidgets.QListView):
def __init__(self, *args, **kwargs):
super(FilesView, self).__init__(*args, **kwargs)
self.setEditTriggers(QtWidgets.QListView.NoEditTriggers)
self.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
self.setSelectionMode(
QtWidgets.QAbstractItemView.ExtendedSelection
)
self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
self.setAcceptDrops(True)
self.setDragEnabled(True)
self.setDragDropMode(self.InternalMove)
self.setDragDropMode(QtWidgets.QAbstractItemView.InternalMove)
remove_btn = InViewButton(self)
pix_enabled = paint_image_with_color(
@ -616,7 +616,7 @@ class FilesView(QtWidgets.QListView):
get_image(filename="delete.png"), QtCore.Qt.gray
)
icon = QtGui.QIcon(pix_enabled)
icon.addPixmap(pix_disabled, icon.Disabled, icon.Off)
icon.addPixmap(pix_disabled, QtGui.QIcon.Disabled, QtGui.QIcon.Off)
remove_btn.setIcon(icon)
remove_btn.setEnabled(False)
@ -734,7 +734,7 @@ class FilesWidget(QtWidgets.QFrame):
layout = QtWidgets.QStackedLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.setStackingMode(layout.StackAll)
layout.setStackingMode(QtWidgets.QStackedLayout.StackAll)
layout.addWidget(empty_widget)
layout.addWidget(files_view)
layout.setCurrentWidget(empty_widget)
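The repeated pattern in these Qt changes: enum values are no longer read off the instance (`self.NoEditTriggers`), which breaks under PySide6, but off the class, which qtpy resolves for every binding; a minimal sketch:

```python
from qtpy import QtWidgets

app = QtWidgets.QApplication([])
view = QtWidgets.QListView()
# Class-level enum access works across PySide2/PySide6 via qtpy.
view.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
view.setDragDropMode(QtWidgets.QAbstractItemView.InternalMove)
```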


@ -1,7 +1,7 @@
import uuid
import copy
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype.lib.attribute_definitions import (
AbtractAttrDef,
@ -401,9 +401,8 @@ class EnumAttrWidget(_BaseAttrDefWidget):
if self.attr_def.tooltip:
input_widget.setToolTip(self.attr_def.tooltip)
items = self.attr_def.items
for key, label in items.items():
input_widget.addItem(label, key)
for item in self.attr_def.items:
input_widget.addItem(item["label"], item["value"])
idx = input_widget.findData(self.attr_def.default)
if idx >= 0:


@ -1,4 +1,4 @@
from Qt import QtCore
from qtpy import QtCore
FAMILY_ROLE = QtCore.Qt.UserRole + 1


@ -1,5 +1,5 @@
import uuid
from Qt import QtGui, QtCore
from qtpy import QtGui, QtCore
from openpype.pipeline import discover_legacy_creator_plugins
@ -23,6 +23,8 @@ class CreatorsModel(QtGui.QStandardItemModel):
items = []
creators = discover_legacy_creator_plugins()
for creator in creators:
if not creator.enabled:
continue
item_id = str(uuid.uuid4())
self._creators_by_id[item_id] = creator


@ -1,13 +1,20 @@
import re
import inspect
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
import qtawesome
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
from openpype.tools.utils import ErrorMessageBox
if hasattr(QtGui, "QRegularExpressionValidator"):
RegularExpressionValidatorClass = QtGui.QRegularExpressionValidator
RegularExpressionClass = QtCore.QRegularExpression
else:
RegularExpressionValidatorClass = QtGui.QRegExpValidator
RegularExpressionClass = QtCore.QRegExp
class CreateErrorMessageBox(ErrorMessageBox):
def __init__(
@ -82,12 +89,12 @@ class CreateErrorMessageBox(ErrorMessageBox):
content_layout.addWidget(tb_widget)
class SubsetNameValidator(QtGui.QRegExpValidator):
class SubsetNameValidator(RegularExpressionValidatorClass):
invalid = QtCore.Signal(set)
pattern = "^[{}]*$".format(SUBSET_NAME_ALLOWED_SYMBOLS)
def __init__(self):
reg = QtCore.QRegExp(self.pattern)
reg = RegularExpressionClass(self.pattern)
super(SubsetNameValidator, self).__init__(reg)
def validate(self, text, pos):


@ -2,7 +2,7 @@ import sys
import traceback
import re
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype.client import get_asset_by_name, get_subsets
from openpype import style


@ -1,4 +1,4 @@
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
from openpype.style import (
load_stylesheet,


@ -16,7 +16,7 @@ travelled only very slightly with the cursor.
"""
import copy
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
class FlickData(object):


@ -173,7 +173,7 @@ class ActionBar(QtWidgets.QWidget):
view.setResizeMode(QtWidgets.QListView.Adjust)
view.setSelectionMode(QtWidgets.QListView.NoSelection)
view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
view.setEditTriggers(QtWidgets.QListView.NoEditTriggers)
view.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers)
view.setWrapping(True)
view.setGridSize(QtCore.QSize(70, 75))
view.setIconSize(QtCore.QSize(30, 30))
@ -423,7 +423,7 @@ class ActionHistory(QtWidgets.QPushButton):
return
widget = QtWidgets.QListWidget()
widget.setSelectionMode(widget.NoSelection)
widget.setSelectionMode(QtWidgets.QAbstractItemView.NoSelection)
widget.setStyleSheet("""
* {
font-family: "Courier New";


@ -40,7 +40,7 @@ class ProjectIconView(QtWidgets.QListView):
# Workaround for scrolling being super slow or fast when
# toggling between the two visual modes
self.setVerticalScrollMode(self.ScrollPerPixel)
self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel)
self.setObjectName("IconView")
self._mode = None


@ -1,6 +1,6 @@
import sys
from Qt import QtWidgets, QtCore, QtGui
from qtpy import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.client import get_projects, get_project

View file

@ -1,7 +1,7 @@
import sys
import traceback
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from openpype.client import get_projects, get_project
from openpype import style

Some files were not shown because too many files have changed in this diff