Mirror of https://github.com/ynput/ayon-core.git (synced 2026-01-01 16:34:53 +01:00)
Merge remote-tracking branch 'upstream/develop' into substance_integration

# Conflicts:
#	openpype/plugins/publish/extract_thumbnail.py

Commit: b3c21aaa9c
77 changed files with 2513 additions and 505 deletions
CHANGELOG.md (53 changed lines)

@@ -1,5 +1,58 @@
 # Changelog

 ## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)

 [Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...HEAD)

 **🆕 New features**

 - Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
 - Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
 - Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
 - Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
 - Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)

 **🚀 Enhancements**

 - General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
 - Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
 - General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
 - Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
 - Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)

 **🐛 Bug fixes**

 - Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
 - Ftrack: Fix receive of host ip on MacOs [\#4288](https://github.com/ynput/OpenPype/pull/4288)
 - SiteSync: sftp connection failing when shouldnt be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
 - Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
 - Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
 - Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
 - hiero: creator gui with min max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
 - NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
 - Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
 - Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
 - Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
 - bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
 - Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)

 **🔀 Refactored code**

 - Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
 - Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
 - Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
 - Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
 - Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
 - Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
 - Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
 - General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)

 **Merged pull requests:**

 - Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
 - Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)

 ## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9)

HISTORY.md (52 changed lines)

@@ -1,5 +1,57 @@
 # Changelog

 ## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)

 [Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...3.14.10)

 **🆕 New features**

 - Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
 - Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
 - Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
 - Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
 - Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)

 **🚀 Enhancements**

 - General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
 - Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
 - General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
 - Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
 - Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)

 **🐛 Bug fixes**

 - Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
 - Ftrack: Fix receive of host ip on MacOs [\#4288](https://github.com/ynput/OpenPype/pull/4288)
 - SiteSync: sftp connection failing when shouldnt be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
 - Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
 - Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
 - Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
 - hiero: creator gui with min max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
 - NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
 - Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
 - Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
 - Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
 - bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
 - Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)

 **🔀 Refactored code**

 - Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
 - Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
 - Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
 - Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
 - Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
 - Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
 - Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
 - General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)

 **Merged pull requests:**

 - Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
 - Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)

 ## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)
@@ -76,6 +76,18 @@ class HostBase(object):
         pass

+    def install(self):
+        """Install host specific functionality.
+
+        This is where should be added menu with tools, registered callbacks
+        and other host integration initialization.
+
+        It is called automatically when 'openpype.pipeline.install_host' is
+        triggered.
+        """
+
+        pass
+
     @property
     def log(self):
         if self._log is None:
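For context: the new `install()` hook is the entry point a host integration overrides, and per its docstring it runs when `openpype.pipeline.install_host` initializes the host. A minimal sketch under that assumption; the subclass, its `name`, and the import path of `HostBase` are hypothetical and `HostBase` may declare further abstract members not shown in this hunk:

```python
from openpype.host import HostBase  # import path assumed from the class above
from openpype.pipeline import install_host


class ExampleHost(HostBase):
    """Hypothetical host integration used only for illustration."""

    name = "example"

    def install(self):
        # Build menus and register callbacks here; this runs once when
        # install_host() bootstraps the host session.
        print("example host installed")


install_host(ExampleHost())
```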
@@ -41,7 +41,7 @@ class ExtractThumnail(publish.Extractor):
             track_item_name, thumb_frame, ".png")
         thumb_path = os.path.join(staging_dir, thumb_file)

-        thumbnail = track_item.thumbnail(thumb_frame).save(
+        thumbnail = track_item.thumbnail(thumb_frame, "colour").save(
             thumb_path,
             format='png'
         )
@@ -28,7 +28,7 @@ class MayaTemplateBuilder(AbstractTemplateBuilder):
         Args:
             path (str): A path to current template (usually given by
-                get_template_path implementation)
+                get_template_preset implementation)

         Returns:
             bool: Wether the template was succesfully imported or not
@@ -240,7 +240,7 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
         cmds.setAttr(node + ".hiddenInOutliner", True)

     def load_succeed(self, placeholder, container):
-        self._parent_in_hierarhchy(placeholder, container)
+        self._parent_in_hierarchy(placeholder, container)

     def _parent_in_hierarchy(self, placeholder, container):
         """Parent loaded container to placeholder's parent.
@@ -44,3 +44,6 @@ class CreateAnimation(plugin.Creator):
         # Default to not send to farm.
         self.data["farm"] = False
+        self.data["priority"] = 50

+        # Default to write normals.
+        self.data["writeNormals"] = True
@@ -6,7 +6,7 @@ class CreateMultiverseUsd(plugin.Creator):
     name = "mvUsdMain"
     label = "Multiverse USD Asset"
-    family = "mvUsd"
+    family = "usd"
     icon = "cubes"

     def __init__(self, *args, **kwargs):
@@ -72,15 +72,19 @@ class CreateRender(plugin.Creator):
     def __init__(self, *args, **kwargs):
         """Constructor."""
         super(CreateRender, self).__init__(*args, **kwargs)
-        deadline_settings = get_system_settings()["modules"]["deadline"]
-        if not deadline_settings["enabled"]:
-            self.deadline_servers = {}
-            return

+        # Defaults
         self._project_settings = get_project_settings(
             legacy_io.Session["AVALON_PROJECT"])
         if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]:  # noqa
             lib_rendersettings.RenderSettings().set_default_renderer_settings()
+
+        # Deadline-only
         manager = ModulesManager()
+        deadline_settings = get_system_settings()["modules"]["deadline"]
+        if not deadline_settings["enabled"]:
+            self.deadline_servers = {}
+            return
         self.deadline_module = manager.modules_by_name["deadline"]
         try:
             default_servers = deadline_settings["deadline_urls"]
@@ -193,8 +197,6 @@ class CreateRender(plugin.Creator):
         pool_names = []
         default_priority = 50

-        self.server_aliases = list(self.deadline_servers.keys())
-        self.data["deadlineServers"] = self.server_aliases
         self.data["suspendPublishJob"] = False
         self.data["review"] = True
         self.data["extendFrames"] = False
@@ -233,6 +235,9 @@ class CreateRender(plugin.Creator):
             raise RuntimeError("Both Deadline and Muster are enabled")

+        if deadline_enabled:
+            self.server_aliases = list(self.deadline_servers.keys())
+            self.data["deadlineServers"] = self.server_aliases

         try:
             deadline_url = self.deadline_servers["default"]
         except KeyError:
@@ -254,6 +259,19 @@ class CreateRender(plugin.Creator):
                 default_priority)
             self.data["tile_priority"] = tile_priority

+            pool_setting = (self._project_settings["deadline"]
+                            ["publish"]
+                            ["CollectDeadlinePools"])
+            primary_pool = pool_setting["primary_pool"]
+            self.data["primaryPool"] = self._set_default_pool(pool_names,
+                                                              primary_pool)
+            # We add a string "-" to allow the user to not
+            # set any secondary pools
+            pool_names = ["-"] + pool_names
+            secondary_pool = pool_setting["secondary_pool"]
+            self.data["secondaryPool"] = self._set_default_pool(pool_names,
+                                                                secondary_pool)
+
         if muster_enabled:
             self.log.info(">>> Loading Muster credentials ...")
             self._load_credentials()
@@ -273,18 +291,6 @@ class CreateRender(plugin.Creator):
                 self.log.info("  - pool: {}".format(pool["name"]))
                 pool_names.append(pool["name"])

-        pool_setting = (self._project_settings["deadline"]
-                        ["publish"]
-                        ["CollectDeadlinePools"])
-        primary_pool = pool_setting["primary_pool"]
-        self.data["primaryPool"] = self._set_default_pool(pool_names,
-                                                          primary_pool)
-        # We add a string "-" to allow the user to not
-        # set any secondary pools
-        pool_names = ["-"] + pool_names
-        secondary_pool = pool_setting["secondary_pool"]
-        self.data["secondaryPool"] = self._set_default_pool(pool_names,
-                                                            secondary_pool)
         self.options = {"useSelection": False}  # Force no content

     def _set_default_pool(self, pool_names, pool_value):
@@ -1,5 +1,7 @@
 # -*- coding: utf-8 -*-
 import maya.cmds as cmds
+from maya import mel
+import os

 from openpype.pipeline import (
     load,
@@ -11,12 +13,13 @@ from openpype.hosts.maya.api.lib import (
     unique_namespace
 )
 from openpype.hosts.maya.api.pipeline import containerise
+from openpype.client import get_representation_by_id


 class MultiverseUsdLoader(load.LoaderPlugin):
     """Read USD data in a Multiverse Compound"""

-    families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
+    families = ["model", "usd", "mvUsdComposition", "mvUsdOverride",
                 "pointcache", "animation"]
     representations = ["usd", "usda", "usdc", "usdz", "abc"]
@@ -26,7 +29,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
     color = "orange"

     def load(self, context, name=None, namespace=None, options=None):
-
         asset = context['asset']['name']
         namespace = namespace or unique_namespace(
             asset + "_",
@@ -34,22 +36,20 @@ class MultiverseUsdLoader(load.LoaderPlugin):
             suffix="_",
         )

-        # Create the shape
+        # Make sure we can load the plugin
         cmds.loadPlugin("MultiverseForMaya", quiet=True)
+        import multiverse

+        # Create the shape
         shape = None
         transform = None
         with maintained_selection():
             cmds.namespace(addNamespace=namespace)
             with namespaced(namespace, new=False):
-                import multiverse
                 shape = multiverse.CreateUsdCompound(self.fname)
                 transform = cmds.listRelatives(
                     shape, parent=True, fullPath=True)[0]

-        # Lock the shape node so the user cannot delete it.
-        cmds.lockNode(shape, lock=True)
-
         nodes = [transform, shape]
         self[:] = nodes
@@ -70,15 +70,34 @@ class MultiverseUsdLoader(load.LoaderPlugin):
         shapes = cmds.ls(members, type="mvUsdCompoundShape")
         assert shapes, "Cannot find mvUsdCompoundShape in container"

-        path = get_representation_path(representation)
+        project_name = representation["context"]["project"]["name"]
+        prev_representation_id = cmds.getAttr("{}.representation".format(node))
+        prev_representation = get_representation_by_id(project_name,
+                                                       prev_representation_id)
+        prev_path = os.path.normpath(prev_representation["data"]["path"])
+
+        # Make sure we can load the plugin
         cmds.loadPlugin("MultiverseForMaya", quiet=True)
         import multiverse

         for shape in shapes:
-            multiverse.SetUsdCompoundAssetPaths(shape, [path])
+            asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
+            asset_paths = [os.path.normpath(p) for p in asset_paths]
+
+            assert asset_paths.count(prev_path) == 1, \
+                "Couldn't find matching path (or too many)"
+            prev_path_idx = asset_paths.index(prev_path)
+
+            path = get_representation_path(representation)
+            asset_paths[prev_path_idx] = path
+
+            multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)

         cmds.setAttr("{}.representation".format(node),
                      str(representation["_id"]),
                      type="string")
         mel.eval('refreshEditorTemplates;')

     def switch(self, container, representation):
         self.update(container, representation)
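Aside (not part of the diff): the reworked `update()` above no longer overwrites the whole asset-path list; it normalizes the existing paths, requires exactly one entry matching the previous representation's path, and substitutes the new path at that index. The same logic in isolation, with plain lists standing in for the Multiverse calls (`swap_asset_path` and the paths are hypothetical):

```python
import os


def swap_asset_path(asset_paths, prev_path, new_path):
    """Replace exactly one occurrence of prev_path with new_path."""
    paths = [os.path.normpath(p) for p in asset_paths]
    prev_path = os.path.normpath(prev_path)
    # Mirrors the assert in update(): refuse ambiguous or missing matches.
    assert paths.count(prev_path) == 1, \
        "Couldn't find matching path (or too many)"
    paths[paths.index(prev_path)] = new_path
    return paths


print(swap_asset_path(["/show/v001/a.usd", "/show/keep.usd"],
                      "/show/v001/a.usd", "/show/v002/a.usd"))
```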
openpype/hosts/maya/plugins/load/load_multiverse_usd_over.py (new file, 132 lines)

@@ -0,0 +1,132 @@
 # -*- coding: utf-8 -*-
 import maya.cmds as cmds
 from maya import mel
 import os

 import qargparse

 from openpype.pipeline import (
     load,
     get_representation_path
 )
 from openpype.hosts.maya.api.lib import (
     maintained_selection
 )
 from openpype.hosts.maya.api.pipeline import containerise
 from openpype.client import get_representation_by_id


 class MultiverseUsdOverLoader(load.LoaderPlugin):
     """Reference file"""

     families = ["mvUsdOverride"]
     representations = ["usda", "usd", "udsz"]

     label = "Load Usd Override into Compound"
     order = -10
     icon = "code-fork"
     color = "orange"

     options = [
         qargparse.String(
             "Which Compound",
             label="Compound",
             help="Select which compound to add this as a layer to."
         )
     ]

     def load(self, context, name=None, namespace=None, options=None):
         current_usd = cmds.ls(selection=True,
                               type="mvUsdCompoundShape",
                               dag=True,
                               long=True)
         if len(current_usd) != 1:
             self.log.error("Current selection invalid: '{}', "
                            "must contain exactly 1 mvUsdCompoundShape."
                            "".format(current_usd))
             return

         # Make sure we can load the plugin
         cmds.loadPlugin("MultiverseForMaya", quiet=True)
         import multiverse

         nodes = current_usd
         with maintained_selection():
             multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname)

         namespace = current_usd[0].split("|")[1].split(":")[0]

         container = containerise(
             name=name,
             namespace=namespace,
             nodes=nodes,
             context=context,
             loader=self.__class__.__name__)

         cmds.addAttr(container, longName="mvUsdCompoundShape",
                      niceName="mvUsdCompoundShape", dataType="string")
         cmds.setAttr(container + ".mvUsdCompoundShape",
                      current_usd[0], type="string")

         return container

     def update(self, container, representation):
         # type: (dict, dict) -> None
         """Update container with specified representation."""

         cmds.loadPlugin("MultiverseForMaya", quiet=True)
         import multiverse

         node = container['objectName']
         assert cmds.objExists(node), "Missing container"

         members = cmds.sets(node, query=True) or []
         shapes = cmds.ls(members, type="mvUsdCompoundShape")
         assert shapes, "Cannot find mvUsdCompoundShape in container"

         mvShape = container['mvUsdCompoundShape']
         assert mvShape, "Missing mv source"

         project_name = representation["context"]["project"]["name"]
         prev_representation_id = cmds.getAttr("{}.representation".format(node))
         prev_representation = get_representation_by_id(project_name,
                                                        prev_representation_id)
         prev_path = os.path.normpath(prev_representation["data"]["path"])

         path = get_representation_path(representation)

         for shape in shapes:
             asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
             asset_paths = [os.path.normpath(p) for p in asset_paths]

             assert asset_paths.count(prev_path) == 1, \
                 "Couldn't find matching path (or too many)"
             prev_path_idx = asset_paths.index(prev_path)
             asset_paths[prev_path_idx] = path
             multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)

         cmds.setAttr("{}.representation".format(node),
                      str(representation["_id"]),
                      type="string")
         mel.eval('refreshEditorTemplates;')

     def switch(self, container, representation):
         self.update(container, representation)

     def remove(self, container):
         # type: (dict) -> None
         """Remove loaded container."""
         # Delete container and its contents
         if cmds.objExists(container['objectName']):
             members = cmds.sets(container['objectName'], query=True) or []
             cmds.delete([container['objectName']] + members)

         # Remove the namespace, if empty
         namespace = container['namespace']
         if cmds.namespace(exists=namespace):
             members = cmds.namespaceInfo(namespace, listNamespace=True)
             if not members:
                 cmds.namespace(removeNamespace=namespace)
             else:
                 self.log.warning("Namespace not deleted because it "
                                  "still has members: %s", namespace)
@@ -26,7 +26,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                 "rig",
                 "camerarig",
                 "xgen",
-                "staticMesh"]
+                "staticMesh",
+                "mvLook"]
     representations = ["ma", "abc", "fbx", "mb"]

     label = "Reference"
@@ -74,13 +74,6 @@ class CollectInstances(pyblish.api.ContextPlugin):
         objectset = cmds.ls("*.id", long=True, type="objectSet",
                             recursive=True, objectsOnly=True)

-        ctx_frame_start = context.data['frameStart']
-        ctx_frame_end = context.data['frameEnd']
-        ctx_handle_start = context.data['handleStart']
-        ctx_handle_end = context.data['handleEnd']
-        ctx_frame_start_handle = context.data['frameStartHandle']
-        ctx_frame_end_handle = context.data['frameEndHandle']
-
         context.data['objectsets'] = objectset
         for objset in objectset:
@@ -156,34 +149,20 @@ class CollectInstances(pyblish.api.ContextPlugin):
             # Append start frame and end frame to label if present
             if "frameStart" and "frameEnd" in data:

-                # if frame range on maya set is the same as full shot range
-                # adjust the values to match the asset data
-                if (ctx_frame_start_handle == data["frameStart"]
-                        and ctx_frame_end_handle == data["frameEnd"]):  # noqa: W503, E501
-                    data["frameStartHandle"] = ctx_frame_start_handle
-                    data["frameEndHandle"] = ctx_frame_end_handle
-                    data["frameStart"] = ctx_frame_start
-                    data["frameEnd"] = ctx_frame_end
-                    data["handleStart"] = ctx_handle_start
-                    data["handleEnd"] = ctx_handle_end
-
-                # if there are user values on start and end frame not matching
-                # the asset, use them
-
-                else:
-                    if "handles" in data:
-                        data["handleStart"] = data["handles"]
-                        data["handleEnd"] = data["handles"]
-                    else:
-                        data["handleStart"] = 0
-                        data["handleEnd"] = 0
-
-                data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # noqa: E501
-                data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]  # noqa: E501
+                # Backwards compatibility for 'handles' data
+                if "handles" in data:
+                    data["handleStart"] = data["handles"]
+                    data["handleEnd"] = data["handles"]
+                    data.pop('handles')
+
+                # Take handles from context if not set locally on the instance
+                for key in ["handleStart", "handleEnd"]:
+                    if key not in data:
+                        data[key] = context.data[key]
+
+                data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # noqa: E501
+                data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]  # noqa: E501

                 label += "  [{0}-{1}]".format(int(data["frameStartHandle"]),
                                               int(data["frameEndHandle"]))
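Aside (not part of the diff): the new fallback order is the legacy single `handles` value on the instance first, then per-side handles from the context, after which the handle-inclusive range is derived. A worked example of that arithmetic with made-up values:

```python
data = {"frameStart": 1001, "frameEnd": 1100}
context_data = {"handleStart": 10, "handleEnd": 10}

# Take handles from context if not set locally on the instance
for key in ("handleStart", "handleEnd"):
    data.setdefault(key, context_data[key])

data["frameStartHandle"] = data["frameStart"] - data["handleStart"]  # 991
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]        # 1110
```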
|
@ -440,7 +440,8 @@ class CollectLook(pyblish.api.InstancePlugin):
|
|||
for res in self.collect_resources(n):
|
||||
instance.data["resources"].append(res)
|
||||
|
||||
self.log.info("Collected resources: {}".format(instance.data["resources"]))
|
||||
self.log.info("Collected resources: {}".format(
|
||||
instance.data["resources"]))
|
||||
|
||||
# Log warning when no relevant sets were retrieved for the look.
|
||||
if (
|
||||
|
|
@@ -548,6 +549,11 @@ class CollectLook(pyblish.api.InstancePlugin):
             if not cmds.attributeQuery(attr, node=node, exists=True):
                 continue
             attribute = "{}.{}".format(node, attr)
+            # We don't support mixed-type attributes yet.
+            if cmds.attributeQuery(attr, node=node, multi=True):
+                self.log.warning("Attribute '{}' is mixed-type and is "
+                                 "not supported yet.".format(attribute))
+                continue
             if cmds.getAttr(attribute, type=True) == "message":
                 continue
             node_attributes[attr] = cmds.getAttr(attribute)
@@ -21,37 +21,68 @@ COLOUR_SPACES = ['sRGB', 'linear', 'auto']
 MIPMAP_EXTENSIONS = ['tdl']


-def get_look_attrs(node):
-    """Returns attributes of a node that are important for the look.
-
-    These are the "changed" attributes (those that have edits applied
-    in the current scene).
-
-    Returns:
-        list: Attribute names to extract
-    """
-    # When referenced get only attributes that are "changed since file open"
-    # which includes any reference edits, otherwise take *all* user defined
-    # attributes
-    is_referenced = cmds.referenceQuery(node, isNodeReferenced=True)
-    result = cmds.listAttr(node, userDefined=True,
-                           changedSinceFileOpen=is_referenced) or []
-
-    # `cbId` is added when a scene is saved, ignore by default
-    if "cbId" in result:
-        result.remove("cbId")
-
-    # For shapes allow render stat changes
-    if cmds.objectType(node, isAType="shape"):
-        attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
-        for attr in attrs:
-            if attr in SHAPE_ATTRS:
-                result.append(attr)
-            elif attr.startswith('ai'):
-                result.append(attr)
-
-    return result
+class _NodeTypeAttrib(object):
+    """docstring for _NodeType"""
+
+    def __init__(self, name, fname, computed_fname=None, colour_space=None):
+        self.name = name
+        self.fname = fname
+        self.computed_fname = computed_fname or fname
+        self.colour_space = colour_space or "colorSpace"
+
+    def get_fname(self, node):
+        return "{}.{}".format(node, self.fname)
+
+    def get_computed_fname(self, node):
+        return "{}.{}".format(node, self.computed_fname)
+
+    def get_colour_space(self, node):
+        return "{}.{}".format(node, self.colour_space)
+
+    def __str__(self):
+        return "_NodeTypeAttrib(name={}, fname={}, "
+        "computed_fname={}, colour_space={})".format(
+            self.name, self.fname, self.computed_fname, self.colour_space)
+
+
+NODETYPES = {
+    "file": [_NodeTypeAttrib("file", "fileTextureName",
+                             "computedFileTextureNamePattern")],
+    "aiImage": [_NodeTypeAttrib("aiImage", "filename")],
+    "RedshiftNormalMap": [_NodeTypeAttrib("RedshiftNormalMap", "tex0")],
+    "dlTexture": [_NodeTypeAttrib("dlTexture", "textureFile",
+                                  None, "textureFile_meta_colorspace")],
+    "dlTriplanar": [_NodeTypeAttrib("dlTriplanar", "colorTexture",
+                                    None, "colorTexture_meta_colorspace"),
+                    _NodeTypeAttrib("dlTriplanar", "floatTexture",
+                                    None, "floatTexture_meta_colorspace"),
+                    _NodeTypeAttrib("dlTriplanar", "heightTexture",
+                                    None, "heightTexture_meta_colorspace")]
+}
+
+
+def get_file_paths_for_node(node):
+    """Gets all the file paths in this node.
+
+    Returns all filepaths that this node references. Some node types only
+    reference one, but others, like dlTriplanar, can reference 3.
+
+    Args:
+        node (str): Name of the Maya node
+
+    Returns
+        list(str): A list with all evaluated maya attributes for filepaths.
+    """
+    node_type = cmds.nodeType(node)
+    if node_type not in NODETYPES:
+        return []
+
+    paths = []
+    for node_type_attr in NODETYPES[node_type]:
+        fname = cmds.getAttr("{}.{}".format(node, node_type_attr.fname))
+        paths.append(fname)
+    return paths


 def node_uses_image_sequence(node):
@@ -69,13 +100,29 @@ def node_uses_image_sequence(node):
     """

     # useFrameExtension indicates an explicit image sequence
-    node_path = get_file_node_path(node).lower()
+    paths = get_file_node_paths(node)
+    paths = [path.lower() for path in paths]

     # The following tokens imply a sequence
     patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]

-    return (cmds.getAttr('%s.useFrameExtension' % node) or
-            any(pattern in node_path for pattern in patterns))
+    def pattern_in_paths(patterns, paths):
+        """Helper function for checking to see if a pattern is contained
+        in the list of paths"""
+        for pattern in patterns:
+            for path in paths:
+                if pattern in path:
+                    return True
+        return False
+
+    node_type = cmds.nodeType(node)
+    if node_type == 'dlTexture':
+        return (cmds.getAttr('{}.useImageSequence'.format(node)) or
+                pattern_in_paths(patterns, paths))
+    elif node_type == "file":
+        return (cmds.getAttr('{}.useFrameExtension'.format(node)) or
+                pattern_in_paths(patterns, paths))
+    return False


 def seq_to_glob(path):
@@ -132,7 +179,7 @@ def seq_to_glob(path):
     return path


-def get_file_node_path(node):
+def get_file_node_paths(node):
     """Get the file path used by a Maya file node.

     Args:
|||
"<uvtile>"]
|
||||
lower = texture_pattern.lower()
|
||||
if any(pattern in lower for pattern in patterns):
|
||||
return texture_pattern
|
||||
return [texture_pattern]
|
||||
|
||||
if cmds.nodeType(node) == 'aiImage':
|
||||
return cmds.getAttr('{0}.filename'.format(node))
|
||||
if cmds.nodeType(node) == 'RedshiftNormalMap':
|
||||
return cmds.getAttr('{}.tex0'.format(node))
|
||||
|
||||
# otherwise use fileTextureName
|
||||
return cmds.getAttr('{0}.fileTextureName'.format(node))
|
||||
return get_file_paths_for_node(node)
|
||||
|
||||
|
||||
def get_file_node_files(node):
|
||||
|
|
@@ -181,15 +222,15 @@ def get_file_node_files(node):

     """

-    path = get_file_node_path(node)
-    path = cmds.workspace(expandName=path)
+    paths = get_file_node_paths(node)
+    paths = [cmds.workspace(expandName=path) for path in paths]
     if node_uses_image_sequence(node):
-        glob_pattern = seq_to_glob(path)
-        return glob.glob(glob_pattern)
-    elif os.path.exists(path):
-        return [path]
+        globs = []
+        for path in paths:
+            globs += glob.glob(seq_to_glob(path))
+        return globs
     else:
-        return []
+        return list(filter(lambda x: os.path.exists(x), paths))


 def get_mipmap(fname):
@@ -211,6 +252,11 @@ def is_mipmap(fname):
 class CollectMultiverseLookData(pyblish.api.InstancePlugin):
     """Collect Multiverse Look

+    Searches through the overrides finding all material overrides. From there
+    it extracts the shading group and then finds all texture files in the
+    shading group network. It also checks for mipmap versions of texture files
+    and adds them to the resouces to get published.
+
     """

     order = pyblish.api.CollectorOrder + 0.2
@@ -258,12 +304,20 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
                     shadingGroup), "members": list()}

                 # The SG may reference files, add those too!
-                history = cmds.listHistory(shadingGroup)
-                files = cmds.ls(history, type="file", long=True)
+                history = cmds.listHistory(
+                    shadingGroup, allConnections=True)
+
+                # We need to iterate over node_types since `cmds.ls` may
+                # error out if we don't have the appropriate plugin loaded.
+                files = []
+                for node_type in NODETYPES.keys():
+                    files += cmds.ls(history,
+                                     type=node_type,
+                                     long=True)

                 for f in files:
                     resources = self.collect_resource(f, publishMipMap)
-                    instance.data["resources"].append(resources)
+                    instance.data["resources"] += resources

             elif isinstance(matOver, multiverse.MaterialSourceUsdPath):
                 # TODO: Handle this later.
@@ -284,69 +338,63 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
             dict
         """

-        self.log.debug("processing: {}".format(node))
-        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
-            self.log.error(
-                "Unsupported file node: {}".format(cmds.nodeType(node)))
+        node_type = cmds.nodeType(node)
+        self.log.debug("processing: {}/{}".format(node, node_type))
+
+        if node_type not in NODETYPES:
+            self.log.error("Unsupported file node: {}".format(node_type))
             raise AssertionError("Unsupported file node")

-        if cmds.nodeType(node) == 'file':
-            self.log.debug("  - file node")
-            attribute = "{}.fileTextureName".format(node)
-            computed_attribute = "{}.computedFileTextureNamePattern".format(
-                node)
-        elif cmds.nodeType(node) == 'aiImage':
-            self.log.debug("aiImage node")
-            attribute = "{}.filename".format(node)
-            computed_attribute = attribute
-        elif cmds.nodeType(node) == 'RedshiftNormalMap':
-            self.log.debug("RedshiftNormalMap node")
-            attribute = "{}.tex0".format(node)
-            computed_attribute = attribute
+        resources = []
+        for node_type_attr in NODETYPES[node_type]:
+            fname_attrib = node_type_attr.get_fname(node)
+            computed_fname_attrib = node_type_attr.get_computed_fname(node)
+            colour_space_attrib = node_type_attr.get_colour_space(node)

-        source = cmds.getAttr(attribute)
-        self.log.info("  - file source: {}".format(source))
-        color_space_attr = "{}.colorSpace".format(node)
-        try:
-            color_space = cmds.getAttr(color_space_attr)
-        except ValueError:
-            # node doesn't have colorspace attribute
-            color_space = "Raw"
-        # Compare with the computed file path, e.g. the one with the <UDIM>
-        # pattern in it, to generate some logging information about this
-        # difference
-        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        computed_source = cmds.getAttr(computed_attribute)
-        if source != computed_source:
-            self.log.debug("Detected computed file pattern difference "
-                           "from original pattern: {0} "
-                           "({1} -> {2})".format(node,
-                                                 source,
-                                                 computed_source))
+            source = cmds.getAttr(fname_attrib)
+            color_space = "Raw"
+            try:
+                color_space = cmds.getAttr(colour_space_attrib)
+            except ValueError:
+                # node doesn't have colorspace attribute, use "Raw" from before
+                pass
+            # Compare with the computed file path, e.g. the one with the <UDIM>
+            # pattern in it, to generate some logging information about this
+            # difference
+            # computed_attribute = "{}.computedFileTextureNamePattern".format(node)  # noqa
+            computed_source = cmds.getAttr(computed_fname_attrib)
+            if source != computed_source:
+                self.log.debug("Detected computed file pattern difference "
+                               "from original pattern: {0} "
+                               "({1} -> {2})".format(node,
+                                                     source,
+                                                     computed_source))

-        # We replace backslashes with forward slashes because V-Ray
-        # can't handle the UDIM files with the backslashes in the
-        # paths as the computed patterns
-        source = source.replace("\\", "/")
+            # We replace backslashes with forward slashes because V-Ray
+            # can't handle the UDIM files with the backslashes in the
+            # paths as the computed patterns
+            source = source.replace("\\", "/")

-        files = get_file_node_files(node)
-        files = self.handle_files(files, publishMipMap)
-        if len(files) == 0:
-            self.log.error("No valid files found from node `%s`" % node)
+            files = get_file_node_files(node)
+            files = self.handle_files(files, publishMipMap)
+            if len(files) == 0:
+                self.log.error("No valid files found from node `%s`" % node)

-        self.log.info("collection of resource done:")
-        self.log.info("  - node: {}".format(node))
-        self.log.info("  - attribute: {}".format(attribute))
-        self.log.info("  - source: {}".format(source))
-        self.log.info("  - file: {}".format(files))
-        self.log.info("  - color space: {}".format(color_space))
+            self.log.info("collection of resource done:")
+            self.log.info("  - node: {}".format(node))
+            self.log.info("  - attribute: {}".format(fname_attrib))
+            self.log.info("  - source: {}".format(source))
+            self.log.info("  - file: {}".format(files))
+            self.log.info("  - color space: {}".format(color_space))

-        # Define the resource
-        return {"node": node,
-                "attribute": attribute,
-                "source": source,  # required for resources
-                "files": files,
-                "color_space": color_space}  # required for resources
+            # Define the resource
+            resource = {"node": node,
+                        "attribute": fname_attrib,
+                        "source": source,  # required for resources
+                        "files": files,
+                        "color_space": color_space}  # required for resources
+            resources.append(resource)
+        return resources

     def handle_files(self, files, publishMipMap):
         """This will go through all the files and make sure that they are
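Aside (not part of the diff): this whole refactor is driven by the `NODETYPES` table, where each Maya node type maps to one or more `_NodeTypeAttrib` entries naming its file-path attribute, computed-pattern attribute, and colour-space attribute. A quick illustration of the accessors, assuming the class exactly as defined in the hunks above (the node name is hypothetical):

```python
attr = _NodeTypeAttrib("file", "fileTextureName",
                       "computedFileTextureNamePattern")
print(attr.get_fname("myTexture1"))           # myTexture1.fileTextureName
print(attr.get_computed_fname("myTexture1"))  # myTexture1.computedFileTextureNamePattern
print(attr.get_colour_space("myTexture1"))    # myTexture1.colorSpace (the default)
```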
openpype/hosts/maya/plugins/publish/extract_import_reference.py (new file, 152 lines)

@@ -0,0 +1,152 @@
 import os
 import sys

 from maya import cmds

 import pyblish.api
 import tempfile

 from openpype.lib import run_subprocess
 from openpype.pipeline import publish
 from openpype.hosts.maya.api import lib


 class ExtractImportReference(publish.Extractor):
     """
     Extract the scene with imported reference.
     The temp scene with imported reference is
     published for rendering if this extractor is activated
     """

     label = "Extract Import Reference"
     order = pyblish.api.ExtractorOrder - 0.48
     hosts = ["maya"]
     families = ["renderlayer", "workfile"]
     optional = True
     tmp_format = "_tmp"

     @classmethod
     def apply_settings(cls, project_setting, system_settings):
         cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"]  # noqa

     def process(self, instance):
         ext_mapping = (
             instance.context.data["project_settings"]["maya"]["ext_mapping"]
         )
         if ext_mapping:
             self.log.info("Looking in settings for scene type ...")
             # use extension mapping for first family found
             for family in self.families:
                 try:
                     self.scene_type = ext_mapping[family]
                     self.log.info(
                         "Using {} as scene type".format(self.scene_type))
                     break
                 except KeyError:
                     # set scene type to ma
                     self.scene_type = "ma"

         _scene_type = ("mayaAscii"
                        if self.scene_type == "ma"
                        else "mayaBinary")

         dir_path = self.staging_dir(instance)
         # named the file with imported reference
         if instance.name == "Main":
             return
         tmp_name = instance.name + self.tmp_format
         current_name = cmds.file(query=True, sceneName=True)
         ref_scene_name = "{0}.{1}".format(tmp_name, self.scene_type)

         reference_path = os.path.join(dir_path, ref_scene_name)
         tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name

         self.log.info("Performing extraction..")

         # This generates script for mayapy to take care of reference
         # importing outside current session. It is passing current scene
         # name and destination scene name.
         script = ("""
 # -*- coding: utf-8 -*-
 '''Script to import references to given scene.'''
 import maya.standalone
 maya.standalone.initialize()
 # scene names filled by caller
 current_name = "{current_name}"
 ref_scene_name = "{ref_scene_name}"
 print(">>> Opening {{}} ...".format(current_name))
 cmds.file(current_name, open=True, force=True)
 print(">>> Processing references")
 all_reference = cmds.file(q=True, reference=True) or []
 for ref in all_reference:
     if cmds.referenceQuery(ref, il=True):
         cmds.file(ref, importReference=True)

     nested_ref = cmds.file(q=True, reference=True)
     if nested_ref:
         for new_ref in nested_ref:
             if new_ref not in all_reference:
                 all_reference.append(new_ref)

 print(">>> Finish importing references")
 print(">>> Saving scene as {{}}".format(ref_scene_name))

 cmds.file(rename=ref_scene_name)
 cmds.file(save=True, force=True)
 print("*** Done")
         """).format(current_name=current_name, ref_scene_name=tmp_path)
         mayapy_exe = os.path.join(os.getenv("MAYA_LOCATION"), "bin", "mayapy")
         if sys.platform == "windows":
             mayapy_exe += ".exe"
         mayapy_exe = os.path.normpath(mayapy_exe)
         # can't use TemporaryNamedFile as that can't be opened in another
         # process until handles are closed by context manager.
         with tempfile.TemporaryDirectory() as tmp_dir_name:
             tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
             self.log.info("Using script file: {}".format(tmp_script_path))
             with open(tmp_script_path, "wt") as tmp:
                 tmp.write(script)

             try:
                 run_subprocess([mayapy_exe, tmp_script_path])
             except Exception:
                 self.log.error("Import reference failed", exc_info=True)
                 raise

         with lib.maintained_selection():
             cmds.select(all=True, noExpand=True)
             cmds.file(reference_path,
                       force=True,
                       typ=_scene_type,
                       exportSelected=True,
                       channels=True,
                       constraints=True,
                       shader=True,
                       expressions=True,
                       constructionHistory=True)

         instance.context.data["currentFile"] = tmp_path

         if "files" not in instance.data:
             instance.data["files"] = []
         instance.data["files"].append(ref_scene_name)

         if instance.data.get("representations") is None:
             instance.data["representations"] = []

         ref_representation = {
             "name": self.scene_type,
             "ext": self.scene_type,
             "files": ref_scene_name,
             "stagingDir": os.path.dirname(current_name),
             "outputName": "imported"
         }
         self.log.info("%s" % ref_representation)

         instance.data["representations"].append(ref_representation)

         self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name,
                                                              reference_path))
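Aside (not part of the diff): the extractor writes the generated script into a temporary directory and runs it through `mayapy` outside the current session, because a named temporary file cannot be reopened by the child process while its handle is held. A stripped-down sketch of that shell-out using only the standard library (paths hypothetical; the plugin itself uses the `openpype.lib.run_subprocess` wrapper instead of `subprocess`):

```python
import os
import subprocess
import sys
import tempfile

script = 'print("hello from mayapy")'
mayapy = os.path.join(os.environ["MAYA_LOCATION"], "bin", "mayapy")
if sys.platform == "win32":
    mayapy += ".exe"

with tempfile.TemporaryDirectory() as tmp_dir:
    script_path = os.path.join(tmp_dir, "import_ref.py")
    with open(script_path, "wt") as handle:
        handle.write(script)
    # The file handle is closed here, so the child process can open it.
    subprocess.check_call([mayapy, script_path])
```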
@@ -73,12 +73,12 @@ class ExtractMultiverseLook(publish.Extractor):
             "writeAll": False,
             "writeTransforms": False,
             "writeVisibility": False,
-            "writeAttributes": False,
+            "writeAttributes": True,
             "writeMaterials": True,
             "writeVariants": False,
             "writeVariantsDefinition": False,
             "writeActiveState": False,
-            "writeNamespaces": False,
+            "writeNamespaces": True,
             "numTimeSamples": 1,
             "timeSamplesSpan": 0.0
         }
@@ -2,7 +2,9 @@ import os
 import six

 from maya import cmds
+from maya import mel

+import pyblish.api
 from openpype.pipeline import publish
 from openpype.hosts.maya.api.lib import maintained_selection
@@ -26,7 +28,7 @@ class ExtractMultiverseUsd(publish.Extractor):
     label = "Extract Multiverse USD Asset"
     hosts = ["maya"]
-    families = ["mvUsd"]
+    families = ["usd"]
     scene_type = "usd"
     file_formats = ["usd", "usda", "usdz"]
@@ -87,7 +89,7 @@ class ExtractMultiverseUsd(publish.Extractor):
         return {
             "stripNamespaces": False,
             "mergeTransformAndShape": False,
-            "writeAncestors": True,
+            "writeAncestors": False,
             "flattenParentXforms": False,
             "writeSparseOverrides": False,
             "useMetaPrimPath": False,
@@ -147,7 +149,15 @@ class ExtractMultiverseUsd(publish.Extractor):
         return options

+    def get_default_options(self):
+        self.log.info("ExtractMultiverseUsd get_default_options")
+        return self.default_options
+
+    def filter_members(self, members):
+        return members
+
     def process(self, instance):
-
         # Load plugin first
         cmds.loadPlugin("MultiverseForMaya", quiet=True)
|||
file_path = file_path.replace('\\', '/')
|
||||
|
||||
# Parse export options
|
||||
options = self.default_options
|
||||
options = self.get_default_options()
|
||||
options = self.parse_overrides(instance, options)
|
||||
self.log.info("Export options: {0}".format(options))
|
||||
|
||||
|
|
@@ -170,27 +180,35 @@ class ExtractMultiverseUsd(publish.Extractor):

         with maintained_selection():
             members = instance.data("setMembers")
-            self.log.info('Collected object {}'.format(members))
+            self.log.info('Collected objects: {}'.format(members))
+            members = self.filter_members(members)
+            if not members:
+                self.log.error('No members!')
+                return
+            self.log.info(' - filtered: {}'.format(members))

             import multiverse

             time_opts = None
             frame_start = instance.data['frameStart']
             frame_end = instance.data['frameEnd']
-            handle_start = instance.data['handleStart']
-            handle_end = instance.data['handleEnd']
-            step = instance.data['step']
-            fps = instance.data['fps']
             if frame_end != frame_start:
                 time_opts = multiverse.TimeOptions()

                 time_opts.writeTimeRange = True
+
+                handle_start = instance.data['handleStart']
+                handle_end = instance.data['handleEnd']
+
                 time_opts.frameRange = (
                     frame_start - handle_start, frame_end + handle_end)
-                time_opts.frameIncrement = step
-                time_opts.numTimeSamples = instance.data["numTimeSamples"]
-                time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
-                time_opts.framePerSecond = fps
+                time_opts.frameIncrement = instance.data['step']
+                time_opts.numTimeSamples = instance.data.get(
+                    'numTimeSamples', options['numTimeSamples'])
+                time_opts.timeSamplesSpan = instance.data.get(
+                    'timeSamplesSpan', options['timeSamplesSpan'])
+                time_opts.framePerSecond = instance.data.get(
+                    'fps', mel.eval('currentTimeUnitToFPS()'))

             asset_write_opts = multiverse.AssetWriteOptions(time_opts)
             options_discard_keys = {
@@ -203,11 +221,15 @@ class ExtractMultiverseUsd(publish.Extractor):
             'step',
             'fps'
         }
+        self.log.debug("Write Options:")
         for key, value in options.items():
             if key in options_discard_keys:
                 continue

+            self.log.debug(" - {}={}".format(key, value))
             setattr(asset_write_opts, key, value)

+        self.log.info('WriteAsset: {} / {}'.format(file_path, members))
         multiverse.WriteAsset(file_path, members, asset_write_opts)

         if "representations" not in instance.data:
@@ -223,3 +245,33 @@ class ExtractMultiverseUsd(publish.Extractor):
         self.log.info("Extracted instance {} to {}".format(
             instance.name, file_path))
+
+
+class ExtractMultiverseUsdAnim(ExtractMultiverseUsd):
+    """Extractor for Multiverse USD Animation Sparse Cache data.
+
+    This will extract the sparse cache data from the scene and generate a
+    USD file with all the animation data.
+
+    Upon publish a .usd sparse cache will be written.
+    """
+    label = "Extract Multiverse USD Animation Sparse Cache"
+    families = ["animation", "usd"]
+    match = pyblish.api.Subset
+
+    def get_default_options(self):
+        anim_options = self.default_options
+        anim_options["writeSparseOverrides"] = True
+        anim_options["writeUsdAttributes"] = True
+        anim_options["stripNamespaces"] = True
+        return anim_options
+
+    def filter_members(self, members):
+        out_set = next((i for i in members if i.endswith("out_SET")), None)
+
+        if out_set is None:
+            self.log.warning("Expecting out_SET")
+            return None
+
+        members = cmds.ls(cmds.sets(out_set, query=True), long=True)
+        return members
@@ -5,6 +5,11 @@ from openpype.pipeline.publish import (
     RepairAction,
     ValidateContentsOrder,
 )
+from openpype.hosts.maya.api.lib_rendersetup import (
+    get_attr_overrides,
+    get_attr_in_layer,
+)
+from maya.app.renderSetup.model.override import AbsOverride


 class ValidateFrameRange(pyblish.api.InstancePlugin):
@@ -92,10 +97,86 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
         """
         Repair instance container to match asset data.
         """
-        cmds.setAttr(
-            "{}.frameStart".format(instance.data["name"]),
-            instance.context.data.get("frameStartHandle"))
-
-        cmds.setAttr(
-            "{}.frameEnd".format(instance.data["name"]),
-            instance.context.data.get("frameEndHandle"))
+        if "renderlayer" in instance.data.get("families"):
+            # Special behavior for renderlayers
+            cls.repair_renderlayer(instance)
+            return
+
+        node = instance.data["name"]
+        context = instance.context
+
+        frame_start_handle = int(context.data.get("frameStartHandle"))
+        frame_end_handle = int(context.data.get("frameEndHandle"))
+        handle_start = int(context.data.get("handleStart"))
+        handle_end = int(context.data.get("handleEnd"))
+        frame_start = int(context.data.get("frameStart"))
+        frame_end = int(context.data.get("frameEnd"))
+
+        # Start
+        if cmds.attributeQuery("handleStart", node=node, exists=True):
+            cmds.setAttr("{}.handleStart".format(node), handle_start)
+            cmds.setAttr("{}.frameStart".format(node), frame_start)
+        else:
+            # Include start handle in frame start if no separate handleStart
+            # attribute exists on the node
+            cmds.setAttr("{}.frameStart".format(node), frame_start_handle)
+
+        # End
+        if cmds.attributeQuery("handleEnd", node=node, exists=True):
+            cmds.setAttr("{}.handleEnd".format(node), handle_end)
+            cmds.setAttr("{}.frameEnd".format(node), frame_end)
+        else:
+            # Include end handle in frame end if no separate handleEnd
+            # attribute exists on the node
+            cmds.setAttr("{}.frameEnd".format(node), frame_end_handle)
+
+    @classmethod
+    def repair_renderlayer(cls, instance):
+        """Apply frame range in render settings"""
+
+        layer = instance.data["setMembers"]
+        context = instance.context
+
+        start_attr = "defaultRenderGlobals.startFrame"
+        end_attr = "defaultRenderGlobals.endFrame"
+
+        frame_start_handle = int(context.data.get("frameStartHandle"))
+        frame_end_handle = int(context.data.get("frameEndHandle"))
+
+        cls._set_attr_in_layer(start_attr, layer, frame_start_handle)
+        cls._set_attr_in_layer(end_attr, layer, frame_end_handle)
+
+    @classmethod
+    def _set_attr_in_layer(cls, node_attr, layer, value):
+
+        if get_attr_in_layer(node_attr, layer=layer) == value:
+            # Already ok. This can happen if you have multiple renderlayers
+            # validated and there are no frame range overrides. The first
+            # layer's repair would have fixed the global value already
+            return
+
+        overrides = list(get_attr_overrides(node_attr, layer=layer))
+        if overrides:
+            # We set the last absolute override if it is an absolute override
+            # otherwise we'll add an Absolute override
+            last_override = overrides[-1][1]
+            if not isinstance(last_override, AbsOverride):
+                collection = last_override.parent()
+                node, attr = node_attr.split(".", 1)
+                last_override = collection.createAbsoluteOverride(node, attr)
+
+            cls.log.debug("Setting {attr} absolute override in "
+                          "layer '{layer}': {value}".format(layer=layer,
+                                                            attr=node_attr,
+                                                            value=value))
+            cmds.setAttr(last_override.name() + ".attrValue", value)
+
+        else:
+            # Set the attribute directly
+            # (Note that this will set the global attribute)
+            cls.log.debug("Setting global {attr}: {value}".format(
+                attr=node_attr,
+                value=value
+            ))
+            cmds.setAttr(node_attr, value)
@@ -80,13 +80,14 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
     def is_or_has_mipmap(self, fname, files):
         ext = os.path.splitext(fname)[1][1:]
         if ext in MIPMAP_EXTENSIONS:
-            self.log.debug("Is a mipmap '{}'".format(fname))
+            self.log.debug("  - Is a mipmap '{}'".format(fname))
             return True

         for colour_space in COLOUR_SPACES:
             for mipmap_ext in MIPMAP_EXTENSIONS:
                 mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
                 if mipmap_fname in files:
-                    self.log.debug("Has a mipmap '{}'".format(fname))
+                    self.log.debug(
+                        "  - Has a mipmap '{}'".format(mipmap_fname))
                     return True
         return False
@@ -21,6 +21,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
     - nurbsSurface: _NRB
     - locator: _LOC
     - null/group: _GRP
+    Suffices can also be overriden by project settings.

     .. warning::
         This grabs the first child shape as a reference and doesn't use the
@@ -44,6 +45,13 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
     ALLOW_IF_NOT_IN_SUFFIX_TABLE = True

+    @classmethod
+    def get_table_for_invalid(cls):
+        ss = []
+        for k, v in cls.SUFFIX_NAMING_TABLE.items():
+            ss.append(" - {}: {}".format(k, ", ".join(v)))
+        return "\n".join(ss)
+
     @staticmethod
     def is_valid_name(node_name, shape_type,
                       SUFFIX_NAMING_TABLE, ALLOW_IF_NOT_IN_SUFFIX_TABLE):
@@ -106,5 +114,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
         """
         invalid = self.get_invalid(instance)
         if invalid:
+            valid = self.get_table_for_invalid()
             raise ValueError("Incorrectly named geometry "
-                             "transforms: {0}".format(invalid))
+                             "transforms: {0}, accepted suffixes are: "
+                             "\n{1}".format(invalid, valid))
@@ -2865,10 +2865,11 @@ def get_group_io_nodes(nodes):
             break

     if input_node is None:
-        raise ValueError("No Input found")
+        log.warning("No Input found")

     if output_node is None:
-        raise ValueError("No Output found")
+        log.warning("No Output found")

     return input_node, output_node
@@ -35,6 +35,7 @@ from .lib import (
 )
 from .workfile_template_builder import (
     NukePlaceholderLoadPlugin,
+    NukePlaceholderCreatePlugin,
     build_workfile_template,
     update_workfile_template,
     create_placeholder,
@ -139,7 +140,8 @@ def _show_workfiles():
|
|||
|
||||
def get_workfile_build_placeholder_plugins():
|
||||
return [
|
||||
NukePlaceholderLoadPlugin
|
||||
NukePlaceholderLoadPlugin,
|
||||
NukePlaceholderCreatePlugin
|
||||
]
|
||||
|
||||
|
||||
|
|
@ -217,10 +219,6 @@ def _install_menu():
|
|||
"Build Workfile from template",
|
||||
lambda: build_workfile_template()
|
||||
)
|
||||
menu_template.addCommand(
|
||||
"Update Workfile",
|
||||
lambda: update_workfile_template()
|
||||
)
|
||||
menu_template.addSeparator()
|
||||
menu_template.addCommand(
|
||||
"Create Place Holder",
|
||||
|
|
|
|||
|
|
@ -7,7 +7,9 @@ from openpype.pipeline.workfile.workfile_template_builder import (
|
|||
AbstractTemplateBuilder,
|
||||
PlaceholderPlugin,
|
||||
LoadPlaceholderItem,
|
||||
CreatePlaceholderItem,
|
||||
PlaceholderLoadMixin,
|
||||
PlaceholderCreateMixin
|
||||
)
|
||||
from openpype.tools.workfile_template_build import (
|
||||
WorkfileBuildPlaceholderDialog,
|
||||
|
|
@ -32,7 +34,7 @@ PLACEHOLDER_SET = "PLACEHOLDERS_SET"
|
|||
|
||||
|
||||
class NukeTemplateBuilder(AbstractTemplateBuilder):
|
||||
"""Concrete implementation of AbstractTemplateBuilder for maya"""
|
||||
"""Concrete implementation of AbstractTemplateBuilder for nuke"""
|
||||
|
||||
def import_template(self, path):
|
||||
"""Import template into current scene.
|
||||
|
|
@ -40,7 +42,7 @@ class NukeTemplateBuilder(AbstractTemplateBuilder):
|
|||
|
||||
Args:
|
||||
path (str): A path to current template (usually given by
|
||||
get_template_path implementation)
|
||||
get_template_preset implementation)
|
||||
|
||||
Returns:
|
||||
bool: Whether the template was successfully imported or not
|
||||
|
|
@ -74,8 +76,7 @@ class NukePlaceholderPlugin(PlaceholderPlugin):
|
|||
|
||||
node_knobs = node.knobs()
|
||||
if (
|
||||
"builder_type" not in node_knobs
|
||||
or "is_placeholder" not in node_knobs
|
||||
"is_placeholder" not in node_knobs
|
||||
or not node.knob("is_placeholder").value()
|
||||
):
|
||||
continue
|
||||
|
|
@ -273,6 +274,15 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
|
|||
|
||||
placeholder.data["nb_children"] += 1
|
||||
reset_selection()
|
||||
|
||||
# remove placeholders marked as delete
|
||||
if (
|
||||
placeholder.data.get("delete")
|
||||
and not placeholder.data.get("keep_placeholder")
|
||||
):
|
||||
self.log.debug("Deleting node: {}".format(placeholder_node.name()))
|
||||
nuke.delete(placeholder_node)
|
||||
|
||||
# go back to root group
|
||||
nuke.root().begin()
|
||||
|
||||
|
|
@ -454,12 +464,12 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
|
|||
)
|
||||
for node in placeholder_node.dependent():
|
||||
for idx in range(node.inputs()):
|
||||
if node.input(idx) == placeholder_node:
|
||||
if node.input(idx) == placeholder_node and output_node:
|
||||
node.setInput(idx, output_node)
|
||||
|
||||
for node in placeholder_node.dependencies():
|
||||
for idx in range(placeholder_node.inputs()):
|
||||
if placeholder_node.input(idx) == node:
|
||||
if placeholder_node.input(idx) == node and input_node:
|
||||
input_node.setInput(0, node)
|
||||
|
||||
def _create_sib_copies(self, placeholder):
|
||||
|
|
@ -535,6 +545,408 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
|
|||
siblings_input.setInput(0, copy_output)
|
||||
|
||||
|
||||
class NukePlaceholderCreatePlugin(
|
||||
NukePlaceholderPlugin, PlaceholderCreateMixin
|
||||
):
|
||||
identifier = "nuke.create"
|
||||
label = "Nuke create"
|
||||
|
||||
def _parse_placeholder_node_data(self, node):
|
||||
placeholder_data = super(
|
||||
NukePlaceholderCreatePlugin, self
|
||||
)._parse_placeholder_node_data(node)
|
||||
|
||||
node_knobs = node.knobs()
|
||||
nb_children = 0
|
||||
if "nb_children" in node_knobs:
|
||||
nb_children = int(node_knobs["nb_children"].getValue())
|
||||
placeholder_data["nb_children"] = nb_children
|
||||
|
||||
siblings = []
|
||||
if "siblings" in node_knobs:
|
||||
siblings = node_knobs["siblings"].values()
|
||||
placeholder_data["siblings"] = siblings
|
||||
|
||||
node_full_name = node.fullName()
|
||||
placeholder_data["group_name"] = node_full_name.rpartition(".")[0]
|
||||
placeholder_data["last_loaded"] = []
|
||||
placeholder_data["delete"] = False
|
||||
return placeholder_data
|
||||
|
||||
def _before_instance_create(self, placeholder):
|
||||
placeholder.data["nodes_init"] = nuke.allNodes()
|
||||
|
||||
def collect_placeholders(self):
|
||||
output = []
|
||||
scene_placeholders = self._collect_scene_placeholders()
|
||||
for node_name, node in scene_placeholders.items():
|
||||
plugin_identifier_knob = node.knob("plugin_identifier")
|
||||
if (
|
||||
plugin_identifier_knob is None
|
||||
or plugin_identifier_knob.getValue() != self.identifier
|
||||
):
|
||||
continue
|
||||
|
||||
placeholder_data = self._parse_placeholder_node_data(node)
|
||||
|
||||
output.append(
|
||||
CreatePlaceholderItem(node_name, placeholder_data, self)
|
||||
)
|
||||
|
||||
return output
|
||||
|
||||
def populate_placeholder(self, placeholder):
|
||||
self.populate_create_placeholder(placeholder)
|
||||
|
||||
def repopulate_placeholder(self, placeholder):
|
||||
self.populate_create_placeholder(placeholder)
|
||||
|
||||
def get_placeholder_options(self, options=None):
|
||||
return self.get_create_plugin_options(options)
|
||||
|
||||
def cleanup_placeholder(self, placeholder, failed):
|
||||
# get the placeholder node from the scene
|
||||
placeholder_node = nuke.toNode(placeholder.scene_identifier)
|
||||
|
||||
# getting the latest nodes added
|
||||
nodes_init = placeholder.data["nodes_init"]
|
||||
nodes_created = list(set(nuke.allNodes()) - set(nodes_init))
|
||||
self.log.debug("Created nodes: {}".format(nodes_created))
|
||||
if not nodes_created:
|
||||
return
|
||||
|
||||
placeholder.data["delete"] = True
|
||||
|
||||
nodes_created = self._move_to_placeholder_group(
|
||||
placeholder, nodes_created
|
||||
)
|
||||
placeholder.data["last_created"] = nodes_created
|
||||
refresh_nodes(nodes_created)
|
||||
|
||||
# positioning of the created nodes
|
||||
min_x, min_y, _, _ = get_extreme_positions(nodes_created)
|
||||
for node in nodes_created:
|
||||
xpos = (node.xpos() - min_x) + placeholder_node.xpos()
|
||||
ypos = (node.ypos() - min_y) + placeholder_node.ypos()
|
||||
node.setXYpos(xpos, ypos)
|
||||
refresh_nodes(nodes_created)
|
||||
|
||||
# fix the problem of z_order for backdrops
|
||||
self._fix_z_order(placeholder)
|
||||
self._imprint_siblings(placeholder)
|
||||
|
||||
if placeholder.data["nb_children"] == 0:
|
||||
# save initial node positions and dimensions, update them
|
||||
# and set inputs and outputs of created nodes
|
||||
|
||||
self._imprint_inits()
|
||||
self._update_nodes(placeholder, nuke.allNodes(), nodes_created)
|
||||
self._set_created_connections(placeholder)
|
||||
|
||||
elif placeholder.data["siblings"]:
|
||||
# create copies of placeholder siblings for the new created nodes,
|
||||
# set their inputs and outputs and update all node positions and
|
||||
# dimensions and siblings names
|
||||
|
||||
siblings = get_nodes_by_names(placeholder.data["siblings"])
|
||||
refresh_nodes(siblings)
|
||||
copies = self._create_sib_copies(placeholder)
|
||||
new_nodes = list(copies.values()) # copies nodes
|
||||
self._update_nodes(new_nodes, nodes_created)
|
||||
placeholder_node.removeKnob(placeholder_node.knob("siblings"))
|
||||
new_nodes_name = get_names_from_nodes(new_nodes)
|
||||
imprint(placeholder_node, {"siblings": new_nodes_name})
|
||||
self._set_copies_connections(placeholder, copies)
|
||||
|
||||
self._update_nodes(
|
||||
nuke.allNodes(),
|
||||
new_nodes + nodes_created,
|
||||
20
|
||||
)
|
||||
|
||||
new_siblings = get_names_from_nodes(new_nodes)
|
||||
placeholder.data["siblings"] = new_siblings
|
||||
|
||||
else:
|
||||
# if the placeholder doesn't have siblings, the created
|
||||
# nodes will be placed in a free space
|
||||
|
||||
xpointer, ypointer = find_free_space_to_paste_nodes(
|
||||
nodes_created, direction="bottom", offset=200
|
||||
)
|
||||
node = nuke.createNode("NoOp")
|
||||
reset_selection()
|
||||
nuke.delete(node)
|
||||
for node in nodes_created:
|
||||
xpos = (node.xpos() - min_x) + xpointer
|
||||
ypos = (node.ypos() - min_y) + ypointer
|
||||
node.setXYpos(xpos, ypos)
|
||||
|
||||
placeholder.data["nb_children"] += 1
|
||||
reset_selection()
|
||||
|
||||
# remove placeholders marked as delete
|
||||
if (
|
||||
placeholder.data.get("delete")
|
||||
and not placeholder.data.get("keep_placeholder")
|
||||
):
|
||||
self.log.debug("Deleting node: {}".format(placeholder_node.name()))
|
||||
nuke.delete(placeholder_node)
|
||||
|
||||
# go back to root group
|
||||
nuke.root().begin()
|
||||
|
||||
def _move_to_placeholder_group(self, placeholder, nodes_created):
|
||||
"""
|
||||
opening the placeholder's group and copying created nodes in it.
|
||||
|
||||
Returns:
|
||||
nodes_created (list): the new list of pasted nodes
|
||||
"""
|
||||
groups_name = placeholder.data["group_name"]
|
||||
reset_selection()
|
||||
select_nodes(nodes_created)
|
||||
if groups_name:
|
||||
with node_tempfile() as filepath:
|
||||
nuke.nodeCopy(filepath)
|
||||
for node in nuke.selectedNodes():
|
||||
nuke.delete(node)
|
||||
group = nuke.toNode(groups_name)
|
||||
group.begin()
|
||||
nuke.nodePaste(filepath)
|
||||
nodes_created = nuke.selectedNodes()
|
||||
return nodes_created
|
||||
|
||||
def _fix_z_order(self, placeholder):
|
||||
"""Fix the problem of z_order when a backdrop is create."""
|
||||
|
||||
nodes_created = placeholder.data["last_created"]
|
||||
created_backdrops = []
|
||||
bd_orders = set()
|
||||
for node in nodes_created:
|
||||
if isinstance(node, nuke.BackdropNode):
|
||||
created_backdrops.append(node)
|
||||
bd_orders.add(node.knob("z_order").getValue())
|
||||
|
||||
if not bd_orders:
|
||||
return
|
||||
|
||||
sib_orders = set()
|
||||
for node_name in placeholder.data["siblings"]:
|
||||
node = nuke.toNode(node_name)
|
||||
if isinstance(node, nuke.BackdropNode):
|
||||
sib_orders.add(node.knob("z_order").getValue())
|
||||
|
||||
if not sib_orders:
|
||||
return
|
||||
|
||||
min_order = min(bd_orders)
|
||||
max_order = max(sib_orders)
|
||||
for backdrop_node in created_backdrops:
|
||||
z_order = backdrop_node.knob("z_order").getValue()
|
||||
backdrop_node.knob("z_order").setValue(
|
||||
z_order + max_order - min_order + 1)
|
||||
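# Worked example of the shift above (values assumed): created backdrops
# carry z_orders {0, 1} (min_order == 0) and sibling backdrops peak at
# z_order 3 (max_order == 3), so z_order 1 becomes 1 + 3 - 0 + 1 == 5 and
# every created backdrop lands above the siblings' highest z_order.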
|
||||
def _imprint_siblings(self, placeholder):
|
||||
"""
|
||||
- add sibling names to placeholder attributes (nodes created with it)
|
||||
- add Id to the attributes of all the other nodes
|
||||
"""
|
||||
|
||||
created_nodes = placeholder.data["last_created"]
|
||||
created_nodes_set = set(created_nodes)
|
||||
|
||||
for node in created_nodes:
|
||||
node_knobs = node.knobs()
|
||||
|
||||
if (
|
||||
"is_placeholder" not in node_knobs
|
||||
or (
|
||||
"is_placeholder" in node_knobs
|
||||
and node.knob("is_placeholder").value()
|
||||
)
|
||||
):
|
||||
siblings = list(created_nodes_set - {node})
|
||||
siblings_name = get_names_from_nodes(siblings)
|
||||
siblings = {"siblings": siblings_name}
|
||||
imprint(node, siblings)
|
||||
|
||||
def _imprint_inits(self):
|
||||
"""Add initial positions and dimensions to the attributes"""
|
||||
|
||||
for node in nuke.allNodes():
|
||||
refresh_node(node)
|
||||
imprint(node, {"x_init": node.xpos(), "y_init": node.ypos()})
|
||||
node.knob("x_init").setVisible(False)
|
||||
node.knob("y_init").setVisible(False)
|
||||
width = node.screenWidth()
|
||||
height = node.screenHeight()
|
||||
if "bdwidth" in node.knobs():
|
||||
imprint(node, {"w_init": width, "h_init": height})
|
||||
node.knob("w_init").setVisible(False)
|
||||
node.knob("h_init").setVisible(False)
|
||||
refresh_node(node)
|
||||
|
||||
def _update_nodes(
|
||||
self, placeholder, nodes, considered_nodes, offset_y=None
|
||||
):
|
||||
"""Adjust backdrop nodes dimensions and positions.
|
||||
|
||||
Takes the sizes of the considered nodes into account.
|
||||
|
||||
Args:
|
||||
nodes (list): list of nodes to update
|
||||
considered_nodes (list): list of nodes to consider while updating
|
||||
positions and dimensions
|
||||
offset_y (int): distance between copies
|
||||
"""
|
||||
|
||||
placeholder_node = nuke.toNode(placeholder.scene_identifier)
|
||||
|
||||
min_x, min_y, max_x, max_y = get_extreme_positions(considered_nodes)
|
||||
|
||||
diff_x = diff_y = 0
|
||||
contained_nodes = [] # for backdrops
|
||||
|
||||
if offset_y is None:
|
||||
width_ph = placeholder_node.screenWidth()
|
||||
height_ph = placeholder_node.screenHeight()
|
||||
diff_y = max_y - min_y - height_ph
|
||||
diff_x = max_x - min_x - width_ph
|
||||
contained_nodes = [placeholder_node]
|
||||
min_x = placeholder_node.xpos()
|
||||
min_y = placeholder_node.ypos()
|
||||
else:
|
||||
siblings = get_nodes_by_names(placeholder.data["siblings"])
|
||||
minX, _, maxX, _ = get_extreme_positions(siblings)
|
||||
diff_y = max_y - min_y + 20
|
||||
diff_x = abs(max_x - min_x - maxX + minX)
|
||||
contained_nodes = considered_nodes
|
||||
|
||||
if diff_y <= 0 and diff_x <= 0:
|
||||
return
|
||||
|
||||
for node in nodes:
|
||||
refresh_node(node)
|
||||
|
||||
if (
|
||||
node == placeholder_node
|
||||
or node in considered_nodes
|
||||
):
|
||||
continue
|
||||
|
||||
if (
|
||||
not isinstance(node, nuke.BackdropNode)
|
||||
or (
|
||||
isinstance(node, nuke.BackdropNode)
|
||||
and not set(contained_nodes) <= set(node.getNodes())
|
||||
)
|
||||
):
|
||||
if offset_y is None and node.xpos() >= min_x:
|
||||
node.setXpos(node.xpos() + diff_x)
|
||||
|
||||
if node.ypos() >= min_y:
|
||||
node.setYpos(node.ypos() + diff_y)
|
||||
|
||||
else:
|
||||
width = node.screenWidth()
|
||||
height = node.screenHeight()
|
||||
node.knob("bdwidth").setValue(width + diff_x)
|
||||
node.knob("bdheight").setValue(height + diff_y)
|
||||
|
||||
refresh_node(node)
|
||||
|
||||
def _set_created_connections(self, placeholder):
|
||||
"""
|
||||
set inputs and outputs of created nodes"""
|
||||
|
||||
placeholder_node = nuke.toNode(placeholder.scene_identifier)
|
||||
input_node, output_node = get_group_io_nodes(
|
||||
placeholder.data["last_created"]
|
||||
)
|
||||
for node in placeholder_node.dependent():
|
||||
for idx in range(node.inputs()):
|
||||
if node.input(idx) == placeholder_node and output_node:
|
||||
node.setInput(idx, output_node)
|
||||
|
||||
for node in placeholder_node.dependencies():
|
||||
for idx in range(placeholder_node.inputs()):
|
||||
if placeholder_node.input(idx) == node and input_node:
|
||||
input_node.setInput(0, node)
|
||||
|
||||
def _create_sib_copies(self, placeholder):
|
||||
""" creating copies of the palce_holder siblings (the ones who were
|
||||
created with it) for the new nodes added
|
||||
|
||||
Returns:
|
||||
copies (dict) : with copied nodes names and their copies
|
||||
"""
|
||||
|
||||
copies = {}
|
||||
siblings = get_nodes_by_names(placeholder.data["siblings"])
|
||||
for node in siblings:
|
||||
new_node = duplicate_node(node)
|
||||
|
||||
x_init = int(new_node.knob("x_init").getValue())
|
||||
y_init = int(new_node.knob("y_init").getValue())
|
||||
new_node.setXYpos(x_init, y_init)
|
||||
if isinstance(new_node, nuke.BackdropNode):
|
||||
w_init = new_node.knob("w_init").getValue()
|
||||
h_init = new_node.knob("h_init").getValue()
|
||||
new_node.knob("bdwidth").setValue(w_init)
|
||||
new_node.knob("bdheight").setValue(h_init)
|
||||
refresh_node(node)
|
||||
|
||||
if "repre_id" in node.knobs().keys():
|
||||
node.removeKnob(node.knob("repre_id"))
|
||||
copies[node.name()] = new_node
|
||||
return copies
|
||||
|
||||
def _set_copies_connections(self, placeholder, copies):
|
||||
"""Set inputs and outputs of the copies.
|
||||
|
||||
Args:
|
||||
copies (dict): Copied nodes by their names.
|
||||
"""
|
||||
|
||||
last_input, last_output = get_group_io_nodes(
|
||||
placeholder.data["last_created"]
|
||||
)
|
||||
siblings = get_nodes_by_names(placeholder.data["siblings"])
|
||||
siblings_input, siblings_output = get_group_io_nodes(siblings)
|
||||
copy_input = copies[siblings_input.name()]
|
||||
copy_output = copies[siblings_output.name()]
|
||||
|
||||
for node_init in siblings:
|
||||
if node_init == siblings_output:
|
||||
continue
|
||||
|
||||
node_copy = copies[node_init.name()]
|
||||
for node in node_init.dependent():
|
||||
for idx in range(node.inputs()):
|
||||
if node.input(idx) != node_init:
|
||||
continue
|
||||
|
||||
if node in siblings:
|
||||
copies[node.name()].setInput(idx, node_copy)
|
||||
else:
|
||||
last_input.setInput(0, node_copy)
|
||||
|
||||
for node in node_init.dependencies():
|
||||
for idx in range(node_init.inputs()):
|
||||
if node_init.input(idx) != node:
|
||||
continue
|
||||
|
||||
if node_init == siblings_input:
|
||||
copy_input.setInput(idx, node)
|
||||
elif node in siblings:
|
||||
node_copy.setInput(idx, copies[node.name()])
|
||||
else:
|
||||
node_copy.setInput(idx, last_output)
|
||||
|
||||
siblings_input.setInput(0, copy_output)
|
||||
|
||||
|
||||
def build_workfile_template(*args):
|
||||
builder = NukeTemplateBuilder(registered_host())
|
||||
builder.build_template()
|
||||
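Taken together, a minimal driver for the new Nuke template flow looks like this (a sketch; `registered_host` comes from openpype.pipeline, the rest are the names defined above):

from openpype.pipeline import registered_host

# both placeholder plugins are now offered to the build tools
plugins = get_workfile_build_placeholder_plugins()

# import the template and populate load/create placeholders
builder = NukeTemplateBuilder(registered_host())
builder.build_template()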
|
|
|
|||
|
|
@ -28,7 +28,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
|
|||
representations = ["nk"]
|
||||
families = ["workfile", "nukenodes"]
|
||||
|
||||
label = "Iport Nuke Nodes"
|
||||
label = "Import Nuke Nodes"
|
||||
order = 0
|
||||
icon = "eye"
|
||||
color = "white"
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
import os
|
||||
import sys
|
||||
|
||||
from Qt import QtWidgets, QtCore
|
||||
from qtpy import QtWidgets, QtCore
|
||||
|
||||
from openpype.tools.utils import host_tools
|
||||
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@ import re
|
|||
import uuid
|
||||
|
||||
import qargparse
|
||||
from Qt import QtWidgets, QtCore
|
||||
from qtpy import QtWidgets, QtCore
|
||||
|
||||
from openpype.settings import get_current_project_settings
|
||||
from openpype.pipeline.context_tools import get_current_project_asset
|
||||
|
|
|
|||
|
|
@ -171,7 +171,6 @@ class ShotMetadataSolver:
|
|||
_index == 0
|
||||
and parents[-1]["entity_name"] == parent_name
|
||||
):
|
||||
self.log.debug(f" skipping : {parent_name}")
|
||||
continue
|
||||
|
||||
# in case first parent is project then start parents from start
|
||||
|
|
@ -179,7 +178,6 @@ class ShotMetadataSolver:
|
|||
_index == 0
|
||||
and parent_token_type == "Project"
|
||||
):
|
||||
self.log.debug("rebuilding parents from scratch")
|
||||
project_parent = parents[0]
|
||||
parents = [project_parent]
|
||||
continue
|
||||
|
|
@ -189,8 +187,6 @@ class ShotMetadataSolver:
|
|||
"entity_name": parent_name
|
||||
})
|
||||
|
||||
self.log.debug(f"__ parents: {parents}")
|
||||
|
||||
return parents
|
||||
|
||||
def _create_hierarchy_path(self, parents):
|
||||
|
|
@ -297,7 +293,6 @@ class ShotMetadataSolver:
|
|||
Returns:
|
||||
(str, dict): shot name and hierarchy data
|
||||
"""
|
||||
self.log.info(f"_ source_data: {source_data}")
|
||||
|
||||
tasks = {}
|
||||
asset_doc = source_data["selected_asset_doc"]
|
||||
|
|
|
|||
|
|
@ -1,6 +1,5 @@
|
|||
import os
|
||||
from copy import deepcopy
|
||||
from pprint import pformat
|
||||
import opentimelineio as otio
|
||||
from openpype.client import (
|
||||
get_asset_by_name,
|
||||
|
|
@ -13,9 +12,7 @@ from openpype.hosts.traypublisher.api.plugin import (
|
|||
from openpype.hosts.traypublisher.api.editorial import (
|
||||
ShotMetadataSolver
|
||||
)
|
||||
|
||||
from openpype.pipeline import CreatedInstance
|
||||
|
||||
from openpype.lib import (
|
||||
get_ffprobe_data,
|
||||
convert_ffprobe_fps_value,
|
||||
|
|
@ -70,14 +67,12 @@ class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
|
|||
host_name = "traypublisher"
|
||||
|
||||
def create(self, instance_data, source_data=None):
|
||||
self.log.info(f"instance_data: {instance_data}")
|
||||
subset_name = instance_data["subset"]
|
||||
|
||||
# Create new instance
|
||||
new_instance = CreatedInstance(
|
||||
self.family, subset_name, instance_data, self
|
||||
)
|
||||
self.log.info(f"instance_data: {pformat(new_instance.data)}")
|
||||
|
||||
self._store_new_instance(new_instance)
|
||||
|
||||
|
|
@ -223,8 +218,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
asset_name = instance_data["asset"]
|
||||
asset_doc = get_asset_by_name(self.project_name, asset_name)
|
||||
|
||||
self.log.info(pre_create_data["fps"])
|
||||
|
||||
if pre_create_data["fps"] == "from_selection":
|
||||
# get asset doc data attributes
|
||||
fps = asset_doc["data"]["fps"]
|
||||
|
|
@ -239,34 +232,43 @@ or updating already created. Publishing will create OTIO file.
|
|||
sequence_path_data = pre_create_data["sequence_filepath_data"]
|
||||
media_path_data = pre_create_data["media_filepaths_data"]
|
||||
|
||||
sequence_path = self._get_path_from_file_data(sequence_path_data)
|
||||
sequence_paths = self._get_path_from_file_data(
|
||||
sequence_path_data, multi=True)
|
||||
media_path = self._get_path_from_file_data(media_path_data)
|
||||
|
||||
# get otio timeline
|
||||
otio_timeline = self._create_otio_timeline(
|
||||
sequence_path, fps)
|
||||
first_otio_timeline = None
|
||||
for seq_path in sequence_paths:
|
||||
# get otio timeline
|
||||
otio_timeline = self._create_otio_timeline(
|
||||
seq_path, fps)
|
||||
|
||||
# Create all clip instances
|
||||
clip_instance_properties.update({
|
||||
"fps": fps,
|
||||
"parent_asset_name": asset_name,
|
||||
"variant": instance_data["variant"]
|
||||
})
|
||||
# Create all clip instances
|
||||
clip_instance_properties.update({
|
||||
"fps": fps,
|
||||
"parent_asset_name": asset_name,
|
||||
"variant": instance_data["variant"]
|
||||
})
|
||||
|
||||
# create clip instances
|
||||
self._get_clip_instances(
|
||||
otio_timeline,
|
||||
media_path,
|
||||
clip_instance_properties,
|
||||
family_presets=allowed_family_presets
|
||||
# create clip instances
|
||||
self._get_clip_instances(
|
||||
otio_timeline,
|
||||
media_path,
|
||||
clip_instance_properties,
|
||||
allowed_family_presets,
|
||||
os.path.basename(seq_path),
|
||||
first_otio_timeline
|
||||
)
|
||||
|
||||
)
|
||||
if not first_otio_timeline:
|
||||
# assign the first otio timeline for the multi-file case
|
||||
first_otio_timeline = otio_timeline
|
||||
|
||||
# create otio editorial instance
|
||||
self._create_otio_instance(
|
||||
subset_name, instance_data,
|
||||
sequence_path, media_path,
|
||||
otio_timeline
|
||||
subset_name,
|
||||
instance_data,
|
||||
seq_path, media_path,
|
||||
first_otio_timeline
|
||||
)
|
||||
|
||||
def _create_otio_instance(
|
||||
|
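The restructured loop boils down to this flow (a sketch; the EDL paths and frame rate are assumed):

import opentimelineio as otio
from copy import deepcopy

first_otio_timeline = None
for seq_path in ["/edl/ep01.edl", "/edl/ep02.edl"]:
    otio_timeline = otio.adapters.read_from_file(
        seq_path, rate=25.0, ignore_timecode_mismatch=True)
    if first_otio_timeline is None:
        # the first timeline is the one that ends up being published
        first_otio_timeline = otio_timeline
    else:
        # tracks from every later EDL are appended onto the first timeline
        for track in otio_timeline.tracks:
            first_otio_timeline.tracks.append(deepcopy(track))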
|
@ -317,14 +319,14 @@ or updating already created. Publishing will create OTIO file.
|
|||
kwargs["rate"] = fps
|
||||
kwargs["ignore_timecode_mismatch"] = True
|
||||
|
||||
self.log.info(f"kwargs: {kwargs}")
|
||||
return otio.adapters.read_from_file(sequence_path, **kwargs)
|
||||
|
||||
def _get_path_from_file_data(self, file_path_data):
|
||||
def _get_path_from_file_data(self, file_path_data, multi=False):
|
||||
"""Converting creator path data to single path string
|
||||
|
||||
Args:
|
||||
file_path_data (FileDefItem): creator path data inputs
|
||||
multi (bool): switch to multiple files mode
|
||||
|
||||
Raises:
|
||||
FileExistsError: in case nothing had been set
|
||||
|
|
@ -332,23 +334,29 @@ or updating already created. Publishing will create OTIO file.
|
|||
Returns:
|
||||
str or list: single path string, or list of paths when multi=True
|
||||
"""
|
||||
# TODO: just temporarly solving only one media file
|
||||
if isinstance(file_path_data, list):
|
||||
file_path_data = file_path_data.pop()
|
||||
return_path_list = []
|
||||
|
||||
if len(file_path_data["filenames"]) == 0:
|
||||
|
||||
if isinstance(file_path_data, list):
|
||||
return_path_list = [
|
||||
os.path.join(f["directory"], f["filenames"][0])
|
||||
for f in file_path_data
|
||||
]
|
||||
|
||||
if not return_path_list:
|
||||
raise FileExistsError(
|
||||
f"File path was not added: {file_path_data}")
|
||||
|
||||
return os.path.join(
|
||||
file_path_data["directory"], file_path_data["filenames"][0])
|
||||
return return_path_list if multi else return_path_list[0]
|
||||
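# Example (payloads assumed): with multi=True the input
#   [{"directory": "/edl", "filenames": ["ep01.edl"]},
#    {"directory": "/edl", "filenames": ["ep02.edl"]}]
# yields ["/edl/ep01.edl", "/edl/ep02.edl"]; with multi=False only the
# first path "/edl/ep01.edl" is returned.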
|
||||
def _get_clip_instances(
|
||||
self,
|
||||
otio_timeline,
|
||||
media_path,
|
||||
instance_data,
|
||||
family_presets
|
||||
family_presets,
|
||||
sequence_file_name,
|
||||
first_otio_timeline=None
|
||||
):
|
||||
"""Helping function fro creating clip instance
|
||||
|
||||
|
|
@ -368,17 +376,15 @@ or updating already created. Publishing will create OTIO file.
|
|||
media_data = self._get_media_source_metadata(media_path)
|
||||
|
||||
for track in tracks:
|
||||
self.log.debug(f"track.name: {track.name}")
|
||||
track.name = f"{sequence_file_name} - {otio_timeline.name}"
|
||||
try:
|
||||
track_start_frame = (
|
||||
abs(track.source_range.start_time.value)
|
||||
)
|
||||
self.log.debug(f"track_start_frame: {track_start_frame}")
|
||||
track_start_frame -= self.timeline_frame_start
|
||||
except AttributeError:
|
||||
track_start_frame = 0
|
||||
|
||||
self.log.debug(f"track_start_frame: {track_start_frame}")
|
||||
|
||||
for clip in track.each_child():
|
||||
if not self._validate_clip_for_processing(clip):
|
||||
|
|
@ -400,10 +406,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
"instance_label": None,
|
||||
"instance_id": None
|
||||
}
|
||||
self.log.info((
|
||||
"Creating subsets from presets: \n"
|
||||
f"{pformat(family_presets)}"
|
||||
))
|
||||
|
||||
for _fpreset in family_presets:
|
||||
# exclude audio family if no audio stream
|
||||
|
|
@ -419,7 +421,10 @@ or updating already created. Publishing will create OTIO file.
|
|||
deepcopy(base_instance_data),
|
||||
parenting_data
|
||||
)
|
||||
self.log.debug(f"{pformat(dict(instance.data))}")
|
||||
|
||||
# add track to first otioTimeline if it is in input args
|
||||
if first_otio_timeline:
|
||||
first_otio_timeline.tracks.append(deepcopy(track))
|
||||
|
||||
def _restore_otio_source_range(self, otio_clip):
|
||||
"""Infusing source range.
|
||||
|
|
@ -460,7 +465,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
target_url=media_path,
|
||||
available_range=available_range
|
||||
)
|
||||
|
||||
otio_clip.media_reference = media_reference
|
||||
|
||||
def _get_media_source_metadata(self, path):
|
||||
|
|
@ -481,7 +485,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
media_data = get_ffprobe_data(
|
||||
path, self.log
|
||||
)
|
||||
self.log.debug(f"__ media_data: {pformat(media_data)}")
|
||||
|
||||
# get video stream data
|
||||
video_stream = media_data["streams"][0]
|
||||
|
|
@ -589,9 +592,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
# get variant name from preset or from inheritance
|
||||
_variant_name = preset.get("variant") or variant_name
|
||||
|
||||
self.log.debug(f"__ family: {family}")
|
||||
self.log.debug(f"__ preset: {preset}")
|
||||
|
||||
# subset name
|
||||
subset_name = "{}{}".format(
|
||||
family, _variant_name.capitalize()
|
||||
|
|
@ -722,17 +722,13 @@ or updating already created. Publishing will create OTIO file.
|
|||
clip_in += track_start_frame
|
||||
clip_out = otio_clip.range_in_parent().end_time_inclusive().value
|
||||
clip_out += track_start_frame
|
||||
self.log.info(f"clip_in: {clip_in} | clip_out: {clip_out}")
|
||||
|
||||
# add offset in case there is any
|
||||
self.log.debug(f"__ timeline_offset: {timeline_offset}")
|
||||
if timeline_offset:
|
||||
clip_in += timeline_offset
|
||||
clip_out += timeline_offset
|
||||
|
||||
clip_duration = otio_clip.duration().value
|
||||
self.log.info(f"clip duration: {clip_duration}")
|
||||
|
||||
source_in = otio_clip.trimmed_range().start_time.value
|
||||
source_out = source_in + clip_duration
|
||||
|
||||
|
|
@ -762,7 +758,6 @@ or updating already created. Publishing will create OTIO file.
|
|||
Returns:
|
||||
list: list of dicts with preset items
|
||||
"""
|
||||
self.log.debug(f"__ pre_create_data: {pre_create_data}")
|
||||
return [
|
||||
{"family": "shot"},
|
||||
*[
|
||||
|
|
@ -833,7 +828,7 @@ or updating already created. Publishing will create OTIO file.
|
|||
".fcpxml"
|
||||
],
|
||||
allow_sequences=False,
|
||||
single_item=True,
|
||||
single_item=False,
|
||||
label="Sequence file",
|
||||
),
|
||||
FileDef(
|
||||
|
|
|
|||
|
|
@ -7,8 +7,8 @@ exists under selected asset.
|
|||
"""
|
||||
from pathlib import Path
|
||||
|
||||
from openpype.client import get_subset_by_name, get_asset_by_name
|
||||
from openpype.lib.attribute_definitions import FileDef
|
||||
# from openpype.client import get_subset_by_name, get_asset_by_name
|
||||
from openpype.lib.attribute_definitions import FileDef, BoolDef
|
||||
from openpype.pipeline import (
|
||||
CreatedInstance,
|
||||
CreatorError
|
||||
|
|
@ -23,7 +23,8 @@ class OnlineCreator(TrayPublishCreator):
|
|||
label = "Online"
|
||||
family = "online"
|
||||
description = "Publish file retaining its original file name"
|
||||
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg"]
|
||||
extensions = [".mov", ".mp4", ".mxf", ".m4v", ".mpg", ".exr",
|
||||
".dpx", ".tif", ".png", ".jpg"]
|
||||
|
||||
def get_detail_description(self):
|
||||
return """# Create file retaining its original file name.
|
||||
|
|
@ -49,13 +50,17 @@ class OnlineCreator(TrayPublishCreator):
|
|||
|
||||
origin_basename = Path(files[0]).stem
|
||||
|
||||
# disable check for existing subset with the same name
|
||||
"""
|
||||
asset = get_asset_by_name(
|
||||
self.project_name, instance_data["asset"], fields=["_id"])
|
||||
|
||||
if get_subset_by_name(
|
||||
self.project_name, origin_basename, asset["_id"],
|
||||
fields=["_id"]):
|
||||
raise CreatorError(f"subset with {origin_basename} already "
|
||||
"exists in selected asset")
|
||||
"""
|
||||
|
||||
instance_data["originalBasename"] = origin_basename
|
||||
subset_name = origin_basename
|
||||
|
|
@ -69,15 +74,29 @@ class OnlineCreator(TrayPublishCreator):
|
|||
instance_data, self)
|
||||
self._store_new_instance(new_instance)
|
||||
|
||||
def get_instance_attr_defs(self):
|
||||
return [
|
||||
BoolDef(
|
||||
"add_review_family",
|
||||
default=True,
|
||||
label="Review"
|
||||
)
|
||||
]
|
||||
|
||||
def get_pre_create_attr_defs(self):
|
||||
return [
|
||||
FileDef(
|
||||
"representation_file",
|
||||
folders=False,
|
||||
extensions=self.extensions,
|
||||
allow_sequences=False,
|
||||
allow_sequences=True,
|
||||
single_item=True,
|
||||
label="Representation",
|
||||
),
|
||||
BoolDef(
|
||||
"add_review_family",
|
||||
default=True,
|
||||
label="Review"
|
||||
)
|
||||
]
|
||||
|
||||
|
|
|
|||
|
|
@ -12,12 +12,18 @@ class CollectOnlineFile(pyblish.api.InstancePlugin):
|
|||
|
||||
def process(self, instance):
|
||||
file = Path(instance.data["creator_attributes"]["path"])
|
||||
review = instance.data["creator_attributes"]["add_review_family"]
|
||||
instance.data["review"] = review
|
||||
if "review" not in instance.data["families"]:
|
||||
instance.data["families"].append("review")
|
||||
self.log.info(f"Adding review: {review}")
|
||||
|
||||
instance.data["representations"].append(
|
||||
{
|
||||
"name": file.suffix.lstrip("."),
|
||||
"ext": file.suffix.lstrip("."),
|
||||
"files": file.name,
|
||||
"stagingDir": file.parent.as_posix()
|
||||
"stagingDir": file.parent.as_posix(),
|
||||
"tags": ["review"] if review else []
|
||||
}
|
||||
)
|
||||
|
|
|
|||
|
|
@ -33,8 +33,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
|
|||
]
|
||||
|
||||
def process(self, instance):
|
||||
self.log.debug(pformat(instance.data))
|
||||
|
||||
creator_identifier = instance.data["creator_identifier"]
|
||||
if "editorial" not in creator_identifier:
|
||||
return
|
||||
|
|
@ -82,7 +80,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
|
|||
]
|
||||
|
||||
otio_clip = clips.pop()
|
||||
self.log.debug(f"__ otioclip.parent: {otio_clip.parent}")
|
||||
|
||||
return otio_clip
|
||||
|
||||
|
|
@ -172,7 +169,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
|
|||
}
|
||||
|
||||
parents = instance.data.get('parents', [])
|
||||
self.log.debug(f"parents: {pformat(parents)}")
|
||||
|
||||
actual = {name: in_info}
|
||||
|
||||
|
|
@ -190,7 +186,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
|
|||
|
||||
# adding hierarchy context to instance
|
||||
context.data["hierarchyContext"] = final_context
|
||||
self.log.debug(pformat(final_context))
|
||||
|
||||
def _update_dict(self, ex_dict, new_dict):
|
||||
""" Recursion function
|
||||
|
|
|
|||
|
|
@ -20,6 +20,8 @@ class ValidateOnlineFile(OptionalPyblishPluginMixin,
|
|||
optional = True
|
||||
|
||||
def process(self, instance):
|
||||
if not self.is_active(instance.data):
|
||||
return
|
||||
project_name = instance.context.data["projectName"]
|
||||
asset_id = instance.data["assetEntity"]["_id"]
|
||||
subset = get_subset_by_name(
|
||||
|
|
|
|||
|
|
@ -18,6 +18,7 @@ from .pipeline import (
|
|||
show_tools_popup,
|
||||
instantiate,
|
||||
UnrealHost,
|
||||
maintained_selection
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
|
|
@ -36,4 +37,5 @@ __all__ = [
|
|||
"show_tools_popup",
|
||||
"instantiate",
|
||||
"UnrealHost",
|
||||
"maintained_selection"
|
||||
]
|
||||
|
|
|
|||
|
|
@ -2,6 +2,7 @@
|
|||
import os
|
||||
import logging
|
||||
from typing import List
|
||||
from contextlib import contextmanager
|
||||
import semver
|
||||
|
||||
import pyblish.api
|
||||
|
|
@ -447,3 +448,16 @@ def get_subsequences(sequence: unreal.LevelSequence):
|
|||
if subscene_track is not None and subscene_track.get_sections():
|
||||
return subscene_track.get_sections()
|
||||
return []
|
||||
|
||||
|
||||
@contextmanager
|
||||
def maintained_selection():
|
||||
"""Stub to be either implemented or replaced.
|
||||
|
||||
This is needed for old publisher implementation, but
|
||||
it is not supported (yet) in UE.
|
||||
"""
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
pass
|
||||
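Existing publish code can keep calling the stub unconditionally (usage sketch; `do_publish_steps` is a made-up stand-in):

def do_publish_steps():
    pass  # placeholder for the real publish logic

with maintained_selection():
    # nothing is saved or restored in UE yet; the block simply runs
    do_publish_steps()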
|
|
|
|||
openpype/hosts/unreal/plugins/create/create_uasset.py (new file, 61 lines)
|
|
@ -0,0 +1,61 @@
|
|||
"""Create UAsset."""
|
||||
from pathlib import Path
|
||||
|
||||
import unreal
|
||||
|
||||
from openpype.hosts.unreal.api import pipeline
|
||||
from openpype.pipeline import LegacyCreator
|
||||
|
||||
|
||||
class CreateUAsset(LegacyCreator):
|
||||
"""UAsset."""
|
||||
|
||||
name = "UAsset"
|
||||
label = "UAsset"
|
||||
family = "uasset"
|
||||
icon = "cube"
|
||||
|
||||
root = "/Game/OpenPype"
|
||||
suffix = "_INS"
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(CreateUAsset, self).__init__(*args, **kwargs)
|
||||
|
||||
def process(self):
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
|
||||
subset = self.data["subset"]
|
||||
path = f"{self.root}/PublishInstances/"
|
||||
|
||||
unreal.EditorAssetLibrary.make_directory(path)
|
||||
|
||||
selection = []
|
||||
if (self.options or {}).get("useSelection"):
|
||||
sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
|
||||
selection = [a.get_path_name() for a in sel_objects]
|
||||
|
||||
if len(selection) != 1:
|
||||
raise RuntimeError("Please select only one object.")
|
||||
|
||||
obj = selection[0]
|
||||
|
||||
asset = ar.get_asset_by_object_path(obj).get_asset()
|
||||
sys_path = unreal.SystemLibrary.get_system_path(asset)
|
||||
|
||||
if not sys_path:
|
||||
raise RuntimeError(
|
||||
f"{Path(obj).name} is not on the disk. Likely it needs to"
|
||||
"be saved first.")
|
||||
|
||||
if Path(sys_path).suffix != ".uasset":
|
||||
raise RuntimeError(f"{Path(sys_path).name} is not a UAsset.")
|
||||
|
||||
unreal.log("selection: {}".format(selection))
|
||||
container_name = f"{subset}{self.suffix}"
|
||||
pipeline.create_publish_instance(
|
||||
instance=container_name, path=path)
|
||||
|
||||
data = self.data.copy()
|
||||
data["members"] = selection
|
||||
|
||||
pipeline.imprint(f"{path}/{container_name}", data)
|
||||
openpype/hosts/unreal/plugins/load/load_uasset.py (new file, 145 lines)
|
|
@ -0,0 +1,145 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Load UAsset."""
|
||||
from pathlib import Path
|
||||
import shutil
|
||||
|
||||
from openpype.pipeline import (
|
||||
get_representation_path,
|
||||
AVALON_CONTAINER_ID
|
||||
)
|
||||
from openpype.hosts.unreal.api import plugin
|
||||
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
|
||||
import unreal # noqa
|
||||
|
||||
|
||||
class UAssetLoader(plugin.Loader):
|
||||
"""Load UAsset."""
|
||||
|
||||
families = ["uasset"]
|
||||
label = "Load UAsset"
|
||||
representations = ["uasset"]
|
||||
icon = "cube"
|
||||
color = "orange"
|
||||
|
||||
def load(self, context, name, namespace, options):
|
||||
"""Load and containerise representation into Content Browser.
|
||||
|
||||
Args:
|
||||
context (dict): application context
|
||||
name (str): subset name
|
||||
namespace (str): in Unreal this is basically path to container.
|
||||
This is not passed here, so namespace is set
|
||||
by `containerise()` because only then do we know the
|
||||
real path.
|
||||
options (dict): Those would be data to be imprinted. This is not
|
||||
used now, data are imprinted by `containerise()`.
|
||||
|
||||
Returns:
|
||||
list(str): list of container content
|
||||
"""
|
||||
|
||||
# Create directory for asset and OpenPype container
|
||||
root = "/Game/OpenPype/Assets"
|
||||
asset = context.get('asset').get('name')
|
||||
suffix = "_CON"
|
||||
if asset:
|
||||
asset_name = "{}_{}".format(asset, name)
|
||||
else:
|
||||
asset_name = "{}".format(name)
|
||||
|
||||
tools = unreal.AssetToolsHelpers().get_asset_tools()
|
||||
asset_dir, container_name = tools.create_unique_asset_name(
|
||||
"{}/{}/{}".format(root, asset, name), suffix="")
|
||||
|
||||
container_name += suffix
|
||||
|
||||
unreal.EditorAssetLibrary.make_directory(asset_dir)
|
||||
|
||||
destination_path = asset_dir.replace(
|
||||
"/Game",
|
||||
Path(unreal.Paths.project_content_dir()).as_posix(),
|
||||
1)
|
||||
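# e.g. "/Game/OpenPype/Assets/hero/modelMain" maps to
# "<project>/Content/OpenPype/Assets/hero/modelMain" (example path assumed)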
|
||||
shutil.copy(self.fname, f"{destination_path}/{name}.uasset")
|
||||
|
||||
# Create Asset Container
|
||||
unreal_pipeline.create_container(
|
||||
container=container_name, path=asset_dir)
|
||||
|
||||
data = {
|
||||
"schema": "openpype:container-2.0",
|
||||
"id": AVALON_CONTAINER_ID,
|
||||
"asset": asset,
|
||||
"namespace": asset_dir,
|
||||
"container_name": container_name,
|
||||
"asset_name": asset_name,
|
||||
"loader": str(self.__class__.__name__),
|
||||
"representation": context["representation"]["_id"],
|
||||
"parent": context["representation"]["parent"],
|
||||
"family": context["representation"]["context"]["family"]
|
||||
}
|
||||
unreal_pipeline.imprint(
|
||||
"{}/{}".format(asset_dir, container_name), data)
|
||||
|
||||
asset_content = unreal.EditorAssetLibrary.list_assets(
|
||||
asset_dir, recursive=True, include_folder=True
|
||||
)
|
||||
|
||||
for a in asset_content:
|
||||
unreal.EditorAssetLibrary.save_asset(a)
|
||||
|
||||
return asset_content
|
||||
|
||||
def update(self, container, representation):
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
|
||||
asset_dir = container["namespace"]
|
||||
name = representation["context"]["subset"]
|
||||
|
||||
destination_path = asset_dir.replace(
|
||||
"/Game",
|
||||
Path(unreal.Paths.project_content_dir()).as_posix(),
|
||||
1)
|
||||
|
||||
asset_content = unreal.EditorAssetLibrary.list_assets(
|
||||
asset_dir, recursive=False, include_folder=True
|
||||
)
|
||||
|
||||
for asset in asset_content:
|
||||
obj = ar.get_asset_by_object_path(asset).get_asset()
|
||||
if not obj.get_class().get_name() == 'AssetContainer':
|
||||
unreal.EditorAssetLibrary.delete_asset(asset)
|
||||
|
||||
update_filepath = get_representation_path(representation)
|
||||
|
||||
shutil.copy(update_filepath, f"{destination_path}/{name}.uasset")
|
||||
|
||||
container_path = "{}/{}".format(container["namespace"],
|
||||
container["objectName"])
|
||||
# update metadata
|
||||
unreal_pipeline.imprint(
|
||||
container_path,
|
||||
{
|
||||
"representation": str(representation["_id"]),
|
||||
"parent": str(representation["parent"])
|
||||
})
|
||||
|
||||
asset_content = unreal.EditorAssetLibrary.list_assets(
|
||||
asset_dir, recursive=True, include_folder=True
|
||||
)
|
||||
|
||||
for a in asset_content:
|
||||
unreal.EditorAssetLibrary.save_asset(a)
|
||||
|
||||
def remove(self, container):
|
||||
path = container["namespace"]
|
||||
parent_path = Path(path).parent.as_posix()
|
||||
|
||||
unreal.EditorAssetLibrary.delete_directory(path)
|
||||
|
||||
asset_content = unreal.EditorAssetLibrary.list_assets(
|
||||
parent_path, recursive=False
|
||||
)
|
||||
|
||||
if len(asset_content) == 0:
|
||||
unreal.EditorAssetLibrary.delete_directory(parent_path)
|
||||
|
|
@ -25,9 +25,13 @@ class CollectInstances(pyblish.api.ContextPlugin):
|
|||
def process(self, context):
|
||||
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
class_name = ["/Script/OpenPype",
|
||||
"AssetContainer"] if UNREAL_VERSION.major == 5 and \
|
||||
UNREAL_VERSION.minor > 0 else "OpenPypePublishInstance" # noqa
|
||||
class_name = [
|
||||
"/Script/OpenPype",
|
||||
"OpenPypePublishInstance"
|
||||
] if (
|
||||
UNREAL_VERSION.major == 5
|
||||
and UNREAL_VERSION.minor > 0
|
||||
) else "OpenPypePublishInstance" # noqa
|
||||
instance_containers = ar.get_assets_by_class(class_name, True)
|
||||
|
||||
for container_data in instance_containers:
|
||||
|
|
|
|||
openpype/hosts/unreal/plugins/publish/extract_uasset.py (new file, 42 lines)
|
|
@ -0,0 +1,42 @@
|
|||
from pathlib import Path
|
||||
import shutil
|
||||
|
||||
import unreal
|
||||
|
||||
from openpype.pipeline import publish
|
||||
|
||||
|
||||
class ExtractUAsset(publish.Extractor):
|
||||
"""Extract a UAsset."""
|
||||
|
||||
label = "Extract UAsset"
|
||||
hosts = ["unreal"]
|
||||
families = ["uasset"]
|
||||
optional = True
|
||||
|
||||
def process(self, instance):
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
|
||||
self.log.info("Performing extraction..")
|
||||
|
||||
staging_dir = self.staging_dir(instance)
|
||||
filename = "{}.uasset".format(instance.name)
|
||||
|
||||
obj = instance[0]
|
||||
|
||||
asset = ar.get_asset_by_object_path(obj).get_asset()
|
||||
sys_path = unreal.SystemLibrary.get_system_path(asset)
|
||||
filename = Path(sys_path).name
|
||||
|
||||
shutil.copy(sys_path, staging_dir)
|
||||
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = []
|
||||
|
||||
representation = {
|
||||
'name': 'uasset',
|
||||
'ext': 'uasset',
|
||||
'files': filename,
|
||||
"stagingDir": staging_dir,
|
||||
}
|
||||
instance.data["representations"].append(representation)
|
||||
|
|
@ -0,0 +1,41 @@
|
|||
import unreal
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class ValidateNoDependencies(pyblish.api.InstancePlugin):
|
||||
"""Ensure that the uasset has no dependencies
|
||||
|
||||
The uasset is checked for dependencies. If there are any, the instance
|
||||
cannot be published.
|
||||
"""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
label = "Check no dependencies"
|
||||
families = ["uasset"]
|
||||
hosts = ["unreal"]
|
||||
optional = True
|
||||
|
||||
def process(self, instance):
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
all_dependencies = []
|
||||
|
||||
for obj in instance[:]:
|
||||
asset = ar.get_asset_by_object_path(obj)
|
||||
dependencies = ar.get_dependencies(
|
||||
asset.package_name,
|
||||
unreal.AssetRegistryDependencyOptions(
|
||||
include_soft_package_references=False,
|
||||
include_hard_package_references=True,
|
||||
include_searchable_names=False,
|
||||
include_soft_management_references=False,
|
||||
include_hard_management_references=False
|
||||
))
|
||||
if dependencies:
|
||||
for dep in dependencies:
|
||||
if str(dep).startswith("/Game/"):
|
||||
all_dependencies.append(str(dep))
|
||||
|
||||
if all_dependencies:
|
||||
raise RuntimeError(
|
||||
f"Dependencies found: {all_dependencies}")
|
||||
|
|
@ -92,7 +92,9 @@ class FileTransaction(object):
|
|||
def process(self):
|
||||
# Backup any existing files
|
||||
for dst, (src, _) in self._transfers.items():
|
||||
if dst == src or not os.path.exists(dst):
|
||||
self.log.debug("Checking file ... {} -> {}".format(src, dst))
|
||||
path_same = self._same_paths(src, dst)
|
||||
if path_same or not os.path.exists(dst):
|
||||
continue
|
||||
|
||||
# Backup original file
|
||||
|
|
@ -105,7 +107,8 @@ class FileTransaction(object):
|
|||
|
||||
# Copy the files to transfer
|
||||
for dst, (src, opts) in self._transfers.items():
|
||||
if dst == src:
|
||||
path_same = self._same_paths(src, dst)
|
||||
if path_same:
|
||||
self.log.debug(
|
||||
"Source and destionation are same files {} -> {}".format(
|
||||
src, dst))
|
||||
|
|
@ -182,3 +185,10 @@ class FileTransaction(object):
|
|||
else:
|
||||
self.log.critical("An unexpected error occurred.")
|
||||
six.reraise(*sys.exc_info())
|
||||
|
||||
def _same_paths(self, src, dst):
|
||||
# handles same paths but with C:/project vs c:/project
|
||||
if os.path.exists(src) and os.path.exists(dst):
|
||||
return os.path.samefile(src, dst)
|
||||
|
||||
return src == dst
|
||||
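What `_same_paths` buys over plain string equality (a sketch with illustrative paths):

import os

src = "C:/project/shot.ma"
dst = "c:/project/shot.ma"  # same file on a case-insensitive filesystem
if os.path.exists(src) and os.path.exists(dst):
    same = os.path.samefile(src, dst)  # resolves case and links -> True
else:
    same = src == dst  # fall back to string comparison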
|
|
|
|||
|
|
@ -400,6 +400,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
|
||||
label = "Submit to Deadline"
|
||||
order = pyblish.api.IntegratorOrder + 0.1
|
||||
import_reference = False
|
||||
use_published = True
|
||||
asset_dependencies = False
|
||||
|
||||
|
|
@ -424,7 +425,11 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
|
||||
file_path = None
|
||||
if self.use_published:
|
||||
file_path = self.from_published_scene()
|
||||
if not self.import_reference:
|
||||
file_path = self.from_published_scene()
|
||||
else:
|
||||
self.log.info("use the scene with imported reference for rendering") # noqa
|
||||
file_path = context.data["currentFile"]
|
||||
|
||||
# fallback if nothing was set
|
||||
if not file_path:
|
||||
|
|
@ -516,7 +521,6 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
published.
|
||||
|
||||
"""
|
||||
|
||||
instance = self._instance
|
||||
workfile_instance = self._get_workfile_instance(instance.context)
|
||||
if workfile_instance is None:
|
||||
|
|
@ -524,7 +528,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
|
||||
# determine published path from Anatomy.
|
||||
template_data = workfile_instance.data.get("anatomyData")
|
||||
rep = workfile_instance.data.get("representations")[0]
|
||||
rep = workfile_instance.data["representations"][0]
|
||||
template_data["representation"] = rep.get("name")
|
||||
template_data["ext"] = rep.get("ext")
|
||||
template_data["comment"] = None
|
||||
|
|
|
|||
|
|
@ -3,6 +3,7 @@ import socket
|
|||
import getpass
|
||||
|
||||
from openpype_modules.ftrack.lib import BaseAction
|
||||
from openpype_modules.ftrack.ftrack_server.lib import get_host_ip
|
||||
|
||||
|
||||
class ActionWhereIRun(BaseAction):
|
||||
|
|
@ -53,8 +54,7 @@ class ActionWhereIRun(BaseAction):
|
|||
try:
|
||||
host_name = socket.gethostname()
|
||||
msgs["Hostname"] = host_name
|
||||
host_ip = socket.gethostbyname(host_name)
|
||||
msgs["IP"] = host_ip
|
||||
msgs["IP"] = get_host_ip() or "N/A"
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
|
|
|
|||
|
|
@ -26,6 +26,7 @@ from openpype_modules.ftrack import (
|
|||
)
|
||||
from openpype_modules.ftrack.lib import credentials
|
||||
from openpype_modules.ftrack.ftrack_server import socket_thread
|
||||
from openpype_modules.ftrack.ftrack_server.lib import get_host_ip
|
||||
|
||||
|
||||
class MongoPermissionsError(Exception):
|
||||
|
|
@ -245,11 +246,13 @@ def main_loop(ftrack_url):
|
|||
)
|
||||
|
||||
host_name = socket.gethostname()
|
||||
host_ip = get_host_ip()
|
||||
|
||||
main_info = [
|
||||
["created_at", datetime.datetime.now().strftime("%Y.%m.%d %H:%M:%S")],
|
||||
["Username", getpass.getuser()],
|
||||
["Host Name", host_name],
|
||||
["Host IP", socket.gethostbyname(host_name)],
|
||||
["Host IP", host_ip or "N/A"],
|
||||
["OpenPype executable", get_openpype_execute_args()[-1]],
|
||||
["OpenPype version", get_openpype_version() or "N/A"],
|
||||
["OpenPype build version", get_build_version() or "N/A"]
|
||||
|
|
|
|||
|
|
@ -9,8 +9,9 @@ import time
|
|||
import queue
|
||||
import collections
|
||||
import appdirs
|
||||
import pymongo
|
||||
import socket
|
||||
|
||||
import pymongo
|
||||
import requests
|
||||
import ftrack_api
|
||||
import ftrack_api.session
|
||||
|
|
@ -32,6 +33,16 @@ TOPIC_STATUS_SERVER = "openpype.event.server.status"
|
|||
TOPIC_STATUS_SERVER_RESULT = "openpype.event.server.status.result"
|
||||
|
||||
|
||||
def get_host_ip():
|
||||
host_name = socket.gethostname()
|
||||
try:
|
||||
return socket.gethostbyname(host_name)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
return None
|
||||
|
||||
|
||||
class SocketBaseEventHub(ftrack_api.event.hub.EventHub):
|
||||
|
||||
hearbeat_msg = b"hearbeat"
|
||||
|
|
|
|||
|
|
@ -129,8 +129,8 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
|
|||
if not note_text.solved:
|
||||
self.log.warning((
|
||||
"Note template require more keys then can be provided."
|
||||
"\nTemplate: {}\nData: {}"
|
||||
).format(template, format_data))
|
||||
"\nTemplate: {}\nMissing values for keys:{}\nData: {}"
|
||||
).format(template, note_text.missing_keys, format_data))
|
||||
continue
|
||||
|
||||
if not note_text:
|
||||
|
|
|
|||
|
|
@ -15,7 +15,8 @@ from openpype_modules.ftrack.ftrack_server.lib import (
|
|||
SocketSession,
|
||||
StatusEventHub,
|
||||
TOPIC_STATUS_SERVER,
|
||||
TOPIC_STATUS_SERVER_RESULT
|
||||
TOPIC_STATUS_SERVER_RESULT,
|
||||
get_host_ip
|
||||
)
|
||||
from openpype.lib import (
|
||||
Logger,
|
||||
|
|
@ -29,10 +30,10 @@ log = Logger.get_logger("Event storer")
|
|||
action_identifier = (
|
||||
"event.server.status" + os.environ["FTRACK_EVENT_SUB_ID"]
|
||||
)
|
||||
host_ip = socket.gethostbyname(socket.gethostname())
|
||||
host_ip = get_host_ip()
|
||||
action_data = {
|
||||
"label": "OpenPype Admin",
|
||||
"variant": "- Event server Status ({})".format(host_ip),
|
||||
"variant": "- Event server Status ({})".format(host_ip or "IP N/A"),
|
||||
"description": "Get Infromation about event server",
|
||||
"actionIdentifier": action_identifier
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,10 +1,12 @@
|
|||
import pyblish.api
|
||||
|
||||
from openpype.lib.profiles_filtering import filter_profiles
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.lib import attribute_definitions
|
||||
from openpype.pipeline import OpenPypePyblishPluginMixin
|
||||
|
||||
|
||||
class CollectSlackFamilies(pyblish.api.InstancePlugin):
|
||||
class CollectSlackFamilies(pyblish.api.InstancePlugin,
|
||||
OpenPypePyblishPluginMixin):
|
||||
"""Collect family for Slack notification
|
||||
|
||||
Expects configured profile in
|
||||
|
|
@ -17,6 +19,18 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin):
|
|||
|
||||
profiles = None
|
||||
|
||||
@classmethod
|
||||
def get_attribute_defs(cls):
|
||||
return [
|
||||
attribute_definitions.TextDef(
|
||||
# Key under which it will be stored
|
||||
"additional_message",
|
||||
# Use plugin label as label for attribute
|
||||
label="Additional Slack message",
|
||||
placeholder="<Only if Slack is configured>"
|
||||
)
|
||||
]
|
||||
|
||||
def process(self, instance):
|
||||
task_data = instance.data["anatomyData"].get("task", {})
|
||||
family = self.main_family_from_instance(instance)
|
||||
|
|
@ -55,6 +69,11 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin):
|
|||
["token"])
|
||||
instance.data["slack_token"] = slack_token
|
||||
|
||||
attribute_values = self.get_attr_values_from_data(instance.data)
|
||||
additional_message = attribute_values.get("additional_message")
|
||||
if additional_message:
|
||||
instance.data["slack_additional_message"] = additional_message
|
||||
|
||||
def main_family_from_instance(self, instance): # TODO yank from integrate
|
||||
"""Returns main family of entered instance."""
|
||||
family = instance.data.get("family")
|
||||
|
|
|
|||
|
|
@ -1,8 +1,11 @@
|
|||
import os
|
||||
import re
|
||||
import six
|
||||
import pyblish.api
|
||||
import copy
|
||||
from datetime import datetime
|
||||
from abc import ABCMeta, abstractmethod
|
||||
import time
|
||||
|
||||
from openpype.client import OpenPypeMongoConnection
|
||||
from openpype.lib.plugin_tools import prepare_template_data
|
||||
|
|
@ -31,11 +34,15 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
|
|||
review_path = self._get_review_path(instance)
|
||||
|
||||
publish_files = set()
|
||||
message = ''
|
||||
additional_message = instance.data.get("slack_additional_message")
|
||||
token = instance.data["slack_token"]
|
||||
if additional_message:
|
||||
message = "{} \n".format(additional_message)
|
||||
for message_profile in instance.data["slack_channel_message_profiles"]:
|
||||
message = self._get_filled_message(message_profile["message"],
|
||||
instance,
|
||||
review_path)
|
||||
self.log.debug("message:: {}".format(message))
|
||||
message += self._get_filled_message(message_profile["message"],
|
||||
instance,
|
||||
review_path)
|
||||
if not message:
|
||||
return
|
||||
|
||||
|
|
@ -49,18 +56,16 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
|
|||
project = instance.context.data["anatomyData"]["project"]["code"]
|
||||
for channel in message_profile["channels"]:
|
||||
if six.PY2:
|
||||
msg_id, file_ids = \
|
||||
self._python2_call(instance.data["slack_token"],
|
||||
channel,
|
||||
message,
|
||||
publish_files)
|
||||
client = SlackPython2Operations(token, self.log)
|
||||
else:
|
||||
msg_id, file_ids = \
|
||||
self._python3_call(instance.data["slack_token"],
|
||||
channel,
|
||||
message,
|
||||
publish_files)
|
||||
client = SlackPython3Operations(token, self.log)
|
||||
|
||||
users, groups = client.get_users_and_groups()
|
||||
message = self._translate_users(message, users, groups)
|
||||
|
||||
msg_id, file_ids = client.send_message(channel,
|
||||
message,
|
||||
publish_files)
|
||||
if not msg_id:
|
||||
return
|
||||
|
||||
|
|
@ -132,14 +137,14 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
|
|||
fill_key = "task[{}]".format(key)
|
||||
fill_pairs.append((fill_key, value))
|
||||
|
||||
self.log.debug("fill_pairs ::{}".format(fill_pairs))
|
||||
multiple_case_variants = prepare_template_data(fill_pairs)
|
||||
fill_data.update(multiple_case_variants)
|
||||
|
||||
message = None
|
||||
message = ''
|
||||
try:
|
||||
message = message_templ.format(**fill_data)
|
||||
message = self._escape_missing_keys(message_templ, fill_data).\
|
||||
format(**fill_data)
|
||||
except Exception:
|
||||
# shouldn't happen
|
||||
self.log.warning(
|
||||
"Some keys are missing in {}".format(message_templ),
|
||||
exc_info=True)
|
||||
|
|
@ -162,27 +167,249 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
|
|||
|
||||
def _get_review_path(self, instance):
|
||||
"""Returns abs url for review if present in instance repres"""
|
||||
published_path = None
|
||||
review_path = None
|
||||
for repre in instance.data.get("representations", []):
|
||||
tags = repre.get('tags', [])
|
||||
if (repre.get("review")
|
||||
or "review" in tags
|
||||
or "burnin" in tags):
|
||||
if os.path.exists(repre["published_path"]):
|
||||
published_path = repre["published_path"]
|
||||
repre_review_path = (
|
||||
repre.get("published_path") or
|
||||
os.path.join(repre["stagingDir"], repre["files"])
|
||||
)
|
||||
if os.path.exists(repre_review_path):
|
||||
review_path = repre_review_path
|
||||
if "burnin" in tags: # burnin has precedence if exists
|
||||
break
|
||||
return published_path
|
||||
return review_path
|
||||
|
||||
def _python2_call(self, token, channel, message, publish_files):
|
||||
from slackclient import SlackClient
|
||||
def _get_user_id(self, users, user_name):
|
||||
"""Returns internal slack id for user name"""
|
||||
user_id = None
|
||||
user_name_lower = user_name.lower()
|
||||
for user in users:
|
||||
if (not user.get("deleted") and
|
||||
(user_name_lower == user["name"].lower() or
|
||||
# bots don't have display_name
|
||||
user_name_lower == user["profile"].get("display_name",
|
||||
'').lower() or
|
||||
user_name_lower == user["profile"].get("real_name",
|
||||
'').lower())):
|
||||
user_id = user["id"]
|
||||
break
|
||||
return user_id
|
||||
|
||||
def _get_group_id(self, groups, group_name):
|
||||
"""Returns internal group id for string name"""
|
||||
group_id = None
|
||||
for group in groups:
|
||||
if (not group.get("date_delete") and
|
||||
(group_name.lower() == group["name"].lower() or
|
||||
group_name.lower() == group["handle"])):
|
||||
group_id = group["id"]
|
||||
break
|
||||
return group_id
|
||||
|
||||
def _translate_users(self, message, users, groups):
|
||||
"""Replace all occurences of @mentions with proper <@name> format."""
|
||||
matches = re.findall(r"(?<!<)@[^ ]+", message)
|
||||
in_quotes = re.findall(r"(?<!<)(['\"])(@[^'\"]+)", message)
|
||||
for item in in_quotes:
|
||||
matches.append(item[1])
|
||||
if not matches:
|
||||
return message
|
||||
|
||||
for orig_user in matches:
|
||||
user_name = orig_user.replace("@", '')
|
||||
slack_id = self._get_user_id(users, user_name)
|
||||
mention = None
|
||||
if slack_id:
|
||||
mention = "<@{}>".format(slack_id)
|
||||
else:
|
||||
slack_id = self._get_group_id(groups, user_name)
|
||||
if slack_id:
|
||||
mention = "<!subteam^{}>".format(slack_id)
|
||||
if mention:
|
||||
message = message.replace(orig_user, mention)
|
||||
|
||||
return message
|
||||
|
||||
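A short illustration of what '_translate_users' produces; the Slack IDs here are hypothetical:

# message: "@john.doe please check 'v003'"
# if john.doe resolves to user id U123 -> "<@U123> please check 'v003'"
# if "@admins" resolves to group id S99 -> "<!subteam^S99> ..."
# unresolved mentions are left in the message untouched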
    def _escape_missing_keys(self, message, fill_data):
        """Double escapes placeholder which are missing in 'fill_data'"""
        placeholder_keys = re.findall(r"\{([^}]+)\}", message)

        fill_keys = []
        for key, value in fill_data.items():
            fill_keys.append(key)
            if isinstance(value, dict):
                for child_key in value.keys():
                    fill_keys.append("{}[{}]".format(key, child_key))

        not_matched = set(placeholder_keys) - set(fill_keys)

        for not_matched_item in not_matched:
            message = message.replace("{}".format(not_matched_item),
                                      "{{{}}}".format(not_matched_item))

        return message
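A minimal sketch of the double-escaping trick, assuming a template with one known and one unknown key:

template = "Published {subset} for {unknown_key}"
# 'unknown_key' is not in fill_data, so it is escaped to '{{unknown_key}}'
# and str.format() then renders it back as a literal placeholder:
escaped = "Published {subset} for {{unknown_key}}"
escaped.format(subset="modelMain")  # -> "Published modelMain for {unknown_key}"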
@six.add_metaclass(ABCMeta)
class AbstractSlackOperations:

    @abstractmethod
    def _get_users_list(self):
        """Return response with user list, different methods Python 2 vs 3"""
        raise NotImplementedError

    @abstractmethod
    def _get_usergroups_list(self):
        """Return response with usergroup list, different methods Python 2 vs 3"""
        raise NotImplementedError

    @abstractmethod
    def get_users_and_groups(self):
        """Return users and groups, different retry in Python 2 vs 3"""
        raise NotImplementedError

    @abstractmethod
    def send_message(self, channel, message, publish_files):
        """Sends message to channel, different methods in Python 2 vs 3"""
        pass

    def _get_users(self):
        """Parse users.list response into list of users (dicts)"""
        first = True
        next_page = None
        users = []
        while first or next_page:
            response = self._get_users_list()
            first = False
            next_page = response.get("response_metadata").get("next_cursor")
            for user in response.get("members"):
                users.append(user)

        return users

    def _get_groups(self):
        """Parses usergroups.list response into list of groups (dicts)"""
        response = self._get_usergroups_list()
        groups = []
        for group in response.get("usergroups"):
            groups.append(group)
        return groups

    def _enrich_error(self, error_str, channel):
        """Enhance known errors with more helpful notations."""
        if 'not_in_channel' in error_str:
            # there is no file.write.public scope, app must be explicitly in
            # the channel
            msg = " - application must be added to channel '{}'.".format(channel)
            error_str += msg + " Ask Slack admin."
        return error_str
class SlackPython3Operations(AbstractSlackOperations):

    def __init__(self, token, log):
        from slack_sdk import WebClient

        self.client = WebClient(token=token)
        self.log = log

    def _get_users_list(self):
        return self.client.users_list()

    def _get_usergroups_list(self):
        return self.client.usergroups_list()

    def get_users_and_groups(self):
        from slack_sdk.errors import SlackApiError
        while True:
            try:
                users = self._get_users()
                groups = self._get_groups()
                break
            except SlackApiError as e:
                retry_after = e.response.headers.get("Retry-After")
                if retry_after:
                    print(
                        "Rate limit hit, sleeping for {}".format(retry_after))
                    time.sleep(int(retry_after))
                else:
                    self.log.warning("Cannot pull user info, "
                                     "mentions won't work", exc_info=True)
                    return [], []

        return users, groups

    def send_message(self, channel, message, publish_files):
        from slack_sdk.errors import SlackApiError
        try:
            attachment_str = "\n\n Attachment links: \n"
            file_ids = []
            for published_file in publish_files:
                response = self.client.files_upload(
                    file=published_file,
                    filename=os.path.basename(published_file))
                attachment_str += "\n<{}|{}>".format(
                    response["file"]["permalink"],
                    os.path.basename(published_file))
                file_ids.append(response["file"]["id"])

            if publish_files:
                message += attachment_str

            response = self.client.chat_postMessage(
                channel=channel,
                text=message
            )
            return response.data["ts"], file_ids
        except SlackApiError as e:
            # You will get a SlackApiError if "ok" is False
            error_str = self._enrich_error(str(e.response["error"]), channel)
            self.log.warning("Error happened {}".format(error_str))
        except Exception as e:
            error_str = self._enrich_error(str(e), channel)
            self.log.warning("Not SlackAPI error", exc_info=True)

        return None, []
class SlackPython2Operations(AbstractSlackOperations):

    def __init__(self, token, log):
        from slackclient import SlackClient

        self.client = SlackClient(token=token)
        self.log = log

    def _get_users_list(self):
        return self.client.api_call("users.list")

    def _get_usergroups_list(self):
        return self.client.api_call("usergroups.list")

    def get_users_and_groups(self):
        while True:
            try:
                users = self._get_users()
                groups = self._get_groups()
                break
            except Exception:
                self.log.warning("Cannot pull user info, "
                                 "mentions won't work", exc_info=True)
                return [], []

        return users, groups

    def send_message(self, channel, message, publish_files):
        try:
            client = SlackClient(token)
            attachment_str = "\n\n Attachment links: \n"
            file_ids = []
            for p_file in publish_files:
                with open(p_file, 'rb') as pf:
                    response = client.api_call(
                    response = self.client.api_call(
                        "files.upload",
                        file=pf,
                        channel=channel,
@@ -203,7 +430,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
            if publish_files:
                message += attachment_str

            response = client.api_call(
            response = self.client.api_call(
                "chat.postMessage",
                channel=channel,
                text=message
@@ -220,46 +447,3 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
            self.log.warning("Error happened: {}".format(error_str))

        return None, []

    def _python3_call(self, token, channel, message, publish_files):
        from slack_sdk import WebClient
        from slack_sdk.errors import SlackApiError
        try:
            client = WebClient(token=token)
            attachment_str = "\n\n Attachment links: \n"
            file_ids = []
            for published_file in publish_files:
                response = client.files_upload(
                    file=published_file,
                    filename=os.path.basename(published_file))
                attachment_str += "\n<{}|{}>".format(
                    response["file"]["permalink"],
                    os.path.basename(published_file))
                file_ids.append(response["file"]["id"])

            if publish_files:
                message += attachment_str

            response = client.chat_postMessage(
                channel=channel,
                text=message
            )
            return response.data["ts"], file_ids
        except SlackApiError as e:
            # You will get a SlackApiError if "ok" is False
            error_str = self._enrich_error(str(e.response["error"]), channel)
            self.log.warning("Error happened {}".format(error_str))
        except Exception as e:
            error_str = self._enrich_error(str(e), channel)
            self.log.warning("Not SlackAPI error", exc_info=True)

        return None, []

    def _enrich_error(self, error_str, channel):
        """Enhance known errors with more helpful notations."""
        if 'not_in_channel' in error_str:
            # there is no file.write.public scope, app must be explicitly in
            # the channel
            msg = " - application must added to channel '{}'.".format(channel)
            error_str += msg + " Ask Slack admin."
        return error_str
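Taken together, the plugin now delegates everything to one of the two operation classes. A minimal sketch of the resulting call sequence (token, channel and the mention are placeholders):

client = (SlackPython2Operations(token, log) if six.PY2
          else SlackPython3Operations(token, log))
users, groups = client.get_users_and_groups()
message = "@john.doe new version published"  # hypothetical mention
msg_id, file_ids = client.send_message("#publishes", message, [])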
@@ -165,7 +165,7 @@ class DropboxHandler(AbstractProvider):
        Returns:
            (boolean)
        """
        return self.presets["enabled"] and self.dbx is not None
        return self.presets.get("enabled") and self.dbx is not None

    @classmethod
    def get_configurable_items(cls):

@@ -119,7 +119,7 @@ class GDriveHandler(AbstractProvider):
        Returns:
            (boolean)
        """
        return self.presets["enabled"] and self.service is not None
        return self.presets.get("enabled") and self.service is not None

    @classmethod
    def get_system_settings_schema(cls):
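The '.get("enabled")' variant is the defensive form of the same check; a two-line sketch of the difference:

presets = {}  # a site configured without an 'enabled' key
presets["enabled"]      # raises KeyError and breaks is_active()
presets.get("enabled")  # returns None, so is_active() just reports False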
@@ -169,7 +169,7 @@ def resolve_paths(module, file_path, project_name,
    return local_file_path, remote_file_path


def site_is_working(module, project_name, site_name):
def _site_is_working(module, project_name, site_name, site_config):
    """
    Confirm that 'site_name' is configured correctly for 'project_name'.

@@ -179,54 +179,17 @@ def site_is_working(module, project_name, site_name):
        module (SyncServerModule)
        project_name(string):
        site_name(string):
        site_config (dict): configuration for site from Settings
    Returns
        (bool)
    """
    if _get_configured_sites(module, project_name).get(site_name):
        return True
    return False
    provider = module.get_provider_for_site(site=site_name)
    handler = lib.factory.get_provider(provider,
                                       project_name,
                                       site_name,
                                       presets=site_config)


def _get_configured_sites(module, project_name):
    """
    Loops through settings and looks for configured sites and checks
    its handlers for particular 'project_name'.

    Args:
        project_setting(dict): dictionary from Settings
        only_project_name(string, optional): only interested in
            particular project
    Returns:
        (dict of dict)
        {'ProjectA': {'studio':True, 'gdrive':False}}
    """
    settings = module.get_sync_project_setting(project_name)
    return _get_configured_sites_from_setting(module, project_name, settings)


def _get_configured_sites_from_setting(module, project_name, project_setting):
    if not project_setting.get("enabled"):
        return {}

    initiated_handlers = {}
    configured_sites = {}
    all_sites = module._get_default_site_configs()
    all_sites.update(project_setting.get("sites"))
    for site_name, config in all_sites.items():
        provider = module.get_provider_for_site(site=site_name)
        handler = initiated_handlers.get((provider, site_name))
        if not handler:
            handler = lib.factory.get_provider(provider,
                                               project_name,
                                               site_name,
                                               presets=config)
            initiated_handlers[(provider, site_name)] = \
                handler

        if handler.is_active():
            configured_sites[site_name] = True

    return configured_sites
    return handler.is_active()


class SyncServerThread(threading.Thread):
@@ -288,7 +251,8 @@ class SyncServerThread(threading.Thread):
            for project_name in enabled_projects:
                preset = self.module.sync_project_settings[project_name]

                local_site, remote_site = self._working_sites(project_name)
                local_site, remote_site = self._working_sites(project_name,
                                                              preset)
                if not all([local_site, remote_site]):
                    continue

@@ -464,7 +428,7 @@ class SyncServerThread(threading.Thread):
            self.timer.cancel()
            self.timer = None

    def _working_sites(self, project_name):
    def _working_sites(self, project_name, sync_config):
        if self.module.is_project_paused(project_name):
            self.log.debug("Both sites same, skipping")
            return None, None

@@ -476,9 +440,12 @@ class SyncServerThread(threading.Thread):
                local_site, remote_site))
            return None, None

        configured_sites = _get_configured_sites(self.module, project_name)
        if not all([local_site in configured_sites,
                    remote_site in configured_sites]):
        local_site_config = sync_config.get('sites')[local_site]
        remote_site_config = sync_config.get('sites')[remote_site]
        if not all([_site_is_working(self.module, project_name, local_site,
                                     local_site_config),
                    _site_is_working(self.module, project_name, remote_site,
                                     remote_site_config)]):
            self.log.debug(
                "Some of the sites {} - {} is not working properly".format(
                    local_site, remote_site
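A minimal sketch of the new per-site check, assuming a project with 'studio' and 'gdrive' sites in its sync settings:

sites = preset["sites"]  # from sync_project_settings
ok = (_site_is_working(module, "ProjectA", "studio", sites["studio"])
      and _site_is_working(module, "ProjectA", "gdrive", sites["gdrive"]))
# only when both handlers report is_active() does syncing proceed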
@@ -1112,17 +1112,21 @@ class RootItem(FormatObject):
        result = False
        output = str(path)

        root_paths = list(self.cleaned_data.values())
        mod_path = self.clean_path(path)
        for root_path in root_paths:
        for root_os, root_path in self.cleaned_data.items():
            # Skip empty paths
            if not root_path:
                continue

            if mod_path.startswith(root_path):
            _mod_path = mod_path  # reset to original cleaned value
            if root_os == "windows":
                root_path = root_path.lower()
                _mod_path = _mod_path.lower()

            if _mod_path.startswith(root_path):
                result = True
                replacement = "{" + self.full_key() + "}"
                output = replacement + mod_path[len(root_path):]
                output = replacement + _mod_path[len(root_path):]
                break

        return (result, output)
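Why the lowercasing matters: Windows paths are case-insensitive, so the root match must not fail on drive-letter or folder casing. A hedged illustration (paths hypothetical):

root_path = "C:/projects"            # configured windows root
mod_path = "c:/Projects/PRJ/a.ma"    # path as reported by a host
mod_path.startswith(root_path)                   # False - case mismatch
mod_path.lower().startswith(root_path.lower())   # True - what the fix does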
@@ -608,7 +608,7 @@ def discover_legacy_creator_plugins():
            plugin.apply_settings(project_settings, system_settings)
        except Exception:
            log.warning(
                "Failed to apply settings to loader {}".format(
                "Failed to apply settings to creator {}".format(
                    plugin.__name__
                ),
                exc_info=True

@@ -42,7 +42,9 @@ from openpype.pipeline.load import (
    get_contexts_for_repre_docs,
    load_with_repre_context,
)
from openpype.pipeline.create import get_legacy_creator_by_name
from openpype.pipeline.create import (
    discover_legacy_creator_plugins
)


class TemplateNotFound(Exception):

@@ -235,7 +237,14 @@ class AbstractTemplateBuilder(object):

    def get_creators_by_name(self):
        if self._creators_by_name is None:
            self._creators_by_name = get_legacy_creator_by_name()
            self._creators_by_name = {}
            for creator in discover_legacy_creator_plugins():
                creator_name = creator.__name__
                if creator_name in self._creators_by_name:
                    raise KeyError(
                        "Duplicated creator name {} !".format(creator_name)
                    )
                self._creators_by_name[creator_name] = creator
        return self._creators_by_name

    def get_shared_data(self, key):

@@ -401,7 +410,12 @@ class AbstractTemplateBuilder(object):
            key=lambda i: i.order
        ))

    def build_template(self, template_path=None, level_limit=None):
    def build_template(
        self,
        template_path=None,
        level_limit=None,
        keep_placeholders=None
    ):
        """Main callback for building workfile from template path.

        Todo:

@@ -410,16 +424,25 @@ class AbstractTemplateBuilder(object):

        Args:
            template_path (str): Path to a template file with placeholders.
                Template from settings 'get_template_path' used when not
                Template from settings 'get_template_preset' used when not
                passed.
            level_limit (int): Limit of populate loops. Related to
                'populate_scene_placeholders' method.
            keep_placeholders (bool): Add flag to placeholder data for
                hosts to decide if they want to remove
                placeholder after it is used.
        """
        template_preset = self.get_template_preset()

        if template_path is None:
            template_path = self.get_template_path()
            template_path = template_preset["path"]

        if keep_placeholders is None:
            keep_placeholders = template_preset["keep_placeholder"]

        self.import_template(template_path)
        self.populate_scene_placeholders(level_limit)
        self.populate_scene_placeholders(
            level_limit, keep_placeholders)

    def rebuild_template(self):
        """Go through existing placeholders in scene and update them.
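A brief usage sketch of the extended signature (the path is hypothetical); omitted arguments fall back to the settings-driven preset:

builder.build_template()  # path and keep_placeholder both come from settings
builder.build_template(
    template_path="/templates/shot_build.ma",  # hypothetical path
    keep_placeholders=False  # remove placeholders once they are processed
)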
@@ -489,7 +512,9 @@ class AbstractTemplateBuilder(object):
            plugin = plugins_by_identifier[identifier]
            plugin.prepare_placeholders(placeholders)

    def populate_scene_placeholders(self, level_limit=None):
    def populate_scene_placeholders(
        self, level_limit=None, keep_placeholders=None
    ):
        """Find placeholders in scene using plugins and process them.

        This should happen after 'import_template'.

@@ -505,6 +530,9 @@ class AbstractTemplateBuilder(object):

        Args:
            level_limit (int): Level of loops that can happen. Default is 1000.
            keep_placeholders (bool): Add flag to placeholder data for
                hosts to decide if they want to remove
                placeholder after it is used.
        """

        if not self.placeholder_plugins:

@@ -541,6 +569,11 @@ class AbstractTemplateBuilder(object):
                    " is already in progress."
                ))
                continue

            # add flag for keeping placeholders in scene
            # after they are processed
            placeholder.data["keep_placeholder"] = keep_placeholders

            filtered_placeholders.append(placeholder)

        self._prepare_placeholders(filtered_placeholders)
@@ -599,8 +632,8 @@ class AbstractTemplateBuilder(object):
            ["profiles"]
        )

    def get_template_path(self):
        """Unified way how template path is received using settings.
    def get_template_preset(self):
        """Unified way how template preset is received using settings.

        Method is dependent on '_get_build_profiles' which should return filter
        profiles to resolve path to a template. Default implementation looks

@@ -637,6 +670,13 @@ class AbstractTemplateBuilder(object):
            ).format(task_name, task_type, host_name))

        path = profile["path"]

        # switch to remove placeholders after they are used
        keep_placeholder = profile.get("keep_placeholder")
        # backward compatibility, since default is True
        if keep_placeholder is None:
            keep_placeholder = True

        if not path:
            raise TemplateLoadFailed((
                "Template path is not set.\n"
@@ -650,14 +690,24 @@ class AbstractTemplateBuilder(object):
            key: value
            for key, value in os.environ.items()
        }

        fill_data["root"] = anatomy.roots
        fill_data["project"] = {
            "name": project_name,
            "code": anatomy["attributes"]["code"]
        }

        result = StringTemplate.format_template(path, fill_data)
        if result.solved:
            path = result.normalized()

        if path and os.path.exists(path):
            self.log.info("Found template at: '{}'".format(path))
            return path
            return {
                "path": path,
                "keep_placeholder": keep_placeholder
            }

        solved_path = None
        while True:

@@ -683,7 +733,10 @@ class AbstractTemplateBuilder(object):

        self.log.info("Found template at: '{}'".format(solved_path))

        return solved_path
        return {
            "path": solved_path,
            "keep_placeholder": keep_placeholder
        }


@six.add_metaclass(ABCMeta)
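The method now returns a small preset dict instead of a bare path; a hedged example of the shape callers should expect:

preset = builder.get_template_preset()
# e.g. {"path": "/proj/templates/shot_template.ma",  # hypothetical path
#       "keep_placeholder": True}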
@@ -1002,7 +1055,13 @@ class PlaceholderItem(object):
        return self._log

    def __repr__(self):
        return "< {} {} >".format(self.__class__.__name__, self.name)
        name = None
        if hasattr(self, "name"):
            name = self.name
        if hasattr(self, "_scene_identifier"):
            name = self._scene_identifier

        return "< {} {} >".format(self.__class__.__name__, name)

    @property
    def order(self):
@@ -1426,6 +1485,173 @@ class PlaceholderLoadMixin(object):
        pass


class PlaceholderCreateMixin(object):
    """Mixin prepared for creating placeholder plugins.

    Implementation prepares options for placeholders with
    'get_create_plugin_options'.

    For placeholder population is implemented 'populate_create_placeholder'.

    PlaceholderItem can have implemented methods:
    - 'create_failed' - called when creating of an instance failed
    - 'create_succeed' - called when creating of an instance succeeded
    """

    def get_create_plugin_options(self, options=None):
        """Unified attribute definitions for create placeholder.

        Common function for placeholder plugins used for creating of
        publishable instances. Use it with 'get_placeholder_options'.

        Args:
            plugin (PlaceholderPlugin): Plugin used for creating of
                publish instances.
            options (Dict[str, Any]): Already available options which are used
                as defaults for attributes.

        Returns:
            List[AbstractAttrDef]: Attribute definitions common for create
                plugins.
        """

        creators_by_name = self.builder.get_creators_by_name()

        creator_items = [
            (creator_name, creator.label or creator_name)
            for creator_name, creator in creators_by_name.items()
        ]

        creator_items.sort(key=lambda i: i[1])
        options = options or {}
        return [
            attribute_definitions.UISeparatorDef(),
            attribute_definitions.UILabelDef("Main attributes"),
            attribute_definitions.UISeparatorDef(),

            attribute_definitions.EnumDef(
                "creator",
                label="Creator",
                default=options.get("creator"),
                items=creator_items,
                tooltip=(
                    "Creator"
                    "\nDefines what OpenPype creator will be used to"
                    " create publishable instance."
                    "\nUsable creator depends on current host's creator list."
                    "\nField is case sensitive."
                )
            ),
            attribute_definitions.TextDef(
                "create_variant",
                label="Variant",
                default=options.get("create_variant"),
                placeholder='Main',
                tooltip=(
                    "Creator"
                    "\nDefines variant name which will be used for "
                    "\ncompiling of subset name."
                )
            ),
            attribute_definitions.UISeparatorDef(),
            attribute_definitions.NumberDef(
                "order",
                label="Order",
                default=options.get("order") or 0,
                decimals=0,
                minimum=0,
                maximum=999,
                tooltip=(
                    "Order"
                    "\nOrder defines creating instance priority (0 to 999)"
                    "\nPriority rule is : \"lowest is first to load\"."
                )
            )
        ]
    def populate_create_placeholder(self, placeholder):
        """Create placeholder is going to create matching publishable instance.

        Args:
            placeholder (PlaceholderItem): Placeholder item with information
                about requested publishable instance.
        """
        creator_name = placeholder.data["creator"]
        create_variant = placeholder.data["create_variant"]

        creator_plugin = self.builder.get_creators_by_name()[creator_name]

        # create subset name
        project_name = legacy_io.Session["AVALON_PROJECT"]
        task_name = legacy_io.Session["AVALON_TASK"]
        asset_name = legacy_io.Session["AVALON_ASSET"]

        # get asset id
        asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
        assert asset_doc, "No current asset found in Session"
        asset_id = asset_doc['_id']

        subset_name = creator_plugin.get_subset_name(
            create_variant,
            task_name,
            asset_id,
            project_name
        )

        creator_data = {
            "creator_name": creator_name,
            "create_variant": create_variant,
            "subset_name": subset_name,
            "creator_plugin": creator_plugin
        }

        self._before_instance_create(placeholder)

        # compile subset name from variant
        try:
            creator_instance = creator_plugin(
                subset_name,
                asset_name
            ).process()

        except Exception:
            failed = True
            self.create_failed(placeholder, creator_data)

        else:
            failed = False
            self.create_succeed(placeholder, creator_instance)

        self.cleanup_placeholder(placeholder, failed)
    def create_failed(self, placeholder, creator_data):
        if hasattr(placeholder, "create_failed"):
            placeholder.create_failed(creator_data)

    def create_succeed(self, placeholder, creator_instance):
        if hasattr(placeholder, "create_succeed"):
            placeholder.create_succeed(creator_instance)

    def cleanup_placeholder(self, placeholder, failed):
        """Cleanup placeholder after load of single representation.

        Can be called multiple times during placeholder item populating and is
        called even if loading failed.

        Args:
            placeholder (PlaceholderItem): Item which was just used to load
                representation.
            failed (bool): Loading of representation failed.
        """

        pass

    def _before_instance_create(self, placeholder):
        """Can be overridden. Is called before instance is created."""

        pass


class LoadPlaceholderItem(PlaceholderItem):
    """PlaceholderItem for plugin which is loading representations.
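A hedged sketch of how a host-side plugin could wire the mixin in; the class name and identifier are hypothetical, and the two hook methods mirror the docstrings above:

class MyCreatePlaceholderPlugin(PlaceholderPlugin, PlaceholderCreateMixin):
    identifier = "studio.create.placeholder"  # hypothetical

    def get_placeholder_options(self, options=None):
        # reuse the unified attribute definitions from the mixin
        return self.get_create_plugin_options(options)

    def populate_placeholder(self, placeholder):
        # let the mixin create the publishable instance
        self.populate_create_placeholder(placeholder)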
@@ -1449,3 +1675,28 @@ class LoadPlaceholderItem(PlaceholderItem):

    def load_failed(self, representation):
        self._failed_representations.append(representation)


class CreatePlaceholderItem(PlaceholderItem):
    """PlaceholderItem for plugin which is creating publish instance.

    Connected to 'PlaceholderCreateMixin'.
    """

    def __init__(self, *args, **kwargs):
        super(CreatePlaceholderItem, self).__init__(*args, **kwargs)
        self._failed_created_publish_instances = []

    def get_errors(self):
        if not self._failed_created_publish_instances:
            return []
        message = (
            "Failed to create {} instance using Creator {}"
        ).format(
            len(self._failed_created_publish_instances),
            self.data["creator"]
        )
        return [message]

    def create_failed(self, creator_data):
        self._failed_created_publish_instances.append(creator_data)
@@ -4,8 +4,6 @@ import os
import shutil
import pyblish.api

from openpype.pipeline import legacy_io


class CleanUpFarm(pyblish.api.ContextPlugin):
    """Cleans up the staging directory after a successful publish.

@@ -23,8 +21,8 @@ class CleanUpFarm(pyblish.api.ContextPlugin):

    def process(self, context):
        # Get source host from which farm publishing was started
        src_host_name = legacy_io.Session.get("AVALON_APP")
        self.log.debug("Host name from session is {}".format(src_host_name))
        src_host_name = context.data["hostName"]
        self.log.debug("Host name from context is {}".format(src_host_name))
        # Skip process if is not in list of source hosts in which this
        # plugin should run
        if src_host_name not in self.allowed_hosts:
@@ -32,7 +32,6 @@ from openpype.client import (
    get_subsets,
    get_last_versions
)
from openpype.pipeline import legacy_io


class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):

@@ -49,7 +48,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
    def process(self, context):
        self.log.info("Collecting anatomy data for all instances.")

        project_name = legacy_io.active_project()
        project_name = context.data["projectName"]
        self.fill_missing_asset_docs(context, project_name)
        self.fill_instance_data_from_asset(context)
        self.fill_latest_versions(context, project_name)

@@ -73,7 +73,9 @@ class CollectComment(
    """

    label = "Collect Instance Comment"
    order = pyblish.api.CollectorOrder + 0.49
    # TODO change to CollectorOrder after Pyblish is purged
    # Pyblish allows modifying comment after collect phase
    order = pyblish.api.ExtractorOrder - 0.49

    def process(self, context):
        context_comment = self.cleanup_comment(context.data.get("comment"))
@@ -15,7 +11,11 @@ import pyblish.api


class CollectResourcesPath(pyblish.api.InstancePlugin):
    """Generate directory path where the files and resources will be stored"""
    """Generate directory path where the files and resources will be stored.

    Collects folder name and file name from files, if exists, for in-situ
    publishing.
    """

    label = "Collect Resources Path"
    order = pyblish.api.CollectorOrder + 0.495

@@ -1,10 +1,7 @@
import pyblish.api

from openpype.client import get_representations
from openpype.pipeline import (
    registered_host,
    legacy_io,
)
from openpype.pipeline import registered_host


class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):

@@ -44,7 +41,7 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
            for container in containers
        }

        project_name = legacy_io.active_project()
        project_name = context.data["projectName"]
        repre_docs = get_representations(
            project_name,
            representation_ids=repre_ids,
openpype/plugins/publish/collect_source_for_source.py (new file, 42 lines)

@@ -0,0 +1,42 @@
"""
Requires:
    instance -> currentFile
    instance -> source

Provides:
    instance -> originalBasename
    instance -> originalDirname
"""

import os

import pyblish.api


class CollectSourceForSource(pyblish.api.InstancePlugin):
    """Collects source location of file for instance.

    Used for 'source' template name which handles in place publishing.
    For this kind of publishing files are present with correct file name
    pattern and correct location.
    """

    label = "Collect Source"
    order = pyblish.api.CollectorOrder + 0.495

    def process(self, instance):
        # parse folder name and file name for online and source templates
        # currentFile comes from hosts workfiles
        # source comes from Publisher
        current_file = instance.data.get("currentFile")
        source = instance.data.get("source")
        source_file = current_file or source
        if source_file:
            self.log.debug("Parsing paths for {}".format(source_file))
            if not instance.data.get("originalBasename"):
                instance.data["originalBasename"] = \
                    os.path.basename(source_file)

            if not instance.data.get("originalDirname"):
                instance.data["originalDirname"] = \
                    os.path.dirname(source_file)
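A hedged illustration of the two values the collector derives (the path is hypothetical):

source_file = "C:/projects/PRJ/shots/sh010/render_v001.exr"
os.path.basename(source_file)  # originalBasename -> "render_v001.exr"
os.path.dirname(source_file)   # originalDirname  -> "C:/projects/PRJ/shots/sh010"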
@@ -1038,6 +1038,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
            # Set audio duration
            audio_in_args.append("-to {:0.10f}".format(audio_duration))

            # Ignore video data from audio input
            audio_in_args.append("-vn")

            # Add audio input path
            audio_in_args.append("-i {}".format(
                path_to_subprocess_arg(audio["filename"])
@@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
    order = pyblish.api.ExtractorOrder
    families = [
        "imagesequence", "render", "render2d", "prerender",
        "source", "clip", "take", "image"
        "source", "clip", "take", "online", "image"
    ]
    hosts = ["shell", "fusion", "resolve", "traypublisher", "substancepainter"]
    enabled = False

@@ -91,7 +91,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
        full_input_path = os.path.join(src_staging, input_file)
        self.log.info("input {}".format(full_input_path))
        filename = os.path.splitext(input_file)[0]
        jpeg_file = filename + ".jpg"
        jpeg_file = filename + "_thumb.jpg"
        full_output_path = os.path.join(dst_staging, jpeg_file)

        if oiio_supported:

@@ -100,7 +100,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):

        self.log.info("Thumbnail source: {}".format(thumbnail_source))
        src_basename = os.path.basename(thumbnail_source)
        dst_filename = os.path.splitext(src_basename)[0] + ".jpg"
        dst_filename = os.path.splitext(src_basename)[0] + "_thumb.jpg"
        full_output_path = os.path.join(dst_staging, dst_filename)

        if oiio_supported:
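The "_thumb" suffix keeps the generated thumbnail from colliding with a source file of the same base name; for example:

input_file = "sh010_review.jpg"
# old: thumbnail would also be "sh010_review.jpg" and could clash with the source
# new: "sh010_review_thumb.jpg"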
openpype/plugins/publish/help/validate_publish_dir.xml (new file, 31 lines)

@@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Source directory not collected</title>
        <description>
## Source directory not collected

Instance is marked for in place publishing. Its 'originalDirname' must be collected. Contact OP developer to modify collector.

        </description>
        <detail>
### __Detailed Info__ (optional)

In place publishing uses source directory and file name in resulting path and file name of published item. For this instance
not all required metadata were filled. This is not a recoverable error, unless the instance itself is removed.
Collector for this instance must be updated for the instance to be published.
        </detail>
    </error>
    <error id="not_in_dir">
        <title>Source file not in project dir</title>
        <description>
## Source file not in project dir

Path '{original_dirname}' not in project folder. Please publish from inside of project folder.

### How to repair?

Restart publish after you moved the source file into the project directory.
        </description>
    </error>
</root>
@@ -25,7 +25,6 @@ from openpype.client import (
)
from openpype.lib import source_hash
from openpype.lib.file_transaction import FileTransaction
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import (
    KnownPublishError,
    get_publish_template_name,

@@ -132,7 +131,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        "mvUsdComposition",
        "mvUsdOverride",
        "simpleUnrealTexture",
        "online"
        "online",
        "uasset"
    ]

    default_template_name = "publish"

@@ -244,7 +244,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        return filtered_repres

    def register(self, instance, file_transactions, filtered_repres):
        project_name = legacy_io.active_project()
        project_name = instance.context.data["projectName"]

        instance_stagingdir = instance.data.get("stagingDir")
        if not instance_stagingdir:

@@ -270,6 +270,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        )
        instance.data["versionEntity"] = version

        anatomy = instance.context.data["anatomy"]

        # Get existing representations (if any)
        existing_repres_by_name = {
            repre_doc["name"].lower(): repre_doc

@@ -303,13 +305,17 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        # .ma representation. Those destination paths are pre-defined, etc.
        # todo: should we move or simplify this logic?
        resource_destinations = set()
        for src, dst in instance.data.get("transfers", []):
            file_transactions.add(src, dst, mode=FileTransaction.MODE_COPY)
            resource_destinations.add(os.path.abspath(dst))

        for src, dst in instance.data.get("hardlinks", []):
            file_transactions.add(src, dst, mode=FileTransaction.MODE_HARDLINK)
            resource_destinations.add(os.path.abspath(dst))
        file_copy_modes = [
            ("transfers", FileTransaction.MODE_COPY),
            ("hardlinks", FileTransaction.MODE_HARDLINK)
        ]
        for files_type, copy_mode in file_copy_modes:
            for src, dst in instance.data.get(files_type, []):
                self._validate_path_in_project_roots(anatomy, dst)

                file_transactions.add(src, dst, mode=copy_mode)
                resource_destinations.add(os.path.abspath(dst))

        # Bulk write to the database
        # We write the subset and version to the database before the File

@@ -342,7 +348,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        # Compute the resource file infos once (files belonging to the
        # version instance instead of an individual representation) so
        # we can re-use those file infos per representation
        anatomy = instance.context.data["anatomy"]
        resource_file_infos = self.get_files_info(resource_destinations,
                                                  sites=sites,
                                                  anatomy=anatomy)

@@ -529,6 +534,20 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        template_data["representation"] = repre["name"]
        template_data["ext"] = repre["ext"]

        stagingdir = repre.get("stagingDir")
        if not stagingdir:
            # Fall back to instance staging dir if not explicitly
            # set for representation in the instance
            self.log.debug((
                "Representation uses instance staging dir: {}"
            ).format(instance_stagingdir))
            stagingdir = instance_stagingdir

        if not stagingdir:
            raise KnownPublishError(
                "No staging directory set for representation: {}".format(repre)
            )

        # optionals
        # retrieve additional anatomy data from representation if exists
        for key, anatomy_key in {

@@ -548,20 +567,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
            if value is not None:
                template_data[anatomy_key] = value

        stagingdir = repre.get("stagingDir")
        if not stagingdir:
            # Fall back to instance staging dir if not explicitly
            # set for representation in the instance
            self.log.debug((
                "Representation uses instance staging dir: {}"
            ).format(instance_stagingdir))
            stagingdir = instance_stagingdir

        if not stagingdir:
            raise KnownPublishError(
                "No staging directory set for representation: {}".format(repre)
            )

        self.log.debug("Anatomy template name: {}".format(template_name))
        anatomy = instance.context.data["anatomy"]
        publish_template_category = anatomy.templates[template_name]
@@ -569,6 +574,25 @@ class IntegrateAsset(pyblish.api.InstancePlugin):

        is_udim = bool(repre.get("udim"))

        # handle publish in place
        if "originalDirname" in template:
            # store as originalDirname only original value without project root
            # if instance collected originalDirname is present, it should be
            # used for all representations
            # from temp to final
            original_directory = (
                instance.data.get("originalDirname") or instance_stagingdir)

            _rootless = self.get_rootless_path(anatomy, original_directory)
            if _rootless == original_directory:
                raise KnownPublishError((
                    "Destination path '{}' ".format(original_directory) +
                    "must be in project dir"
                ))
            relative_path_start = _rootless.rfind('}') + 2
            without_root = _rootless[relative_path_start:]
            template_data["originalDirname"] = without_root

        is_sequence_representation = isinstance(files, (list, tuple))
        if is_sequence_representation:
            # Collection of files (sequence)
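A hedged illustration of the rootless-path trimming above (root and path hypothetical):

_rootless = "{root[work]}/PRJ/shots/sh010"
relative_path_start = _rootless.rfind('}') + 2   # index right past "{root[work]}/"
without_root = _rootless[relative_path_start:]   # "PRJ/shots/sh010"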
@@ -587,6 +611,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
            ))

            src_collection = src_collections[0]
            template_data["originalBasename"] = src_collection.head[:-1]
            destination_indexes = list(src_collection.indexes)
            # Use last frame for minimum padding
            # - that should cover both 'udim' and 'frame' minimum padding

@@ -671,12 +696,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
                raise KnownPublishError(
                    "This is a bug. Representation file name is full path"
                )

            template_data["originalBasename"], _ = os.path.splitext(fname)
            # Manage anatomy template data
            template_data.pop("frame", None)
            if is_udim:
                template_data["udim"] = repre["udim"][0]

            # Construct destination filepath from template
            anatomy_filled = anatomy.format(template_data)
            template_filled = anatomy_filled[template_name]["path"]

@@ -805,11 +829,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
        """Return anatomy template name to use for integration"""

        # Anatomy data is pre-filled by Collectors

        project_name = legacy_io.active_project()
        context = instance.context
        project_name = context.data["projectName"]

        # Task can be optional in anatomy data
        host_name = instance.context.data["hostName"]
        host_name = context.data["hostName"]
        anatomy_data = instance.data["anatomyData"]
        family = anatomy_data["family"]
        task_info = anatomy_data.get("task") or {}

@@ -820,7 +844,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
            family,
            task_name=task_info.get("name"),
            task_type=task_info.get("type"),
            project_settings=instance.context.data["project_settings"],
            project_settings=context.data["project_settings"],
            logger=self.log
        )
@@ -890,3 +914,21 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
            "hash": source_hash(path),
            "sites": sites
        }

    def _validate_path_in_project_roots(self, anatomy, file_path):
        """Checks if 'file_path' starts with any of the roots.

        Used to check that published path belongs to project, eg. we are not
        trying to publish to local only folder.
        Args:
            anatomy (Anatomy)
            file_path (str)
        Raises
            (KnownPublishError)
        """
        path = self.get_rootless_path(anatomy, file_path)
        if not path:
            raise KnownPublishError((
                "Destination path '{}' ".format(file_path) +
                "must be in project dir"
            ))
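For orientation, the underlying root check used throughout these changes looks like this (paths hypothetical):

success, rootless = anatomy.find_root_template_from_path("P:/projects/PRJ/a.exr")
# success -> True, rootless -> "{root[work]}/PRJ/a.exr"
success, rootless = anatomy.find_root_template_from_path("C:/tmp/a.exr")
# success -> False - the file is outside every configured project root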
@@ -123,7 +123,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        "staticMesh",
        "skeletalMesh",
        "mvLook",
        "mvUsd",
        "mvUsdComposition",
        "mvUsdOverride",
        "simpleUnrealTexture"

@@ -2,7 +2,6 @@ from pprint import pformat

import pyblish.api

from openpype.pipeline import legacy_io
from openpype.client import get_assets


@@ -28,10 +27,7 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):
        asset_and_parents = self.get_parents(context)
        self.log.debug("__ asset_and_parents: {}".format(asset_and_parents))

        if not legacy_io.Session:
            legacy_io.install()

        project_name = legacy_io.active_project()
        project_name = context.data["projectName"]
        db_assets = list(get_assets(
            project_name, fields=["name", "data.parents"]
        ))
openpype/plugins/publish/validate_publish_dir.py (new file, 74 lines)

@@ -0,0 +1,74 @@
import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline.publish import (
    PublishXmlValidationError,
    get_publish_template_name,
)


class ValidatePublishDir(pyblish.api.InstancePlugin):
    """Validates if 'publishDir' is a project directory

    'publishDir' is collected based on publish templates. In specific cases
    ('source' template) source folder of items is used as a 'publishDir', this
    validates if it is inside any project dir for the project
    (eg. files are not published from a local folder, inaccessible for studio).
    """

    order = ValidateContentsOrder
    label = "Validate publish dir"

    checked_template_names = ["source"]
    # validated instances might have interim family, needs to be mapped to final
    family_mapping = {
        "renderLayer": "render",
        "renderLocal": "render"
    }

    def process(self, instance):

        template_name = self._get_template_name_from_instance(instance)

        if template_name not in self.checked_template_names:
            return

        original_dirname = instance.data.get("originalDirname")
        if not original_dirname:
            raise PublishXmlValidationError(
                self,
                "Instance meant for in place publishing."
                " Its 'originalDirname' must be collected."
                " Contact OP developer to modify collector."
            )

        anatomy = instance.context.data["anatomy"]

        success, _ = anatomy.find_root_template_from_path(original_dirname)

        formatting_data = {
            "original_dirname": original_dirname,
        }
        msg = "Path '{}' not in project folder.".format(original_dirname) + \
            " Please publish from inside of project folder."
        if not success:
            raise PublishXmlValidationError(self, msg, key="not_in_dir",
                                            formatting_data=formatting_data)

    def _get_template_name_from_instance(self, instance):
        project_name = instance.context.data["projectName"]
        host_name = instance.context.data["hostName"]
        anatomy_data = instance.data["anatomyData"]
        family = anatomy_data["family"]
        family = self.family_mapping.get(family) or family
        task_info = anatomy_data.get("task") or {}

        return get_publish_template_name(
            project_name,
            host_name,
            family,
            task_name=task_info.get("name"),
            task_type=task_info.get("type"),
            project_settings=instance.context.data["project_settings"],
            logger=self.log
        )
@@ -53,11 +53,17 @@
            "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
            "path": "{@folder}/{@file}"
        },
        "source": {
            "folder": "{root[work]}/{originalDirname}",
            "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}",
            "path": "{@folder}/{@file}"
        },
        "__dynamic_keys_labels__": {
            "maya2unreal": "Maya to Unreal",
            "simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
            "simpleUnrealTexture": "Simple Unreal Texture",
            "online": "online"
            "online": "online",
            "source": "source"
        }
    }
}
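With the keys gathered by collect_source_for_source.py, the new 'source' template resolves back to the original location; a hedged walk-through (values hypothetical):

# originalDirname = "PRJ/shots/sh010" (rootless), originalBasename = "render_v001"
# ext = "exr", no frame/udim
# folder -> "{root[work]}/PRJ/shots/sh010"
# path   -> "{root[work]}/PRJ/shots/sh010/render_v001.exr"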
@@ -2,7 +2,7 @@
    "deadline_servers": [],
    "publish": {
        "CollectDefaultDeadlineServer": {
            "pass_mongo_url": false
            "pass_mongo_url": true
        },
        "CollectDeadlinePools": {
            "primary_pool": "",

@@ -25,6 +25,7 @@
            "active": true,
            "tile_assembler_plugin": "OpenPypeTileAssembler",
            "use_published": true,
            "import_reference": false,
            "asset_dependencies": true,
            "priority": 50,
            "tile_priority": 50,

@@ -130,6 +130,11 @@
            "key": "use_published",
            "label": "Use Published scene"
        },
        {
            "type": "boolean",
            "key": "import_reference",
            "label": "Use Scene with Imported Reference"
        },
        {
            "type": "boolean",
            "key": "asset_dependencies",

@@ -25,8 +25,15 @@
        {
            "key": "path",
            "label": "Path to template",
            "type": "text",
            "object_type": "text"
            "type": "path",
            "multiplatform": false,
            "multipath": false
        },
        {
            "key": "keep_placeholder",
            "label": "Keep placeholders",
            "type": "boolean",
            "default": true
        }
    ]
}
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.14.10-nightly.4"
__version__ = "3.14.11-nightly.1"
@@ -105,16 +105,19 @@ class ScrollMessageBox(QtWidgets.QDialog):
        content_widget = QtWidgets.QWidget(self)
        scroll_widget.setWidget(content_widget)

        max_len = 0
        message_len = 0
        content_layout = QtWidgets.QVBoxLayout(content_widget)
        for message in messages:
            label_widget = QtWidgets.QLabel(message, content_widget)
            content_layout.addWidget(label_widget)
            max_len = max(max_len, len(message))
            message_len = max(message_len, len(message))

        # guess size of scrollable area
        max_width = QtWidgets.QApplication.desktop().availableGeometry().width
        scroll_widget.setMinimumWidth(min(max_width, max_len * 6))
        desktop = QtWidgets.QApplication.desktop()
        max_width = desktop.availableGeometry().width()
        scroll_widget.setMinimumWidth(
            min(max_width, message_len * 6)
        )
        layout.addWidget(scroll_widget)

        if not cancelable:  # if no specific buttons OK only
@@ -43,4 +43,5 @@ $openpype_root = (Get-Item $script_dir).parent.FullName

Set-Location $openpype_root/website

& yarn run start
& yarn install
& yarn start
@@ -94,6 +94,16 @@ Few keys also have Capitalized and UPPERCASE format. Values will be modified acc
Here you can find review {review_filepath}
```

##### Dynamic message for artists
If an artist uses a host with the implemented Publisher (the new publishing UI, implemented in Tray Publisher, Adobe products etc.), it is possible for
them to add an additional message (for example a notification for specific users; the artist must provide a proper user id prefixed with '@').
The additional message will be sent only if at least one profile, eg. one target channel, is configured.
All available template keys (see above) can be used here as placeholders too.

#### User or group notifications
A message template or dynamic data may contain a user or group notification; it must be in the format @artist.name, '@John Doe' or "@admin group" for a display name containing a space.
If a value prefixed with @ cannot be resolved and no Slack user is found, the message will contain the same value (not translated by Slack into a link and a proper mention).

#### Message retention
Currently no purging of old messages is implemented in OpenPype. Slack admins should set their own retention of messages and files per channel
(see https://slack.com/help/articles/203457187-Customize-message-and-file-retention-policies).
|
|||
integrity sha512-NM8/P9n3XjXhIZn1lLhkFaACTOURQXjWhV4BA/RnOv8xvgqtqpAX9IO4mRQxSx1Rlo4tqzeqb0sOlruaOy3dug==
|
||||
|
||||
json5@^1.0.1:
|
||||
version "1.0.1"
|
||||
resolved "https://registry.yarnpkg.com/json5/-/json5-1.0.1.tgz#779fb0018604fa854eacbf6252180d83543e3dbe"
|
||||
integrity sha512-aKS4WQjPenRxiQsC93MNfjx+nbF4PAdYzmd/1JIj8HYzqfbu86beTuNgXDzPknWk0n0uARlyewZo4s++ES36Ow==
|
||||
version "1.0.2"
|
||||
resolved "https://registry.yarnpkg.com/json5/-/json5-1.0.2.tgz#63d98d60f21b313b77c4d6da18bfa69d80e1d593"
|
||||
integrity sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA==
|
||||
dependencies:
|
||||
minimist "^1.2.0"
|
||||
|
||||
|
|
@ -5154,16 +5154,11 @@ minimatch@^3.0.4:
|
|||
dependencies:
|
||||
brace-expansion "^1.1.7"
|
||||
|
||||
minimist@^1.2.0:
|
||||
minimist@^1.2.0, minimist@^1.2.5:
|
||||
version "1.2.7"
|
||||
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.7.tgz#daa1c4d91f507390437c6a8bc01078e7000c4d18"
|
||||
integrity sha512-bzfL1YUZsP41gmu/qjrEk0Q6i2ix/cVeAhbCbqH9u3zYutS1cLg00qhrD0M2MVdCcx4Sc0UpP2eBWo9rotpq6g==
|
||||
|
||||
minimist@^1.2.5:
|
||||
version "1.2.6"
|
||||
resolved "https://registry.yarnpkg.com/minimist/-/minimist-1.2.6.tgz#8637a5b759ea0d6e98702cfb3a9283323c93af44"
|
||||
integrity sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==
|
||||
|
||||
mkdirp@^0.5.5:
|
||||
version "0.5.5"
|
||||
resolved "https://registry.yarnpkg.com/mkdirp/-/mkdirp-0.5.5.tgz#d91cefd62d1436ca0f41620e251288d420099def"
|
||||
|
|
|
|||