🔀 merge develop in

This commit is contained in:
Ondřej Samohel 2022-10-14 17:41:03 +02:00
commit 5bfd803609
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
147 changed files with 4488 additions and 2372 deletions

@@ -1,17 +1,41 @@
# Changelog
## [3.14.4-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.4-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.3...HEAD)
**🚀 Enhancements**
- General: Set root environments before DCC launch [\#3947](https://github.com/pypeclub/OpenPype/pull/3947)
- Refactor: changed legacy way to update database for Hero version integrate [\#3941](https://github.com/pypeclub/OpenPype/pull/3941)
- Maya: Moved plugin from global to maya [\#3939](https://github.com/pypeclub/OpenPype/pull/3939)
- Fusion: Implement Alembic and FBX mesh loader [\#3927](https://github.com/pypeclub/OpenPype/pull/3927)
- Publisher: Instances can be marked as stored [\#3846](https://github.com/pypeclub/OpenPype/pull/3846)
**🐛 Bug fixes**
- Maya: Deadline OutputFilePath hack regression for Renderman [\#3950](https://github.com/pypeclub/OpenPype/pull/3950)
- Houdini: Fix validate workfile paths for non-parm file references [\#3948](https://github.com/pypeclub/OpenPype/pull/3948)
- Photoshop: missed sync published version of workfile with workfile [\#3946](https://github.com/pypeclub/OpenPype/pull/3946)
- Maya: fix regression of Renderman Deadline hack [\#3943](https://github.com/pypeclub/OpenPype/pull/3943)
- Tray: Change order of attribute changes [\#3938](https://github.com/pypeclub/OpenPype/pull/3938)
- AttributeDefs: Fix crashing multivalue of files widget [\#3937](https://github.com/pypeclub/OpenPype/pull/3937)
- General: Fix links query on hero version [\#3900](https://github.com/pypeclub/OpenPype/pull/3900)
- Publisher: Files Drag n Drop cleanup [\#3888](https://github.com/pypeclub/OpenPype/pull/3888)
- Maya: Render settings validation attribute check tweak logging [\#3821](https://github.com/pypeclub/OpenPype/pull/3821)
**🔀 Refactored code**
- General: Direct settings imports [\#3934](https://github.com/pypeclub/OpenPype/pull/3934)
- General: import 'Logger' from 'openpype.lib' [\#3926](https://github.com/pypeclub/OpenPype/pull/3926)
**Merged pull requests:**
- Maya + Yeti: Load Yeti Cache fix frame number recognition [\#3942](https://github.com/pypeclub/OpenPype/pull/3942)
- Fusion: Implement callbacks to Fusion's event system thread [\#3928](https://github.com/pypeclub/OpenPype/pull/3928)
- Photoshop: create single frame image in Ftrack as review [\#3908](https://github.com/pypeclub/OpenPype/pull/3908)
- Maya: Warn correctly about nodes in render instance with unexpected names [\#3816](https://github.com/pypeclub/OpenPype/pull/3816)
## [3.14.3](https://github.com/pypeclub/OpenPype/tree/3.14.3) (2022-10-03)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.3-nightly.7...3.14.3)
@@ -28,6 +52,7 @@
- Flame: make migratable projects after creation [\#3860](https://github.com/pypeclub/OpenPype/pull/3860)
- Photoshop: synchronize image version with workfile [\#3854](https://github.com/pypeclub/OpenPype/pull/3854)
- General: Transcoding handle float2 attr type [\#3849](https://github.com/pypeclub/OpenPype/pull/3849)
- General: Simple script for getting license information about used packages [\#3843](https://github.com/pypeclub/OpenPype/pull/3843)
- General: Workfile template build enhancements [\#3838](https://github.com/pypeclub/OpenPype/pull/3838)
- General: lock task workfiles when they are working on [\#3810](https://github.com/pypeclub/OpenPype/pull/3810)
@@ -44,7 +69,6 @@
- Tray Publisher: skip plugin if otioTimeline is missing [\#3856](https://github.com/pypeclub/OpenPype/pull/3856)
- Flame: retimed attributes are integrated with settings [\#3855](https://github.com/pypeclub/OpenPype/pull/3855)
- Maya: Extract Playblast fix textures + labelize viewport show settings [\#3852](https://github.com/pypeclub/OpenPype/pull/3852)
- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
**🔀 Refactored code**
@@ -55,6 +79,7 @@
- Houdini: Use new Extractor location [\#3894](https://github.com/pypeclub/OpenPype/pull/3894)
- Harmony: Use new Extractor location [\#3893](https://github.com/pypeclub/OpenPype/pull/3893)
- Hiero: Use new Extractor location [\#3851](https://github.com/pypeclub/OpenPype/pull/3851)
- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819)
- Nuke: Use new Extractor location [\#3799](https://github.com/pypeclub/OpenPype/pull/3799)
**Merged pull requests:**
@@ -73,33 +98,15 @@
- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
**🐛 Bug fixes**
- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828)
- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822)
- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
**🔀 Refactored code**
- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819)
- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
**Merged pull requests:**
- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)

@@ -11,7 +11,6 @@ from .lib import (
PypeLogger,
Logger,
Anatomy,
config,
execute,
run_subprocess,
version_up,
@@ -72,7 +71,6 @@ __all__ = [
"PypeLogger",
"Logger",
"Anatomy",
"config",
"execute",
"get_default_components",
"ApplicationManager",

@@ -2,6 +2,7 @@ from .mongo import get_project_connection
from .entities import (
get_assets,
get_asset_by_id,
get_version_by_id,
get_representation_by_id,
convert_id,
)
@@ -127,12 +128,20 @@ def get_linked_representation_id(
if not version_id:
return []
version_doc = get_version_by_id(
project_name, version_id, fields=["type", "version_id"]
)
if version_doc["type"] == "hero_version":
version_id = version_doc["version_id"]
if max_depth is None:
max_depth = 0
match = {
"_id": version_id,
"type": {"$in": ["version", "hero_version"]}
# Links are not stored to hero versions at this moment so filter
# is limited to just versions
"type": "version"
}
graph_lookup = {
@@ -187,7 +196,7 @@ def _process_referenced_pipeline_result(result, link_type):
referenced_version_ids = set()
correctly_linked_ids = set()
for item in result:
input_links = item["data"].get("inputLinks")
input_links = item.get("data", {}).get("inputLinks")
if not input_links:
continue
@@ -203,7 +212,7 @@ def _process_referenced_pipeline_result(result, link_type):
continue
for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
output_links = output["data"].get("inputLinks")
output_links = output.get("data", {}).get("inputLinks")
if not output_links:
continue
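The two changes above can be illustrated outside MongoDB with a small standalone sketch; the dicts and the helper name below are illustrative only, not the OpenPype client API. A hero version document only points at its source version, so it must be resolved to a concrete version id before querying links, and documents may lack a `data` key entirely, hence the defensive `.get("data", {})`.

```python
# Sketch of the hero-version handling added above: resolve a hero version
# to the concrete version it points at. Plain dicts stand in for documents.
def resolve_version_id(version_doc):
    """Return the concrete version id, following a hero version's pointer."""
    if version_doc.get("type") == "hero_version":
        return version_doc["version_id"]
    return version_doc["_id"]

hero = {"_id": "h1", "type": "hero_version", "version_id": "v1"}
normal = {"_id": "v2", "type": "version"}
print(resolve_version_id(hero), resolve_version_id(normal))  # v1 v2
```

With `.get("data", {}).get("inputLinks")`, a document missing `data` now simply yields no links instead of raising `KeyError`.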

@@ -23,6 +23,7 @@ CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0"
@@ -162,6 +163,34 @@ def new_version_doc(version, subset_id, data=None, entity_id=None):
}
def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None):
"""Create skeleton data of hero version document.
Args:
version_id (ObjectId): Is considered as unique identifier of version
under subset.
subset_id (Union[str, ObjectId]): Id of parent subset.
data (Dict[str, Any]): Version document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of version document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_HERO_VERSION_SCHEMA,
"type": "hero_version",
"version_id": version_id,
"parent": subset_id,
"data": data
}
def new_representation_doc(
name, version_id, context, data=None, entity_id=None
):
@@ -293,6 +322,20 @@ def prepare_version_update_data(old_doc, new_doc, replace=True):
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_hero_version_update_data(old_doc, new_doc, replace=True):
"""Compare two hero version documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_representation_update_data(old_doc, new_doc, replace=True):
"""Compare two representation documents and prepare update data.

@@ -312,6 +312,8 @@ class IPublishHost:
required = [
"get_context_data",
"update_context_data",
"get_context_title",
"get_current_context",
]
missing = []
for name in required:
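The interface check above now also demands `get_context_title` and `get_current_context`. A hypothetical minimal host satisfying it can be sketched as follows; the class name and all return values are placeholders, not the OpenPype API.

```python
# Illustrative host implementing all four methods the interface check
# above requires. Bodies are stubs; real hosts talk to their DCC.
class MinimalPublishHost:
    def get_context_data(self):
        return {}

    def update_context_data(self, data, changes):
        pass

    def get_context_title(self):
        return "project/asset/task"

    def get_current_context(self):
        return {
            "project_name": "demo",
            "asset_name": "shot010",
            "task_name": "comp",
        }

required = [
    "get_context_data",
    "update_context_data",
    "get_context_title",
    "get_current_context",
]
missing = [name for name in required if not hasattr(MinimalPublishHost, name)]
print(missing)  # []
```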

@@ -3,7 +3,7 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder

@@ -3,14 +3,15 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
from openpype.pipeline.publish import ValidateContentsOrder
import openpype.hosts.blender.api.action
class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
"""Validate that the current mesh has UV's."""
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
hosts = ["blender"]
families = ["model"]
category = "geometry"

@@ -3,14 +3,15 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
from openpype.pipeline.publish import ValidateContentsOrder
import openpype.hosts.blender.api.action
class ValidateMeshNoNegativeScale(pyblish.api.Validator):
"""Ensure that meshes don't have a negative scale."""
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
hosts = ["blender"]
families = ["model"]
category = "geometry"

@@ -3,7 +3,7 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder

@@ -4,7 +4,7 @@ import mathutils
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder

@@ -6,9 +6,9 @@ from xml.etree import ElementTree as ET
from Qt import QtCore, QtWidgets
import openpype.api as openpype
import qargparse
from openpype import style
from openpype.settings import get_current_project_settings
from openpype.lib import Logger
from openpype.pipeline import LegacyCreator, LoaderPlugin
@@ -306,7 +306,7 @@ class Creator(LegacyCreator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
self.presets = openpype.get_current_project_settings()[
self.presets = get_current_project_settings()[
"flame"]["create"].get(self.__class__.__name__, {})
# adding basic current context flame objects

@@ -42,17 +42,9 @@ class FlamePrelaunch(PreLaunchHook):
volume_name = _env.get("FLAME_WIRETAP_VOLUME")
# get image io
project_anatomy = self.data["anatomy"]
project_settings = self.data["project_settings"]
# make sure anatomy settings are having flame key
if not project_anatomy["imageio"].get("flame"):
raise ApplicationLaunchFailed((
"Anatomy project settings are missing `flame` key. "
"Please make sure you remove project overides on "
"Anatomy Image io")
)
imageio_flame = project_anatomy["imageio"]["flame"]
imageio_flame = project_settings["flame"]["imageio"]
# get user name and host name
user_name = get_openpype_username()

@@ -3,8 +3,6 @@ import sys
import re
import contextlib
from Qt import QtGui
from openpype.lib import Logger
from openpype.client import (
get_asset_by_name,
@@ -92,7 +90,7 @@ def set_asset_resolution():
})
def validate_comp_prefs(comp=None):
def validate_comp_prefs(comp=None, force_repair=False):
"""Validate current comp defaults with asset settings.
Validates fps, resolutionWidth, resolutionHeight, aspectRatio.
@@ -135,21 +133,22 @@ def validate_comp_prefs(comp=None):
asset_value = asset_data[key]
comp_value = comp_frame_format_prefs.get(comp_key)
if asset_value != comp_value:
# todo: Actually show dialog to user instead of just logging
log.warning(
"Comp {pref} {value} does not match asset "
"'{asset_name}' {pref} {asset_value}".format(
pref=label,
value=comp_value,
asset_name=asset_doc["name"],
asset_value=asset_value)
)
invalid_msg = "{} {} should be {}".format(label,
comp_value,
asset_value)
invalid.append(invalid_msg)
if not force_repair:
# Do not log warning if we force repair anyway
log.warning(
"Comp {pref} {value} does not match asset "
"'{asset_name}' {pref} {asset_value}".format(
pref=label,
value=comp_value,
asset_name=asset_doc["name"],
asset_value=asset_value)
)
if invalid:
def _on_repair():
@@ -160,6 +159,11 @@ def validate_comp_prefs(comp=None):
attributes[comp_key_full] = value
comp.SetPrefs(attributes)
if force_repair:
log.info("Applying default Comp preferences..")
_on_repair()
return
from . import menu
from openpype.widgets import popup
from openpype.style import load_stylesheet
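The `force_repair` flow introduced above can be sketched in isolation: compare current values against asset values, then either warn or silently repair. The function and dicts below are illustrative stand-ins, not the Fusion API or OpenPype helpers.

```python
# Standalone sketch of the force_repair branch added above.
def validate_prefs(comp_prefs, asset_prefs, force_repair=False):
    # Collect keys whose current value differs from the asset value
    invalid = {
        key: value
        for key, value in asset_prefs.items()
        if comp_prefs.get(key) != value
    }
    if not invalid:
        return comp_prefs
    if force_repair:
        # Repair immediately without warning, as the new code path does
        comp_prefs.update(invalid)
        return comp_prefs
    for key, value in invalid.items():
        print("Comp %s %s does not match asset %s"
              % (key, comp_prefs.get(key), value))
    return comp_prefs

prefs = validate_prefs({"fps": 25.0}, {"fps": 24.0}, force_repair=True)
print(prefs)  # {'fps': 24.0}
```

This mirrors why the warning is skipped when `force_repair` is set: the mismatch is fixed anyway, so logging it would only be noise.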

@@ -16,6 +16,7 @@ from openpype.hosts.fusion.api.lib import (
from openpype.pipeline import legacy_io
from openpype.resources import get_openpype_icon_filepath
from .pipeline import FusionEventHandler
from .pulse import FusionPulse
self = sys.modules[__name__]
@@ -119,6 +120,10 @@ class OpenPypeMenu(QtWidgets.QWidget):
self._pulse = FusionPulse(parent=self)
self._pulse.start()
# Detect Fusion events as OpenPype events
self._event_handler = FusionEventHandler(parent=self)
self._event_handler.start()
def on_task_changed(self):
# Update current context label
label = legacy_io.Session["AVALON_ASSET"]

@@ -2,13 +2,16 @@
Basic avalon integration
"""
import os
import sys
import logging
import pyblish.api
from Qt import QtCore
from openpype.lib import (
Logger,
register_event_callback
register_event_callback,
emit_event
)
from openpype.pipeline import (
register_loader_plugin_path,
@@ -39,12 +42,13 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
class CompLogHandler(logging.Handler):
class FusionLogHandler(logging.Handler):
# Keep a reference to fusion's Print function (Remote Object)
_print = getattr(sys.modules["__main__"], "fusion").Print
def emit(self, record):
entry = self.format(record)
comp = get_current_comp()
if comp:
comp.Print(entry)
self._print(entry)
def install():
@@ -67,7 +71,7 @@ def install():
# Attach default logging handler that prints to active comp
logger = logging.getLogger()
formatter = logging.Formatter(fmt="%(message)s\n")
handler = CompLogHandler()
handler = FusionLogHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
@@ -84,10 +88,10 @@ def install():
"instanceToggled", on_pyblish_instance_toggled
)
# Fusion integration currently does not attach to direct callbacks of
# the application. So we use workfile callbacks to allow similar behavior
# on save and open
register_event_callback("workfile.open.after", on_after_open)
# Register events
register_event_callback("open", on_after_open)
register_event_callback("save", on_save)
register_event_callback("new", on_new)
def uninstall():
@@ -137,8 +141,18 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
tool.SetAttrs({"TOOLB_PassThrough": passthrough})
def on_after_open(_event):
comp = get_current_comp()
def on_new(event):
comp = event["Rets"]["comp"]
validate_comp_prefs(comp, force_repair=True)
def on_save(event):
comp = event["sender"]
validate_comp_prefs(comp)
def on_after_open(event):
comp = event["sender"]
validate_comp_prefs(comp)
if any_outdated_containers():
@@ -182,7 +196,7 @@ def ls():
"""
comp = get_current_comp()
tools = comp.GetToolList(False, "Loader").values()
tools = comp.GetToolList(False).values()
for tool in tools:
container = parse_container(tool)
@@ -254,3 +268,114 @@ def parse_container(tool):
return container
class FusionEventThread(QtCore.QThread):
"""QThread which will periodically ping Fusion app for any events.
The fusion.UIManager must be set up to be notified of events before they'll
be reported by this thread, for example:
fusion.UIManager.AddNotify("Comp_Save", None)
"""
on_event = QtCore.Signal(dict)
def run(self):
app = getattr(sys.modules["__main__"], "app", None)
if app is None:
# No Fusion app found
return
# As optimization store the GetEvent method directly because every
# getattr of UIManager.GetEvent tries to resolve the Remote Function
# through the PyRemoteObject
get_event = app.UIManager.GetEvent
delay = int(os.environ.get("OPENPYPE_FUSION_CALLBACK_INTERVAL", 1000))
while True:
if self.isInterruptionRequested():
return
# Process all events that have been queued up until now
while True:
event = get_event(False)
if not event:
break
self.on_event.emit(event)
# Wait some time before processing events again
# to not keep blocking the UI
self.msleep(delay)
class FusionEventHandler(QtCore.QObject):
"""Emits OpenPype events based on Fusion events captured in a QThread.
This will emit the following OpenPype events based on Fusion actions:
save: Comp_Save, Comp_SaveAs
open: Comp_Opened
new: Comp_New
To use this you can attach it to your Qt UI so it runs in the background.
E.g.
>>> handler = FusionEventHandler(parent=window)
>>> handler.start()
"""
ACTION_IDS = [
"Comp_Save",
"Comp_SaveAs",
"Comp_New",
"Comp_Opened"
]
def __init__(self, parent=None):
super(FusionEventHandler, self).__init__(parent=parent)
# Set up Fusion event callbacks
fusion = getattr(sys.modules["__main__"], "fusion", None)
ui = fusion.UIManager
# Add notifications for the ones we want to listen to
notifiers = []
for action_id in self.ACTION_IDS:
notifier = ui.AddNotify(action_id, None)
notifiers.append(notifier)
# TODO: Not entirely sure whether these must be kept to avoid
# garbage collection
self._notifiers = notifiers
self._event_thread = FusionEventThread(parent=self)
self._event_thread.on_event.connect(self._on_event)
def start(self):
self._event_thread.start()
def stop(self):
# QThread has no stop() slot; request interruption so the loop in
# run() notices it and exits, then wait for the thread to finish
self._event_thread.requestInterruption()
self._event_thread.wait()
def _on_event(self, event):
"""Handle Fusion events to emit OpenPype events"""
if not event:
return
what = event["what"]
# Comp Save
if what in {"Comp_Save", "Comp_SaveAs"}:
if not event["Rets"].get("success"):
# If the Save action is cancelled it will still emit an
# event but with "success": False so we ignore those cases
return
# Comp was saved
emit_event("save", data=event)
return
# Comp New
elif what in {"Comp_New"}:
emit_event("new", data=event)
# Comp Opened
elif what in {"Comp_Opened"}:
emit_event("open", data=event)
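The event-mapping logic in `_on_event` above can be extracted into a pure function for illustration, decoupled from Qt and the Fusion remote objects; the helper name below is hypothetical.

```python
# Sketch of the Fusion-event -> OpenPype-event mapping implemented above.
def map_fusion_event(event):
    what = event.get("what")
    if what in {"Comp_Save", "Comp_SaveAs"}:
        # Cancelled saves still emit an event, but with success == False
        if not event.get("Rets", {}).get("success"):
            return None
        return "save"
    if what == "Comp_New":
        return "new"
    if what == "Comp_Opened":
        return "open"
    return None

print(map_fusion_event({"what": "Comp_SaveAs", "Rets": {"success": True}}))  # save
print(map_fusion_event({"what": "Comp_Save", "Rets": {"success": False}}))  # None
```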

@@ -19,9 +19,12 @@ class PulseThread(QtCore.QThread):
while True:
if self.isInterruptionRequested():
return
try:
app.Test()
except Exception:
# We don't need to call Test because PyRemoteObject of the app
# will actually fail to even resolve the Test function if it has
# gone down. So we can actually already just check by confirming
# the method is still getting resolved. (Optimization)
if app.Test is None:
self.no_response.emit()
self.msleep(interval)
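The liveness check above swaps an exception-handled remote call for a cheaper attribute-resolution test. The pattern can be illustrated with a fake stand-in for Fusion's PyRemoteObject; `FakeApp` and `is_responding` are illustrative names only.

```python
# Illustration of the optimization above: a dead remote object resolves
# every attribute to None, so checking resolution avoids a remote call.
class FakeApp:
    def __init__(self, alive=True):
        self._alive = alive

    def __getattr__(self, name):
        # Mimic PyRemoteObject: resolve methods only while "alive"
        return (lambda: True) if self._alive else None

def is_responding(app):
    return app.Test is not None

print(is_responding(FakeApp(alive=True)), is_responding(FakeApp(alive=False)))
```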

@@ -15,13 +15,7 @@ class FusionPreLaunchOCIO(PreLaunchHook):
project_settings = self.data["project_settings"]
# make sure anatomy settings are having flame key
imageio_fusion = project_settings.get("fusion", {}).get("imageio")
if not imageio_fusion:
raise ApplicationLaunchFailed((
"Anatomy project settings are missing `fusion` key. "
"Please make sure you remove project overrides on "
"Anatomy ImageIO")
)
imageio_fusion = project_settings["fusion"]["imageio"]
ocio = imageio_fusion.get("ocio")
enabled = ocio.get("enabled", False)

@@ -0,0 +1,70 @@
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
)
class FusionLoadAlembicMesh(load.LoaderPlugin):
"""Load Alembic mesh into Fusion"""
families = ["pointcache", "model"]
representations = ["abc"]
label = "Load alembic mesh"
order = -10
icon = "code-fork"
color = "orange"
tool_type = "SurfaceAlembicMesh"
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create tool"):
path = self.fname
args = (-32768, -32768)
tool = comp.AddTool(self.tool_type, *args)
tool["Filename"] = path
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
"""Update Alembic path"""
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
path = get_representation_path(representation)
with comp_lock_and_undo_chunk(comp, "Update tool"):
tool["Filename"] = path
# Update the imprinted representation
tool.SetData("avalon.representation", str(representation["_id"]))
def remove(self, container):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
with comp_lock_and_undo_chunk(comp, "Remove tool"):
tool.Delete()

@@ -0,0 +1,71 @@
from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.hosts.fusion.api import (
imprint_container,
get_current_comp,
comp_lock_and_undo_chunk
)
class FusionLoadFBXMesh(load.LoaderPlugin):
"""Load FBX mesh into Fusion"""
families = ["*"]
representations = ["fbx"]
label = "Load FBX mesh"
order = -10
icon = "code-fork"
color = "orange"
tool_type = "SurfaceFBXMesh"
def load(self, context, name, namespace, data):
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
# Create the Loader with the filename path set
comp = get_current_comp()
with comp_lock_and_undo_chunk(comp, "Create tool"):
path = self.fname
args = (-32768, -32768)
tool = comp.AddTool(self.tool_type, *args)
tool["ImportFile"] = path
imprint_container(tool,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
"""Update path"""
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
path = get_representation_path(representation)
with comp_lock_and_undo_chunk(comp, "Update tool"):
tool["ImportFile"] = path
# Update the imprinted representation
tool.SetData("avalon.representation", str(representation["_id"]))
def remove(self, container):
tool = container["_tool"]
assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
comp = tool.Comp()
with comp_lock_and_undo_chunk(comp, "Remove tool"):
tool.Delete()

@@ -14,7 +14,7 @@ import hiero
from Qt import QtWidgets
from openpype.client import get_project
from openpype.settings import get_anatomy_settings
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.pipeline.load import filter_containers
from openpype.lib import Logger
@@ -878,8 +877,7 @@ def apply_colorspace_project():
project.close()
# get presets for hiero
imageio = get_anatomy_settings(
project_name)["imageio"].get("hiero", None)
imageio = get_project_settings(project_name)["hiero"]["imageio"]
presets = imageio.get("workfile")
# save the workfile as subversion "comment:_colorspaceChange"
@@ -932,8 +931,7 @@ def apply_colorspace_clips():
clips = project.clips()
# get presets for hiero
imageio = get_anatomy_settings(
project_name)["imageio"].get("hiero", None)
imageio = get_project_settings(project_name)["hiero"]["imageio"]
from pprint import pprint
presets = imageio.get("regexInputs", {}).get("inputs", {})

@@ -8,7 +8,7 @@ import hiero
from Qt import QtWidgets, QtCore
import qargparse
import openpype.api as openpype
from openpype.settings import get_current_project_settings
from openpype.lib import Logger
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
@@ -606,7 +606,7 @@ class Creator(LegacyCreator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
import openpype.hosts.hiero.api as phiero
self.presets = openpype.get_current_project_settings()[
self.presets = get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
# adding basic current context resolve objects

@@ -73,7 +73,7 @@ class ImageLoader(load.LoaderPlugin):
# Imprint it manually
data = {
"schema": "avalon-core:container-2.0",
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": node_name,
"namespace": namespace,

@@ -43,7 +43,7 @@ class USDSublayerLoader(load.LoaderPlugin):
# Imprint it manually
data = {
"schema": "avalon-core:container-2.0",
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": node_name,
"namespace": namespace,

@@ -43,7 +43,7 @@ class USDReferenceLoader(load.LoaderPlugin):
# Imprint it manually
data = {
"schema": "avalon-core:container-2.0",
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": node_name,
"namespace": namespace,

@@ -48,7 +48,6 @@ class ValidateWorkfilePaths(
if not param:
continue
# skip nodes we are not interested in
cls.log.debug(param)
if param.node().type().name() not in cls.node_types:
continue

@@ -28,13 +28,16 @@ class MayaAddon(OpenPypeModule, IHostAddon):
env["PYTHONPATH"] = os.pathsep.join(new_python_paths)
# Set default values if are not already set via settings
defaults = {
"OPENPYPE_LOG_NO_COLORS": "Yes"
# Set default environments
envs = {
"OPENPYPE_LOG_NO_COLORS": "Yes",
# For python module 'qtpy'
"QT_API": "PySide2",
# For python module 'Qt'
"QT_PREFERRED_BINDING": "PySide2"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
for key, value in envs.items():
env[key] = value
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:

@@ -8,7 +8,7 @@ from functools import partial
import maya.cmds as cmds
import maya.mel as mel
from openpype.api import resources
from openpype import resources
from openpype.tools.utils import host_tools
from .lib import get_main_window

@@ -23,7 +23,7 @@ from openpype.client import (
get_last_versions,
get_representation_by_name
)
from openpype.api import get_anatomy_settings
from openpype.settings import get_project_settings
from openpype.pipeline import (
legacy_io,
discover_loader_plugins,
@@ -2459,182 +2459,120 @@ def bake_to_world_space(nodes,
def load_capture_preset(data=None):
"""Convert OpenPype Extract Playblast settings to `capture` arguments
Input data is the settings from:
`project_settings/maya/publish/ExtractPlayblast/capture_preset`
Args:
data (dict): Capture preset settings from OpenPype settings
Returns:
dict: `capture.capture` compatible keyword arguments
"""
import capture
preset = data
options = dict()
viewport_options = dict()
viewport2_options = dict()
camera_options = dict()
# CODEC
id = 'Codec'
for key in preset[id]:
options[str(key)] = preset[id][key]
# Straight key-value match from settings to capture arguments
options.update(data["Codec"])
options.update(data["Generic"])
options.update(data["Resolution"])
# GENERIC
id = 'Generic'
for key in preset[id]:
options[str(key)] = preset[id][key]
# RESOLUTION
id = 'Resolution'
options['height'] = preset[id]['height']
options['width'] = preset[id]['width']
camera_options.update(data['Camera Options'])
viewport_options.update(data["Renderer"])
# DISPLAY OPTIONS
id = 'Display Options'
     disp_options = {}
-    for key in preset[id]:
+    for key, value in data['Display Options'].items():
         if key.startswith('background'):
-            disp_options[key] = preset['Display Options'][key]
-            if len(disp_options[key]) == 4:
-                disp_options[key][0] = (float(disp_options[key][0])/255)
-                disp_options[key][1] = (float(disp_options[key][1])/255)
-                disp_options[key][2] = (float(disp_options[key][2])/255)
-                disp_options[key].pop()
+            # Convert background, backgroundTop, backgroundBottom colors
+            if len(value) == 4:
+                # Ignore alpha + convert RGB to float
+                value = [
+                    float(value[0]) / 255,
+                    float(value[1]) / 255,
+                    float(value[2]) / 255
+                ]
+            disp_options[key] = value
         else:
             disp_options['displayGradient'] = True
     options['display_options'] = disp_options
     # VIEWPORT OPTIONS
-    temp_options = {}
-    id = 'Renderer'
-    for key in preset[id]:
-        temp_options[str(key)] = preset[id][key]
+    # Viewport Options has a mixture of Viewport2 Options and Viewport Options
+    # to pass along to capture. So we'll need to differentiate between the two
+    VIEWPORT2_OPTIONS = {
+        "textureMaxResolution",
+        "renderDepthOfField",
+        "ssaoEnable",
+        "ssaoSamples",
+        "ssaoAmount",
+        "ssaoRadius",
+        "ssaoFilterRadius",
+        "hwFogStart",
+        "hwFogEnd",
+        "hwFogAlpha",
+        "hwFogFalloff",
+        "hwFogColorR",
+        "hwFogColorG",
+        "hwFogColorB",
+        "hwFogDensity",
+        "motionBlurEnable",
+        "motionBlurSampleCount",
+        "motionBlurShutterOpenFraction",
+        "lineAAEnable"
+    }
-    temp_options2 = {}
-    id = 'Viewport Options'
-    for key in preset[id]:
+    for key, value in data['Viewport Options'].items():
+        # There are some keys we want to ignore
+        if key in {"override_viewport_options", "high_quality"}:
+            continue
+        # First handle special cases where we do value conversion to
+        # separate option values
         if key == 'textureMaxResolution':
-            if preset[id][key] > 0:
-                temp_options2['textureMaxResolution'] = preset[id][key]
-                temp_options2['enableTextureMaxRes'] = True
-                temp_options2['textureMaxResMode'] = 1
+            viewport2_options['textureMaxResolution'] = value
+            if value > 0:
+                viewport2_options['enableTextureMaxRes'] = True
+                viewport2_options['textureMaxResMode'] = 1
             else:
-                temp_options2['textureMaxResolution'] = preset[id][key]
-                temp_options2['enableTextureMaxRes'] = False
-                temp_options2['textureMaxResMode'] = 0
+                viewport2_options['enableTextureMaxRes'] = False
+                viewport2_options['textureMaxResMode'] = 0
-        if key == 'multiSample':
-            if preset[id][key] > 0:
-                temp_options2['multiSampleEnable'] = True
-                temp_options2['multiSampleCount'] = preset[id][key]
-            else:
-                temp_options2['multiSampleEnable'] = False
-                temp_options2['multiSampleCount'] = preset[id][key]
+        elif key == 'multiSample':
+            viewport2_options['multiSampleEnable'] = value > 0
+            viewport2_options['multiSampleCount'] = value
-        if key == 'renderDepthOfField':
-            temp_options2['renderDepthOfField'] = preset[id][key]
+        elif key == 'alphaCut':
+            viewport2_options['transparencyAlgorithm'] = 5
+            viewport2_options['transparencyQuality'] = 1
-        if key == 'ssaoEnable':
-            if preset[id][key] is True:
-                temp_options2['ssaoEnable'] = True
-            else:
-                temp_options2['ssaoEnable'] = False
+        elif key == 'hwFogFalloff':
+            # Settings enum value string to integer
+            viewport2_options['hwFogFalloff'] = int(value)
-        if key == 'ssaoSamples':
-            temp_options2['ssaoSamples'] = preset[id][key]
-        if key == 'ssaoAmount':
-            temp_options2['ssaoAmount'] = preset[id][key]
-        if key == 'ssaoRadius':
-            temp_options2['ssaoRadius'] = preset[id][key]
-        if key == 'hwFogDensity':
-            temp_options2['hwFogDensity'] = preset[id][key]
-        if key == 'ssaoFilterRadius':
-            temp_options2['ssaoFilterRadius'] = preset[id][key]
-        if key == 'alphaCut':
-            temp_options2['transparencyAlgorithm'] = 5
-            temp_options2['transparencyQuality'] = 1
-        if key == 'headsUpDisplay':
-            temp_options['headsUpDisplay'] = True
-        if key == 'fogging':
-            temp_options['fogging'] = preset[id][key] or False
-        if key == 'hwFogStart':
-            temp_options2['hwFogStart'] = preset[id][key]
-        if key == 'hwFogEnd':
-            temp_options2['hwFogEnd'] = preset[id][key]
-        if key == 'hwFogAlpha':
-            temp_options2['hwFogAlpha'] = preset[id][key]
-        if key == 'hwFogFalloff':
-            temp_options2['hwFogFalloff'] = int(preset[id][key])
-        if key == 'hwFogColorR':
-            temp_options2['hwFogColorR'] = preset[id][key]
-        if key == 'hwFogColorG':
-            temp_options2['hwFogColorG'] = preset[id][key]
-        if key == 'hwFogColorB':
-            temp_options2['hwFogColorB'] = preset[id][key]
-        if key == 'motionBlurEnable':
-            if preset[id][key] is True:
-                temp_options2['motionBlurEnable'] = True
-            else:
-                temp_options2['motionBlurEnable'] = False
-        if key == 'motionBlurSampleCount':
-            temp_options2['motionBlurSampleCount'] = preset[id][key]
-        if key == 'motionBlurShutterOpenFraction':
-            temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]
-        if key == 'lineAAEnable':
-            if preset[id][key] is True:
-                temp_options2['lineAAEnable'] = True
-            else:
-                temp_options2['lineAAEnable'] = False
+        # Then handle Viewport 2.0 Options
+        elif key in VIEWPORT2_OPTIONS:
+            viewport2_options[key] = value
+        # Then assume remainder is Viewport Options
         else:
-            temp_options[str(key)] = preset[id][key]
+            viewport_options[key] = value
-    for key in ['override_viewport_options',
-                'high_quality',
-                'alphaCut',
-                'gpuCacheDisplayFilter',
-                'multiSample',
-                'ssaoEnable',
-                'ssaoSamples',
-                'ssaoAmount',
-                'ssaoFilterRadius',
-                'ssaoRadius',
-                'hwFogStart',
-                'hwFogEnd',
-                'hwFogAlpha',
-                'hwFogFalloff',
-                'hwFogColorR',
-                'hwFogColorG',
-                'hwFogColorB',
-                'hwFogDensity',
-                'textureMaxResolution',
-                'motionBlurEnable',
-                'motionBlurSampleCount',
-                'motionBlurShutterOpenFraction',
-                'lineAAEnable',
-                'renderDepthOfField'
-                ]:
-        temp_options.pop(key, None)
-    options['viewport_options'] = temp_options
-    options['viewport2_options'] = temp_options2
+    options['viewport_options'] = viewport_options
+    options['viewport2_options'] = viewport2_options
     options['camera_options'] = camera_options
     # use active sound track
     scene = capture.parse_active_scene()
     options['sound'] = scene['sound']
-    # options['display_options'] = temp_options
     return options
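The background-color conversion in the hunk above (0-255 RGBA preset values normalized to 0-1 RGB floats for `capture`) can be sketched as a standalone helper. This is an illustrative simplification with a hypothetical function name, not the actual OpenPype API:

```python
def convert_background_color(value):
    # Preset colors are stored as 0-255 RGBA lists; capture's display
    # options expect 0-1 float RGB, so drop alpha and normalize.
    if len(value) == 4:
        value = [float(channel) / 255 for channel in value[:3]]
    return value

print(convert_background_color([255, 0, 127.5, 255]))  # [1.0, 0.0, 0.5]
```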
@@ -3159,7 +3097,7 @@ def set_colorspace():
     """Set Colorspace from project configuration
     """
     project_name = os.getenv("AVALON_PROJECT")
-    imageio = get_anatomy_settings(project_name)["imageio"]["maya"]
+    imageio = get_project_settings(project_name)["maya"]["imageio"]
     # Maya 2022+ introduces new OCIO v2 color management settings that
     # can override the old color management preferences. OpenPype has


@@ -80,7 +80,7 @@ IMAGE_PREFIXES = {
     "mayahardware2": "defaultRenderGlobals.imageFilePrefix"
 }
-RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"
+RENDERMAN_IMAGE_DIR = "<scene>/<layer>"
 def has_tokens(string, tokens):


@@ -6,7 +6,7 @@ import six
 import sys
 from openpype.lib import Logger
-from openpype.api import (
+from openpype.settings import (
     get_project_settings,
     get_current_project_settings
 )
@@ -29,7 +29,7 @@ class RenderSettings(object):
     _image_prefixes = {
         'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"],  # noqa
         'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"],  # noqa
-        'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
+        'renderman': '<Scene>/<layer>/<layer>{aov_separator}<aov>',
         'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"]  # noqa
     }


@@ -9,7 +9,7 @@ import requests
 from maya import cmds
 from maya.app.renderSetup.model import renderSetup
-from openpype.api import (
+from openpype.settings import (
     get_system_settings,
     get_project_settings,
 )


@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """Creator for Unreal Static Meshes."""
 from openpype.hosts.maya.api import plugin, lib
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import legacy_io
 from maya import cmds  # noqa


@@ -12,7 +12,7 @@ from openpype.hosts.maya.api import (
     lib,
     plugin
 )
-from openpype.api import (
+from openpype.settings import (
     get_system_settings,
     get_project_settings
 )


@@ -1,7 +1,7 @@
 import os
 import clique
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -4,7 +4,7 @@ from openpype.pipeline import (
     load,
     get_representation_path
 )
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 class GpuCacheLoader(load.LoaderPlugin):


@@ -5,7 +5,7 @@ import clique
 import maya.cmds as cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -1,7 +1,7 @@
 import os
 from maya import cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import legacy_io
 from openpype.pipeline.create import (
     legacy_create,


@@ -1,6 +1,6 @@
 import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -1,6 +1,6 @@
 import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -1,6 +1,6 @@
 import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -10,7 +10,7 @@ import os
 import maya.cmds as cmds
 from openpype.client import get_representation_by_name
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     legacy_io,
     load,


@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 import os
 import maya.cmds as cmds  # noqa
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path


@@ -6,7 +6,7 @@ from collections import defaultdict
 import clique
 from maya import cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import (
     load,
     get_representation_path
@@ -250,7 +250,7 @@ class YetiCacheLoader(load.LoaderPlugin):
         """
         name = node_name.replace(":", "_")
-        pattern = r"^({name})(\.[0-4]+)?(\.fur)$".format(name=re.escape(name))
+        pattern = r"^({name})(\.[0-9]+)?(\.fur)$".format(name=re.escape(name))
         files = [fname for fname in os.listdir(root) if re.match(pattern,
                                                                  fname)]
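The widened character class matters in practice: the old `[0-4]` class silently skipped cache files whose frame numbers contain the digits 5-9. A quick standalone check with hypothetical file names:

```python
import re

name = re.escape("yetiNode_main")
new_pattern = r"^({name})(\.[0-9]+)?(\.fur)$".format(name=name)
old_pattern = r"^({name})(\.[0-4]+)?(\.fur)$".format(name=name)

# Frame 57 contains digits outside 0-4, so only the fixed pattern matches.
assert re.match(new_pattern, "yetiNode_main.0057.fur")
assert not re.match(old_pattern, "yetiNode_main.0057.fur")
# Frameless caches still match either way.
assert re.match(new_pattern, "yetiNode_main.fur")
```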


@@ -1,7 +1,7 @@
 import os
 from collections import defaultdict
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 import openpype.hosts.maya.api.plugin
 from openpype.hosts.maya.api import lib


@@ -34,14 +34,15 @@ class ExtractLayout(publish.Extractor):
         for asset in cmds.sets(str(instance), query=True):
             # Find the container
             grp_name = asset.split(':')[0]
-            containers = cmds.ls(f"{grp_name}*_CON")
+            containers = cmds.ls("{}*_CON".format(grp_name))
             assert len(containers) == 1, \
-                f"More than one container found for {asset}"
+                "More than one container found for {}".format(asset)
             container = containers[0]
-            representation_id = cmds.getAttr(f"{container}.representation")
+            representation_id = cmds.getAttr(
+                "{}.representation".format(container))
             representation = get_representation_by_id(
                 project_name,
@@ -56,7 +57,8 @@ class ExtractLayout(publish.Extractor):
             json_element = {
                 "family": family,
-                "instance_name": cmds.getAttr(f"{container}.name"),
+                "instance_name": cmds.getAttr(
+                    "{}.namespace".format(container)),
                 "representation": str(representation_id),
                 "version": str(version_id)
             }


@@ -77,8 +77,10 @@ class ExtractPlayblast(publish.Extractor):
         preset['height'] = asset_height
         preset['start_frame'] = start
         preset['end_frame'] = end
-        camera_option = preset.get("camera_option", {})
-        camera_option["depthOfField"] = cmds.getAttr(
+        # Enforce persisting camera depth of field
+        camera_options = preset.setdefault("camera_options", {})
+        camera_options["depthOfField"] = cmds.getAttr(
             "{0}.depthOfField".format(camera))
         stagingdir = self.staging_dir(instance)
@@ -136,8 +138,10 @@ class ExtractPlayblast(publish.Extractor):
         self.log.debug("playblast path  {}".format(path))
         collected_files = os.listdir(stagingdir)
+        patterns = [clique.PATTERNS["frames"]]
         collections, remainder = clique.assemble(collected_files,
-                                                 minimum_items=1)
+                                                 minimum_items=1,
+                                                 patterns=patterns)
         self.log.debug("filename {}".format(filename))
         frame_collection = None


@@ -1,5 +1,6 @@
 import os
 import glob
+import tempfile
 import capture
@@ -81,9 +82,17 @@ class ExtractThumbnail(publish.Extractor):
         elif asset_width and asset_height:
             preset['width'] = asset_width
             preset['height'] = asset_height
-        stagingDir = self.staging_dir(instance)
+        # Create temp directory for thumbnail
+        # - this is to avoid "override" of source file
+        dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
+        self.log.debug(
+            "Create temp directory {} for thumbnail".format(dst_staging)
+        )
+        # Store new staging to cleanup paths
+        instance.context.data["cleanupFullPaths"].append(dst_staging)
         filename = "{0}".format(instance.name)
-        path = os.path.join(stagingDir, filename)
+        path = os.path.join(dst_staging, filename)
         self.log.info("Outputting images to %s" % path)
@@ -137,7 +146,7 @@ class ExtractThumbnail(publish.Extractor):
             'name': 'thumbnail',
             'ext': 'jpg',
             'files': thumbnail,
-            "stagingDir": stagingDir,
+            "stagingDir": dst_staging,
             "thumbnail": True
         }
         instance.data["representations"].append(representation)


@@ -11,7 +11,7 @@ import pyblish.api
 from openpype.lib import requests_post
 from openpype.hosts.maya.api import lib
 from openpype.pipeline import legacy_io
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
 # mapping between Maya renderer names and Muster template ids
@@ -118,7 +118,7 @@ def preview_fname(folder, scene, layer, padding, ext):
     """
     # Following hardcoded "<Scene>/<Scene>_<Layer>/<Layer>"
-    output = "maya/{scene}/{layer}/{layer}.{number}.{ext}".format(
+    output = "{scene}/{layer}/{layer}.{number}.{ext}".format(
         scene=scene,
         layer=layer,
         number="#" * padding,
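The effect of dropping the hardcoded `maya/` root can be shown with a minimal stand-in for `preview_fname` (simplified; the real function also takes a `folder` argument and an extension):

```python
def preview_fname(scene, layer, padding, ext):
    # Mirrors the updated template: no "maya/" root prefix,
    # frame number rendered as '#' placeholders.
    return "{scene}/{layer}/{layer}.{number}.{ext}".format(
        scene=scene,
        layer=layer,
        number="#" * padding,
        ext=ext,
    )

print(preview_fname("shot010", "beauty", 4, "exr"))
# shot010/beauty/beauty.####.exr
```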


@@ -22,10 +22,10 @@ def get_redshift_image_format_labels():
 class ValidateRenderSettings(pyblish.api.InstancePlugin):
     """Validates the global render settings
-    * File Name Prefix must start with: `maya/<Scene>`
+    * File Name Prefix must start with: `<Scene>`
     all other token are customizable but sane values for Arnold are:
-    `maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>`
+    `<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>`
     <Camera> token is supported also, useful for multiple renderable
     cameras per render layer.
@@ -64,12 +64,12 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
     }
     ImagePrefixTokens = {
-        'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
-        'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
-        'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
-        'vray': 'maya/<Scene>/<Layer>/<Layer>',
+        'mentalray': '<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
+        'arnold': '<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa: E501
+        'redshift': '<Scene>/<RenderLayer>/<RenderLayer>',
+        'vray': '<Scene>/<Layer>/<Layer>',
         'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>',
-        'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
+        'mayahardware2': '<Scene>/<RenderLayer>/<RenderLayer>',
     }
     _aov_chars = {
@@ -80,7 +80,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
     redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>{aov_separator}<RenderPass>"  # noqa: E501
-    renderman_dir_prefix = "maya/<scene>/<layer>"
+    renderman_dir_prefix = "<scene>/<layer>"
     R_AOV_TOKEN = re.compile(
         r'%a|<aov>|<renderpass>', re.IGNORECASE)
@@ -90,8 +90,8 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
     R_SCENE_TOKEN = re.compile(r'%s|<scene>', re.IGNORECASE)
     DEFAULT_PADDING = 4
-    VRAY_PREFIX = "maya/<Scene>/<Layer>/<Layer>"
-    DEFAULT_PREFIX = "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>"
+    VRAY_PREFIX = "<Scene>/<Layer>/<Layer>"
+    DEFAULT_PREFIX = "<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>"
     def process(self, instance):
@@ -123,7 +123,6 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
         prefix = prefix.replace(
             "{aov_separator}", instance.data.get("aovSeparator", "_"))
-        required_prefix = "maya/<scene>"
         default_prefix = cls.ImagePrefixTokens[renderer]
         if not anim_override:
@@ -131,15 +130,6 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
             cls.log.error("Animation needs to be enabled. Use the same "
                           "frame for start and end to render single frame")
-        if renderer != "renderman" and not prefix.lower().startswith(
-                required_prefix):
-            invalid = True
-            cls.log.error(
-                ("Wrong image prefix [ {} ] "
-                 " - doesn't start with: '{}'").format(
-                    prefix, required_prefix)
-            )
         if not re.search(cls.R_LAYER_TOKEN, prefix):
             invalid = True
             cls.log.error("Wrong image prefix [ {} ] - "


@@ -1,5 +1,5 @@
 import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 from openpype.pipeline import install_host
 from openpype.hosts.maya.api import MayaHost
 from maya import cmds


@@ -563,7 +563,15 @@ def get_node_path(path, padding=4):
 def get_nuke_imageio_settings():
-    return get_anatomy_settings(Context.project_name)["imageio"]["nuke"]
+    project_imageio = get_project_settings(
+        Context.project_name)["nuke"]["imageio"]
+    # backward compatibility for project started before 3.10
+    # those are still having `__legacy__` knob types
+    if not project_imageio["enabled"]:
+        return get_anatomy_settings(Context.project_name)["imageio"]["nuke"]
+    return get_project_settings(Context.project_name)["nuke"]["imageio"]
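The backward-compatibility branch above amounts to a simple selector between the new project settings and the legacy anatomy settings. A minimal sketch with plain dicts (names and dict shapes are illustrative, not the OpenPype API):

```python
def pick_nuke_imageio(project_settings, anatomy_settings):
    # Projects started before 3.10 have imageio disabled in project
    # settings and still rely on the legacy anatomy-based settings.
    nuke_imageio = project_settings["nuke"]["imageio"]
    if not nuke_imageio["enabled"]:
        return anatomy_settings["imageio"]["nuke"]
    return nuke_imageio

legacy = {"imageio": {"nuke": {"source": "anatomy"}}}
modern = {"nuke": {"imageio": {"enabled": True, "source": "project"}}}
disabled = {"nuke": {"imageio": {"enabled": False}}}

assert pick_nuke_imageio(modern, legacy)["source"] == "project"
assert pick_nuke_imageio(disabled, legacy)["source"] == "anatomy"
```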
def get_created_node_imageio_setting_legacy(nodeclass, creator, subset):


@@ -7,9 +7,7 @@ import nuke
 import pyblish.api
 import openpype
-from openpype.api import (
-    get_current_project_settings
-)
+from openpype.settings import get_current_project_settings
 from openpype.lib import register_event_callback, Logger
 from openpype.pipeline import (
     register_loader_plugin_path,


@@ -6,7 +6,7 @@ from abc import abstractmethod
 import nuke
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
 from openpype.pipeline import (
     LegacyCreator,
     LoaderPlugin,


@@ -1,7 +1,7 @@
 import os
 import nuke
-from openpype.api import resources
+from openpype import resources
 from .lib import maintained_selection


@@ -425,7 +425,7 @@ class LoadClip(plugin.NukeLoader):
         colorspace = repre_data.get("colorspace")
         colorspace = colorspace or version_data.get("colorspace")
-        # colorspace from `project_anatomy/imageio/nuke/regexInputs`
+        # colorspace from `project_settings/nuke/imageio/regexInputs`
         iio_colorspace = get_imageio_input_colorspace(path)
         # Set colorspace defined in version data


@@ -77,11 +77,14 @@ class ValidateNukeWriteNode(pyblish.api.InstancePlugin):
                 # fix type differences
                 if type(node_value) in (int, float):
-                    if isinstance(value, list):
-                        value = color_gui_to_int(value)
-                    else:
-                        value = float(value)
-                        node_value = float(node_value)
+                    try:
+                        if isinstance(value, list):
+                            value = color_gui_to_int(value)
+                        else:
+                            value = float(value)
+                            node_value = float(node_value)
+                    except ValueError:
+                        value = str(value)
                 else:
                     value = str(value)
                     node_value = str(node_value)
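The new try/except can be exercised in isolation. The sketch below mirrors the coercion logic, with the list/`color_gui_to_int` branch omitted for brevity (function name is illustrative):

```python
def coerce_for_compare(node_value, value):
    # Numeric knobs: try float comparison, fall back to strings when the
    # expected value is not parseable as a number (the old code raised).
    if isinstance(node_value, (int, float)):
        try:
            value = float(value)
            node_value = float(node_value)
        except ValueError:
            value = str(value)
    else:
        value = str(value)
        node_value = str(node_value)
    return node_value, value

assert coerce_for_compare(1, "2") == (1.0, 2.0)
assert coerce_for_compare(1, "exr") == (1, "exr")  # no ValueError crash
```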


@@ -244,7 +244,7 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
     log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
         instance, old_value, new_value))
-    from openpype.hosts.resolve import (
+    from openpype.hosts.resolve.api import (
         set_publish_attribute
     )


@@ -4,13 +4,15 @@ import uuid
 import qargparse
 from Qt import QtWidgets, QtCore
+from openpype.settings import get_current_project_settings
-from openpype.pipeline.context_tools import get_current_project_asset
 from openpype.pipeline import (
     LegacyCreator,
     LoaderPlugin,
 )
+from openpype.pipeline.context_tools import get_current_project_asset
-from openpype.hosts import resolve
+from . import lib
+from .menu import load_stylesheet
 class CreatorWidget(QtWidgets.QDialog):
@@ -86,7 +88,7 @@ class CreatorWidget(QtWidgets.QDialog):
         ok_btn.clicked.connect(self._on_ok_clicked)
         cancel_btn.clicked.connect(self._on_cancel_clicked)
-        stylesheet = resolve.api.menu.load_stylesheet()
+        stylesheet = load_stylesheet()
         self.setStyleSheet(stylesheet)
     def _on_ok_clicked(self):
@@ -438,7 +440,7 @@ class ClipLoader:
         source_in = int(_clip_property("Start"))
         source_out = int(_clip_property("End"))
-        resolve.swap_clips(
+        lib.swap_clips(
             timeline_item,
             media_pool_item,
             source_in,
@@ -504,7 +506,7 @@ class Creator(LegacyCreator):
     def __init__(self, *args, **kwargs):
         super(Creator, self).__init__(*args, **kwargs)
-        from openpype.api import get_current_project_settings
         resolve_p_settings = get_current_project_settings().get("resolve")
         self.presets = {}
         if resolve_p_settings:
@@ -512,13 +514,13 @@ class Creator(LegacyCreator):
                 self.__class__.__name__, {})
         # adding basic current context resolve objects
-        self.project = resolve.get_current_project()
-        self.timeline = resolve.get_current_timeline()
+        self.project = lib.get_current_project()
+        self.timeline = lib.get_current_timeline()
         if (self.options or {}).get("useSelection"):
-            self.selected = resolve.get_current_timeline_items(filter=True)
+            self.selected = lib.get_current_timeline_items(filter=True)
         else:
-            self.selected = resolve.get_current_timeline_items(filter=False)
+            self.selected = lib.get_current_timeline_items(filter=False)
         self.widget = CreatorWidget


@@ -86,6 +86,8 @@ class TrayPublishCreator(Creator):
             # Host implementation of storing metadata about instance
             HostContext.add_instance(new_instance.data_to_store())
+            new_instance.mark_as_stored()
             # Add instance to current context
             self._add_instance_to_context(new_instance)


@@ -1,6 +1,6 @@
 import os
 from openpype.lib import Logger
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 log = Logger.get_logger(__name__)


@@ -35,12 +35,12 @@ class CollectMovieBatch(
             "stagingDir": os.path.dirname(file_url),
             "tags": []
         }
-        instance.data["representations"].append(repre)
         if creator_attributes["add_review_family"]:
             repre["tags"].append("review")
             instance.data["families"].append("review")
+        instance.data["representations"].append(repre)
+        instance.data["thumbnailSource"] = file_url
         instance.data["source"] = file_url


@@ -148,8 +148,11 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
             ))
             return
+        item_dir = review_file_item["directory"]
+        first_filepath = os.path.join(item_dir, filenames[0])
         filepaths = {
-            os.path.join(review_file_item["directory"], filename)
+            os.path.join(item_dir, filename)
             for filename in filenames
         }
         source_filepaths.extend(filepaths)
@@ -176,6 +179,8 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
         if "review" not in instance.data["families"]:
             instance.data["families"].append("review")
+        instance.data["thumbnailSource"] = first_filepath
         review_representation["tags"].append("review")
         self.log.debug("Representation {} was marked for review. {}".format(
             review_representation["name"], review_path

@@ -0,0 +1,173 @@
"""Create instance thumbnail from "thumbnailSource" on 'instance.data'.
Output is new representation with "thumbnail" name on instance. If instance
already have such representation the process is skipped.
This way a collector can point to a file from which should be thumbnail
generated. This is different approach then what global plugin for thumbnails
does. The global plugin has specific logic which does not support
Todos:
No size handling. Size of input is used for output thumbnail which can
cause issues.
"""
import os
import tempfile
import pyblish.api
from openpype.lib import (
get_ffmpeg_tool_path,
get_oiio_tools_path,
is_oiio_supported,
run_subprocess,
)
class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
"""Create jpg thumbnail for instance based on 'thumbnailSource'.
Thumbnail source must be a single image or video filepath.
"""
label = "Extract Thumbnail (from source)"
# Before 'ExtractThumbnail' in global plugins
order = pyblish.api.ExtractorOrder - 0.00001
hosts = ["traypublisher"]
def process(self, instance):
subset_name = instance.data["subset"]
self.log.info(
"Processing instance with subset name {}".format(subset_name)
)
thumbnail_source = instance.data.get("thumbnailSource")
if not thumbnail_source:
self.log.debug("Thumbnail source not filled. Skipping.")
return
elif not os.path.exists(thumbnail_source):
self.log.debug(
"Thumbnail source file was not found {}. Skipping.".format(
thumbnail_source))
return
# Check if already has thumbnail created
if self._already_has_thumbnail(instance):
self.log.info("Thumbnail representation already present.")
return
# Create temp directory for thumbnail
# - this is to avoid "override" of source file
dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
self.log.debug(
"Create temp directory {} for thumbnail".format(dst_staging)
)
# Store new staging to cleanup paths
instance.context.data["cleanupFullPaths"].append(dst_staging)
thumbnail_created = False
oiio_supported = is_oiio_supported()
self.log.info("Thumbnail source: {}".format(thumbnail_source))
src_basename = os.path.basename(thumbnail_source)
dst_filename = os.path.splitext(src_basename)[0] + ".jpg"
full_output_path = os.path.join(dst_staging, dst_filename)
if oiio_supported:
self.log.info("Trying to convert with OIIO")
# If the input can read by OIIO then use OIIO method for
# conversion otherwise use ffmpeg
thumbnail_created = self.create_thumbnail_oiio(
thumbnail_source, full_output_path
)
# Try to use FFMPEG if OIIO is not supported or for cases when
# oiiotool isn't available
if not thumbnail_created:
if oiio_supported:
self.log.info((
"Converting with FFMPEG because input"
" can't be read by OIIO."
))
thumbnail_created = self.create_thumbnail_ffmpeg(
thumbnail_source, full_output_path
)
# Skip representation and try next one if wasn't created
if not thumbnail_created:
self.log.warning("Thumbnail has not been created.")
return
new_repre = {
"name": "thumbnail",
"ext": "jpg",
"files": dst_filename,
"stagingDir": dst_staging,
"thumbnail": True,
"tags": ["thumbnail"]
}
# adding representation
self.log.debug(
"Adding thumbnail representation: {}".format(new_repre)
)
instance.data["representations"].append(new_repre)
def _already_has_thumbnail(self, instance):
if "representations" not in instance.data:
self.log.warning(
"Instance does not have 'representations' key filled"
)
instance.data["representations"] = []
for repre in instance.data["representations"]:
if repre["name"] == "thumbnail":
return True
return False
def create_thumbnail_oiio(self, src_path, dst_path):
self.log.info("outputting {}".format(dst_path))
oiio_tool_path = get_oiio_tools_path()
oiio_cmd = [
oiio_tool_path,
"-a", src_path,
"-o", dst_path
]
self.log.info("Running: {}".format(" ".join(oiio_cmd)))
try:
run_subprocess(oiio_cmd, logger=self.log)
return True
except Exception:
self.log.warning(
"Failed to create thumbnail using oiiotool",
exc_info=True
)
return False
def create_thumbnail_ffmpeg(self, src_path, dst_path):
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
max_int = str(2147483647)
ffmpeg_cmd = [
ffmpeg_path,
"-y",
"-analyzeduration", max_int,
"-probesize", max_int,
"-i", src_path,
"-vframes", "1",
dst_path
]
self.log.info("Running: {}".format(" ".join(ffmpeg_cmd)))
try:
run_subprocess(ffmpeg_cmd, logger=self.log)
return True
except Exception:
self.log.warning(
"Failed to create thumbnail using ffmpeg",
exc_info=True
)
return False
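For reference, the ffmpeg arguments the extractor builds can be assembled by a pure helper (a sketch of the same command; the helper name is hypothetical and not part of the plugin):

```python
def build_ffmpeg_thumbnail_args(ffmpeg_path, src_path, dst_path):
    # Large analyzeduration/probesize values help ffmpeg probe unusual
    # inputs; "-vframes 1" grabs a single frame as the thumbnail.
    max_int = str(2147483647)
    return [
        ffmpeg_path,
        "-y",
        "-analyzeduration", max_int,
        "-probesize", max_int,
        "-i", src_path,
        "-vframes", "1",
        dst_path,
    ]

args = build_ffmpeg_thumbnail_args("ffmpeg", "source.mov", "thumb.jpg")
assert args[0] == "ffmpeg" and args[-1] == "thumb.jpg"
```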


@@ -10,7 +10,7 @@ import pyblish.api
 from openpype.client import get_project, get_asset_by_name
 from openpype.hosts import tvpaint
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
 from openpype.lib import register_event_callback
 from openpype.pipeline import (
     legacy_io,


@@ -20,15 +20,11 @@ class StaticMeshAlembicLoader(plugin.Loader):
     icon = "cube"
     color = "orange"
-    def get_task(self, filename, asset_dir, asset_name, replace):
+    @staticmethod
+    def get_task(filename, asset_dir, asset_name, replace, default_conversion):
         task = unreal.AssetImportTask()
         options = unreal.AbcImportSettings()
         sm_settings = unreal.AbcStaticMeshSettings()
-        conversion_settings = unreal.AbcConversionSettings(
-            preset=unreal.AbcConversionPreset.CUSTOM,
-            flip_u=False, flip_v=False,
-            rotation=[0.0, 0.0, 0.0],
-            scale=[1.0, 1.0, 1.0])
         task.set_editor_property('filename', filename)
         task.set_editor_property('destination_path', asset_dir)
@@ -44,13 +40,20 @@ class StaticMeshAlembicLoader(plugin.Loader):
         sm_settings.set_editor_property('merge_meshes', True)
+        if not default_conversion:
+            conversion_settings = unreal.AbcConversionSettings(
+                preset=unreal.AbcConversionPreset.CUSTOM,
+                flip_u=False, flip_v=False,
+                rotation=[0.0, 0.0, 0.0],
+                scale=[1.0, 1.0, 1.0])
+            options.conversion_settings = conversion_settings
         options.static_mesh_settings = sm_settings
-        options.conversion_settings = conversion_settings
         task.options = options
         return task
-    def load(self, context, name, namespace, data):
+    def load(self, context, name, namespace, options):
         """Load and containerise representation into Content Browser.
         This is two step process. First, import FBX to temporary path and
@@ -82,6 +85,10 @@ class StaticMeshAlembicLoader(plugin.Loader):
         asset_name = "{}".format(name)
         version = context.get('version').get('name')
+        default_conversion = False
+        if options.get("default_conversion"):
+            default_conversion = options.get("default_conversion")
         tools = unreal.AssetToolsHelpers().get_asset_tools()
         asset_dir, container_name = tools.create_unique_asset_name(
             f"{root}/{asset}/{name}_v{version:03d}", suffix="")
@@ -91,7 +98,8 @@ class StaticMeshAlembicLoader(plugin.Loader):
         if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
             unreal.EditorAssetLibrary.make_directory(asset_dir)
-        task = self.get_task(self.fname, asset_dir, asset_name, False)
+        task = self.get_task(
+            self.fname, asset_dir, asset_name, False, default_conversion)
         unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])  # noqa: E501


@@ -24,7 +24,7 @@ from openpype.pipeline import (
     legacy_io,
 )
 from openpype.pipeline.context_tools import get_current_project_asset
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
 from openpype.hosts.unreal.api import plugin
 from openpype.hosts.unreal.api import pipeline as unreal_pipeline


@@ -0,0 +1,418 @@
import json
from pathlib import Path
import unreal
from unreal import EditorLevelLibrary
from bson.objectid import ObjectId
from openpype import pipeline
from openpype.pipeline import (
discover_loader_plugins,
loaders_from_representation,
load_container,
get_representation_path,
AVALON_CONTAINER_ID,
legacy_io,
)
from openpype.api import get_current_project_settings
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as upipeline
class ExistingLayoutLoader(plugin.Loader):
"""
Load Layout for an existing scene, and match the existing assets.
"""
families = ["layout"]
representations = ["json"]
label = "Load Layout on Existing Scene"
icon = "code-fork"
color = "orange"
ASSET_ROOT = "/Game/OpenPype"
@staticmethod
def _create_container(
asset_name, asset_dir, asset, representation, parent, family
):
container_name = f"{asset_name}_CON"
container = None
if not unreal.EditorAssetLibrary.does_asset_exist(
f"{asset_dir}/{container_name}"
):
container = upipeline.create_container(container_name, asset_dir)
else:
ar = unreal.AssetRegistryHelpers.get_asset_registry()
obj = ar.get_asset_by_object_path(
f"{asset_dir}/{container_name}.{container_name}")
container = obj.get_asset()
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"asset": asset,
"namespace": asset_dir,
"container_name": container_name,
"asset_name": asset_name,
# "loader": str(self.__class__.__name__),
"representation": representation,
"parent": parent,
"family": family
}
upipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
return container.get_path_name()
@staticmethod
def _get_current_level():
ue_version = unreal.SystemLibrary.get_engine_version().split('.')
ue_major = ue_version[0]
if ue_major == '4':
return EditorLevelLibrary.get_editor_world()
elif ue_major == '5':
return unreal.LevelEditorSubsystem().get_current_level()
raise NotImplementedError(
f"Unreal version {ue_major} not supported")
def _get_transform(self, ext, import_data, lasset):
conversion = unreal.Matrix.IDENTITY.transform()
fbx_tuning = unreal.Matrix.IDENTITY.transform()
basis = unreal.Matrix(
lasset.get('basis')[0],
lasset.get('basis')[1],
lasset.get('basis')[2],
lasset.get('basis')[3]
).transform()
transform = unreal.Matrix(
lasset.get('transform_matrix')[0],
lasset.get('transform_matrix')[1],
lasset.get('transform_matrix')[2],
lasset.get('transform_matrix')[3]
).transform()
# Check for the conversion settings. We cannot access
# the alembic conversion settings, so we assume that
# the maya ones have been applied.
if ext == '.fbx':
loc = import_data.import_translation
rot = import_data.import_rotation.to_vector()
scale = import_data.import_uniform_scale
conversion = unreal.Transform(
location=[loc.x, loc.y, loc.z],
rotation=[rot.x, rot.y, rot.z],
scale=[-scale, scale, scale]
)
fbx_tuning = unreal.Transform(
rotation=[180.0, 0.0, 90.0],
scale=[1.0, 1.0, 1.0]
)
elif ext == '.abc':
# This is the standard conversion settings for
# alembic files from Maya.
conversion = unreal.Transform(
location=[0.0, 0.0, 0.0],
rotation=[0.0, 0.0, 0.0],
scale=[1.0, -1.0, 1.0]
)
new_transform = (basis.inverse() * transform * basis)
return fbx_tuning * conversion.inverse() * new_transform
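The `basis.inverse() * transform * basis` conjugation in `_get_transform` re-expresses a transform stored in the layout's basis in the engine's own basis. A toy 2x2 version with plain lists illustrates the idea (no Unreal types; values are made up for the demo):

```python
def mat_mul(a, b):
    # Multiply two 2x2 matrices given as nested lists.
    return [
        [sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)
    ]

def mat_inv(m):
    # Closed-form inverse of a 2x2 matrix.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [
        [m[1][1] / det, -m[0][1] / det],
        [-m[1][0] / det, m[0][0] / det],
    ]

basis = [[0, 1], [1, 0]]      # axis swap (e.g. an up-axis change of basis)
transform = [[2, 0], [0, 3]]  # non-uniform scale stored in the source basis
converted = mat_mul(mat_mul(mat_inv(basis), transform), basis)
assert converted == [[3, 0], [0, 2]]  # the scale follows the swapped axes
```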
def _spawn_actor(self, obj, lasset):
actor = EditorLevelLibrary.spawn_actor_from_object(
obj, unreal.Vector(0.0, 0.0, 0.0)
)
actor.set_actor_label(lasset.get('instance_name'))
smc = actor.get_editor_property('static_mesh_component')
mesh = smc.get_editor_property('static_mesh')
import_data = mesh.get_editor_property('asset_import_data')
filename = import_data.get_first_filename()
path = Path(filename)
transform = self._get_transform(
path.suffix, import_data, lasset)
actor.set_actor_transform(transform, False, True)
@staticmethod
def _get_fbx_loader(loaders, family):
name = ""
if family == 'rig':
name = "SkeletalMeshFBXLoader"
elif family == 'model' or family == 'staticMesh':
name = "StaticMeshFBXLoader"
elif family == 'camera':
name = "CameraLoader"
if name == "":
return None
for loader in loaders:
if loader.__name__ == name:
return loader
return None
@staticmethod
def _get_abc_loader(loaders, family):
name = ""
if family == 'rig':
name = "SkeletalMeshAlembicLoader"
elif family == 'model':
name = "StaticMeshAlembicLoader"
if name == "":
return None
for loader in loaders:
if loader.__name__ == name:
return loader
return None
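The two lookups above could also be expressed as a single table keyed by (format, family). This dict-driven variant is an illustrative alternative sketch, not the actual OpenPype implementation; the loader names mirror those used in the methods above:

```python
# Map (representation format, family) to the loader class name.
LOADER_BY_FORMAT_FAMILY = {
    ("fbx", "rig"): "SkeletalMeshFBXLoader",
    ("fbx", "model"): "StaticMeshFBXLoader",
    ("fbx", "staticMesh"): "StaticMeshFBXLoader",
    ("fbx", "camera"): "CameraLoader",
    ("abc", "rig"): "SkeletalMeshAlembicLoader",
    ("abc", "model"): "StaticMeshAlembicLoader",
}

def find_loader(loaders, repr_format, family):
    """Return the loader class whose name matches the table, or None."""
    name = LOADER_BY_FORMAT_FAMILY.get((repr_format, family))
    if not name:
        return None
    return next(
        (loader for loader in loaders if loader.__name__ == name), None
    )

class StaticMeshFBXLoader:  # stand-in for the real loader plugin
    pass

assert find_loader([StaticMeshFBXLoader], "fbx", "model") is StaticMeshFBXLoader
assert find_loader([StaticMeshFBXLoader], "abc", "camera") is None
```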
def _load_asset(self, representation, version, instance_name, family):
valid_formats = ['fbx', 'abc']
repr_data = legacy_io.find_one({
"type": "representation",
"parent": ObjectId(version),
"name": {"$in": valid_formats}
})
repr_format = repr_data.get('name')
all_loaders = discover_loader_plugins()
loaders = loaders_from_representation(
all_loaders, representation)
loader = None
if repr_format == 'fbx':
loader = self._get_fbx_loader(loaders, family)
elif repr_format == 'abc':
loader = self._get_abc_loader(loaders, family)
if not loader:
self.log.error(f"No valid loader found for {representation}")
return []
# This option is necessary to avoid importing the assets with a
# different conversion compared to the other assets. For ABC files,
# it is in fact impossible to access the conversion settings. So,
# we must assume that the Maya conversion settings have been applied.
options = {
"default_conversion": True
}
assets = load_container(
loader,
representation,
namespace=instance_name,
options=options
)
return assets
def _process(self, lib_path):
data = get_current_project_settings()
delete_unmatched = data["unreal"]["delete_unmatched_assets"]
ar = unreal.AssetRegistryHelpers.get_asset_registry()
actors = EditorLevelLibrary.get_all_level_actors()
with open(lib_path, "r") as fp:
data = json.load(fp)
layout_data = []
# Get all the representations in the JSON from the database.
for element in data:
if element.get('representation'):
layout_data.append((
pipeline.legacy_io.find_one({
"_id": ObjectId(element.get('representation'))
}),
element
))
containers = []
actors_matched = []
for (repr_data, lasset) in layout_data:
if not repr_data:
raise AssertionError("Representation not found")
if not (repr_data.get('data') and
repr_data.get('data').get('path')):
raise AssertionError("Representation does not have path")
if not repr_data.get('context'):
raise AssertionError("Representation does not have context")
# For every actor in the scene, check if it has a representation in
# those we got from the JSON. If so, create a container for it.
# Otherwise, remove it from the scene.
found = False
for actor in actors:
if not actor.get_class().get_name() == 'StaticMeshActor':
continue
if actor in actors_matched:
continue
# Get the original path of the file from which the asset has
# been imported.
smc = actor.get_editor_property('static_mesh_component')
mesh = smc.get_editor_property('static_mesh')
import_data = mesh.get_editor_property('asset_import_data')
filename = import_data.get_first_filename()
path = Path(filename)
if (not path.name or
path.name not in repr_data.get('data').get('path')):
continue
actor.set_actor_label(lasset.get('instance_name'))
mesh_path = Path(mesh.get_path_name()).parent.as_posix()
# Create the container for the asset.
asset = repr_data.get('context').get('asset')
subset = repr_data.get('context').get('subset')
container = self._create_container(
f"{asset}_{subset}", mesh_path, asset,
repr_data.get('_id'), repr_data.get('parent'),
repr_data.get('context').get('family')
)
containers.append(container)
# Set the transform for the actor.
transform = self._get_transform(
path.suffix, import_data, lasset)
actor.set_actor_transform(transform, False, True)
actors_matched.append(actor)
found = True
break
# If an actor has not been found for this representation,
# we check if it has been loaded already by checking all the
# loaded containers. If so, we add it to the scene. Otherwise,
# we load it.
if found:
continue
all_containers = upipeline.ls()
loaded = False
for container in all_containers:
repr = container.get('representation')
if not repr == str(repr_data.get('_id')):
continue
asset_dir = container.get('namespace')
filter = unreal.ARFilter(
class_names=["StaticMesh"],
package_paths=[asset_dir],
recursive_paths=False)
assets = ar.get_assets(filter)
for asset in assets:
obj = asset.get_asset()
self._spawn_actor(obj, lasset)
loaded = True
break
# If the asset has not been loaded yet, we load it.
if loaded:
continue
assets = self._load_asset(
lasset.get('representation'),
lasset.get('version'),
lasset.get('instance_name'),
lasset.get('family')
)
for asset in assets:
obj = ar.get_asset_by_object_path(asset).get_asset()
if not obj.get_class().get_name() == 'StaticMesh':
continue
self._spawn_actor(obj, lasset)
break
# Check if an actor was not matched to a representation.
# If so, remove it from the scene.
for actor in actors:
if not actor.get_class().get_name() == 'StaticMeshActor':
continue
if actor not in actors_matched:
self.log.warning(f"Actor {actor.get_name()} not matched.")
if delete_unmatched:
EditorLevelLibrary.destroy_actor(actor)
return containers
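The actor-matching loop above boils down to: for each representation, claim the first unmatched `StaticMeshActor` whose import filename appears in the representation's published path. A simplified, Unreal-free sketch of just that matching (plain dicts stand in for actors, purely for illustration):

```python
def match_actors(actors, repr_paths):
    """Map representation path -> actor, consuming each actor once."""
    matched = {}
    used = set()
    for path in repr_paths:
        for idx, actor in enumerate(actors):
            if idx in used:
                continue
            filename = actor.get("source_filename")
            if filename and filename in path:
                matched[path] = actor
                used.add(idx)
                break  # one actor per representation
    return matched

actors = [{"source_filename": "chair.fbx"}, {"source_filename": "table.fbx"}]
paths = ["/publish/asset/v001/chair.fbx", "/publish/asset/v001/lamp.fbx"]
# Only 'chair.fbx' matches; 'lamp.fbx' has no imported actor.
assert match_actors(actors, paths) == {"/publish/asset/v001/chair.fbx": actors[0]}
```

Unmatched representations are then loaded fresh, and unmatched actors are optionally destroyed, exactly as in `_process` above.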
def load(self, context, name, namespace, options):
print("Loading Layout and Match Assets")
asset = context.get('asset').get('name')
asset_name = f"{asset}_{name}" if asset else name
container_name = f"{asset}_{name}_CON"
curr_level = self._get_current_level()
if not curr_level:
raise AssertionError("Current level not saved")
containers = self._process(self.fname)
curr_level_path = Path(
curr_level.get_outer().get_path_name()).parent.as_posix()
if not unreal.EditorAssetLibrary.does_asset_exist(
f"{curr_level_path}/{container_name}"
):
upipeline.create_container(
container=container_name, path=curr_level_path)
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"asset": asset,
"namespace": curr_level_path,
"container_name": container_name,
"asset_name": asset_name,
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"],
"loaded_assets": containers
}
upipeline.imprint(f"{curr_level_path}/{container_name}", data)
def update(self, container, representation):
asset_dir = container.get('namespace')
source_path = get_representation_path(representation)
containers = self._process(source_path)
data = {
"representation": str(representation["_id"]),
"parent": str(representation["parent"]),
"loaded_assets": containers
}
upipeline.imprint(
"{}/{}".format(asset_dir, container.get('container_name')), data)

View file

@ -37,6 +37,15 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
This is not applicable for 'studio' processing where the host
application is called to process the uploaded workfile and render
frames itself.
For each task, configure what properties the resulting instance should
have based on the uploaded files:
- uploading a sequence of 'png' files >> creates an instance of the
'render' family; adding 'review' to 'Families' and 'Create review'
to Tags will produce a review.
Single files (>> 'image') and sequences (>> 'render') may resolve to
different families.
"""
# must be really early, context values are only in json file
order = pyblish.api.CollectorOrder - 0.490
@ -46,6 +55,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
# from Settings
task_type_to_family = []
sync_next_version = False # find max version to be published, use for all
def process(self, context):
batch_dir = context.data["batchDir"]
@ -64,6 +74,9 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
task_type = context.data["taskType"]
project_name = context.data["project_name"]
variant = context.data["variant"]
next_versions = []
instances = []
for task_dir in task_subfolders:
task_data = parse_json(os.path.join(task_dir,
"manifest.json"))
@ -90,11 +103,14 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
version = self._get_next_version(
project_name, asset_doc, subset_name
)
next_versions.append(version)
instance = context.create_instance(subset_name)
instance.data["asset"] = asset_name
instance.data["subset"] = subset_name
# set configurable result family
instance.data["family"] = family
# set configurable additional families
instance.data["families"] = families
instance.data["version"] = version
instance.data["stagingDir"] = tempfile.mkdtemp()
@ -137,8 +153,18 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
instance.data["handleStart"] = asset_doc["data"]["handleStart"]
instance.data["handleEnd"] = asset_doc["data"]["handleEnd"]
instances.append(instance)
self.log.info("instance.data:: {}".format(instance.data))
if not self.sync_next_version:
return
# Overwrite each instance's version with the same maximum version
max_next_version = max(next_versions)
for inst in instances:
inst.data["version"] = max_next_version
self.log.debug("overwritten version:: {}".format(max_next_version))
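The version-sync step above reduces to: gather each instance's next version, then overwrite all of them with the maximum so every subset in the batch publishes under one version number. A standalone sketch of that logic (plain dicts stand in for pyblish instances):

```python
def sync_versions(instances):
    """Overwrite every instance's version with the batch maximum."""
    if not instances:
        return None
    max_version = max(inst["version"] for inst in instances)
    for inst in instances:
        inst["version"] = max_version
    return max_version

batch = [{"version": 3}, {"version": 5}, {"version": 4}]
assert sync_versions(batch) == 5
assert all(inst["version"] == 5 for inst in batch)
```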
def _get_subset_name(self, family, subset_template, task_name, variant):
fill_pairs = {
"variant": variant,
@ -176,7 +202,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
"ext": ext[1:],
"files": files,
"stagingDir": task_dir,
"tags": tags
"tags": tags # configurable tags from Settings
}
self.log.info("sequences repre_data.data:: {}".format(repre_data))
return [repre_data]

View file

@ -203,19 +203,6 @@ from .path_tools import (
get_project_basic_paths,
)
from .editorial import (
is_overlapping_otio_ranges,
otio_range_to_frame_range,
otio_range_with_handles,
get_media_range_with_retimes,
convert_to_padded_path,
trim_media_range,
range_from_frames,
frames_to_secons,
frames_to_timecode,
make_sequence_collection
)
from .openpype_version import (
op_version_control_available,
get_openpype_version,
@ -383,16 +370,6 @@ __all__ = [
"validate_mongo_connection",
"OpenPypeMongoConnection",
"is_overlapping_otio_ranges",
"otio_range_with_handles",
"convert_to_padded_path",
"otio_range_to_frame_range",
"get_media_range_with_retimes",
"trim_media_range",
"range_from_frames",
"frames_to_secons",
"frames_to_timecode",
"make_sequence_collection",
"create_project_folders",
"create_workdir_extra_folders",
"get_project_basic_paths",

View file

@ -1,33 +0,0 @@
# -*- coding: utf-8 -*-
"""Content was moved to 'openpype.pipeline.publish.abstract_collect_render'.
Please change your imports as soon as possible.
This file will probably be removed in OpenPype 3.14.*
"""
import warnings
from openpype.pipeline.publish import AbstractCollectRender, RenderInstance
class CollectRenderDeprecated(DeprecationWarning):
pass
warnings.simplefilter("always", CollectRenderDeprecated)
warnings.warn(
(
"Content of 'abstract_collect_render' was moved."
"\nUsing deprecated source of 'abstract_collect_render'. Content was"
" moved to 'openpype.pipeline.publish.abstract_collect_render'."
" Please change your imports as soon as possible."
),
category=CollectRenderDeprecated,
stacklevel=4
)
__all__ = (
"AbstractCollectRender",
"RenderInstance"
)

View file

@ -1,32 +0,0 @@
# -*- coding: utf-8 -*-
"""Content was moved to 'openpype.pipeline.publish.abstract_expected_files'.
Please change your imports as soon as possible.
This file will probably be removed in OpenPype 3.14.*
"""
import warnings
from openpype.pipeline.publish import ExpectedFiles
class ExpectedFilesDeprecated(DeprecationWarning):
pass
warnings.simplefilter("always", ExpectedFilesDeprecated)
warnings.warn(
(
"Content of 'abstract_expected_files' was moved."
"\nUsing deprecated source of 'abstract_expected_files'. Content was"
" moved to 'openpype.pipeline.publish.abstract_expected_files'."
" Please change your imports as soon as possible."
),
category=ExpectedFilesDeprecated,
stacklevel=4
)
__all__ = (
"ExpectedFiles",
)

View file

@ -1,35 +0,0 @@
"""Content was moved to 'openpype.pipeline.publish.publish_plugins'.
Please change your imports as soon as possible.
This file will probably be removed in OpenPype 3.14.*
"""
import warnings
from openpype.pipeline.publish import (
AbstractMetaInstancePlugin,
AbstractMetaContextPlugin
)
class MetaPluginsDeprecated(DeprecationWarning):
pass
warnings.simplefilter("always", MetaPluginsDeprecated)
warnings.warn(
(
"Content of 'abstract_metaplugins' was moved."
"\nUsing deprecated source of 'abstract_metaplugins'. Content was"
" moved to 'openpype.pipeline.publish.publish_plugins'."
" Please change your imports as soon as possible."
),
category=MetaPluginsDeprecated,
stacklevel=4
)
__all__ = (
"AbstractMetaInstancePlugin",
"AbstractMetaContextPlugin",
)

View file

@ -1,41 +0,0 @@
import warnings
import functools
class ConfigDeprecatedWarning(DeprecationWarning):
pass
def deprecated(func):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
@functools.wraps(func)
def new_func(*args, **kwargs):
warnings.simplefilter("always", ConfigDeprecatedWarning)
warnings.warn(
(
"Deprecated import of function '{0}'."
" Function was moved to 'openpype.lib.dateutils.{0}'."
" Please change your imports."
).format(func.__name__),
category=ConfigDeprecatedWarning
)
return func(*args, **kwargs)
return new_func
@deprecated
def get_datetime_data(datetime_obj=None):
from .dateutils import get_datetime_data
return get_datetime_data(datetime_obj)
@deprecated
def get_formatted_current_time():
from .dateutils import get_formatted_current_time
return get_formatted_current_time()
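The forwarding pattern used throughout these deprecated shim modules (a decorator emits a `DeprecationWarning`, then calls the relocated function) can be exercised in isolation. The names below (`deprecated_alias`, `old_name`) are illustrative, not part of OpenPype:

```python
import functools
import warnings

def deprecated_alias(func):
    """Emit a DeprecationWarning, then forward to the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn(
            "'{}' was moved; please update your imports.".format(
                func.__name__),
            category=DeprecationWarning,
            stacklevel=2,
        )
        return func(*args, **kwargs)
    return wrapper

@deprecated_alias
def old_name(x):
    return x * 2

# The call still works, and a DeprecationWarning is emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert old_name(21) == 42
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

`stacklevel=2` points the warning at the caller's import site rather than at the wrapper itself, which is why the shims above set it too.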

View file

@ -1,102 +0,0 @@
"""Code related to editorial utility functions was moved
to 'openpype.pipeline.editorial'. Please change your imports as soon as
possible. This file will probably be removed in OpenPype 3.14.*
"""
import warnings
import functools
class EditorialDeprecatedWarning(DeprecationWarning):
pass
def editorial_deprecated(func):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
@functools.wraps(func)
def new_func(*args, **kwargs):
warnings.simplefilter("always", EditorialDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'."
" Function was moved to 'openpype.pipeline.editorial'."
).format(func.__name__),
category=EditorialDeprecatedWarning,
stacklevel=2
)
return func(*args, **kwargs)
return new_func
@editorial_deprecated
def otio_range_to_frame_range(*args, **kwargs):
from openpype.pipeline.editorial import otio_range_to_frame_range
return otio_range_to_frame_range(*args, **kwargs)
@editorial_deprecated
def otio_range_with_handles(*args, **kwargs):
from openpype.pipeline.editorial import otio_range_with_handles
return otio_range_with_handles(*args, **kwargs)
@editorial_deprecated
def is_overlapping_otio_ranges(*args, **kwargs):
from openpype.pipeline.editorial import is_overlapping_otio_ranges
return is_overlapping_otio_ranges(*args, **kwargs)
@editorial_deprecated
def convert_to_padded_path(*args, **kwargs):
from openpype.pipeline.editorial import convert_to_padded_path
return convert_to_padded_path(*args, **kwargs)
@editorial_deprecated
def trim_media_range(*args, **kwargs):
from openpype.pipeline.editorial import trim_media_range
return trim_media_range(*args, **kwargs)
@editorial_deprecated
def range_from_frames(*args, **kwargs):
from openpype.pipeline.editorial import range_from_frames
return range_from_frames(*args, **kwargs)
@editorial_deprecated
def frames_to_secons(*args, **kwargs):
from openpype.pipeline.editorial import frames_to_seconds
return frames_to_seconds(*args, **kwargs)
@editorial_deprecated
def frames_to_timecode(*args, **kwargs):
from openpype.pipeline.editorial import frames_to_timecode
return frames_to_timecode(*args, **kwargs)
@editorial_deprecated
def make_sequence_collection(*args, **kwargs):
from openpype.pipeline.editorial import make_sequence_collection
return make_sequence_collection(*args, **kwargs)
@editorial_deprecated
def get_media_range_with_retimes(*args, **kwargs):
from openpype.pipeline.editorial import get_media_range_with_retimes
return get_media_range_with_retimes(*args, **kwargs)

View file

@ -36,8 +36,19 @@ from openpype_modules.deadline import abstract_submit_deadline
from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
def _validate_deadline_bool_value(instance, attribute, value):
if not isinstance(value, (str, bool)):
raise TypeError(
"Attribute {} must be str or bool.".format(attribute))
if value not in {"1", "0", True, False}:
raise ValueError(
("Value of {} must be one of "
"'0', '1', True, False").format(attribute)
)
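The validator above accepts only Deadline-style booleans: the strings "0"/"1" or real bools. A dependency-free sketch of the same check, usable without `attrs` (the function name is illustrative):

```python
def check_deadline_bool(name, value):
    """Raise TypeError/ValueError unless value is '0', '1', or a bool."""
    if not isinstance(value, (str, bool)):
        raise TypeError(
            "Attribute {} must be str or bool.".format(name))
    if value not in {"1", "0", True, False}:
        raise ValueError(
            "Value of {} must be one of '0', '1', True, False".format(name)
        )
    return value

assert check_deadline_bool("RenderSetupIncludeLights", "1") == "1"
try:
    check_deadline_bool("RenderSetupIncludeLights", "yes")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError")
```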
@attr.s
class MayaPluginInfo:
class MayaPluginInfo(object):
SceneFile = attr.ib(default=None) # Input
OutputFilePath = attr.ib(default=None) # Output directory and filename
OutputFilePrefix = attr.ib(default=None)
@ -46,11 +57,13 @@ class MayaPluginInfo:
RenderLayer = attr.ib(default=None) # Render only this layer
Renderer = attr.ib(default=None)
ProjectPath = attr.ib(default=None) # Resolve relative references
RenderSetupIncludeLights = attr.ib(default=None) # Include all lights flag
# Include all lights flag
RenderSetupIncludeLights = attr.ib(
default="1", validator=_validate_deadline_bool_value)
@attr.s
class PythonPluginInfo:
class PythonPluginInfo(object):
ScriptFile = attr.ib()
Version = attr.ib(default="3.6")
Arguments = attr.ib(default=None)
@ -58,7 +71,7 @@ class PythonPluginInfo:
@attr.s
class VRayPluginInfo:
class VRayPluginInfo(object):
InputFilename = attr.ib(default=None) # Input
SeparateFilesPerFrame = attr.ib(default=None)
VRayEngine = attr.ib(default="V-Ray")
@ -69,7 +82,7 @@ class VRayPluginInfo:
@attr.s
class ArnoldPluginInfo:
class ArnoldPluginInfo(object):
ArnoldFile = attr.ib(default=None)
@ -185,12 +198,26 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
instance = self._instance
context = instance.context
# Set it to default Maya behaviour if it cannot be determined
# from instance (but it should be, by the Collector).
default_rs_include_lights = (
instance.context.data['project_settings']
['maya']
['RenderSettings']
['enable_all_lights']
)
rs_include_lights = instance.data.get(
"renderSetupIncludeLights", default_rs_include_lights)
if rs_include_lights not in {"1", "0", True, False}:
rs_include_lights = default_rs_include_lights
plugin_info = MayaPluginInfo(
SceneFile=self.scene_path,
Version=cmds.about(version=True),
RenderLayer=instance.data['setMembers'],
Renderer=instance.data["renderer"],
RenderSetupIncludeLights=instance.data.get("renderSetupIncludeLights"), # noqa
RenderSetupIncludeLights=rs_include_lights, # noqa
ProjectPath=context.data["workspaceDir"],
UsingRenderLayers=True,
)
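The fallback above resolves `RenderSetupIncludeLights` in three steps: take the per-instance value, fall back to the project-settings default, and discard anything that is not a Deadline-style boolean. A standalone sketch of that resolution (dicts stand in for the real instance/settings objects):

```python
def resolve_include_lights(instance_data, settings_default):
    """Instance value, else settings default; reject non-bool-like values."""
    value = instance_data.get(
        "renderSetupIncludeLights", settings_default)
    if value not in {"1", "0", True, False}:
        value = settings_default
    return value

assert resolve_include_lights({"renderSetupIncludeLights": "0"}, True) == "0"
assert resolve_include_lights({}, True) is True
assert resolve_include_lights({"renderSetupIncludeLights": "maybe"}, "1") == "1"
```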
@ -500,6 +527,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
plugin_info["Renderer"] = renderer
# This is needed because the Renderman plugin in Deadline
# handles directory and file prefixes separately.
plugin_info["OutputFilePath"] = job_info.OutputDirectory[0]
return job_info, plugin_info
def _get_vray_export_payload(self, data):
@ -731,10 +762,10 @@ def _format_tiles(
Example::
Image prefix is:
`maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>`
`<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>`
Result for tile 0 for 4x4 will be:
`maya/<Scene>/<RenderLayer>/_tile_1x1_4x4_<RenderLayer>_<RenderPass>`
`<Scene>/<RenderLayer>/_tile_1x1_4x4_<RenderLayer>_<RenderPass>`
Calculating coordinates is tricky as in Job they are defined as top,
left, bottom, right with zero being in top-left corner. But Assembler

View file

@ -18,7 +18,7 @@ from openpype_modules.ftrack.lib import (
tool_definitions_from_app_manager
)
from openpype.api import get_system_settings
from openpype.settings import get_system_settings
from openpype.lib import ApplicationManager
"""

View file

@ -1,7 +1,7 @@
import os
import ftrack_api
from openpype.api import get_project_settings
from openpype.settings import get_project_settings
from openpype.lib import PostLaunchHook

View file

@ -169,7 +169,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
thumbnail_item["thumbnail"] = True
# Create copy of item before setting location
if "delete" not in repre["tags"]:
if "delete" not in repre.get("tags", []):
src_components_to_add.append(copy.deepcopy(thumbnail_item))
# Create copy of first thumbnail
if first_thumbnail_component is None:
@ -284,7 +284,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
not_first_components.append(review_item)
# Create copy of item before setting location
if "delete" not in repre["tags"]:
if "delete" not in repre.get("tags", []):
src_components_to_add.append(copy.deepcopy(review_item))
# Set location

View file

@ -13,10 +13,9 @@ import functools
import itertools
import distutils.version
import hashlib
import tempfile
import appdirs
import threading
import atexit
import warnings
import requests
import requests.auth
@ -241,7 +240,7 @@ class Session(object):
)
self._auto_connect_event_hub_thread = None
if auto_connect_event_hub in (None, True):
if auto_connect_event_hub is True:
# Connect to event hub in background thread so as not to block main
# session usage waiting for event hub connection.
self._auto_connect_event_hub_thread = threading.Thread(
@ -252,9 +251,7 @@ class Session(object):
# To help with migration from auto_connect_event_hub default changing
# from True to False.
self._event_hub._deprecation_warning_auto_connect = (
auto_connect_event_hub is None
)
self._event_hub._deprecation_warning_auto_connect = False
# Register to auto-close session on exit.
atexit.register(WeakMethod(self.close))
@ -271,8 +268,9 @@ class Session(object):
# rebuilding types)?
if schema_cache_path is not False:
if schema_cache_path is None:
schema_cache_path = appdirs.user_cache_dir()
schema_cache_path = os.environ.get(
'FTRACK_API_SCHEMA_CACHE_PATH', tempfile.gettempdir()
'FTRACK_API_SCHEMA_CACHE_PATH', schema_cache_path
)
schema_cache_path = os.path.join(

View file

@ -43,7 +43,7 @@ import platform
import click
from openpype.modules import OpenPypeModule
from openpype.api import get_system_settings
from openpype.settings import get_system_settings
class JobQueueModule(OpenPypeModule):

View file

@ -12,7 +12,7 @@ from openpype.client import (
get_assets,
)
from openpype.pipeline import AvalonMongoDB
from openpype.api import get_project_settings
from openpype.settings import get_project_settings
from openpype.modules.kitsu.utils.credentials import validate_credentials

View file

@ -1,4 +1,4 @@
from openpype.api import get_system_settings, get_project_settings
from openpype.settings import get_system_settings, get_project_settings
from openpype.modules.shotgrid.lib.const import MODULE_NAME

View file

@ -30,7 +30,7 @@ from .workfile import (
from . import (
legacy_io,
register_loader_plugin_path,
register_inventory_action,
register_inventory_action_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
)
@ -197,7 +197,7 @@ def install_openpype_plugins(project_name=None, host_name=None):
pyblish.api.register_plugin_path(path)
register_loader_plugin_path(path)
register_creator_plugin_path(path)
register_inventory_action(path)
register_inventory_action_path(path)
def uninstall_host():

View file

@ -166,7 +166,10 @@ class AttributeValues(object):
return self._data.pop(key, default)
def reset_values(self):
self._data = []
self._data = {}
def mark_as_stored(self):
self._origin_data = copy.deepcopy(self._data)
@property
def attr_defs(self):
@ -303,6 +306,9 @@ class PublishAttributes:
for name in self._plugin_names_order:
yield name
def mark_as_stored(self):
self._origin_data = copy.deepcopy(self._data)
def data_to_store(self):
"""Convert attribute values to "data to store"."""
@ -647,6 +653,25 @@ class CreatedInstance:
changes[key] = (old_value, None)
return changes
def mark_as_stored(self):
"""Should be called when instance data are stored.
Origin data are replaced by current data so changes are cleared.
"""
orig_keys = set(self._orig_data.keys())
for key, value in self._data.items():
orig_keys.discard(key)
if key in ("creator_attributes", "publish_attributes"):
continue
self._orig_data[key] = copy.deepcopy(value)
for key in orig_keys:
self._orig_data.pop(key)
self.creator_attributes.mark_as_stored()
self.publish_attributes.mark_as_stored()
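`mark_as_stored` above snapshots the current data as the new origin so the computed change set becomes empty. The underlying pattern, in miniature (an illustrative class, not the OpenPype `CreatedInstance`):

```python
import copy

class Tracked:
    """Track changes against a deep-copied origin snapshot."""

    def __init__(self, data):
        self._data = dict(data)
        self._orig = copy.deepcopy(self._data)

    def changes(self):
        """Keys whose value differs from origin, as (old, new) pairs."""
        return {
            key: (self._orig.get(key), value)
            for key, value in self._data.items()
            if self._orig.get(key) != value
        }

    def mark_as_stored(self):
        """Replace origin with current data, clearing pending changes."""
        self._orig = copy.deepcopy(self._data)

t = Tracked({"subset": "modelMain"})
t._data["subset"] = "modelHero"
assert t.changes() == {"subset": ("modelMain", "modelHero")}
t.mark_as_stored()
assert t.changes() == {}
```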
@property
def creator_attributes(self):
return self._data["creator_attributes"]
@ -660,6 +685,18 @@ class CreatedInstance:
return self._data["publish_attributes"]
def data_to_store(self):
"""Collect data that contain json parsable types.
It is possible to recreate the instance using these data.
Todo:
We probably don't need OrderedDict. When data are loaded they
are not ordered anymore.
Returns:
OrderedDict: Ordered dictionary with instance data.
"""
output = collections.OrderedDict()
for key, value in self._data.items():
if key in ("creator_attributes", "publish_attributes"):

View file

@ -246,7 +246,7 @@ class BaseCreator:
return self.icon
def get_dynamic_data(
self, variant, task_name, asset_doc, project_name, host_name
self, variant, task_name, asset_doc, project_name, host_name, instance
):
"""Dynamic data for subset name filling.
@ -257,7 +257,13 @@ class BaseCreator:
return {}
def get_subset_name(
self, variant, task_name, asset_doc, project_name, host_name=None
self,
variant,
task_name,
asset_doc,
project_name,
host_name=None,
instance=None
):
"""Return subset name for passed context.
@ -271,16 +277,21 @@ class BaseCreator:
Asset document is not used yet but is required if would like to use
task type in subset templates.
The method is also called on subset name update. In that case the
origin instance is passed in.
Args:
variant(str): Subset name variant. In most cases user input.
task_name(str): For which task subset is created.
asset_doc(dict): Asset document for which subset is created.
project_name(str): Project name.
host_name(str): Which host creates subset.
instance(CreatedInstance|None): Object of 'CreatedInstance' for which
the subset name is updated. Passed only on subset name update.
"""
dynamic_data = self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
variant, task_name, asset_doc, project_name, host_name, instance
)
return get_subset_name(

View file

@ -9,7 +9,9 @@ import os
import logging
import collections
from openpype.lib import get_subset_name
from openpype.client import get_asset_by_id
from .subset_name import get_subset_name
class LegacyCreator(object):
@ -147,11 +149,15 @@ class LegacyCreator(object):
variant, task_name, asset_id, project_name, host_name
)
asset_doc = get_asset_by_id(
project_name, asset_id, fields=["data.tasks"]
)
return get_subset_name(
cls.family,
variant,
task_name,
asset_id,
asset_doc,
project_name,
host_name,
dynamic_data=dynamic_data

View file

@ -265,6 +265,10 @@ def get_last_workfile_with_version(
if not match:
continue
if not match.groups():
output_filenames.append(filename)
continue
file_version = int(match.group(1))
if version is None or file_version > version:
output_filenames[:] = []
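The hunk above keeps filenames whose regex has no version group alongside those carrying the highest captured version, resetting the output list whenever a higher version appears. A condensed, self-contained sketch of that selection (the regex and filenames here are made up):

```python
import re

def last_workfiles(filenames, pattern):
    """Return (highest version, matching filenames) for the pattern."""
    best = None
    output = []
    for filename in filenames:
        match = re.match(pattern, filename)
        if not match:
            continue
        if not match.groups():
            # Pattern has no version group; keep the filename as-is.
            output.append(filename)
            continue
        file_version = int(match.group(1))
        if best is None or file_version > best:
            output[:] = []  # a higher version invalidates earlier picks
            best = file_version
        if file_version == best:
            output.append(filename)
    return best, output

version, files = last_workfiles(
    ["shot_v001.ma", "shot_v003.ma", "shot_v002.ma"],
    r"shot_v(\d+)\.ma",
)
assert version == 3 and files == ["shot_v003.ma"]
```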

View file

@ -1,5 +1,8 @@
from pyblish import api
from openpype.api import get_current_project_settings, get_system_settings
from openpype.settings import (
get_current_project_settings,
get_system_settings,
)
class CollectSettings(api.ContextPlugin):

View file

@ -418,6 +418,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
subset_group = instance.data.get("subsetGroup")
if subset_group:
data["subsetGroup"] = subset_group
elif existing_subset_doc:
# Preserve previous subset group if new version does not set it
if "subsetGroup" in existing_subset_doc.get("data", {}):
subset_group = existing_subset_doc["data"]["subsetGroup"]
data["subsetGroup"] = subset_group
subset_id = None
if existing_subset_doc:

View file

@ -4,8 +4,6 @@ import clique
import errno
import shutil
from bson.objectid import ObjectId
from pymongo import InsertOne, ReplaceOne
import pyblish.api
from openpype.client import (
@ -14,10 +12,15 @@ from openpype.client import (
get_archived_representations,
get_representations,
)
from openpype.client.operations import (
OperationsSession,
new_hero_version_doc,
prepare_hero_version_update_data,
prepare_representation_update_data,
)
from openpype.lib import create_hard_link
from openpype.pipeline import (
schema,
legacy_io,
schema
)
from openpype.pipeline.publish import get_publish_template_name
@ -187,35 +190,32 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
repre["name"].lower(): repre for repre in old_repres
}
op_session = OperationsSession()
entity_id = None
if old_version:
new_version_id = old_version["_id"]
else:
new_version_id = ObjectId()
new_hero_version = {
"_id": new_version_id,
"version_id": src_version_entity["_id"],
"parent": src_version_entity["parent"],
"type": "hero_version",
"schema": "openpype:hero_version-1.0"
}
schema.validate(new_hero_version)
# Don't make changes in database until everything is O.K.
bulk_writes = []
entity_id = old_version["_id"]
new_hero_version = new_hero_version_doc(
src_version_entity["_id"],
src_version_entity["parent"],
entity_id=entity_id
)
if old_version:
self.log.debug("Replacing old hero version.")
bulk_writes.append(
ReplaceOne(
{"_id": new_hero_version["_id"]},
new_hero_version
)
update_data = prepare_hero_version_update_data(
old_version, new_hero_version
)
op_session.update_entity(
project_name,
new_hero_version["type"],
old_version["_id"],
update_data
)
else:
self.log.debug("Creating first hero version.")
bulk_writes.append(
InsertOne(new_hero_version)
op_session.create_entity(
project_name, new_hero_version["type"], new_hero_version
)
# Separate old representations into `to replace` and `to delete`
@ -235,7 +235,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repres = list(get_archived_representations(
project_name,
# Check what is type of archived representation
version_ids=[new_version_id]
version_ids=[new_hero_version["_id"]]
))
archived_repres_by_name = {}
for repre in archived_repres:
@ -382,12 +382,15 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
# Replace current representation
if repre_name_low in old_repres_to_replace:
old_repre = old_repres_to_replace.pop(repre_name_low)
repre["_id"] = old_repre["_id"]
bulk_writes.append(
ReplaceOne(
{"_id": old_repre["_id"]},
repre
)
update_data = prepare_representation_update_data(
old_repre, repre)
op_session.update_entity(
project_name,
old_repre["type"],
old_repre["_id"],
update_data
)
# Unarchive representation
@ -395,21 +398,21 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repre = archived_repres_by_name.pop(
repre_name_low
)
old_id = archived_repre["old_id"]
repre["_id"] = old_id
bulk_writes.append(
ReplaceOne(
{"old_id": old_id},
repre
)
repre["_id"] = archived_repre["old_id"]
update_data = prepare_representation_update_data(
archived_repre, repre)
op_session.update_entity(
project_name,
archived_repre["type"],
archived_repre["_id"],
update_data
)
# Create representation
else:
repre["_id"] = ObjectId()
bulk_writes.append(
InsertOne(repre)
)
repre.pop("_id", None)
op_session.create_entity(project_name, "representation",
repre)
self.path_checks = []
@ -430,28 +433,22 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repre = archived_repres_by_name.pop(
repre_name_low
)
repre["old_id"] = repre["_id"]
repre["_id"] = archived_repre["_id"]
repre["type"] = archived_repre["type"]
bulk_writes.append(
ReplaceOne(
{"_id": archived_repre["_id"]},
repre
)
)
changes = {"old_id": repre["_id"],
"_id": archived_repre["_id"],
"type": archived_repre["type"]}
op_session.update_entity(project_name,
archived_repre["type"],
archived_repre["_id"],
changes)
else:
repre["old_id"] = repre["_id"]
repre["_id"] = ObjectId()
repre["old_id"] = repre.pop("_id")
repre["type"] = "archived_representation"
bulk_writes.append(
InsertOne(repre)
)
op_session.create_entity(project_name,
"archived_representation",
repre)
if bulk_writes:
legacy_io.database[project_name].bulk_write(
bulk_writes
)
op_session.commit()
# Remove backed up previous hero
if (
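The hunks above replace direct pymongo bulk writes (`ReplaceOne`/`InsertOne`) with an `OperationsSession` that queues entity operations and flushes them on `commit()`. A minimal in-memory stand-in (hypothetical, not the real `openpype.client.operations` API) sketches that queue-then-commit pattern:

```python
import uuid


class OperationsSession:
    """Toy stand-in: queue entity operations, apply them on commit()."""

    def __init__(self, database):
        # database: {project_name: {entity_id: document}}
        self._database = database
        self._queue = []

    def update_entity(self, project_name, entity_type, entity_id, update_data):
        self._queue.append(("update", project_name, entity_id, update_data))

    def create_entity(self, project_name, entity_type, entity_doc):
        entity_doc.setdefault("_id", uuid.uuid4().hex)
        self._queue.append(("create", project_name, entity_doc["_id"], entity_doc))

    def commit(self):
        # Nothing touches the database until commit, mirroring the single
        # bulk_write call the diff removes.
        for op, project_name, entity_id, data in self._queue:
            project = self._database.setdefault(project_name, {})
            if op == "create":
                project[entity_id] = data
            else:
                project[entity_id].update(data)
        self._queue = []
```

Queuing keeps the integrator's control flow unchanged while deferring all writes to one flush point, which is what made the `bulk_writes` list straightforward to replace.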
@@ -1,3 +1,13 @@
""" Integrate Thumbnails for OpenPype use in Loaders.
This thumbnail is different from the 'thumbnail' representation, which could
be uploaded to Ftrack, or used as any other representation in Loaders to
pull into a scene.
This one is used only as an image describing the content of a published
item and shows up only in the Loader's right column section.
"""
import os
import sys
import errno
@@ -12,7 +22,7 @@ from openpype.client.operations import OperationsSession, new_thumbnail_doc
class IntegrateThumbnails(pyblish.api.InstancePlugin):
"""Integrate Thumbnails."""
"""Integrate Thumbnails for OpenPype use in Loaders."""
label = "Integrate Thumbnails"
order = pyblish.api.IntegratorOrder + 0.01
@@ -0,0 +1,72 @@
""" Marks thumbnail representation for integration to DB or not.
Some hosts produce a thumbnail representation; most do not create it
explicitly, but it is created during the extract phase.
In some cases it might be useful to override this implicit setting per
host/task.
This plugin needs to run after the extract phase, but before integrate.py,
as the thumbnail is part of the review family and is integrated there.
It is better to control thumbnail integration in one place than to
configure it in multiple places in host implementations.
"""
import pyblish.api
from openpype.lib.profiles_filtering import filter_profiles
class PreIntegrateThumbnails(pyblish.api.InstancePlugin):
"""Marks thumbnail representation for integration to DB or not."""
label = "Override Integrate Thumbnail Representations"
order = pyblish.api.IntegratorOrder - 0.1
families = ["review"]
integrate_profiles = {}
def process(self, instance):
repres = instance.data.get("representations")
if not repres:
return
thumbnail_repre = None
for repre in repres:
if repre["name"] == "thumbnail":
thumbnail_repre = repre
break
if not thumbnail_repre:
return
family = instance.data["family"]
subset_name = instance.data["subset"]
host_name = instance.context.data["hostName"]
anatomy_data = instance.data["anatomyData"]
task = anatomy_data.get("task", {})
found_profile = filter_profiles(
self.integrate_profiles,
{
"hosts": host_name,
"task_names": task.get("name"),
"task_types": task.get("type"),
"families": family,
"subsets": subset_name,
},
logger=self.log
)
if not found_profile:
return
if not found_profile["integrate_thumbnail"]:
if "delete" not in thumbnail_repre["tags"]:
thumbnail_repre["tags"].append("delete")
else:
if "delete" in thumbnail_repre["tags"]:
thumbnail_repre["tags"].remove("delete")
self.log.debug(
"Thumbnail repre tags {}".format(thumbnail_repre["tags"]))
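`filter_profiles` above picks the settings profile matching the publish context. A simplified matcher (hypothetical; the real `openpype.lib.profiles_filtering` also scores specificity and supports patterns), together with the tag toggle the plugin performs:

```python
def filter_profiles(profiles, key_values):
    """Return the first profile whose non-empty filters all match."""
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in key_values.items()
        ):
            return profile
    return None


def mark_thumbnail(thumbnail_repre, profile):
    """Add or remove the 'delete' tag based on the matched profile."""
    tags = thumbnail_repre.setdefault("tags", [])
    if not profile["integrate_thumbnail"]:
        if "delete" not in tags:
            tags.append("delete")
    elif "delete" in tags:
        tags.remove("delete")
```

An empty filter list (e.g. `"families": []`) matches everything, which is why a profile with all filters empty acts as a catch-all default.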
@@ -10,7 +10,8 @@ class ValidateVersion(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Version"
hosts = ["nuke", "maya", "houdini", "blender", "standalonepublisher"]
hosts = ["nuke", "maya", "houdini", "blender", "standalonepublisher",
"photoshop", "aftereffects"]
optional = False
active = True
@@ -1,4 +1,23 @@
{
"imageio": {
"project": {
"colourPolicy": "ACES 1.1",
"frameDepth": "16-bit fp",
"fieldDominance": "PROGRESSIVE"
},
"profilesMapping": {
"inputs": [
{
"flameName": "ACEScg",
"ocioName": "ACES - ACEScg"
},
{
"flameName": "Rec.709 video",
"ocioName": "Output - Rec.709"
}
]
}
},
"create": {
"CreateShotClip": {
"hierarchy": "{folder}/{sequence}",
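The `profilesMapping.inputs` list above pairs a Flame colour profile name with its OCIO equivalent. Resolving one from the other is a plain lookup; a sketch using the two entries from the settings (helper name is hypothetical):

```python
PROFILES_MAPPING = [
    {"flameName": "ACEScg", "ocioName": "ACES - ACEScg"},
    {"flameName": "Rec.709 video", "ocioName": "Output - Rec.709"},
]


def ocio_name_for(flame_name, mapping=PROFILES_MAPPING):
    """Return the OCIO colorspace name for a Flame profile name, or None."""
    for item in mapping:
        if item["flameName"] == flame_name:
            return item["ocioName"]
    return None
```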
@@ -164,6 +164,10 @@
}
]
},
"PreIntegrateThumbnails": {
"enabled": true,
"integrate_profiles": []
},
"IntegrateSubsetGroup": {
"subset_grouping_profiles": [
{
@@ -1,4 +1,29 @@
{
"imageio": {
"workfile": {
"ocioConfigName": "nuke-default",
"ocioconfigpath": {
"windows": [],
"darwin": [],
"linux": []
},
"workingSpace": "linear",
"sixteenBitLut": "sRGB",
"eightBitLut": "sRGB",
"floatLut": "linear",
"logLut": "Cineon",
"viewerLut": "sRGB",
"thumbnailLut": "sRGB"
},
"regexInputs": {
"inputs": [
{
"regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)",
"colorspace": "sRGB"
}
]
}
},
"create": {
"CreateShotClip": {
"hierarchy": "{folder}/{sequence}",
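The `regexInputs` rules above assign a colorspace to footage whose path matches a regex, e.g. the `plateRef` rule for mp4 sources. A minimal matcher sketch using the rule shown in the settings (helper name is hypothetical):

```python
import re

REGEX_INPUTS = [
    {"regex": r"[^-a-zA-Z0-9](plateRef).*(?=mp4)", "colorspace": "sRGB"},
]


def colorspace_for(path, rules=REGEX_INPUTS):
    """Return the colorspace of the first rule whose regex matches the path."""
    for rule in rules:
        if re.search(rule["regex"], path):
            return rule["colorspace"]
    return None
```

The first matching rule wins, so more specific patterns should come earlier in the list.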
@@ -1,5 +1,27 @@
{
"mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr \"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\n",
"imageio": {
"colorManagementPreference_v2": {
"enabled": true,
"configFilePath": {
"windows": [],
"darwin": [],
"linux": []
},
"renderSpace": "ACEScg",
"displayName": "sRGB",
"viewName": "ACES 1.0 SDR-video"
},
"colorManagementPreference": {
"configFilePath": {
"windows": [],
"darwin": [],
"linux": []
},
"renderSpace": "scene-linear Rec 709/sRGB",
"viewTransform": "sRGB gamma"
}
},
"mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders/maya\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr \"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\n",
"ext_mapping": {
"model": "ma",
"mayaAscii": "ma",
@@ -34,12 +56,12 @@
},
"RenderSettings": {
"apply_render_settings": true,
"default_render_image_folder": "renders",
"enable_all_lights": false,
"default_render_image_folder": "renders/maya",
"enable_all_lights": true,
"aov_separator": "underscore",
"reset_current_frame": false,
"arnold_renderer": {
"image_prefix": "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>",
"image_prefix": "<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>",
"image_format": "exr",
"multilayer_exr": true,
"tiled": true,
@@ -47,14 +69,14 @@
"additional_options": []
},
"vray_renderer": {
"image_prefix": "maya/<scene>/<Layer>/<Layer>",
"image_prefix": "<scene>/<Layer>/<Layer>",
"engine": "1",
"image_format": "exr",
"aov_list": [],
"additional_options": []
},
"redshift_renderer": {
"image_prefix": "maya/<Scene>/<RenderLayer>/<RenderLayer>",
"image_prefix": "<Scene>/<RenderLayer>/<RenderLayer>",
"primary_gi_engine": "0",
"secondary_gi_engine": "0",
"image_format": "exr",
@@ -8,6 +8,197 @@
"build_workfile": "ctrl+alt+b"
}
},
"imageio": {
"enabled": false,
"viewer": {
"viewerProcess": "sRGB"
},
"baking": {
"viewerProcess": "rec709"
},
"workfile": {
"colorManagement": "Nuke",
"OCIO_config": "nuke-default",
"customOCIOConfigPath": {
"windows": [],
"darwin": [],
"linux": []
},
"workingSpaceLUT": "linear",
"monitorLut": "sRGB",
"int8Lut": "sRGB",
"int16Lut": "sRGB",
"logLut": "Cineon",
"floatLut": "linear"
},
"nodes": {
"requiredNodes": [
{
"plugins": [
"CreateWriteRender"
],
"nukeNodeClass": "Write",
"knobs": [
{
"type": "text",
"name": "file_type",
"value": "exr"
},
{
"type": "text",
"name": "datatype",
"value": "16 bit half"
},
{
"type": "text",
"name": "compression",
"value": "Zip (1 scanline)"
},
{
"type": "bool",
"name": "autocrop",
"value": true
},
{
"type": "color_gui",
"name": "tile_color",
"value": [
186,
35,
35,
255
]
},
{
"type": "text",
"name": "channels",
"value": "rgb"
},
{
"type": "text",
"name": "colorspace",
"value": "linear"
},
{
"type": "bool",
"name": "create_directories",
"value": true
}
]
},
{
"plugins": [
"CreateWritePrerender"
],
"nukeNodeClass": "Write",
"knobs": [
{
"type": "text",
"name": "file_type",
"value": "exr"
},
{
"type": "text",
"name": "datatype",
"value": "16 bit half"
},
{
"type": "text",
"name": "compression",
"value": "Zip (1 scanline)"
},
{
"type": "bool",
"name": "autocrop",
"value": true
},
{
"type": "color_gui",
"name": "tile_color",
"value": [
171,
171,
10,
255
]
},
{
"type": "text",
"name": "channels",
"value": "rgb"
},
{
"type": "text",
"name": "colorspace",
"value": "linear"
},
{
"type": "bool",
"name": "create_directories",
"value": true
}
]
},
{
"plugins": [
"CreateWriteStill"
],
"nukeNodeClass": "Write",
"knobs": [
{
"type": "text",
"name": "file_type",
"value": "tiff"
},
{
"type": "text",
"name": "datatype",
"value": "16 bit"
},
{
"type": "text",
"name": "compression",
"value": "Deflate"
},
{
"type": "color_gui",
"name": "tile_color",
"value": [
56,
162,
7,
255
]
},
{
"type": "text",
"name": "channels",
"value": "rgb"
},
{
"type": "text",
"name": "colorspace",
"value": "sRGB"
},
{
"type": "bool",
"name": "create_directories",
"value": true
}
]
}
],
"overrideNodes": []
},
"regexInputs": {
"inputs": [
{
"regex": "(beauty).*(?=.exr)",
"colorspace": "linear"
}
]
}
},
"nuke-dirmap": {
"enabled": false,
"paths": {
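Each `requiredNodes` entry above pairs a Nuke node class with a list of typed knob presets; in Nuke itself these would be applied with `node[name].setValue(value)`. A host-agnostic sketch of the applier (the `node` mapping stands in for a real Nuke node):

```python
def apply_knobs(node, knobs):
    """Apply typed knob presets (as in the requiredNodes settings) to a node."""
    for knob in knobs:
        value = knob["value"]
        if knob["type"] == "bool":
            value = bool(value)
        elif knob["type"] == "color_gui":
            # RGBA components in 0-255 range, as stored in the settings
            value = tuple(value)
        node[knob["name"]] = value
    return node
```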
@@ -1,5 +1,6 @@
{
"level_sequences_for_layouts": false,
"delete_unmatched_assets": false,
"project_setup": {
"dev_mode": true
}
@@ -10,6 +10,7 @@
],
"publish": {
"CollectPublishedFiles": {
"sync_next_version": false,
"task_type_to_family": {
"Animation": [
{

Some files were not shown because too many files have changed in this diff.