diff --git a/CHANGELOG.md b/CHANGELOG.md
index f868e6ed6e..dca0e7ecef 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,41 +1,90 @@
# Changelog
-## [3.14.3-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.14.4-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.3...HEAD)
**🚀 Enhancements**
+- General: Set root environments before DCC launch [\#3947](https://github.com/pypeclub/OpenPype/pull/3947)
+- Refactor: changed legacy way to update database for Hero version integrate [\#3941](https://github.com/pypeclub/OpenPype/pull/3941)
+- Maya: Moved plugin from global to maya [\#3939](https://github.com/pypeclub/OpenPype/pull/3939)
+- Fusion: Implement Alembic and FBX mesh loader [\#3927](https://github.com/pypeclub/OpenPype/pull/3927)
+- Publisher: Instances can be marked as stored [\#3846](https://github.com/pypeclub/OpenPype/pull/3846)
+
+**🐛 Bug fixes**
+
+- Maya: Deadline OutputFilePath hack regression for Renderman [\#3950](https://github.com/pypeclub/OpenPype/pull/3950)
+- Houdini: Fix validate workfile paths for non-parm file references [\#3948](https://github.com/pypeclub/OpenPype/pull/3948)
+- Photoshop: missed sync published version of workfile with workfile [\#3946](https://github.com/pypeclub/OpenPype/pull/3946)
+- Maya: fix regression of Renderman Deadline hack [\#3943](https://github.com/pypeclub/OpenPype/pull/3943)
+- Tray: Change order of attribute changes [\#3938](https://github.com/pypeclub/OpenPype/pull/3938)
+- AttributeDefs: Fix crashing multivalue of files widget [\#3937](https://github.com/pypeclub/OpenPype/pull/3937)
+- General: Fix links query on hero version [\#3900](https://github.com/pypeclub/OpenPype/pull/3900)
+- Publisher: Files Drag n Drop cleanup [\#3888](https://github.com/pypeclub/OpenPype/pull/3888)
+- Maya: Render settings validation attribute check tweak logging [\#3821](https://github.com/pypeclub/OpenPype/pull/3821)
+
+**🔀 Refactored code**
+
+- General: Direct settings imports [\#3934](https://github.com/pypeclub/OpenPype/pull/3934)
+- General: import 'Logger' from 'openpype.lib' [\#3926](https://github.com/pypeclub/OpenPype/pull/3926)
+
+**Merged pull requests:**
+
+- Maya + Yeti: Load Yeti Cache fix frame number recognition [\#3942](https://github.com/pypeclub/OpenPype/pull/3942)
+- Fusion: Implement callbacks to Fusion's event system thread [\#3928](https://github.com/pypeclub/OpenPype/pull/3928)
+- Photoshop: create single frame image in Ftrack as review [\#3908](https://github.com/pypeclub/OpenPype/pull/3908)
+- Maya: Warn correctly about nodes in render instance with unexpected names [\#3816](https://github.com/pypeclub/OpenPype/pull/3816)
+
+## [3.14.3](https://github.com/pypeclub/OpenPype/tree/3.14.3) (2022-10-03)
+
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.3-nightly.7...3.14.3)
+
+**🚀 Enhancements**
+
+- Publisher: Enhancement proposals [\#3897](https://github.com/pypeclub/OpenPype/pull/3897)
- Maya: better logging in Maketx [\#3886](https://github.com/pypeclub/OpenPype/pull/3886)
+- Photoshop: review can be turned off [\#3885](https://github.com/pypeclub/OpenPype/pull/3885)
- TrayPublisher: added persisting of last selected project [\#3871](https://github.com/pypeclub/OpenPype/pull/3871)
- TrayPublisher: added text filter on project name to Tray Publisher [\#3867](https://github.com/pypeclub/OpenPype/pull/3867)
- Github issues adding `running version` section [\#3864](https://github.com/pypeclub/OpenPype/pull/3864)
- Publisher: Increase size of main window [\#3862](https://github.com/pypeclub/OpenPype/pull/3862)
+- Flame: make migratable projects after creation [\#3860](https://github.com/pypeclub/OpenPype/pull/3860)
- Photoshop: synchronize image version with workfile [\#3854](https://github.com/pypeclub/OpenPype/pull/3854)
+- General: Transcoding handle float2 attr type [\#3849](https://github.com/pypeclub/OpenPype/pull/3849)
- General: Simple script for getting license information about used packages [\#3843](https://github.com/pypeclub/OpenPype/pull/3843)
-- Houdini: Increment current file on workfile publish [\#3840](https://github.com/pypeclub/OpenPype/pull/3840)
-- Publisher: Add new publisher to host tools [\#3833](https://github.com/pypeclub/OpenPype/pull/3833)
+- General: Workfile template build enhancements [\#3838](https://github.com/pypeclub/OpenPype/pull/3838)
- General: lock task workfiles when they are working on [\#3810](https://github.com/pypeclub/OpenPype/pull/3810)
-- Maya: Workspace mel loaded from settings [\#3790](https://github.com/pypeclub/OpenPype/pull/3790)
**🐛 Bug fixes**
+- Maya: Fix Render single camera validator [\#3929](https://github.com/pypeclub/OpenPype/pull/3929)
+- Flame: loading multilayer exr to batch/reel is working [\#3901](https://github.com/pypeclub/OpenPype/pull/3901)
+- Hiero: Fix inventory check on launch [\#3895](https://github.com/pypeclub/OpenPype/pull/3895)
+- WebPublisher: Fix import after refactor [\#3891](https://github.com/pypeclub/OpenPype/pull/3891)
+- TVPaint: Fix renaming of rendered files [\#3882](https://github.com/pypeclub/OpenPype/pull/3882)
+- Publisher: Nice checkbox visible in Python 2 [\#3877](https://github.com/pypeclub/OpenPype/pull/3877)
- Settings: Add missing default settings [\#3870](https://github.com/pypeclub/OpenPype/pull/3870)
- General: Copy of workfile does not use 'copy' function but 'copyfile' [\#3869](https://github.com/pypeclub/OpenPype/pull/3869)
- Tray Publisher: skip plugin if otioTimeline is missing [\#3856](https://github.com/pypeclub/OpenPype/pull/3856)
+- Flame: retimed attributes are integrated with settings [\#3855](https://github.com/pypeclub/OpenPype/pull/3855)
- Maya: Extract Playblast fix textures + labelize viewport show settings [\#3852](https://github.com/pypeclub/OpenPype/pull/3852)
-- Ftrack: Url validation does not require ftrackapp [\#3834](https://github.com/pypeclub/OpenPype/pull/3834)
-- Maya+Ftrack: Change typo in family name `mayaascii` -\> `mayaAscii` [\#3820](https://github.com/pypeclub/OpenPype/pull/3820)
-- Maya Deadline: Fix Tile Rendering by forcing integer pixel values [\#3758](https://github.com/pypeclub/OpenPype/pull/3758)
**🔀 Refactored code**
+- Maya: Remove unused 'openpype.api' imports in plugins [\#3925](https://github.com/pypeclub/OpenPype/pull/3925)
+- Resolve: Use new Extractor location [\#3918](https://github.com/pypeclub/OpenPype/pull/3918)
+- Unreal: Use new Extractor location [\#3917](https://github.com/pypeclub/OpenPype/pull/3917)
+- Flame: Use new Extractor location [\#3916](https://github.com/pypeclub/OpenPype/pull/3916)
+- Houdini: Use new Extractor location [\#3894](https://github.com/pypeclub/OpenPype/pull/3894)
+- Harmony: Use new Extractor location [\#3893](https://github.com/pypeclub/OpenPype/pull/3893)
- Hiero: Use new Extractor location [\#3851](https://github.com/pypeclub/OpenPype/pull/3851)
- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819)
- Nuke: Use new Extractor location [\#3799](https://github.com/pypeclub/OpenPype/pull/3799)
**Merged pull requests:**
+- Maya: Fix Scene Inventory possibly starting off-screen due to maya preferences [\#3923](https://github.com/pypeclub/OpenPype/pull/3923)
- Maya: RenderSettings set default image format for V-Ray+Redshift to exr [\#3879](https://github.com/pypeclub/OpenPype/pull/3879)
- Remove lockfile during publish [\#3874](https://github.com/pypeclub/OpenPype/pull/3874)
@@ -43,18 +92,12 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.2-nightly.5...3.14.2)
-**🆕 New features**
-
-- Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763)
-
**🚀 Enhancements**
- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826)
- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
-- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
-- Kitsu: Drop 'entities root' setting. [\#3739](https://github.com/pypeclub/OpenPype/pull/3739)
**🐛 Bug fixes**
@@ -64,65 +107,11 @@
- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
-- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
-- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
-- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
-- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
-
-**🔀 Refactored code**
-
-- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
-- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
-- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
-- Maya: Use new Extractor location [\#3775](https://github.com/pypeclub/OpenPype/pull/3775)
-- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
-- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
-- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
-- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
-- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
-- Maya: Refactor submit deadline to use AbstractSubmitDeadline [\#3759](https://github.com/pypeclub/OpenPype/pull/3759)
-- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755)
-- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
-- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
-- Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733)
-- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
-- Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727)
-
-**Merged pull requests:**
-
-- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
-- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1)
-**🚀 Enhancements**
-
-- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
-
-**🐛 Bug fixes**
-
-- Maya: Fix typo in getPanel argument `with\_focus` -\> `withFocus` [\#3753](https://github.com/pypeclub/OpenPype/pull/3753)
-- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748)
-- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
-- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737)
-
-**🔀 Refactored code**
-
-- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751)
-- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
-- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
-- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
-- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
-- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
-- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
-- AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730)
-- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729)
-- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
-- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
-- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
-
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
diff --git a/openpype/client/entity_links.py b/openpype/client/entity_links.py
index 66214f469c..e42ac58aff 100644
--- a/openpype/client/entity_links.py
+++ b/openpype/client/entity_links.py
@@ -2,6 +2,7 @@ from .mongo import get_project_connection
from .entities import (
get_assets,
get_asset_by_id,
+ get_version_by_id,
get_representation_by_id,
convert_id,
)
@@ -127,12 +128,20 @@ def get_linked_representation_id(
if not version_id:
return []
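+    # Hero versions are only mirrors of an existing version and do not carry
+    # links themselves, so resolve them to the source version they point to.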
+ version_doc = get_version_by_id(
+ project_name, version_id, fields=["type", "version_id"]
+ )
+    if version_doc and version_doc["type"] == "hero_version":
+ version_id = version_doc["version_id"]
+
if max_depth is None:
max_depth = 0
match = {
"_id": version_id,
- "type": {"$in": ["version", "hero_version"]}
+        # Links are not stored on hero versions at this moment, so the filter
+        # is limited to versions only
+ "type": "version"
}
graph_lookup = {
@@ -187,7 +196,7 @@ def _process_referenced_pipeline_result(result, link_type):
referenced_version_ids = set()
correctly_linked_ids = set()
for item in result:
- input_links = item["data"].get("inputLinks")
+ input_links = item.get("data", {}).get("inputLinks")
if not input_links:
continue
@@ -203,7 +212,7 @@ def _process_referenced_pipeline_result(result, link_type):
continue
for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
- output_links = output["data"].get("inputLinks")
+ output_links = output.get("data", {}).get("inputLinks")
if not output_links:
continue
diff --git a/openpype/client/operations.py b/openpype/client/operations.py
index 48e8645726..fd639c34a7 100644
--- a/openpype/client/operations.py
+++ b/openpype/client/operations.py
@@ -23,6 +23,7 @@ CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
+CURRENT_HERO_VERSION_SCHEMA = "openpype:hero_version-1.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0"
@@ -162,6 +163,34 @@ def new_version_doc(version, subset_id, data=None, entity_id=None):
}
+def new_hero_version_doc(version_id, subset_id, data=None, entity_id=None):
+ """Create skeleton data of hero version document.
+
+ Args:
+        version_id (ObjectId): Id of the source version document the hero
+            version points to.
+ subset_id (Union[str, ObjectId]): Id of parent subset.
+ data (Dict[str, Any]): Version document data.
+ entity_id (Union[str, ObjectId]): Predefined id of document. New id is
+ created if not passed.
+
+ Returns:
+        Dict[str, Any]: Skeleton of hero version document.
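+
+    Example:
+        A minimal sketch; the ids below are freshly generated and purely
+        illustrative:
+
+        >>> from bson.objectid import ObjectId
+        >>> hero_doc = new_hero_version_doc(ObjectId(), ObjectId())
+        >>> hero_doc["type"]
+        'hero_version'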
+ """
+
+ if data is None:
+ data = {}
+
+ return {
+ "_id": _create_or_convert_to_mongo_id(entity_id),
+ "schema": CURRENT_HERO_VERSION_SCHEMA,
+ "type": "hero_version",
+ "version_id": version_id,
+ "parent": subset_id,
+ "data": data
+ }
+
+
def new_representation_doc(
name, version_id, context, data=None, entity_id=None
):
@@ -293,6 +322,20 @@ def prepare_version_update_data(old_doc, new_doc, replace=True):
return _prepare_update_data(old_doc, new_doc, replace)
+def prepare_hero_version_update_data(old_doc, new_doc, replace=True):
+ """Compare two hero version documents and prepare update data.
+
+    Based on the compared values, creates update data for 'UpdateOperation'.
+
+ Empty output means that documents are identical.
+
+ Returns:
+ Dict[str, Any]: Changes between old and new document.
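+
+    Example:
+        A minimal sketch with purely illustrative documents; an empty result
+        would mean the documents are identical:
+
+        >>> old_doc = {"_id": 1, "data": {"families": ["model"]}}
+        >>> new_doc = {"_id": 1, "data": {"families": ["model", "look"]}}
+        >>> changes = prepare_hero_version_update_data(old_doc, new_doc)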
+ """
+
+ return _prepare_update_data(old_doc, new_doc, replace)
+
+
def prepare_representation_update_data(old_doc, new_doc, replace=True):
"""Compare two representation documents and prepare update data.
diff --git a/openpype/host/__init__.py b/openpype/host/__init__.py
index 519888fce3..da1237c739 100644
--- a/openpype/host/__init__.py
+++ b/openpype/host/__init__.py
@@ -5,6 +5,7 @@ from .host import (
from .interfaces import (
IWorkfileHost,
ILoadHost,
+ IPublishHost,
INewPublisher,
)
@@ -16,6 +17,7 @@ __all__ = (
"IWorkfileHost",
"ILoadHost",
+ "IPublishHost",
"INewPublisher",
"HostDirmap",
diff --git a/openpype/host/interfaces.py b/openpype/host/interfaces.py
index cbf12b0d13..cfd089a0ad 100644
--- a/openpype/host/interfaces.py
+++ b/openpype/host/interfaces.py
@@ -282,7 +282,7 @@ class IWorkfileHost:
return self.workfile_has_unsaved_changes()
-class INewPublisher:
+class IPublishHost:
"""Functions related to new creation system in new publisher.
New publisher is not storing information only about each created instance
@@ -306,7 +306,7 @@ class INewPublisher:
workflow.
"""
- if isinstance(host, INewPublisher):
+ if isinstance(host, IPublishHost):
return []
required = [
@@ -330,7 +330,7 @@ class INewPublisher:
MissingMethodsError: If there are missing methods on host
implementation.
"""
- missing = INewPublisher.get_missing_publish_methods(host)
+ missing = IPublishHost.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@@ -368,3 +368,17 @@ class INewPublisher:
"""
pass
+
+
+class INewPublisher(IPublishHost):
+ """Legacy interface replaced by 'IPublishHost'.
+
+ Deprecated:
+        'INewPublisher' is replaced by 'IPublishHost', please update your
+        imports.
+        There is no reasonable way to mark these classes as deprecated
+        and show a warning on wrong imports. Deprecated since 3.14.*,
+        will be removed in 3.15.*.
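+
+    Example:
+        Replace 'from openpype.host import INewPublisher' with
+        'from openpype.host import IPublishHost'.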
+ """
+
+ pass
diff --git a/openpype/hosts/aftereffects/api/launch_logic.py b/openpype/hosts/aftereffects/api/launch_logic.py
index 30a3e1f1c3..9c8513fe8c 100644
--- a/openpype/hosts/aftereffects/api/launch_logic.py
+++ b/openpype/hosts/aftereffects/api/launch_logic.py
@@ -12,6 +12,7 @@ from wsrpc_aiohttp import (
from Qt import QtCore
+from openpype.lib import Logger
from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from openpype.tools.adobe_webserver.app import WebServerTool
@@ -84,8 +85,6 @@ class ProcessLauncher(QtCore.QObject):
@property
def log(self):
if self._log is None:
- from openpype.api import Logger
-
self._log = Logger.get_logger("{}-launcher".format(
self.route_name))
return self._log
diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py
index c13c22ced5..7026fe3f05 100644
--- a/openpype/hosts/aftereffects/api/pipeline.py
+++ b/openpype/hosts/aftereffects/api/pipeline.py
@@ -4,8 +4,7 @@ from Qt import QtWidgets
import pyblish.api
-from openpype import lib
-from openpype.api import Logger
+from openpype.lib import Logger, register_event_callback
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@@ -16,9 +15,8 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects
-from openpype.lib import register_event_callback
-from .launch_logic import get_stub
+from .launch_logic import get_stub, ConnectionNotEstablishedYet
log = Logger.get_logger(__name__)
@@ -111,7 +109,7 @@ def ls():
"""
try:
stub = get_stub() # only after AfterEffects is up
- except lib.ConnectionNotEstablishedYet:
+ except ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
@@ -284,7 +282,7 @@ def _get_stub():
"""
try:
stub = get_stub() # only after Photoshop is up
- except lib.ConnectionNotEstablishedYet:
+ except ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
diff --git a/openpype/hosts/blender/api/lib.py b/openpype/hosts/blender/api/lib.py
index 9cd1ace821..05912885f7 100644
--- a/openpype/hosts/blender/api/lib.py
+++ b/openpype/hosts/blender/api/lib.py
@@ -6,7 +6,7 @@ from typing import Dict, List, Union
import bpy
import addon_utils
-from openpype.api import Logger
+from openpype.lib import Logger
from . import pipeline
diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py
index ea405b028e..c2aee1e653 100644
--- a/openpype/hosts/blender/api/pipeline.py
+++ b/openpype/hosts/blender/api/pipeline.py
@@ -20,8 +20,8 @@ from openpype.pipeline import (
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
-from openpype.api import Logger
from openpype.lib import (
+ Logger,
register_event_callback,
emit_event
)
diff --git a/openpype/hosts/celaction/api/cli.py b/openpype/hosts/celaction/api/cli.py
index eb91def090..88fc11cafb 100644
--- a/openpype/hosts/celaction/api/cli.py
+++ b/openpype/hosts/celaction/api/cli.py
@@ -6,9 +6,8 @@ import argparse
import pyblish.api
import pyblish.util
-from openpype.api import Logger
-import openpype
import openpype.hosts.celaction
+from openpype.lib import Logger
from openpype.hosts.celaction import api as celaction
from openpype.tools.utils import host_tools
from openpype.pipeline import install_openpype_plugins
diff --git a/openpype/hosts/flame/api/lib.py b/openpype/hosts/flame/api/lib.py
index b7f7b24e51..6aca5c5ce6 100644
--- a/openpype/hosts/flame/api/lib.py
+++ b/openpype/hosts/flame/api/lib.py
@@ -12,6 +12,9 @@ import xml.etree.cElementTree as cET
from copy import deepcopy, copy
from xml.etree import ElementTree as ET
from pprint import pformat
+
+from openpype.lib import Logger, run_subprocess
+
from .constants import (
MARKER_COLOR,
MARKER_DURATION,
@@ -20,9 +23,7 @@ from .constants import (
MARKER_PUBLISH_DEFAULT
)
-import openpype.api as openpype
-
-log = openpype.Logger.get_logger(__name__)
+log = Logger.get_logger(__name__)
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
@@ -1016,7 +1017,7 @@ class MediaInfoFile(object):
try:
# execute creation of clip xml template data
- openpype.run_subprocess(cmd_args)
+ run_subprocess(cmd_args)
except TypeError as error:
raise TypeError(
"Error creating `{}` due: {}".format(fpath, error))
diff --git a/openpype/hosts/flame/api/pipeline.py b/openpype/hosts/flame/api/pipeline.py
index 324d13bc3f..3a23389961 100644
--- a/openpype/hosts/flame/api/pipeline.py
+++ b/openpype/hosts/flame/api/pipeline.py
@@ -5,7 +5,7 @@ import os
import contextlib
from pyblish import api as pyblish
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
diff --git a/openpype/hosts/flame/api/plugin.py b/openpype/hosts/flame/api/plugin.py
index 1a26e96c79..092ce9d106 100644
--- a/openpype/hosts/flame/api/plugin.py
+++ b/openpype/hosts/flame/api/plugin.py
@@ -6,16 +6,17 @@ from xml.etree import ElementTree as ET
from Qt import QtCore, QtWidgets
-import openpype.api as openpype
import qargparse
from openpype import style
+from openpype.settings import get_current_project_settings
+from openpype.lib import Logger
from openpype.pipeline import LegacyCreator, LoaderPlugin
from . import constants
from . import lib as flib
from . import pipeline as fpipeline
-log = openpype.Logger.get_logger(__name__)
+log = Logger.get_logger(__name__)
class CreatorWidget(QtWidgets.QDialog):
@@ -305,7 +306,7 @@ class Creator(LegacyCreator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
- self.presets = openpype.get_current_project_settings()[
+ self.presets = get_current_project_settings()[
"flame"]["create"].get(self.__class__.__name__, {})
# adding basic current context flame objects
diff --git a/openpype/hosts/flame/api/render_utils.py b/openpype/hosts/flame/api/render_utils.py
index a29d6be695..7e50c2b23e 100644
--- a/openpype/hosts/flame/api/render_utils.py
+++ b/openpype/hosts/flame/api/render_utils.py
@@ -1,6 +1,6 @@
import os
from xml.etree import ElementTree as ET
-from openpype.api import Logger
+from openpype.lib import Logger
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/flame/api/utils.py b/openpype/hosts/flame/api/utils.py
index 2dfdfa8f48..fb8bdee42d 100644
--- a/openpype/hosts/flame/api/utils.py
+++ b/openpype/hosts/flame/api/utils.py
@@ -4,7 +4,7 @@ Flame utils for syncing scripts
import os
import shutil
-from openpype.api import Logger
+from openpype.lib import Logger
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/flame/plugins/publish/extract_otio_file.py b/openpype/hosts/flame/plugins/publish/extract_otio_file.py
index 7dd75974fc..e5bfa42ce6 100644
--- a/openpype/hosts/flame/plugins/publish/extract_otio_file.py
+++ b/openpype/hosts/flame/plugins/publish/extract_otio_file.py
@@ -1,10 +1,10 @@
import os
import pyblish.api
-import openpype.api
import opentimelineio as otio
+from openpype.pipeline import publish
-class ExtractOTIOFile(openpype.api.Extractor):
+class ExtractOTIOFile(publish.Extractor):
"""
Extractor export OTIO file
"""
diff --git a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
index 5482af973c..d5294d61c2 100644
--- a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
+++ b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
@@ -4,7 +4,8 @@ import tempfile
from copy import deepcopy
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
from openpype.pipeline.editorial import (
@@ -14,7 +15,7 @@ from openpype.pipeline.editorial import (
import flame
-class ExtractSubsetResources(openpype.api.Extractor):
+class ExtractSubsetResources(publish.Extractor):
"""
Extractor for transcoding files from Flame clip
"""
diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py
index 4ef44dbb61..a33e5cf289 100644
--- a/openpype/hosts/fusion/api/lib.py
+++ b/openpype/hosts/fusion/api/lib.py
@@ -3,8 +3,6 @@ import sys
import re
import contextlib
-from Qt import QtGui
-
from openpype.lib import Logger
from openpype.client import (
get_asset_by_name,
@@ -92,7 +90,7 @@ def set_asset_resolution():
})
-def validate_comp_prefs(comp=None):
+def validate_comp_prefs(comp=None, force_repair=False):
"""Validate current comp defaults with asset settings.
Validates fps, resolutionWidth, resolutionHeight, aspectRatio.
@@ -135,21 +133,22 @@ def validate_comp_prefs(comp=None):
asset_value = asset_data[key]
comp_value = comp_frame_format_prefs.get(comp_key)
if asset_value != comp_value:
- # todo: Actually show dialog to user instead of just logging
- log.warning(
- "Comp {pref} {value} does not match asset "
- "'{asset_name}' {pref} {asset_value}".format(
- pref=label,
- value=comp_value,
- asset_name=asset_doc["name"],
- asset_value=asset_value)
- )
-
invalid_msg = "{} {} should be {}".format(label,
comp_value,
asset_value)
invalid.append(invalid_msg)
+ if not force_repair:
+ # Do not log warning if we force repair anyway
+ log.warning(
+ "Comp {pref} {value} does not match asset "
+ "'{asset_name}' {pref} {asset_value}".format(
+ pref=label,
+ value=comp_value,
+ asset_name=asset_doc["name"],
+ asset_value=asset_value)
+ )
+
if invalid:
def _on_repair():
@@ -160,6 +159,11 @@ def validate_comp_prefs(comp=None):
attributes[comp_key_full] = value
comp.SetPrefs(attributes)
+ if force_repair:
+ log.info("Applying default Comp preferences..")
+ _on_repair()
+ return
+
from . import menu
from openpype.widgets import popup
from openpype.style import load_stylesheet
diff --git a/openpype/hosts/fusion/api/menu.py b/openpype/hosts/fusion/api/menu.py
index 7a6293807f..39126935e6 100644
--- a/openpype/hosts/fusion/api/menu.py
+++ b/openpype/hosts/fusion/api/menu.py
@@ -16,6 +16,7 @@ from openpype.hosts.fusion.api.lib import (
from openpype.pipeline import legacy_io
from openpype.resources import get_openpype_icon_filepath
+from .pipeline import FusionEventHandler
from .pulse import FusionPulse
self = sys.modules[__name__]
@@ -119,6 +120,10 @@ class OpenPypeMenu(QtWidgets.QWidget):
self._pulse = FusionPulse(parent=self)
self._pulse.start()
+ # Detect Fusion events as OpenPype events
+ self._event_handler = FusionEventHandler(parent=self)
+ self._event_handler.start()
+
def on_task_changed(self):
# Update current context label
label = legacy_io.Session["AVALON_ASSET"]
diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py
index c92d072ef7..b22ee5328f 100644
--- a/openpype/hosts/fusion/api/pipeline.py
+++ b/openpype/hosts/fusion/api/pipeline.py
@@ -2,13 +2,16 @@
Basic avalon integration
"""
import os
+import sys
import logging
import pyblish.api
+from Qt import QtCore
from openpype.lib import (
Logger,
- register_event_callback
+ register_event_callback,
+ emit_event
)
from openpype.pipeline import (
register_loader_plugin_path,
@@ -39,12 +42,13 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
-class CompLogHandler(logging.Handler):
+class FusionLogHandler(logging.Handler):
+ # Keep a reference to fusion's Print function (Remote Object)
+ _print = getattr(sys.modules["__main__"], "fusion").Print
+
def emit(self, record):
entry = self.format(record)
- comp = get_current_comp()
- if comp:
- comp.Print(entry)
+ self._print(entry)
def install():
@@ -67,7 +71,7 @@ def install():
# Attach default logging handler that prints to active comp
logger = logging.getLogger()
formatter = logging.Formatter(fmt="%(message)s\n")
- handler = CompLogHandler()
+ handler = FusionLogHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
@@ -84,10 +88,10 @@ def install():
"instanceToggled", on_pyblish_instance_toggled
)
- # Fusion integration currently does not attach to direct callbacks of
- # the application. So we use workfile callbacks to allow similar behavior
- # on save and open
- register_event_callback("workfile.open.after", on_after_open)
+ # Register events
+ register_event_callback("open", on_after_open)
+ register_event_callback("save", on_save)
+ register_event_callback("new", on_new)
def uninstall():
@@ -137,8 +141,18 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
tool.SetAttrs({"TOOLB_PassThrough": passthrough})
-def on_after_open(_event):
- comp = get_current_comp()
+def on_new(event):
+ comp = event["Rets"]["comp"]
+ validate_comp_prefs(comp, force_repair=True)
+
+
+def on_save(event):
+ comp = event["sender"]
+ validate_comp_prefs(comp)
+
+
+def on_after_open(event):
+ comp = event["sender"]
validate_comp_prefs(comp)
if any_outdated_containers():
@@ -182,7 +196,7 @@ def ls():
"""
comp = get_current_comp()
- tools = comp.GetToolList(False, "Loader").values()
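+    # List every tool, not only Loaders: containers may now also be imprinted
+    # on other tool types (e.g. SurfaceAlembicMesh, SurfaceFBXMesh).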
+ tools = comp.GetToolList(False).values()
for tool in tools:
container = parse_container(tool)
@@ -254,3 +268,114 @@ def parse_container(tool):
return container
+class FusionEventThread(QtCore.QThread):
+ """QThread which will periodically ping Fusion app for any events.
+
+ The fusion.UIManager must be set up to be notified of events before they'll
+ be reported by this thread, for example:
+ fusion.UIManager.AddNotify("Comp_Save", None)
+
+ """
+
+ on_event = QtCore.Signal(dict)
+
+ def run(self):
+
+ app = getattr(sys.modules["__main__"], "app", None)
+ if app is None:
+ # No Fusion app found
+ return
+
+        # As an optimization, store the GetEvent method directly because every
+        # getattr of UIManager.GetEvent resolves the Remote Function through
+        # the PyRemoteObject again
+ get_event = app.UIManager.GetEvent
+ delay = int(os.environ.get("OPENPYPE_FUSION_CALLBACK_INTERVAL", 1000))
+ while True:
+ if self.isInterruptionRequested():
+ return
+
+ # Process all events that have been queued up until now
+ while True:
+ event = get_event(False)
+ if not event:
+ break
+ self.on_event.emit(event)
+
+ # Wait some time before processing events again
+ # to not keep blocking the UI
+ self.msleep(delay)
+
+
+class FusionEventHandler(QtCore.QObject):
+ """Emits OpenPype events based on Fusion events captured in a QThread.
+
+ This will emit the following OpenPype events based on Fusion actions:
+ save: Comp_Save, Comp_SaveAs
+ open: Comp_Opened
+ new: Comp_New
+
+    To use this you can attach it to your Qt UI so it runs in the background.
+ E.g.
+ >>> handler = FusionEventHandler(parent=window)
+ >>> handler.start()
+
+
+ """
+ ACTION_IDS = [
+ "Comp_Save",
+ "Comp_SaveAs",
+ "Comp_New",
+ "Comp_Opened"
+ ]
+
+ def __init__(self, parent=None):
+ super(FusionEventHandler, self).__init__(parent=parent)
+
+ # Set up Fusion event callbacks
+ fusion = getattr(sys.modules["__main__"], "fusion", None)
+ ui = fusion.UIManager
+
+ # Add notifications for the ones we want to listen to
+ notifiers = []
+ for action_id in self.ACTION_IDS:
+ notifier = ui.AddNotify(action_id, None)
+ notifiers.append(notifier)
+
+ # TODO: Not entirely sure whether these must be kept to avoid
+ # garbage collection
+ self._notifiers = notifiers
+
+ self._event_thread = FusionEventThread(parent=self)
+ self._event_thread.on_event.connect(self._on_event)
+
+ def start(self):
+ self._event_thread.start()
+
+ def stop(self):
+        self._event_thread.requestInterruption()
+
+ def _on_event(self, event):
+ """Handle Fusion events to emit OpenPype events"""
+ if not event:
+ return
+
+ what = event["what"]
+
+ # Comp Save
+ if what in {"Comp_Save", "Comp_SaveAs"}:
+ if not event["Rets"].get("success"):
+ # If the Save action is cancelled it will still emit an
+ # event but with "success": False so we ignore those cases
+ return
+ # Comp was saved
+ emit_event("save", data=event)
+ return
+
+ # Comp New
+ elif what in {"Comp_New"}:
+ emit_event("new", data=event)
+
+ # Comp Opened
+ elif what in {"Comp_Opened"}:
+ emit_event("open", data=event)
diff --git a/openpype/hosts/fusion/api/pulse.py b/openpype/hosts/fusion/api/pulse.py
index 5b61f3bd63..eb7ef3785d 100644
--- a/openpype/hosts/fusion/api/pulse.py
+++ b/openpype/hosts/fusion/api/pulse.py
@@ -19,9 +19,12 @@ class PulseThread(QtCore.QThread):
while True:
if self.isInterruptionRequested():
return
- try:
- app.Test()
- except Exception:
+
+            # We don't need to call Test: if the app has gone down, the
+            # PyRemoteObject will fail to resolve the Test attribute at all,
+            # so checking that the method still resolves is enough.
+            # (Optimization)
+ if app.Test is None:
self.no_response.emit()
self.msleep(interval)
diff --git a/openpype/hosts/fusion/plugins/load/load_alembic.py b/openpype/hosts/fusion/plugins/load/load_alembic.py
new file mode 100644
index 0000000000..f8b8c2cb0a
--- /dev/null
+++ b/openpype/hosts/fusion/plugins/load/load_alembic.py
@@ -0,0 +1,70 @@
+from openpype.pipeline import (
+ load,
+ get_representation_path,
+)
+from openpype.hosts.fusion.api import (
+ imprint_container,
+ get_current_comp,
+ comp_lock_and_undo_chunk
+)
+
+
+class FusionLoadAlembicMesh(load.LoaderPlugin):
+ """Load Alembic mesh into Fusion"""
+
+ families = ["pointcache", "model"]
+ representations = ["abc"]
+
+ label = "Load alembic mesh"
+ order = -10
+ icon = "code-fork"
+ color = "orange"
+
+ tool_type = "SurfaceAlembicMesh"
+
+ def load(self, context, name, namespace, data):
+ # Fallback to asset name when namespace is None
+ if namespace is None:
+ namespace = context['asset']['name']
+
+ # Create the Loader with the filename path set
+ comp = get_current_comp()
+ with comp_lock_and_undo_chunk(comp, "Create tool"):
+
+ path = self.fname
+
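+            # (-32768, -32768) is Fusion's sentinel for "add the tool at the
+            # default flow position"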
+ args = (-32768, -32768)
+ tool = comp.AddTool(self.tool_type, *args)
+ tool["Filename"] = path
+
+ imprint_container(tool,
+ name=name,
+ namespace=namespace,
+ context=context,
+ loader=self.__class__.__name__)
+
+ def switch(self, container, representation):
+ self.update(container, representation)
+
+ def update(self, container, representation):
+ """Update Alembic path"""
+
+ tool = container["_tool"]
+ assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
+ comp = tool.Comp()
+
+ path = get_representation_path(representation)
+
+ with comp_lock_and_undo_chunk(comp, "Update tool"):
+ tool["Filename"] = path
+
+ # Update the imprinted representation
+ tool.SetData("avalon.representation", str(representation["_id"]))
+
+ def remove(self, container):
+ tool = container["_tool"]
+ assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
+ comp = tool.Comp()
+
+ with comp_lock_and_undo_chunk(comp, "Remove tool"):
+ tool.Delete()
diff --git a/openpype/hosts/fusion/plugins/load/load_fbx.py b/openpype/hosts/fusion/plugins/load/load_fbx.py
new file mode 100644
index 0000000000..70fe82ffef
--- /dev/null
+++ b/openpype/hosts/fusion/plugins/load/load_fbx.py
@@ -0,0 +1,71 @@
+
+from openpype.pipeline import (
+ load,
+ get_representation_path,
+)
+from openpype.hosts.fusion.api import (
+ imprint_container,
+ get_current_comp,
+ comp_lock_and_undo_chunk
+)
+
+
+class FusionLoadFBXMesh(load.LoaderPlugin):
+ """Load FBX mesh into Fusion"""
+
+ families = ["*"]
+ representations = ["fbx"]
+
+ label = "Load FBX mesh"
+ order = -10
+ icon = "code-fork"
+ color = "orange"
+
+ tool_type = "SurfaceFBXMesh"
+
+ def load(self, context, name, namespace, data):
+ # Fallback to asset name when namespace is None
+ if namespace is None:
+ namespace = context['asset']['name']
+
+ # Create the Loader with the filename path set
+ comp = get_current_comp()
+ with comp_lock_and_undo_chunk(comp, "Create tool"):
+
+ path = self.fname
+
+ args = (-32768, -32768)
+ tool = comp.AddTool(self.tool_type, *args)
+ tool["ImportFile"] = path
+
+ imprint_container(tool,
+ name=name,
+ namespace=namespace,
+ context=context,
+ loader=self.__class__.__name__)
+
+ def switch(self, container, representation):
+ self.update(container, representation)
+
+ def update(self, container, representation):
+ """Update path"""
+
+ tool = container["_tool"]
+ assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
+ comp = tool.Comp()
+
+ path = get_representation_path(representation)
+
+ with comp_lock_and_undo_chunk(comp, "Update tool"):
+ tool["ImportFile"] = path
+
+ # Update the imprinted representation
+ tool.SetData("avalon.representation", str(representation["_id"]))
+
+ def remove(self, container):
+ tool = container["_tool"]
+ assert tool.ID == self.tool_type, f"Must be {self.tool_type}"
+ comp = tool.Comp()
+
+ with comp_lock_and_undo_chunk(comp, "Remove tool"):
+ tool.Delete()
diff --git a/openpype/hosts/harmony/plugins/publish/extract_palette.py b/openpype/hosts/harmony/plugins/publish/extract_palette.py
index fae778f6b0..69c6e098ff 100644
--- a/openpype/hosts/harmony/plugins/publish/extract_palette.py
+++ b/openpype/hosts/harmony/plugins/publish/extract_palette.py
@@ -6,10 +6,10 @@ import csv
from PIL import Image, ImageDraw, ImageFont
import openpype.hosts.harmony.api as harmony
-import openpype.api
+from openpype.pipeline import publish
-class ExtractPalette(openpype.api.Extractor):
+class ExtractPalette(publish.Extractor):
"""Extract palette."""
label = "Extract Palette"
diff --git a/openpype/hosts/harmony/plugins/publish/extract_template.py b/openpype/hosts/harmony/plugins/publish/extract_template.py
index d25b07bba3..458bf25a3c 100644
--- a/openpype/hosts/harmony/plugins/publish/extract_template.py
+++ b/openpype/hosts/harmony/plugins/publish/extract_template.py
@@ -3,12 +3,11 @@
import os
import shutil
-import openpype.api
+from openpype.pipeline import publish
import openpype.hosts.harmony.api as harmony
-import openpype.hosts.harmony
-class ExtractTemplate(openpype.api.Extractor):
+class ExtractTemplate(publish.Extractor):
"""Extract the connected nodes to the composite instance."""
label = "Extract Template"
@@ -50,7 +49,7 @@ class ExtractTemplate(openpype.api.Extractor):
dependencies.remove(instance.data["setMembers"][0])
# Export template.
- openpype.hosts.harmony.api.export_template(
+ harmony.export_template(
unique_backdrops, dependencies, filepath
)
diff --git a/openpype/hosts/harmony/plugins/publish/extract_workfile.py b/openpype/hosts/harmony/plugins/publish/extract_workfile.py
index 7f25ec8150..9bb3090558 100644
--- a/openpype/hosts/harmony/plugins/publish/extract_workfile.py
+++ b/openpype/hosts/harmony/plugins/publish/extract_workfile.py
@@ -4,10 +4,10 @@ import os
import shutil
from zipfile import ZipFile
-import openpype.api
+from openpype.pipeline import publish
-class ExtractWorkfile(openpype.api.Extractor):
+class ExtractWorkfile(publish.Extractor):
"""Extract and zip complete workfile folder into zip."""
label = "Extract Workfile"
diff --git a/openpype/hosts/hiero/api/menu.py b/openpype/hosts/hiero/api/menu.py
index 541a1f1f92..2a7560c6ba 100644
--- a/openpype/hosts/hiero/api/menu.py
+++ b/openpype/hosts/hiero/api/menu.py
@@ -4,7 +4,7 @@ import sys
import hiero.core
from hiero.ui import findMenuAction
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py
index 77fedbbbdc..ea8a9e836a 100644
--- a/openpype/hosts/hiero/api/plugin.py
+++ b/openpype/hosts/hiero/api/plugin.py
@@ -8,7 +8,7 @@ import hiero
from Qt import QtWidgets, QtCore
import qargparse
-import openpype.api as openpype
+from openpype.settings import get_current_project_settings
from openpype.lib import Logger
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
@@ -606,7 +606,7 @@ class Creator(LegacyCreator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
import openpype.hosts.hiero.api as phiero
- self.presets = openpype.get_current_project_settings()[
+ self.presets = get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
# adding basic current context resolve objects
diff --git a/openpype/hosts/hiero/api/tags.py b/openpype/hosts/hiero/api/tags.py
index 10df96fa53..fac26da03a 100644
--- a/openpype/hosts/hiero/api/tags.py
+++ b/openpype/hosts/hiero/api/tags.py
@@ -3,7 +3,7 @@ import os
import hiero
from openpype.client import get_project, get_assets
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import legacy_io
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/hiero/api/workio.py b/openpype/hosts/hiero/api/workio.py
index 762e22804f..040fd1435a 100644
--- a/openpype/hosts/hiero/api/workio.py
+++ b/openpype/hosts/hiero/api/workio.py
@@ -1,7 +1,7 @@
import os
import hiero
-from openpype.api import Logger
+from openpype.lib import Logger
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/houdini/api/shelves.py b/openpype/hosts/houdini/api/shelves.py
index 248d99105c..3ccab964cd 100644
--- a/openpype/hosts/houdini/api/shelves.py
+++ b/openpype/hosts/houdini/api/shelves.py
@@ -1,7 +1,6 @@
import os
import logging
import platform
-import six
from openpype.settings import get_project_settings
@@ -9,16 +8,10 @@ import hou
log = logging.getLogger("openpype.hosts.houdini.shelves")
-if six.PY2:
- FileNotFoundError = IOError
-
def generate_shelves():
"""This function generates complete shelves from shelf set to tools
in Houdini from openpype project settings houdini shelf definition.
-
- Raises:
- FileNotFoundError: Raised when the shelf set filepath does not exist
"""
current_os = platform.system().lower()
@@ -27,51 +20,41 @@ def generate_shelves():
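+    # Expected settings shape (illustrative):
+    #     [{
+    #         "shelf_set_name": "...",
+    #         "shelf_set_source_path": {"windows": "", "darwin": "", "linux": ""},
+    #         "shelf_definition": [{
+    #             "shelf_name": "...",
+    #             "tools_list": [{"name": "...", "label": "...", "script": "..."}]
+    #         }]
+    #     }]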
shelves_set_config = project_settings["houdini"]["shelves"]
if not shelves_set_config:
- log.debug(
- "No custom shelves found in project settings."
- )
+ log.debug("No custom shelves found in project settings.")
return
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
+ shelf_set_os_filepath = shelf_set_filepath[current_os]
+ if shelf_set_os_filepath:
+ if not os.path.isfile(shelf_set_os_filepath):
+ log.error("Shelf path doesn't exist - "
+ "{}".format(shelf_set_os_filepath))
+ continue
- if shelf_set_filepath[current_os]:
- if not os.path.isfile(shelf_set_filepath[current_os]):
- raise FileNotFoundError(
- "This path doesn't exist - {}".format(
- shelf_set_filepath[current_os]
- )
- )
-
- hou.shelves.newShelfSet(file_path=shelf_set_filepath[current_os])
+ hou.shelves.newShelfSet(file_path=shelf_set_os_filepath)
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:
- log.warning(
- "No name found in shelf set definition."
- )
- return
-
- shelf_set = get_or_create_shelf_set(shelf_set_name)
+ log.warning("No name found in shelf set definition.")
+ continue
shelves_definition = shelf_set_config.get('shelf_definition')
-
if not shelves_definition:
log.debug(
"No shelf definition found for shelf set named '{}'".format(
shelf_set_name
)
)
- return
+ continue
+ shelf_set = get_or_create_shelf_set(shelf_set_name)
for shelf_definition in shelves_definition:
shelf_name = shelf_definition.get('shelf_name')
if not shelf_name:
- log.warning(
- "No name found in shelf definition."
- )
- return
+ log.warning("No name found in shelf definition.")
+ continue
shelf = get_or_create_shelf(shelf_name)
@@ -81,7 +64,7 @@ def generate_shelves():
shelf_name
)
)
- return
+ continue
mandatory_attributes = {'name', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
@@ -91,14 +74,14 @@ def generate_shelves():
tool_definition[key] for key in mandatory_attributes
):
log.warning(
- "You need to specify at least the name and \
-the script path of the tool.")
+ "You need to specify at least the name and the "
+ "script path of the tool.")
continue
tool = get_or_create_tool(tool_definition, shelf)
if not tool:
- return
+ continue
# Add the tool to the shelf if not already in it
if tool not in shelf.tools():
@@ -121,12 +104,10 @@ def get_or_create_shelf_set(shelf_set_label):
"""
all_shelves_sets = hou.shelves.shelfSets().values()
- shelf_sets = [
- shelf for shelf in all_shelves_sets if shelf.label() == shelf_set_label
- ]
-
- if shelf_sets:
- return shelf_sets[0]
+ shelf_set = next((shelf for shelf in all_shelves_sets if
+ shelf.label() == shelf_set_label), None)
+ if shelf_set:
+ return shelf_set
shelf_set_name = shelf_set_label.replace(' ', '_').lower()
new_shelf_set = hou.shelves.newShelfSet(
@@ -148,10 +129,9 @@ def get_or_create_shelf(shelf_label):
"""
all_shelves = hou.shelves.shelves().values()
- shelf = [s for s in all_shelves if s.label() == shelf_label]
-
+ shelf = next((s for s in all_shelves if s.label() == shelf_label), None)
if shelf:
- return shelf[0]
+ return shelf
shelf_name = shelf_label.replace(' ', '_').lower()
new_shelf = hou.shelves.newShelf(
@@ -175,23 +155,21 @@ def get_or_create_tool(tool_definition, shelf):
existing_tools = shelf.tools()
tool_label = tool_definition.get('label')
- existing_tool = [
- tool for tool in existing_tools if tool.label() == tool_label
- ]
-
+ existing_tool = next(
+ (tool for tool in existing_tools if tool.label() == tool_label),
+ None
+ )
if existing_tool:
tool_definition.pop('name', None)
tool_definition.pop('label', None)
- existing_tool[0].setData(**tool_definition)
- return existing_tool[0]
+ existing_tool.setData(**tool_definition)
+ return existing_tool
tool_name = tool_label.replace(' ', '_').lower()
if not os.path.exists(tool_definition['script']):
log.warning(
- "This path doesn't exist - {}".format(
- tool_definition['script']
- )
+ "This path doesn't exist - {}".format(tool_definition['script'])
)
return
diff --git a/openpype/hosts/houdini/plugins/publish/collect_remote_publish.py b/openpype/hosts/houdini/plugins/publish/collect_remote_publish.py
index c635a53074..d56d389be0 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_remote_publish.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_remote_publish.py
@@ -1,7 +1,7 @@
import pyblish.api
-import openpype.api
import hou
+from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api import lib
@@ -13,7 +13,7 @@ class CollectRemotePublishSettings(pyblish.api.ContextPlugin):
hosts = ["houdini"]
targets = ["deadline"]
label = "Remote Publish Submission Settings"
- actions = [openpype.api.RepairAction]
+ actions = [RepairAction]
def process(self, context):
diff --git a/openpype/hosts/houdini/plugins/publish/extract_alembic.py b/openpype/hosts/houdini/plugins/publish/extract_alembic.py
index 83b790407f..758d4c560b 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_alembic.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_alembic.py
@@ -1,11 +1,12 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractAlembic(openpype.api.Extractor):
+class ExtractAlembic(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Alembic"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_ass.py b/openpype/hosts/houdini/plugins/publish/extract_ass.py
index e56e40df85..a302b451cb 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_ass.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_ass.py
@@ -1,11 +1,12 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractAss(openpype.api.Extractor):
+class ExtractAss(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract Ass"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_composite.py b/openpype/hosts/houdini/plugins/publish/extract_composite.py
index f300b6d28d..23e875f107 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_composite.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_composite.py
@@ -1,12 +1,12 @@
import os
import pyblish.api
-import openpype.api
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractComposite(openpype.api.Extractor):
+class ExtractComposite(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Composite (Image Sequence)"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_hda.py b/openpype/hosts/houdini/plugins/publish/extract_hda.py
index 301dd4e297..7dd03a92b7 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_hda.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_hda.py
@@ -4,10 +4,11 @@ import os
from pprint import pformat
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
-class ExtractHDA(openpype.api.Extractor):
+class ExtractHDA(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract HDA"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
index c754d60c59..ca9be64a47 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_redshift_proxy.py
@@ -1,11 +1,12 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractRedshiftProxy(openpype.api.Extractor):
+class ExtractRedshiftProxy(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract Redshift Proxy"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd.py b/openpype/hosts/houdini/plugins/publish/extract_usd.py
index 0fc26900fb..78c32affb4 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_usd.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_usd.py
@@ -1,11 +1,12 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractUSD(openpype.api.Extractor):
+class ExtractUSD(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract USD"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
index 80919c023b..f686f712bb 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_usd_layered.py
@@ -5,7 +5,6 @@ import sys
from collections import deque
import pyblish.api
-import openpype.api
from openpype.client import (
get_asset_by_name,
@@ -16,6 +15,7 @@ from openpype.client import (
from openpype.pipeline import (
get_representation_path,
legacy_io,
+ publish,
)
import openpype.hosts.houdini.api.usd as hou_usdlib
from openpype.hosts.houdini.api.lib import render_rop
@@ -160,7 +160,7 @@ def parm_values(overrides):
parm.set(value)
-class ExtractUSDLayered(openpype.api.Extractor):
+class ExtractUSDLayered(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Layered USD"
diff --git a/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py b/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
index 113e1b0bcb..26ec423048 100644
--- a/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
+++ b/openpype/hosts/houdini/plugins/publish/extract_vdb_cache.py
@@ -1,11 +1,12 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
-class ExtractVDBCache(openpype.api.Extractor):
+class ExtractVDBCache(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract VDB Cache"
diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
index 79b3e894e5..0bd78ff38a 100644
--- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
+++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py
@@ -35,6 +35,9 @@ class ValidateWorkfilePaths(pyblish.api.InstancePlugin):
def get_invalid(cls):
invalid = []
for param, _ in hou.fileReferences():
+ if param is None:
+ continue
+
# skip nodes we are not interested in
if param.node().type().name() not in cls.node_types:
continue
diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py
index 777a6ffbc9..b0a283da5a 100644
--- a/openpype/hosts/maya/api/lib_rendersettings.py
+++ b/openpype/hosts/maya/api/lib_rendersettings.py
@@ -6,7 +6,7 @@ import six
import sys
from openpype.lib import Logger
-from openpype.api import (
+from openpype.settings import (
get_project_settings,
get_current_project_settings
)
diff --git a/openpype/hosts/maya/api/lib_template_builder.py b/openpype/hosts/maya/api/lib_template_builder.py
deleted file mode 100644
index 34a8450a26..0000000000
--- a/openpype/hosts/maya/api/lib_template_builder.py
+++ /dev/null
@@ -1,253 +0,0 @@
-import json
-from collections import OrderedDict
-import maya.cmds as cmds
-
-import qargparse
-from openpype.tools.utils.widgets import OptionDialog
-from .lib import get_main_window, imprint
-
-# To change as enum
-build_types = ["context_asset", "linked_asset", "all_assets"]
-
-
-def get_placeholder_attributes(node):
- return {
- attr: cmds.getAttr("{}.{}".format(node, attr))
- for attr in cmds.listAttr(node, userDefined=True)}
-
-
-def delete_placeholder_attributes(node):
- '''
- function to delete all extra placeholder attributes
- '''
- extra_attributes = get_placeholder_attributes(node)
- for attribute in extra_attributes:
- cmds.deleteAttr(node + '.' + attribute)
-
-
-def create_placeholder():
- args = placeholder_window()
-
- if not args:
- return # operation canceled, no locator created
-
- # custom arg parse to force empty data query
- # and still imprint them on placeholder
- # and getting items when arg is of type Enumerator
- options = create_options(args)
-
- # create placeholder name dynamically from args and options
- placeholder_name = create_placeholder_name(args, options)
-
- selection = cmds.ls(selection=True)
- if not selection:
- raise ValueError("Nothing is selected")
-
- placeholder = cmds.spaceLocator(name=placeholder_name)[0]
-
- # get the long name of the placeholder (with the groups)
- placeholder_full_name = cmds.ls(selection[0], long=True)[
- 0] + '|' + placeholder.replace('|', '')
-
- if selection:
- cmds.parent(placeholder, selection[0])
-
- imprint(placeholder_full_name, options)
-
- # Some tweaks because imprint force enums to to default value so we get
- # back arg read and force them to attributes
- imprint_enum(placeholder_full_name, args)
-
- # Add helper attributes to keep placeholder info
- cmds.addAttr(
- placeholder_full_name,
- longName="parent",
- hidden=True,
- dataType="string"
- )
- cmds.addAttr(
- placeholder_full_name,
- longName="index",
- hidden=True,
- attributeType="short",
- defaultValue=-1
- )
-
- cmds.setAttr(placeholder_full_name + '.parent', "", type="string")
-
-
-def create_placeholder_name(args, options):
- placeholder_builder_type = [
- arg.read() for arg in args if 'builder_type' in str(arg)
- ][0]
- placeholder_family = options['family']
- placeholder_name = placeholder_builder_type.split('_')
-
- # add famlily in any
- if placeholder_family:
- placeholder_name.insert(1, placeholder_family)
-
- # add loader arguments if any
- if options['loader_args']:
- pos = 2
- loader_args = options['loader_args'].replace('\'', '\"')
- loader_args = json.loads(loader_args)
- values = [v for v in loader_args.values()]
- for i in range(len(values)):
- placeholder_name.insert(i + pos, values[i])
-
- placeholder_name = '_'.join(placeholder_name)
-
- return placeholder_name.capitalize()
-
-
-def update_placeholder():
- placeholder = cmds.ls(selection=True)
- if len(placeholder) == 0:
- raise ValueError("No node selected")
- if len(placeholder) > 1:
- raise ValueError("Too many selected nodes")
- placeholder = placeholder[0]
-
- args = placeholder_window(get_placeholder_attributes(placeholder))
-
- if not args:
- return # operation canceled
-
- # delete placeholder attributes
- delete_placeholder_attributes(placeholder)
-
- options = create_options(args)
-
- imprint(placeholder, options)
- imprint_enum(placeholder, args)
-
- cmds.addAttr(
- placeholder,
- longName="parent",
- hidden=True,
- dataType="string"
- )
- cmds.addAttr(
- placeholder,
- longName="index",
- hidden=True,
- attributeType="short",
- defaultValue=-1
- )
-
- cmds.setAttr(placeholder + '.parent', '', type="string")
-
-
-def create_options(args):
- options = OrderedDict()
- for arg in args:
- if not type(arg) == qargparse.Separator:
- options[str(arg)] = arg._data.get("items") or arg.read()
- return options
-
-
-def imprint_enum(placeholder, args):
- """
- Imprint method doesn't act properly with enums.
- Replacing the functionnality with this for now
- """
- enum_values = {str(arg): arg.read()
- for arg in args if arg._data.get("items")}
- string_to_value_enum_table = {
- build: i for i, build
- in enumerate(build_types)}
- for key, value in enum_values.items():
- cmds.setAttr(
- placeholder + "." + key,
- string_to_value_enum_table[value])
-
-
-def placeholder_window(options=None):
- options = options or dict()
- dialog = OptionDialog(parent=get_main_window())
- dialog.setWindowTitle("Create Placeholder")
-
- args = [
- qargparse.Separator("Main attributes"),
- qargparse.Enum(
- "builder_type",
- label="Asset Builder Type",
- default=options.get("builder_type", 0),
- items=build_types,
- help="""Asset Builder Type
-Builder type describe what template loader will look for.
-context_asset : Template loader will look for subsets of
-current context asset (Asset bob will find asset)
-linked_asset : Template loader will look for assets linked
-to current context asset.
-Linked asset are looked in avalon database under field "inputLinks"
-"""
- ),
- qargparse.String(
- "family",
- default=options.get("family", ""),
- label="OpenPype Family",
- placeholder="ex: model, look ..."),
- qargparse.String(
- "representation",
- default=options.get("representation", ""),
- label="OpenPype Representation",
- placeholder="ex: ma, abc ..."),
- qargparse.String(
- "loader",
- default=options.get("loader", ""),
- label="Loader",
- placeholder="ex: ReferenceLoader, LightLoader ...",
- help="""Loader
-Defines what openpype loader will be used to load assets.
-Useable loader depends on current host's loader list.
-Field is case sensitive.
-"""),
- qargparse.String(
- "loader_args",
- default=options.get("loader_args", ""),
- label="Loader Arguments",
- placeholder='ex: {"camera":"persp", "lights":True}',
- help="""Loader
-Defines a dictionnary of arguments used to load assets.
-Useable arguments depend on current placeholder Loader.
-Field should be a valid python dict. Anything else will be ignored.
-"""),
- qargparse.Integer(
- "order",
- default=options.get("order", 0),
- min=0,
- max=999,
- label="Order",
- placeholder="ex: 0, 100 ... (smallest order loaded first)",
- help="""Order
-Order defines asset loading priority (0 to 999)
-Priority rule is : "lowest is first to load"."""),
- qargparse.Separator(
- "Optional attributes"),
- qargparse.String(
- "asset",
- default=options.get("asset", ""),
- label="Asset filter",
- placeholder="regex filtering by asset name",
- help="Filtering assets by matching field regex to asset's name"),
- qargparse.String(
- "subset",
- default=options.get("subset", ""),
- label="Subset filter",
- placeholder="regex filtering by subset name",
- help="Filtering assets by matching field regex to subset's name"),
- qargparse.String(
- "hierarchy",
- default=options.get("hierarchy", ""),
- label="Hierarchy filter",
- placeholder="regex filtering by asset's hierarchy",
- help="Filtering assets by matching field asset's hierarchy")
- ]
- dialog.create(args)
-
- if not dialog.exec_():
- return None
-
- return args
diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py
index 666f555660..e20f29049b 100644
--- a/openpype/hosts/maya/api/menu.py
+++ b/openpype/hosts/maya/api/menu.py
@@ -9,16 +9,17 @@ import maya.cmds as cmds
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.workfile import BuildWorkfile
-from openpype.pipeline.workfile.build_template import (
- build_workfile_template,
- update_workfile_template
-)
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib, lib_rendersettings
from .lib import get_main_window, IS_HEADLESS
from .commands import reset_frame_range
-from .lib_template_builder import create_placeholder, update_placeholder
+from .workfile_template_builder import (
+ create_placeholder,
+ update_placeholder,
+ build_workfile_template,
+ update_workfile_template,
+)
log = logging.getLogger(__name__)
@@ -161,12 +162,12 @@ def install():
cmds.menuItem(
"Create Placeholder",
parent=builder_menu,
- command=lambda *args: create_placeholder()
+ command=create_placeholder
)
cmds.menuItem(
"Update Placeholder",
parent=builder_menu,
- command=lambda *args: update_placeholder()
+ command=update_placeholder
)
cmds.menuItem(
"Build Workfile from template",
diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py
index c13b47ef4a..b3bf738a2b 100644
--- a/openpype/hosts/maya/api/pipeline.py
+++ b/openpype/hosts/maya/api/pipeline.py
@@ -42,6 +42,7 @@ from openpype.hosts.maya import MAYA_ROOT_DIR
from openpype.hosts.maya.lib import create_workspace_mel
from . import menu, lib
+from .workfile_template_builder import MayaPlaceholderLoadPlugin
from .workio import (
open_file,
save_file,
@@ -135,6 +136,11 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
def get_containers(self):
return ls()
+ def get_workfile_build_placeholder_plugins(self):
+ return [
+ MayaPlaceholderLoadPlugin
+ ]
+
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
diff --git a/openpype/hosts/maya/api/template_loader.py b/openpype/hosts/maya/api/template_loader.py
deleted file mode 100644
index ecffafc93d..0000000000
--- a/openpype/hosts/maya/api/template_loader.py
+++ /dev/null
@@ -1,252 +0,0 @@
-import re
-from maya import cmds
-
-from openpype.client import get_representations
-from openpype.pipeline import legacy_io
-from openpype.pipeline.workfile.abstract_template_loader import (
- AbstractPlaceholder,
- AbstractTemplateLoader
-)
-from openpype.pipeline.workfile.build_template_exceptions import (
- TemplateAlreadyImported
-)
-
-PLACEHOLDER_SET = 'PLACEHOLDERS_SET'
-
-
-class MayaTemplateLoader(AbstractTemplateLoader):
- """Concrete implementation of AbstractTemplateLoader for maya
- """
-
- def import_template(self, path):
- """Import template into current scene.
- Block if a template is already loaded.
- Args:
- path (str): A path to current template (usually given by
- get_template_path implementation)
- Returns:
- bool: Wether the template was succesfully imported or not
- """
- if cmds.objExists(PLACEHOLDER_SET):
- raise TemplateAlreadyImported(
- "Build template already loaded\n"
- "Clean scene if needed (File > New Scene)")
-
- cmds.sets(name=PLACEHOLDER_SET, empty=True)
- self.new_nodes = cmds.file(path, i=True, returnNewNodes=True)
- cmds.setAttr(PLACEHOLDER_SET + '.hiddenInOutliner', True)
-
- for set in cmds.listSets(allSets=True):
- if (cmds.objExists(set) and
- cmds.attributeQuery('id', node=set, exists=True) and
- cmds.getAttr(set + '.id') == 'pyblish.avalon.instance'):
- if cmds.attributeQuery('asset', node=set, exists=True):
- cmds.setAttr(
- set + '.asset',
- legacy_io.Session['AVALON_ASSET'], type='string'
- )
-
- return True
-
- def template_already_imported(self, err_msg):
- clearButton = "Clear scene and build"
- updateButton = "Update template"
- abortButton = "Abort"
-
- title = "Scene already builded"
- message = (
- "It's seems a template was already build for this scene.\n"
- "Error message reveived :\n\n\"{}\"".format(err_msg))
- buttons = [clearButton, updateButton, abortButton]
- defaultButton = clearButton
- cancelButton = abortButton
- dismissString = abortButton
- answer = cmds.confirmDialog(
- t=title,
- m=message,
- b=buttons,
- db=defaultButton,
- cb=cancelButton,
- ds=dismissString)
-
- if answer == clearButton:
- cmds.file(newFile=True, force=True)
- self.import_template(self.template_path)
- self.populate_template()
- elif answer == updateButton:
- self.update_missing_containers()
- elif answer == abortButton:
- return
-
- @staticmethod
- def get_template_nodes():
- attributes = cmds.ls('*.builder_type', long=True)
- return [attribute.rpartition('.')[0] for attribute in attributes]
-
- def get_loaded_containers_by_id(self):
- try:
- containers = cmds.sets("AVALON_CONTAINERS", q=True)
- except ValueError:
- return None
-
- return [
- cmds.getAttr(container + '.representation')
- for container in containers]
-
-
-class MayaPlaceholder(AbstractPlaceholder):
- """Concrete implementation of AbstractPlaceholder for maya
- """
-
- optional_keys = {'asset', 'subset', 'hierarchy'}
-
- def get_data(self, node):
- user_data = dict()
- for attr in self.required_keys.union(self.optional_keys):
- attribute_name = '{}.{}'.format(node, attr)
- if not cmds.attributeQuery(attr, node=node, exists=True):
- print("{} not found".format(attribute_name))
- continue
- user_data[attr] = cmds.getAttr(
- attribute_name,
- asString=True)
- user_data['parent'] = (
- cmds.getAttr(node + '.parent', asString=True)
- or node.rpartition('|')[0]
- or ""
- )
- user_data['node'] = node
- if user_data['parent']:
- siblings = cmds.listRelatives(user_data['parent'], children=True)
- else:
- siblings = cmds.ls(assemblies=True)
- node_shortname = user_data['node'].rpartition('|')[2]
- current_index = cmds.getAttr(node + '.index', asString=True)
- user_data['index'] = (
- current_index if current_index >= 0
- else siblings.index(node_shortname))
-
- self.data = user_data
-
- def parent_in_hierarchy(self, containers):
- """Parent loaded container to placeholder's parent
- ie : Set loaded content as placeholder's sibling
- Args:
- containers (String): Placeholder loaded containers
- """
- if not containers:
- return
-
- roots = cmds.sets(containers, q=True)
- nodes_to_parent = []
- for root in roots:
- if root.endswith("_RN"):
- refRoot = cmds.referenceQuery(root, n=True)[0]
- refRoot = cmds.listRelatives(refRoot, parent=True) or [refRoot]
- nodes_to_parent.extend(refRoot)
- elif root in cmds.listSets(allSets=True):
- if not cmds.sets(root, q=True):
- return
- else:
- continue
- else:
- nodes_to_parent.append(root)
-
- if self.data['parent']:
- cmds.parent(nodes_to_parent, self.data['parent'])
- # Move loaded nodes to correct index in outliner hierarchy
- placeholder_node = self.data['node']
- placeholder_form = cmds.xform(
- placeholder_node,
- q=True,
- matrix=True,
- worldSpace=True
- )
- for node in set(nodes_to_parent):
- cmds.reorder(node, front=True)
- cmds.reorder(node, relative=self.data['index'])
- cmds.xform(node, matrix=placeholder_form, ws=True)
-
- holding_sets = cmds.listSets(object=placeholder_node)
- if not holding_sets:
- return
- for holding_set in holding_sets:
- cmds.sets(roots, forceElement=holding_set)
-
- def clean(self):
- """Hide placeholder, parent them to root
- add them to placeholder set and register placeholder's parent
- to keep placeholder info available for future use
- """
- node = self.data['node']
- if self.data['parent']:
- cmds.setAttr(node + '.parent', self.data['parent'], type='string')
- if cmds.getAttr(node + '.index') < 0:
- cmds.setAttr(node + '.index', self.data['index'])
-
- holding_sets = cmds.listSets(object=node)
- if holding_sets:
- for set in holding_sets:
- cmds.sets(node, remove=set)
-
- if cmds.listRelatives(node, p=True):
- node = cmds.parent(node, world=True)[0]
- cmds.sets(node, addElement=PLACEHOLDER_SET)
- cmds.hide(node)
- cmds.setAttr(node + '.hiddenInOutliner', True)
-
- def get_representations(self, current_asset_doc, linked_asset_docs):
- project_name = legacy_io.active_project()
-
- builder_type = self.data["builder_type"]
- if builder_type == "context_asset":
- context_filters = {
- "asset": [current_asset_doc["name"]],
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representations": [self.data["representation"]],
- "family": [self.data["family"]]
- }
-
- elif builder_type != "linked_asset":
- context_filters = {
- "asset": [re.compile(self.data["asset"])],
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representation": [self.data["representation"]],
- "family": [self.data["family"]]
- }
-
- else:
- asset_regex = re.compile(self.data["asset"])
- linked_asset_names = []
- for asset_doc in linked_asset_docs:
- asset_name = asset_doc["name"]
- if asset_regex.match(asset_name):
- linked_asset_names.append(asset_name)
-
- context_filters = {
- "asset": linked_asset_names,
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representation": [self.data["representation"]],
- "family": [self.data["family"]],
- }
-
- return list(get_representations(
- project_name,
- context_filters=context_filters
- ))
-
- def err_message(self):
- return (
- "Error while trying to load a representation.\n"
- "Either the subset wasn't published or the template is malformed."
- "\n\n"
- "Builder was looking for :\n{attributes}".format(
- attributes="\n".join([
- "{}: {}".format(key.title(), value)
- for key, value in self.data.items()]
- )
- )
- )
diff --git a/openpype/hosts/maya/api/workfile_template_builder.py b/openpype/hosts/maya/api/workfile_template_builder.py
new file mode 100644
index 0000000000..ef043ed0f4
--- /dev/null
+++ b/openpype/hosts/maya/api/workfile_template_builder.py
@@ -0,0 +1,330 @@
+import json
+
+from maya import cmds
+
+from openpype.pipeline import registered_host
+from openpype.pipeline.workfile.workfile_template_builder import (
+ TemplateAlreadyImported,
+ AbstractTemplateBuilder,
+ PlaceholderPlugin,
+ LoadPlaceholderItem,
+ PlaceholderLoadMixin,
+)
+from openpype.tools.workfile_template_build import (
+ WorkfileBuildPlaceholderDialog,
+)
+
+from .lib import read, imprint
+
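+# Object set that collects placeholder locators; it is hidden in the
+# outliner and receives processed placeholders in cleanup_placeholder().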
+PLACEHOLDER_SET = "PLACEHOLDERS_SET"
+
+
+class MayaTemplateBuilder(AbstractTemplateBuilder):
+ """Concrete implementation of AbstractTemplateBuilder for maya"""
+
+ def import_template(self, path):
+ """Import template into current scene.
+        Raises TemplateAlreadyImported if a template is already loaded.
+
+ Args:
+ path (str): A path to current template (usually given by
+ get_template_path implementation)
+
+ Returns:
+            bool: Whether the template was successfully imported or not
+ """
+
+ if cmds.objExists(PLACEHOLDER_SET):
+ raise TemplateAlreadyImported((
+ "Build template already loaded\n"
+ "Clean scene if needed (File > New Scene)"
+ ))
+
+ cmds.sets(name=PLACEHOLDER_SET, empty=True)
+ cmds.file(path, i=True, returnNewNodes=True)
+
+ cmds.setAttr(PLACEHOLDER_SET + ".hiddenInOutliner", True)
+
+ return True
+
+
+class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
+ identifier = "maya.load"
+ label = "Maya load"
+
+ def _collect_scene_placeholders(self):
+ # Cache placeholder data to shared data
+ placeholder_nodes = self.builder.get_shared_populate_data(
+ "placeholder_nodes"
+ )
+ if placeholder_nodes is None:
+ attributes = cmds.ls("*.plugin_identifier", long=True)
+ placeholder_nodes = {}
+ for attribute in attributes:
+ node_name = attribute.rpartition(".")[0]
+ placeholder_nodes[node_name] = (
+ self._parse_placeholder_node_data(node_name)
+ )
+
+ self.builder.set_shared_populate_data(
+ "placeholder_nodes", placeholder_nodes
+ )
+ return placeholder_nodes
+
+ def _parse_placeholder_node_data(self, node_name):
+ placeholder_data = read(node_name)
+ parent_name = (
+ cmds.getAttr(node_name + ".parent", asString=True)
+ or node_name.rpartition("|")[0]
+ or ""
+ )
+ if parent_name:
+ siblings = cmds.listRelatives(parent_name, children=True)
+ else:
+ siblings = cmds.ls(assemblies=True)
+ node_shortname = node_name.rpartition("|")[2]
+ current_index = cmds.getAttr(node_name + ".index", asString=True)
+ if current_index < 0:
+ current_index = siblings.index(node_shortname)
+
+ placeholder_data.update({
+ "parent": parent_name,
+ "index": current_index
+ })
+ return placeholder_data
+
+ def _create_placeholder_name(self, placeholder_data):
+ placeholder_name_parts = placeholder_data["builder_type"].split("_")
+
+ pos = 1
+        # add family if any
+ placeholder_family = placeholder_data["family"]
+ if placeholder_family:
+ placeholder_name_parts.insert(pos, placeholder_family)
+ pos += 1
+
+ # add loader arguments if any
+ loader_args = placeholder_data["loader_args"]
+ if loader_args:
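+            # Loader args are stored as a python-like dict string; swap the
+            # quotes so the string can be parsed as JSON.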
+ loader_args = json.loads(loader_args.replace('\'', '\"'))
+ values = [v for v in loader_args.values()]
+ for value in values:
+ placeholder_name_parts.insert(pos, value)
+ pos += 1
+
+ placeholder_name = "_".join(placeholder_name_parts)
+
+ return placeholder_name.capitalize()
+
+ def _get_loaded_repre_ids(self):
+ loaded_representation_ids = self.builder.get_shared_populate_data(
+ "loaded_representation_ids"
+ )
+ if loaded_representation_ids is None:
+ try:
+ containers = cmds.sets("AVALON_CONTAINERS", q=True)
+ except ValueError:
+ containers = []
+
+ loaded_representation_ids = {
+ cmds.getAttr(container + ".representation")
+ for container in containers
+ }
+ self.builder.set_shared_populate_data(
+ "loaded_representation_ids", loaded_representation_ids
+ )
+ return loaded_representation_ids
+
+ def create_placeholder(self, placeholder_data):
+ selection = cmds.ls(selection=True)
+ if not selection:
+ raise ValueError("Nothing is selected")
+ if len(selection) > 1:
+ raise ValueError("More then one item are selected")
+
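+        # Store the plugin identifier on the node so collect_placeholders()
+        # can match the placeholder back to this plugin.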
+ placeholder_data["plugin_identifier"] = self.identifier
+
+ placeholder_name = self._create_placeholder_name(placeholder_data)
+
+ placeholder = cmds.spaceLocator(name=placeholder_name)[0]
+ # TODO: this can crash if selection can't be used
+ cmds.parent(placeholder, selection[0])
+
+ # get the long name of the placeholder (with the groups)
+ placeholder_full_name = (
+ cmds.ls(selection[0], long=True)[0]
+ + "|"
+ + placeholder.replace("|", "")
+ )
+
+ imprint(placeholder_full_name, placeholder_data)
+
+ # Add helper attributes to keep placeholder info
+ cmds.addAttr(
+ placeholder_full_name,
+ longName="parent",
+ hidden=True,
+ dataType="string"
+ )
+ cmds.addAttr(
+ placeholder_full_name,
+ longName="index",
+ hidden=True,
+ attributeType="short",
+ defaultValue=-1
+ )
+
+ cmds.setAttr(placeholder_full_name + ".parent", "", type="string")
+
+ def update_placeholder(self, placeholder_item, placeholder_data):
+ node_name = placeholder_item.scene_identifier
+ new_values = {}
+ for key, value in placeholder_data.items():
+ placeholder_value = placeholder_item.data.get(key)
+ if value != placeholder_value:
+ new_values[key] = value
+ placeholder_item.data[key] = value
+
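+        # Delete the attributes that changed so imprint() can recreate them
+        # with the new values.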
+ for key in new_values.keys():
+ cmds.deleteAttr(node_name + "." + key)
+
+ imprint(node_name, new_values)
+
+ def collect_placeholders(self):
+ output = []
+ scene_placeholders = self._collect_scene_placeholders()
+ for node_name, placeholder_data in scene_placeholders.items():
+ if placeholder_data.get("plugin_identifier") != self.identifier:
+ continue
+
+            # TODO do data validations and maybe upgrades if they are invalid
+ output.append(
+ LoadPlaceholderItem(node_name, placeholder_data, self)
+ )
+
+ return output
+
+ def populate_placeholder(self, placeholder):
+ self.populate_load_placeholder(placeholder)
+
+ def repopulate_placeholder(self, placeholder):
+ repre_ids = self._get_loaded_repre_ids()
+ self.populate_load_placeholder(placeholder, repre_ids)
+
+ def get_placeholder_options(self, options=None):
+ return self.get_load_plugin_options(options)
+
+ def cleanup_placeholder(self, placeholder, failed):
+ """Hide placeholder, parent them to root
+ add them to placeholder set and register placeholder's parent
+ to keep placeholder info available for future use
+ """
+
+        node = placeholder.scene_identifier
+ node_parent = placeholder.data["parent"]
+ if node_parent:
+ cmds.setAttr(node + ".parent", node_parent, type="string")
+
+ if cmds.getAttr(node + ".index") < 0:
+ cmds.setAttr(node + ".index", placeholder.data["index"])
+
+ holding_sets = cmds.listSets(object=node)
+ if holding_sets:
+            for holding_set in holding_sets:
+                cmds.sets(node, remove=holding_set)
+
+ if cmds.listRelatives(node, p=True):
+ node = cmds.parent(node, world=True)[0]
+ cmds.sets(node, addElement=PLACEHOLDER_SET)
+ cmds.hide(node)
+ cmds.setAttr(node + ".hiddenInOutliner", True)
+
+ def load_succeed(self, placeholder, container):
+        self._parent_in_hierarchy(placeholder, container)
+
+ def _parent_in_hierarchy(self, placeholder, container):
+ """Parent loaded container to placeholder's parent.
+
+        i.e. set loaded content as the placeholder's sibling.
+
+ Args:
+            container (str): Loaded container (object set) name
+ """
+
+ if not container:
+ return
+
+ roots = cmds.sets(container, q=True)
+ nodes_to_parent = []
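+        # Reference nodes are resolved to their parent transforms, regular
+        # nodes are parented directly and an empty object set returns early.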
+ for root in roots:
+ if root.endswith("_RN"):
+ refRoot = cmds.referenceQuery(root, n=True)[0]
+ refRoot = cmds.listRelatives(refRoot, parent=True) or [refRoot]
+ nodes_to_parent.extend(refRoot)
+ elif root not in cmds.listSets(allSets=True):
+ nodes_to_parent.append(root)
+
+ elif not cmds.sets(root, q=True):
+ return
+
+ if placeholder.data["parent"]:
+ cmds.parent(nodes_to_parent, placeholder.data["parent"])
+ # Move loaded nodes to correct index in outliner hierarchy
+ placeholder_form = cmds.xform(
+ placeholder.scene_identifier,
+ q=True,
+ matrix=True,
+ worldSpace=True
+ )
+ for node in set(nodes_to_parent):
+ cmds.reorder(node, front=True)
+ cmds.reorder(node, relative=placeholder.data["index"])
+ cmds.xform(node, matrix=placeholder_form, ws=True)
+
+ holding_sets = cmds.listSets(object=placeholder.scene_identifier)
+ if not holding_sets:
+ return
+ for holding_set in holding_sets:
+ cmds.sets(roots, forceElement=holding_set)
+
+
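+# Entry points used by the Maya menu (openpype/hosts/maya/api/menu.py).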
+def build_workfile_template(*args):
+ builder = MayaTemplateBuilder(registered_host())
+ builder.build_template()
+
+
+def update_workfile_template(*args):
+ builder = MayaTemplateBuilder(registered_host())
+ builder.rebuild_template()
+
+
+def create_placeholder(*args):
+ host = registered_host()
+ builder = MayaTemplateBuilder(host)
+ window = WorkfileBuildPlaceholderDialog(host, builder)
+ window.exec_()
+
+
+def update_placeholder(*args):
+ host = registered_host()
+ builder = MayaTemplateBuilder(host)
+ placeholder_items_by_id = {
+ placeholder_item.scene_identifier: placeholder_item
+ for placeholder_item in builder.get_placeholders()
+ }
+ placeholder_items = []
+ for node_name in cmds.ls(selection=True, long=True):
+ if node_name in placeholder_items_by_id:
+ placeholder_items.append(placeholder_items_by_id[node_name])
+
+ # TODO show UI at least
+ if len(placeholder_items) == 0:
+ raise ValueError("No node selected")
+
+ if len(placeholder_items) > 1:
+ raise ValueError("Too many selected nodes")
+
+ placeholder_item = placeholder_items[0]
+ window = WorkfileBuildPlaceholderDialog(host, builder)
+ window.set_update_mode(placeholder_item)
+ window.exec_()
diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py
index 5418ec1f2f..2b2c978d3c 100644
--- a/openpype/hosts/maya/plugins/create/create_render.py
+++ b/openpype/hosts/maya/plugins/create/create_render.py
@@ -9,7 +9,7 @@ import requests
from maya import cmds
from maya.app.renderSetup.model import renderSetup
-from openpype.api import (
+from openpype.settings import (
get_system_settings,
get_project_settings,
)
diff --git a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
index 4e4417ff34..44cbee0502 100644
--- a/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
+++ b/openpype/hosts/maya/plugins/create/create_unreal_staticmesh.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from openpype.hosts.maya.api import plugin, lib
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from maya import cmds # noqa
diff --git a/openpype/hosts/maya/plugins/create/create_vrayscene.py b/openpype/hosts/maya/plugins/create/create_vrayscene.py
index 45c4b7e443..59d80e6d5b 100644
--- a/openpype/hosts/maya/plugins/create/create_vrayscene.py
+++ b/openpype/hosts/maya/plugins/create/create_vrayscene.py
@@ -12,7 +12,7 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
-from openpype.api import (
+from openpype.settings import (
get_system_settings,
get_project_settings
)
diff --git a/openpype/hosts/maya/plugins/load/load_ass.py b/openpype/hosts/maya/plugins/load/load_ass.py
index d1b12ceaba..5db6fc3dfa 100644
--- a/openpype/hosts/maya/plugins/load/load_ass.py
+++ b/openpype/hosts/maya/plugins/load/load_ass.py
@@ -1,7 +1,7 @@
import os
import clique
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_gpucache.py b/openpype/hosts/maya/plugins/load/load_gpucache.py
index 179819f904..a09f924c7b 100644
--- a/openpype/hosts/maya/plugins/load/load_gpucache.py
+++ b/openpype/hosts/maya/plugins/load/load_gpucache.py
@@ -4,7 +4,7 @@ from openpype.pipeline import (
load,
get_representation_path
)
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
class GpuCacheLoader(load.LoaderPlugin):
diff --git a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py
index d93a9f02a2..c288e23ded 100644
--- a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py
+++ b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py
@@ -5,7 +5,7 @@ import clique
import maya.cmds as cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py
index 5a06661df9..c762a29326 100644
--- a/openpype/hosts/maya/plugins/load/load_reference.py
+++ b/openpype/hosts/maya/plugins/load/load_reference.py
@@ -1,7 +1,7 @@
import os
from maya import cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.create import (
legacy_create,
diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py
index d458c5abda..8a386cecfd 100644
--- a/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py
+++ b/openpype/hosts/maya/plugins/load/load_vdb_to_arnold.py
@@ -1,6 +1,6 @@
import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
index c6a69dfe35..1f02321dc8 100644
--- a/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
+++ b/openpype/hosts/maya/plugins/load/load_vdb_to_redshift.py
@@ -1,6 +1,6 @@
import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py
index 3a16264ec0..9267c59c02 100644
--- a/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py
+++ b/openpype/hosts/maya/plugins/load/load_vdb_to_vray.py
@@ -1,6 +1,6 @@
import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_vrayproxy.py b/openpype/hosts/maya/plugins/load/load_vrayproxy.py
index e3d6166d3a..720a132aa7 100644
--- a/openpype/hosts/maya/plugins/load/load_vrayproxy.py
+++ b/openpype/hosts/maya/plugins/load/load_vrayproxy.py
@@ -10,7 +10,7 @@ import os
import maya.cmds as cmds
from openpype.client import get_representation_by_name
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
legacy_io,
load,
diff --git a/openpype/hosts/maya/plugins/load/load_vrayscene.py b/openpype/hosts/maya/plugins/load/load_vrayscene.py
index 61132088cc..d87992f9a7 100644
--- a/openpype/hosts/maya/plugins/load/load_vrayscene.py
+++ b/openpype/hosts/maya/plugins/load/load_vrayscene.py
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
import os
import maya.cmds as cmds # noqa
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
diff --git a/openpype/hosts/maya/plugins/load/load_yeti_cache.py b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
index 8435ba2493..090047e22d 100644
--- a/openpype/hosts/maya/plugins/load/load_yeti_cache.py
+++ b/openpype/hosts/maya/plugins/load/load_yeti_cache.py
@@ -6,7 +6,7 @@ from collections import defaultdict
import clique
from maya import cmds
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
@@ -250,7 +250,7 @@ class YetiCacheLoader(load.LoaderPlugin):
"""
name = node_name.replace(":", "_")
- pattern = r"^({name})(\.[0-4]+)?(\.fur)$".format(name=re.escape(name))
+ pattern = r"^({name})(\.[0-9]+)?(\.fur)$".format(name=re.escape(name))
files = [fname for fname in os.listdir(root) if re.match(pattern,
fname)]
diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py
index 4b730ad2c1..651607de8a 100644
--- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py
+++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py
@@ -1,7 +1,7 @@
import os
from collections import defaultdict
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api import lib
diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py
index 14aac2f206..b1ad3ca58e 100644
--- a/openpype/hosts/maya/plugins/publish/collect_render.py
+++ b/openpype/hosts/maya/plugins/publish/collect_render.py
@@ -102,23 +102,26 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
}
for layer in collected_render_layers:
- try:
- if layer.startswith("LAYER_"):
- # this is support for legacy mode where render layers
- # started with `LAYER_` prefix.
- expected_layer_name = re.search(
- r"^LAYER_(.*)", layer).group(1)
- else:
- # new way is to prefix render layer name with instance
- # namespace.
- expected_layer_name = re.search(
- r"^.+:(.*)", layer).group(1)
- except IndexError:
+ if layer.startswith("LAYER_"):
+ # this is support for legacy mode where render layers
+ # started with `LAYER_` prefix.
+ layer_name_pattern = r"^LAYER_(.*)"
+ else:
+ # new way is to prefix render layer name with instance
+ # namespace.
+ layer_name_pattern = r"^.+:(.*)"
+
+ # todo: We should have a more explicit way to link the renderlayer
+ match = re.match(layer_name_pattern, layer)
+ if not match:
msg = "Invalid layer name in set [ {} ]".format(layer)
self.log.warning(msg)
continue
- self.log.info("processing %s" % layer)
+ expected_layer_name = match.group(1)
+ self.log.info("Processing '{}' as layer [ {} ]"
+ "".format(layer, expected_layer_name))
+
# check if layer is part of renderSetup
if expected_layer_name not in maya_render_layers:
msg = "Render layer [ {} ] is not in " "Render Setup".format(
diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py
index 81fdba2f98..8df7edc3f4 100644
--- a/openpype/hosts/maya/plugins/publish/extract_playblast.py
+++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py
@@ -136,8 +136,10 @@ class ExtractPlayblast(publish.Extractor):
self.log.debug("playblast path {}".format(path))
collected_files = os.listdir(stagingdir)
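+        # Restrict clique to its frame-number pattern so only frame
+        # sequences are assembled from the staging directory listing.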
+ patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(collected_files,
- minimum_items=1)
+ minimum_items=1,
+ patterns=patterns)
self.log.debug("filename {}".format(filename))
frame_collection = None
diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
index c4250a20bd..df06220fd3 100644
--- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
+++ b/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
@@ -11,7 +11,7 @@ import pyblish.api
from openpype.lib import requests_post
from openpype.hosts.maya.api import lib
from openpype.pipeline import legacy_io
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
# mapping between Maya renderer names and Muster template ids
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_content.py b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
index 6f7a6b905a..9dbb09a046 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_content.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
index aa27633402..649913fff6 100644
--- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py
@@ -1,7 +1,6 @@
import maya.cmds as cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
index a9ea5a6d15..229da63c42 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_namespaces.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
index fb25b617be..3f2c59b95b 100644
--- a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
+++ b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
from maya import cmds
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
index 19c1179e52..bd1529e252 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
index f846319807..1ce8026fc2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
index cab9d6ebab..905417bafa 100644
--- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
index d3b8316d94..210ee4127c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
+++ b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py
@@ -2,7 +2,6 @@ from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
index bf92ac5099..4870f27bff 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_contents.py b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
index d9819b05d5..53501d11e5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_contents.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
index f223c1a42b..a266a0fd74 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py
@@ -2,7 +2,6 @@ from collections import defaultdict
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
index 210fcb174d..f81e511ff3 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py
@@ -1,7 +1,6 @@
from collections import defaultdict
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidatePipelineOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
index 95f8fa20d0..db6aadae8d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
index 3a60b771f4..8434ddde04 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_sets.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
index 7d043eddb8..9b57b06ee7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
index 51e1232bb7..788e440d12 100644
--- a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
+++ b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
index abfe1213a0..c1c0636b9e 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py
@@ -1,7 +1,6 @@
import pymel.core as pc
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
index 4d2885d6e2..36a0da7a59 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py
@@ -3,7 +3,6 @@ import re
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py b/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py
index e7a73c21b0..4427c6eece 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py b/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py
index 24d6188ec8..5b67db3307 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
index 18ceccaa28..664e2b5772 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
index e75a132d50..d7711da722 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
index 8c03b54971..0ef2716559 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateMeshOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
index 7d88161058..c8892a8e59 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py
@@ -2,7 +2,6 @@ from maya import cmds
import maya.api.OpenMaya as om2
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
index dde3e4fead..be7324a68f 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
import math
import maya.api.OpenMaya as om
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
index 9621fd5aa8..2a0abe975c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
index 3fb09356d3..6ca8c06ba5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
index 2711682f76..40ddb916ca 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
index 350a5f4789..1e6d290ae7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py
@@ -3,7 +3,6 @@ import re
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py
index 0557858639..723346a285 100644
--- a/openpype/hosts/maya/plugins/publish/validate_model_content.py
+++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py
index 99a4b2654e..2dec9ba267 100644
--- a/openpype/hosts/maya/plugins/publish/validate_model_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py
@@ -5,7 +5,6 @@ import re
from maya import cmds
import pyblish.api
-import openpype.api
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import ValidateContentsOrder
import openpype.hosts.maya.api.action
diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
index 62f360cd86..67fc1616c2 100644
--- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
+++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py
@@ -1,6 +1,5 @@
import os
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_animation.py b/openpype/hosts/maya/plugins/publish/validate_no_animation.py
index 177de1468d..2e7cafe4ab 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_animation.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_animation.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
index d4ddb28070..1a5773e6a7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
index 95caa1007f..01c77e5b2e 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py
@@ -2,7 +2,6 @@ import pymel.core as pm
import maya.cmds as cmds
import pyblish.api
-import openpype.api
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
index f31fd09c95..b430c2b63c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py
@@ -1,7 +1,6 @@
import maya.cmds as cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
index 20fe34f2fd..2cfdc28128 100644
--- a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_node_ids.py
index 877ba0e781..796f4c8d76 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids.py
@@ -1,5 +1,5 @@
import pyblish.api
-import openpype.api
+
from openpype.pipeline.publish import ValidatePipelineOrder
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
index 1fe4a34e07..68c47f3a96 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
index a5b1215f30..b2f28fd4e5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py
@@ -1,6 +1,5 @@
import pyblish.api
-import openpype.api
from openpype.client import get_assets
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import ValidatePipelineOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
index a7595d7392..f901dc58c4 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
from openpype.pipeline.publish import ValidatePipelineOrder
import openpype.hosts.maya.api.action
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
index 5ff18358e2..f7a5e6e292 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py
@@ -1,7 +1,6 @@
from collections import defaultdict
import pyblish.api
-import openpype.api
from openpype.pipeline.publish import ValidatePipelineOrder
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
diff --git a/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py b/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py
index 2f22d6da1e..0f608dab2c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py
+++ b/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
index da35f42291..67ece75af8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
+++ b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
index fc41b1cf5b..77322fefd5 100644
--- a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
+++ b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py
@@ -3,9 +3,8 @@ import re
import pyblish.api
from maya import cmds
-import openpype.api
import openpype.hosts.maya.api.action
-from openpype.hosts.maya.api.render_settings import RenderSettings
+from openpype.hosts.maya.api.lib_rendersettings import RenderSettings
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
index 2474b2ead6..ca44e98605 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
@@ -268,14 +268,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
# go through definitions and test if such node.attribute exists.
# if so, compare its value from the one required.
for attr, value in OrderedDict(validation_settings).items():
- # first get node of that type
cls.log.debug("{}: {}".format(attr, value))
- node_type = attr.split(".")[0]
- attribute_name = ".".join(attr.split(".")[1:])
+ if "." not in attr:
+ cls.log.warning("Skipping invalid attribute defined in "
+ "validation settings: '{}'".format(attr))
+ continue
+
+ node_type, attribute_name = attr.split(".", 1)
+
+ # first get node of that type
nodes = cmds.ls(type=node_type)
- if not isinstance(nodes, list):
- cls.log.warning("No nodes of '{}' found.".format(node_type))
+ if not nodes:
+ cls.log.warning(
+ "No nodes of type '{}' found.".format(node_type))
continue
for node in nodes:
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
index 3d486cf7a4..55b2ebd6d8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py b/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py
index 86967d7502..d5bf7fd1cf 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
index 70128ac493..03ba381f8d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py
@@ -1,7 +1,7 @@
import maya.cmds as cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
index f075f42ff2..f3ed1a36ef 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
@@ -2,7 +2,6 @@ import pymel.core as pc
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
RepairAction,
diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py
index 522b42fd00..b3e51f011d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py
@@ -2,7 +2,7 @@ import re
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
index 25bd3442a3..651c6bcec9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py
@@ -3,7 +3,7 @@ import re
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import (
ValidateContentsOrder,
diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py b/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py
index 0980d6b4b6..f58c0aaf81 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
from maya import cmds
diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
index 9e30735d40..7a7e9a0aee 100644
--- a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
+++ b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
diff --git a/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py b/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py
index 86ff914cb0..b45d2b120a 100644
--- a/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py
+++ b/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_step_size.py b/openpype/hosts/maya/plugins/publish/validate_step_size.py
index 552a936966..294458f63c 100644
--- a/openpype/hosts/maya/plugins/publish/validate_step_size.py
+++ b/openpype/hosts/maya/plugins/publish/validate_step_size.py
@@ -1,5 +1,5 @@
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
index 64faf9ecb6..4615e2ec07 100644
--- a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py
@@ -3,7 +3,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
index 9e232f6023..da569195e8 100644
--- a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
+++ b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/plugins/publish/validate_unique_names.py b/openpype/hosts/maya/plugins/publish/validate_unique_names.py
similarity index 94%
rename from openpype/plugins/publish/validate_unique_names.py
rename to openpype/hosts/maya/plugins/publish/validate_unique_names.py
index 33a460f7cc..05776ee0f3 100644
--- a/openpype/plugins/publish/validate_unique_names.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unique_names.py
@@ -1,7 +1,6 @@
from maya import cmds
import pyblish.api
-import openpype.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
@@ -24,7 +23,7 @@ class ValidateUniqueNames(pyblish.api.Validator):
"""Returns the invalid transforms in the instance.
Returns:
- list: Non unique name transforms
+ list: Non-unique name transforms.
"""
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
index 1ed3e5531c..4211e76a73 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py
@@ -2,8 +2,9 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
from openpype.pipeline.publish import ValidateMeshOrder
+import openpype.hosts.maya.api.action
class ValidateUnrealMeshTriangulated(pyblish.api.InstancePlugin):
diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
index a4bb54f5af..1425190b82 100644
--- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
+++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py
@@ -3,7 +3,7 @@
import re
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline import legacy_io
from openpype.settings import get_project_settings
diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
index f326b91796..faf634f258 100644
--- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py
+++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py
@@ -1,6 +1,5 @@
import pyblish.api
-import openpype.api
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
index b94e5cbbed..855a96e6b9 100644
--- a/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_vrayproxy_members.py
@@ -1,5 +1,4 @@
import pyblish.api
-import openpype.api
from maya import cmds
diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
index 0fe89634f5..ebef44774d 100644
--- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
+++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py
@@ -1,7 +1,7 @@
from maya import cmds
import pyblish.api
-import openpype.api
+
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py
index 10e68c2ddb..40cd51f2d8 100644
--- a/openpype/hosts/maya/startup/userSetup.py
+++ b/openpype/hosts/maya/startup/userSetup.py
@@ -1,5 +1,5 @@
import os
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
from maya import cmds
diff --git a/openpype/hosts/nuke/api/__init__.py b/openpype/hosts/nuke/api/__init__.py
index 962f31c177..c65058874b 100644
--- a/openpype/hosts/nuke/api/__init__.py
+++ b/openpype/hosts/nuke/api/__init__.py
@@ -21,6 +21,8 @@ from .pipeline import (
containerise,
parse_container,
update_container,
+
+ get_workfile_build_placeholder_plugins,
)
from .lib import (
maintained_selection,
@@ -55,6 +57,8 @@ __all__ = (
"parse_container",
"update_container",
+ "get_workfile_build_placeholder_plugins",
+
"maintained_selection",
"reset_selection",
"get_view_process_node",
diff --git a/openpype/hosts/nuke/api/gizmo_menu.py b/openpype/hosts/nuke/api/gizmo_menu.py
index 0f1a3e03fc..9edfc62e3b 100644
--- a/openpype/hosts/nuke/api/gizmo_menu.py
+++ b/openpype/hosts/nuke/api/gizmo_menu.py
@@ -2,7 +2,7 @@ import os
import re
import nuke
-from openpype.api import Logger
+from openpype.lib import Logger
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/nuke/api/lib_template_builder.py b/openpype/hosts/nuke/api/lib_template_builder.py
deleted file mode 100644
index 61baa23928..0000000000
--- a/openpype/hosts/nuke/api/lib_template_builder.py
+++ /dev/null
@@ -1,220 +0,0 @@
-from collections import OrderedDict
-
-import qargparse
-
-import nuke
-
-from openpype.tools.utils.widgets import OptionDialog
-
-from .lib import imprint, get_main_window
-
-
-# To change as enum
-build_types = ["context_asset", "linked_asset", "all_assets"]
-
-
-def get_placeholder_attributes(node, enumerate=False):
- list_atts = {
- "builder_type",
- "family",
- "representation",
- "loader",
- "loader_args",
- "order",
- "asset",
- "subset",
- "hierarchy",
- "siblings",
- "last_loaded"
- }
- attributes = {}
- for attr in node.knobs().keys():
- if attr in list_atts:
- if enumerate:
- try:
- attributes[attr] = node.knob(attr).values()
- except AttributeError:
- attributes[attr] = node.knob(attr).getValue()
- else:
- attributes[attr] = node.knob(attr).getValue()
-
- return attributes
-
-
-def delete_placeholder_attributes(node):
- """Delete all extra placeholder attributes."""
-
- extra_attributes = get_placeholder_attributes(node)
- for attribute in extra_attributes.keys():
- try:
- node.removeKnob(node.knob(attribute))
- except ValueError:
- continue
-
-
-def hide_placeholder_attributes(node):
- """Hide all extra placeholder attributes."""
-
- extra_attributes = get_placeholder_attributes(node)
- for attribute in extra_attributes.keys():
- try:
- node.knob(attribute).setVisible(False)
- except ValueError:
- continue
-
-
-def create_placeholder():
- args = placeholder_window()
- if not args:
- # operation canceled, no locator created
- return
-
- placeholder = nuke.nodes.NoOp()
- placeholder.setName("PLACEHOLDER")
- placeholder.knob("tile_color").setValue(4278190335)
-
- # custom arg parse to force empty data query
- # and still imprint them on placeholder
- # and getting items when arg is of type Enumerator
- options = OrderedDict()
- for arg in args:
- if not type(arg) == qargparse.Separator:
- options[str(arg)] = arg._data.get("items") or arg.read()
- imprint(placeholder, options)
- imprint(placeholder, {"is_placeholder": True})
- placeholder.knob("is_placeholder").setVisible(False)
-
-
-def update_placeholder():
- placeholder = nuke.selectedNodes()
- if not placeholder:
- raise ValueError("No node selected")
- if len(placeholder) > 1:
- raise ValueError("Too many selected nodes")
- placeholder = placeholder[0]
-
- args = placeholder_window(get_placeholder_attributes(placeholder))
- if not args:
- return # operation canceled
- # delete placeholder attributes
- delete_placeholder_attributes(placeholder)
-
- options = OrderedDict()
- for arg in args:
- if not type(arg) == qargparse.Separator:
- options[str(arg)] = arg._data.get("items") or arg.read()
- imprint(placeholder, options)
-
-
-def imprint_enum(placeholder, args):
- """
- Imprint method doesn't act properly with enums.
- Replacing the functionnality with this for now
- """
-
- enum_values = {
- str(arg): arg.read()
- for arg in args
- if arg._data.get("items")
- }
- string_to_value_enum_table = {
- build: idx
- for idx, build in enumerate(build_types)
- }
- attrs = {}
- for key, value in enum_values.items():
- attrs[key] = string_to_value_enum_table[value]
-
-
-def placeholder_window(options=None):
- options = options or dict()
- dialog = OptionDialog(parent=get_main_window())
- dialog.setWindowTitle("Create Placeholder")
-
- args = [
- qargparse.Separator("Main attributes"),
- qargparse.Enum(
- "builder_type",
- label="Asset Builder Type",
- default=options.get("builder_type", 0),
- items=build_types,
- help="""Asset Builder Type
-Builder type describe what template loader will look for.
-
-context_asset : Template loader will look for subsets of
-current context asset (Asset bob will find asset)
-
-linked_asset : Template loader will look for assets linked
-to current context asset.
-Linked asset are looked in OpenPype database under field "inputLinks"
-"""
- ),
- qargparse.String(
- "family",
- default=options.get("family", ""),
- label="OpenPype Family",
- placeholder="ex: image, plate ..."),
- qargparse.String(
- "representation",
- default=options.get("representation", ""),
- label="OpenPype Representation",
- placeholder="ex: mov, png ..."),
- qargparse.String(
- "loader",
- default=options.get("loader", ""),
- label="Loader",
- placeholder="ex: LoadClip, LoadImage ...",
- help="""Loader
-
-Defines what openpype loader will be used to load assets.
-Useable loader depends on current host's loader list.
-Field is case sensitive.
-"""),
- qargparse.String(
- "loader_args",
- default=options.get("loader_args", ""),
- label="Loader Arguments",
- placeholder='ex: {"camera":"persp", "lights":True}',
- help="""Loader
-
-Defines a dictionnary of arguments used to load assets.
-Useable arguments depend on current placeholder Loader.
-Field should be a valid python dict. Anything else will be ignored.
-"""),
- qargparse.Integer(
- "order",
- default=options.get("order", 0),
- min=0,
- max=999,
- label="Order",
- placeholder="ex: 0, 100 ... (smallest order loaded first)",
- help="""Order
-
-Order defines asset loading priority (0 to 999)
-Priority rule is : "lowest is first to load"."""),
- qargparse.Separator(
- "Optional attributes "),
- qargparse.String(
- "asset",
- default=options.get("asset", ""),
- label="Asset filter",
- placeholder="regex filtering by asset name",
- help="Filtering assets by matching field regex to asset's name"),
- qargparse.String(
- "subset",
- default=options.get("subset", ""),
- label="Subset filter",
- placeholder="regex filtering by subset name",
- help="Filtering assets by matching field regex to subset's name"),
- qargparse.String(
- "hierarchy",
- default=options.get("hierarchy", ""),
- label="Hierarchy filter",
- placeholder="regex filtering by asset's hierarchy",
- help="Filtering assets by matching field asset's hierarchy")
- ]
- dialog.create(args)
- if not dialog.exec_():
- return None
-
- return args
diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py
index bac42128cc..7db420f6af 100644
--- a/openpype/hosts/nuke/api/pipeline.py
+++ b/openpype/hosts/nuke/api/pipeline.py
@@ -7,11 +7,8 @@ import nuke
import pyblish.api
import openpype
-from openpype.api import (
- Logger,
- get_current_project_settings
-)
-from openpype.lib import register_event_callback
+from openpype.settings import get_current_project_settings
+from openpype.lib import register_event_callback, Logger
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@@ -22,10 +19,6 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
)
from openpype.pipeline.workfile import BuildWorkfile
-from openpype.pipeline.workfile.build_template import (
- build_workfile_template,
- update_workfile_template
-)
from openpype.tools.utils import host_tools
from .command import viewer_update_and_undo_stop
@@ -40,8 +33,12 @@ from .lib import (
set_avalon_knob_data,
read_avalon_data,
)
-from .lib_template_builder import (
- create_placeholder, update_placeholder
+from .workfile_template_builder import (
+ NukePlaceholderLoadPlugin,
+ build_workfile_template,
+ update_workfile_template,
+ create_placeholder,
+ update_placeholder,
)
log = Logger.get_logger(__name__)
@@ -141,6 +138,12 @@ def _show_workfiles():
host_tools.show_workfiles(parent=None, on_top=False)
+def get_workfile_build_placeholder_plugins():
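+    """Return placeholder plugin classes used for workfile template builds."""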
+ return [
+ NukePlaceholderLoadPlugin
+ ]
+
+
def _install_menu():
# uninstall original avalon menu
main_window = get_main_window()
diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py
index 37ce03dc55..91bb90ff99 100644
--- a/openpype/hosts/nuke/api/plugin.py
+++ b/openpype/hosts/nuke/api/plugin.py
@@ -6,7 +6,7 @@ from abc import abstractmethod
import nuke
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
from openpype.pipeline import (
LegacyCreator,
LoaderPlugin,
diff --git a/openpype/hosts/nuke/api/template_loader.py b/openpype/hosts/nuke/api/workfile_template_builder.py
similarity index 51%
rename from openpype/hosts/nuke/api/template_loader.py
rename to openpype/hosts/nuke/api/workfile_template_builder.py
index 5ff4b8fc41..7a2e442e32 100644
--- a/openpype/hosts/nuke/api/template_loader.py
+++ b/openpype/hosts/nuke/api/workfile_template_builder.py
@@ -1,13 +1,16 @@
-import re
import collections
import nuke
-from openpype.client import get_representations
-from openpype.pipeline import legacy_io
-from openpype.pipeline.workfile.abstract_template_loader import (
- AbstractPlaceholder,
- AbstractTemplateLoader,
+from openpype.pipeline import registered_host
+from openpype.pipeline.workfile.workfile_template_builder import (
+ AbstractTemplateBuilder,
+ PlaceholderPlugin,
+ LoadPlaceholderItem,
+ PlaceholderLoadMixin,
+)
+from openpype.tools.workfile_template_build import (
+ WorkfileBuildPlaceholderDialog,
)
from .lib import (
@@ -25,19 +28,11 @@ from .lib import (
node_tempfile,
)
-from .lib_template_builder import (
- delete_placeholder_attributes,
- get_placeholder_attributes,
- hide_placeholder_attributes
-)
-
PLACEHOLDER_SET = "PLACEHOLDERS_SET"
-class NukeTemplateLoader(AbstractTemplateLoader):
- """Concrete implementation of AbstractTemplateLoader for Nuke
-
- """
+class NukeTemplateBuilder(AbstractTemplateBuilder):
+ """Concrete implementation of AbstractTemplateBuilder for maya"""
def import_template(self, path):
"""Import template into current scene.
@@ -58,224 +53,255 @@ class NukeTemplateLoader(AbstractTemplateLoader):
return True
- def preload(self, placeholder, loaders_by_name, last_representation):
- placeholder.data["nodes_init"] = nuke.allNodes()
- placeholder.data["last_repre_id"] = str(last_representation["_id"])
- def populate_template(self, ignored_ids=None):
- processed_key = "_node_processed"
+class NukePlaceholderPlugin(PlaceholderPlugin):
+ node_color = 4278190335
- processed_nodes = []
- nodes = self.get_template_nodes()
- while nodes:
- # Mark nodes as processed so they're not re-executed
- # - that can happen if processing of placeholder node fails
- for node in nodes:
- imprint(node, {processed_key: True})
- processed_nodes.append(node)
+ def _collect_scene_placeholders(self):
+ # Cache placeholder data to shared data
+ placeholder_nodes = self.builder.get_shared_populate_data(
+ "placeholder_nodes"
+ )
+ if placeholder_nodes is None:
+ placeholder_nodes = {}
+ all_groups = collections.deque()
+ all_groups.append(nuke.thisGroup())
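+            # Breadth-first traversal of all group nodes to find placeholders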
+ while all_groups:
+ group = all_groups.popleft()
+ for node in group.nodes():
+ if isinstance(node, nuke.Group):
+ all_groups.append(node)
- super(NukeTemplateLoader, self).populate_template(ignored_ids)
+ node_knobs = node.knobs()
+ if (
+ "builder_type" not in node_knobs
+ or "is_placeholder" not in node_knobs
+ or not node.knob("is_placeholder").value()
+ ):
+ continue
- # Recollect nodes to repopulate
- nodes = []
- for node in self.get_template_nodes():
- # Skip already processed nodes
- if (
- processed_key in node.knobs()
- and node.knob(processed_key).value()
- ):
- continue
- nodes.append(node)
+ if "empty" in node_knobs and node.knob("empty").value():
+ continue
- for node in processed_nodes:
- knob = node.knob(processed_key)
+ placeholder_nodes[node.fullName()] = node
+
+ self.builder.set_shared_populate_data(
+ "placeholder_nodes", placeholder_nodes
+ )
+ return placeholder_nodes
+
+ def create_placeholder(self, placeholder_data):
+ placeholder_data["plugin_identifier"] = self.identifier
+
+ placeholder = nuke.nodes.NoOp()
+ placeholder.setName("PLACEHOLDER")
+ placeholder.knob("tile_color").setValue(self.node_color)
+
+ imprint(placeholder, placeholder_data)
+ imprint(placeholder, {"is_placeholder": True})
+ placeholder.knob("is_placeholder").setVisible(False)
+
+ def update_placeholder(self, placeholder_item, placeholder_data):
+ node = nuke.toNode(placeholder_item.scene_identifier)
+ imprint(node, placeholder_data)
+
+ def _parse_placeholder_node_data(self, node):
+ placeholder_data = {}
+ for key in self.get_placeholder_keys():
+ knob = node.knob(key)
+ value = None
if knob is not None:
- node.removeKnob(knob)
-
- @staticmethod
- def get_template_nodes():
- placeholders = []
- all_groups = collections.deque()
- all_groups.append(nuke.thisGroup())
- while all_groups:
- group = all_groups.popleft()
- for node in group.nodes():
- if isinstance(node, nuke.Group):
- all_groups.append(node)
-
- node_knobs = node.knobs()
- if (
- "builder_type" not in node_knobs
- or "is_placeholder" not in node_knobs
- or not node.knob("is_placeholder").value()
- ):
- continue
-
- if "empty" in node_knobs and node.knob("empty").value():
- continue
-
- placeholders.append(node)
-
- return placeholders
-
- def update_missing_containers(self):
- nodes_by_id = collections.defaultdict(list)
-
- for node in nuke.allNodes():
- node_knobs = node.knobs().keys()
- if "repre_id" in node_knobs:
- repre_id = node.knob("repre_id").getValue()
- nodes_by_id[repre_id].append(node.name())
-
- if "empty" in node_knobs:
- node.removeKnob(node.knob("empty"))
- imprint(node, {"empty": False})
-
- for node_names in nodes_by_id.values():
- node = None
- for node_name in node_names:
- node_by_name = nuke.toNode(node_name)
- if "builder_type" in node_by_name.knobs().keys():
- node = node_by_name
- break
-
- if node is None:
- continue
-
- placeholder = nuke.nodes.NoOp()
- placeholder.setName("PLACEHOLDER")
- placeholder.knob("tile_color").setValue(4278190335)
- attributes = get_placeholder_attributes(node, enumerate=True)
- imprint(placeholder, attributes)
- pos_x = int(node.knob("x").getValue())
- pos_y = int(node.knob("y").getValue())
- placeholder.setXYpos(pos_x, pos_y)
- imprint(placeholder, {"nb_children": 1})
- refresh_node(placeholder)
-
- self.populate_template(self.get_loaded_containers_by_id())
-
- def get_loaded_containers_by_id(self):
- repre_ids = set()
- for node in nuke.allNodes():
- if "repre_id" in node.knobs():
- repre_ids.add(node.knob("repre_id").getValue())
-
- # Removes duplicates in the list
- return list(repre_ids)
-
- def delete_placeholder(self, placeholder):
- placeholder_node = placeholder.data["node"]
- last_loaded = placeholder.data["last_loaded"]
- if not placeholder.data["delete"]:
- if "empty" in placeholder_node.knobs().keys():
- placeholder_node.removeKnob(placeholder_node.knob("empty"))
- imprint(placeholder_node, {"empty": True})
- return
-
- if not last_loaded:
- nuke.delete(placeholder_node)
- return
-
- if "last_loaded" in placeholder_node.knobs().keys():
- for node_name in placeholder_node.knob("last_loaded").values():
- node = nuke.toNode(node_name)
- try:
- delete_placeholder_attributes(node)
- except Exception:
- pass
-
- last_loaded_names = [
- loaded_node.name()
- for loaded_node in last_loaded
- ]
- imprint(placeholder_node, {"last_loaded": last_loaded_names})
-
- for node in last_loaded:
- refresh_node(node)
- refresh_node(placeholder_node)
- if "builder_type" not in node.knobs().keys():
- attributes = get_placeholder_attributes(placeholder_node, True)
- imprint(node, attributes)
- imprint(node, {"is_placeholder": False})
- hide_placeholder_attributes(node)
- node.knob("is_placeholder").setVisible(False)
- imprint(
- node,
- {
- "x": placeholder_node.xpos(),
- "y": placeholder_node.ypos()
- }
- )
- node.knob("x").setVisible(False)
- node.knob("y").setVisible(False)
- nuke.delete(placeholder_node)
+ value = knob.getValue()
+ placeholder_data[key] = value
+ return placeholder_data
-class NukePlaceholder(AbstractPlaceholder):
- """Concrete implementation of AbstractPlaceholder for Nuke"""
+class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
+ identifier = "nuke.load"
+ label = "Nuke load"
- optional_keys = {"asset", "subset", "hierarchy"}
+ def _parse_placeholder_node_data(self, node):
+ placeholder_data = super(
+ NukePlaceholderLoadPlugin, self
+ )._parse_placeholder_node_data(node)
- def get_data(self, node):
- user_data = dict()
node_knobs = node.knobs()
- for attr in self.required_keys.union(self.optional_keys):
- if attr in node_knobs:
- user_data[attr] = node_knobs[attr].getValue()
- user_data["node"] = node
-
nb_children = 0
if "nb_children" in node_knobs:
nb_children = int(node_knobs["nb_children"].getValue())
- user_data["nb_children"] = nb_children
+ placeholder_data["nb_children"] = nb_children
siblings = []
if "siblings" in node_knobs:
siblings = node_knobs["siblings"].values()
- user_data["siblings"] = siblings
+ placeholder_data["siblings"] = siblings
node_full_name = node.fullName()
- user_data["group_name"] = node_full_name.rpartition(".")[0]
- user_data["last_loaded"] = []
- user_data["delete"] = False
- self.data = user_data
+ placeholder_data["group_name"] = node_full_name.rpartition(".")[0]
+ placeholder_data["last_loaded"] = []
+ placeholder_data["delete"] = False
+ return placeholder_data
- def parent_in_hierarchy(self, containers):
- return
+ def _get_loaded_repre_ids(self):
+ loaded_representation_ids = self.builder.get_shared_populate_data(
+ "loaded_representation_ids"
+ )
+ if loaded_representation_ids is None:
+ loaded_representation_ids = set()
+ for node in nuke.allNodes():
+ if "repre_id" in node.knobs():
+ loaded_representation_ids.add(
+ node.knob("repre_id").getValue()
+ )
- def create_sib_copies(self):
- """ creating copies of the palce_holder siblings (the ones who were
- loaded with it) for the new nodes added
+ self.builder.set_shared_populate_data(
+ "loaded_representation_ids", loaded_representation_ids
+ )
+ return loaded_representation_ids
+
+ def _before_repre_load(self, placeholder, representation):
+ placeholder.data["nodes_init"] = nuke.allNodes()
+ placeholder.data["last_repre_id"] = str(representation["_id"])
+
+ def collect_placeholders(self):
+ output = []
+ scene_placeholders = self._collect_scene_placeholders()
+ for node_name, node in scene_placeholders.items():
+ plugin_identifier_knob = node.knob("plugin_identifier")
+ if (
+ plugin_identifier_knob is None
+ or plugin_identifier_knob.getValue() != self.identifier
+ ):
+ continue
+
+ placeholder_data = self._parse_placeholder_node_data(node)
+            # TODO validate placeholder data and maybe upgrade it if invalid
+ output.append(
+ LoadPlaceholderItem(node_name, placeholder_data, self)
+ )
+
+ return output
+
+ def populate_placeholder(self, placeholder):
+ self.populate_load_placeholder(placeholder)
+
+ def repopulate_placeholder(self, placeholder):
+ repre_ids = self._get_loaded_repre_ids()
+ self.populate_load_placeholder(placeholder, repre_ids)
+
+ def get_placeholder_options(self, options=None):
+ return self.get_load_plugin_options(options)
+
+ def cleanup_placeholder(self, placeholder, failed):
+ # deselect all selected nodes
+ placeholder_node = nuke.toNode(placeholder.scene_identifier)
+
+ # getting the latest nodes added
+ # TODO get from shared populate data!
+ nodes_init = placeholder.data["nodes_init"]
+ nodes_loaded = list(set(nuke.allNodes()) - set(nodes_init))
+ self.log.debug("Loaded nodes: {}".format(nodes_loaded))
+ if not nodes_loaded:
+ return
+
+ placeholder.data["delete"] = True
+
+ nodes_loaded = self._move_to_placeholder_group(
+ placeholder, nodes_loaded
+ )
+ placeholder.data["last_loaded"] = nodes_loaded
+ refresh_nodes(nodes_loaded)
+
+ # positioning of the loaded nodes
+ min_x, min_y, _, _ = get_extreme_positions(nodes_loaded)
+ for node in nodes_loaded:
+ xpos = (node.xpos() - min_x) + placeholder_node.xpos()
+ ypos = (node.ypos() - min_y) + placeholder_node.ypos()
+ node.setXYpos(xpos, ypos)
+ refresh_nodes(nodes_loaded)
+
+ # fix the problem of z_order for backdrops
+ self._fix_z_order(placeholder)
+ self._imprint_siblings(placeholder)
+
+ if placeholder.data["nb_children"] == 0:
+            # save initial nodes positions and dimensions, update them
+ # and set inputs and outputs of loaded nodes
+
+ self._imprint_inits()
+ self._update_nodes(placeholder, nuke.allNodes(), nodes_loaded)
+ self._set_loaded_connections(placeholder)
+
+ elif placeholder.data["siblings"]:
+ # create copies of placeholder siblings for the new loaded nodes,
+            # set their inputs and outputs and update all nodes positions and
+            # dimensions and sibling names
+
+ siblings = get_nodes_by_names(placeholder.data["siblings"])
+ refresh_nodes(siblings)
+ copies = self._create_sib_copies(placeholder)
+ new_nodes = list(copies.values()) # copies nodes
+ self._update_nodes(new_nodes, nodes_loaded)
+ placeholder_node.removeKnob(placeholder_node.knob("siblings"))
+ new_nodes_name = get_names_from_nodes(new_nodes)
+ imprint(placeholder_node, {"siblings": new_nodes_name})
+ self._set_copies_connections(placeholder, copies)
+
+ self._update_nodes(
+ nuke.allNodes(),
+ new_nodes + nodes_loaded,
+ 20
+ )
+
+ new_siblings = get_names_from_nodes(new_nodes)
+ placeholder.data["siblings"] = new_siblings
+
+ else:
+ # if the placeholder doesn't have siblings, the loaded
+ # nodes will be placed in a free space
+
+ xpointer, ypointer = find_free_space_to_paste_nodes(
+ nodes_loaded, direction="bottom", offset=200
+ )
+ node = nuke.createNode("NoOp")
+ reset_selection()
+ nuke.delete(node)
+ for node in nodes_loaded:
+ xpos = (node.xpos() - min_x) + xpointer
+ ypos = (node.ypos() - min_y) + ypointer
+ node.setXYpos(xpos, ypos)
+
+ placeholder.data["nb_children"] += 1
+ reset_selection()
+ # go back to root group
+ nuke.root().begin()
+
+ def _move_to_placeholder_group(self, placeholder, nodes_loaded):
+ """
+        Open the placeholder's group and copy the loaded nodes into it.
Returns :
- copies (dict) : with copied nodes names and their copies
+ nodes_loaded (list): the new list of pasted nodes
"""
- copies = {}
- siblings = get_nodes_by_names(self.data["siblings"])
- for node in siblings:
- new_node = duplicate_node(node)
+ groups_name = placeholder.data["group_name"]
+ reset_selection()
+ select_nodes(nodes_loaded)
+ if groups_name:
+ with node_tempfile() as filepath:
+ nuke.nodeCopy(filepath)
+ for node in nuke.selectedNodes():
+ nuke.delete(node)
+ group = nuke.toNode(groups_name)
+ group.begin()
+ nuke.nodePaste(filepath)
+ nodes_loaded = nuke.selectedNodes()
+ return nodes_loaded
- x_init = int(new_node.knob("x_init").getValue())
- y_init = int(new_node.knob("y_init").getValue())
- new_node.setXYpos(x_init, y_init)
- if isinstance(new_node, nuke.BackdropNode):
- w_init = new_node.knob("w_init").getValue()
- h_init = new_node.knob("h_init").getValue()
- new_node.knob("bdwidth").setValue(w_init)
- new_node.knob("bdheight").setValue(h_init)
- refresh_node(node)
-
- if "repre_id" in node.knobs().keys():
- node.removeKnob(node.knob("repre_id"))
- copies[node.name()] = new_node
- return copies
-
- def fix_z_order(self):
+ def _fix_z_order(self, placeholder):
"""Fix the problem of z_order when a backdrop is loaded."""
- nodes_loaded = self.data["last_loaded"]
+ nodes_loaded = placeholder.data["last_loaded"]
loaded_backdrops = []
bd_orders = set()
for node in nodes_loaded:
@@ -287,7 +313,7 @@ class NukePlaceholder(AbstractPlaceholder):
return
sib_orders = set()
- for node_name in self.data["siblings"]:
+ for node_name in placeholder.data["siblings"]:
node = nuke.toNode(node_name)
if isinstance(node, nuke.BackdropNode):
sib_orders.add(node.knob("z_order").getValue())
@@ -302,7 +328,56 @@ class NukePlaceholder(AbstractPlaceholder):
backdrop_node.knob("z_order").setValue(
z_order + max_order - min_order + 1)
- def update_nodes(self, nodes, considered_nodes, offset_y=None):
+ def _imprint_siblings(self, placeholder):
+ """
+        - add sibling names to placeholder attributes (nodes loaded with it)
+        - add the representation id to the attributes of all other nodes
+ """
+
+ loaded_nodes = placeholder.data["last_loaded"]
+ loaded_nodes_set = set(loaded_nodes)
+ data = {"repre_id": str(placeholder.data["last_repre_id"])}
+
+ for node in loaded_nodes:
+ node_knobs = node.knobs()
+ if "builder_type" not in node_knobs:
+ # save the id of representation for all imported nodes
+ imprint(node, data)
+ node.knob("repre_id").setVisible(False)
+ refresh_node(node)
+ continue
+
+ if (
+ "is_placeholder" not in node_knobs
+ or (
+ "is_placeholder" in node_knobs
+ and node.knob("is_placeholder").value()
+ )
+ ):
+ siblings = list(loaded_nodes_set - {node})
+ siblings_name = get_names_from_nodes(siblings)
+ siblings = {"siblings": siblings_name}
+ imprint(node, siblings)
+
+ def _imprint_inits(self):
+ """Add initial positions and dimensions to the attributes"""
+
+ for node in nuke.allNodes():
+ refresh_node(node)
+ imprint(node, {"x_init": node.xpos(), "y_init": node.ypos()})
+ node.knob("x_init").setVisible(False)
+ node.knob("y_init").setVisible(False)
+ width = node.screenWidth()
+ height = node.screenHeight()
+ if "bdwidth" in node.knobs():
+ imprint(node, {"w_init": width, "h_init": height})
+ node.knob("w_init").setVisible(False)
+ node.knob("h_init").setVisible(False)
+ refresh_node(node)
+
+ def _update_nodes(
+ self, placeholder, nodes, considered_nodes, offset_y=None
+ ):
"""Adjust backdrop nodes dimensions and positions.
Considering some nodes sizes.
@@ -314,7 +389,7 @@ class NukePlaceholder(AbstractPlaceholder):
offset (int): distance between copies
"""
- placeholder_node = self.data["node"]
+ placeholder_node = nuke.toNode(placeholder.scene_identifier)
min_x, min_y, max_x, max_y = get_extreme_positions(considered_nodes)
@@ -330,7 +405,7 @@ class NukePlaceholder(AbstractPlaceholder):
min_x = placeholder_node.xpos()
min_y = placeholder_node.ypos()
else:
- siblings = get_nodes_by_names(self.data["siblings"])
+ siblings = get_nodes_by_names(placeholder.data["siblings"])
minX, _, maxX, _ = get_extreme_positions(siblings)
diff_y = max_y - min_y + 20
diff_x = abs(max_x - min_x - maxX + minX)
@@ -369,59 +444,14 @@ class NukePlaceholder(AbstractPlaceholder):
refresh_node(node)
- def imprint_inits(self):
- """Add initial positions and dimensions to the attributes"""
-
- for node in nuke.allNodes():
- refresh_node(node)
- imprint(node, {"x_init": node.xpos(), "y_init": node.ypos()})
- node.knob("x_init").setVisible(False)
- node.knob("y_init").setVisible(False)
- width = node.screenWidth()
- height = node.screenHeight()
- if "bdwidth" in node.knobs():
- imprint(node, {"w_init": width, "h_init": height})
- node.knob("w_init").setVisible(False)
- node.knob("h_init").setVisible(False)
- refresh_node(node)
-
- def imprint_siblings(self):
- """
- - add siblings names to placeholder attributes (nodes loaded with it)
- - add Id to the attributes of all the other nodes
- """
-
- loaded_nodes = self.data["last_loaded"]
- loaded_nodes_set = set(loaded_nodes)
- data = {"repre_id": str(self.data["last_repre_id"])}
-
- for node in loaded_nodes:
- node_knobs = node.knobs()
- if "builder_type" not in node_knobs:
- # save the id of representation for all imported nodes
- imprint(node, data)
- node.knob("repre_id").setVisible(False)
- refresh_node(node)
- continue
-
- if (
- "is_placeholder" not in node_knobs
- or (
- "is_placeholder" in node_knobs
- and node.knob("is_placeholder").value()
- )
- ):
- siblings = list(loaded_nodes_set - {node})
- siblings_name = get_names_from_nodes(siblings)
- siblings = {"siblings": siblings_name}
- imprint(node, siblings)
-
- def set_loaded_connections(self):
+ def _set_loaded_connections(self, placeholder):
"""
set inputs and outputs of loaded nodes"""
- placeholder_node = self.data["node"]
- input_node, output_node = get_group_io_nodes(self.data["last_loaded"])
+ placeholder_node = nuke.toNode(placeholder.scene_identifier)
+ input_node, output_node = get_group_io_nodes(
+ placeholder.data["last_loaded"]
+ )
for node in placeholder_node.dependent():
for idx in range(node.inputs()):
if node.input(idx) == placeholder_node:
@@ -432,15 +462,45 @@ class NukePlaceholder(AbstractPlaceholder):
if placeholder_node.input(idx) == node:
input_node.setInput(0, node)
- def set_copies_connections(self, copies):
+ def _create_sib_copies(self, placeholder):
+ """ creating copies of the palce_holder siblings (the ones who were
+ loaded with it) for the new nodes added
+
+ Returns :
+            copies (dict): original node names mapped to their copies
+ """
+
+ copies = {}
+ siblings = get_nodes_by_names(placeholder.data["siblings"])
+ for node in siblings:
+ new_node = duplicate_node(node)
+
+ x_init = int(new_node.knob("x_init").getValue())
+ y_init = int(new_node.knob("y_init").getValue())
+ new_node.setXYpos(x_init, y_init)
+ if isinstance(new_node, nuke.BackdropNode):
+ w_init = new_node.knob("w_init").getValue()
+ h_init = new_node.knob("h_init").getValue()
+ new_node.knob("bdwidth").setValue(w_init)
+ new_node.knob("bdheight").setValue(h_init)
+ refresh_node(node)
+
+ if "repre_id" in node.knobs().keys():
+ node.removeKnob(node.knob("repre_id"))
+ copies[node.name()] = new_node
+ return copies
+
+ def _set_copies_connections(self, placeholder, copies):
"""Set inputs and outputs of the copies.
Args:
copies (dict): Copied nodes by their names.
"""
- last_input, last_output = get_group_io_nodes(self.data["last_loaded"])
- siblings = get_nodes_by_names(self.data["siblings"])
+ last_input, last_output = get_group_io_nodes(
+ placeholder.data["last_loaded"]
+ )
+ siblings = get_nodes_by_names(placeholder.data["siblings"])
siblings_input, siblings_output = get_group_io_nodes(siblings)
copy_input = copies[siblings_input.name()]
copy_output = copies[siblings_output.name()]
@@ -474,166 +534,45 @@ class NukePlaceholder(AbstractPlaceholder):
siblings_input.setInput(0, copy_output)
- def move_to_placeholder_group(self, nodes_loaded):
- """
- opening the placeholder's group and copying loaded nodes in it.
- Returns :
- nodes_loaded (list): the new list of pasted nodes
- """
+def build_workfile_template(*args):
+ builder = NukeTemplateBuilder(registered_host())
+ builder.build_template()
- groups_name = self.data["group_name"]
- reset_selection()
- select_nodes(nodes_loaded)
- if groups_name:
- with node_tempfile() as filepath:
- nuke.nodeCopy(filepath)
- for node in nuke.selectedNodes():
- nuke.delete(node)
- group = nuke.toNode(groups_name)
- group.begin()
- nuke.nodePaste(filepath)
- nodes_loaded = nuke.selectedNodes()
- return nodes_loaded
- def clean(self):
- # deselect all selected nodes
- placeholder_node = self.data["node"]
+def update_workfile_template(*args):
+ builder = NukeTemplateBuilder(registered_host())
+ builder.rebuild_template()
- # getting the latest nodes added
- nodes_init = self.data["nodes_init"]
- nodes_loaded = list(set(nuke.allNodes()) - set(nodes_init))
- self.log.debug("Loaded nodes: {}".format(nodes_loaded))
- if not nodes_loaded:
- return
- self.data["delete"] = True
+def create_placeholder(*args):
+ host = registered_host()
+ builder = NukeTemplateBuilder(host)
+ window = WorkfileBuildPlaceholderDialog(host, builder)
+ window.exec_()
- nodes_loaded = self.move_to_placeholder_group(nodes_loaded)
- self.data["last_loaded"] = nodes_loaded
- refresh_nodes(nodes_loaded)
- # positioning of the loaded nodes
- min_x, min_y, _, _ = get_extreme_positions(nodes_loaded)
- for node in nodes_loaded:
- xpos = (node.xpos() - min_x) + placeholder_node.xpos()
- ypos = (node.ypos() - min_y) + placeholder_node.ypos()
- node.setXYpos(xpos, ypos)
- refresh_nodes(nodes_loaded)
+def update_placeholder(*args):
+ host = registered_host()
+ builder = NukeTemplateBuilder(host)
+ placeholder_items_by_id = {
+ placeholder_item.scene_identifier: placeholder_item
+ for placeholder_item in builder.get_placeholders()
+ }
+ placeholder_items = []
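+    # Keep only selected nodes that correspond to existing placeholder items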
+ for node in nuke.selectedNodes():
+ node_name = node.fullName()
+ if node_name in placeholder_items_by_id:
+ placeholder_items.append(placeholder_items_by_id[node_name])
- self.fix_z_order() # fix the problem of z_order for backdrops
- self.imprint_siblings()
+ # TODO show UI at least
+ if len(placeholder_items) == 0:
+ raise ValueError("No node selected")
- if self.data["nb_children"] == 0:
- # save initial nodes postions and dimensions, update them
- # and set inputs and outputs of loaded nodes
+ if len(placeholder_items) > 1:
+ raise ValueError("Too many selected nodes")
- self.imprint_inits()
- self.update_nodes(nuke.allNodes(), nodes_loaded)
- self.set_loaded_connections()
-
- elif self.data["siblings"]:
- # create copies of placeholder siblings for the new loaded nodes,
- # set their inputs and outpus and update all nodes positions and
- # dimensions and siblings names
-
- siblings = get_nodes_by_names(self.data["siblings"])
- refresh_nodes(siblings)
- copies = self.create_sib_copies()
- new_nodes = list(copies.values()) # copies nodes
- self.update_nodes(new_nodes, nodes_loaded)
- placeholder_node.removeKnob(placeholder_node.knob("siblings"))
- new_nodes_name = get_names_from_nodes(new_nodes)
- imprint(placeholder_node, {"siblings": new_nodes_name})
- self.set_copies_connections(copies)
-
- self.update_nodes(
- nuke.allNodes(),
- new_nodes + nodes_loaded,
- 20
- )
-
- new_siblings = get_names_from_nodes(new_nodes)
- self.data["siblings"] = new_siblings
-
- else:
- # if the placeholder doesn't have siblings, the loaded
- # nodes will be placed in a free space
-
- xpointer, ypointer = find_free_space_to_paste_nodes(
- nodes_loaded, direction="bottom", offset=200
- )
- node = nuke.createNode("NoOp")
- reset_selection()
- nuke.delete(node)
- for node in nodes_loaded:
- xpos = (node.xpos() - min_x) + xpointer
- ypos = (node.ypos() - min_y) + ypointer
- node.setXYpos(xpos, ypos)
-
- self.data["nb_children"] += 1
- reset_selection()
- # go back to root group
- nuke.root().begin()
-
- def get_representations(self, current_asset_doc, linked_asset_docs):
- project_name = legacy_io.active_project()
-
- builder_type = self.data["builder_type"]
- if builder_type == "context_asset":
- context_filters = {
- "asset": [re.compile(self.data["asset"])],
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representations": [self.data["representation"]],
- "family": [self.data["family"]]
- }
-
- elif builder_type != "linked_asset":
- context_filters = {
- "asset": [
- current_asset_doc["name"],
- re.compile(self.data["asset"])
- ],
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representation": [self.data["representation"]],
- "family": [self.data["family"]]
- }
-
- else:
- asset_regex = re.compile(self.data["asset"])
- linked_asset_names = []
- for asset_doc in linked_asset_docs:
- asset_name = asset_doc["name"]
- if asset_regex.match(asset_name):
- linked_asset_names.append(asset_name)
-
- if not linked_asset_names:
- return []
-
- context_filters = {
- "asset": linked_asset_names,
- "subset": [re.compile(self.data["subset"])],
- "hierarchy": [re.compile(self.data["hierarchy"])],
- "representation": [self.data["representation"]],
- "family": [self.data["family"]],
- }
-
- return list(get_representations(
- project_name,
- context_filters=context_filters
- ))
-
- def err_message(self):
- return (
- "Error while trying to load a representation.\n"
- "Either the subset wasn't published or the template is malformed."
- "\n\n"
- "Builder was looking for:\n{attributes}".format(
- attributes="\n".join([
- "{}: {}".format(key.title(), value)
- for key, value in self.data.items()]
- )
- )
- )
+ placeholder_item = placeholder_items[0]
+ window = WorkfileBuildPlaceholderDialog(host, builder)
+ window.set_update_mode(placeholder_item)
+ window.exec_()
diff --git a/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py b/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py
index c04c939a8d..764499ff0c 100644
--- a/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py
+++ b/openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py
@@ -1,4 +1,4 @@
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import InventoryAction
from openpype.hosts.nuke.api.lib import set_avalon_knob_data
diff --git a/openpype/hosts/nuke/startup/menu.py b/openpype/hosts/nuke/startup/menu.py
index 1461d41385..5e29121e9b 100644
--- a/openpype/hosts/nuke/startup/menu.py
+++ b/openpype/hosts/nuke/startup/menu.py
@@ -1,7 +1,7 @@
import nuke
import os
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import install_host
from openpype.hosts.nuke import api
from openpype.hosts.nuke.api.lib import (
diff --git a/openpype/hosts/photoshop/api/launch_logic.py b/openpype/hosts/photoshop/api/launch_logic.py
index 0bbb19523d..1f0203dca6 100644
--- a/openpype/hosts/photoshop/api/launch_logic.py
+++ b/openpype/hosts/photoshop/api/launch_logic.py
@@ -10,7 +10,7 @@ from wsrpc_aiohttp import (
from Qt import QtCore
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from openpype.tools.adobe_webserver.app import WebServerTool
diff --git a/openpype/hosts/photoshop/api/pipeline.py b/openpype/hosts/photoshop/api/pipeline.py
index f660096630..9f6fc0983c 100644
--- a/openpype/hosts/photoshop/api/pipeline.py
+++ b/openpype/hosts/photoshop/api/pipeline.py
@@ -3,8 +3,7 @@ from Qt import QtWidgets
import pyblish.api
-from openpype.api import Logger
-from openpype.lib import register_event_callback
+from openpype.lib import register_event_callback, Logger
from openpype.pipeline import (
legacy_io,
register_loader_plugin_path,
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_version.py b/openpype/hosts/photoshop/plugins/publish/collect_version.py
index aff9f13bfb..cda71d8643 100644
--- a/openpype/hosts/photoshop/plugins/publish/collect_version.py
+++ b/openpype/hosts/photoshop/plugins/publish/collect_version.py
@@ -7,16 +7,21 @@ class CollectVersion(pyblish.api.InstancePlugin):
Used to synchronize version from workfile to all publishable instances:
- image (manually created or color coded)
- review
+ - workfile
Dev comment:
Explicit collector created to control this from single place and not from
3 different.
+
+    Workfile is set here explicitly as its version might need to be forced to
+    latest + 1 because of Webpublisher.
+ (This plugin must run after CollectPublishedVersion!)
"""
order = pyblish.api.CollectorOrder + 0.200
label = 'Collect Version'
hosts = ["photoshop"]
- families = ["image", "review"]
+ families = ["image", "review", "workfile"]
def process(self, instance):
workfile_version = instance.context.data["version"]
diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py
index e5fee311f8..01022ce0b2 100644
--- a/openpype/hosts/photoshop/plugins/publish/extract_review.py
+++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py
@@ -49,7 +49,7 @@ class ExtractReview(publish.Extractor):
if self.make_image_sequence and len(layers) > 1:
self.log.info("Extract layers to image sequence.")
- img_list = self._saves_sequences_layers(staging_dir, layers)
+ img_list = self._save_sequence_images(staging_dir, layers)
instance.data["representations"].append({
"name": "jpg",
@@ -64,7 +64,7 @@ class ExtractReview(publish.Extractor):
processed_img_names = img_list
else:
self.log.info("Extract layers to flatten image.")
- img_list = self._saves_flattened_layers(staging_dir, layers)
+ img_list = self._save_flatten_image(staging_dir, layers)
instance.data["representations"].append({
"name": "jpg",
@@ -84,6 +84,67 @@ class ExtractReview(publish.Extractor):
source_files_pattern = self._check_and_resize(processed_img_names,
source_files_pattern,
staging_dir)
+ self._generate_thumbnail(ffmpeg_path, instance, source_files_pattern,
+ staging_dir)
+
+ no_of_frames = len(processed_img_names)
+ if no_of_frames > 1:
+ self._generate_mov(ffmpeg_path, instance, fps, no_of_frames,
+ source_files_pattern, staging_dir)
+
+ self.log.info(f"Extracted {instance} to {staging_dir}")
+
+ def _generate_mov(self, ffmpeg_path, instance, fps, no_of_frames,
+ source_files_pattern, staging_dir):
+ """Generates .mov to upload to Ftrack.
+
+ Args:
+ ffmpeg_path (str): path to ffmpeg
+ instance (Pyblish Instance)
+ fps (str)
+ no_of_frames (int):
+ source_files_pattern (str): name of source file
+            staging_dir (str): temporary location to store the .mov
+ Updates:
+ instance - adds representation portion
+ """
+ # Generate mov.
+ mov_path = os.path.join(staging_dir, "review.mov")
+ self.log.info(f"Generate mov review: {mov_path}")
+ args = [
+ ffmpeg_path,
+ "-y",
+ "-i", source_files_pattern,
+ "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
+ "-vframes", str(no_of_frames),
+ mov_path
+ ]
+ self.log.debug("mov args:: {}".format(args))
+ _output = run_subprocess(args)
+ instance.data["representations"].append({
+ "name": "mov",
+ "ext": "mov",
+ "files": os.path.basename(mov_path),
+ "stagingDir": staging_dir,
+ "frameStart": 1,
+ "frameEnd": no_of_frames,
+ "fps": fps,
+ "preview": True,
+ "tags": self.mov_options['tags']
+ })
+
+ def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern,
+ staging_dir):
+ """Generates scaled down thumbnail and adds it as representation.
+
+ Args:
+ ffmpeg_path (str): path to ffmpeg
+ instance (Pyblish Instance)
+ source_files_pattern (str): name of source file
+ staging_dir (str): temporary location to store thumbnail
+ Updates:
+ instance - adds representation portion
+ """
# Generate thumbnail
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
self.log.info(f"Generate thumbnail {thumbnail_path}")
@@ -96,50 +157,16 @@ class ExtractReview(publish.Extractor):
thumbnail_path
]
self.log.debug("thumbnail args:: {}".format(args))
- output = run_subprocess(args)
-
+ _output = run_subprocess(args)
instance.data["representations"].append({
"name": "thumbnail",
"ext": "jpg",
+ "outputName": "thumb",
"files": os.path.basename(thumbnail_path),
"stagingDir": staging_dir,
- "tags": ["thumbnail"]
+ "tags": ["thumbnail", "delete"]
})
- # Generate mov.
- mov_path = os.path.join(staging_dir, "review.mov")
- self.log.info(f"Generate mov review: {mov_path}")
- img_number = len(img_list)
- args = [
- ffmpeg_path,
- "-y",
- "-i", source_files_pattern,
- "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
- "-vframes", str(img_number),
- mov_path
- ]
- self.log.debug("mov args:: {}".format(args))
- output = run_subprocess(args)
- self.log.debug(output)
- instance.data["representations"].append({
- "name": "mov",
- "ext": "mov",
- "files": os.path.basename(mov_path),
- "stagingDir": staging_dir,
- "frameStart": 1,
- "frameEnd": img_number,
- "fps": fps,
- "preview": True,
- "tags": self.mov_options['tags']
- })
-
- # Required for extract_review plugin (L222 onwards).
- instance.data["frameStart"] = 1
- instance.data["frameEnd"] = img_number
- instance.data["fps"] = 25
-
- self.log.info(f"Extracted {instance} to {staging_dir}")
-
def _check_and_resize(self, processed_img_names, source_files_pattern,
staging_dir):
"""Check if saved image could be used in ffmpeg.
@@ -168,37 +195,12 @@ class ExtractReview(publish.Extractor):
return source_files_pattern
- def _get_image_path_from_instances(self, instance):
- img_list = []
-
- for instance in sorted(instance.context):
- if instance.data["family"] != "image":
- continue
-
- for rep in instance.data["representations"]:
- img_path = os.path.join(
- rep["stagingDir"],
- rep["files"]
- )
- img_list.append(img_path)
-
- return img_list
-
- def _copy_image_to_staging_dir(self, staging_dir, img_list):
- copy_files = []
- for i, img_src in enumerate(img_list):
- img_filename = self.output_seq_filename % i
- img_dst = os.path.join(staging_dir, img_filename)
-
- self.log.debug(
- "Copying file .. {} -> {}".format(img_src, img_dst)
- )
- shutil.copy(img_src, img_dst)
- copy_files.append(img_filename)
-
- return copy_files
-
def _get_layers_from_image_instances(self, instance):
+ """Collect all layers from 'instance'.
+
+ Returns:
+ (list) of PSItem
+ """
layers = []
for image_instance in instance.context:
if image_instance.data["family"] != "image":
@@ -210,7 +212,12 @@ class ExtractReview(publish.Extractor):
return sorted(layers)
- def _saves_flattened_layers(self, staging_dir, layers):
+ def _save_flatten_image(self, staging_dir, layers):
+ """Creates flat image from 'layers' into 'staging_dir'.
+
+ Returns:
+ (str): path to new image
+ """
img_filename = self.output_seq_filename % 0
output_image_path = os.path.join(staging_dir, img_filename)
stub = photoshop.stub()
@@ -224,7 +231,13 @@ class ExtractReview(publish.Extractor):
return img_filename
- def _saves_sequences_layers(self, staging_dir, layers):
+ def _save_sequence_images(self, staging_dir, layers):
+ """Creates separate flat images from 'layers' into 'staging_dir'.
+
+ Used as the source for a multi-frame .mov review.
+ Returns:
+ (list): paths to new images
+ """
stub = photoshop.stub()
list_img_filename = []
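
As a rough, hedged illustration of the review generation above, the snippet below shows how the ffmpeg argument list built in `_generate_mov` maps to a command line. The paths, sequence pattern and frame count are made-up values and the command is only printed, not executed.

```python
ffmpeg_path = "ffmpeg"                                  # assumed binary name
source_files_pattern = "/tmp/staging/review_%02d.jpg"   # assumed sequence pattern
no_of_frames = 25                                       # assumed frame count

args = [
    ffmpeg_path,
    "-y",                                    # overwrite existing output
    "-i", source_files_pattern,              # image sequence input
    "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",  # pad to even width/height for codecs
    "-vframes", str(no_of_frames),           # limit output to collected frames
    "/tmp/staging/review.mov",
]
print(" ".join(args))
```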
diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py
index b03125d502..3995077d21 100644
--- a/openpype/hosts/resolve/api/plugin.py
+++ b/openpype/hosts/resolve/api/plugin.py
@@ -504,7 +504,7 @@ class Creator(LegacyCreator):
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
- from openpype.api import get_current_project_settings
+ from openpype.settings import get_current_project_settings
resolve_p_settings = get_current_project_settings().get("resolve")
self.presets = {}
if resolve_p_settings:
diff --git a/openpype/hosts/resolve/api/workio.py b/openpype/hosts/resolve/api/workio.py
index 5a742ecf7e..5ce73eea53 100644
--- a/openpype/hosts/resolve/api/workio.py
+++ b/openpype/hosts/resolve/api/workio.py
@@ -1,7 +1,7 @@
"""Host API required Work Files tool"""
import os
-from openpype.api import Logger
+from openpype.lib import Logger
from .lib import (
get_project_manager,
get_current_project,
diff --git a/openpype/hosts/resolve/plugins/publish/extract_workfile.py b/openpype/hosts/resolve/plugins/publish/extract_workfile.py
index ea8f19cd8c..535f879b58 100644
--- a/openpype/hosts/resolve/plugins/publish/extract_workfile.py
+++ b/openpype/hosts/resolve/plugins/publish/extract_workfile.py
@@ -1,10 +1,11 @@
import os
import pyblish.api
-import openpype.api
+
+from openpype.pipeline import publish
from openpype.hosts.resolve.api.lib import get_project_manager
-class ExtractWorkfile(openpype.api.Extractor):
+class ExtractWorkfile(publish.Extractor):
"""
Extractor export DRP workfile file representation
"""
diff --git a/openpype/hosts/traypublisher/api/pipeline.py b/openpype/hosts/traypublisher/api/pipeline.py
index 2d9db7801e..0a8ddaa343 100644
--- a/openpype/hosts/traypublisher/api/pipeline.py
+++ b/openpype/hosts/traypublisher/api/pipeline.py
@@ -9,7 +9,7 @@ from openpype.pipeline import (
register_creator_plugin_path,
legacy_io,
)
-from openpype.host import HostBase, INewPublisher
+from openpype.host import HostBase, IPublishHost
ROOT_DIR = os.path.dirname(os.path.dirname(
@@ -19,7 +19,7 @@ PUBLISH_PATH = os.path.join(ROOT_DIR, "plugins", "publish")
CREATE_PATH = os.path.join(ROOT_DIR, "plugins", "create")
-class TrayPublisherHost(HostBase, INewPublisher):
+class TrayPublisherHost(HostBase, IPublishHost):
name = "traypublisher"
def install(self):
diff --git a/openpype/hosts/traypublisher/api/plugin.py b/openpype/hosts/traypublisher/api/plugin.py
index a3eead51c8..89c25389cb 100644
--- a/openpype/hosts/traypublisher/api/plugin.py
+++ b/openpype/hosts/traypublisher/api/plugin.py
@@ -11,27 +11,9 @@ from .pipeline import (
remove_instances,
HostContext,
)
+from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS
+
-IMAGE_EXTENSIONS = [
- ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
- ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
- ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
- ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
- ".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
- ".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
- ".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
- ".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
- ".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
- ".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
- ".xpm", ".xwd"
-]
-VIDEO_EXTENSIONS = [
- ".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
- ".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
- ".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
- ".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
- ".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
-]
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
@@ -104,6 +86,8 @@ class TrayPublishCreator(Creator):
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
+ new_instance.mark_as_stored()
+
# Add instance to current context
self._add_instance_to_context(new_instance)
diff --git a/openpype/hosts/traypublisher/plugins/create/create_from_settings.py b/openpype/hosts/traypublisher/plugins/create/create_from_settings.py
index 41c1c29bb0..df6253b0c2 100644
--- a/openpype/hosts/traypublisher/plugins/create/create_from_settings.py
+++ b/openpype/hosts/traypublisher/plugins/create/create_from_settings.py
@@ -1,5 +1,6 @@
import os
-from openpype.api import get_project_settings, Logger
+from openpype.lib import Logger
+from openpype.settings import get_project_settings
log = Logger.get_logger(__name__)
diff --git a/openpype/hosts/tvpaint/api/pipeline.py b/openpype/hosts/tvpaint/api/pipeline.py
index 427c927264..cbaa059809 100644
--- a/openpype/hosts/tvpaint/api/pipeline.py
+++ b/openpype/hosts/tvpaint/api/pipeline.py
@@ -10,7 +10,7 @@ import pyblish.api
from openpype.client import get_project, get_asset_by_name
from openpype.hosts import tvpaint
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
from openpype.lib import register_event_callback
from openpype.pipeline import (
legacy_io,
diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py
index 926c932a85..c1d66ddf2a 100644
--- a/openpype/hosts/unreal/plugins/load/load_layout.py
+++ b/openpype/hosts/unreal/plugins/load/load_layout.py
@@ -24,7 +24,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.pipeline.context_tools import get_current_project_asset
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
diff --git a/openpype/hosts/unreal/plugins/publish/extract_camera.py b/openpype/hosts/unreal/plugins/publish/extract_camera.py
index ce53824563..4e37cc6a86 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_camera.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_camera.py
@@ -6,10 +6,10 @@ import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell
-import openpype.api
+from openpype.pipeline import publish
-class ExtractCamera(openpype.api.Extractor):
+class ExtractCamera(publish.Extractor):
"""Extract a camera."""
label = "Extract Camera"
diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py
index 8924df36a7..cac7991f00 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_layout.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py
@@ -3,18 +3,15 @@ import os
import json
import math
-from bson.objectid import ObjectId
-
import unreal
from unreal import EditorLevelLibrary as ell
from unreal import EditorAssetLibrary as eal
from openpype.client import get_representation_by_name
-import openpype.api
-from openpype.pipeline import legacy_io
+from openpype.pipeline import legacy_io, publish
-class ExtractLayout(openpype.api.Extractor):
+class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"
diff --git a/openpype/hosts/unreal/plugins/publish/extract_look.py b/openpype/hosts/unreal/plugins/publish/extract_look.py
index ea39949417..f999ad8651 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_look.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_look.py
@@ -5,10 +5,10 @@ import os
import unreal
from unreal import MaterialEditingLibrary as mat_lib
-import openpype.api
+from openpype.pipeline import publish
-class ExtractLook(openpype.api.Extractor):
+class ExtractLook(publish.Extractor):
"""Extract look."""
label = "Extract Look"
diff --git a/openpype/hosts/unreal/plugins/publish/extract_render.py b/openpype/hosts/unreal/plugins/publish/extract_render.py
index 37fe7e916f..8ff38fbee0 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_render.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_render.py
@@ -2,10 +2,10 @@ from pathlib import Path
import unreal
-import openpype.api
+from openpype.pipeline import publish
-class ExtractRender(openpype.api.Extractor):
+class ExtractRender(publish.Extractor):
"""Extract render."""
label = "Extract Render"
diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py
index e249ae4f1c..990dc7495a 100644
--- a/openpype/lib/applications.py
+++ b/openpype/lib/applications.py
@@ -1403,6 +1403,7 @@ def get_app_environments_for_context(
"env": env
})
+ data["env"].update(anatomy.root_environments())
prepare_app_environments(data, env_group, modules_manager)
prepare_context_environments(data, env_group, modules_manager)
diff --git a/openpype/lib/attribute_definitions.py b/openpype/lib/attribute_definitions.py
index 17658eef93..37446f01f8 100644
--- a/openpype/lib/attribute_definitions.py
+++ b/openpype/lib/attribute_definitions.py
@@ -9,6 +9,49 @@ import six
import clique
+def get_attributes_keys(attribute_definitions):
+ """Collect keys from list of attribute definitions.
+
+ Args:
+ attribute_definitions (List[AbstractAttrDef]): Objects of attribute
+ definitions.
+
+ Returns:
+ Set[str]: Keys that will be created using passed attribute definitions.
+ """
+
+ keys = set()
+ if not attribute_definitions:
+ return keys
+
+ for attribute_def in attribute_definitions:
+ if not isinstance(attribute_def, UIDef):
+ keys.add(attribute_def.key)
+ return keys
+
+
+def get_default_values(attribute_definitions):
+ """Receive default values for attribute definitions.
+
+ Args:
+ attribute_definitions (List[AbstractAttrDef]): Attribute definitions for
+ which default values should be collected.
+
+ Returns:
+ Dict[str, Any]: Default values for passed attribute definitions.
+ """
+
+ output = {}
+ if not attribute_definitions:
+ return output
+
+ for attr_def in attribute_definitions:
+ # Skip UI definitions
+ if not isinstance(attr_def, UIDef):
+ output[attr_def.key] = attr_def.default
+ return output
+
+
class AbstractAttrDefMeta(ABCMeta):
"""Meta class to validate existence of 'key' attribute.
diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py
index 87ab6bf4ed..e736ba8ef0 100644
--- a/openpype/lib/transcoding.py
+++ b/openpype/lib/transcoding.py
@@ -42,6 +42,28 @@ XML_CHAR_REF_REGEX_HEX = re.compile(r"&#x?[0-9a-fA-F]+;")
# Regex to parse array attributes
ARRAY_TYPE_REGEX = re.compile(r"^(int|float|string)\[\d+\]$")
+IMAGE_EXTENSIONS = [
+ ".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
+ ".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
+ ".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
+ ".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
+ ".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
+ ".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
+ ".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
+ ".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
+ ".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
+ ".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
+ ".xpm", ".xwd"
+]
+
+VIDEO_EXTENSIONS = [
+ ".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
+ ".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
+ ".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
+ ".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
+ ".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
+]
+
def get_transcode_temp_directory():
"""Creates temporary folder for transcoding.
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index 44f2b5b2b4..7fbe134410 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -475,6 +475,13 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
layer_metadata = render_products.layer_data
layer_prefix = layer_metadata.filePrefix
+ plugin_info = copy.deepcopy(self.plugin_info)
+ plugin_info.update({
+ # Output directory and filename
+ "OutputFilePath": data["dirname"].replace("\\", "/"),
+ "OutputFilePrefix": layer_prefix,
+ })
+
# This hack is here because of how Deadline handles Renderman version.
# it considers everything with `renderman` set as version older than
# Renderman 22, and so if we are using renderman > 21 we need to set
@@ -491,12 +498,11 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
if int(rman_version.split(".")[0]) > 22:
renderer = "renderman22"
- plugin_info = copy.deepcopy(self.plugin_info)
- plugin_info.update({
- # Output directory and filename
- "OutputFilePath": data["dirname"].replace("\\", "/"),
- "OutputFilePrefix": layer_prefix,
- })
+ plugin_info["Renderer"] = renderer
+
+ # this is needed because renderman plugin in Deadline
+ # handles directory and file prefixes separately
+ plugin_info["OutputFilePath"] = job_info.OutputDirectory[0]
return job_info, plugin_info
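
A simplified, hedged sketch of the reordered `plugin_info` handling above: the base info is copied and updated first, and only the Renderman branch then overrides the renderer and output path. All values here are invented and `job_output_directory` merely stands in for `job_info.OutputDirectory`.

```python
import copy

base_plugin_info = {"Version": "2023"}                      # assumed base info
data = {"dirname": "C:\\renders\\shot010"}                  # assumed render dir
layer_prefix = "maya/<RenderLayer>/<Scene>_<RenderLayer>"   # invented prefix
job_output_directory = ["C:/renders/shot010"]               # stand-in for job_info.OutputDirectory

plugin_info = copy.deepcopy(base_plugin_info)
plugin_info.update({
    "OutputFilePath": data["dirname"].replace("\\", "/"),
    "OutputFilePrefix": layer_prefix,
})

# Renderman-specific override: the Deadline Renderman plugin expects the
# output directory separately from the file prefix.
plugin_info["Renderer"] = "renderman22"
plugin_info["OutputFilePath"] = job_output_directory[0]
print(plugin_info)
```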
diff --git a/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py b/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
index d04440a564..c19cfd1502 100644
--- a/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
+++ b/openpype/modules/ftrack/event_handlers_user/action_create_cust_attrs.py
@@ -18,7 +18,7 @@ from openpype_modules.ftrack.lib import (
tool_definitions_from_app_manager
)
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
from openpype.lib import ApplicationManager
"""
diff --git a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py
index d5a95fad91..86ecffd5b8 100644
--- a/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py
+++ b/openpype/modules/ftrack/launch_hooks/post_ftrack_changes.py
@@ -1,7 +1,7 @@
import os
import ftrack_api
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.lib import PostLaunchHook
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py
index 5ff75e7060..96f573fe25 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py
@@ -9,6 +9,7 @@ from openpype.lib.transcoding import (
convert_ffprobe_fps_to_float,
)
from openpype.lib.profiles_filtering import filter_profiles
+from openpype.lib.transcoding import VIDEO_EXTENSIONS
class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
@@ -121,6 +122,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
review_representations = []
thumbnail_representations = []
other_representations = []
+ has_movie_review = False
for repre in instance_repres:
self.log.debug("Representation {}".format(repre))
repre_tags = repre.get("tags") or []
@@ -129,6 +131,8 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
elif "ftrackreview" in repre_tags:
review_representations.append(repre)
+ if self._is_repre_video(repre):
+ has_movie_review = True
else:
other_representations.append(repre)
@@ -146,65 +150,53 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
# TODO what if there is multiple thumbnails?
first_thumbnail_component = None
first_thumbnail_component_repre = None
- for repre in thumbnail_representations:
- repre_path = self._get_repre_path(instance, repre, False)
- if not repre_path:
- self.log.warning(
- "Published path is not set and source was removed."
+
+ if has_movie_review:
+ for repre in thumbnail_representations:
+ repre_path = self._get_repre_path(instance, repre, False)
+ if not repre_path:
+ self.log.warning(
+ "Published path is not set and source was removed."
+ )
+ continue
+
+ # Create copy of base comp item and append it
+ thumbnail_item = copy.deepcopy(base_component_item)
+ thumbnail_item["component_path"] = repre_path
+ thumbnail_item["component_data"] = {
+ "name": "thumbnail"
+ }
+ thumbnail_item["thumbnail"] = True
+
+ # Create copy of item before setting location
+ if "delete" not in repre["tags"]:
+ src_components_to_add.append(copy.deepcopy(thumbnail_item))
+ # Create copy of first thumbnail
+ if first_thumbnail_component is None:
+ first_thumbnail_component_repre = repre
+ first_thumbnail_component = thumbnail_item
+ # Set location
+ thumbnail_item["component_location_name"] = (
+ ftrack_server_location_name
)
- continue
- # Create copy of base comp item and append it
- thumbnail_item = copy.deepcopy(base_component_item)
- thumbnail_item["component_path"] = repre_path
- thumbnail_item["component_data"] = {
- "name": "thumbnail"
- }
- thumbnail_item["thumbnail"] = True
-
- # Create copy of item before setting location
- src_components_to_add.append(copy.deepcopy(thumbnail_item))
- # Create copy of first thumbnail
- if first_thumbnail_component is None:
- first_thumbnail_component_repre = repre
- first_thumbnail_component = thumbnail_item
- # Set location
- thumbnail_item["component_location_name"] = (
- ftrack_server_location_name
- )
-
- # Add item to component list
- component_list.append(thumbnail_item)
+ # Add item to component list
+ component_list.append(thumbnail_item)
if first_thumbnail_component is not None:
- width = first_thumbnail_component_repre.get("width")
- height = first_thumbnail_component_repre.get("height")
- if not width or not height:
- component_path = first_thumbnail_component["component_path"]
- streams = []
- try:
- streams = get_ffprobe_streams(component_path)
- except Exception:
- self.log.debug((
- "Failed to retrieve information about intput {}"
- ).format(component_path))
+ metadata = self._prepare_image_component_metadata(
+ first_thumbnail_component_repre,
+ first_thumbnail_component["component_path"]
+ )
- for stream in streams:
- if "width" in stream and "height" in stream:
- width = stream["width"]
- height = stream["height"]
- break
-
- if width and height:
+ if metadata:
component_data = first_thumbnail_component["component_data"]
- component_data["name"] = "ftrackreview-image"
- component_data["metadata"] = {
- "ftr_meta": json.dumps({
- "width": width,
- "height": height,
- "format": "image"
- })
- }
+ component_data["metadata"] = metadata
+
+ if review_representations:
+ component_data["name"] = "thumbnail"
+ else:
+ component_data["name"] = "ftrackreview-image"
# Create review components
# Change asset name of each new component for review
@@ -213,6 +205,11 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
extended_asset_name = ""
multiple_reviewable = len(review_representations) > 1
for repre in review_representations:
+ if not self._is_repre_video(repre) and has_movie_review:
+ self.log.debug("Movie repre has priority "
+ "from {}".format(repre))
+ continue
+
repre_path = self._get_repre_path(instance, repre, False)
if not repre_path:
self.log.warning(
@@ -261,12 +258,23 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
# Change location
review_item["component_path"] = repre_path
# Change component data
- review_item["component_data"] = {
- # Default component name is "main".
- "name": "ftrackreview-mp4",
- "metadata": self._prepare_component_metadata(
+
+ if self._is_repre_video(repre):
+ component_name = "ftrackreview-mp4"
+ metadata = self._prepare_video_component_metadata(
instance, repre, repre_path, True
)
+ else:
+ component_name = "ftrackreview-image"
+ metadata = self._prepare_image_component_metadata(
+ repre, repre_path
+ )
+ review_item["thumbnail"] = True
+
+ review_item["component_data"] = {
+ # Default component name is "main".
+ "name": component_name,
+ "metadata": metadata
}
if is_first_review_repre:
@@ -276,7 +284,8 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
not_first_components.append(review_item)
# Create copy of item before setting location
- src_components_to_add.append(copy.deepcopy(review_item))
+ if "delete" not in repre["tags"]:
+ src_components_to_add.append(copy.deepcopy(review_item))
# Set location
review_item["component_location_name"] = (
@@ -422,7 +431,18 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
return matching_profile["status"] or None
def _prepare_component_metadata(
- self, instance, repre, component_path, is_review
+ self, instance, repre, component_path, is_review=None
+ ):
+ if self._is_repre_video(repre):
+ return self._prepare_video_component_metadata(instance, repre,
+ component_path,
+ is_review)
+ else:
+ return self._prepare_image_component_metadata(repre,
+ component_path)
+
+ def _prepare_video_component_metadata(
+ self, instance, repre, component_path, is_review=None
):
metadata = {}
if "openpype_version" in self.additional_metadata_keys:
@@ -434,9 +454,9 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
try:
streams = get_ffprobe_streams(component_path)
except Exception:
- self.log.debug((
- "Failed to retrieve information about intput {}"
- ).format(component_path))
+ self.log.debug(
+ "Failed to retrieve information about "
+ "input {}".format(component_path))
# Find video streams
video_streams = [
@@ -480,9 +500,9 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
input_framerate
)
except ValueError:
- self.log.warning((
- "Could not convert ffprobe fps to float \"{}\""
- ).format(input_framerate))
+ self.log.warning(
+ "Could not convert ffprobe "
+ "fps to float \"{}\"".format(input_framerate))
continue
stream_width = tmp_width
@@ -554,3 +574,37 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
"frameRate": float(fps)
})
return metadata
+
+ def _prepare_image_component_metadata(self, repre, component_path):
+ width = repre.get("width")
+ height = repre.get("height")
+ if not width or not height:
+ streams = []
+ try:
+ streams = get_ffprobe_streams(component_path)
+ except Exception:
+ self.log.debug(
+ "Failed to retrieve information "
+ "about input {}".format(component_path))
+
+ for stream in streams:
+ if "width" in stream and "height" in stream:
+ width = stream["width"]
+ height = stream["height"]
+ break
+
+ metadata = {}
+ if width and height:
+ metadata = {
+ "ftr_meta": json.dumps({
+ "width": width,
+ "height": height,
+ "format": "image"
+ })
+ }
+
+ return metadata
+
+ def _is_repre_video(self, repre):
+ repre_ext = ".{}".format(repre["ext"])
+ return repre_ext in VIDEO_EXTENSIONS
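
A minimal stand-in sketch of the video/image split introduced above. The extension set here is a tiny made-up subset; the real check uses `VIDEO_EXTENSIONS` from `openpype.lib.transcoding`, and the metadata dict mirrors the `ftr_meta` payload built for image reviews.

```python
import json

VIDEO_EXTENSIONS = {".mov", ".mp4", ".mkv"}  # tiny illustrative subset


def is_repre_video(repre):
    # Representations store the extension without the leading dot.
    return ".{}".format(repre["ext"]) in VIDEO_EXTENSIONS


def image_component_metadata(width, height):
    # Mirrors the 'ftr_meta' payload used for "ftrackreview-image" components.
    return {"ftr_meta": json.dumps(
        {"width": width, "height": height, "format": "image"}
    )}


print(is_repre_video({"ext": "mov"}))   # True  -> "ftrackreview-mp4" component
print(is_repre_video({"ext": "jpg"}))   # False -> "ftrackreview-image" component
print(image_component_metadata(1920, 1080))
```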
diff --git a/openpype/modules/ftrack/scripts/sub_event_status.py b/openpype/modules/ftrack/scripts/sub_event_status.py
index 3163642e3f..6c7ecb8351 100644
--- a/openpype/modules/ftrack/scripts/sub_event_status.py
+++ b/openpype/modules/ftrack/scripts/sub_event_status.py
@@ -15,8 +15,8 @@ from openpype_modules.ftrack.ftrack_server.lib import (
TOPIC_STATUS_SERVER,
TOPIC_STATUS_SERVER_RESULT
)
-from openpype.api import Logger
from openpype.lib import (
+ Logger,
is_current_version_studio_latest,
is_running_from_build,
get_expected_version,
diff --git a/openpype/modules/ftrack/scripts/sub_event_storer.py b/openpype/modules/ftrack/scripts/sub_event_storer.py
index 204cce89e8..a7e77951af 100644
--- a/openpype/modules/ftrack/scripts/sub_event_storer.py
+++ b/openpype/modules/ftrack/scripts/sub_event_storer.py
@@ -17,10 +17,10 @@ from openpype_modules.ftrack.ftrack_server.lib import (
)
from openpype_modules.ftrack.lib import get_ftrack_event_mongo_info
from openpype.lib import (
+ Logger,
get_openpype_version,
get_build_version
)
-from openpype.api import Logger
log = Logger.get_logger("Event storer")
subprocess_started = datetime.datetime.now()
diff --git a/openpype/modules/job_queue/module.py b/openpype/modules/job_queue/module.py
index f1d7251e85..7075fcea14 100644
--- a/openpype/modules/job_queue/module.py
+++ b/openpype/modules/job_queue/module.py
@@ -43,7 +43,7 @@ import platform
import click
from openpype.modules import OpenPypeModule
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
class JobQueueModule(OpenPypeModule):
diff --git a/openpype/modules/kitsu/utils/update_op_with_zou.py b/openpype/modules/kitsu/utils/update_op_with_zou.py
index 4a064f6a16..10e80b3c89 100644
--- a/openpype/modules/kitsu/utils/update_op_with_zou.py
+++ b/openpype/modules/kitsu/utils/update_op_with_zou.py
@@ -21,6 +21,9 @@ from openpype.pipeline import AvalonMongoDB
from openpype.settings import get_project_settings
from openpype.modules.kitsu.utils.credentials import validate_credentials
+from openpype.lib import Logger
+
+log = Logger.get_logger(__name__)
# Accepted namin pattern for OP
naming_pattern = re.compile("^[a-zA-Z0-9_.]*$")
@@ -230,7 +233,6 @@ def update_op_assets(
},
)
)
-
return assets_with_update
@@ -248,7 +250,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
project_name = project["name"]
project_doc = get_project(project_name)
if not project_doc:
- print(f"Creating project '{project_name}'")
+ log.info(f"Creating project '{project_name}'")
project_doc = create_project(project_name, project_name)
# Project data and tasks
@@ -268,12 +270,18 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
{
"code": project_code,
"fps": float(project["fps"]),
- "resolutionWidth": int(project["resolution"].split("x")[0]),
- "resolutionHeight": int(project["resolution"].split("x")[1]),
"zou_id": project["id"],
}
)
+ match_res = re.match(r"(\d+)x(\d+)", project["resolution"])
+ if match_res:
+ project_data['resolutionWidth'] = int(match_res.group(1))
+ project_data['resolutionHeight'] = int(match_res.group(2))
+ else:
+ log.warning(f"\'{project['resolution']}\' does not match the expected"
+ " format for the resolution, for example: 1920x1080")
+
return UpdateOne(
{"_id": project_doc["_id"]},
{
@@ -334,7 +342,7 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
if not project:
project = gazu.project.get_project_by_name(project["name"])
- print(f"Synchronizing {project['name']}...")
+ log.info(f"Synchronizing {project['name']}...")
# Get all assets from zou
all_assets = gazu.asset.all_assets_for_project(project)
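
As a small, self-contained illustration of the resolution parsing added above (the values are made up), a string such as "1920x1080" is split into width and height, while anything else falls into the warning branch instead of raising a ValueError:

```python
import re

for resolution in ("1920x1080", "4K"):
    match_res = re.match(r"(\d+)x(\d+)", resolution)
    if match_res:
        width, height = int(match_res.group(1)), int(match_res.group(2))
        print(resolution, "->", width, height)
    else:
        # Corresponds to the log.warning branch above.
        print(resolution, "-> does not match the expected <width>x<height> format")
```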
diff --git a/openpype/modules/kitsu/utils/update_zou_with_op.py b/openpype/modules/kitsu/utils/update_zou_with_op.py
index da924aa5ee..39baf31b93 100644
--- a/openpype/modules/kitsu/utils/update_zou_with_op.py
+++ b/openpype/modules/kitsu/utils/update_zou_with_op.py
@@ -12,7 +12,7 @@ from openpype.client import (
get_assets,
)
from openpype.pipeline import AvalonMongoDB
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
from openpype.modules.kitsu.utils.credentials import validate_credentials
diff --git a/openpype/modules/log_viewer/log_view_module.py b/openpype/modules/log_viewer/log_view_module.py
index 14be6b392e..da1628b71f 100644
--- a/openpype/modules/log_viewer/log_view_module.py
+++ b/openpype/modules/log_viewer/log_view_module.py
@@ -1,4 +1,3 @@
-from openpype.api import Logger
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
diff --git a/openpype/modules/shotgrid/lib/settings.py b/openpype/modules/shotgrid/lib/settings.py
index 924099f04b..5b0b728f55 100644
--- a/openpype/modules/shotgrid/lib/settings.py
+++ b/openpype/modules/shotgrid/lib/settings.py
@@ -1,4 +1,4 @@
-from openpype.api import get_system_settings, get_project_settings
+from openpype.settings import get_system_settings, get_project_settings
from openpype.modules.shotgrid.lib.const import MODULE_NAME
diff --git a/openpype/modules/sync_server/providers/local_drive.py b/openpype/modules/sync_server/providers/local_drive.py
index 01bc891d08..8f55dc529b 100644
--- a/openpype/modules/sync_server/providers/local_drive.py
+++ b/openpype/modules/sync_server/providers/local_drive.py
@@ -4,7 +4,7 @@ import shutil
import threading
import time
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.pipeline import Anatomy
from .abstract_provider import AbstractProvider
diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py
index eaaed39357..c1cf4dab44 100644
--- a/openpype/pipeline/create/context.py
+++ b/openpype/pipeline/create/context.py
@@ -7,7 +7,11 @@ from uuid import uuid4
from contextlib import contextmanager
from openpype.client import get_assets
-from openpype.host import INewPublisher
+from openpype.settings import (
+ get_system_settings,
+ get_project_settings
+)
+from openpype.host import IPublishHost
from openpype.pipeline import legacy_io
from openpype.pipeline.mongodb import (
AvalonMongoDB,
@@ -20,11 +24,6 @@ from .creator_plugins import (
discover_creator_plugins,
)
-from openpype.api import (
- get_system_settings,
- get_project_settings
-)
-
UpdateData = collections.namedtuple("UpdateData", ["instance", "changes"])
@@ -167,7 +166,10 @@ class AttributeValues:
return self._data.pop(key, default)
def reset_values(self):
- self._data = []
+ self._data = {}
+
+ def mark_as_stored(self):
+ self._origin_data = copy.deepcopy(self._data)
@property
def attr_defs(self):
@@ -304,6 +306,9 @@ class PublishAttributes:
for name in self._plugin_names_order:
yield name
+ def mark_as_stored(self):
+ self._origin_data = copy.deepcopy(self._data)
+
def data_to_store(self):
"""Convert attribute values to "data to store"."""
@@ -402,8 +407,12 @@ class CreatedInstance:
self.creator = creator
# Instance members may have actions on them
+ # TODO implement members logic
self._members = []
+ # Data that can be used for lifetime of object
+ self._transient_data = {}
+
# Create a copy of passed data to avoid changing them on the fly
data = copy.deepcopy(data or {})
# Store original value of passed data
@@ -596,6 +605,26 @@ class CreatedInstance:
return self
+ @property
+ def transient_data(self):
+ """Data stored for lifetime of instance object.
+
+ This data is not stored to the scene and is lost when the object is
+ deleted.
+
+ Can be used to store live objects. In some host implementations it is
+ not possible to reference a scene object by a unique identifier
+ (e.g. a node in Fusion). In that case it is handy to store the object
+ here. Use it that way only if the instance data are stored on the
+ node itself.
+
+ Returns:
+ Dict[str, Any]: Dictionary object where you can store data related
+ to instance for lifetime of instance object.
+ """
+
+ return self._transient_data
+
def changes(self):
"""Calculate and return changes."""
@@ -623,6 +652,25 @@ class CreatedInstance:
changes[key] = (old_value, None)
return changes
+ def mark_as_stored(self):
+ """Should be called when instance data are stored.
+
+ Origin data are replaced by current data so changes are cleared.
+ """
+
+ orig_keys = set(self._orig_data.keys())
+ for key, value in self._data.items():
+ orig_keys.discard(key)
+ if key in ("creator_attributes", "publish_attributes"):
+ continue
+ self._orig_data[key] = copy.deepcopy(value)
+
+ for key in orig_keys:
+ self._orig_data.pop(key)
+
+ self.creator_attributes.mark_as_stored()
+ self.publish_attributes.mark_as_stored()
+
@property
def creator_attributes(self):
return self._data["creator_attributes"]
@@ -636,6 +684,18 @@ class CreatedInstance:
return self._data["publish_attributes"]
def data_to_store(self):
+ """Collect data that contain json parsable types.
+
+ It is possible to recreate the instance using these data.
+
+ Todo:
+ We probably don't need OrderedDict. When data are loaded they
+ are not ordered anymore.
+
+ Returns:
+ OrderedDict: Ordered dictionary with instance data.
+ """
+
output = collections.OrderedDict()
for key, value in self._data.items():
if key in ("creator_attributes", "publish_attributes"):
@@ -771,7 +831,7 @@ class CreateContext:
"""
missing = set(
- INewPublisher.get_missing_publish_methods(host)
+ IPublishHost.get_missing_publish_methods(host)
)
return missing
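
The additions to `CreatedInstance` above follow a simple lifecycle: serialize JSON-compatible data with `data_to_store()`, let the host persist it, keep live scene objects only in `transient_data`, and call `mark_as_stored()` so `changes()` is empty again. The sketch below is a simplified mock of that flow under those assumptions, not the real `CreatedInstance` class.

```python
import copy


class InstanceStub:
    """Simplified mock of the store/mark_as_stored flow; not the real class."""

    def __init__(self, data):
        self._data = dict(data)
        self._orig_data = copy.deepcopy(self._data)
        self.transient_data = {}  # lives only as long as this object

    def __setitem__(self, key, value):
        self._data[key] = value

    def data_to_store(self):
        # Only JSON-compatible data should end up here.
        return copy.deepcopy(self._data)

    def mark_as_stored(self):
        # Origin data are replaced by current data, so changes() is empty again.
        self._orig_data = copy.deepcopy(self._data)

    def changes(self):
        return {
            key: (self._orig_data.get(key), value)
            for key, value in self._data.items()
            if self._orig_data.get(key) != value
        }


instance = InstanceStub({"subset": "reviewMain", "active": True})
instance["active"] = False                  # user edit -> tracked as a change
stored_payload = instance.data_to_store()   # host persists this in the scene
instance.transient_data["node"] = object()  # live object, never persisted
instance.mark_as_stored()
print(stored_payload["active"], instance.changes())  # False {}
```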
diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py
index bf2fdd2c5f..945a97a99c 100644
--- a/openpype/pipeline/create/creator_plugins.py
+++ b/openpype/pipeline/create/creator_plugins.py
@@ -9,7 +9,7 @@ from abc import (
import six
from openpype.settings import get_system_settings, get_project_settings
-from .subset_name import get_subset_name
+from openpype.lib import Logger
from openpype.pipeline.plugin_discover import (
discover,
register_plugin,
@@ -18,6 +18,7 @@ from openpype.pipeline.plugin_discover import (
deregister_plugin_path
)
+from .subset_name import get_subset_name
from .legacy_create import LegacyCreator
@@ -81,6 +82,13 @@ class BaseCreator:
# - we may use UI inside processing this attribute should be checked
self.headless = headless
+ self.apply_settings(project_settings, system_settings)
+
+ def apply_settings(self, project_settings, system_settings):
+ """Method called on initialization of plugin to apply settings."""
+
+ pass
+
@property
def identifier(self):
"""Identifier of creator (must be unique).
@@ -136,8 +144,6 @@ class BaseCreator:
"""
if self._log is None:
- from openpype.api import Logger
-
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
diff --git a/openpype/pipeline/load/__init__.py b/openpype/pipeline/load/__init__.py
index bf38a0b3c8..e96f64f2a4 100644
--- a/openpype/pipeline/load/__init__.py
+++ b/openpype/pipeline/load/__init__.py
@@ -5,6 +5,7 @@ from .utils import (
InvalidRepresentationContext,
get_repres_contexts,
+ get_contexts_for_repre_docs,
get_subset_contexts,
get_representation_context,
@@ -54,6 +55,7 @@ __all__ = (
"InvalidRepresentationContext",
"get_repres_contexts",
+ "get_contexts_for_repre_docs",
"get_subset_contexts",
"get_representation_context",
diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py
index 363120600c..784d4628f3 100644
--- a/openpype/pipeline/load/utils.py
+++ b/openpype/pipeline/load/utils.py
@@ -87,13 +87,20 @@ def get_repres_contexts(representation_ids, dbcon=None):
if not dbcon:
dbcon = legacy_io
- contexts = {}
if not representation_ids:
- return contexts
+ return {}
project_name = dbcon.active_project()
repre_docs = get_representations(project_name, representation_ids)
+ return get_contexts_for_repre_docs(project_name, repre_docs)
+
+
+def get_contexts_for_repre_docs(project_name, repre_docs):
+ contexts = {}
+ if not repre_docs:
+ return contexts
+
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
diff --git a/openpype/pipeline/plugin_discover.py b/openpype/pipeline/plugin_discover.py
index 004e530b1c..7edd9ac290 100644
--- a/openpype/pipeline/plugin_discover.py
+++ b/openpype/pipeline/plugin_discover.py
@@ -2,7 +2,7 @@ import os
import inspect
import traceback
-from openpype.api import Logger
+from openpype.lib import Logger
from openpype.lib.python_module_tools import (
modules_from_path,
classes_from_module,
diff --git a/openpype/pipeline/workfile/abstract_template_loader.py b/openpype/pipeline/workfile/abstract_template_loader.py
deleted file mode 100644
index e2fbea98ca..0000000000
--- a/openpype/pipeline/workfile/abstract_template_loader.py
+++ /dev/null
@@ -1,528 +0,0 @@
-import os
-from abc import ABCMeta, abstractmethod
-
-import six
-import logging
-from functools import reduce
-
-from openpype.client import (
- get_asset_by_name,
- get_linked_assets,
-)
-from openpype.settings import get_project_settings
-from openpype.lib import (
- StringTemplate,
- Logger,
- filter_profiles,
-)
-from openpype.pipeline import legacy_io, Anatomy
-from openpype.pipeline.load import (
- get_loaders_by_name,
- get_representation_context,
- load_with_repre_context,
-)
-
-from .build_template_exceptions import (
- TemplateAlreadyImported,
- TemplateLoadingFailed,
- TemplateProfileNotFound,
- TemplateNotFound
-)
-
-log = logging.getLogger(__name__)
-
-
-def update_representations(entities, entity):
- if entity['context']['subset'] not in entities:
- entities[entity['context']['subset']] = entity
- else:
- current = entities[entity['context']['subset']]
- incomming = entity
- entities[entity['context']['subset']] = max(
- current, incomming,
- key=lambda entity: entity["context"].get("version", -1))
-
- return entities
-
-
-def parse_loader_args(loader_args):
- if not loader_args:
- return dict()
- try:
- parsed_args = eval(loader_args)
- if not isinstance(parsed_args, dict):
- return dict()
- else:
- return parsed_args
- except Exception as err:
- print(
- "Error while parsing loader arguments '{}'.\n{}: {}\n\n"
- "Continuing with default arguments. . .".format(
- loader_args,
- err.__class__.__name__,
- err))
- return dict()
-
-
-@six.add_metaclass(ABCMeta)
-class AbstractTemplateLoader:
- """
- Abstraction of Template Loader.
- Properties:
- template_path : property to get current template path
- Methods:
- import_template : Abstract Method. Used to load template,
- depending on current host
- get_template_nodes : Abstract Method. Used to query nodes acting
- as placeholders. Depending on current host
- """
-
- _log = None
-
- def __init__(self, placeholder_class):
- # TODO template loader should expect host as and argument
- # - host have all responsibility for most of code (also provide
- # placeholder class)
- # - also have responsibility for current context
- # - this won't work in DCCs where multiple workfiles with
- # different contexts can be opened at single time
- # - template loader should have ability to change context
- project_name = legacy_io.active_project()
- asset_name = legacy_io.Session["AVALON_ASSET"]
-
- self.loaders_by_name = get_loaders_by_name()
- self.current_asset = asset_name
- self.project_name = project_name
- self.host_name = legacy_io.Session["AVALON_APP"]
- self.task_name = legacy_io.Session["AVALON_TASK"]
- self.placeholder_class = placeholder_class
- self.current_asset_doc = get_asset_by_name(project_name, asset_name)
- self.task_type = (
- self.current_asset_doc
- .get("data", {})
- .get("tasks", {})
- .get(self.task_name, {})
- .get("type")
- )
-
- self.log.info(
- "BUILDING ASSET FROM TEMPLATE :\n"
- "Starting templated build for {asset} in {project}\n\n"
- "Asset : {asset}\n"
- "Task : {task_name} ({task_type})\n"
- "Host : {host}\n"
- "Project : {project}\n".format(
- asset=self.current_asset,
- host=self.host_name,
- project=self.project_name,
- task_name=self.task_name,
- task_type=self.task_type
- ))
- # Skip if there is no loader
- if not self.loaders_by_name:
- self.log.warning(
- "There is no registered loaders. No assets will be loaded")
- return
-
- @property
- def log(self):
- if self._log is None:
- self._log = Logger.get_logger(self.__class__.__name__)
- return self._log
-
- def template_already_imported(self, err_msg):
- """In case template was already loaded.
- Raise the error as a default action.
- Override this method in your template loader implementation
- to manage this case."""
- self.log.error("{}: {}".format(
- err_msg.__class__.__name__,
- err_msg))
- raise TemplateAlreadyImported(err_msg)
-
- def template_loading_failed(self, err_msg):
- """In case template loading failed
- Raise the error as a default action.
- Override this method in your template loader implementation
- to manage this case.
- """
- self.log.error("{}: {}".format(
- err_msg.__class__.__name__,
- err_msg))
- raise TemplateLoadingFailed(err_msg)
-
- @property
- def template_path(self):
- """
- Property returning template path. Avoiding setter.
- Getting template path from open pype settings based on current avalon
- session and solving the path variables if needed.
- Returns:
- str: Solved template path
- Raises:
- TemplateProfileNotFound: No profile found from settings for
- current avalon session
- KeyError: Could not solve path because a key does not exists
- in avalon context
- TemplateNotFound: Solved path does not exists on current filesystem
- """
- project_name = self.project_name
- host_name = self.host_name
- task_name = self.task_name
- task_type = self.task_type
-
- anatomy = Anatomy(project_name)
- project_settings = get_project_settings(project_name)
-
- build_info = project_settings[host_name]["templated_workfile_build"]
- profile = filter_profiles(
- build_info["profiles"],
- {
- "task_types": task_type,
- "task_names": task_name
- }
- )
-
- if not profile:
- raise TemplateProfileNotFound(
- "No matching profile found for task '{}' of type '{}' "
- "with host '{}'".format(task_name, task_type, host_name)
- )
-
- path = profile["path"]
- if not path:
- raise TemplateLoadingFailed(
- "Template path is not set.\n"
- "Path need to be set in {}\\Template Workfile Build "
- "Settings\\Profiles".format(host_name.title()))
-
- # Try fill path with environments and anatomy roots
- fill_data = {
- key: value
- for key, value in os.environ.items()
- }
- fill_data["root"] = anatomy.roots
- result = StringTemplate.format_template(path, fill_data)
- if result.solved:
- path = result.normalized()
-
- if path and os.path.exists(path):
- self.log.info("Found template at: '{}'".format(path))
- return path
-
- solved_path = None
- while True:
- try:
- solved_path = anatomy.path_remapper(path)
- except KeyError as missing_key:
- raise KeyError(
- "Could not solve key '{}' in template path '{}'".format(
- missing_key, path))
-
- if solved_path is None:
- solved_path = path
- if solved_path == path:
- break
- path = solved_path
-
- solved_path = os.path.normpath(solved_path)
- if not os.path.exists(solved_path):
- raise TemplateNotFound(
- "Template found in openPype settings for task '{}' with host "
- "'{}' does not exists. (Not found : {})".format(
- task_name, host_name, solved_path))
-
- self.log.info("Found template at: '{}'".format(solved_path))
-
- return solved_path
-
- def populate_template(self, ignored_ids=None):
- """
- Use template placeholders to load assets and parent them in hierarchy
- Arguments :
- ignored_ids :
- Returns:
- None
- """
-
- loaders_by_name = self.loaders_by_name
- current_asset_doc = self.current_asset_doc
- linked_assets = get_linked_assets(current_asset_doc)
-
- ignored_ids = ignored_ids or []
- placeholders = self.get_placeholders()
- self.log.debug("Placeholders found in template: {}".format(
- [placeholder.name for placeholder in placeholders]
- ))
- for placeholder in placeholders:
- self.log.debug("Start to processing placeholder {}".format(
- placeholder.name
- ))
- placeholder_representations = self.get_placeholder_representations(
- placeholder,
- current_asset_doc,
- linked_assets
- )
-
- if not placeholder_representations:
- self.log.info(
- "There's no representation for this placeholder: "
- "{}".format(placeholder.name)
- )
- continue
-
- for representation in placeholder_representations:
- self.preload(placeholder, loaders_by_name, representation)
-
- if self.load_data_is_incorrect(
- placeholder,
- representation,
- ignored_ids):
- continue
-
- self.log.info(
- "Loading {}_{} with loader {}\n"
- "Loader arguments used : {}".format(
- representation['context']['asset'],
- representation['context']['subset'],
- placeholder.loader_name,
- placeholder.loader_args))
-
- try:
- container = self.load(
- placeholder, loaders_by_name, representation)
- except Exception:
- self.load_failed(placeholder, representation)
- else:
- self.load_succeed(placeholder, container)
- finally:
- self.postload(placeholder)
-
- def get_placeholder_representations(
- self, placeholder, current_asset_doc, linked_asset_docs
- ):
- placeholder_representations = placeholder.get_representations(
- current_asset_doc,
- linked_asset_docs
- )
- for repre_doc in reduce(
- update_representations,
- placeholder_representations,
- dict()
- ).values():
- yield repre_doc
-
- def load_data_is_incorrect(
- self, placeholder, last_representation, ignored_ids):
- if not last_representation:
- self.log.warning(placeholder.err_message())
- return True
- if (str(last_representation['_id']) in ignored_ids):
- print("Ignoring : ", last_representation['_id'])
- return True
- return False
-
- def preload(self, placeholder, loaders_by_name, last_representation):
- pass
-
- def load(self, placeholder, loaders_by_name, last_representation):
- repre = get_representation_context(last_representation)
- return load_with_repre_context(
- loaders_by_name[placeholder.loader_name],
- repre,
- options=parse_loader_args(placeholder.loader_args))
-
- def load_succeed(self, placeholder, container):
- placeholder.parent_in_hierarchy(container)
-
- def load_failed(self, placeholder, last_representation):
- self.log.warning(
- "Got error trying to load {}:{} with {}".format(
- last_representation['context']['asset'],
- last_representation['context']['subset'],
- placeholder.loader_name
- ),
- exc_info=True
- )
-
- def postload(self, placeholder):
- placeholder.clean()
-
- def update_missing_containers(self):
- loaded_containers_ids = self.get_loaded_containers_by_id()
- self.populate_template(ignored_ids=loaded_containers_ids)
-
- def get_placeholders(self):
- placeholders = map(self.placeholder_class, self.get_template_nodes())
- valid_placeholders = filter(
- lambda i: i.is_valid,
- placeholders
- )
- sorted_placeholders = list(sorted(
- valid_placeholders,
- key=lambda i: i.order
- ))
- return sorted_placeholders
-
- @abstractmethod
- def get_loaded_containers_by_id(self):
- """
- Collect already loaded containers for updating scene
- Return:
- dict (string, node): A dictionnary id as key
- and containers as value
- """
- pass
-
- @abstractmethod
- def import_template(self, template_path):
- """
- Import template in current host
- Args:
- template_path (str): fullpath to current task and
- host's template file
- Return:
- None
- """
- pass
-
- @abstractmethod
- def get_template_nodes(self):
- """
- Returning a list of nodes acting as host placeholders for
- templating. The data representation is by user.
- AbstractLoadTemplate (and LoadTemplate) won't directly manipulate nodes
- Args :
- None
- Returns:
- list(AnyNode): Solved template path
- """
- pass
-
-
-@six.add_metaclass(ABCMeta)
-class AbstractPlaceholder:
- """Abstraction of placeholders logic.
-
- Properties:
- required_keys: A list of mandatory keys to decribe placeholder
- and assets to load.
- optional_keys: A list of optional keys to decribe
- placeholder and assets to load
- loader_name: Name of linked loader to use while loading assets
-
- Args:
- identifier (str): Placeholder identifier. Should be possible to be
- used as identifier in "a scene" (e.g. unique node name).
- """
-
- required_keys = {
- "builder_type",
- "family",
- "representation",
- "order",
- "loader",
- "loader_args"
- }
- optional_keys = {}
-
- def __init__(self, identifier):
- self._log = None
- self._name = identifier
- self.get_data(identifier)
-
- @property
- def log(self):
- if self._log is None:
- self._log = Logger.get_logger(repr(self))
- return self._log
-
- def __repr__(self):
- return "< {} {} >".format(self.__class__.__name__, self.name)
-
- @property
- def name(self):
- return self._name
-
- @property
- def loader_args(self):
- return self.data["loader_args"]
-
- @property
- def builder_type(self):
- return self.data["builder_type"]
-
- @property
- def order(self):
- return self.data["order"]
-
- @property
- def loader_name(self):
- """Return placeholder loader name.
-
- Returns:
- str: Loader name that will be used to load placeholder
- representations.
- """
-
- return self.data["loader"]
-
- @property
- def is_valid(self):
- """Test validity of placeholder.
-
- i.e.: every required key exists in placeholder data
-
- Returns:
- bool: True if every key is in data
- """
-
- if set(self.required_keys).issubset(self.data.keys()):
- self.log.debug("Valid placeholder : {}".format(self.name))
- return True
- self.log.info("Placeholder is not valid : {}".format(self.name))
- return False
-
- @abstractmethod
- def parent_in_hierarchy(self, container):
- """Place loaded container in correct hierarchy given by placeholder
-
- Args:
- container (Dict[str, Any]): Loaded container created by loader.
- """
-
- pass
-
- @abstractmethod
- def clean(self):
- """Clean placeholder from hierarchy after loading assets."""
-
- pass
-
- @abstractmethod
- def get_representations(self, current_asset_doc, linked_asset_docs):
- """Query representations based on placeholder data.
-
- Args:
- current_asset_doc (Dict[str, Any]): Document of current
- context asset.
- linked_asset_docs (List[Dict[str, Any]]): Documents of assets
- linked to current context asset.
-
- Returns:
- Iterable[Dict[str, Any]]: Representations that are matching
- placeholder filters.
- """
-
- pass
-
- @abstractmethod
- def get_data(self, identifier):
- """Collect information about placeholder by identifier.
-
- Args:
- identifier (str): A unique placeholder identifier defined by
- implementation.
- """
-
- pass
diff --git a/openpype/pipeline/workfile/build_template.py b/openpype/pipeline/workfile/build_template.py
deleted file mode 100644
index 3328dfbc9e..0000000000
--- a/openpype/pipeline/workfile/build_template.py
+++ /dev/null
@@ -1,72 +0,0 @@
-import os
-from importlib import import_module
-from openpype.lib import classes_from_module
-from openpype.host import HostBase
-from openpype.pipeline import registered_host
-
-from .abstract_template_loader import (
- AbstractPlaceholder,
- AbstractTemplateLoader)
-
-from .build_template_exceptions import (
- TemplateLoadingFailed,
- TemplateAlreadyImported,
- MissingHostTemplateModule,
- MissingTemplatePlaceholderClass,
- MissingTemplateLoaderClass
-)
-
-_module_path_format = 'openpype.hosts.{host}.api.template_loader'
-
-
-def build_workfile_template(*args):
- template_loader = build_template_loader()
- try:
- template_loader.import_template(template_loader.template_path)
- except TemplateAlreadyImported as err:
- template_loader.template_already_imported(err)
- except TemplateLoadingFailed as err:
- template_loader.template_loading_failed(err)
- else:
- template_loader.populate_template()
-
-
-def update_workfile_template(*args):
- template_loader = build_template_loader()
- template_loader.update_missing_containers()
-
-
-def build_template_loader():
- # TODO refactor to use advantage of 'HostBase' and don't import dynamically
- # - hosts should have methods that gives option to return builders
- host = registered_host()
- if isinstance(host, HostBase):
- host_name = host.name
- else:
- host_name = os.environ.get("AVALON_APP")
- if not host_name:
- host_name = host.__name__.split(".")[-2]
-
- module_path = _module_path_format.format(host=host_name)
- module = import_module(module_path)
- if not module:
- raise MissingHostTemplateModule(
- "No template loader found for host {}".format(host_name))
-
- template_loader_class = classes_from_module(
- AbstractTemplateLoader,
- module
- )
- template_placeholder_class = classes_from_module(
- AbstractPlaceholder,
- module
- )
-
- if not template_loader_class:
- raise MissingTemplateLoaderClass()
- template_loader_class = template_loader_class[0]
-
- if not template_placeholder_class:
- raise MissingTemplatePlaceholderClass()
- template_placeholder_class = template_placeholder_class[0]
- return template_loader_class(template_placeholder_class)
diff --git a/openpype/pipeline/workfile/build_template_exceptions.py b/openpype/pipeline/workfile/build_template_exceptions.py
deleted file mode 100644
index 7a5075e3dc..0000000000
--- a/openpype/pipeline/workfile/build_template_exceptions.py
+++ /dev/null
@@ -1,35 +0,0 @@
-class MissingHostTemplateModule(Exception):
- """Error raised when expected module does not exists"""
- pass
-
-
-class MissingTemplatePlaceholderClass(Exception):
- """Error raised when module doesn't implement a placeholder class"""
- pass
-
-
-class MissingTemplateLoaderClass(Exception):
- """Error raised when module doesn't implement a template loader class"""
- pass
-
-
-class TemplateNotFound(Exception):
- """Exception raised when template does not exist."""
- pass
-
-
-class TemplateProfileNotFound(Exception):
- """Exception raised when current profile
- doesn't match any template profile"""
- pass
-
-
-class TemplateAlreadyImported(Exception):
- """Error raised when Template was already imported by host for
- this session"""
- pass
-
-
-class TemplateLoadingFailed(Exception):
- """Error raised whend Template loader was unable to load the template"""
- pass
diff --git a/openpype/pipeline/workfile/build_workfile.py b/openpype/pipeline/workfile/build_workfile.py
index 0b8a444436..87b9df158f 100644
--- a/openpype/pipeline/workfile/build_workfile.py
+++ b/openpype/pipeline/workfile/build_workfile.py
@@ -1,3 +1,14 @@
+"""Workfile build based on settings.
+
+The workfile builder fills the workfile based on project settings. The
+advantage is that it only needs access to settings. The disadvantage is that
+it is hard to tailor the build per context and be explicit about loaded
+content.
+
+For a more explicit workfile build the 'AbstractTemplateBuilder' from
+'~/openpype/pipeline/workfile/workfile_template_builder' is recommended. It
+gives more control over how the build happens but requires more code to
+achieve it.
+"""
+
import os
import re
import collections
diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py
new file mode 100644
index 0000000000..582657c735
--- /dev/null
+++ b/openpype/pipeline/workfile/workfile_template_builder.py
@@ -0,0 +1,1451 @@
+"""Workfile build mechanism using workfile templates.
+
+Build templates are manually prepared using plugin definitions that create
+placeholders inside the template, which are populated on import.
+
+This approach is very explicit and allows very specific build logic that can
+be targeted by task types and names.
+
+Placeholders are created using placeholder plugins which take care of the
+logic and data of placeholder items. 'PlaceholderItem' is used to keep track
+of their progress.
+"""
+
+import os
+import re
+import collections
+import copy
+from abc import ABCMeta, abstractmethod
+
+import six
+
+from openpype.client import (
+ get_asset_by_name,
+ get_linked_assets,
+ get_representations,
+)
+from openpype.settings import (
+ get_project_settings,
+ get_system_settings,
+)
+from openpype.host import HostBase
+from openpype.lib import (
+ Logger,
+ StringTemplate,
+ filter_profiles,
+ attribute_definitions,
+)
+from openpype.lib.attribute_definitions import get_attributes_keys
+from openpype.pipeline import legacy_io, Anatomy
+from openpype.pipeline.load import (
+ get_loaders_by_name,
+ get_contexts_for_repre_docs,
+ load_with_repre_context,
+)
+from openpype.pipeline.create import get_legacy_creator_by_name
+
+
+class TemplateNotFound(Exception):
+ """Exception raised when template does not exist."""
+ pass
+
+
+class TemplateProfileNotFound(Exception):
+ """Exception raised when current profile
+ doesn't match any template profile"""
+ pass
+
+
+class TemplateAlreadyImported(Exception):
+ """Error raised when Template was already imported by host for
+ this session"""
+ pass
+
+
+class TemplateLoadFailed(Exception):
+ """Error raised whend Template loader was unable to load the template"""
+ pass
+
+
+@six.add_metaclass(ABCMeta)
+class AbstractTemplateBuilder(object):
+ """Abstraction of Template Builder.
+
+ Builder cares about context, shared data, cache, discovery of plugins
+ and trigger logic. Provides a public api for the host workfile build
+ system.
+
+ The rest of the logic is based on plugins that care about collection
+ and creation of placeholder items.
+
+ Population of placeholders happens in loops. Each loop will collect all
+ available placeholders, skip already populated ones, and populate the
+ rest.
+
+ The builder has 2 types of shared data: refresh lifetime data, which
+ are cleared on refresh, and populate lifetime data, which are cleared
+ after each loop of placeholder population.
+
+ Args:
+ host (Union[HostBase, ModuleType]): Implementation of host.
+ """
+
+ _log = None
+
+ def __init__(self, host):
+ # Get host name
+ if isinstance(host, HostBase):
+ host_name = host.name
+ else:
+ host_name = os.environ.get("AVALON_APP")
+
+ self._host = host
+ self._host_name = host_name
+
+ # Shared data across placeholder plugins
+ self._shared_data = {}
+ self._shared_populate_data = {}
+
+ # Where created objects of placeholder plugins will be stored
+ self._placeholder_plugins = None
+ self._loaders_by_name = None
+ self._creators_by_name = None
+
+ self._system_settings = None
+ self._project_settings = None
+
+ self._current_asset_doc = None
+ self._linked_asset_docs = None
+ self._task_type = None
+
+ @property
+ def project_name(self):
+ return legacy_io.active_project()
+
+ @property
+ def current_asset_name(self):
+ return legacy_io.Session["AVALON_ASSET"]
+
+ @property
+ def current_task_name(self):
+ return legacy_io.Session["AVALON_TASK"]
+
+ @property
+ def system_settings(self):
+ if self._system_settings is None:
+ self._system_settings = get_system_settings()
+ return self._system_settings
+
+ @property
+ def project_settings(self):
+ if self._project_settings is None:
+ self._project_settings = get_project_settings(self.project_name)
+ return self._project_settings
+
+ @property
+ def current_asset_doc(self):
+ if self._current_asset_doc is None:
+ self._current_asset_doc = get_asset_by_name(
+ self.project_name, self.current_asset_name
+ )
+ return self._current_asset_doc
+
+ @property
+ def linked_asset_docs(self):
+ if self._linked_asset_docs is None:
+ self._linked_asset_docs = get_linked_assets(
+ self.current_asset_doc
+ )
+ return self._linked_asset_docs
+
+ @property
+ def current_task_type(self):
+ asset_doc = self.current_asset_doc
+ if not asset_doc:
+ return None
+ return (
+ asset_doc
+ .get("data", {})
+ .get("tasks", {})
+ .get(self.current_task_name, {})
+ .get("type")
+ )
+
+ def get_placeholder_plugin_classes(self):
+ """Get placeholder plugin classes that can be used to build template.
+
+ Default implementation looks for method
+ 'get_workfile_build_placeholder_plugins' on host.
+
+ Returns:
+ List[PlaceholderPlugin]: Plugin classes available for host.
+ """
+
+ if hasattr(self._host, "get_workfile_build_placeholder_plugins"):
+ return self._host.get_workfile_build_placeholder_plugins()
+ return []
+
+ @property
+ def host(self):
+ """Access to host implementation.
+
+ Returns:
+ Union[HostBase, ModuleType]: Implementation of host.
+ """
+
+ return self._host
+
+ @property
+ def host_name(self):
+ """Name of 'host' implementation.
+
+ Returns:
+ str: Host's name.
+ """
+
+ return self._host_name
+
+ @property
+ def log(self):
+        """Dynamically created logger for the builder."""
+
+ if self._log is None:
+ self._log = Logger.get_logger(repr(self))
+ return self._log
+
+ def refresh(self):
+ """Reset cached data."""
+
+ self._placeholder_plugins = None
+ self._loaders_by_name = None
+ self._creators_by_name = None
+
+ self._current_asset_doc = None
+ self._linked_asset_docs = None
+ self._task_type = None
+
+ self._system_settings = None
+ self._project_settings = None
+
+ self.clear_shared_data()
+ self.clear_shared_populate_data()
+
+ def get_loaders_by_name(self):
+ if self._loaders_by_name is None:
+ self._loaders_by_name = get_loaders_by_name()
+ return self._loaders_by_name
+
+ def get_creators_by_name(self):
+ if self._creators_by_name is None:
+ self._creators_by_name = get_legacy_creator_by_name()
+ return self._creators_by_name
+
+ def get_shared_data(self, key):
+ """Receive shared data across plugins and placeholders.
+
+        This can be used to scan the scene only once to look for placeholder
+        items if their storage is unified; without shared data each
+        placeholder plugin would have to do it again.
+
+ Args:
+ key (str): Key under which are shared data stored.
+
+ Returns:
+ Union[None, Any]: None if key was not set.
+ """
+
+ return self._shared_data.get(key)
+
+ def set_shared_data(self, key, value):
+        """Store shared data across plugins and placeholders.
+
+        Store data that can be accessed from any future call. It is good
+        practice to check that the same value is not already stored under a
+        different key and that the key is not already used for something else.
+
+        Key should be self-explanatory about its content.
+            - wrong: 'asset'
+            - good: 'asset_name'
+
+        Args:
+            key (str): Key under which the value is stored.
+ value (Any): Value that should be stored under the key.
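+
+        Example:
+            A sketch of caching an expensive scene scan ('builder' stands for
+            the builder instance and 'collect_scene_nodes' is a hypothetical
+            host call)::
+
+                nodes = builder.get_shared_data("scene_placeholder_nodes")
+                if nodes is None:
+                    nodes = collect_scene_nodes()
+                    builder.set_shared_data("scene_placeholder_nodes", nodes)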
+ """
+
+ self._shared_data[key] = value
+
+ def clear_shared_data(self):
+ """Clear shared data.
+
+        Method only resets shared data to the default state.
+ """
+
+ self._shared_data = {}
+
+ def clear_shared_populate_data(self):
+        """Clear shared populate data.
+
+        These data are cleared after each loop of template population.
+
+        Method only resets shared populate data to the default state.
+        """
+
+ self._shared_populate_data = {}
+
+ def get_shared_populate_data(self, key):
+        """Receive shared populate data across plugins and placeholders.
+
+        These data are cleared after each loop of template population.
+
+        Args:
+            key (str): Key under which are shared data stored.
+
+        Returns:
+            Union[None, Any]: None if key was not set.
+        """
+
+ return self._shared_populate_data.get(key)
+
+ def set_shared_populate_data(self, key, value):
+        """Store shared populate data across plugins and placeholders.
+
+        These data are cleared after each loop of template population.
+
+        Store data that can be accessed from any future call. It is good
+        practice to check that the same value is not already stored under a
+        different key and that the key is not already used for something else.
+
+        Key should be self-explanatory about its content.
+            - wrong: 'asset'
+            - good: 'asset_name'
+
+        Args:
+            key (str): Key under which the value is stored.
+ value (Any): Value that should be stored under the key.
+ """
+
+ self._shared_populate_data[key] = value
+
+ @property
+ def placeholder_plugins(self):
+ """Access to initialized placeholder plugins.
+
+ Returns:
+ List[PlaceholderPlugin]: Initialized plugins available for host.
+ """
+
+ if self._placeholder_plugins is None:
+ placeholder_plugins = {}
+ for cls in self.get_placeholder_plugin_classes():
+ try:
+ plugin = cls(self)
+ placeholder_plugins[plugin.identifier] = plugin
+
+ except Exception:
+ self.log.warning(
+ "Failed to initialize placeholder plugin {}".format(
+ cls.__name__
+ ),
+ exc_info=True
+ )
+
+ self._placeholder_plugins = placeholder_plugins
+ return self._placeholder_plugins
+
+ def create_placeholder(self, plugin_identifier, placeholder_data):
+ """Create new placeholder using plugin identifier and data.
+
+ Args:
+            plugin_identifier (str): Identifier of plugin. That's how the
+                builder knows which plugin should be used.
+ placeholder_data (Dict[str, Any]): Placeholder item data. They
+ should match options required by the plugin.
+
+ Returns:
+ PlaceholderItem: Created placeholder item.
+ """
+
+ plugin = self.placeholder_plugins[plugin_identifier]
+ return plugin.create_placeholder(placeholder_data)
+
+ def get_placeholders(self):
+ """Collect placeholder items from scene.
+
+        Each placeholder plugin can collect its placeholders and return them.
+        This method does not use cached values but always goes through the scene.
+
+ Returns:
+ List[PlaceholderItem]: Sorted placeholder items.
+ """
+
+ placeholders = []
+ for placeholder_plugin in self.placeholder_plugins.values():
+ result = placeholder_plugin.collect_placeholders()
+ if result:
+ placeholders.extend(result)
+
+ return list(sorted(
+ placeholders,
+ key=lambda i: i.order
+ ))
+
+ def build_template(self, template_path=None, level_limit=None):
+ """Main callback for building workfile from template path.
+
+ Todo:
+ Handle report of populated placeholders from
+ 'populate_scene_placeholders' to be shown to a user.
+
+ Args:
+ template_path (str): Path to a template file with placeholders.
+ Template from settings 'get_template_path' used when not
+ passed.
+ level_limit (int): Limit of populate loops. Related to
+ 'populate_scene_placeholders' method.
+ """
+
+ if template_path is None:
+ template_path = self.get_template_path()
+ self.import_template(template_path)
+ self.populate_scene_placeholders(level_limit)
+
+ def rebuild_template(self):
+ """Go through existing placeholders in scene and update them.
+
+        This may not make sense for all plugin types so it is optional
+        logic for plugins.
+
+ Note:
+ Logic is not importing the template again but using placeholders
+ that were already available. We should maybe change the method
+ name.
+
+ Question:
+ Should this also handle subloops as it is possible that another
+ template is loaded during processing?
+ """
+
+ if not self.placeholder_plugins:
+ self.log.info("There are no placeholder plugins available.")
+ return
+
+ placeholders = self.get_placeholders()
+ if not placeholders:
+ self.log.info("No placeholders were found.")
+ return
+
+ for placeholder in placeholders:
+ plugin = placeholder.plugin
+ plugin.repopulate_placeholder(placeholder)
+
+ self.clear_shared_populate_data()
+
+ @abstractmethod
+ def import_template(self, template_path):
+        """Import template into current host.
+
+        Should load the content of the template into the scene so
+        'populate_scene_placeholders' can be started.
+
+        Args:
+            template_path (str): Full path to the template file for the
+                current task and host.
+ """
+
+ pass
+
+ def _prepare_placeholders(self, placeholders):
+ """Run preparation part for placeholders on plugins.
+
+ Args:
+ placeholders (List[PlaceholderItem]): Placeholder items that will
+ be processed.
+ """
+
+ # Prepare placeholder items by plugin
+ plugins_by_identifier = {}
+ placeholders_by_plugin_id = collections.defaultdict(list)
+ for placeholder in placeholders:
+ plugin = placeholder.plugin
+ identifier = plugin.identifier
+ plugins_by_identifier[identifier] = plugin
+ placeholders_by_plugin_id[identifier].append(placeholder)
+
+ # Plugin should prepare data for passed placeholders
+ for identifier, placeholders in placeholders_by_plugin_id.items():
+ plugin = plugins_by_identifier[identifier]
+ plugin.prepare_placeholders(placeholders)
+
+ def populate_scene_placeholders(self, level_limit=None):
+ """Find placeholders in scene using plugins and process them.
+
+ This should happen after 'import_template'.
+
+        Collect available placeholders from the scene. After all of them are
+        processed the shared data are cleared. Placeholder items are collected
+        again and if there are any new ones the loop happens again. This can
+        be limited by defining 'level_limit'.
+
+        Placeholders are marked as processed so they're not re-processed. The
+        placeholder's 'scene_identifier' is used to identify which
+        placeholders were already processed.
+
+ Args:
+ level_limit (int): Level of loops that can happen. Default is 1000.
+ """
+
+ if not self.placeholder_plugins:
+ self.log.warning("There are no placeholder plugins available.")
+ return
+
+ placeholders = self.get_placeholders()
+ if not placeholders:
+ self.log.warning("No placeholders were found.")
+ return
+
+ # Avoid infinite loop
+ # - 1000 iterations of placeholders processing must be enough
+ if not level_limit:
+ level_limit = 1000
+
+ placeholder_by_scene_id = {
+ placeholder.scene_identifier: placeholder
+ for placeholder in placeholders
+ }
+ all_processed = len(placeholders) == 0
+        # Counter is checked at the end of a loop so the loop happens at least
+ # once.
+ iter_counter = 0
+ while not all_processed:
+ filtered_placeholders = []
+ for placeholder in placeholders:
+ if placeholder.finished:
+ continue
+
+ if placeholder.in_progress:
+ self.log.warning((
+ "Placeholder that should be processed"
+ " is already in progress."
+ ))
+ continue
+ filtered_placeholders.append(placeholder)
+
+ self._prepare_placeholders(filtered_placeholders)
+
+ for placeholder in filtered_placeholders:
+ placeholder.set_in_progress()
+ placeholder_plugin = placeholder.plugin
+ try:
+ placeholder_plugin.populate_placeholder(placeholder)
+
+ except Exception as exc:
+ self.log.warning(
+ (
+ "Failed to process placeholder {} with plugin {}"
+ ).format(
+ placeholder.scene_identifier,
+ placeholder_plugin.__class__.__name__
+ ),
+ exc_info=True
+ )
+ placeholder.set_failed(exc)
+
+ placeholder.set_finished()
+
+ # Clear shared data before getting new placeholders
+ self.clear_shared_populate_data()
+
+ iter_counter += 1
+ if iter_counter >= level_limit:
+ break
+
+ all_processed = True
+ collected_placeholders = self.get_placeholders()
+ for placeholder in collected_placeholders:
+ identifier = placeholder.scene_identifier
+ if identifier in placeholder_by_scene_id:
+ continue
+
+ all_processed = False
+ placeholder_by_scene_id[identifier] = placeholder
+ placeholders.append(placeholder)
+
+ self.refresh()
+
+ def _get_build_profiles(self):
+ """Get build profiles for workfile build template path.
+
+ Returns:
+ List[Dict[str, Any]]: Profiles for template path resolving.
+ """
+
+ return (
+ self.project_settings
+ [self.host_name]
+ ["templated_workfile_build"]
+ ["profiles"]
+ )
+
+ def get_template_path(self):
+        """Unified way to get the template path using settings.
+
+        Method depends on '_get_build_profiles' which should return filter
+        profiles to resolve the path to a template. Default implementation
+        looks into host settings:
+ - 'project_settings/{host name}/templated_workfile_build/profiles'
+
+ Returns:
+ str: Path to a template file with placeholders.
+
+ Raises:
+ TemplateProfileNotFound: When profiles are not filled.
+ TemplateLoadFailed: Profile was found but path is not set.
+            TemplateNotFound: Path was set but file does not exist.
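+
+        Example:
+            A single matching profile could look like this (the task type and
+            the path are illustrative only)::
+
+                {
+                    "task_types": ["Layout"],
+                    "task_names": [],
+                    "path": "{root[work]}/templates/layout_template.ma"
+                }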
+ """
+
+ host_name = self.host_name
+ project_name = self.project_name
+ task_name = self.current_task_name
+ task_type = self.current_task_type
+
+ build_profiles = self._get_build_profiles()
+ profile = filter_profiles(
+ build_profiles,
+ {
+ "task_types": task_type,
+ "task_names": task_name
+ }
+ )
+
+ if not profile:
+ raise TemplateProfileNotFound((
+ "No matching profile found for task '{}' of type '{}' "
+ "with host '{}'"
+ ).format(task_name, task_type, host_name))
+
+ path = profile["path"]
+ if not path:
+ raise TemplateLoadFailed((
+ "Template path is not set.\n"
+                "Path needs to be set in {}\\Template Workfile Build "
+ "Settings\\Profiles"
+ ).format(host_name.title()))
+
+ # Try fill path with environments and anatomy roots
+ anatomy = Anatomy(project_name)
+ fill_data = {
+ key: value
+ for key, value in os.environ.items()
+ }
+ fill_data["root"] = anatomy.roots
+ result = StringTemplate.format_template(path, fill_data)
+ if result.solved:
+ path = result.normalized()
+
+ if path and os.path.exists(path):
+ self.log.info("Found template at: '{}'".format(path))
+ return path
+
+ solved_path = None
+ while True:
+ try:
+ solved_path = anatomy.path_remapper(path)
+ except KeyError as missing_key:
+ raise KeyError(
+ "Could not solve key '{}' in template path '{}'".format(
+ missing_key, path))
+
+ if solved_path is None:
+ solved_path = path
+ if solved_path == path:
+ break
+ path = solved_path
+
+ solved_path = os.path.normpath(solved_path)
+ if not os.path.exists(solved_path):
+ raise TemplateNotFound(
+                "Template found in OpenPype settings for task '{}' with host "
+                "'{}' does not exist. (Not found: {})".format(
+ task_name, host_name, solved_path))
+
+ self.log.info("Found template at: '{}'".format(solved_path))
+
+ return solved_path
+
+
+@six.add_metaclass(ABCMeta)
+class PlaceholderPlugin(object):
+    """Plugin which cares about handling of placeholder items logic.
+
+    Plugins create and update placeholders in the scene and populate them on
+    template import. Populating means that logic happens in the scene based on
+    the placeholder data. The most common logic is to load representations
+    using loaders or to create instances in the scene.
+ """
+
+ label = None
+ _log = None
+
+ def __init__(self, builder):
+ self._builder = builder
+
+ @property
+ def builder(self):
+ """Access to builder which initialized the plugin.
+
+ Returns:
+            AbstractTemplateBuilder: Builder which initialized the plugin.
+ """
+
+ return self._builder
+
+ @property
+ def project_name(self):
+ return self._builder.project_name
+
+ @property
+ def log(self):
+ """Dynamically created logger for the plugin."""
+
+ if self._log is None:
+ self._log = Logger.get_logger(repr(self))
+ return self._log
+
+ @property
+ def identifier(self):
+ """Identifier which will be stored to placeholder.
+
+ Default implementation uses class name.
+
+ Returns:
+ str: Unique identifier of placeholder plugin.
+ """
+
+ return self.__class__.__name__
+
+ @abstractmethod
+ def create_placeholder(self, placeholder_data):
+        """Create new placeholder in scene and get its item.
+
+        It depends on the plugin implementation whether the placeholder uses
+        the current selection in the scene or creates a new node.
+
+ Args:
+ placeholder_data (Dict[str, Any]): Data that were created
+ based on attribute definitions from 'get_placeholder_options'.
+
+ Returns:
+ PlaceholderItem: Created placeholder item.
+ """
+
+ pass
+
+ @abstractmethod
+ def update_placeholder(self, placeholder_item, placeholder_data):
+ """Update placeholder item with new data.
+
+        New data should be propagated to the placeholder item object itself
+        and also into the scene.
+
+        Reason:
+            Some placeholder plugins may require a special way of propagating
+            the updates to the object.
+
+ Args:
+ placeholder_item (PlaceholderItem): Object of placeholder that
+ should be updated.
+ placeholder_data (Dict[str, Any]): Data related to placeholder.
+ Should match plugin options.
+ """
+
+ pass
+
+ @abstractmethod
+ def collect_placeholders(self):
+ """Collect placeholders from scene.
+
+ Returns:
+ List[PlaceholderItem]: Placeholder objects.
+ """
+
+ pass
+
+ def get_placeholder_options(self, options=None):
+        """Attribute definitions of placeholder options shown to the user.
+
+ Returns:
+ List[AbtractAttrDef]: Attribute definitions of placeholder options.
+ """
+
+ return []
+
+ def get_placeholder_keys(self):
+ """Get placeholder keys that are stored in scene.
+
+ Returns:
+            Set[str]: Keys of placeholder data that are stored in the scene.
+ """
+
+ option_keys = get_attributes_keys(self.get_placeholder_options())
+ option_keys.add("plugin_identifier")
+ return option_keys
+
+ def prepare_placeholders(self, placeholders):
+ """Preparation part of placeholders.
+
+ Args:
+ placeholders (List[PlaceholderItem]): List of placeholders that
+ will be processed.
+ """
+
+ pass
+
+ @abstractmethod
+ def populate_placeholder(self, placeholder):
+ """Process single placeholder item.
+
+        Processing of placeholders is defined by their order, thus they can't
+        be processed in batch.
+
+ Args:
+ placeholder (PlaceholderItem): Placeholder that should be
+ processed.
+ """
+
+ pass
+
+ def repopulate_placeholder(self, placeholder):
+ """Update scene with current context for passed placeholder.
+
+        Can be used to re-run placeholder logic (if it makes sense).
+ """
+
+ pass
+
+ def get_plugin_shared_data(self, key):
+ """Receive shared data across plugin and placeholders.
+
+ Using shared data from builder but stored under plugin identifier.
+
+ Args:
+ key (str): Key under which are shared data stored.
+
+ Returns:
+ Union[None, Any]: None if key was not set.
+ """
+
+ plugin_data = self.builder.get_shared_data(self.identifier)
+ if plugin_data is None:
+ return None
+ return plugin_data.get(key)
+
+ def set_plugin_shared_data(self, key, value):
+        """Store shared data across plugin and placeholders.
+
+        Using shared data from builder but stored under plugin identifier.
+
+        Key should be self-explanatory about its content.
+            - wrong: 'asset'
+            - good: 'asset_name'
+
+        Args:
+            key (str): Key under which the value is stored.
+ value (Any): Value that should be stored under the key.
+ """
+
+ plugin_data = self.builder.get_shared_data(self.identifier)
+ if plugin_data is None:
+ plugin_data = {}
+ plugin_data[key] = value
+ self.builder.set_shared_data(self.identifier, plugin_data)
+
+ def get_plugin_shared_populate_data(self, key):
+ """Receive shared data across plugin and placeholders.
+
+ Using shared populate data from builder but stored under plugin
+ identifier.
+
+        Shared populate data are cleaned up during the populate while loop.
+
+ Args:
+ key (str): Key under which are shared data stored.
+
+ Returns:
+ Union[None, Any]: None if key was not set.
+ """
+
+ plugin_data = self.builder.get_shared_populate_data(self.identifier)
+ if plugin_data is None:
+ return None
+ return plugin_data.get(key)
+
+ def set_plugin_shared_populate_data(self, key, value):
+        """Store shared populate data across plugin and placeholders.
+
+        Using shared populate data from builder but stored under plugin
+        identifier.
+
+        Key should be self-explanatory about its content.
+            - wrong: 'asset'
+            - good: 'asset_name'
+
+        Shared populate data are cleaned up during the populate while loop.
+
+        Args:
+            key (str): Key under which the value is stored.
+ value (Any): Value that should be stored under the key.
+ """
+
+ plugin_data = self.builder.get_shared_populate_data(self.identifier)
+ if plugin_data is None:
+ plugin_data = {}
+ plugin_data[key] = value
+ self.builder.set_shared_populate_data(self.identifier, plugin_data)
+
+
+class PlaceholderItem(object):
+ """Item representing single item in scene that is a placeholder to process.
+
+    Items are always created and updated by their plugins. Each plugin can use
+    a modified subclass of 'PlaceholderItem' but only to add more options, not
+    to change existing behavior.
+
+    Scene identifier is used to avoid processing of the placeholder item
+    multiple times so it must be unique across the whole workfile builder.
+
+ Args:
+ scene_identifier (str): Unique scene identifier. If placeholder is
+ created from the same "node" it must have same identifier.
+ data (Dict[str, Any]): Data related to placeholder. They're defined
+ by plugin.
+ plugin (PlaceholderPlugin): Plugin which created the placeholder item.
+ """
+
+ default_order = 100
+
+ def __init__(self, scene_identifier, data, plugin):
+ self._log = None
+ self._scene_identifier = scene_identifier
+ self._data = data
+ self._plugin = plugin
+
+ # Keep track about state of Placeholder process
+ self._state = 0
+
+ # Error messages to be shown in UI
+ # - all other messages should be logged
+ self._errors = [] # -> List[str]
+
+ @property
+ def plugin(self):
+ """Access to plugin which created placeholder.
+
+ Returns:
+ PlaceholderPlugin: Plugin object.
+ """
+
+ return self._plugin
+
+ @property
+ def builder(self):
+ """Access to builder.
+
+ Returns:
+ AbstractTemplateBuilder: Builder which is the top part of
+ placeholder.
+ """
+
+ return self.plugin.builder
+
+ @property
+ def data(self):
+ """Placeholder data which can modify how placeholder is processed.
+
+        Possible general keys
+            - order: Can define the order in which the placeholder is
+                processed. Lower == earlier.
+
+        Other keys are defined by the placeholder plugin and should be
+        validated on item creation.
+
+ Returns:
+ Dict[str, Any]: Placeholder item data.
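+
+        Example:
+            Data of a load placeholder could look like this (keys other than
+            'order' depend on the plugin, values are illustrative only)::
+
+                {
+                    "order": 100,
+                    "builder_type": "context_asset",
+                    "family": "model",
+                    "representation": "abc",
+                    "loader": "ReferenceLoader"
+                }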
+ """
+
+ return self._data
+
+ def to_dict(self):
+ """Create copy of item's data.
+
+ Returns:
+ Dict[str, Any]: Placeholder data.
+ """
+
+ return copy.deepcopy(self.data)
+
+ @property
+ def log(self):
+ if self._log is None:
+ self._log = Logger.get_logger(repr(self))
+ return self._log
+
+ def __repr__(self):
+ return "< {} {} >".format(self.__class__.__name__, self.name)
+
+ @property
+ def order(self):
+ """Order of item processing."""
+
+ order = self._data.get("order")
+ if order is None:
+ return self.default_order
+ return order
+
+ @property
+ def scene_identifier(self):
+ return self._scene_identifier
+
+ @property
+ def finished(self):
+ """Item was already processed."""
+
+ return self._state == 2
+
+ @property
+ def in_progress(self):
+ """Processing is in progress."""
+
+ return self._state == 1
+
+ def set_in_progress(self):
+ """Change to in progress state."""
+
+ self._state = 1
+
+ def set_finished(self):
+ """Change to finished state."""
+
+ self._state = 2
+
+ def set_failed(self, exception):
+ self.add_error(str(exception))
+
+ def add_error(self, error):
+        """Add error message which is shown to the user in UI."""
+
+ self._errors.append(error)
+
+ def get_errors(self):
+        """Error messages with which the placeholder process failed.
+
+        Returns:
+            List[str]: Error messages.
+ """
+
+ return self._errors
+
+
+class PlaceholderLoadMixin(object):
+ """Mixin prepared for loading placeholder plugins.
+
+ Implementation prepares options for placeholders with
+ 'get_load_plugin_options'.
+
+    Placeholder population is implemented in 'populate_load_placeholder'.
+
+    PlaceholderItem can have these optional methods implemented:
+ - 'load_failed' - called when loading of one representation failed
+ - 'load_succeed' - called when loading of one representation succeeded
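+
+    Example:
+        A rough sketch of a plugin combining this mixin with
+        'PlaceholderPlugin' (only the load related hooks are shown; the
+        remaining abstract methods of 'PlaceholderPlugin' are omitted and the
+        class name is hypothetical)::
+
+            class MyLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
+                def get_placeholder_options(self, options=None):
+                    return self.get_load_plugin_options(options)
+
+                def populate_placeholder(self, placeholder):
+                    self.populate_load_placeholder(placeholder)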
+ """
+
+ def get_load_plugin_options(self, options=None):
+ """Unified attribute definitions for load placeholder.
+
+ Common function for placeholder plugins used for loading of
+        representations. Use it in 'get_placeholder_options'.
+
+        Args:
+ options (Dict[str, Any]): Already available options which are used
+ as defaults for attributes.
+
+ Returns:
+ List[AbtractAttrDef]: Attribute definitions common for load
+ plugins.
+ """
+
+ loaders_by_name = self.builder.get_loaders_by_name()
+ loader_items = [
+ (loader_name, loader.label or loader_name)
+ for loader_name, loader in loaders_by_name.items()
+ ]
+
+ loader_items = list(sorted(loader_items, key=lambda i: i[1]))
+ options = options or {}
+ return [
+ attribute_definitions.UISeparatorDef(),
+ attribute_definitions.UILabelDef("Main attributes"),
+ attribute_definitions.UISeparatorDef(),
+
+ attribute_definitions.EnumDef(
+ "builder_type",
+ label="Asset Builder Type",
+ default=options.get("builder_type"),
+ items=[
+ ("context_asset", "Current asset"),
+ ("linked_asset", "Linked assets"),
+ ("all_assets", "All assets")
+ ],
+ tooltip=(
+ "Asset Builder Type\n"
+                    "\nBuilder type describes what the template loader will"
+                    " look for."
+                    "\ncontext_asset : Template loader will look for subsets"
+                    " of the current context asset (for asset 'bob' it will"
+                    " find subsets of 'bob')."
+                    "\nlinked_asset : Template loader will look for assets"
+                    " linked to the current context asset."
+                    "\nLinked assets are looked up in the database under"
+                    " field \"inputLinks\""
+ )
+ ),
+ attribute_definitions.TextDef(
+ "family",
+ label="Family",
+ default=options.get("family"),
+ placeholder="model, look, ..."
+ ),
+ attribute_definitions.TextDef(
+ "representation",
+ label="Representation name",
+ default=options.get("representation"),
+ placeholder="ma, abc, ..."
+ ),
+ attribute_definitions.EnumDef(
+ "loader",
+ label="Loader",
+ default=options.get("loader"),
+ items=loader_items,
+ tooltip=(
+ "Loader"
+ "\nDefines what OpenPype loader will be used to"
+ " load assets."
+                    "\nUsable loaders depend on current host's loader list."
+ "\nField is case sensitive."
+ )
+ ),
+ attribute_definitions.TextDef(
+ "loader_args",
+ label="Loader Arguments",
+ default=options.get("loader_args"),
+ placeholder='{"camera":"persp", "lights":True}',
+ tooltip=(
+                    "Loader Arguments"
+                    "\nDefines a dictionary of arguments used to load assets."
+                    "\nUsable arguments depend on current placeholder Loader."
+                    "\nField should be a valid python dict."
+ " Anything else will be ignored."
+ )
+ ),
+ attribute_definitions.NumberDef(
+ "order",
+ label="Order",
+ default=options.get("order") or 0,
+ decimals=0,
+ minimum=0,
+ maximum=999,
+ tooltip=(
+ "Order"
+ "\nOrder defines asset loading priority (0 to 999)"
+ "\nPriority rule is : \"lowest is first to load\"."
+ )
+ ),
+ attribute_definitions.UISeparatorDef(),
+ attribute_definitions.UILabelDef("Optional attributes"),
+ attribute_definitions.UISeparatorDef(),
+ attribute_definitions.TextDef(
+ "asset",
+ label="Asset filter",
+ default=options.get("asset"),
+ placeholder="regex filtering by asset name",
+ tooltip=(
+ "Filtering assets by matching field regex to asset's name"
+ )
+ ),
+ attribute_definitions.TextDef(
+ "subset",
+ label="Subset filter",
+ default=options.get("subset"),
+ placeholder="regex filtering by subset name",
+ tooltip=(
+ "Filtering assets by matching field regex to subset's name"
+ )
+ ),
+ attribute_definitions.TextDef(
+ "hierarchy",
+ label="Hierarchy filter",
+ default=options.get("hierarchy"),
+ placeholder="regex filtering by asset's hierarchy",
+ tooltip=(
+                    "Filtering assets by matching field regex to asset's hierarchy"
+ )
+ )
+ ]
+
+ def parse_loader_args(self, loader_args):
+        """Helper function to parse string of loader arguments.
+
+ Empty dictionary is returned if conversion fails.
+
+ Args:
+ loader_args (str): Loader args filled by user.
+
+ Returns:
+ Dict[str, Any]: Parsed arguments used as dictionary.
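+
+        Example:
+            Illustrative only ('plugin' stands for a plugin instance using
+            this mixin)::
+
+                plugin.parse_loader_args('{"camera": "persp"}')
+                # -> {"camera": "persp"}
+                plugin.parse_loader_args("invalid")
+                # -> {}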
+ """
+
+ if not loader_args:
+ return {}
+
+ try:
+ parsed_args = eval(loader_args)
+ if isinstance(parsed_args, dict):
+ return parsed_args
+
+ except Exception as err:
+ print(
+ "Error while parsing loader arguments '{}'.\n{}: {}\n\n"
+ "Continuing with default arguments. . .".format(
+ loader_args, err.__class__.__name__, err))
+
+ return {}
+
+ def _get_representations(self, placeholder):
+ """Prepared query of representations based on load options.
+
+ This function is directly connected to options defined in
+ 'get_load_plugin_options'.
+
+ Note:
+ This returns all representation documents from all versions of
+ matching subset. To filter for last version use
+ '_reduce_last_version_repre_docs'.
+
+ Args:
+ placeholder (PlaceholderItem): Item which should be populated.
+
+ Returns:
+ List[Dict[str, Any]]: Representation documents matching filters
+ from placeholder data.
+ """
+
+ project_name = self.builder.project_name
+ current_asset_doc = self.builder.current_asset_doc
+ linked_asset_docs = self.builder.linked_asset_docs
+
+ builder_type = placeholder.data["builder_type"]
+ if builder_type == "context_asset":
+ context_filters = {
+ "asset": [current_asset_doc["name"]],
+ "subset": [re.compile(placeholder.data["subset"])],
+ "hierarchy": [re.compile(placeholder.data["hierarchy"])],
+ "representation": [placeholder.data["representation"]],
+ "family": [placeholder.data["family"]]
+ }
+
+ elif builder_type != "linked_asset":
+ context_filters = {
+ "asset": [re.compile(placeholder.data["asset"])],
+ "subset": [re.compile(placeholder.data["subset"])],
+ "hierarchy": [re.compile(placeholder.data["hierarchy"])],
+ "representation": [placeholder.data["representation"]],
+ "family": [placeholder.data["family"]]
+ }
+
+ else:
+ asset_regex = re.compile(placeholder.data["asset"])
+ linked_asset_names = []
+ for asset_doc in linked_asset_docs:
+ asset_name = asset_doc["name"]
+ if asset_regex.match(asset_name):
+ linked_asset_names.append(asset_name)
+
+ context_filters = {
+ "asset": linked_asset_names,
+ "subset": [re.compile(placeholder.data["subset"])],
+ "hierarchy": [re.compile(placeholder.data["hierarchy"])],
+ "representation": [placeholder.data["representation"]],
+ "family": [placeholder.data["family"]],
+ }
+
+ return list(get_representations(
+ project_name,
+ context_filters=context_filters
+ ))
+
+ def _before_repre_load(self, placeholder, representation):
+        """Can be overridden. Is called before representation is loaded."""
+
+ pass
+
+ def _reduce_last_version_repre_docs(self, representations):
+        """Reduce representations to last version."""
+
+ mapping = {}
+ for repre_doc in representations:
+ repre_context = repre_doc["context"]
+
+ asset_name = repre_context["asset"]
+ subset_name = repre_context["subset"]
+ version = repre_context.get("version", -1)
+
+ if asset_name not in mapping:
+ mapping[asset_name] = {}
+
+ subset_mapping = mapping[asset_name]
+ if subset_name not in subset_mapping:
+ subset_mapping[subset_name] = collections.defaultdict(list)
+
+ version_mapping = subset_mapping[subset_name]
+ version_mapping[version].append(repre_doc)
+
+ output = []
+ for subset_mapping in mapping.values():
+ for version_mapping in subset_mapping.values():
+ last_version = tuple(sorted(version_mapping.keys()))[-1]
+ output.extend(version_mapping[last_version])
+ return output
+
+ def populate_load_placeholder(self, placeholder, ignore_repre_ids=None):
+        """Load placeholder is going to load matching representations.
+
+        Note:
+            'ignore_repre_ids' is there to avoid loading the same
+            representation again on load. But the representation can be loaded
+            with a different loader and a new version of the matching subset
+            could be published for the representation. We should maybe expect
+            containers.
+
+ Also import loaders don't have containers at all...
+
+ Args:
+ placeholder (PlaceholderItem): Placeholder item with information
+ about requested representations.
+ ignore_repre_ids (Iterable[Union[str, ObjectId]]): Representation
+ ids that should be skipped.
+ """
+
+ if ignore_repre_ids is None:
+ ignore_repre_ids = set()
+
+ # TODO check loader existence
+ loader_name = placeholder.data["loader"]
+ loader_args = placeholder.data["loader_args"]
+
+ placeholder_representations = self._get_representations(placeholder)
+
+ filtered_representations = []
+ for representation in self._reduce_last_version_repre_docs(
+ placeholder_representations
+ ):
+ repre_id = str(representation["_id"])
+ if repre_id not in ignore_repre_ids:
+ filtered_representations.append(representation)
+
+ if not filtered_representations:
+ self.log.info((
+ "There's no representation for this placeholder: {}"
+ ).format(placeholder.scene_identifier))
+ return
+
+ repre_load_contexts = get_contexts_for_repre_docs(
+ self.project_name, filtered_representations
+ )
+ loaders_by_name = self.builder.get_loaders_by_name()
+ for repre_load_context in repre_load_contexts.values():
+ representation = repre_load_context["representation"]
+ repre_context = representation["context"]
+ self._before_repre_load(
+ placeholder, representation
+ )
+ self.log.info(
+ "Loading {} from {} with loader {}\n"
+ "Loader arguments used : {}".format(
+ repre_context["subset"],
+ repre_context["asset"],
+ loader_name,
+ loader_args
+ )
+ )
+ try:
+ container = load_with_repre_context(
+ loaders_by_name[loader_name],
+ repre_load_context,
+ options=self.parse_loader_args(loader_args)
+ )
+
+ except Exception:
+ failed = True
+ self.load_failed(placeholder, representation)
+
+ else:
+ failed = False
+ self.load_succeed(placeholder, container)
+ self.cleanup_placeholder(placeholder, failed)
+
+ def load_failed(self, placeholder, representation):
+ if hasattr(placeholder, "load_failed"):
+ placeholder.load_failed(representation)
+
+ def load_succeed(self, placeholder, container):
+ if hasattr(placeholder, "load_succeed"):
+ placeholder.load_succeed(container)
+
+ def cleanup_placeholder(self, placeholder, failed):
+ """Cleanup placeholder after load of single representation.
+
+ Can be called multiple times during placeholder item populating and is
+ called even if loading failed.
+
+ Args:
+ placeholder (PlaceholderItem): Item which was just used to load
+ representation.
+ failed (bool): Loading of representation failed.
+ """
+
+ pass
+
+
+class LoadPlaceholderItem(PlaceholderItem):
+ """PlaceholderItem for plugin which is loading representations.
+
+ Connected to 'PlaceholderLoadMixin'.
+ """
+
+ def __init__(self, *args, **kwargs):
+ super(LoadPlaceholderItem, self).__init__(*args, **kwargs)
+ self._failed_representations = []
+
+ def get_errors(self):
+ if not self._failed_representations:
+ return []
+ message = (
+ "Failed to load {} representations using Loader {}"
+ ).format(
+ len(self._failed_representations),
+ self.data["loader"]
+ )
+ return [message]
+
+ def load_failed(self, representation):
+ self._failed_representations.append(representation)
diff --git a/openpype/plugins/publish/collect_from_create_context.py b/openpype/plugins/publish/collect_from_create_context.py
index 9236c698ed..fc0f97b187 100644
--- a/openpype/plugins/publish/collect_from_create_context.py
+++ b/openpype/plugins/publish/collect_from_create_context.py
@@ -25,7 +25,9 @@ class CollectFromCreateContext(pyblish.api.ContextPlugin):
for created_instance in create_context.instances:
instance_data = created_instance.data_to_store()
if instance_data["active"]:
- self.create_instance(context, instance_data)
+ self.create_instance(
+ context, instance_data, created_instance.transient_data
+ )
# Update global data to context
context.data.update(create_context.context_data_to_store())
@@ -37,7 +39,7 @@ class CollectFromCreateContext(pyblish.api.ContextPlugin):
legacy_io.Session[key] = value
os.environ[key] = value
- def create_instance(self, context, in_data):
+ def create_instance(self, context, in_data, transient_data):
subset = in_data["subset"]
# If instance data already contain families then use it
instance_families = in_data.get("families") or []
@@ -56,5 +58,8 @@ class CollectFromCreateContext(pyblish.api.ContextPlugin):
for key, value in in_data.items():
if key not in instance.data:
instance.data[key] = value
+
+ instance.data["transientData"] = transient_data
+
self.log.info("collected instance: {}".format(instance.data))
self.log.info("parsing data: {}".format(in_data))
diff --git a/openpype/plugins/publish/collect_settings.py b/openpype/plugins/publish/collect_settings.py
index d56eabd1b5..a418a6400c 100644
--- a/openpype/plugins/publish/collect_settings.py
+++ b/openpype/plugins/publish/collect_settings.py
@@ -1,5 +1,8 @@
from pyblish import api
-from openpype.api import get_current_project_settings, get_system_settings
+from openpype.settings import (
+ get_current_project_settings,
+ get_system_settings,
+)
class CollectSettings(api.ContextPlugin):
diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py
index c0760a5471..5f4d284740 100644
--- a/openpype/plugins/publish/integrate_hero_version.py
+++ b/openpype/plugins/publish/integrate_hero_version.py
@@ -4,8 +4,6 @@ import clique
import errno
import shutil
-from bson.objectid import ObjectId
-from pymongo import InsertOne, ReplaceOne
import pyblish.api
from openpype.client import (
@@ -14,10 +12,15 @@ from openpype.client import (
get_archived_representations,
get_representations,
)
+from openpype.client.operations import (
+ OperationsSession,
+ new_hero_version_doc,
+ prepare_hero_version_update_data,
+ prepare_representation_update_data,
+)
from openpype.lib import create_hard_link
from openpype.pipeline import (
- schema,
- legacy_io,
+ schema
)
from openpype.pipeline.publish import get_publish_template_name
@@ -187,35 +190,32 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
repre["name"].lower(): repre for repre in old_repres
}
+ op_session = OperationsSession()
+
+ entity_id = None
if old_version:
- new_version_id = old_version["_id"]
- else:
- new_version_id = ObjectId()
-
- new_hero_version = {
- "_id": new_version_id,
- "version_id": src_version_entity["_id"],
- "parent": src_version_entity["parent"],
- "type": "hero_version",
- "schema": "openpype:hero_version-1.0"
- }
- schema.validate(new_hero_version)
-
- # Don't make changes in database until everything is O.K.
- bulk_writes = []
+ entity_id = old_version["_id"]
+ new_hero_version = new_hero_version_doc(
+ src_version_entity["_id"],
+ src_version_entity["parent"],
+ entity_id=entity_id
+ )
if old_version:
self.log.debug("Replacing old hero version.")
- bulk_writes.append(
- ReplaceOne(
- {"_id": new_hero_version["_id"]},
- new_hero_version
- )
+ update_data = prepare_hero_version_update_data(
+ old_version, new_hero_version
+ )
+ op_session.update_entity(
+ project_name,
+ new_hero_version["type"],
+ old_version["_id"],
+ update_data
)
else:
self.log.debug("Creating first hero version.")
- bulk_writes.append(
- InsertOne(new_hero_version)
+ op_session.create_entity(
+ project_name, new_hero_version["type"], new_hero_version
)
# Separate old representations into `to replace` and `to delete`
@@ -235,7 +235,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repres = list(get_archived_representations(
project_name,
# Check what is type of archived representation
- version_ids=[new_version_id]
+ version_ids=[new_hero_version["_id"]]
))
archived_repres_by_name = {}
for repre in archived_repres:
@@ -382,12 +382,15 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
# Replace current representation
if repre_name_low in old_repres_to_replace:
old_repre = old_repres_to_replace.pop(repre_name_low)
+
repre["_id"] = old_repre["_id"]
- bulk_writes.append(
- ReplaceOne(
- {"_id": old_repre["_id"]},
- repre
- )
+ update_data = prepare_representation_update_data(
+ old_repre, repre)
+ op_session.update_entity(
+ project_name,
+ old_repre["type"],
+ old_repre["_id"],
+ update_data
)
# Unarchive representation
@@ -395,21 +398,21 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repre = archived_repres_by_name.pop(
repre_name_low
)
- old_id = archived_repre["old_id"]
- repre["_id"] = old_id
- bulk_writes.append(
- ReplaceOne(
- {"old_id": old_id},
- repre
- )
+ repre["_id"] = archived_repre["old_id"]
+ update_data = prepare_representation_update_data(
+ archived_repre, repre)
+ op_session.update_entity(
+ project_name,
+ old_repre["type"],
+ archived_repre["_id"],
+ update_data
)
# Create representation
else:
- repre["_id"] = ObjectId()
- bulk_writes.append(
- InsertOne(repre)
- )
+ repre.pop("_id", None)
+ op_session.create_entity(project_name, "representation",
+ repre)
self.path_checks = []
@@ -430,28 +433,22 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
archived_repre = archived_repres_by_name.pop(
repre_name_low
)
- repre["old_id"] = repre["_id"]
- repre["_id"] = archived_repre["_id"]
- repre["type"] = archived_repre["type"]
- bulk_writes.append(
- ReplaceOne(
- {"_id": archived_repre["_id"]},
- repre
- )
- )
+ changes = {"old_id": repre["_id"],
+ "_id": archived_repre["_id"],
+ "type": archived_repre["type"]}
+ op_session.update_entity(project_name,
+ archived_repre["type"],
+ archived_repre["_id"],
+ changes)
else:
- repre["old_id"] = repre["_id"]
- repre["_id"] = ObjectId()
+ repre["old_id"] = repre.pop("_id")
repre["type"] = "archived_representation"
- bulk_writes.append(
- InsertOne(repre)
- )
+ op_session.create_entity(project_name,
+ "archived_representation",
+ repre)
- if bulk_writes:
- legacy_io.database[project_name].bulk_write(
- bulk_writes
- )
+ op_session.commit()
# Remove backuped previous hero
if (
diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json
index cdf829db57..1517983569 100644
--- a/openpype/settings/defaults/project_settings/houdini.json
+++ b/openpype/settings/defaults/project_settings/houdini.json
@@ -1,15 +1,5 @@
{
- "shelves": [
- {
- "shelf_set_name": "OpenPype Shelves",
- "shelf_set_source_path": {
- "windows": "",
- "darwin": "",
- "linux": ""
- },
- "shelf_definition": []
- }
- ],
+ "shelves": [],
"create": {
"CreateArnoldAss": {
"enabled": true,
diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json
index feda219179..fa0dc7b1c4 100644
--- a/openpype/settings/defaults/project_settings/photoshop.json
+++ b/openpype/settings/defaults/project_settings/photoshop.json
@@ -40,7 +40,10 @@
"make_image_sequence": false,
"max_downscale_size": 8192,
"jpg_options": {
- "tags": []
+ "tags": [
+ "review",
+ "ftrackreview"
+ ]
},
"mov_options": {
"tags": [
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json
index 6ee02ca78f..0cbb684fc6 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_render_settings.json
@@ -140,7 +140,7 @@
},
{
"type": "label",
- "label": "Add additional options - put attribute and value, like AASamples"
+ "label": "Add additional options - put attribute and value, like defaultArnoldRenderOptions.AASamples = 4"
},
{
"type": "dict-modifiable",
@@ -276,7 +276,7 @@
},
{
"type": "label",
- "label": "Add additional options - put attribute and value, like aaFilterSize"
+ "label": "Add additional options - put attribute and value, like vraySettings.aaFilterSize = 1.5"
},
{
"type": "dict-modifiable",
@@ -405,7 +405,7 @@
},
{
"type": "label",
- "label": "Add additional options - put attribute and value, like reflectionMaxTraceDepth"
+ "label": "Add additional options - put attribute and value, like redshiftOptions.reflectionMaxTraceDepth = 3"
},
{
"type": "dict-modifiable",
diff --git a/openpype/style/__init__.py b/openpype/style/__init__.py
index b2a1a4ce6c..473fb42bb5 100644
--- a/openpype/style/__init__.py
+++ b/openpype/style/__init__.py
@@ -1,4 +1,5 @@
import os
+import copy
import json
import collections
import six
@@ -19,6 +20,9 @@ class _Cache:
disabled_entity_icon_color = None
deprecated_entity_font_color = None
+ colors_data = None
+ objected_colors = None
+
def get_style_image_path(image_name):
# All filenames are lowered
@@ -46,8 +50,11 @@ def _get_colors_raw_data():
def get_colors_data():
"""Only color data from stylesheet data."""
- data = _get_colors_raw_data()
- return data.get("color") or {}
+ if _Cache.colors_data is None:
+ data = _get_colors_raw_data()
+ color_data = data.get("color") or {}
+ _Cache.colors_data = color_data
+ return copy.deepcopy(_Cache.colors_data)
def _convert_color_values_to_objects(value):
@@ -75,17 +82,38 @@ def _convert_color_values_to_objects(value):
return parse_color(value)
-def get_objected_colors():
+def get_objected_colors(*keys):
"""Colors parsed from stylesheet data into color definitions.
+    You can pass multiple arguments to get a nested key from the colors data.
+    Because this function returns a deep copy of the cached data this allows
+    a much smaller dataset to be copied and thus results in a faster call.
+    It is however a micro-optimization in the area of 0.001s and smaller.
+
+    For example:
+    >>> get_objected_colors()  # copy of full colors dict
+    >>> get_objected_colors("font")
+    >>> get_objected_colors("loader", "asset-view")
+
+ Args:
+ *keys: Each key argument will return a key nested deeper in the
+ objected colors data.
+
Returns:
- dict: Parsed color objects by keys in data.
+ Any: Parsed color objects by keys in data.
"""
- colors_data = get_colors_data()
- output = {}
- for key, value in colors_data.items():
- output[key] = _convert_color_values_to_objects(value)
- return output
+ if _Cache.objected_colors is None:
+ colors_data = get_colors_data()
+ output = {}
+ for key, value in colors_data.items():
+ output[key] = _convert_color_values_to_objects(value)
+
+ _Cache.objected_colors = output
+
+ output = _Cache.objected_colors
+ for key in keys:
+ output = output[key]
+ return copy.deepcopy(output)
def _load_stylesheet():
diff --git a/openpype/style/color_defs.py b/openpype/style/color_defs.py
index 0f4e145ca0..f1eab38c24 100644
--- a/openpype/style/color_defs.py
+++ b/openpype/style/color_defs.py
@@ -296,7 +296,7 @@ class HSLColor:
if "%" in sat_str:
sat = float(sat_str.rstrip("%")) / 100
else:
- sat = float(sat)
+ sat = float(sat_str)
if "%" in light_str:
light = float(light_str.rstrip("%")) / 100
@@ -337,8 +337,8 @@ class HSLAColor:
as float (0-1 range).
Examples:
- "hsl(27, 0.7, 0.3)"
- "hsl(27, 70%, 30%)"
+ "hsla(27, 0.7, 0.3, 0.5)"
+ "hsla(27, 70%, 30%, 0.5)"
"""
def __init__(self, value):
modified_color = value.lower().strip()
@@ -350,7 +350,7 @@ class HSLAColor:
if "%" in sat_str:
sat = float(sat_str.rstrip("%")) / 100
else:
- sat = float(sat)
+ sat = float(sat_str)
if "%" in light_str:
light = float(light_str.rstrip("%")) / 100
diff --git a/openpype/style/data.json b/openpype/style/data.json
index 15d9472e3e..adda49de23 100644
--- a/openpype/style/data.json
+++ b/openpype/style/data.json
@@ -20,7 +20,7 @@
"color": {
"font": "#D3D8DE",
"font-hover": "#F0F2F5",
- "font-disabled": "#99A3B2",
+ "font-disabled": "#5b6779",
"font-view-selection": "#ffffff",
"font-view-hover": "#F0F2F5",
diff --git a/openpype/tools/creator/window.py b/openpype/tools/creator/window.py
index a3937d6a40..e2396ed29e 100644
--- a/openpype/tools/creator/window.py
+++ b/openpype/tools/creator/window.py
@@ -6,7 +6,7 @@ from Qt import QtWidgets, QtCore
from openpype.client import get_asset_by_name, get_subsets
from openpype import style
-from openpype.api import get_current_project_settings
+from openpype.settings import get_current_project_settings
from openpype.tools.utils.lib import qt_app_context
from openpype.pipeline import legacy_io
from openpype.pipeline.create import (
diff --git a/openpype/tools/launcher/actions.py b/openpype/tools/launcher/actions.py
index 546bda1c34..b954110da4 100644
--- a/openpype/tools/launcher/actions.py
+++ b/openpype/tools/launcher/actions.py
@@ -4,8 +4,9 @@ from Qt import QtWidgets, QtGui
from openpype import PLUGINS_DIR
from openpype import style
-from openpype.api import Logger, resources
+from openpype.api import resources
from openpype.lib import (
+ Logger,
ApplictionExecutableNotFound,
ApplicationLaunchFailed
)
diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py
index c028aa4174..d37ce500e0 100644
--- a/openpype/tools/loader/widgets.py
+++ b/openpype/tools/loader/widgets.py
@@ -1256,7 +1256,11 @@ class RepresentationWidget(QtWidgets.QWidget):
repre_doc["parent"]
for repre_doc in repre_docs
]
- version_docs = get_versions(project_name, version_ids=version_ids)
+ version_docs = get_versions(
+ project_name,
+ version_ids=version_ids,
+ hero=True
+ )
version_docs_by_id = {}
version_docs_by_subset_id = collections.defaultdict(list)
diff --git a/openpype/tools/publisher/widgets/border_label_widget.py b/openpype/tools/publisher/widgets/border_label_widget.py
index 696a9050b8..8e09dd817e 100644
--- a/openpype/tools/publisher/widgets/border_label_widget.py
+++ b/openpype/tools/publisher/widgets/border_label_widget.py
@@ -158,8 +158,7 @@ class BorderedLabelWidget(QtWidgets.QFrame):
"""
def __init__(self, label, parent):
super(BorderedLabelWidget, self).__init__(parent)
- colors_data = get_objected_colors()
- color_value = colors_data.get("border")
+ color_value = get_objected_colors("border")
color = None
if color_value:
color = color_value.get_qcolor()
diff --git a/openpype/tools/publisher/widgets/list_view_widgets.py b/openpype/tools/publisher/widgets/list_view_widgets.py
index 6e31ba635b..a701181e5b 100644
--- a/openpype/tools/publisher/widgets/list_view_widgets.py
+++ b/openpype/tools/publisher/widgets/list_view_widgets.py
@@ -54,8 +54,7 @@ class ListItemDelegate(QtWidgets.QStyledItemDelegate):
def __init__(self, parent):
super(ListItemDelegate, self).__init__(parent)
- colors_data = get_objected_colors()
- group_color_info = colors_data["publisher"]["list-view-group"]
+ group_color_info = get_objected_colors("publisher", "list-view-group")
self._group_colors = {
key: value.get_qcolor()
diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py
index aa7e3be687..d1fa71343c 100644
--- a/openpype/tools/publisher/widgets/widgets.py
+++ b/openpype/tools/publisher/widgets/widgets.py
@@ -16,6 +16,7 @@ from openpype.tools.utils import (
BaseClickableFrame,
set_style_property,
)
+from openpype.style import get_objected_colors
from openpype.pipeline.create import (
SUBSET_NAME_ALLOWED_SYMBOLS,
TaskNotSetError,
@@ -125,28 +126,21 @@ class PublishIconBtn(IconButton):
def __init__(self, pixmap_path, *args, **kwargs):
super(PublishIconBtn, self).__init__(*args, **kwargs)
- loaded_image = QtGui.QImage(pixmap_path)
+ colors = get_objected_colors()
+ icon = self.generate_icon(
+ pixmap_path,
+ enabled_color=colors["font"].get_qcolor(),
+ disabled_color=colors["font-disabled"].get_qcolor())
+ self.setIcon(icon)
- pixmap = self.paint_image_with_color(loaded_image, QtCore.Qt.white)
-
- self._base_image = loaded_image
- self._enabled_icon = QtGui.QIcon(pixmap)
- self._disabled_icon = None
-
- self.setIcon(self._enabled_icon)
-
- def get_enabled_icon(self):
- """Enabled icon."""
- return self._enabled_icon
-
- def get_disabled_icon(self):
- """Disabled icon."""
- if self._disabled_icon is None:
- pixmap = self.paint_image_with_color(
- self._base_image, QtCore.Qt.gray
- )
- self._disabled_icon = QtGui.QIcon(pixmap)
- return self._disabled_icon
+ def generate_icon(self, pixmap_path, enabled_color, disabled_color):
+ icon = QtGui.QIcon()
+ image = QtGui.QImage(pixmap_path)
+ enabled_pixmap = self.paint_image_with_color(image, enabled_color)
+ icon.addPixmap(enabled_pixmap, icon.Normal)
+ disabled_pixmap = self.paint_image_with_color(image, disabled_color)
+ icon.addPixmap(disabled_pixmap, icon.Disabled)
+ return icon
@staticmethod
def paint_image_with_color(image, color):
@@ -187,13 +181,6 @@ class PublishIconBtn(IconButton):
return pixmap
- def setEnabled(self, enabled):
- super(PublishIconBtn, self).setEnabled(enabled)
- if self.isEnabled():
- self.setIcon(self.get_enabled_icon())
- else:
- self.setIcon(self.get_disabled_icon())
-
class ResetBtn(PublishIconBtn):
"""Publish reset button."""
diff --git a/openpype/tools/pyblish_pype/control.py b/openpype/tools/pyblish_pype/control.py
index 05e53a989a..90bb002ba5 100644
--- a/openpype/tools/pyblish_pype/control.py
+++ b/openpype/tools/pyblish_pype/control.py
@@ -22,7 +22,7 @@ import pyblish.version
from . import util
from .constants import InstanceStates
-from openpype.api import get_project_settings
+from openpype.settings import get_current_project_settings
class IterationBreak(Exception):
@@ -204,7 +204,7 @@ class Controller(QtCore.QObject):
def presets_by_hosts(self):
# Get global filters as base
- presets = get_project_settings(os.environ['AVALON_PROJECT']) or {}
+ presets = get_current_project_settings()
if not presets:
return {}
diff --git a/openpype/tools/pyblish_pype/model.py b/openpype/tools/pyblish_pype/model.py
index 1479d91bb5..383d8304a5 100644
--- a/openpype/tools/pyblish_pype/model.py
+++ b/openpype/tools/pyblish_pype/model.py
@@ -34,7 +34,7 @@ import qtawesome
from six import text_type
from .constants import PluginStates, InstanceStates, GroupStates, Roles
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
# ItemTypes
diff --git a/openpype/tools/sceneinventory/window.py b/openpype/tools/sceneinventory/window.py
index 578f47d1c0..8bac1beb30 100644
--- a/openpype/tools/sceneinventory/window.py
+++ b/openpype/tools/sceneinventory/window.py
@@ -40,8 +40,6 @@ class SceneInventoryWindow(QtWidgets.QDialog):
project_name = os.getenv("AVALON_PROJECT") or ""
self.setWindowTitle("Scene Inventory 1.0 - {}".format(project_name))
self.setObjectName("SceneInventory")
- # Maya only property
- self.setProperty("saveWindowPref", True)
self.resize(1100, 480)
diff --git a/openpype/tools/settings/local_settings/window.py b/openpype/tools/settings/local_settings/window.py
index 6a2db3fff5..76c2d851e9 100644
--- a/openpype/tools/settings/local_settings/window.py
+++ b/openpype/tools/settings/local_settings/window.py
@@ -1,19 +1,18 @@
-import logging
from Qt import QtWidgets, QtGui
from openpype import style
+from openpype.settings import (
+ SystemSettings,
+ ProjectSettings
+)
from openpype.settings.lib import (
get_local_settings,
save_local_settings
)
+from openpype.lib import Logger
from openpype.tools.settings import CHILD_OFFSET
from openpype.tools.utils import MessageOverlayObject
-from openpype.api import (
- Logger,
- SystemSettings,
- ProjectSettings
-)
from openpype.modules import ModulesManager
from .widgets import (
diff --git a/openpype/tools/settings/settings/widgets.py b/openpype/tools/settings/settings/widgets.py
index 1a4a6877b0..722717df89 100644
--- a/openpype/tools/settings/settings/widgets.py
+++ b/openpype/tools/settings/settings/widgets.py
@@ -323,7 +323,7 @@ class SettingsToolBtn(ImageButton):
@classmethod
def _get_icon_type(cls, btn_type):
if btn_type not in cls._cached_icons:
- settings_colors = get_objected_colors()["settings"]
+ settings_colors = get_objected_colors("settings")
normal_color = settings_colors["image-btn"].get_qcolor()
hover_color = settings_colors["image-btn-hover"].get_qcolor()
disabled_color = settings_colors["image-btn-disabled"].get_qcolor()
@@ -789,8 +789,7 @@ class ProjectModel(QtGui.QStandardItemModel):
self._items_by_name = {}
self._versions_by_project = {}
- colors = get_objected_colors()
- font_color = colors["font"].get_qcolor()
+ font_color = get_objected_colors("font").get_qcolor()
font_color.setAlpha(67)
self._version_font_color = font_color
self._current_version = get_openpype_version()
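
# A minimal sketch of the 'get_objected_colors(*keys)' call style used above,
# assuming the helper now resolves nested keys itself instead of returning the
# whole mapping for indexing (an assumption about its signature; the real
# helper in 'openpype.style' also loads and caches the style data and wraps
# values in color objects).
def get_objected_colors_sketch(colors_data, *keys):
    # 'colors_data' stands in for the loaded style color definitions.
    output = colors_data
    for key in keys:
        output = output[key]
    return output

# e.g. get_objected_colors_sketch(colors_data, "loader", "asset-view")
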
diff --git a/openpype/tools/standalonepublish/widgets/widget_components.py b/openpype/tools/standalonepublish/widgets/widget_components.py
index b3280089c3..237e1da583 100644
--- a/openpype/tools/standalonepublish/widgets/widget_components.py
+++ b/openpype/tools/standalonepublish/widgets/widget_components.py
@@ -6,9 +6,10 @@ import string
from Qt import QtWidgets, QtCore
-from openpype.api import execute, Logger
from openpype.pipeline import legacy_io
from openpype.lib import (
+ execute,
+ Logger,
get_openpype_execute_args,
apply_project_environments_value
)
diff --git a/openpype/tools/stdout_broker/app.py b/openpype/tools/stdout_broker/app.py
index a42d93dab4..f8dc2111aa 100644
--- a/openpype/tools/stdout_broker/app.py
+++ b/openpype/tools/stdout_broker/app.py
@@ -6,8 +6,8 @@ import websocket
import json
from datetime import datetime
+from openpype.lib import Logger
from openpype_modules.webserver.host_console_listener import MsgAction
-from openpype.api import Logger
log = Logger.get_logger(__name__)
diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py
index 348573a191..3842a4e216 100644
--- a/openpype/tools/tray/pype_tray.py
+++ b/openpype/tools/tray/pype_tray.py
@@ -144,8 +144,7 @@ class VersionUpdateDialog(QtWidgets.QDialog):
"gifts.png"
)
src_image = QtGui.QImage(image_path)
- colors = style.get_objected_colors()
- color_value = colors["font"]
+ color_value = style.get_objected_colors("font")
return paint_image_with_color(
src_image,
@@ -776,12 +775,26 @@ class PypeTrayStarter(QtCore.QObject):
def main():
log = Logger.get_logger(__name__)
app = QtWidgets.QApplication.instance()
+
+ high_dpi_scale_attr = None
if not app:
+ # 'AA_EnableHighDpiScaling' must be set before app instance creation
+ high_dpi_scale_attr = getattr(
+ QtCore.Qt, "AA_EnableHighDpiScaling", None
+ )
+ if high_dpi_scale_attr is not None:
+ QtWidgets.QApplication.setAttribute(high_dpi_scale_attr)
+
app = QtWidgets.QApplication([])
+ if high_dpi_scale_attr is None:
+ log.debug((
+ "Attribute 'AA_EnableHighDpiScaling' was not set."
+ " UI quality may be affected."
+ ))
+
for attr_name in (
- "AA_EnableHighDpiScaling",
- "AA_UseHighDpiPixmaps"
+ "AA_UseHighDpiPixmaps",
):
attr = getattr(QtCore.Qt, attr_name, None)
if attr is None:
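
# A minimal sketch of the pattern introduced above: 'AA_EnableHighDpiScaling'
# only takes effect when set before the QApplication is constructed, while
# 'AA_UseHighDpiPixmaps' can be set on an existing instance. Both lookups are
# guarded because older Qt builds may not define the attributes.
from Qt import QtWidgets, QtCore

def create_app_sketch():
    app = QtWidgets.QApplication.instance()
    if app is None:
        scaling_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
        if scaling_attr is not None:
            QtWidgets.QApplication.setAttribute(scaling_attr)
        app = QtWidgets.QApplication([])

    pixmaps_attr = getattr(QtCore.Qt, "AA_UseHighDpiPixmaps", None)
    if pixmaps_attr is not None:
        app.setAttribute(pixmaps_attr)
    return app
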
diff --git a/openpype/tools/utils/assets_widget.py b/openpype/tools/utils/assets_widget.py
index 772946e9e1..2a1fb4567c 100644
--- a/openpype/tools/utils/assets_widget.py
+++ b/openpype/tools/utils/assets_widget.py
@@ -114,7 +114,7 @@ class UnderlinesAssetDelegate(QtWidgets.QItemDelegate):
def __init__(self, *args, **kwargs):
super(UnderlinesAssetDelegate, self).__init__(*args, **kwargs)
- asset_view_colors = get_objected_colors()["loader"]["asset-view"]
+ asset_view_colors = get_objected_colors("loader", "asset-view")
self._selected_color = (
asset_view_colors["selected"].get_qcolor()
)
diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py
index d2f05d3302..552ce0d432 100644
--- a/openpype/tools/utils/host_tools.py
+++ b/openpype/tools/utils/host_tools.py
@@ -7,6 +7,7 @@ import os
import pyblish.api
from openpype.host import IWorkfileHost, ILoadHost
+from openpype.lib import Logger
from openpype.pipeline import (
registered_host,
legacy_io,
@@ -23,6 +24,7 @@ class HostToolsHelper:
Class may also contain tools that are available only for one or few hosts.
"""
+
def __init__(self, parent=None):
self._log = None
# Global parent for all tools (may and may not be set)
@@ -42,8 +44,6 @@ class HostToolsHelper:
@property
def log(self):
if self._log is None:
- from openpype.api import Logger
-
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py
index 97b680b77e..d8dd80046a 100644
--- a/openpype/tools/utils/lib.py
+++ b/openpype/tools/utils/lib.py
@@ -16,11 +16,8 @@ from openpype.style import (
get_objected_colors,
)
from openpype.resources import get_image_path
-from openpype.lib import filter_profiles
-from openpype.api import (
- get_project_settings,
- Logger
-)
+from openpype.lib import filter_profiles, Logger
+from openpype.settings import get_project_settings
from openpype.pipeline import registered_host
log = Logger.get_logger(__name__)
@@ -822,8 +819,6 @@ def get_warning_pixmap(color=None):
src_image_path = get_image_path("warning.png")
src_image = QtGui.QImage(src_image_path)
if color is None:
- colors = get_objected_colors()
- color_value = colors["delete-btn-bg"]
- color = color_value.get_qcolor()
+ color = get_objected_colors("delete-btn-bg").get_qcolor()
return paint_image_with_color(src_image, color)
diff --git a/openpype/tools/utils/overlay_messages.py b/openpype/tools/utils/overlay_messages.py
index 62de2cf272..cbcbb15621 100644
--- a/openpype/tools/utils/overlay_messages.py
+++ b/openpype/tools/utils/overlay_messages.py
@@ -14,8 +14,7 @@ class CloseButton(QtWidgets.QFrame):
def __init__(self, parent):
super(CloseButton, self).__init__(parent)
- colors = get_objected_colors()
- close_btn_color = colors["overlay-messages"]["close-btn"]
+ close_btn_color = get_objected_colors("overlay-messages", "close-btn")
self._color = close_btn_color.get_qcolor()
self._mouse_pressed = False
policy = QtWidgets.QSizePolicy(
diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py
index df0d349822..c8133b3359 100644
--- a/openpype/tools/utils/widgets.py
+++ b/openpype/tools/utils/widgets.py
@@ -40,7 +40,7 @@ class PlaceholderLineEdit(QtWidgets.QLineEdit):
# Change placeholder palette color
if hasattr(QtGui.QPalette, "PlaceholderText"):
filter_palette = self.palette()
- color_obj = get_objected_colors()["font"]
+ color_obj = get_objected_colors("font")
color = color_obj.get_qcolor()
color.setAlpha(67)
filter_palette.setColor(
diff --git a/openpype/tools/workfile_template_build/__init__.py b/openpype/tools/workfile_template_build/__init__.py
new file mode 100644
index 0000000000..82a22aea50
--- /dev/null
+++ b/openpype/tools/workfile_template_build/__init__.py
@@ -0,0 +1,5 @@
+from .window import WorkfileBuildPlaceholderDialog
+
+__all__ = (
+ "WorkfileBuildPlaceholderDialog",
+)
diff --git a/openpype/tools/workfile_template_build/window.py b/openpype/tools/workfile_template_build/window.py
new file mode 100644
index 0000000000..ea4e2fec5a
--- /dev/null
+++ b/openpype/tools/workfile_template_build/window.py
@@ -0,0 +1,242 @@
+from Qt import QtWidgets
+
+from openpype import style
+from openpype.lib import Logger
+from openpype.pipeline import legacy_io
+from openpype.widgets.attribute_defs import AttributeDefinitionsWidget
+
+
+class WorkfileBuildPlaceholderDialog(QtWidgets.QDialog):
+ def __init__(self, host, builder, parent=None):
+ super(WorkfileBuildPlaceholderDialog, self).__init__(parent)
+ self.setWindowTitle("Workfile Placeholder Manager")
+
+ self._log = None
+
+ self._first_show = True
+ self._first_refreshed = False
+
+ self._builder = builder
+ self._host = host
+ # Mode can be 0 (create) or 1 (update)
+ # TODO write it a little bit better
+ self._mode = 0
+ self._update_item = None
+ self._last_selected_plugin = None
+
+ host_name = getattr(self._host, "name", None)
+ if not host_name:
+ host_name = legacy_io.Session.get("AVALON_APP") or "NA"
+ self._host_name = host_name
+
+ plugins_combo = QtWidgets.QComboBox(self)
+
+ content_widget = QtWidgets.QWidget(self)
+ content_layout = QtWidgets.QVBoxLayout(content_widget)
+ content_layout.setContentsMargins(0, 0, 0, 0)
+
+ btns_widget = QtWidgets.QWidget(self)
+ create_btn = QtWidgets.QPushButton("Create", btns_widget)
+ save_btn = QtWidgets.QPushButton("Save", btns_widget)
+ close_btn = QtWidgets.QPushButton("Close", btns_widget)
+
+ create_btn.setVisible(False)
+ save_btn.setVisible(False)
+
+ btns_layout = QtWidgets.QHBoxLayout(btns_widget)
+ btns_layout.addStretch(1)
+ btns_layout.addWidget(create_btn, 0)
+ btns_layout.addWidget(save_btn, 0)
+ btns_layout.addWidget(close_btn, 0)
+
+ main_layout = QtWidgets.QVBoxLayout(self)
+ main_layout.addWidget(plugins_combo, 0)
+ main_layout.addWidget(content_widget, 1)
+ main_layout.addWidget(btns_widget, 0)
+
+ create_btn.clicked.connect(self._on_create_click)
+ save_btn.clicked.connect(self._on_save_click)
+ close_btn.clicked.connect(self._on_close_click)
+ plugins_combo.currentIndexChanged.connect(self._on_plugin_change)
+
+ self._attr_defs_widget = None
+ self._plugins_combo = plugins_combo
+
+ self._content_widget = content_widget
+ self._content_layout = content_layout
+
+ self._create_btn = create_btn
+ self._save_btn = save_btn
+ self._close_btn = close_btn
+
+ @property
+ def log(self):
+ if self._log is None:
+ self._log = Logger.get_logger(self.__class__.__name__)
+ return self._log
+
+ def _clear_content_widget(self):
+ while self._content_layout.count() > 0:
+ item = self._content_layout.takeAt(0)
+ widget = item.widget()
+ if widget:
+ widget.setVisible(False)
+ widget.deleteLater()
+
+ def _add_message_to_content(self, message):
+ msg_label = QtWidgets.QLabel(message, self._content_widget)
+ self._content_layout.addWidget(msg_label, 0)
+ self._content_layout.addStretch(1)
+
+ def refresh(self):
+ self._first_refreshed = True
+
+ self._clear_content_widget()
+
+ if not self._builder:
+ self._add_message_to_content((
+ "Host \"{}\" does not have implemented logic"
+ " for template workfile build."
+ ).format(self._host_name))
+ self._update_ui_visibility()
+ return
+
+ placeholder_plugins = self._builder.placeholder_plugins
+
+ if self._mode == 1:
+ plugin = self._builder.placeholder_plugins.get(
+ self._last_selected_plugin
+ )
+ self._create_option_widgets(
+ plugin, self._update_item.to_dict()
+ )
+ self._update_ui_visibility()
+ return
+
+ if not placeholder_plugins:
+ self._add_message_to_content((
+ "Host \"{}\" does not have implemented plugins"
+ " for template workfile build."
+ ).format(self._host_name))
+ self._update_ui_visibility()
+ return
+
+ last_selected_plugin = self._last_selected_plugin
+ self._last_selected_plugin = None
+ self._plugins_combo.clear()
+ for identifier, plugin in placeholder_plugins.items():
+ label = plugin.label or identifier
+ self._plugins_combo.addItem(label, identifier)
+
+ index = self._plugins_combo.findData(last_selected_plugin)
+ if index < 0:
+ index = 0
+ self._plugins_combo.setCurrentIndex(index)
+ self._on_plugin_change()
+
+ self._update_ui_visibility()
+
+ def set_create_mode(self):
+ if self._mode == 0:
+ return
+
+ self._mode = 0
+ self._update_item = None
+ self.refresh()
+
+ def set_update_mode(self, update_item):
+ if self._mode == 1:
+ return
+
+ self._mode = 1
+ self._update_item = update_item
+ if update_item:
+ self._last_selected_plugin = update_item.plugin.identifier
+ self.refresh()
+ return
+
+ self._clear_content_widget()
+ self._add_message_to_content((
+ "Nothing to update."
+ " (You maybe don't have selected placeholder.)"
+ ))
+ self._update_ui_visibility()
+
+ def _create_option_widgets(self, plugin, options=None):
+ self._clear_content_widget()
+ attr_defs = plugin.get_placeholder_options(options)
+ widget = AttributeDefinitionsWidget(attr_defs, self._content_widget)
+ self._content_layout.addWidget(widget, 0)
+ self._content_layout.addStretch(1)
+ self._attr_defs_widget = widget
+ self._last_selected_plugin = plugin.identifier
+
+ def _update_ui_visibility(self):
+ create_mode = self._mode == 0
+ self._plugins_combo.setVisible(create_mode)
+
+ if not self._builder:
+ self._save_btn.setVisible(False)
+ self._create_btn.setVisible(False)
+ return
+
+ save_enabled = not create_mode
+ if save_enabled:
+ save_enabled = self._update_item is not None
+ self._save_btn.setVisible(save_enabled)
+ self._create_btn.setVisible(create_mode)
+
+ def _on_plugin_change(self):
+ index = self._plugins_combo.currentIndex()
+ plugin_identifier = self._plugins_combo.itemData(index)
+ if plugin_identifier == self._last_selected_plugin:
+ return
+
+ plugin = self._builder.placeholder_plugins.get(plugin_identifier)
+ self._create_option_widgets(plugin)
+
+ def _on_save_click(self):
+ options = self._attr_defs_widget.current_value()
+ plugin = self._builder.placeholder_plugins.get(
+ self._last_selected_plugin
+ )
+ # TODO much better error handling
+ try:
+ plugin.update_placeholder(self._update_item, options)
+ self.accept()
+ except Exception:
+ self.log.warning("Something went wrong", exc_info=True)
+ dialog = QtWidgets.QMessageBox(self)
+ dialog.setWindowTitle("Something went wrong")
+ dialog.setText("Something went wrong")
+ dialog.exec_()
+
+ def _on_create_click(self):
+ options = self._attr_defs_widget.current_value()
+ plugin = self._builder.placeholder_plugins.get(
+ self._last_selected_plugin
+ )
+ # TODO much better error handling
+ try:
+ plugin.create_placeholder(options)
+ self.accept()
+ except Exception:
+ self.log.warning("Something went wrong", exc_info=True)
+ dialog = QtWidgets.QMessageBox(self)
+ dialog.setWindowTitle("Something went wrong")
+ dialog.setText("Something went wrong")
+ dialog.exec_()
+
+ def _on_close_click(self):
+ self.reject()
+
+ def showEvent(self, event):
+ super(WorkfileBuildPlaceholderDialog, self).showEvent(event)
+ if not self._first_refreshed:
+ self.refresh()
+
+ if self._first_show:
+ self._first_show = False
+ self.setStyleSheet(style.load_stylesheet())
+ self.resize(390, 450)
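
# A hedged usage sketch for the dialog above. The 'builder' argument is an
# assumption: the host is expected to pass its workfile template build object,
# whose 'placeholder_plugins' mapping the dialog reads.
from openpype.pipeline import registered_host
from openpype.tools.workfile_template_build import (
    WorkfileBuildPlaceholderDialog
)

def show_placeholder_dialog_sketch(builder, parent=None):
    host = registered_host()
    dialog = WorkfileBuildPlaceholderDialog(host, builder, parent)
    dialog.show()
    return dialog
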
diff --git a/openpype/version.py b/openpype/version.py
index 26b145f1db..3a0c538daf 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.14.3-nightly.3"
+__version__ = "3.14.4-nightly.3"
diff --git a/openpype/widgets/attribute_defs/files_widget.py b/openpype/widgets/attribute_defs/files_widget.py
index d29aa1b607..3f1e6a34e1 100644
--- a/openpype/widgets/attribute_defs/files_widget.py
+++ b/openpype/widgets/attribute_defs/files_widget.py
@@ -138,11 +138,13 @@ class DropEmpty(QtWidgets.QWidget):
allowed_items = [item + "s" for item in allowed_items]
if not allowed_items:
+ self._drop_label_widget.setVisible(False)
self._items_label_widget.setText(
"It is not allowed to add anything here!"
)
return
+ self._drop_label_widget.setVisible(True)
items_label = "Multiple "
if self._single_item:
items_label = "Single "
@@ -235,10 +237,41 @@ class FilesModel(QtGui.QStandardItemModel):
self._filenames_by_dirpath = collections.defaultdict(set)
self._items_by_dirpath = collections.defaultdict(list)
+ self.rowsAboutToBeRemoved.connect(self._on_about_to_be_removed)
+ self.rowsInserted.connect(self._on_insert)
+
@property
def id(self):
return self._id
+ def _on_about_to_be_removed(self, parent_index, start, end):
+ """Make sure that removed items are removed from items mapping.
+
+ Connected with '_on_insert'. When user drag item and drop it to same
+ view the item is actually removed and creted again but it happens in
+ inner calls of Qt.
+ """
+
+ for row in range(start, end + 1):
+ index = self.index(row, 0, parent_index)
+ item_id = index.data(ITEM_ID_ROLE)
+ if item_id is not None:
+ self._items_by_id.pop(item_id, None)
+
+ def _on_insert(self, parent_index, start, end):
+ """Make sure new added items are stored in items mapping.
+
+ Connected to '_on_about_to_be_removed'. Some items are not created
+ using '_create_item' but are recreated using Qt. So the item is not in
+ mapping and if it would it would not lead to same item pointer.
+ """
+
+ for row in range(start, end + 1):
+            index = self.index(row, 0, parent_index)
+ item_id = index.data(ITEM_ID_ROLE)
+ if item_id not in self._items_by_id:
+ self._items_by_id[item_id] = self.item(row)
+
def set_multivalue(self, multivalue):
"""Disable filtering."""
@@ -250,6 +283,15 @@ class FilesModel(QtGui.QStandardItemModel):
if not items:
return
+ if self._multivalue:
+ _items = []
+ for item in items:
+ if isinstance(item, (tuple, list, set)):
+ _items.extend(item)
+ else:
+ _items.append(item)
+ items = _items
+
file_items = FileDefItem.from_value(items, self._allow_sequences)
if not file_items:
return
@@ -352,6 +394,10 @@ class FilesModel(QtGui.QStandardItemModel):
src_item_id = index.data(ITEM_ID_ROLE)
src_item = self._items_by_id.get(src_item_id)
+ src_row = None
+ if src_item:
+ src_row = src_item.row()
+
# Take out items that should be moved
items = []
for item_id in item_ids:
@@ -365,10 +411,12 @@ class FilesModel(QtGui.QStandardItemModel):
return False
# Calculate row where items should be inserted
- if src_item:
- src_row = src_item.row()
- else:
- src_row = root.rowCount()
+ row_count = root.rowCount()
+ if src_row is None:
+ src_row = row_count
+
+ if src_row > row_count:
+ src_row = row_count
root.insertRow(src_row, items)
return True
@@ -576,6 +624,7 @@ class FilesView(QtWidgets.QListView):
self.customContextMenuRequested.connect(self._on_context_menu_request)
self._remove_btn = remove_btn
+ self._multivalue = False
def setSelectionModel(self, *args, **kwargs):
"""Catch selection model set to register signal callback.
@@ -590,8 +639,16 @@ class FilesView(QtWidgets.QListView):
def set_multivalue(self, multivalue):
"""Disable remove button on multivalue."""
+ self._multivalue = multivalue
self._remove_btn.setVisible(not multivalue)
+ def update_remove_btn_visibility(self):
+ model = self.model()
+ visible = False
+ if not self._multivalue and model:
+ visible = model.rowCount() > 0
+ self._remove_btn.setVisible(visible)
+
def has_selected_item_ids(self):
"""Is any index selected."""
for index in self.selectionModel().selectedIndexes():
@@ -655,6 +712,7 @@ class FilesView(QtWidgets.QListView):
def showEvent(self, event):
super(FilesView, self).showEvent(event)
self._update_remove_btn()
+ self.update_remove_btn_visibility()
class FilesWidget(QtWidgets.QFrame):
@@ -673,12 +731,13 @@ class FilesWidget(QtWidgets.QFrame):
files_proxy_model.setSourceModel(files_model)
files_view = FilesView(self)
files_view.setModel(files_proxy_model)
- files_view.setVisible(False)
- layout = QtWidgets.QHBoxLayout(self)
+ layout = QtWidgets.QStackedLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
- layout.addWidget(empty_widget, 1)
- layout.addWidget(files_view, 1)
+ layout.setStackingMode(layout.StackAll)
+ layout.addWidget(empty_widget)
+ layout.addWidget(files_view)
+ layout.setCurrentWidget(empty_widget)
files_proxy_model.rowsInserted.connect(self._on_rows_inserted)
files_proxy_model.rowsRemoved.connect(self._on_rows_removed)
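
# A minimal sketch of the QStackedLayout pattern adopted above: both pages
# stay in a single layout and switching is one 'setCurrentWidget' call, which
# replaces the earlier show/hide toggling and the custom 'sizeHint' override
# removed further below. Widget names here are illustrative; the 'StackAll'
# stacking mode follows the code above.
from Qt import QtWidgets

class StackedPagesSketch(QtWidgets.QFrame):
    def __init__(self, parent=None):
        super(StackedPagesSketch, self).__init__(parent)
        empty_label = QtWidgets.QLabel("Drop files here", self)
        files_view = QtWidgets.QListView(self)

        layout = QtWidgets.QStackedLayout(self)
        layout.setContentsMargins(0, 0, 0, 0)
        layout.setStackingMode(layout.StackAll)
        layout.addWidget(empty_label)
        layout.addWidget(files_view)
        layout.setCurrentWidget(empty_label)

        self._layout = layout
        self._empty_label = empty_label
        self._files_view = files_view

    def set_has_files(self, has_files):
        # Switch the visible page based on whether any files are present.
        widget = self._files_view if has_files else self._empty_label
        self._layout.setCurrentWidget(widget)
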
@@ -698,13 +757,16 @@ class FilesWidget(QtWidgets.QFrame):
self._widgets_by_id = {}
+ self._layout = layout
+
def _set_multivalue(self, multivalue):
- if self._multivalue == multivalue:
+ if self._multivalue is multivalue:
return
self._multivalue = multivalue
self._files_view.set_multivalue(multivalue)
self._files_model.set_multivalue(multivalue)
self._files_proxy_model.set_multivalue(multivalue)
+ self.setEnabled(not multivalue)
def set_value(self, value, multivalue):
self._in_set_value = True
@@ -774,6 +836,8 @@ class FilesWidget(QtWidgets.QFrame):
if not self._in_set_value:
self.value_changed.emit()
+ self._update_visibility()
+
def _on_rows_removed(self, parent_index, start_row, end_row):
available_item_ids = set()
for row in range(self._files_proxy_model.rowCount()):
@@ -793,6 +857,7 @@ class FilesWidget(QtWidgets.QFrame):
if not self._in_set_value:
self.value_changed.emit()
+ self._update_visibility()
def _on_split_request(self):
if self._multivalue:
@@ -836,29 +901,6 @@ class FilesWidget(QtWidgets.QFrame):
menu.popup(pos)
- def sizeHint(self):
- # Get size hints of widget and visible widgets
- result = super(FilesWidget, self).sizeHint()
- if not self._files_view.isVisible():
- not_visible_hint = self._files_view.sizeHint()
- else:
- not_visible_hint = self._empty_widget.sizeHint()
-
- # Get margins of this widget
- margins = self.layout().contentsMargins()
-
- # Change size hint based on result of maximum size hint of widgets
- result.setWidth(max(
- result.width(),
- not_visible_hint.width() + margins.left() + margins.right()
- ))
- result.setHeight(max(
- result.height(),
- not_visible_hint.height() + margins.top() + margins.bottom()
- ))
-
- return result
-
def dragEnterEvent(self, event):
if self._multivalue:
return
@@ -890,7 +932,6 @@ class FilesWidget(QtWidgets.QFrame):
mime_data = event.mimeData()
if mime_data.hasUrls():
event.accept()
- # event.setDropAction(QtCore.Qt.CopyAction)
filepaths = []
for url in mime_data.urls():
filepath = url.toLocalFile()
@@ -956,13 +997,15 @@ class FilesWidget(QtWidgets.QFrame):
def _add_filepaths(self, filepaths):
self._files_model.add_filepaths(filepaths)
- self._update_visibility()
def _remove_item_by_ids(self, item_ids):
self._files_model.remove_item_by_ids(item_ids)
- self._update_visibility()
def _update_visibility(self):
files_exists = self._files_proxy_model.rowCount() > 0
- self._files_view.setVisible(files_exists)
- self._empty_widget.setVisible(not files_exists)
+ if files_exists:
+ current_widget = self._files_view
+ else:
+ current_widget = self._empty_widget
+ self._layout.setCurrentWidget(current_widget)
+ self._files_view.update_remove_btn_visibility()
diff --git a/openpype/widgets/attribute_defs/widgets.py b/openpype/widgets/attribute_defs/widgets.py
index 60ae952553..dc697b08a6 100644
--- a/openpype/widgets/attribute_defs/widgets.py
+++ b/openpype/widgets/attribute_defs/widgets.py
@@ -108,10 +108,12 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
row = 0
for attr_def in attr_defs:
- if attr_def.key in self._current_keys:
- raise KeyError("Duplicated key \"{}\"".format(attr_def.key))
+ if not isinstance(attr_def, UIDef):
+ if attr_def.key in self._current_keys:
+ raise KeyError(
+ "Duplicated key \"{}\"".format(attr_def.key))
- self._current_keys.add(attr_def.key)
+ self._current_keys.add(attr_def.key)
widget = create_widget_for_attr_def(attr_def, self)
expand_cols = 2
diff --git a/openpype/widgets/nice_checkbox.py b/openpype/widgets/nice_checkbox.py
index 56e6d2ac24..334a5d197b 100644
--- a/openpype/widgets/nice_checkbox.py
+++ b/openpype/widgets/nice_checkbox.py
@@ -66,8 +66,7 @@ class NiceCheckbox(QtWidgets.QFrame):
if cls._checked_bg_color is not None:
return
- colors_data = get_objected_colors()
- colors_info = colors_data["nice-checkbox"]
+ colors_info = get_objected_colors("nice-checkbox")
cls._checked_bg_color = colors_info["bg-checked"].get_qcolor()
cls._unchecked_bg_color = colors_info["bg-unchecked"].get_qcolor()
diff --git a/openpype/widgets/password_dialog.py b/openpype/widgets/password_dialog.py
index 9990642ca1..58add7832f 100644
--- a/openpype/widgets/password_dialog.py
+++ b/openpype/widgets/password_dialog.py
@@ -3,7 +3,7 @@ from Qt import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.resources import get_resource
-from openpype.api import get_system_settings
+from openpype.settings import get_system_settings
from openpype.settings.lib import (
get_local_settings,
save_local_settings
diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py
index 85121f0d73..78a9f81095 100644
--- a/tests/lib/testing_classes.py
+++ b/tests/lib/testing_classes.py
@@ -10,7 +10,7 @@ import glob
import platform
from tests.lib.db_handler import DBHandler
-from distribution.file_handler import RemoteFileHandler
+from common.openpype_common.distribution.file_handler import RemoteFileHandler
class BaseTest: