Mirror of https://github.com/ynput/ayon-core.git, synced 2025-12-24 21:04:40 +01:00

Merge branch 'develop' into feature/OP-3835_Move-publish-utils-to-pipeline

This commit is contained in commit cd4b5ce227.
113 changed files with 1909 additions and 982 deletions
73 CHANGELOG.md

@@ -1,30 +1,61 @@
# Changelog

## [3.14.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.0...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.0...3.14.1)

### 📖 Documentation

- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)

**🆕 New features**

- Webpublisher:change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)

**🚀 Enhancements**

- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
- Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- Ftrack: More logs related to auto sync value change [\#3671](https://github.com/pypeclub/OpenPype/pull/3671)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)

**🐛 Bug fixes**

- Maya: Fix typo in getPanel argument `with\_focus` -\> `withFocus` [\#3753](https://github.com/pypeclub/OpenPype/pull/3753)
- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748)
- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737)
- Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721)
- Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716)
- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709)
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- RoyalRender: handle host name that is not set [\#3695](https://github.com/pypeclub/OpenPype/pull/3695)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)

**🔀 Refactored code**

- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751)
- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
- AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730)
- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729)
- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
- General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723)
- General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714)
- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713)
- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710)
- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707)
- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706)

@@ -33,6 +64,7 @@

**Merged pull requests:**

- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)

@@ -40,10 +72,6 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)

**🆕 New features**

- Maya: Build workfile by template [\#3578](https://github.com/pypeclub/OpenPype/pull/3578)

**🚀 Enhancements**

- Ftrack: Addiotional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)

@@ -69,7 +97,6 @@

- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
- General: Workfiles builder using query functions [\#3598](https://github.com/pypeclub/OpenPype/pull/3598)

**Merged pull requests:**

@@ -82,21 +109,11 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)

**🆕 New features**

- Support for mutliple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605)

**🚀 Enhancements**

- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615)
- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612)
- General: Add context to pyblish context [\#3594](https://github.com/pypeclub/OpenPype/pull/3594)
- Kitsu: Shot&Sequence name with prefix over appends [\#3593](https://github.com/pypeclub/OpenPype/pull/3593)
- Photoshop: implemented {layer} placeholder in subset template [\#3591](https://github.com/pypeclub/OpenPype/pull/3591)
- General: Python module appdirs from git [\#3589](https://github.com/pypeclub/OpenPype/pull/3589)
- Ftrack: Update ftrack api to 2.3.3 [\#3588](https://github.com/pypeclub/OpenPype/pull/3588)
- General: New Integrator small fixes [\#3583](https://github.com/pypeclub/OpenPype/pull/3583)

**🐛 Bug fixes**

@@ -106,38 +123,20 @@

- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620)
- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617)
- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611)
- AfterEffects: refactored integrate doesnt work formulti frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610)
- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607)
- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
- Bugfix: Add OCIO as submodule to prepare for handling `maketx` color space conversion. [\#3590](https://github.com/pypeclub/OpenPype/pull/3590)
- Fix general settings environment variables resolution [\#3587](https://github.com/pypeclub/OpenPype/pull/3587)
- Editorial publishing workflow improvements [\#3580](https://github.com/pypeclub/OpenPype/pull/3580)
- General: Update imports in start script [\#3579](https://github.com/pypeclub/OpenPype/pull/3579)
- Nuke: render family integration consistency [\#3576](https://github.com/pypeclub/OpenPype/pull/3576)
- Ftrack: Handle missing published path in integrator [\#3570](https://github.com/pypeclub/OpenPype/pull/3570)

**🔀 Refactored code**

- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601)
- General: Use query functions in general code [\#3596](https://github.com/pypeclub/OpenPype/pull/3596)
- General: Separate extraction of template data into more functions [\#3574](https://github.com/pypeclub/OpenPype/pull/3574)
- General: Lib cleanup [\#3571](https://github.com/pypeclub/OpenPype/pull/3571)

**Merged pull requests:**

- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619)
- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614)
- Enable write color sets on animation publish automatically [\#3582](https://github.com/pypeclub/OpenPype/pull/3582)

## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)

**🐛 Bug fixes**

- Maya: fix Review image plane attribute [\#3569](https://github.com/pypeclub/OpenPype/pull/3569)

## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
@@ -24,6 +24,7 @@ CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0"


def _create_or_convert_to_mongo_id(mongo_id):

@@ -195,6 +196,29 @@ def new_representation_doc(
    }


def new_thumbnail_doc(data=None, entity_id=None):
    """Create skeleton data of thumbnail document.

    Args:
        data (Dict[str, Any]): Thumbnail document data.
        entity_id (Union[str, ObjectId]): Predefined id of document. New id is
            created if not passed.

    Returns:
        Dict[str, Any]: Skeleton of thumbnail document.
    """

    if data is None:
        data = {}

    return {
        "_id": _create_or_convert_to_mongo_id(entity_id),
        "type": "thumbnail",
        "schema": CURRENT_THUMBNAIL_SCHEMA,
        "data": data
    }


def new_workfile_info_doc(
    filename, asset_id, task_name, files, data=None, entity_id=None
):
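A minimal usage sketch of the `new_thumbnail_doc` helper added above, based only on the signature and return value visible in this hunk; the import path is an assumption for illustration.

```python
# Hypothetical usage of new_thumbnail_doc(); the import path is an assumption.
from openpype.client import new_thumbnail_doc

# Without entity_id a fresh id is created; data defaults to an empty dict.
thumbnail_doc = new_thumbnail_doc()

assert thumbnail_doc["type"] == "thumbnail"
assert thumbnail_doc["schema"] == "openpype:thumbnail-1.0"
assert thumbnail_doc["data"] == {}
print(thumbnail_doc["_id"])  # newly generated id
```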
@@ -1,6 +1,6 @@
from .module import AfterEffectsModule
from .addon import AfterEffectsAddon


__all__ = (
    "AfterEffectsModule",
    "AfterEffectsAddon",
)

@@ -1,8 +1,8 @@
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
from openpype.modules.interfaces import IHostAddon


class AfterEffectsModule(OpenPypeModule, IHostModule):
class AfterEffectsAddon(OpenPypeModule, IHostAddon):
    name = "aftereffects"
    host_name = "aftereffects"
@@ -1,5 +1,7 @@
import os
import sys
import re
import json
import contextlib
import traceback
import logging

@@ -77,3 +79,57 @@ def get_extension_manifest_path():
        "CSXS",
        "manifest.xml"
    )


def get_unique_layer_name(layers, name):
    """
    Gets all layer names and, if 'name' is present in them, increases the
    suffix by 1 (eg. creates a unique layer name - for Loader).
    Args:
        layers (list): of strings, names only
        name (string): checked value

    Returns:
        (string): name_00X (without version)
    """
    names = {}
    for layer in layers:
        layer_name = re.sub(r'_\d{3}$', '', layer)
        if layer_name in names.keys():
            names[layer_name] = names[layer_name] + 1
        else:
            names[layer_name] = 1
    occurrences = names.get(name, 0)

    return "{}_{:0>3d}".format(name, occurrences + 1)


def get_background_layers(file_url):
    """
    Pulls file names from the background json file and enriches them with
    the folder url so AE is able to import the files.

    Order is important, follows order in json.

    Args:
        file_url (str): abs url of background json

    Returns:
        (list): of abs paths to images
    """
    with open(file_url) as json_file:
        data = json.load(json_file)

    layers = list()
    bg_folder = os.path.dirname(file_url)
    for child in data['children']:
        if child.get("filename"):
            layers.append(os.path.join(bg_folder, child.get("filename")).
                          replace("\\", "/"))
        else:
            for layer in child['children']:
                if layer.get("filename"):
                    layers.append(os.path.join(bg_folder,
                                               layer.get("filename")).
                                  replace("\\", "/"))
    return layers
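To illustrate the suffix logic of `get_unique_layer_name` shown above, a small sketch with made-up layer names:

```python
# Illustration only; the layer names are invented for the example.
existing = ["imageMain_001", "imageMain_002", "imageBg_001"]

# Two layers already share the "imageMain" base, so the next suffix is 003.
print(get_unique_layer_name(existing, "imageMain"))  # imageMain_003

# An unused base name starts at 001.
print(get_unique_layer_name(existing, "imageFg"))    # imageFg_001
```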
@@ -1,14 +1,14 @@
import re

from openpype.lib import (
    get_background_layers,
    get_unique_layer_name
)
from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
    AfterEffectsLoader,
    containerise
)
from openpype.hosts.aftereffects.api.lib import (
    get_background_layers,
    get_unique_layer_name,
)


class BackgroundLoader(AfterEffectsLoader):
@@ -1,12 +1,11 @@
import re

from openpype import lib

from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
    AfterEffectsLoader,
    containerise
)
from openpype.hosts.aftereffects.api.lib import get_unique_layer_name


class FileLoader(AfterEffectsLoader):

@@ -28,7 +27,7 @@ class FileLoader(AfterEffectsLoader):
        stub = self.get_stub()
        layers = stub.get_items(comps=True, folders=True, footages=True)
        existing_layers = [layer.name for layer in layers]
        comp_name = lib.get_unique_layer_name(
        comp_name = get_unique_layer_name(
            existing_layers, "{}_{}".format(context["asset"]["name"], name))

        import_options = {}

@@ -87,7 +86,7 @@ class FileLoader(AfterEffectsLoader):
        if namespace_from_container != layer_name:
            layers = stub.get_items(comps=True)
            existing_layers = [layer.name for layer in layers]
            layer_name = lib.get_unique_layer_name(
            layer_name = get_unique_layer_name(
                existing_layers,
                "{}_{}".format(context["asset"], context["subset"]))
        else:  # switching version - keep same name
@@ -1,8 +1,8 @@
import os

import pyblish.api
from openpype.lib import get_subset_name_with_asset_doc
from openpype.pipeline import legacy_io
from openpype.pipeline.create import get_subset_name


class CollectWorkfile(pyblish.api.ContextPlugin):

@@ -71,13 +71,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):

        # workfile instance
        family = "workfile"
        subset = get_subset_name_with_asset_doc(
        subset = get_subset_name(
            family,
            self.default_variant,
            context.data["anatomyData"]["task"]["name"],
            context.data["assetEntity"],
            context.data["anatomyData"]["project"]["name"],
            host_name=context.data["hostName"]
            host_name=context.data["hostName"],
            project_settings=context.data["project_settings"]
        )
        # Create instance
        instance = context.create_instance(subset)
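The same migration, `get_subset_name_with_asset_doc` from `openpype.lib` to `get_subset_name` from `openpype.pipeline.create` with an explicit `project_settings` argument, repeats across most collectors in this commit. A condensed sketch of the call pattern, mirroring the arguments used above rather than documenting the canonical signature:

```python
# Sketch of the new call pattern; argument order mirrors the hunk above.
from openpype.pipeline.create import get_subset_name

def collect_workfile_subset(context, default_variant):
    """Build a workfile subset name the way the collector above does."""
    return get_subset_name(
        "workfile",                                      # family
        default_variant,                                 # variant
        context.data["anatomyData"]["task"]["name"],     # task name
        context.data["assetEntity"],                     # asset document
        context.data["anatomyData"]["project"]["name"],  # project name
        host_name=context.data["hostName"],
        project_settings=context.data["project_settings"],
    )
```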
@@ -1,6 +1,6 @@
from .module import BlenderModule
from .addon import BlenderAddon


__all__ = (
    "BlenderModule",
    "BlenderAddon",
)

@@ -1,11 +1,11 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
from openpype.modules.interfaces import IHostAddon

BLENDER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


class BlenderModule(OpenPypeModule, IHostModule):
class BlenderAddon(OpenPypeModule, IHostAddon):
    name = "blender"
    host_name = "blender"
@@ -234,7 +234,7 @@ def lsattrs(attrs: Dict) -> List:
def read(node: bpy.types.bpy_struct_meta_idprop):
    """Return user-defined attributes from `node`"""

    data = dict(node.get(pipeline.AVALON_PROPERTY))
    data = dict(node.get(pipeline.AVALON_PROPERTY, {}))

    # Ignore hidden/internal data
    data = {
@@ -26,7 +26,7 @@ PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and doesn't slow
# down Blender. At least on macOS the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01
TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1


class BlenderApplication(QtWidgets.QApplication):

@@ -164,6 +164,12 @@ def _process_app_events() -> Optional[float]:
            dialog.setDetailedText(detail)
            dialog.exec_()

    # Refresh Manager
    if GlobalClass.app:
        manager = GlobalClass.app.get_window("WM_OT_avalon_manager")
        if manager:
            manager.refresh()

    if not GlobalClass.is_windows:
        if OpenFileCacher.opening_file:
            return TIMER_INTERVAL

@@ -192,10 +198,11 @@ class LaunchQtApp(bpy.types.Operator):
        self._app = BlenderApplication.get_app()
        GlobalClass.app = self._app

        bpy.app.timers.register(
            _process_app_events,
            persistent=True
        )
        if not bpy.app.timers.is_registered(_process_app_events):
            bpy.app.timers.register(
                _process_app_events,
                persistent=True
            )

    def execute(self, context):
        """Execute the operator.
@@ -1,4 +1,10 @@
from openpype.pipeline import install_host
from openpype.hosts.blender import api

install_host(api)

def register():
    install_host(api)


def unregister():
    pass
@ -1,8 +1,9 @@
|
|||
from typing import List
|
||||
|
||||
import mathutils
|
||||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import openpype.hosts.blender.api.action
|
||||
from openpype.pipeline.publish import ValidateContentsOrder
|
||||
|
||||
|
|
@ -18,18 +19,15 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
|
|||
order = ValidateContentsOrder
|
||||
hosts = ["blender"]
|
||||
families = ["camera"]
|
||||
category = "geometry"
|
||||
version = (0, 1, 0)
|
||||
label = "Zero Keyframe"
|
||||
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
|
||||
|
||||
_identity = mathutils.Matrix()
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance) -> List:
|
||||
@staticmethod
|
||||
def get_invalid(instance) -> List:
|
||||
invalid = []
|
||||
for obj in [obj for obj in instance]:
|
||||
if obj.type == "CAMERA":
|
||||
for obj in instance:
|
||||
if isinstance(obj, bpy.types.Object) and obj.type == "CAMERA":
|
||||
if obj.animation_data and obj.animation_data.action:
|
||||
action = obj.animation_data.action
|
||||
frames_set = set()
|
||||
|
|
@ -46,4 +44,5 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
|
|||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
f"Object found in instance is not in Object Mode: {invalid}")
|
||||
f"Camera must have a keyframe at frame 0: {invalid}"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -3,13 +3,14 @@ from typing import List
|
|||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import openpype.hosts.blender.api.action
|
||||
|
||||
|
||||
class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
|
||||
"""Validate that the current mesh has UV's."""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
order = openpype.api.ValidateContentsOrder
|
||||
hosts = ["blender"]
|
||||
families = ["model"]
|
||||
category = "geometry"
|
||||
|
|
@ -25,7 +26,10 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
|
|||
for uv_layer in obj.data.uv_layers:
|
||||
for polygon in obj.data.polygons:
|
||||
for loop_index in polygon.loop_indices:
|
||||
if not uv_layer.data[loop_index].uv:
|
||||
if (
|
||||
loop_index >= len(uv_layer.data)
|
||||
or not uv_layer.data[loop_index].uv
|
||||
):
|
||||
return False
|
||||
|
||||
return True
|
||||
|
|
@ -33,20 +37,20 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
|
|||
@classmethod
|
||||
def get_invalid(cls, instance) -> List:
|
||||
invalid = []
|
||||
# TODO (jasper): only check objects in the collection that will be published?
|
||||
for obj in [
|
||||
obj for obj in instance]:
|
||||
try:
|
||||
if obj.type == 'MESH':
|
||||
# Make sure we are in object mode.
|
||||
bpy.ops.object.mode_set(mode='OBJECT')
|
||||
if not cls.has_uvs(obj):
|
||||
invalid.append(obj)
|
||||
except:
|
||||
continue
|
||||
for obj in instance:
|
||||
if isinstance(obj, bpy.types.Object) and obj.type == 'MESH':
|
||||
if obj.mode != "OBJECT":
|
||||
cls.log.warning(
|
||||
f"Mesh object {obj.name} should be in 'OBJECT' mode"
|
||||
" to be properly checked."
|
||||
)
|
||||
if not cls.has_uvs(obj):
|
||||
invalid.append(obj)
|
||||
return invalid
|
||||
|
||||
def process(self, instance):
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(f"Meshes found in instance without valid UV's: {invalid}")
|
||||
raise RuntimeError(
|
||||
f"Meshes found in instance without valid UV's: {invalid}"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -3,28 +3,27 @@ from typing import List
|
|||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import openpype.hosts.blender.api.action
|
||||
|
||||
|
||||
class ValidateMeshNoNegativeScale(pyblish.api.Validator):
|
||||
"""Ensure that meshes don't have a negative scale."""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
order = openpype.api.ValidateContentsOrder
|
||||
hosts = ["blender"]
|
||||
families = ["model"]
|
||||
category = "geometry"
|
||||
label = "Mesh No Negative Scale"
|
||||
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
|
||||
|
||||
@staticmethod
|
||||
def get_invalid(instance) -> List:
|
||||
invalid = []
|
||||
# TODO (jasper): only check objects in the collection that will be published?
|
||||
for obj in [
|
||||
obj for obj in bpy.data.objects if obj.type == 'MESH'
|
||||
]:
|
||||
if any(v < 0 for v in obj.scale):
|
||||
invalid.append(obj)
|
||||
|
||||
for obj in instance:
|
||||
if isinstance(obj, bpy.types.Object) and obj.type == 'MESH':
|
||||
if any(v < 0 for v in obj.scale):
|
||||
invalid.append(obj)
|
||||
return invalid
|
||||
|
||||
def process(self, instance):
|
||||
|
|
|
|||
|
|
@ -1,6 +1,9 @@
|
|||
from typing import List
|
||||
|
||||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import openpype.hosts.blender.api.action
|
||||
from openpype.pipeline.publish import ValidateContentsOrder
|
||||
|
||||
|
|
@ -20,13 +23,13 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin):
|
|||
label = "No Colons in names"
|
||||
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance) -> List:
|
||||
@staticmethod
|
||||
def get_invalid(instance) -> List:
|
||||
invalid = []
|
||||
for obj in [obj for obj in instance]:
|
||||
for obj in instance:
|
||||
if ':' in obj.name:
|
||||
invalid.append(obj)
|
||||
if obj.type == 'ARMATURE':
|
||||
if isinstance(obj, bpy.types.Object) and obj.type == 'ARMATURE':
|
||||
for bone in obj.data.bones:
|
||||
if ':' in bone.name:
|
||||
invalid.append(obj)
|
||||
|
|
@ -37,4 +40,5 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin):
|
|||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
f"Objects found with colon in name: {invalid}")
|
||||
f"Objects found with colon in name: {invalid}"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,5 +1,7 @@
|
|||
from typing import List
|
||||
|
||||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.hosts.blender.api.action
|
||||
|
||||
|
|
@ -10,26 +12,21 @@ class ValidateObjectIsInObjectMode(pyblish.api.InstancePlugin):
|
|||
order = pyblish.api.ValidatorOrder - 0.01
|
||||
hosts = ["blender"]
|
||||
families = ["model", "rig", "layout"]
|
||||
category = "geometry"
|
||||
label = "Validate Object Mode"
|
||||
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
|
||||
optional = False
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance) -> List:
|
||||
@staticmethod
|
||||
def get_invalid(instance) -> List:
|
||||
invalid = []
|
||||
for obj in [obj for obj in instance]:
|
||||
try:
|
||||
if obj.type == 'MESH' or obj.type == 'ARMATURE':
|
||||
# Check if the object is in object mode.
|
||||
if not obj.mode == 'OBJECT':
|
||||
invalid.append(obj)
|
||||
except Exception:
|
||||
continue
|
||||
for obj in instance:
|
||||
if isinstance(obj, bpy.types.Object) and obj.mode != "OBJECT":
|
||||
invalid.append(obj)
|
||||
return invalid
|
||||
|
||||
def process(self, instance):
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
f"Object found in instance is not in Object Mode: {invalid}")
|
||||
f"Object found in instance is not in Object Mode: {invalid}"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,8 +1,10 @@
|
|||
from typing import List
|
||||
|
||||
import mathutils
|
||||
import bpy
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import openpype.hosts.blender.api.action
|
||||
from openpype.pipeline.publish import ValidateContentsOrder
|
||||
|
||||
|
|
@ -19,7 +21,6 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
|
|||
order = ValidateContentsOrder
|
||||
hosts = ["blender"]
|
||||
families = ["model"]
|
||||
category = "geometry"
|
||||
version = (0, 1, 0)
|
||||
label = "Transform Zero"
|
||||
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
|
||||
|
|
@ -29,8 +30,11 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
|
|||
@classmethod
|
||||
def get_invalid(cls, instance) -> List:
|
||||
invalid = []
|
||||
for obj in [obj for obj in instance]:
|
||||
if obj.matrix_basis != cls._identity:
|
||||
for obj in instance:
|
||||
if (
|
||||
isinstance(obj, bpy.types.Object)
|
||||
and obj.matrix_basis != cls._identity
|
||||
):
|
||||
invalid.append(obj)
|
||||
return invalid
|
||||
|
||||
|
|
@ -38,4 +42,6 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
|
|||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
f"Object found in instance is not in Object Mode: {invalid}")
|
||||
"Object found in instance has not"
|
||||
f" transform to zero: {invalid}"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,9 +1,9 @@
|
|||
import pyblish.api
|
||||
|
||||
import openpype.lib as oplib
|
||||
from openpype.pipeline import legacy_io
|
||||
import openpype.hosts.flame.api as opfapi
|
||||
from openpype.hosts.flame.otio import flame_export
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollecTimelineOTIO(pyblish.api.ContextPlugin):
|
||||
|
|
@ -24,11 +24,14 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
|
|||
sequence = opfapi.get_current_sequence(opfapi.CTX.selection)
|
||||
|
||||
# create subset name
|
||||
subset_name = oplib.get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
context.data["projectName"],
|
||||
context.data["hostName"],
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
# adding otio timeline to context
|
||||
|
|
|
|||
|
|
@ -1,11 +1,11 @@
|
|||
import os
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.modules.interfaces import IHostModule
|
||||
from openpype.modules.interfaces import IHostAddon
|
||||
|
||||
HARMONY_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class HarmonyAddon(OpenPypeModule, IHostModule):
|
||||
class HarmonyAddon(OpenPypeModule, IHostAddon):
|
||||
name = "harmony"
|
||||
host_name = "harmony"
|
||||
|
||||
|
|
|
|||
|
|
@ -1,9 +1,9 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Collect current workfile from Harmony."""
|
||||
import pyblish.api
|
||||
import os
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectWorkfile(pyblish.api.ContextPlugin):
|
||||
|
|
@ -17,13 +17,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
|
|||
"""Plugin entry point."""
|
||||
family = "workfile"
|
||||
basename = os.path.basename(context.data["currentFile"])
|
||||
subset = get_subset_name_with_asset_doc(
|
||||
subset = get_subset_name(
|
||||
family,
|
||||
"",
|
||||
context.data["anatomyData"]["task"]["name"],
|
||||
context.data["assetEntity"],
|
||||
context.data["anatomyData"]["project"]["name"],
|
||||
host_name=context.data["hostName"]
|
||||
host_name=context.data["hostName"],
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
# Create instance
|
||||
|
|
|
|||
|
|
@ -1,10 +1,10 @@
|
|||
from .module import (
|
||||
from .addon import (
|
||||
HIERO_ROOT_DIR,
|
||||
HieroModule,
|
||||
HieroAddon,
|
||||
)
|
||||
|
||||
|
||||
__all__ = (
|
||||
"HIERO_ROOT_DIR",
|
||||
"HieroModule",
|
||||
"HieroAddon",
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,12 +1,12 @@
|
|||
import os
|
||||
import platform
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.modules.interfaces import IHostModule
|
||||
from openpype.modules.interfaces import IHostAddon
|
||||
|
||||
HIERO_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class HieroModule(OpenPypeModule, IHostModule):
|
||||
class HieroAddon(OpenPypeModule, IHostAddon):
|
||||
name = "hiero"
|
||||
host_name = "hiero"
|
||||
|
||||
|
|
@@ -1,27 +1,28 @@
import os
import hou

from openpype.pipeline import legacy_io
import pyblish.api


class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    order = pyblish.api.CollectorOrder - 0.5
    order = pyblish.api.CollectorOrder - 0.01
    label = "Houdini Current File"
    hosts = ["houdini"]

    def process(self, context):
        """Inject the current working file"""

        filepath = hou.hipFile.path()
        if not os.path.exists(filepath):
        current_file = hou.hipFile.path()
        if not os.path.exists(current_file):
            # By default Houdini will even point a new scene to a path.
            # However if the file is not saved at all and does not exist,
            # we assume the user never set it.
            filepath = ""

        elif os.path.basename(filepath) == "untitled.hip":
        elif os.path.basename(current_file) == "untitled.hip":
            # Due to even a new file being called 'untitled.hip' we are unable
            # to confirm the current scene was ever saved because the file
            # could have existed already. We will allow it if the file exists,

@@ -33,4 +34,43 @@ class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
                "saved correctly."
            )

        context.data["currentFile"] = filepath
        context.data["currentFile"] = current_file

        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        task = legacy_io.Session["AVALON_TASK"]

        data = {}

        # create instance
        instance = context.create_instance(name=filename)
        subset = 'workfile' + task.capitalize()

        data.update({
            "subset": subset,
            "asset": os.getenv("AVALON_ASSET", None),
            "label": subset,
            "publish": True,
            "family": 'workfile',
            "families": ['workfile'],
            "setMembers": [current_file],
            "frameStart": context.data['frameStart'],
            "frameEnd": context.data['frameEnd'],
            "handleStart": context.data['handleStart'],
            "handleEnd": context.data['handleEnd']
        })

        data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        instance.data.update(data)

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('Scene path: {}'.format(current_file))
        self.log.info('staging Dir: {}'.format(folder))
        self.log.info('subset: {}'.format(subset))
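A pure-Python view of how the collector above derives its representation from the scene path (the sample path is made up; in the plugin it comes from `hou.hipFile.path()`):

```python
import os

# Example path only, to show the split logic used by the collector above.
current_file = "/projects/demo/work/houdini/sh010_lighting_v012.hip"
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)

representation = {
    "name": ext.lstrip("."),   # "hip"
    "ext": ext.lstrip("."),    # "hip"
    "files": file,             # "sh010_lighting_v012.hip"
    "stagingDir": folder,      # "/projects/demo/work/houdini"
}
print(filename, representation)
```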
@@ -0,0 +1,57 @@
# -*- coding: utf-8 -*-
import openpype.api
import pyblish.api
import hou


class ValidateWorkfilePaths(pyblish.api.InstancePlugin):
    """Validate workfile paths so they are absolute."""

    order = pyblish.api.ValidatorOrder
    families = ["workfile"]
    hosts = ["houdini"]
    label = "Validate Workfile Paths"
    actions = [openpype.api.RepairAction]
    optional = True

    node_types = ["file", "alembic"]
    prohibited_vars = ["$HIP", "$JOB"]

    def process(self, instance):
        invalid = self.get_invalid()
        self.log.info(
            "node types to check: {}".format(", ".join(self.node_types)))
        self.log.info(
            "prohibited vars: {}".format(", ".join(self.prohibited_vars))
        )
        if invalid:
            for param in invalid:
                self.log.error(
                    "{}: {}".format(param.path(), param.unexpandedString()))

            raise RuntimeError("Invalid paths found")

    @classmethod
    def get_invalid(cls):
        invalid = []
        for param, _ in hou.fileReferences():
            # skip nodes we are not interested in
            if param.node().type().name() not in cls.node_types:
                continue

            if any(
                    v for v in cls.prohibited_vars
                    if v in param.unexpandedString()):
                invalid.append(param)

        return invalid

    @classmethod
    def repair(cls, instance):
        invalid = cls.get_invalid()
        for param in invalid:
            cls.log.info("processing: {}".format(param.path()))
            cls.log.info("Replacing {} for {}".format(
                param.unexpandedString(),
                hou.text.expandString(param.unexpandedString())))
            param.set(hou.text.expandString(param.unexpandedString()))
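The core check of this new validator, isolated from the `hou` API so it can be read on its own (the sample strings are invented; in the plugin they come from `param.unexpandedString()`):

```python
prohibited_vars = ["$HIP", "$JOB"]

unexpanded_paths = [
    "$HIP/geo/ground.abc",           # relative to the hip file -> flagged
    "$JOB/cache/sim.bgeo.sc",        # relative to the job root -> flagged
    "/mnt/projects/demo/plate.exr",  # absolute path -> passes
]

invalid = [
    path for path in unexpanded_paths
    if any(var in path for var in prohibited_vars)
]
print(invalid)  # ['$HIP/geo/ground.abc', '$JOB/cache/sim.bgeo.sc']
```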
10 openpype/hosts/houdini/startup/python3.9libs/pythonrc.py (Normal file)

@@ -0,0 +1,10 @@
from openpype.pipeline import install_host
from openpype.hosts.houdini import api


def main():
    print("Installing OpenPype ...")
    install_host(api)


main()
@@ -1,6 +1,6 @@
from .module import OpenPypeMaya
from .addon import MayaAddon


__all__ = (
    "OpenPypeMaya",
    "MayaAddon",
)

@@ -1,12 +1,12 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
from openpype.modules.interfaces import IHostAddon

MAYA_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


class OpenPypeMaya(OpenPypeModule, IHostModule):
    name = "openpype_maya"
class MayaAddon(OpenPypeModule, IHostAddon):
    name = "maya"
    host_name = "maya"

    def initialize(self, module_settings):
@@ -128,7 +128,7 @@ class ExtractPlayblast(openpype.api.Extractor):
        # Update preset with current panel setting
        # if override_viewport_options is turned off
        if not override_viewport_options:
            panel = cmds.getPanel(with_focus=True)
            panel = cmds.getPanel(withFocus=True)
            panel_preset = capture.parse_active_view()
            preset.update(panel_preset)
            cmds.setFocus(panel)
|
|||
|
|
@ -100,9 +100,9 @@ class ExtractThumbnail(openpype.api.Extractor):
|
|||
# camera.
|
||||
if preset.pop("isolate_view", False) and instance.data.get("isolate"):
|
||||
preset["isolate"] = instance.data["setMembers"]
|
||||
|
||||
|
||||
# Show or Hide Image Plane
|
||||
image_plane = instance.data.get("imagePlane", True)
|
||||
image_plane = instance.data.get("imagePlane", True)
|
||||
if "viewport_options" in preset:
|
||||
preset["viewport_options"]["imagePlane"] = image_plane
|
||||
else:
|
||||
|
|
@ -117,7 +117,7 @@ class ExtractThumbnail(openpype.api.Extractor):
|
|||
# Update preset with current panel setting
|
||||
# if override_viewport_options is turned off
|
||||
if not override_viewport_options:
|
||||
panel = cmds.getPanel(with_focus=True)
|
||||
panel = cmds.getPanel(withFocus=True)
|
||||
panel_preset = capture.parse_active_view()
|
||||
preset.update(panel_preset)
|
||||
cmds.setFocus(panel)
|
||||
|
|
|
|||
|
|
@@ -1,10 +1,10 @@
from .module import (
from .addon import (
    NUKE_ROOT_DIR,
    NukeModule,
    NukeAddon,
)


__all__ = (
    "NUKE_ROOT_DIR",
    "NukeModule",
    "NukeAddon",
)

@@ -1,12 +1,12 @@
import os
import platform
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
from openpype.modules.interfaces import IHostAddon

NUKE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


class NukeModule(OpenPypeModule, IHostModule):
class NukeAddon(OpenPypeModule, IHostAddon):
    name = "nuke"
    host_name = "nuke"
@@ -1952,15 +1952,25 @@ class WorkfileSettings(object):
        if not write_node:
            return

        # write all knobs to node
        for knob in nuke_imageio_writes["knobs"]:
            value = knob["value"]
            if isinstance(value, six.text_type):
                value = str(value)
            if str(value).startswith("0x"):
                value = int(value, 16)
        try:
            # write all knobs to node
            for knob in nuke_imageio_writes["knobs"]:
                value = knob["value"]
                if isinstance(value, six.text_type):
                    value = str(value)
                if str(value).startswith("0x"):
                    value = int(value, 16)

                write_node[knob["name"]].setValue(value)
            log.debug("knob: {}| value: {}".format(
                knob["name"], value
            ))
            write_node[knob["name"]].setValue(value)
        except TypeError:
            log.warning(
                "Legacy workflow didnt work, switching to current")

            set_node_knobs_from_settings(
                write_node, nuke_imageio_writes["knobs"])

    def set_reads_colorspace(self, read_clrs_inputs):
        """ Setting colorspace to Read nodes

@@ -2017,12 +2027,14 @@ class WorkfileSettings(object):
        # get imageio
        nuke_colorspace = get_nuke_imageio_settings()

        log.info("Setting colorspace to workfile...")
        try:
            self.set_root_colorspace(nuke_colorspace["workfile"])
        except AttributeError:
            msg = "set_colorspace(): missing `workfile` settings in template"
            nuke.message(msg)

        log.info("Setting colorspace to viewers...")
        try:
            self.set_viewers_colorspace(nuke_colorspace["viewer"])
        except AttributeError:

@@ -2030,24 +2042,18 @@ class WorkfileSettings(object):
            nuke.message(msg)
            log.error(msg)

        log.info("Setting colorspace to write nodes...")
        try:
            self.set_writes_colorspace()
        except AttributeError as _error:
            nuke.message(_error)
            log.error(_error)

        log.info("Setting colorspace to read nodes...")
        read_clrs_inputs = nuke_colorspace["regexInputs"].get("inputs", [])
        if read_clrs_inputs:
            self.set_reads_colorspace(read_clrs_inputs)

        try:
            for key in nuke_colorspace:
                log.debug("Preset's colorspace key: {}".format(key))
        except TypeError:
            msg = "Nuke is not in templates! Contact your supervisor!"
            nuke.message(msg)
            log.error(msg)

    def reset_frame_range_handles(self):
        """Set frame range to current asset"""
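The knob loop above converts hexadecimal colour strings coming from settings into integers before calling `setValue()`. The conversion in isolation (the value is a made-up sample):

```python
value = "0xff0000ff"              # RGBA hex string as it may appear in settings
if str(value).startswith("0x"):
    value = int(value, 16)
print(value)                      # 4278190335
```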
@@ -1,11 +1,11 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
from openpype.modules.interfaces import IHostAddon

PHOTOSHOP_HOST_DIR = os.path.dirname(os.path.abspath(__file__))


class PhotoshopAddon(OpenPypeModule, IHostModule):
class PhotoshopAddon(OpenPypeModule, IHostAddon):
    name = "photoshop"
    host_name = "photoshop"
|
|||
|
|
@ -9,14 +9,22 @@ from openpype.settings import get_project_settings
|
|||
|
||||
|
||||
class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
||||
"""Creates instances for configured color code of a layer.
|
||||
"""Creates instances for layers marked by configurable color.
|
||||
|
||||
Used in remote publishing when artists marks publishable layers by color-
|
||||
coding.
|
||||
coding. Top level layers (group) must be marked by specific color to be
|
||||
published as an instance of 'image' family.
|
||||
|
||||
Can add group for all publishable layers to allow creation of flattened
|
||||
image. (Cannot contain special background layer as it cannot be grouped!)
|
||||
|
||||
Based on value `create_flatten_image` from Settings:
|
||||
- "yes": create flattened 'image' subset of all publishable layers + create
|
||||
'image' subset per publishable layer
|
||||
- "only": create ONLY flattened 'image' subset of all publishable layers
|
||||
- "no": do not create flattened 'image' subset at all,
|
||||
only separate subsets per marked layer.
|
||||
|
||||
Identifier:
|
||||
id (str): "pyblish.avalon.instance"
|
||||
"""
|
||||
|
|
@ -32,8 +40,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
|||
# TODO check if could be set globally, probably doesn't make sense when
|
||||
# flattened template cannot
|
||||
subset_template_name = ""
|
||||
create_flatten_image = False
|
||||
# probably not possible to configure this globally
|
||||
create_flatten_image = "no"
|
||||
flatten_subset_template = ""
|
||||
|
||||
def process(self, context):
|
||||
|
|
@ -62,6 +69,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
|||
|
||||
publishable_layers = []
|
||||
created_instances = []
|
||||
family_from_settings = None
|
||||
for layer in layers:
|
||||
self.log.debug("Layer:: {}".format(layer))
|
||||
if layer.parents:
|
||||
|
|
@ -80,6 +88,9 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
|||
self.log.debug("!!! Not found family or template, skip")
|
||||
continue
|
||||
|
||||
if not family_from_settings:
|
||||
family_from_settings = resolved_family
|
||||
|
||||
fill_pairs = {
|
||||
"variant": variant,
|
||||
"family": resolved_family,
|
||||
|
|
@ -98,13 +109,16 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
|||
"Subset {} already created, skipping.".format(subset))
|
||||
continue
|
||||
|
||||
instance = self._create_instance(context, layer, resolved_family,
|
||||
asset_name, subset, task_name)
|
||||
if self.create_flatten_image != "flatten_only":
|
||||
instance = self._create_instance(context, layer,
|
||||
resolved_family,
|
||||
asset_name, subset, task_name)
|
||||
created_instances.append(instance)
|
||||
|
||||
existing_subset_names.append(subset)
|
||||
publishable_layers.append(layer)
|
||||
created_instances.append(instance)
|
||||
|
||||
if self.create_flatten_image and publishable_layers:
|
||||
if self.create_flatten_image != "no" and publishable_layers:
|
||||
self.log.debug("create_flatten_image")
|
||||
if not self.flatten_subset_template:
|
||||
self.log.warning("No template for flatten image")
|
||||
|
|
@ -116,7 +130,7 @@ class CollectColorCodedInstances(pyblish.api.ContextPlugin):
|
|||
|
||||
first_layer = publishable_layers[0] # dummy layer
|
||||
first_layer.name = subset
|
||||
family = created_instances[0].data["family"] # inherit family
|
||||
family = family_from_settings # inherit family
|
||||
instance = self._create_instance(context, first_layer,
|
||||
family,
|
||||
asset_name, subset, task_name)
|
||||
|
|
|
|||
|
|
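A condensed sketch of how the tri-state `create_flatten_image` value drives the collector above. Note the docstring labels the states "yes"/"only"/"no" while the code compares against "flatten_only" and "no", so the exact setting strings come from project settings; the values passed below are illustrative only.

```python
def plan_instances(create_flatten_image, publishable_layers):
    """Return a rough plan of which instances the collector would create."""
    plans = []
    if create_flatten_image != "flatten_only":
        # one 'image' instance per publishable, color-coded layer
        plans.extend("layer:{}".format(name) for name in publishable_layers)
    if create_flatten_image != "no" and publishable_layers:
        # one flattened 'image' instance grouping all publishable layers
        plans.append("flatten:" + "+".join(publishable_layers))
    return plans

print(plan_instances("yes", ["Hero", "Background"]))
print(plan_instances("flatten_only", ["Hero", "Background"]))
print(plan_instances("no", ["Hero", "Background"]))
```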
@ -10,7 +10,7 @@ import os
|
|||
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectReview(pyblish.api.ContextPlugin):
|
||||
|
|
@ -27,13 +27,14 @@ class CollectReview(pyblish.api.ContextPlugin):
|
|||
|
||||
def process(self, context):
|
||||
family = "review"
|
||||
subset = get_subset_name_with_asset_doc(
|
||||
subset = get_subset_name(
|
||||
family,
|
||||
context.data.get("variant", ''),
|
||||
context.data["anatomyData"]["task"]["name"],
|
||||
context.data["assetEntity"],
|
||||
context.data["anatomyData"]["project"]["name"],
|
||||
host_name=context.data["hostName"]
|
||||
host_name=context.data["hostName"],
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
instance = context.create_instance(subset)
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
import os
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectWorkfile(pyblish.api.ContextPlugin):
|
||||
|
|
@ -24,13 +24,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
|
|||
family = "workfile"
|
||||
# context.data["variant"] might come only from collect_batch_data
|
||||
variant = context.data.get("variant") or self.default_variant
|
||||
subset = get_subset_name_with_asset_doc(
|
||||
subset = get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
context.data["anatomyData"]["task"]["name"],
|
||||
context.data["assetEntity"],
|
||||
context.data["anatomyData"]["project"]["name"],
|
||||
host_name=context.data["hostName"]
|
||||
host_name=context.data["hostName"],
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
file_path = context.data["currentFile"]
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
from .standalonepublish_module import StandAlonePublishModule
|
||||
from .addon import StandAlonePublishAddon
|
||||
|
||||
|
||||
__all__ = (
|
||||
"StandAlonePublishModule",
|
||||
"StandAlonePublishAddon",
|
||||
)
|
||||
|
|
|
|||
|
|
@ -5,18 +5,18 @@ import click
|
|||
from openpype.lib import get_openpype_execute_args
|
||||
from openpype.lib.execute import run_detached_process
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.modules.interfaces import ITrayAction, IHostModule
|
||||
from openpype.modules.interfaces import ITrayAction, IHostAddon
|
||||
|
||||
STANDALONEPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class StandAlonePublishModule(OpenPypeModule, ITrayAction, IHostModule):
|
||||
class StandAlonePublishAddon(OpenPypeModule, ITrayAction, IHostAddon):
|
||||
label = "Publish"
|
||||
name = "standalonepublish_tool"
|
||||
name = "standalonepublisher"
|
||||
host_name = "standalonepublisher"
|
||||
|
||||
def initialize(self, modules_settings):
|
||||
self.enabled = modules_settings[self.name]["enabled"]
|
||||
self.enabled = modules_settings["standalonepublish_tool"]["enabled"]
|
||||
self.publish_paths = [
|
||||
os.path.join(STANDALONEPUBLISH_ROOT_DIR, "plugins", "publish")
|
||||
]
|
||||
|
|
@ -42,7 +42,7 @@ class StandAlonePublishModule(OpenPypeModule, ITrayAction, IHostModule):
|
|||
|
||||
|
||||
@click.group(
|
||||
StandAlonePublishModule.name,
|
||||
StandAlonePublishAddon.name,
|
||||
help="StandalonePublisher related commands.")
|
||||
def cli_main():
|
||||
pass
|
||||
|
|
@ -2,8 +2,8 @@ import copy
|
|||
import json
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.client import get_asset_by_name
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectBulkMovInstances(pyblish.api.InstancePlugin):
|
||||
|
|
@ -44,12 +44,14 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
|
|||
task_name = available_task_names[_task_name_low]
|
||||
break
|
||||
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
self.new_instance_family,
|
||||
self.subset_name_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name
|
||||
project_name,
|
||||
host_name=context.data["hostName"],
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
instance_name = f"{asset_name}_{subset_name}"
|
||||
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
from .module import TrayPublishModule
|
||||
from .addon import TrayPublishAddon
|
||||
|
||||
|
||||
__all__ = (
|
||||
"TrayPublishModule",
|
||||
"TrayPublishAddon",
|
||||
)
|
||||
|
|
|
|||
|
|
@ -5,15 +5,15 @@ import click
|
|||
from openpype.lib import get_openpype_execute_args
|
||||
from openpype.lib.execute import run_detached_process
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.modules.interfaces import ITrayAction, IHostModule
|
||||
from openpype.modules.interfaces import ITrayAction, IHostAddon
|
||||
|
||||
TRAYPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class TrayPublishModule(OpenPypeModule, IHostModule, ITrayAction):
|
||||
class TrayPublishAddon(OpenPypeModule, IHostAddon, ITrayAction):
|
||||
label = "New Publish (beta)"
|
||||
name = "traypublish_tool"
|
||||
host_name = "traypublish"
|
||||
name = "traypublisher"
|
||||
host_name = "traypublisher"
|
||||
|
||||
def initialize(self, modules_settings):
|
||||
self.enabled = True
|
||||
|
|
@ -28,7 +28,7 @@ class TrayPublishModule(OpenPypeModule, IHostModule, ITrayAction):
|
|||
self._experimental_tools = ExperimentalTools()
|
||||
|
||||
def tray_menu(self, *args, **kwargs):
|
||||
super(TrayPublishModule, self).tray_menu(*args, **kwargs)
|
||||
super(TrayPublishAddon, self).tray_menu(*args, **kwargs)
|
||||
traypublisher = self._experimental_tools.get("traypublisher")
|
||||
visible = False
|
||||
if traypublisher and traypublisher.enabled:
|
||||
|
|
@ -53,7 +53,7 @@ class TrayPublishModule(OpenPypeModule, IHostModule, ITrayAction):
|
|||
click_group.add_command(cli_main)
|
||||
|
||||
|
||||
@click.group(TrayPublishModule.name, help="TrayPublisher related commands.")
|
||||
@click.group(TrayPublishAddon.name, help="TrayPublisher related commands.")
|
||||
def cli_main():
|
||||
pass
|
||||
|
||||
|
|
@ -6,13 +6,15 @@ from openpype.client import get_assets, get_asset_by_name
|
|||
from openpype.lib import (
|
||||
FileDef,
|
||||
BoolDef,
|
||||
get_subset_name_with_asset_doc,
|
||||
TaskNotSetError,
|
||||
)
|
||||
from openpype.pipeline import (
|
||||
CreatedInstance,
|
||||
CreatorError
|
||||
)
|
||||
from openpype.pipeline.create import (
|
||||
get_subset_name,
|
||||
TaskNotSetError,
|
||||
)
|
||||
|
||||
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
|
||||
|
||||
|
|
@ -130,7 +132,7 @@ class BatchMovieCreator(TrayPublishCreator):
|
|||
task_name = self._get_task_name(asset_doc)
|
||||
|
||||
try:
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
self.family,
|
||||
variant,
|
||||
task_name,
|
||||
|
|
@ -143,7 +145,7 @@ class BatchMovieCreator(TrayPublishCreator):
|
|||
# but user have ability to change it
|
||||
# NOTE: This expect that there is not task 'Undefined' on asset
|
||||
task_name = "Undefined"
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
self.family,
|
||||
variant,
|
||||
task_name,
|
||||
|
|
|
|||
|
|
@ -1,12 +1,12 @@
|
|||
from .tvpaint_module import (
|
||||
from .addon import (
|
||||
get_launch_script_path,
|
||||
TVPaintModule,
|
||||
TVPaintAddon,
|
||||
TVPAINT_ROOT_DIR,
|
||||
)
|
||||
|
||||
|
||||
__all__ = (
|
||||
"get_launch_script_path",
|
||||
"TVPaintModule",
|
||||
"TVPaintAddon",
|
||||
"TVPAINT_ROOT_DIR",
|
||||
)
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
import os
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.modules.interfaces import IHostModule
|
||||
from openpype.modules.interfaces import IHostAddon
|
||||
|
||||
TVPAINT_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
|
@ -13,7 +13,7 @@ def get_launch_script_path():
|
|||
)
|
||||
|
||||
|
||||
class TVPaintModule(OpenPypeModule, IHostModule):
|
||||
class TVPaintAddon(OpenPypeModule, IHostAddon):
|
||||
name = "tvpaint"
|
||||
host_name = "tvpaint"
|
||||
|
||||
|
|
@ -3,8 +3,8 @@ import copy
|
|||
import pyblish.api
|
||||
|
||||
from openpype.client import get_asset_by_name
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectInstances(pyblish.api.ContextPlugin):
|
||||
|
|
@ -107,13 +107,14 @@ class CollectInstances(pyblish.api.ContextPlugin):
|
|||
# Use empty variant value
|
||||
variant = ""
|
||||
task_name = legacy_io.Session["AVALON_TASK"]
|
||||
new_subset_name = get_subset_name_with_asset_doc(
|
||||
new_subset_name = get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
host_name,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
instance_data["subset"] = new_subset_name
|
||||
|
||||
|
|
|
|||
|
|
@ -3,7 +3,7 @@ import copy
|
|||
import pyblish.api
|
||||
|
||||
from openpype.client import get_asset_by_name
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectRenderScene(pyblish.api.ContextPlugin):
|
||||
|
|
@ -75,14 +75,15 @@ class CollectRenderScene(pyblish.api.ContextPlugin):
|
|||
dynamic_data["render_pass"] = dynamic_data["renderpass"]
|
||||
|
||||
task_name = workfile_context["task"]
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
"render",
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
dynamic_data=dynamic_data,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
instance_data = {
|
||||
|
|
|
|||
|
|
@ -3,8 +3,8 @@ import json
|
|||
import pyblish.api
|
||||
|
||||
from openpype.client import get_asset_by_name
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectWorkfile(pyblish.api.ContextPlugin):
|
||||
|
|
@ -39,13 +39,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
|
|||
# Use empty variant value
|
||||
variant = ""
|
||||
task_name = legacy_io.Session["AVALON_TASK"]
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
host_name,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
# Create Workfile instance
|
||||
|
|
|
|||
|
|
@@ -1,6 +1,6 @@
-from .module import UnrealModule
+from .addon import UnrealAddon
 
 
 __all__ = (
-    "UnrealModule",
+    "UnrealAddon",
 )
@@ -1,18 +1,18 @@
 import os
 from openpype.modules import OpenPypeModule
-from openpype.modules.interfaces import IHostModule
+from openpype.modules.interfaces import IHostAddon
 
 UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
 
 
-class UnrealModule(OpenPypeModule, IHostModule):
+class UnrealAddon(OpenPypeModule, IHostAddon):
     name = "unreal"
     host_name = "unreal"
 
     def initialize(self, module_settings):
         self.enabled = True
 
-    def add_implementation_envs(self, env, app) -> None:
+    def add_implementation_envs(self, env, app):
         """Modify environments to contain all required for implementation."""
         # Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation
 
@ -1,6 +1,6 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Unreal launching and project tools."""
|
||||
import sys
|
||||
|
||||
import os
|
||||
import platform
|
||||
import json
|
||||
|
|
@@ -9,7 +9,7 @@ import subprocess
 import re
 from pathlib import Path
 from collections import OrderedDict
-from openpype.api import get_project_settings
+from openpype.settings import get_project_settings
 
 
 def get_engine_versions(env=None):
@@ -3,12 +3,12 @@ import os
 import click
 
 from openpype.modules import OpenPypeModule
-from openpype.modules.interfaces import IHostModule
+from openpype.modules.interfaces import IHostAddon
 
 WEBPUBLISHER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
 
 
-class WebpublisherAddon(OpenPypeModule, IHostModule):
+class WebpublisherAddon(OpenPypeModule, IHostAddon):
     name = "webpublisher"
     host_name = "webpublisher"
 
@@ -23,7 +23,7 @@ from openpype.lib import (
     get_ffprobe_streams,
     convert_ffprobe_fps_value,
 )
-from openpype.lib.plugin_tools import get_subset_name_with_asset_doc
+from openpype.pipeline.create import get_subset_name
 from openpype_modules.webpublisher.lib import parse_json
 
 
@@ -78,9 +78,14 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
                 is_sequence,
                 extension.replace(".", ''))
 
-            subset_name = get_subset_name_with_asset_doc(
-                family, variant, task_name, asset_doc,
-                project_name=project_name, host_name="webpublisher"
+            subset_name = get_subset_name(
+                family,
+                variant,
+                task_name,
+                asset_doc,
+                project_name=project_name,
+                host_name="webpublisher",
+                project_settings=context.data["project_settings"]
             )
             version = self._get_next_version(
                 project_name, asset_doc, subset_name
@ -10,7 +10,7 @@ import re
|
|||
import copy
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
|
||||
class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
||||
|
|
@ -47,13 +47,14 @@ class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
|||
new_instances = []
|
||||
|
||||
# Workfile instance
|
||||
workfile_subset_name = get_subset_name_with_asset_doc(
|
||||
workfile_subset_name = get_subset_name(
|
||||
self.workfile_family,
|
||||
self.workfile_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
host_name,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
workfile_instance = self._create_workfile_instance(
|
||||
context, workfile_subset_name
|
||||
|
|
@ -61,13 +62,14 @@ class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
|||
new_instances.append(workfile_instance)
|
||||
|
||||
# Review instance
|
||||
review_subset_name = get_subset_name_with_asset_doc(
|
||||
review_subset_name = get_subset_name(
|
||||
self.review_family,
|
||||
self.review_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
host_name,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
review_instance = self._create_review_instance(
|
||||
context, review_subset_name
|
||||
|
|
@ -114,14 +116,15 @@ class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
|||
"family": "render"
|
||||
}
|
||||
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
self.render_pass_family,
|
||||
render_pass,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
dynamic_data=dynamic_data,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
|
||||
instance = self._create_render_pass_instance(
|
||||
|
|
@ -137,14 +140,15 @@ class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
|||
# Override family for subset name
|
||||
"family": "render"
|
||||
}
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
subset_name = get_subset_name(
|
||||
self.render_layer_family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
dynamic_data=dynamic_data,
|
||||
project_settings=context.data["project_settings"]
|
||||
)
|
||||
instance = self._create_render_layer_instance(
|
||||
context, layers, subset_name
|
||||
|
|
|
|||
|
|
@@ -189,11 +189,11 @@ from .plugin_tools import (
     filter_pyblish_plugins,
     set_plugin_attributes_from_settings,
     source_hash,
-    get_unique_layer_name,
-    get_background_layers,
 )
 
 from .path_tools import (
+    format_file_size,
+    collect_frames,
     create_hard_link,
     version_up,
     get_version_from_path,
@@ -354,9 +354,9 @@ __all__ = [
     "filter_pyblish_plugins",
     "set_plugin_attributes_from_settings",
     "source_hash",
-    "get_unique_layer_name",
-    "get_background_layers",
 
+    "format_file_size",
+    "collect_frames",
     "create_hard_link",
     "version_up",
     "get_version_from_path",
@@ -469,6 +469,19 @@ class ApplicationManager:
             for tool in group:
                 self.tools[tool.full_name] = tool
 
+    def find_latest_available_variant_for_group(self, group_name):
+        group = self.app_groups.get(group_name)
+        if group is None or not group.enabled:
+            return None
+
+        output = None
+        for _, variant in reversed(sorted(group.variants.items())):
+            executable = variant.find_executable()
+            if executable:
+                output = variant
+                break
+        return output
+
     def launch(self, app_name, **data):
         """Launch procedure.
 
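A short usage sketch for the new `find_latest_available_variant_for_group` helper; the import path and the `"maya"` group name are assumptions for illustration, and the result depends on which application variants are installed locally:

```python
from openpype.lib.applications import ApplicationManager

manager = ApplicationManager()
# Walks the group's variants from newest to oldest and returns the first
# one whose executable exists on this machine, or None.
variant = manager.find_latest_available_variant_for_group("maya")
if variant is not None:
    print("Latest launchable variant:", variant.full_name)
```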
@@ -950,6 +963,63 @@ class ApplicationLaunchContext:
             )
             self.kwargs["env"] = value
 
+    def _collect_addons_launch_hook_paths(self):
+        """Helper to collect application launch hooks from addons.
+
+        Module have to have implemented 'get_launch_hook_paths' method which
+        can expect appliction as argument or nothing.
+
+        Returns:
+            List[str]: Paths to launch hook directories.
+        """
+
+        expected_types = (list, tuple, set)
+
+        output = []
+        for module in self.modules_manager.get_enabled_modules():
+            # Skip module if does not have implemented 'get_launch_hook_paths'
+            func = getattr(module, "get_launch_hook_paths", None)
+            if func is None:
+                continue
+
+            func = module.get_launch_hook_paths
+            if hasattr(inspect, "signature"):
+                sig = inspect.signature(func)
+                expect_args = len(sig.parameters) > 0
+            else:
+                expect_args = len(inspect.getargspec(func)[0]) > 0
+
+            # Pass application argument if method expect it.
+            try:
+                if expect_args:
+                    hook_paths = func(self.application)
+                else:
+                    hook_paths = func()
+            except Exception:
+                self.log.warning(
+                    "Failed to call 'get_launch_hook_paths'",
+                    exc_info=True
+                )
+                continue
+
+            if not hook_paths:
+                continue
+
+            # Convert string to list
+            if isinstance(hook_paths, six.string_types):
+                hook_paths = [hook_paths]
+
+            # Skip invalid types
+            if not isinstance(hook_paths, expected_types):
+                self.log.warning((
+                    "Result of `get_launch_hook_paths`"
+                    " has invalid type {}. Expected {}"
+                ).format(type(hook_paths), expected_types))
+                continue
+
+            output.extend(hook_paths)
+        return output
+
     def paths_to_launch_hooks(self):
         """Directory paths where to look for launch hooks."""
         # This method has potential to be part of application manager (maybe).
@@ -983,9 +1053,7 @@ class ApplicationLaunchContext:
                 paths.append(path)
 
         # Load modules paths
-        paths.extend(
-            self.modules_manager.collect_launch_hook_paths(self.application)
-        )
+        paths.extend(self._collect_addons_launch_hook_paths())
 
         return paths
 
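The new `_collect_addons_launch_hook_paths` accepts a `get_launch_hook_paths` method that takes either no argument or the launched `Application`. A hedged sketch of what an addon could expose (the class, directory name and host check are made up for illustration):

```python
import os


class ExampleLaunchHooksAddon:
    """Hypothetical addon-like object queried by the launch context."""

    _root = os.path.dirname(os.path.abspath(__file__))

    def get_launch_hook_paths(self, application):
        # Returning a single string would also be accepted and wrapped
        # into a list by the collector; invalid types are skipped.
        if application.host_name == "nuke":
            return [os.path.join(self._root, "launch_hooks")]
        return []
```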
@ -1,81 +1,113 @@
|
|||
"""Functions useful for delivery action or loader"""
|
||||
import os
|
||||
import shutil
|
||||
import glob
|
||||
import clique
|
||||
import collections
|
||||
|
||||
from .path_templates import (
|
||||
StringTemplate,
|
||||
TemplateUnsolved,
|
||||
)
|
||||
import functools
|
||||
import warnings
|
||||
|
||||
|
||||
class DeliveryDeprecatedWarning(DeprecationWarning):
|
||||
pass
|
||||
|
||||
|
||||
def deprecated(new_destination):
|
||||
"""Mark functions as deprecated.
|
||||
|
||||
It will result in a warning being emitted when the function is used.
|
||||
"""
|
||||
|
||||
func = None
|
||||
if callable(new_destination):
|
||||
func = new_destination
|
||||
new_destination = None
|
||||
|
||||
def _decorator(decorated_func):
|
||||
if new_destination is None:
|
||||
warning_message = (
|
||||
" Please check content of deprecated function to figure out"
|
||||
" possible replacement."
|
||||
)
|
||||
else:
|
||||
warning_message = " Please replace your usage with '{}'.".format(
|
||||
new_destination
|
||||
)
|
||||
|
||||
@functools.wraps(decorated_func)
|
||||
def wrapper(*args, **kwargs):
|
||||
warnings.simplefilter("always", DeliveryDeprecatedWarning)
|
||||
warnings.warn(
|
||||
(
|
||||
"Call to deprecated function '{}'"
|
||||
"\nFunction was moved or removed.{}"
|
||||
).format(decorated_func.__name__, warning_message),
|
||||
category=DeliveryDeprecatedWarning,
|
||||
stacklevel=4
|
||||
)
|
||||
return decorated_func(*args, **kwargs)
|
||||
return wrapper
|
||||
|
||||
if func is None:
|
||||
return _decorator
|
||||
return _decorator(func)
|
||||
|
||||
|
||||
@deprecated("openpype.lib.path_tools.collect_frames")
|
||||
def collect_frames(files):
|
||||
"""Returns dict of source path and its frame, if from sequence
|
||||
|
||||
Uses clique as most precise solution, used when anatomy template that
|
||||
created files is not known.
|
||||
|
||||
Assumption is that frames are separated by '.', negative frames are not
|
||||
allowed.
|
||||
|
||||
Args:
|
||||
files(list) or (set with single value): list of source paths
|
||||
|
||||
Returns:
|
||||
(dict): {'/asset/subset_v001.0001.png': '0001', ....}
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
Returns dict of source path and its frame, if from sequence
|
||||
|
||||
Uses clique as most precise solution, used when anatomy template that
|
||||
created files is not known.
|
||||
from .path_tools import collect_frames
|
||||
|
||||
Assumption is that frames are separated by '.', negative frames are not
|
||||
allowed.
|
||||
return collect_frames(files)
|
||||
|
||||
Args:
|
||||
files(list) or (set with single value): list of source paths
|
||||
Returns:
|
||||
(dict): {'/asset/subset_v001.0001.png': '0001', ....}
|
||||
|
||||
@deprecated("openpype.lib.path_tools.format_file_size")
|
||||
def sizeof_fmt(num, suffix=None):
|
||||
"""Returns formatted string with size in appropriate unit
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
patterns = [clique.PATTERNS["frames"]]
|
||||
collections, remainder = clique.assemble(files, minimum_items=1,
|
||||
patterns=patterns)
|
||||
|
||||
sources_and_frames = {}
|
||||
if collections:
|
||||
for collection in collections:
|
||||
src_head = collection.head
|
||||
src_tail = collection.tail
|
||||
|
||||
for index in collection.indexes:
|
||||
src_frame = collection.format("{padding}") % index
|
||||
src_file_name = "{}{}{}".format(src_head, src_frame,
|
||||
src_tail)
|
||||
sources_and_frames[src_file_name] = src_frame
|
||||
else:
|
||||
sources_and_frames[remainder.pop()] = None
|
||||
|
||||
return sources_and_frames
|
||||
|
||||
|
||||
def sizeof_fmt(num, suffix='B'):
|
||||
"""Returns formatted string with size in appropriate unit"""
|
||||
for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
|
||||
if abs(num) < 1024.0:
|
||||
return "%3.1f%s%s" % (num, unit, suffix)
|
||||
num /= 1024.0
|
||||
return "%.1f%s%s" % (num, 'Yi', suffix)
|
||||
from .path_tools import format_file_size
|
||||
return format_file_size(num, suffix)
|
||||
|
||||
|
||||
@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy")
|
||||
def path_from_representation(representation, anatomy):
|
||||
try:
|
||||
template = representation["data"]["template"]
|
||||
"""Get representation path using representation document and anatomy.
|
||||
|
||||
except KeyError:
|
||||
return None
|
||||
Args:
|
||||
representation (Dict[str, Any]): Representation document.
|
||||
anatomy (Anatomy): Project anatomy.
|
||||
|
||||
try:
|
||||
context = representation["context"]
|
||||
context["root"] = anatomy.roots
|
||||
path = StringTemplate.format_strict_template(template, context)
|
||||
return os.path.normpath(path)
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
|
||||
except TemplateUnsolved:
|
||||
# Template references unavailable data
|
||||
return None
|
||||
from openpype.pipeline.load import get_representation_path_with_anatomy
|
||||
|
||||
return path
|
||||
return get_representation_path_with_anatomy(representation, anatomy)
|
||||
|
||||
|
||||
@deprecated
|
||||
def copy_file(src_path, dst_path):
|
||||
"""Hardlink file if possible(to save space), copy if not"""
|
||||
from openpype.lib import create_hard_link # safer importing
|
||||
|
|
@ -91,131 +123,96 @@ def copy_file(src_path, dst_path):
|
|||
shutil.copyfile(src_path, dst_path)
|
||||
|
||||
|
||||
@deprecated("openpype.pipeline.delivery.get_format_dict")
|
||||
def get_format_dict(anatomy, location_path):
|
||||
"""Returns replaced root values from user provider value.
|
||||
|
||||
Args:
|
||||
anatomy (Anatomy)
|
||||
location_path (str): user provided value
|
||||
Returns:
|
||||
(dict): prepared for formatting of a template
|
||||
Args:
|
||||
anatomy (Anatomy)
|
||||
location_path (str): user provided value
|
||||
|
||||
Returns:
|
||||
(dict): prepared for formatting of a template
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
format_dict = {}
|
||||
if location_path:
|
||||
location_path = location_path.replace("\\", "/")
|
||||
root_names = anatomy.root_names_from_templates(
|
||||
anatomy.templates["delivery"]
|
||||
)
|
||||
if root_names is None:
|
||||
format_dict["root"] = location_path
|
||||
else:
|
||||
format_dict["root"] = {}
|
||||
for name in root_names:
|
||||
format_dict["root"][name] = location_path
|
||||
return format_dict
|
||||
|
||||
from openpype.pipeline.delivery import get_format_dict
|
||||
|
||||
return get_format_dict(anatomy, location_path)
|
||||
|
||||
|
||||
@deprecated("openpype.pipeline.delivery.check_destination_path")
|
||||
def check_destination_path(repre_id,
|
||||
anatomy, anatomy_data,
|
||||
datetime_data, template_name):
|
||||
""" Try to create destination path based on 'template_name'.
|
||||
|
||||
In the case that path cannot be filled, template contains unmatched
|
||||
keys, provide error message to filter out repre later.
|
||||
In the case that path cannot be filled, template contains unmatched
|
||||
keys, provide error message to filter out repre later.
|
||||
|
||||
Args:
|
||||
anatomy (Anatomy)
|
||||
anatomy_data (dict): context to fill anatomy
|
||||
datetime_data (dict): values with actual date
|
||||
template_name (str): to pick correct delivery template
|
||||
Returns:
|
||||
(collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"}
|
||||
Args:
|
||||
anatomy (Anatomy)
|
||||
anatomy_data (dict): context to fill anatomy
|
||||
datetime_data (dict): values with actual date
|
||||
template_name (str): to pick correct delivery template
|
||||
|
||||
Returns:
|
||||
(collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"}
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
anatomy_data.update(datetime_data)
|
||||
anatomy_filled = anatomy.format_all(anatomy_data)
|
||||
dest_path = anatomy_filled["delivery"][template_name]
|
||||
report_items = collections.defaultdict(list)
|
||||
|
||||
if not dest_path.solved:
|
||||
msg = (
|
||||
"Missing keys in Representation's context"
|
||||
" for anatomy template \"{}\"."
|
||||
).format(template_name)
|
||||
from openpype.pipeline.delivery import check_destination_path
|
||||
|
||||
sub_msg = (
|
||||
"Representation: {}<br>"
|
||||
).format(repre_id)
|
||||
|
||||
if dest_path.missing_keys:
|
||||
keys = ", ".join(dest_path.missing_keys)
|
||||
sub_msg += (
|
||||
"- Missing keys: \"{}\"<br>"
|
||||
).format(keys)
|
||||
|
||||
if dest_path.invalid_types:
|
||||
items = []
|
||||
for key, value in dest_path.invalid_types.items():
|
||||
items.append("\"{}\" {}".format(key, str(value)))
|
||||
|
||||
keys = ", ".join(items)
|
||||
sub_msg += (
|
||||
"- Invalid value DataType: \"{}\"<br>"
|
||||
).format(keys)
|
||||
|
||||
report_items[msg].append(sub_msg)
|
||||
|
||||
return report_items
|
||||
return check_destination_path(
|
||||
repre_id,
|
||||
anatomy,
|
||||
anatomy_data,
|
||||
datetime_data,
|
||||
template_name
|
||||
)
|
||||
|
||||
|
||||
@deprecated("openpype.pipeline.delivery.deliver_single_file")
|
||||
def process_single_file(
|
||||
src_path, repre, anatomy, template_name, anatomy_data, format_dict,
|
||||
report_items, log
|
||||
):
|
||||
"""Copy single file to calculated path based on template
|
||||
|
||||
Args:
|
||||
src_path(str): path of source representation file
|
||||
_repre (dict): full repre, used only in process_sequence, here only
|
||||
as to share same signature
|
||||
anatomy (Anatomy)
|
||||
template_name (string): user selected delivery template name
|
||||
anatomy_data (dict): data from repre to fill anatomy with
|
||||
format_dict (dict): root dictionary with names and values
|
||||
report_items (collections.defaultdict): to return error messages
|
||||
log (Logger): for log printing
|
||||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
Args:
|
||||
src_path(str): path of source representation file
|
||||
_repre (dict): full repre, used only in process_sequence, here only
|
||||
as to share same signature
|
||||
anatomy (Anatomy)
|
||||
template_name (string): user selected delivery template name
|
||||
anatomy_data (dict): data from repre to fill anatomy with
|
||||
format_dict (dict): root dictionary with names and values
|
||||
report_items (collections.defaultdict): to return error messages
|
||||
log (Logger): for log printing
|
||||
|
||||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
# Make sure path is valid for all platforms
|
||||
src_path = os.path.normpath(src_path.replace("\\", "/"))
|
||||
|
||||
if not os.path.exists(src_path):
|
||||
msg = "{} doesn't exist for {}".format(src_path, repre["_id"])
|
||||
report_items["Source file was not found"].append(msg)
|
||||
return report_items, 0
|
||||
from openpype.pipeline.delivery import deliver_single_file
|
||||
|
||||
anatomy_filled = anatomy.format(anatomy_data)
|
||||
if format_dict:
|
||||
template_result = anatomy_filled["delivery"][template_name]
|
||||
delivery_path = template_result.rootless.format(**format_dict)
|
||||
else:
|
||||
delivery_path = anatomy_filled["delivery"][template_name]
|
||||
|
||||
# Backwards compatibility when extension contained `.`
|
||||
delivery_path = delivery_path.replace("..", ".")
|
||||
# Make sure path is valid for all platforms
|
||||
delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
|
||||
|
||||
delivery_folder = os.path.dirname(delivery_path)
|
||||
if not os.path.exists(delivery_folder):
|
||||
os.makedirs(delivery_folder)
|
||||
|
||||
log.debug("Copying single: {} -> {}".format(src_path, delivery_path))
|
||||
copy_file(src_path, delivery_path)
|
||||
|
||||
return report_items, 1
|
||||
return deliver_single_file(
|
||||
src_path, repre, anatomy, template_name, anatomy_data, format_dict,
|
||||
report_items, log
|
||||
)
|
||||
|
||||
|
||||
@deprecated("openpype.pipeline.delivery.deliver_sequence")
|
||||
def process_sequence(
|
||||
src_path, repre, anatomy, template_name, anatomy_data, format_dict,
|
||||
report_items, log
|
||||
|
|
@ -223,128 +220,33 @@ def process_sequence(
|
|||
""" For Pype2(mainly - works in 3 too) where representation might not
|
||||
contain files.
|
||||
|
||||
Uses listing physical files (not 'files' on repre as a)might not be
|
||||
present, b)might not be reliable for representation and copying them.
|
||||
Uses listing physical files (not 'files' on repre as a)might not be
|
||||
present, b)might not be reliable for representation and copying them.
|
||||
|
||||
TODO Should be refactored when files are sufficient to drive all
|
||||
representations.
|
||||
TODO Should be refactored when files are sufficient to drive all
|
||||
representations.
|
||||
|
||||
Args:
|
||||
src_path(str): path of source representation file
|
||||
repre (dict): full representation
|
||||
anatomy (Anatomy)
|
||||
template_name (string): user selected delivery template name
|
||||
anatomy_data (dict): data from repre to fill anatomy with
|
||||
format_dict (dict): root dictionary with names and values
|
||||
report_items (collections.defaultdict): to return error messages
|
||||
log (Logger): for log printing
|
||||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
Args:
|
||||
src_path(str): path of source representation file
|
||||
repre (dict): full representation
|
||||
anatomy (Anatomy)
|
||||
template_name (string): user selected delivery template name
|
||||
anatomy_data (dict): data from repre to fill anatomy with
|
||||
format_dict (dict): root dictionary with names and values
|
||||
report_items (collections.defaultdict): to return error messages
|
||||
log (Logger): for log printing
|
||||
|
||||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
|
||||
Deprecated:
|
||||
Function was moved to different location and will be removed
|
||||
after 3.16.* release.
|
||||
"""
|
||||
src_path = os.path.normpath(src_path.replace("\\", "/"))
|
||||
|
||||
def hash_path_exist(myPath):
|
||||
res = myPath.replace('#', '*')
|
||||
glob_search_results = glob.glob(res)
|
||||
if len(glob_search_results) > 0:
|
||||
return True
|
||||
return False
|
||||
from openpype.pipeline.delivery import deliver_sequence
|
||||
|
||||
if not hash_path_exist(src_path):
|
||||
msg = "{} doesn't exist for {}".format(src_path,
|
||||
repre["_id"])
|
||||
report_items["Source file was not found"].append(msg)
|
||||
return report_items, 0
|
||||
|
||||
delivery_templates = anatomy.templates.get("delivery") or {}
|
||||
delivery_template = delivery_templates.get(template_name)
|
||||
if delivery_template is None:
|
||||
msg = (
|
||||
"Delivery template \"{}\" in anatomy of project \"{}\""
|
||||
" was not found"
|
||||
).format(template_name, anatomy.project_name)
|
||||
report_items[""].append(msg)
|
||||
return report_items, 0
|
||||
|
||||
# Check if 'frame' key is available in template which is required
|
||||
# for sequence delivery
|
||||
if "{frame" not in delivery_template:
|
||||
msg = (
|
||||
"Delivery template \"{}\" in anatomy of project \"{}\""
|
||||
"does not contain '{{frame}}' key to fill. Delivery of sequence"
|
||||
" can't be processed."
|
||||
).format(template_name, anatomy.project_name)
|
||||
report_items[""].append(msg)
|
||||
return report_items, 0
|
||||
|
||||
dir_path, file_name = os.path.split(str(src_path))
|
||||
|
||||
context = repre["context"]
|
||||
ext = context.get("ext", context.get("representation"))
|
||||
|
||||
if not ext:
|
||||
msg = "Source extension not found, cannot find collection"
|
||||
report_items[msg].append(src_path)
|
||||
log.warning("{} <{}>".format(msg, context))
|
||||
return report_items, 0
|
||||
|
||||
ext = "." + ext
|
||||
# context.representation could be .psd
|
||||
ext = ext.replace("..", ".")
|
||||
|
||||
src_collections, remainder = clique.assemble(os.listdir(dir_path))
|
||||
src_collection = None
|
||||
for col in src_collections:
|
||||
if col.tail != ext:
|
||||
continue
|
||||
|
||||
src_collection = col
|
||||
break
|
||||
|
||||
if src_collection is None:
|
||||
msg = "Source collection of files was not found"
|
||||
report_items[msg].append(src_path)
|
||||
log.warning("{} <{}>".format(msg, src_path))
|
||||
return report_items, 0
|
||||
|
||||
frame_indicator = "@####@"
|
||||
|
||||
anatomy_data["frame"] = frame_indicator
|
||||
anatomy_filled = anatomy.format(anatomy_data)
|
||||
|
||||
if format_dict:
|
||||
template_result = anatomy_filled["delivery"][template_name]
|
||||
delivery_path = template_result.rootless.format(**format_dict)
|
||||
else:
|
||||
delivery_path = anatomy_filled["delivery"][template_name]
|
||||
|
||||
delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
|
||||
delivery_folder = os.path.dirname(delivery_path)
|
||||
dst_head, dst_tail = delivery_path.split(frame_indicator)
|
||||
dst_padding = src_collection.padding
|
||||
dst_collection = clique.Collection(
|
||||
head=dst_head,
|
||||
tail=dst_tail,
|
||||
padding=dst_padding
|
||||
return deliver_sequence(
|
||||
src_path, repre, anatomy, template_name, anatomy_data, format_dict,
|
||||
report_items, log
|
||||
)
|
||||
|
||||
if not os.path.exists(delivery_folder):
|
||||
os.makedirs(delivery_folder)
|
||||
|
||||
src_head = src_collection.head
|
||||
src_tail = src_collection.tail
|
||||
uploaded = 0
|
||||
for index in src_collection.indexes:
|
||||
src_padding = src_collection.format("{padding}") % index
|
||||
src_file_name = "{}{}{}".format(src_head, src_padding, src_tail)
|
||||
src = os.path.normpath(
|
||||
os.path.join(dir_path, src_file_name)
|
||||
)
|
||||
|
||||
dst_padding = dst_collection.format("{padding}") % index
|
||||
dst = "{}{}{}".format(dst_head, dst_padding, dst_tail)
|
||||
log.debug("Copying single: {} -> {}".format(src, dst))
|
||||
copy_file(src, dst)
|
||||
uploaded += 1
|
||||
|
||||
return report_items, uploaded
|
||||
|
|
|
|||
|
|
@@ -6,6 +6,8 @@ import logging
 import six
 import platform
 
+import clique
+
 from openpype.client import get_project
 from openpype.settings import get_project_settings
 
@@ -14,6 +16,27 @@ from .profiles_filtering import filter_profiles
 log = logging.getLogger(__name__)
 
 
+def format_file_size(file_size, suffix=None):
+    """Returns formatted string with size in appropriate unit.
+
+    Args:
+        file_size (int): Size of file in bytes.
+        suffix (str): Suffix for formatted size. Default is 'B' (as bytes).
+
+    Returns:
+        str: Formatted size using proper unit and passed suffix (e.g. 7 MiB).
+    """
+
+    if suffix is None:
+        suffix = "B"
+
+    for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]:
+        if abs(file_size) < 1024.0:
+            return "%3.1f%s%s" % (file_size, unit, suffix)
+        file_size /= 1024.0
+    return "%.1f%s%s" % (file_size, "Yi", suffix)
+
+
 def create_hard_link(src_path, dst_path):
     """Create hardlink of file.
 
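A quick check of what `format_file_size` returns, assuming it is imported through `openpype.lib` as re-exported earlier in this diff; values use binary (1024-based) units:

```python
from openpype.lib import format_file_size

print(format_file_size(0))            # 0.0B
print(format_file_size(4096))         # 4.0KiB
print(format_file_size(7 * 1024**2))  # 7.0MiB
print(format_file_size(1536, "B/s"))  # 1.5KiB/s
```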
@@ -50,6 +73,43 @@ def create_hard_link(src_path, dst_path):
     )
 
 
+def collect_frames(files):
+    """Returns dict of source path and its frame, if from sequence
+
+    Uses clique as most precise solution, used when anatomy template that
+    created files is not known.
+
+    Assumption is that frames are separated by '.', negative frames are not
+    allowed.
+
+    Args:
+        files(list) or (set with single value): list of source paths
+
+    Returns:
+        (dict): {'/asset/subset_v001.0001.png': '0001', ....}
+    """
+
+    patterns = [clique.PATTERNS["frames"]]
+    collections, remainder = clique.assemble(
+        files, minimum_items=1, patterns=patterns)
+
+    sources_and_frames = {}
+    if collections:
+        for collection in collections:
+            src_head = collection.head
+            src_tail = collection.tail
+
+            for index in collection.indexes:
+                src_frame = collection.format("{padding}") % index
+                src_file_name = "{}{}{}".format(
+                    src_head, src_frame, src_tail)
+                sources_and_frames[src_file_name] = src_frame
+    else:
+        sources_and_frames[remainder.pop()] = None
+
+    return sources_and_frames
+
+
 def _rreplace(s, a, b, n=1):
     """Replace a with b in string s from right side n times."""
     return b.join(s.rsplit(a, n))
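Usage sketch for `collect_frames`, again assuming the `openpype.lib` re-export; frame-numbered files map to their frame string, while a single non-sequence file maps to `None`:

```python
from openpype.lib import collect_frames

sequence = [
    "/asset/subset_v001.0001.png",
    "/asset/subset_v001.0002.png",
]
print(collect_frames(sequence))
# expected roughly: {'/asset/subset_v001.0001.png': '0001',
#                    '/asset/subset_v001.0002.png': '0002'}

print(collect_frames(["/asset/thumbnail.jpg"]))
# expected: {'/asset/thumbnail.jpg': None}
```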
@@ -119,12 +179,12 @@ def get_version_from_path(file):
     """Find version number in file path string.
 
     Args:
-        file (string): file path
+        file (str): file path
 
     Returns:
-        v: version number in string ('001')
-
+        str: version number in string ('001')
     """
+
     pattern = re.compile(r"[\._]v([0-9]+)", re.IGNORECASE)
     try:
         return pattern.findall(file)[-1]
@@ -140,16 +200,17 @@ def get_last_version_from_path(path_dir, filter):
     """Find last version of given directory content.
 
     Args:
-        path_dir (string): directory path
+        path_dir (str): directory path
         filter (list): list of strings used as file name filter
 
     Returns:
-        string: file name with last version
+        str: file name with last version
 
     Example:
        last_version_file = get_last_version_from_path(
           "/project/shots/shot01/work", ["shot01", "compositing", "nk"])
     """
+
    assert os.path.isdir(path_dir), "`path_dir` argument needs to be directory"
    assert isinstance(filter, list) and (
        len(filter) != 0), "`filter` argument needs to be list and not empty"
@ -11,13 +11,8 @@ import functools
|
|||
from openpype.client import get_asset_by_id
|
||||
from openpype.settings import get_project_settings
|
||||
|
||||
from .profiles_filtering import filter_profiles
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
# Subset name template used when plugin does not have defined any
|
||||
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"
|
||||
|
||||
|
||||
class PluginToolsDeprecatedWarning(DeprecationWarning):
|
||||
pass
|
||||
|
|
@@ -64,13 +59,14 @@ def deprecated(new_destination):
     return _decorator(func)
 
 
-class TaskNotSetError(KeyError):
-    def __init__(self, msg=None):
-        if not msg:
-            msg = "Creator's subset name template requires task name."
-        super(TaskNotSetError, self).__init__(msg)
+@deprecated("openpype.pipeline.create.TaskNotSetError")
+def TaskNotSetError(*args, **kwargs):
+    from openpype.pipeline.create import TaskNotSetError
+
+    return TaskNotSetError(*args, **kwargs)
 
 
+@deprecated("openpype.pipeline.create.get_subset_name")
 def get_subset_name_with_asset_doc(
     family,
     variant,
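After this change the old names in `openpype.lib.plugin_tools` are thin shims: calling them emits a `DeprecationWarning` subclass and forwards to `openpype.pipeline.create`. A hedged sketch of what a caller observes:

```python
import warnings

from openpype.lib.plugin_tools import TaskNotSetError

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # The shim builds and returns the exception class instance that now
    # lives in openpype.pipeline.create.
    error = TaskNotSetError()

print(type(error).__name__)  # TaskNotSetError
print(any(issubclass(w.category, DeprecationWarning) for w in caught))  # True
```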
@ -109,61 +105,22 @@ def get_subset_name_with_asset_doc(
|
|||
dbcon (AvalonMongoDB): Mongo connection to be able query asset document
|
||||
if 'asset_doc' is not passed.
|
||||
"""
|
||||
if not family:
|
||||
return ""
|
||||
|
||||
if not host_name:
|
||||
host_name = os.environ["AVALON_APP"]
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
# Use only last part of class family value split by dot (`.`)
|
||||
family = family.rsplit(".", 1)[-1]
|
||||
|
||||
if project_name is None:
|
||||
from openpype.pipeline import legacy_io
|
||||
|
||||
project_name = legacy_io.Session["AVALON_PROJECT"]
|
||||
|
||||
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
|
||||
task_info = asset_tasks.get(task_name) or {}
|
||||
task_type = task_info.get("type")
|
||||
|
||||
# Get settings
|
||||
tools_settings = get_project_settings(project_name)["global"]["tools"]
|
||||
profiles = tools_settings["creator"]["subset_name_profiles"]
|
||||
filtering_criteria = {
|
||||
"families": family,
|
||||
"hosts": host_name,
|
||||
"tasks": task_name,
|
||||
"task_types": task_type
|
||||
}
|
||||
|
||||
matching_profile = filter_profiles(profiles, filtering_criteria)
|
||||
template = None
|
||||
if matching_profile:
|
||||
template = matching_profile["template"]
|
||||
|
||||
# Make sure template is set (matching may have empty string)
|
||||
if not template:
|
||||
template = default_template or DEFAULT_SUBSET_TEMPLATE
|
||||
|
||||
# Simple check of task name existence for template with {task} in
|
||||
# - missing task should be possible only in Standalone publisher
|
||||
if not task_name and "{task" in template.lower():
|
||||
raise TaskNotSetError()
|
||||
|
||||
fill_pairs = {
|
||||
"variant": variant,
|
||||
"family": family,
|
||||
"task": task_name
|
||||
}
|
||||
if dynamic_data:
|
||||
# Dynamic data may override default values
|
||||
for key, value in dynamic_data.items():
|
||||
fill_pairs[key] = value
|
||||
|
||||
return template.format(**prepare_template_data(fill_pairs))
|
||||
return get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
default_template,
|
||||
dynamic_data
|
||||
)
|
||||
|
||||
|
||||
@deprecated
|
||||
def get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
|
|
@ -183,16 +140,18 @@ def get_subset_name(
|
|||
`get_subset_name_with_asset_doc` where asset document is expected.
|
||||
"""
|
||||
|
||||
from openpype.pipeline.create import get_subset_name
|
||||
|
||||
if project_name is None:
|
||||
project_name = dbcon.project_name
|
||||
|
||||
asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"])
|
||||
|
||||
return get_subset_name_with_asset_doc(
|
||||
return get_subset_name(
|
||||
family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc or {},
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
default_template,
|
||||
|
|
@ -254,6 +213,9 @@ def filter_pyblish_plugins(plugins):
|
|||
Args:
|
||||
plugins (dict): Dictionary of plugins produced by :mod:`pyblish-base`
|
||||
`discover()` method.
|
||||
|
||||
Deprecated:
|
||||
Function will be removed after release version 3.15.*
|
||||
"""
|
||||
|
||||
from openpype.pipeline.publish.lib import filter_pyblish_plugins
|
||||
|
|
@ -277,6 +239,9 @@ def set_plugin_attributes_from_settings(
|
|||
Value from environment `AVALON_APP` is used if not entered.
|
||||
project_name (str): Name of project for which settings will be loaded.
|
||||
Value from environment `AVALON_PROJECT` is used if not entered.
|
||||
|
||||
Deprecated:
|
||||
Function will be removed after release version 3.15.*
|
||||
"""
|
||||
|
||||
# Function is not used anymore
|
||||
|
|
@ -373,57 +338,3 @@ def source_hash(filepath, *args):
|
|||
time = str(os.path.getmtime(filepath))
|
||||
size = str(os.path.getsize(filepath))
|
||||
return "|".join([file_name, time, size] + list(args)).replace(".", ",")
|
||||
|
||||
|
||||
def get_unique_layer_name(layers, name):
|
||||
"""
|
||||
Gets all layer names and if 'name' is present in them, increases
|
||||
suffix by 1 (eg. creates unique layer name - for Loader)
|
||||
Args:
|
||||
layers (list): of strings, names only
|
||||
name (string): checked value
|
||||
|
||||
Returns:
|
||||
(string): name_00X (without version)
|
||||
"""
|
||||
names = {}
|
||||
for layer in layers:
|
||||
layer_name = re.sub(r'_\d{3}$', '', layer)
|
||||
if layer_name in names.keys():
|
||||
names[layer_name] = names[layer_name] + 1
|
||||
else:
|
||||
names[layer_name] = 1
|
||||
occurrences = names.get(name, 0)
|
||||
|
||||
return "{}_{:0>3d}".format(name, occurrences + 1)
|
||||
|
||||
|
||||
def get_background_layers(file_url):
|
||||
"""
|
||||
Pulls file name from background json file, enrich with folder url for
|
||||
AE to be able import files.
|
||||
|
||||
Order is important, follows order in json.
|
||||
|
||||
Args:
|
||||
file_url (str): abs url of background json
|
||||
|
||||
Returns:
|
||||
(list): of abs paths to images
|
||||
"""
|
||||
with open(file_url) as json_file:
|
||||
data = json.load(json_file)
|
||||
|
||||
layers = list()
|
||||
bg_folder = os.path.dirname(file_url)
|
||||
for child in data['children']:
|
||||
if child.get("filename"):
|
||||
layers.append(os.path.join(bg_folder, child.get("filename")).
|
||||
replace("\\", "/"))
|
||||
else:
|
||||
for layer in child['children']:
|
||||
if layer.get("filename"):
|
||||
layers.append(os.path.join(bg_folder,
|
||||
layer.get("filename")).
|
||||
replace("\\", "/"))
|
||||
return layers
|
||||
|
|
|
|||
|
|
@@ -2,7 +2,6 @@
 from .base import (
     OpenPypeModule,
     OpenPypeAddOn,
-    OpenPypeInterface,
 
     load_modules,
 
@@ -20,7 +19,6 @@ from .base import (
 __all__ = (
     "OpenPypeModule",
     "OpenPypeAddOn",
-    "OpenPypeInterface",
 
     "load_modules",
 
@ -32,6 +32,14 @@ from openpype.lib import (
|
|||
import_module_from_dirpath
|
||||
)
|
||||
|
||||
from .interfaces import (
|
||||
OpenPypeInterface,
|
||||
IPluginPaths,
|
||||
IHostAddon,
|
||||
ITrayModule,
|
||||
ITrayService
|
||||
)
|
||||
|
||||
# Files that will be always ignored on modules import
|
||||
IGNORED_FILENAMES = (
|
||||
"__pycache__",
|
||||
|
|
@ -389,31 +397,6 @@ def _load_modules():
|
|||
log.error(msg, exc_info=True)
|
||||
|
||||
|
||||
class _OpenPypeInterfaceMeta(ABCMeta):
|
||||
"""OpenPypeInterface meta class to print proper string."""
|
||||
|
||||
def __str__(self):
|
||||
return "<'OpenPypeInterface.{}'>".format(self.__name__)
|
||||
|
||||
def __repr__(self):
|
||||
return str(self)
|
||||
|
||||
|
||||
@six.add_metaclass(_OpenPypeInterfaceMeta)
|
||||
class OpenPypeInterface:
|
||||
"""Base class of Interface that can be used as Mixin with abstract parts.
|
||||
|
||||
This is way how OpenPype module or addon can tell that has implementation
|
||||
for specific part or for other module/addon.
|
||||
|
||||
Child classes of OpenPypeInterface may be used as mixin in different
|
||||
OpenPype modules which means they have to have implemented methods defined
|
||||
in the interface. By default interface does not have any abstract parts.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class OpenPypeModule:
|
||||
"""Base class of pype module.
|
||||
|
|
@ -747,8 +730,6 @@ class ModulesManager:
|
|||
and "actions" each containing list of paths.
|
||||
"""
|
||||
# Output structure
|
||||
from openpype_interfaces import IPluginPaths
|
||||
|
||||
output = {
|
||||
"publish": [],
|
||||
"create": [],
|
||||
|
|
@ -805,8 +786,6 @@ class ModulesManager:
|
|||
list: List of creator plugin paths.
|
||||
"""
|
||||
# Output structure
|
||||
from openpype_interfaces import IPluginPaths
|
||||
|
||||
output = []
|
||||
for module in self.get_enabled_modules():
|
||||
# Skip module that do not inherit from `IPluginPaths`
|
||||
|
|
@ -821,68 +800,6 @@ class ModulesManager:
|
|||
output.extend(paths)
|
||||
return output
|
||||
|
||||
def collect_launch_hook_paths(self, app):
|
||||
"""Helper to collect application launch hooks.
|
||||
|
||||
It used to be based on 'ILaunchHookPaths' which is not true anymore.
|
||||
Module just have to have implemented 'get_launch_hook_paths' method.
|
||||
|
||||
Args:
|
||||
app (Application): Application object which can be used for
|
||||
filtering of which launch hook paths are returned.
|
||||
|
||||
Returns:
|
||||
list: Paths to launch hook directories.
|
||||
"""
|
||||
|
||||
str_type = type("")
|
||||
expected_types = (list, tuple, set)
|
||||
|
||||
output = []
|
||||
for module in self.get_enabled_modules():
|
||||
# Skip module if does not have implemented 'get_launch_hook_paths'
|
||||
func = getattr(module, "get_launch_hook_paths", None)
|
||||
if func is None:
|
||||
continue
|
||||
|
||||
func = module.get_launch_hook_paths
|
||||
if hasattr(inspect, "signature"):
|
||||
sig = inspect.signature(func)
|
||||
expect_args = len(sig.parameters) > 0
|
||||
else:
|
||||
expect_args = len(inspect.getargspec(func)[0]) > 0
|
||||
|
||||
# Pass application argument if method expect it.
|
||||
try:
|
||||
if expect_args:
|
||||
hook_paths = func(app)
|
||||
else:
|
||||
hook_paths = func()
|
||||
except Exception:
|
||||
self.log.warning(
|
||||
"Failed to call 'get_launch_hook_paths'",
|
||||
exc_info=True
|
||||
)
|
||||
continue
|
||||
|
||||
if not hook_paths:
|
||||
continue
|
||||
|
||||
# Convert string to list
|
||||
if isinstance(hook_paths, str_type):
|
||||
hook_paths = [hook_paths]
|
||||
|
||||
# Skip invalid types
|
||||
if not isinstance(hook_paths, expected_types):
|
||||
self.log.warning((
|
||||
"Result of `get_launch_hook_paths`"
|
||||
" has invalid type {}. Expected {}"
|
||||
).format(type(hook_paths), expected_types))
|
||||
continue
|
||||
|
||||
output.extend(hook_paths)
|
||||
return output
|
||||
|
||||
def get_host_module(self, host_name):
|
||||
"""Find host module by host name.
|
||||
|
||||
|
|
@@ -891,15 +808,13 @@ class ModulesManager:
 
         Returns:
             OpenPypeModule: Found host module by name.
-            None: There was not found module inheriting IHostModule which has
+            None: There was not found module inheriting IHostAddon which has
                 host name set to passed 'host_name'.
         """
 
-        from openpype_interfaces import IHostModule
-
         for module in self.get_enabled_modules():
             if (
-                isinstance(module, IHostModule)
+                isinstance(module, IHostAddon)
                 and module.host_name == host_name
             ):
                 return module
@@ -910,15 +825,13 @@ class ModulesManager:
 
         Returns:
             Iterable[str]: All available host names based on enabled modules
-                inheriting 'IHostModule'.
+                inheriting 'IHostAddon'.
         """
 
-        from openpype_interfaces import IHostModule
-
         host_names = {
             module.host_name
             for module in self.get_enabled_modules()
-            if isinstance(module, IHostModule)
+            if isinstance(module, IHostAddon)
         }
         return host_names
 
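A hedged sketch of querying a host addon through the manager, built only on what the hunks above show (`get_host_module` plus the `IHostAddon` check); `"tvpaint"` is just an example host name:

```python
from openpype.modules import ModulesManager
from openpype.modules.interfaces import IHostAddon

manager = ModulesManager()
addon = manager.get_host_module("tvpaint")
if addon is not None and isinstance(addon, IHostAddon):
    # Enabled host addons expose both a module name and a host name.
    print(addon.name, addon.host_name)
```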
@ -1096,8 +1009,6 @@ class TrayModulesManager(ModulesManager):
|
|||
self.tray_menu(tray_menu)
|
||||
|
||||
def get_enabled_tray_modules(self):
|
||||
from openpype_interfaces import ITrayModule
|
||||
|
||||
output = []
|
||||
for module in self.modules:
|
||||
if module.enabled and isinstance(module, ITrayModule):
|
||||
|
|
@ -1173,8 +1084,6 @@ class TrayModulesManager(ModulesManager):
|
|||
self._report["Tray menu"] = report
|
||||
|
||||
def start_modules(self):
|
||||
from openpype_interfaces import ITrayService
|
||||
|
||||
report = {}
|
||||
time_start = time.time()
|
||||
prev_start_time = time_start
|
||||
|
|
|
|||
|
|
@@ -3,8 +3,10 @@ import attr
 import getpass
 import pyblish.api
 
-from openpype.lib import env_value_to_bool
-from openpype.lib.delivery import collect_frames
+from openpype.lib import (
+    env_value_to_bool,
+    collect_frames,
+)
 from openpype.pipeline import legacy_io
 from openpype_modules.deadline import abstract_submit_deadline
 from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
@@ -114,6 +114,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
             instance.data["deadlineSubmissionJob"] = resp.json()
             instance.data["publishJobState"] = "Suspended"
 
+            # add to list of job Id
+            if not instance.data.get("bakingSubmissionJobs"):
+                instance.data["bakingSubmissionJobs"] = []
+
+            instance.data["bakingSubmissionJobs"].append(
+                resp.json()["_id"])
+
             # redefinition of families
             if "render.farm" in families:
                 instance.data['family'] = 'write'
@@ -296,6 +296,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
             for assembly_id in instance.data.get("assemblySubmissionJobs"):
                 payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501
                 job_index += 1
+        elif instance.data.get("bakingSubmissionJobs"):
+            self.log.info("Adding baking submission jobs as dependencies...")
+            job_index = 0
+            for assembly_id in instance.data["bakingSubmissionJobs"]:
+                payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501
+                job_index += 1
         else:
             payload["JobInfo"]["JobDependency0"] = job["_id"]
 
@@ -3,7 +3,7 @@ import requests
 
 import pyblish.api
 
-from openpype.lib.delivery import collect_frames
+from openpype.lib import collect_frames
 from openpype_modules.deadline.abstract_submit_deadline import requests_get
 
 
@ -11,7 +11,11 @@ from openpype.client import (
|
|||
get_versions,
|
||||
get_representations
|
||||
)
|
||||
from openpype.lib import StringTemplate, TemplateUnsolved
|
||||
from openpype.lib import (
|
||||
StringTemplate,
|
||||
TemplateUnsolved,
|
||||
format_file_size,
|
||||
)
|
||||
from openpype.pipeline import AvalonMongoDB, Anatomy
|
||||
from openpype_modules.ftrack.lib import BaseAction, statics_icon
|
||||
|
||||
|
|
@ -134,13 +138,6 @@ class DeleteOldVersions(BaseAction):
|
|||
"title": self.inteface_title
|
||||
}
|
||||
|
||||
def sizeof_fmt(self, num, suffix='B'):
|
||||
for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
|
||||
if abs(num) < 1024.0:
|
||||
return "%3.1f%s%s" % (num, unit, suffix)
|
||||
num /= 1024.0
|
||||
return "%.1f%s%s" % (num, 'Yi', suffix)
|
||||
|
||||
def launch(self, session, entities, event):
|
||||
values = event["data"].get("values")
|
||||
if not values:
|
||||
|
|
@ -359,7 +356,7 @@ class DeleteOldVersions(BaseAction):
|
|||
dir_paths, file_paths_by_dir, delete=False
|
||||
)
|
||||
|
||||
msg = "Total size of files: " + self.sizeof_fmt(size)
|
||||
msg = "Total size of files: {}".format(format_file_size(size))
|
||||
|
||||
self.log.warning(msg)
|
||||
|
||||
|
|
@ -430,7 +427,7 @@ class DeleteOldVersions(BaseAction):
|
|||
"message": msg
|
||||
}
|
||||
|
||||
msg = "Total size of files deleted: " + self.sizeof_fmt(size)
|
||||
msg = "Total size of files deleted: {}".format(format_file_size(size))
|
||||
|
||||
self.log.warning(msg)
|
||||
|
||||
|
|
|
|||
|
|
@ -10,19 +10,19 @@ from openpype.client import (
|
|||
get_versions,
|
||||
get_representations
|
||||
)
|
||||
from openpype.pipeline import Anatomy
|
||||
from openpype_modules.ftrack.lib import BaseAction, statics_icon
|
||||
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
|
||||
from openpype_modules.ftrack.lib.custom_attributes import (
|
||||
query_custom_attributes
|
||||
)
|
||||
from openpype.lib.dateutils import get_datetime_data
|
||||
from openpype.lib.delivery import (
|
||||
path_from_representation,
|
||||
from openpype.pipeline import Anatomy
|
||||
from openpype.pipeline.load import get_representation_path_with_anatomy
|
||||
from openpype.pipeline.delivery import (
|
||||
get_format_dict,
|
||||
check_destination_path,
|
||||
process_single_file,
|
||||
process_sequence
|
||||
deliver_single_file,
|
||||
deliver_sequence,
|
||||
)
|
||||
|
||||
|
||||
|
|
@ -580,7 +580,7 @@ class Delivery(BaseAction):
|
|||
if frame:
|
||||
repre["context"]["frame"] = len(str(frame)) * "#"
|
||||
|
||||
repre_path = path_from_representation(repre, anatomy)
|
||||
repre_path = get_representation_path_with_anatomy(repre, anatomy)
|
||||
# TODO add backup solution where root of path from component
|
||||
# is replaced with root
|
||||
args = (
|
||||
|
|
@ -594,9 +594,9 @@ class Delivery(BaseAction):
|
|||
self.log
|
||||
)
|
||||
if not frame:
|
||||
process_single_file(*args)
|
||||
deliver_single_file(*args)
|
||||
else:
|
||||
process_sequence(*args)
|
||||
deliver_sequence(*args)
|
||||
|
||||
return self.report(report_items)
|
||||
|
||||
|
|
|
|||
|
|
@ -9,7 +9,6 @@ from openpype.modules import OpenPypeModule
|
|||
from openpype_interfaces import (
|
||||
ITrayModule,
|
||||
IPluginPaths,
|
||||
ILaunchHookPaths,
|
||||
ISettingsChangeListener
|
||||
)
|
||||
from openpype.settings import SaveWarningExc
|
||||
|
|
@ -21,7 +20,6 @@ class FtrackModule(
|
|||
OpenPypeModule,
|
||||
ITrayModule,
|
||||
IPluginPaths,
|
||||
ILaunchHookPaths,
|
||||
ISettingsChangeListener
|
||||
):
|
||||
name = "ftrack"
|
||||
|
|
@ -85,7 +83,8 @@ class FtrackModule(
|
|||
}
|
||||
|
||||
def get_launch_hook_paths(self):
|
||||
"""Implementation of `ILaunchHookPaths`."""
|
||||
"""Implementation for applications launch hooks."""
|
||||
|
||||
return os.path.join(FTRACK_MODULE_DIR, "launch_hooks")
|
||||
|
||||
def modify_application_launch_arguments(self, application, env):
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ Provides:
|
|||
import pyblish.api
|
||||
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.lib.plugin_tools import filter_profiles
|
||||
from openpype.lib import filter_profiles
|
||||
|
||||
|
||||
class CollectFtrackFamily(pyblish.api.InstancePlugin):
|
||||
|
|
|
|||
|
|
@@ -1,8 +1,33 @@
-from abc import abstractmethod, abstractproperty
+from abc import ABCMeta, abstractmethod, abstractproperty
 
+import six
+
 from openpype import resources
 
-from openpype.modules import OpenPypeInterface
+
+class _OpenPypeInterfaceMeta(ABCMeta):
+    """OpenPypeInterface meta class to print proper string."""
+
+    def __str__(self):
+        return "<'OpenPypeInterface.{}'>".format(self.__name__)
+
+    def __repr__(self):
+        return str(self)
+
+
+@six.add_metaclass(_OpenPypeInterfaceMeta)
+class OpenPypeInterface:
+    """Base class of Interface that can be used as Mixin with abstract parts.
+
+    This is way how OpenPype module or addon can tell OpenPype that contain
+    implementation for specific functionality.
+
+    Child classes of OpenPypeInterface may be used as mixin in different
+    OpenPype modules which means they have to have implemented methods defined
+    in the interface. By default interface does not have any abstract parts.
+    """
+
+    pass
 
 
 class IPluginPaths(OpenPypeInterface):
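With `OpenPypeInterface` now defined in `interfaces.py`, a new interface is just a subclass with whatever abstract parts it needs; the metaclass keeps the readable class string. The interface below is hypothetical, shown only to illustrate the pattern:

```python
from abc import abstractmethod

from openpype.modules.interfaces import OpenPypeInterface


class IExampleInfo(OpenPypeInterface):
    """Hypothetical mixin an addon could implement."""

    @abstractmethod
    def get_example_info(self):
        pass


print(IExampleInfo)  # <'OpenPypeInterface.IExampleInfo'>
```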
@@ -56,6 +81,13 @@ class ILaunchHookPaths(OpenPypeInterface):
 
     Expected result is list of paths.
     ["path/to/launch_hooks_dir"]
+
+    Deprecated:
+        This interface is not needed since OpenPype 3.14.*. Addon just have to
+        implement 'get_launch_hook_paths' which can expect Application object
+        or nothing as argument.
+
+        Interface class will be removed after 3.16.*.
     """
 
     @abstractmethod
@@ -353,8 +385,8 @@ class ISettingsChangeListener(OpenPypeInterface):
     pass
 
 
-class IHostModule(OpenPypeInterface):
-    """Module which also contain a host implementation."""
+class IHostAddon(OpenPypeInterface):
+    """Addon which also contain a host implementation."""
 
     @abstractproperty
     def host_name(self):
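A minimal sketch of an addon using the renamed `IHostAddon` interface, mirroring the TVPaint, Unreal and Webpublisher addons earlier in this diff; the `examplehost` name is made up:

```python
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon


class ExampleHostAddon(OpenPypeModule, IHostAddon):
    name = "examplehost"
    host_name = "examplehost"

    def initialize(self, module_settings):
        # Kept unconditionally enabled for the sake of the sketch.
        self.enabled = True
```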
@ -3,7 +3,6 @@ import os
|
|||
from openpype_interfaces import (
|
||||
ITrayModule,
|
||||
IPluginPaths,
|
||||
ILaunchHookPaths,
|
||||
)
|
||||
|
||||
from openpype.modules import OpenPypeModule
|
||||
|
|
@ -11,9 +10,7 @@ from openpype.modules import OpenPypeModule
|
|||
SHOTGRID_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class ShotgridModule(
|
||||
OpenPypeModule, ITrayModule, IPluginPaths, ILaunchHookPaths
|
||||
):
|
||||
class ShotgridModule(OpenPypeModule, ITrayModule, IPluginPaths):
|
||||
leecher_manager_url = None
|
||||
name = "shotgrid"
|
||||
enabled = False
|
||||
|
|
|
|||
|
|
@ -1,14 +1,11 @@
|
|||
import os
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype_interfaces import (
|
||||
IPluginPaths,
|
||||
ILaunchHookPaths
|
||||
)
|
||||
from openpype.modules.interfaces import IPluginPaths
|
||||
|
||||
SLACK_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
|
||||
|
||||
|
||||
class SlackIntegrationModule(OpenPypeModule, IPluginPaths, ILaunchHookPaths):
|
||||
class SlackIntegrationModule(OpenPypeModule, IPluginPaths):
|
||||
"""Allows sending notification to Slack channels during publishing."""
|
||||
|
||||
name = "slack"
|
||||
|
|
@ -18,7 +15,8 @@ class SlackIntegrationModule(OpenPypeModule, IPluginPaths, ILaunchHookPaths):
|
|||
self.enabled = slack_settings["enabled"]
|
||||
|
||||
def get_launch_hook_paths(self):
|
||||
"""Implementation of `ILaunchHookPaths`."""
|
||||
"""Implementation for applications launch hooks."""
|
||||
|
||||
return os.path.join(SLACK_MODULE_DIR, "launch_hooks")
|
||||
|
||||
def get_plugin_paths(self):
|
||||
|
|
|
|||
|
|
@@ -10,6 +10,8 @@ class AbstractProvider:
     CODE = ''
     LABEL = ''
 
+    _log = None
+
     def __init__(self, project_name, site_name, tree=None, presets=None):
         self.presets = None
         self.active = False
@@ -19,6 +21,12 @@ class AbstractProvider:
 
         super(AbstractProvider, self).__init__()
 
+    @property
+    def log(self):
+        if self._log is None:
+            self._log = Logger.get_logger(self.__class__.__name__)
+        return self._log
+
     @abc.abstractmethod
     def is_active(self):
         """
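The lazy `log` property added to `AbstractProvider` is a small reusable pattern: the logger is created on first access and cached on the instance. A sketch outside of the provider classes, assuming only `openpype.lib.Logger` (the import path the GDrive hunk below switches to):

```python
from openpype.lib import Logger


class LazyLogExample:
    _log = None

    @property
    def log(self):
        if self._log is None:
            self._log = Logger.get_logger(self.__class__.__name__)
        return self._log


LazyLogExample().log.info("provider ready")
```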
@ -199,11 +207,11 @@ class AbstractProvider:
|
|||
path = anatomy.fill_root(path)
|
||||
except KeyError:
|
||||
msg = "Error in resolving local root from anatomy"
|
||||
log.error(msg)
|
||||
self.log.error(msg)
|
||||
raise ValueError(msg)
|
||||
except IndexError:
|
||||
msg = "Path {} contains unfillable placeholder"
|
||||
log.error(msg)
|
||||
self.log.error(msg)
|
||||
raise ValueError(msg)
|
||||
|
||||
return path
|
||||
|
|
|
|||
|
|
@@ -2,12 +2,9 @@ import os

import dropbox

from openpype.api import Logger
from .abstract_provider import AbstractProvider
from ..utils import EditableScopes

log = Logger().get_logger("SyncServer")


class DropboxHandler(AbstractProvider):
    CODE = 'dropbox'
@@ -20,26 +17,26 @@ class DropboxHandler(AbstractProvider):
        self.dbx = None

        if not self.presets:
            log.info(
            self.log.info(
                "Sync Server: There are no presets for {}.".format(site_name)
            )
            return

        if not self.presets["enabled"]:
            log.debug("Sync Server: Site {} not enabled for {}.".
            self.log.debug("Sync Server: Site {} not enabled for {}.".
                      format(site_name, project_name))
            return

        token = self.presets.get("token", "")
        if not token:
            msg = "Sync Server: No access token for dropbox provider"
            log.info(msg)
            self.log.info(msg)
            return

        team_folder_name = self.presets.get("team_folder_name", "")
        if not team_folder_name:
            msg = "Sync Server: No team folder name for dropbox provider"
            log.info(msg)
            self.log.info(msg)
            return

        acting_as_member = self.presets.get("acting_as_member", "")
@@ -47,7 +44,7 @@ class DropboxHandler(AbstractProvider):
            msg = (
                "Sync Server: No acting member for dropbox provider"
            )
            log.info(msg)
            self.log.info(msg)
            return

        try:
@@ -55,7 +52,7 @@ class DropboxHandler(AbstractProvider):
                token, acting_as_member, team_folder_name
            )
        except Exception as e:
            log.info("Could not establish dropbox object: {}".format(e))
            self.log.info("Could not establish dropbox object: {}".format(e))
            return

        super(AbstractProvider, self).__init__()
@@ -448,7 +445,7 @@ class DropboxHandler(AbstractProvider):
            path = anatomy.fill_root(path)
        except KeyError:
            msg = "Error in resolving local root from anatomy"
            log.error(msg)
            self.log.error(msg)
            raise ValueError(msg)

        return path
@@ -5,12 +5,12 @@ import sys
import six
import platform

from openpype.api import Logger
from openpype.api import get_system_settings
from openpype.lib import Logger
from openpype.settings import get_system_settings
from .abstract_provider import AbstractProvider
from ..utils import time_function, ResumableError

log = Logger().get_logger("SyncServer")
log = Logger.get_logger("GDriveHandler")

try:
    from googleapiclient.discovery import build
@@ -69,13 +69,17 @@ class GDriveHandler(AbstractProvider):

        self.presets = presets
        if not self.presets:
            log.info("Sync Server: There are no presets for {}.".
                     format(site_name))
            self.log.info(
                "Sync Server: There are no presets for {}.".format(site_name)
            )
            return

        if not self.presets["enabled"]:
            log.debug("Sync Server: Site {} not enabled for {}.".
                      format(site_name, project_name))
            self.log.debug(
                "Sync Server: Site {} not enabled for {}.".format(
                    site_name, project_name
                )
            )
            return

        current_platform = platform.system().lower()
@@ -85,20 +89,22 @@ class GDriveHandler(AbstractProvider):
        if not cred_path:
            msg = "Sync Server: Please, fill the credentials for gdrive "\
                  "provider for platform '{}' !".format(current_platform)
            log.info(msg)
            self.log.info(msg)
            return

        try:
            cred_path = cred_path.format(**os.environ)
        except KeyError as e:
            log.info("Sync Server: The key(s) {} does not exist in the "
                     "environment variables".format(" ".join(e.args)))
            self.log.info((
                "Sync Server: The key(s) {} does not exist in the "
                "environment variables"
            ).format(" ".join(e.args)))
            return

        if not os.path.exists(cred_path):
            msg = "Sync Server: No credentials for gdrive provider " + \
                  "for '{}' on path '{}'!".format(site_name, cred_path)
            log.info(msg)
            self.log.info(msg)
            return

        self.service = None
@@ -318,7 +324,7 @@ class GDriveHandler(AbstractProvider):
                                  fields='id')

        media.stream()
        log.debug("Start Upload! {}".format(source_path))
        self.log.debug("Start Upload! {}".format(source_path))
        last_tick = status = response = None
        status_val = 0
        while response is None:
@@ -331,7 +337,7 @@ class GDriveHandler(AbstractProvider):
                if not last_tick or \
                        time.time() - last_tick >= server.LOG_PROGRESS_SEC:
                    last_tick = time.time()
                    log.debug("Uploaded %d%%." %
                    self.log.debug("Uploaded %d%%." %
                              int(status_val * 100))
                    server.update_db(project_name=project_name,
                                     new_file_id=None,
@@ -350,8 +356,9 @@ class GDriveHandler(AbstractProvider):
                if 'has not granted' in ex._get_reason().strip():
                    raise PermissionError(ex._get_reason().strip())

                log.warning("Forbidden received, hit quota. "
                            "Injecting 60s delay.")
                self.log.warning(
                    "Forbidden received, hit quota. Injecting 60s delay."
                )
                time.sleep(60)
                return False
            raise
@@ -417,7 +424,7 @@ class GDriveHandler(AbstractProvider):
            if not last_tick or \
                    time.time() - last_tick >= server.LOG_PROGRESS_SEC:
                last_tick = time.time()
                log.debug("Downloaded %d%%." %
                self.log.debug("Downloaded %d%%." %
                          int(status_val * 100))
                server.update_db(project_name=project_name,
                                 new_file_id=None,
@@ -629,9 +636,9 @@ class GDriveHandler(AbstractProvider):
                ["gdrive"]
            )
        except KeyError:
            log.info(("Sync Server: There are no presets for Gdrive " +
                      "provider.").
                     format(str(provider_presets)))
            log.info((
                "Sync Server: There are no presets for Gdrive provider."
            ).format(str(provider_presets)))
            return
        return provider_presets

@@ -704,7 +711,7 @@ class GDriveHandler(AbstractProvider):
            roots[self.MY_DRIVE_STR] = self.service.files() \
                .get(fileId='root').execute()
        except errors.HttpError:
            log.warning("HttpError in sync loop, "
            self.log.warning("HttpError in sync loop, "
                             "trying next loop",
                             exc_info=True)
            raise ResumableError
@@ -727,7 +734,7 @@ class GDriveHandler(AbstractProvider):
        Returns:
            (dictionary) path as a key, folder id as a value
        """
        log.debug("build_tree len {}".format(len(folders)))
        self.log.debug("build_tree len {}".format(len(folders)))
        if not self.root:  # build only when necessary, could be expensive
            self.root = self._prepare_root_info()

@@ -779,9 +786,9 @@ class GDriveHandler(AbstractProvider):
            loop_cnt += 1

        if len(no_parents_yet) > 0:
            log.debug("Some folders path are not resolved {}".
            self.log.debug("Some folders path are not resolved {}".
                      format(no_parents_yet))
            log.debug("Remove deleted folders from trash.")
            self.log.debug("Remove deleted folders from trash.")

        return tree
@@ -4,10 +4,10 @@ import time
import threading
import platform

from openpype.api import Logger
from openpype.api import get_system_settings
from openpype.lib import Logger
from openpype.settings import get_system_settings
from .abstract_provider import AbstractProvider
log = Logger().get_logger("SyncServer")
log = Logger.get_logger("SyncServer-SFTPHandler")

pysftp = None
try:
@@ -43,8 +43,9 @@ class SFTPHandler(AbstractProvider):

        self.presets = presets
        if not self.presets:
            log.warning("Sync Server: There are no presets for {}.".
                        format(site_name))
            self.log.warning(
                "Sync Server: There are no presets for {}.".format(site_name)
            )
            return

        # store to instance for reconnect
@@ -423,7 +424,7 @@ class SFTPHandler(AbstractProvider):
            return pysftp.Connection(**conn_params)
        except (paramiko.ssh_exception.SSHException,
                pysftp.exceptions.ConnectionException):
            log.warning("Couldn't connect", exc_info=True)
            self.log.warning("Couldn't connect", exc_info=True)

    def _mark_progress(self, project_name, file, representation, server, site,
                       source_path, target_path, direction):
@@ -445,7 +446,7 @@ class SFTPHandler(AbstractProvider):
                    time.time() - last_tick >= server.LOG_PROGRESS_SEC:
                status_val = target_file_size / source_file_size
                last_tick = time.time()
                log.debug(direction + "ed %d%%." % int(status_val * 100))
                self.log.debug(direction + "ed %d%%." % int(status_val * 100))
                server.update_db(project_name=project_name,
                                 new_file_id=None,
                                 file=file,
@@ -10,7 +10,7 @@ class TimersManagerModuleRestApi:
    happens in Workfile app.
    """
    def __init__(self, user_module, server_manager):
        self.log = None
        self._log = None
        self.module = user_module
        self.server_manager = server_manager

@@ -6,7 +6,6 @@ from openpype.client import get_asset_by_name
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
    ITrayService,
    ILaunchHookPaths,
    IPluginPaths
)
from openpype.lib.events import register_event_callback
@@ -79,7 +78,6 @@ class ExampleTimersManagerConnector:
class TimersManager(
    OpenPypeModule,
    ITrayService,
    ILaunchHookPaths,
    IPluginPaths
):
    """ Handles about Timers.
@@ -185,12 +183,11 @@ class TimersManager(
        )

    def get_launch_hook_paths(self):
        """Implementation of `ILaunchHookPaths`."""
        """Implementation for applications launch hooks."""

        return os.path.join(
            TIMER_MODULE_DIR,
            "launch_hooks"
        )
        return [
            os.path.join(TIMER_MODULE_DIR, "launch_hooks")
        ]

    def get_plugin_paths(self):
        """Implementation of `IPluginPaths`."""
@@ -53,9 +53,12 @@ class WebServerModule(OpenPypeModule, ITrayService):
            try:
                module.webserver_initialization(self.server_manager)
            except Exception:
                self.log.warning((
                    "Failed to connect module \"{}\" to webserver."
                ).format(module.name))
                self.log.warning(
                    (
                        "Failed to connect module \"{}\" to webserver."
                    ).format(module.name),
                    exc_info=True
                )

    def tray_init(self):
        self.create_server_manager()
@@ -1,6 +1,13 @@
from .constants import (
    SUBSET_NAME_ALLOWED_SYMBOLS
    SUBSET_NAME_ALLOWED_SYMBOLS,
    DEFAULT_SUBSET_TEMPLATE,
)

from .subset_name import (
    TaskNotSetError,
    get_subset_name,
)

from .creator_plugins import (
    CreatorError,

@@ -32,6 +39,10 @@ from .legacy_create import (

__all__ = (
    "SUBSET_NAME_ALLOWED_SYMBOLS",
    "DEFAULT_SUBSET_TEMPLATE",

    "TaskNotSetError",
    "get_subset_name",

    "CreatorError",

@@ -1,6 +1,8 @@
SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_."
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"


__all__ = (
    "SUBSET_NAME_ALLOWED_SYMBOLS",
    "DEFAULT_SUBSET_TEMPLATE",
)
@@ -9,7 +9,7 @@ from abc import (
import six

from openpype.settings import get_system_settings, get_project_settings
from openpype.lib import get_subset_name_with_asset_doc
from .subset_name import get_subset_name
from openpype.pipeline.plugin_discover import (
    discover,
    register_plugin,
@@ -75,6 +75,7 @@ class BaseCreator:
    ):
        # Reference to CreateContext
        self.create_context = create_context
        self.project_settings = project_settings

        # Creator is running in headless mode (without UI elemets)
        # - we may use UI inside processing this attribute should be checked
@@ -276,14 +277,15 @@ class BaseCreator:
            variant, task_name, asset_doc, project_name, host_name
        )

        return get_subset_name_with_asset_doc(
        return get_subset_name(
            self.family,
            variant,
            task_name,
            asset_doc,
            project_name,
            host_name,
            dynamic_data=dynamic_data
            dynamic_data=dynamic_data,
            project_settings=self.project_settings
        )

    def get_instance_attr_defs(self):
openpype/pipeline/create/subset_name.py (new file, 109 lines)
@@ -0,0 +1,109 @@
import os

from openpype.settings import get_project_settings
from openpype.lib import filter_profiles, prepare_template_data
from openpype.pipeline import legacy_io

from .constants import DEFAULT_SUBSET_TEMPLATE


class TaskNotSetError(KeyError):
    def __init__(self, msg=None):
        if not msg:
            msg = "Creator's subset name template requires task name."
        super(TaskNotSetError, self).__init__(msg)


def get_subset_name(
    family,
    variant,
    task_name,
    asset_doc,
    project_name=None,
    host_name=None,
    default_template=None,
    dynamic_data=None,
    project_settings=None
):
    """Calculate subset name based on passed context and OpenPype settings.

    Subst name templates are defined in `project_settings/global/tools/creator
    /subset_name_profiles` where are profiles with host name, family, task name
    and task type filters. If context does not match any profile then
    `DEFAULT_SUBSET_TEMPLATE` is used as default template.

    That's main reason why so many arguments are required to calculate subset
    name.

    Args:
        family (str): Instance family.
        variant (str): In most of cases it is user input during creation.
        task_name (str): Task name on which context is instance created.
        asset_doc (dict): Queried asset document with it's tasks in data.
            Used to get task type.
        project_name (str): Name of project on which is instance created.
            Important for project settings that are loaded.
        host_name (str): One of filtering criteria for template profile
            filters.
        default_template (str): Default template if any profile does not match
            passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used if
            is not passed.
        dynamic_data (dict): Dynamic data specific for a creator which creates
            instance.
        dbcon (AvalonMongoDB): Mongo connection to be able query asset document
            if 'asset_doc' is not passed.
    """

    if not family:
        return ""

    if not host_name:
        host_name = os.environ["AVALON_APP"]

    # Use only last part of class family value split by dot (`.`)
    family = family.rsplit(".", 1)[-1]

    if project_name is None:
        project_name = legacy_io.Session["AVALON_PROJECT"]

    asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
    task_info = asset_tasks.get(task_name) or {}
    task_type = task_info.get("type")

    # Get settings
    if not project_settings:
        project_settings = get_project_settings(project_name)
    tools_settings = project_settings["global"]["tools"]
    profiles = tools_settings["creator"]["subset_name_profiles"]
    filtering_criteria = {
        "families": family,
        "hosts": host_name,
        "tasks": task_name,
        "task_types": task_type
    }

    matching_profile = filter_profiles(profiles, filtering_criteria)
    template = None
    if matching_profile:
        template = matching_profile["template"]

    # Make sure template is set (matching may have empty string)
    if not template:
        template = default_template or DEFAULT_SUBSET_TEMPLATE

    # Simple check of task name existence for template with {task} in
    # - missing task should be possible only in Standalone publisher
    if not task_name and "{task" in template.lower():
        raise TaskNotSetError()

    fill_pairs = {
        "variant": variant,
        "family": family,
        "task": task_name
    }
    if dynamic_data:
        # Dynamic data may override default values
        for key, value in dynamic_data.items():
            fill_pairs[key] = value

    return template.format(**prepare_template_data(fill_pairs))
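A hedged usage sketch for the new `get_subset_name` helper; the asset document, project name and settings below are placeholders so the call does not need a database connection:

```python
from openpype.pipeline.create import get_subset_name

asset_doc = {
    "name": "sh010",
    "data": {"tasks": {"modeling": {"type": "Modeling"}}},
}
# Empty profiles fall back to DEFAULT_SUBSET_TEMPLATE ("{family}{Variant}").
project_settings = {
    "global": {"tools": {"creator": {"subset_name_profiles": []}}}
}

subset_name = get_subset_name(
    family="model",
    variant="Main",
    task_name="modeling",
    asset_doc=asset_doc,
    project_name="demo_project",
    host_name="maya",
    project_settings=project_settings,
)
print(subset_name)  # expected: "modelMain"
```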
openpype/pipeline/delivery.py (new file, 310 lines)
@@ -0,0 +1,310 @@
"""Functions useful for delivery of published representations."""
import os
import shutil
import glob
import clique
import collections

from openpype.lib import create_hard_link


def _copy_file(src_path, dst_path):
    """Hardlink file if possible(to save space), copy if not.

    Because of using hardlinks should not be function used in other parts
    of pipeline.
    """

    if os.path.exists(dst_path):
        return
    try:
        create_hard_link(
            src_path,
            dst_path
        )
    except OSError:
        shutil.copyfile(src_path, dst_path)


def get_format_dict(anatomy, location_path):
    """Returns replaced root values from user provider value.

    Args:
        anatomy (Anatomy): Project anatomy.
        location_path (str): User provided value.

    Returns:
        (dict): Prepared data for formatting of a template.
    """

    format_dict = {}
    if not location_path:
        return format_dict

    location_path = location_path.replace("\\", "/")
    root_names = anatomy.root_names_from_templates(
        anatomy.templates["delivery"]
    )
    format_dict["root"] = {}
    for name in root_names:
        format_dict["root"][name] = location_path
    return format_dict


def check_destination_path(
    repre_id,
    anatomy,
    anatomy_data,
    datetime_data,
    template_name
):
    """ Try to create destination path based on 'template_name'.

    In the case that path cannot be filled, template contains unmatched
    keys, provide error message to filter out repre later.

    Args:
        repre_id (str): Representation id.
        anatomy (Anatomy): Project anatomy.
        anatomy_data (dict): Template data to fill anatomy templates.
        datetime_data (dict): Values with actual date.
        template_name (str): Name of template which should be used from anatomy
            templates.
    Returns:
        Dict[str, List[str]]: Report of happened errors. Key is message title
            value is detailed information.
    """

    anatomy_data.update(datetime_data)
    anatomy_filled = anatomy.format_all(anatomy_data)
    dest_path = anatomy_filled["delivery"][template_name]
    report_items = collections.defaultdict(list)

    if not dest_path.solved:
        msg = (
            "Missing keys in Representation's context"
            " for anatomy template \"{}\"."
        ).format(template_name)

        sub_msg = (
            "Representation: {}<br>"
        ).format(repre_id)

        if dest_path.missing_keys:
            keys = ", ".join(dest_path.missing_keys)
            sub_msg += (
                "- Missing keys: \"{}\"<br>"
            ).format(keys)

        if dest_path.invalid_types:
            items = []
            for key, value in dest_path.invalid_types.items():
                items.append("\"{}\" {}".format(key, str(value)))

            keys = ", ".join(items)
            sub_msg += (
                "- Invalid value DataType: \"{}\"<br>"
            ).format(keys)

        report_items[msg].append(sub_msg)

    return report_items


def deliver_single_file(
    src_path,
    repre,
    anatomy,
    template_name,
    anatomy_data,
    format_dict,
    report_items,
    log
):
    """Copy single file to calculated path based on template

    Args:
        src_path(str): path of source representation file
        repre (dict): full repre, used only in deliver_sequence, here only
            as to share same signature
        anatomy (Anatomy)
        template_name (string): user selected delivery template name
        anatomy_data (dict): data from repre to fill anatomy with
        format_dict (dict): root dictionary with names and values
        report_items (collections.defaultdict): to return error messages
        log (logging.Logger): for log printing

    Returns:
        (collections.defaultdict, int)
    """

    # Make sure path is valid for all platforms
    src_path = os.path.normpath(src_path.replace("\\", "/"))

    if not os.path.exists(src_path):
        msg = "{} doesn't exist for {}".format(src_path, repre["_id"])
        report_items["Source file was not found"].append(msg)
        return report_items, 0

    anatomy_filled = anatomy.format(anatomy_data)
    if format_dict:
        template_result = anatomy_filled["delivery"][template_name]
        delivery_path = template_result.rootless.format(**format_dict)
    else:
        delivery_path = anatomy_filled["delivery"][template_name]

    # Backwards compatibility when extension contained `.`
    delivery_path = delivery_path.replace("..", ".")
    # Make sure path is valid for all platforms
    delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))

    delivery_folder = os.path.dirname(delivery_path)
    if not os.path.exists(delivery_folder):
        os.makedirs(delivery_folder)

    log.debug("Copying single: {} -> {}".format(src_path, delivery_path))
    _copy_file(src_path, delivery_path)

    return report_items, 1


def deliver_sequence(
    src_path,
    repre,
    anatomy,
    template_name,
    anatomy_data,
    format_dict,
    report_items,
    log
):
    """ For Pype2(mainly - works in 3 too) where representation might not
    contain files.

    Uses listing physical files (not 'files' on repre as a)might not be
    present, b)might not be reliable for representation and copying them.

    TODO Should be refactored when files are sufficient to drive all
    representations.

    Args:
        src_path(str): path of source representation file
        repre (dict): full representation
        anatomy (Anatomy)
        template_name (string): user selected delivery template name
        anatomy_data (dict): data from repre to fill anatomy with
        format_dict (dict): root dictionary with names and values
        report_items (collections.defaultdict): to return error messages
        log (logging.Logger): for log printing

    Returns:
        (collections.defaultdict, int)
    """

    src_path = os.path.normpath(src_path.replace("\\", "/"))

    def hash_path_exist(myPath):
        res = myPath.replace('#', '*')
        glob_search_results = glob.glob(res)
        if len(glob_search_results) > 0:
            return True
        return False

    if not hash_path_exist(src_path):
        msg = "{} doesn't exist for {}".format(
            src_path, repre["_id"])
        report_items["Source file was not found"].append(msg)
        return report_items, 0

    delivery_templates = anatomy.templates.get("delivery") or {}
    delivery_template = delivery_templates.get(template_name)
    if delivery_template is None:
        msg = (
            "Delivery template \"{}\" in anatomy of project \"{}\""
            " was not found"
        ).format(template_name, anatomy.project_name)
        report_items[""].append(msg)
        return report_items, 0

    # Check if 'frame' key is available in template which is required
    # for sequence delivery
    if "{frame" not in delivery_template:
        msg = (
            "Delivery template \"{}\" in anatomy of project \"{}\""
            "does not contain '{{frame}}' key to fill. Delivery of sequence"
            " can't be processed."
        ).format(template_name, anatomy.project_name)
        report_items[""].append(msg)
        return report_items, 0

    dir_path, file_name = os.path.split(str(src_path))

    context = repre["context"]
    ext = context.get("ext", context.get("representation"))

    if not ext:
        msg = "Source extension not found, cannot find collection"
        report_items[msg].append(src_path)
        log.warning("{} <{}>".format(msg, context))
        return report_items, 0

    ext = "." + ext
    # context.representation could be .psd
    ext = ext.replace("..", ".")

    src_collections, remainder = clique.assemble(os.listdir(dir_path))
    src_collection = None
    for col in src_collections:
        if col.tail != ext:
            continue

        src_collection = col
        break

    if src_collection is None:
        msg = "Source collection of files was not found"
        report_items[msg].append(src_path)
        log.warning("{} <{}>".format(msg, src_path))
        return report_items, 0

    frame_indicator = "@####@"

    anatomy_data["frame"] = frame_indicator
    anatomy_filled = anatomy.format(anatomy_data)

    if format_dict:
        template_result = anatomy_filled["delivery"][template_name]
        delivery_path = template_result.rootless.format(**format_dict)
    else:
        delivery_path = anatomy_filled["delivery"][template_name]

    delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
    delivery_folder = os.path.dirname(delivery_path)
    dst_head, dst_tail = delivery_path.split(frame_indicator)
    dst_padding = src_collection.padding
    dst_collection = clique.Collection(
        head=dst_head,
        tail=dst_tail,
        padding=dst_padding
    )

    if not os.path.exists(delivery_folder):
        os.makedirs(delivery_folder)

    src_head = src_collection.head
    src_tail = src_collection.tail
    uploaded = 0
    for index in src_collection.indexes:
        src_padding = src_collection.format("{padding}") % index
        src_file_name = "{}{}{}".format(src_head, src_padding, src_tail)
        src = os.path.normpath(
            os.path.join(dir_path, src_file_name)
        )

        dst_padding = dst_collection.format("{padding}") % index
        dst = "{}{}{}".format(dst_head, dst_padding, dst_tail)
        log.debug("Copying single: {} -> {}".format(src, dst))
        _copy_file(src, dst)
        uploaded += 1

    return report_items, uploaded
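A hedged sketch of how the new delivery helpers can be chained together; the project name, representation document and template name are placeholders and a configured project database is assumed:

```python
import collections
import logging

from openpype.pipeline import Anatomy
from openpype.pipeline.delivery import (
    get_format_dict,
    check_destination_path,
    deliver_single_file,
)

log = logging.getLogger("delivery_example")

anatomy = Anatomy("demo_project")  # placeholder project
repre = {
    "_id": "633ab0c0aaaaaaaaaaaaaaaa",  # placeholder id
    "context": {"asset": "sh010", "subset": "renderMain", "version": 1, "ext": "exr"},
}
anatomy_data = dict(repre["context"])
datetime_data = {}  # usually filled by openpype.lib.get_datetime_data()

format_dict = get_format_dict(anatomy, "/mnt/deliveries/demo")
report_items = check_destination_path(
    str(repre["_id"]), anatomy, anatomy_data, datetime_data, "default"
)
if not report_items:
    report_items, uploaded = deliver_single_file(
        "/path/to/source/file.exr", repre, anatomy, "default",
        anatomy_data, format_dict, collections.defaultdict(list), log,
    )
    log.info("Delivered %s file(s)", uploaded)
```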
@@ -1,6 +1,8 @@
from .utils import (
    HeroVersionType,

    IncompatibleLoaderError,
    InvalidRepresentationContext,

    get_repres_contexts,
    get_subset_contexts,
@@ -20,6 +22,7 @@ from .utils import (

    get_representation_path_from_context,
    get_representation_path,
    get_representation_path_with_anatomy,

    is_compatible_loader,

@@ -46,7 +49,9 @@ from .plugins import (
__all__ = (
    # utils.py
    "HeroVersionType",

    "IncompatibleLoaderError",
    "InvalidRepresentationContext",

    "get_repres_contexts",
    "get_subset_contexts",
@@ -66,6 +71,7 @@ __all__ = (

    "get_representation_path_from_context",
    "get_representation_path",
    "get_representation_path_with_anatomy",

    "is_compatible_loader",

@@ -23,6 +23,10 @@ from openpype.client import (
    get_representation_by_name,
    get_representation_parents
)
from openpype.lib import (
    StringTemplate,
    TemplateUnsolved,
)
from openpype.pipeline import (
    schema,
    legacy_io,
@@ -61,6 +65,11 @@ class IncompatibleLoaderError(ValueError):
    pass


class InvalidRepresentationContext(ValueError):
    """Representation path can't be received using representation document."""
    pass


def get_repres_contexts(representation_ids, dbcon=None):
    """Return parenthood context for representation.

@@ -515,6 +524,52 @@ def get_representation_path_from_context(context):
    return get_representation_path(representation, root)


def get_representation_path_with_anatomy(repre_doc, anatomy):
    """Receive representation path using representation document and anatomy.

    Anatomy is used to replace 'root' key in representation file. Ideally
    should be used instead of 'get_representation_path' which is based on
    "current context".

    Future notes:
        We want also be able store resources into representation and I can
        imagine the result should also contain paths to possible resources.

    Args:
        repre_doc (Dict[str, Any]): Representation document.
        anatomy (Anatomy): Project anatomy object.

    Returns:
        Union[None, TemplateResult]: None if path can't be received

    Raises:
        InvalidRepresentationContext: When representation data are probably
            invalid or not available.
    """

    try:
        template = repre_doc["data"]["template"]

    except KeyError:
        raise InvalidRepresentationContext((
            "Representation document does not"
            " contain template in data ('data.template')"
        ))

    try:
        context = repre_doc["context"]
        context["root"] = anatomy.roots
        path = StringTemplate.format_strict_template(template, context)

    except TemplateUnsolved as exc:
        raise InvalidRepresentationContext((
            "Couldn't resolve representation template with available data."
            " Reason: {}".format(str(exc))
        ))

    return path.normalized()


def get_representation_path(representation, root=None, dbcon=None):
    """Get filename from representation document

@@ -533,8 +588,6 @@ def get_representation_path(representation, root=None, dbcon=None):

    """

    from openpype.lib import StringTemplate, TemplateUnsolved

    if dbcon is None:
        dbcon = legacy_io

@@ -737,6 +790,7 @@ def get_outdated_containers(host=None, project_name=None):

    if host is None:
        from openpype.pipeline import registered_host

        host = registered_host()

    if project_name is None:
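A hedged example of resolving a representation path with the new helper; the project name and representation id are placeholders, and `get_representation_by_id` is assumed to be available in `openpype.client`:

```python
from openpype.client import get_representation_by_id
from openpype.pipeline import Anatomy
from openpype.pipeline.load import (
    get_representation_path_with_anatomy,
    InvalidRepresentationContext,
)

project_name = "demo_project"
anatomy = Anatomy(project_name)
repre_doc = get_representation_by_id(project_name, "633ab0c0aaaaaaaaaaaaaaaa")

try:
    path = get_representation_path_with_anatomy(repre_doc, anatomy)
    print(path)
except InvalidRepresentationContext as exc:
    print("Could not resolve representation path: {}".format(exc))
```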
@@ -28,27 +28,37 @@ def get_general_template_data(system_settings=None):
    }


def get_project_template_data(project_doc):
def get_project_template_data(project_doc=None, project_name=None):
    """Extract data from project document that are used in templates.

    Project document must have 'name' and (at this moment) optional
    key 'data.code'.

    One of 'project_name' or 'project_doc' must be passed. With prepared
    project document is function much faster because don't have to query.

    Output contains formatting keys:
    - 'project[name]' - Project name
    - 'project[code]' - Project code

    Args:
        project_doc (Dict[str, Any]): Queried project document.
        project_name (str): Name of project.

    Returns:
        Dict[str, Dict[str, str]]: Template data based on project document.
    """

    if not project_name:
        project_name = project_doc["name"]

    if not project_doc:
        project_code = get_project(project_name, fields=["data.code"])

    project_code = project_doc.get("data", {}).get("code")
    return {
        "project": {
            "name": project_doc["name"],
            "name": project_name,
            "code": project_code
        }
    }
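Both call styles the widened signature allows, as a hedged sketch; the module path `openpype.pipeline.template_data` and the project values are assumptions, and the name-only variant needs a database connection:

```python
from openpype.pipeline.template_data import get_project_template_data

# With a prepared project document (no extra query is needed).
project_doc = {"name": "demo_project", "data": {"code": "demo"}}
data = get_project_template_data(project_doc=project_doc)

# Or with only the project name (the project code is then queried).
# data = get_project_template_data(project_name="demo_project")

# Either way the result feeds templates as {project[name]} and {project[code]}.
print(data["project"]["name"], data["project"]["code"])
```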
@@ -4,6 +4,7 @@ import logging

from openpype.client import get_project
from . import legacy_io
from .anatomy import Anatomy
from .plugin_discover import (
    discover,
    register_plugin,
@@ -73,19 +74,20 @@ class ThumbnailResolver(object):


class TemplateResolver(ThumbnailResolver):

    priority = 90

    def process(self, thumbnail_entity, thumbnail_type):

        if not os.environ.get("AVALON_THUMBNAIL_ROOT"):
            return

        template = thumbnail_entity["data"].get("template")
        if not template:
            self.log.debug("Thumbnail entity does not have set template")
            return

        thumbnail_root_format_key = "{thumbnail_root}"
        thumbnail_root = os.environ.get("AVALON_THUMBNAIL_ROOT") or ""
        # Check if template require thumbnail root and if is avaiable
        if thumbnail_root_format_key in template and not thumbnail_root:
            return

        project_name = self.dbcon.active_project()
        project = get_project(project_name, fields=["name", "data.code"])


@@ -95,12 +97,16 @@ class TemplateResolver(ThumbnailResolver):
        template_data.update({
            "_id": str(thumbnail_entity["_id"]),
            "thumbnail_type": thumbnail_type,
            "thumbnail_root": os.environ.get("AVALON_THUMBNAIL_ROOT"),
            "thumbnail_root": thumbnail_root,
            "project": {
                "name": project["name"],
                "code": project["data"].get("code")
            }
            },
        })
        # Add anatomy roots if is in template
        if "{root" in template:
            anatomy = Anatomy(project_name)
            template_data["root"] = anatomy.roots

        try:
            filepath = os.path.normpath(template.format(**template_data))
@@ -419,9 +419,14 @@ def get_custom_workfile_template(
    # when path is available try to format it in case
    # there are some anatomy template strings
    if matching_item:
        # extend anatomy context with os.environ to
        # also allow formatting against env
        full_context_data = os.environ.copy()
        full_context_data.update(anatomy_context_data)

        template = matching_item["path"][platform.system().lower()]
        return StringTemplate.format_strict_template(
            template, anatomy_context_data
            template, full_context_data
        ).normalized()

    return None
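A hedged illustration of the extended formatting context for custom workfile templates: environment variables and anatomy data are merged before the strict format. The template and values below are placeholders.

```python
import os

from openpype.lib import StringTemplate

os.environ.setdefault("STUDIO_TEMPLATES", "/studio/templates")  # placeholder
anatomy_context_data = {
    "project": {"name": "demo_project"},
    "task": {"name": "modeling"},
}

full_context_data = os.environ.copy()
full_context_data.update(anatomy_context_data)

template = "{STUDIO_TEMPLATES}/maya/{project[name]}_{task[name]}.ma"
path = StringTemplate.format_strict_template(template, full_context_data)
print(path.normalized())
```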
@@ -7,11 +7,15 @@ from pymongo import UpdateOne
import qargparse
from Qt import QtWidgets, QtCore

from openpype.client import get_versions, get_representations
from openpype import style
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.lib import StringTemplate
from openpype.client import get_versions, get_representations
from openpype.modules import ModulesManager
from openpype.lib import format_file_size
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.pipeline.load import (
    get_representation_path_with_anatomy,
    InvalidRepresentationContext,
)


class DeleteOldVersions(load.SubsetLoaderPlugin):
@@ -38,13 +42,6 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
        )
    ]

    def sizeof_fmt(self, num, suffix='B'):
        for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
            if abs(num) < 1024.0:
                return "%3.1f%s%s" % (num, unit, suffix)
            num /= 1024.0
        return "%.1f%s%s" % (num, 'Yi', suffix)

    def delete_whole_dir_paths(self, dir_paths, delete=True):
        size = 0

@@ -80,27 +77,28 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):

    def path_from_representation(self, representation, anatomy):
        try:
            template = representation["data"]["template"]

            context = representation["context"]
        except KeyError:
            return (None, None)

        try:
            path = get_representation_path_with_anatomy(
                representation, anatomy
            )
        except InvalidRepresentationContext:
            return (None, None)

        sequence_path = None
        try:
            context = representation["context"]
            context["root"] = anatomy.roots
            path = str(StringTemplate.format_template(template, context))
            if "frame" in context:
                context["frame"] = self.sequence_splitter
                sequence_path = os.path.normpath(str(
                    StringTemplate.format_template(template, context)
                ))
            if "frame" in context:
                context["frame"] = self.sequence_splitter
                sequence_path = get_representation_path_with_anatomy(
                    representation, anatomy
                )

        except KeyError:
            # Template references unavailable data
            return (None, None)
        if sequence_path:
            sequence_path = sequence_path.normalized()

        return (os.path.normpath(path), sequence_path)
        return (path.normalized(), sequence_path)

    def delete_only_repre_files(self, dir_paths, file_paths, delete=True):
        size = 0

@@ -456,7 +454,7 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
            size += self.main(project_name, data, remove_publish_folder)
            print("Progressing {}/{}".format(count + 1, len(contexts)))

        msg = "Total size of files: " + self.sizeof_fmt(size)
        msg = "Total size of files: {}".format(format_file_size(size))
        self.log.info(msg)
        self.message(msg)

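The local `sizeof_fmt` helper is replaced by the shared `format_file_size`, which this diff imports from `openpype.lib`; a trivial usage check:

```python
from openpype.lib import format_file_size

for size in (0, 1536, 10 ** 9):
    # Exact unit labels and rounding depend on the helper implementation.
    print(format_file_size(size))
```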
@@ -7,15 +7,17 @@ from openpype.client import get_representations
from openpype.pipeline import load, Anatomy
from openpype import resources, style

from openpype.lib.dateutils import get_datetime_data
from openpype.lib.delivery import (
    sizeof_fmt,
    path_from_representation,
from openpype.lib import (
    format_file_size,
    collect_frames,
    get_datetime_data,
)
from openpype.pipeline.load import get_representation_path_with_anatomy
from openpype.pipeline.delivery import (
    get_format_dict,
    check_destination_path,
    process_single_file,
    process_sequence,
    collect_frames
    deliver_single_file,
    deliver_sequence,
)


@@ -167,7 +169,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):
            if repre["name"] not in selected_repres:
                continue

            repre_path = path_from_representation(repre, self.anatomy)
            repre_path = get_representation_path_with_anatomy(
                repre, self.anatomy
            )

            anatomy_data = copy.deepcopy(repre["context"])
            new_report_items = check_destination_path(str(repre["_id"]),
@@ -202,7 +206,7 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):
                    args[0] = src_path
                    if frame:
                        anatomy_data["frame"] = frame
                    new_report_items, uploaded = process_single_file(*args)
                    new_report_items, uploaded = deliver_single_file(*args)
                    report_items.update(new_report_items)
                    self._update_progress(uploaded)
            else:  # fallback for Pype2 and representations without files
@@ -211,9 +215,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):
                    repre["context"]["frame"] = len(str(frame)) * "#"

                if not frame:
                    new_report_items, uploaded = process_single_file(*args)
                    new_report_items, uploaded = deliver_single_file(*args)
                else:
                    new_report_items, uploaded = process_sequence(*args)
                    new_report_items, uploaded = deliver_sequence(*args)
                report_items.update(new_report_items)
                self._update_progress(uploaded)

@@ -263,8 +267,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog):

    def _prepare_label(self):
        """Provides text with no of selected files and their size."""
        label = "{} files, size {}".format(self.files_selected,
                                           sizeof_fmt(self.size_selected))
        label = "{} files, size {}".format(
            self.files_selected,
            format_file_size(self.size_selected))
        return label

    def _get_selected_repres(self):
@@ -1,5 +1,6 @@
"""
Requires:
Optional:
    context -> hostName (str)
    context -> currentFile (str)
Provides:
    context -> label (str)
@@ -16,16 +17,27 @@ class CollectContextLabel(pyblish.api.ContextPlugin):
    label = "Context Label"

    def process(self, context):
        # Add ability to use custom context label
        label = context.data.get("label")
        if label:
            self.log.debug("Context label is already set to \"{}\"".format(
                label
            ))
            return

        # Get last registered host
        host = pyblish.api.registered_hosts()[-1]
        host_name = context.data.get("hostName")
        if not host_name:
            host_name = pyblish.api.registered_hosts()[-1]
        # Use host name as base for label
        label = host_name.title()

        # Get scene name from "currentFile"
        path = context.data.get("currentFile") or "<Unsaved>"
        base = os.path.basename(path)
        # Get scene name from "currentFile" and use basename as ending of label
        path = context.data.get("currentFile")
        if path:
            label += " - {}".format(os.path.basename(path))

        # Set label
        label = "{host} - {scene}".format(host=host.title(), scene=base)
        if host == "standalonepublisher":
            label = host.title()
        context.data["label"] = label
        self.log.debug("Context label is changed to \"{}\"".format(
            label
        ))
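Because the collector now keeps an already set label, a studio plugin running earlier can pre-set it. A hedged sketch with a made-up plugin name and an assumed order offset:

```python
import pyblish.api


class CollectCustomLabel(pyblish.api.ContextPlugin):
    """Hypothetical collector that pins the context label up front."""

    # Assumed to run before CollectContextLabel.
    order = pyblish.api.CollectorOrder - 0.45
    label = "Custom Context Label"

    def process(self, context):
        context.data["label"] = "My Show - weekly review"
```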
@@ -6,10 +6,9 @@ import copy

import six
import pyblish.api
from bson.objectid import ObjectId

from openpype.client import get_version_by_id
from openpype.pipeline import legacy_io
from openpype.client.operations import OperationsSession, new_thumbnail_doc


class IntegrateThumbnails(pyblish.api.InstancePlugin):
@@ -24,13 +23,9 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):
    ]

    def process(self, instance):

        if not os.environ.get("AVALON_THUMBNAIL_ROOT"):
            self.log.warning(
                "AVALON_THUMBNAIL_ROOT is not set."
                " Skipping thumbnail integration."
            )
            return
        env_key = "AVALON_THUMBNAIL_ROOT"
        thumbnail_root_format_key = "{thumbnail_root}"
        thumbnail_root = os.environ.get(env_key) or ""

        published_repres = instance.data.get("published_representations")
        if not published_repres:
@@ -51,6 +46,16 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):
            ).format(project_name))
            return

        thumbnail_template = anatomy.templates["publish"]["thumbnail"]
        if (
            not thumbnail_root
            and thumbnail_root_format_key in thumbnail_template
        ):
            self.log.warning((
                "{} is not set. Skipping thumbnail integration."
            ).format(env_key))
            return

        thumb_repre = None
        thumb_repre_anatomy_data = None
        for repre_info in published_repres.values():
@@ -66,10 +71,6 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):
            )
            return

        legacy_io.install()

        thumbnail_template = anatomy.templates["publish"]["thumbnail"]

        version = get_version_by_id(project_name, thumb_repre["parent"])
        if not version:
            raise AssertionError(
@@ -88,14 +89,15 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):

        filename, file_extension = os.path.splitext(src_full_path)
        # Create id for mongo entity now to fill anatomy template
        thumbnail_id = ObjectId()
        thumbnail_doc = new_thumbnail_doc()
        thumbnail_id = thumbnail_doc["_id"]

        # Prepare anatomy template fill data
        template_data = copy.deepcopy(thumb_repre_anatomy_data)
        template_data.update({
            "_id": str(thumbnail_id),
            "thumbnail_root": os.environ.get("AVALON_THUMBNAIL_ROOT"),
            "ext": file_extension[1:],
            "thumbnail_root": thumbnail_root,
            "thumbnail_type": "thumbnail"
        })

@@ -117,8 +119,8 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):
        shutil.copy(src_full_path, dst_full_path)

        # Clean template data from keys that are dynamic
        template_data.pop("_id")
        template_data.pop("thumbnail_root")
        for key in ("_id", "thumbnail_root"):
            template_data.pop(key, None)

        repre_context = template_filled.used_values
        for key in self.required_context_keys:
@@ -127,34 +129,40 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin):
                continue
            repre_context[key] = template_data[key]

        thumbnail_entity = {
            "_id": thumbnail_id,
            "type": "thumbnail",
            "schema": "openpype:thumbnail-1.0",
            "data": {
                "template": thumbnail_template,
                "template_data": repre_context
            }
        op_session = OperationsSession()

        thumbnail_doc["data"] = {
            "template": thumbnail_template,
            "template_data": repre_context
        }
        # Create thumbnail entity
        legacy_io.insert_one(thumbnail_entity)
        self.log.debug(
            "Creating entity in database {}".format(str(thumbnail_entity))
        op_session.create_entity(
            project_name, thumbnail_doc["type"], thumbnail_doc
        )
        # Create thumbnail entity
        self.log.debug(
            "Creating entity in database {}".format(str(thumbnail_doc))
        )

        # Set thumbnail id for version
        legacy_io.update_many(
            {"_id": version["_id"]},
            {"$set": {"data.thumbnail_id": thumbnail_id}}
        op_session.update_entity(
            project_name,
            version["type"],
            version["_id"],
            {"data.thumbnail_id": thumbnail_id}
        )
        self.log.debug("Setting thumbnail for version \"{}\" <{}>".format(
            version["name"], str(version["_id"])
        ))

        asset_entity = instance.data["assetEntity"]
        legacy_io.update_many(
            {"_id": asset_entity["_id"]},
            {"$set": {"data.thumbnail_id": thumbnail_id}}
        op_session.update_entity(
            project_name,
            asset_entity["type"],
            asset_entity["_id"],
            {"data.thumbnail_id": thumbnail_id}
        )
        self.log.debug("Setting thumbnail for asset \"{}\" <{}>".format(
            asset_entity["name"], str(version["_id"])
        ))

        op_session.commit()
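The thumbnail integration now batches database writes through `OperationsSession` instead of direct `legacy_io` calls. A condensed sketch of that pattern using only the calls shown above; the project name and ids are placeholders:

```python
from openpype.client.operations import OperationsSession, new_thumbnail_doc

project_name = "demo_project"
op_session = OperationsSession()

thumbnail_doc = new_thumbnail_doc()
thumbnail_doc["data"] = {
    "template": "<publish thumbnail template>",
    "template_data": {},
}

op_session.create_entity(project_name, thumbnail_doc["type"], thumbnail_doc)
op_session.update_entity(
    project_name, "version", "<version id>",
    {"data.thumbnail_id": thumbnail_doc["_id"]},
)

# Nothing is written until the session is committed.
op_session.commit()
```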
@@ -2,5 +2,69 @@
    "workfile_builder": {
        "create_first_version": false,
        "custom_templates": []
    },
    "publish": {
        "ValidateCameraZeroKeyframe": {
            "enabled": true,
            "optional": true,
            "active": true
        },
        "ValidateMeshHasUvs": {
            "enabled": true,
            "optional": true,
            "active": true
        },
        "ValidateMeshNoNegativeScale": {
            "enabled": true,
            "optional": false,
            "active": true
        },
        "ValidateTransformZero": {
            "enabled": true,
            "optional": false,
            "active": true
        },
        "ExtractBlend": {
            "enabled": true,
            "optional": true,
            "active": true,
            "families": [
                "model",
                "camera",
                "rig",
                "action",
                "layout"
            ]
        },
        "ExtractBlendAnimation": {
            "enabled": true,
            "optional": true,
            "active": true
        },
        "ExtractCamera": {
            "enabled": true,
            "optional": true,
            "active": true
        },
        "ExtractFBX": {
            "enabled": true,
            "optional": true,
            "active": false
        },
        "ExtractAnimationFBX": {
            "enabled": true,
            "optional": true,
            "active": false
        },
        "ExtractABC": {
            "enabled": true,
            "optional": true,
            "active": false
        },
        "ExtractLayout": {
            "enabled": true,
            "optional": true,
            "active": false
        }
    }
}
}
@@ -47,6 +47,18 @@
        }
    },
    "publish": {
        "ValidateWorkfilePaths": {
            "enabled": true,
            "optional": true,
            "node_types": [
                "file",
                "alembic"
            ],
            "prohibited_vars": [
                "$HIP",
                "$JOB"
            ]
        },
        "ValidateContainers": {
            "enabled": true,
            "optional": true,
@@ -8,7 +8,7 @@
    },
    "publish": {
        "CollectColorCodedInstances": {
            "create_flatten_image": false,
            "create_flatten_image": "no",
            "flatten_subset_template": "",
            "color_code_mapping": []
        },
@@ -12,6 +12,10 @@
                "workfile_builder/builder_on_start",
                "workfile_builder/profiles"
            ]
        },
        {
            "type": "schema",
            "name": "schema_blender_publish"
        }
    ]
}
@@ -10,22 +10,8 @@
            "name": "schema_houdini_create"
        },
        {
            "type": "dict",
            "collapsible": true,
            "key": "publish",
            "label": "Publish plugins",
            "children": [
                {
                    "type": "schema_template",
                    "name": "template_publish_plugin",
                    "template_data": [
                        {
                            "key": "ValidateContainers",
                            "label": "ValidateContainers"
                        }
                    ]
                }
            ]
            "type": "schema",
            "name": "schema_houdini_publish"
        }
    ]
}
@@ -45,9 +45,15 @@
            "label": "Set color for publishable layers, set its resulting family and template for subset name. \nCan create flatten image from published instances.(Applicable only for remote publishing!)"
        },
        {
            "type": "boolean",
            "key": "create_flatten_image",
            "label": "Create flatten image"
            "label": "Create flatten image",
            "type": "enum",
            "multiselection": false,
            "enum_items": [
                { "flatten_with_images": "Flatten with images" },
                { "flatten_only": "Flatten only" },
                { "no": "No" }
            ]
        },
        {
            "type": "text",
Some files were not shown because too many files have changed in this diff.