Mirror of https://github.com/ynput/ayon-core.git (synced 2026-01-02 00:44:52 +01:00)

Commit 29d9143b60: Merge branch 'develop' into feature/OP-1915_flame-ftrack-direct-link

238 changed files with 9005 additions and 3550 deletions

CHANGELOG.md (74 changed lines)
@@ -1,18 +1,69 @@
# Changelog

## [3.6.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.7.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)

**🚀 Enhancements**

- Ftrack: Synchronize input links [\#2287](https://github.com/pypeclub/OpenPype/pull/2287)
- StandalonePublisher: Remove unused plugin ExtractHarmonyZip [\#2277](https://github.com/pypeclub/OpenPype/pull/2277)
- Ftrack: Support multiple reviews [\#2271](https://github.com/pypeclub/OpenPype/pull/2271)
- Ftrack: Remove unused clean component plugin [\#2269](https://github.com/pypeclub/OpenPype/pull/2269)
- Royal Render: Support for rr channels in separate dirs [\#2268](https://github.com/pypeclub/OpenPype/pull/2268)
- Houdini: Add experimental tools action [\#2267](https://github.com/pypeclub/OpenPype/pull/2267)

**🐛 Bug fixes**

- Maya: Deadline - fix limit groups [\#2295](https://github.com/pypeclub/OpenPype/pull/2295)
- New Publisher: Fix mapping of indexes [\#2285](https://github.com/pypeclub/OpenPype/pull/2285)
- Alternate site for site sync doesnt work for sequences [\#2284](https://github.com/pypeclub/OpenPype/pull/2284)
- FFmpeg: Execute ffprobe using list of arguments instead of string command [\#2281](https://github.com/pypeclub/OpenPype/pull/2281)
- Nuke: Anatomy fill data use task as dictionary [\#2278](https://github.com/pypeclub/OpenPype/pull/2278)
- Bug: fix variable name \_asset\_id in workfiles application [\#2274](https://github.com/pypeclub/OpenPype/pull/2274)

## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)

**🐛 Bug fixes**

- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294)

## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3)

**🐛 Bug fixes**

- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280)

## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2)

**🚀 Enhancements**

- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262)
- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256)
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
- Added endpoint for configured extensions [\#2221](https://github.com/pypeclub/OpenPype/pull/2221)

**🐛 Bug fixes**

- Version handling fixes [\#2272](https://github.com/pypeclub/OpenPype/pull/2272)
- Tools: Parenting of tools in Nuke and Hiero [\#2266](https://github.com/pypeclub/OpenPype/pull/2266)
- limiting validator to specific editorial hosts [\#2264](https://github.com/pypeclub/OpenPype/pull/2264)
- Tools: Select Context dialog attribute fix [\#2261](https://github.com/pypeclub/OpenPype/pull/2261)
- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
- Maya: Support for configurable AOV separator characters [\#2197](https://github.com/pypeclub/OpenPype/pull/2197)
- Maya: texture colorspace modes in looks [\#2195](https://github.com/pypeclub/OpenPype/pull/2195)

## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)

@@ -83,25 +134,6 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)

**🆕 New features**

- Added project and task into context change message in Maya [\#2131](https://github.com/pypeclub/OpenPype/pull/2131)
- Add ExtractBurnin to photoshop review [\#2124](https://github.com/pypeclub/OpenPype/pull/2124)
- PYPE-1218 - changed namespace to contain subset name in Maya [\#2114](https://github.com/pypeclub/OpenPype/pull/2114)

**🚀 Enhancements**

- Maya: make rig validators configurable in settings [\#2137](https://github.com/pypeclub/OpenPype/pull/2137)
- Settings: Updated readme for entity types in settings [\#2132](https://github.com/pypeclub/OpenPype/pull/2132)
- Nuke: unified clip loader [\#2128](https://github.com/pypeclub/OpenPype/pull/2128)

**🐛 Bug fixes**

- Maya: fix model publishing [\#2130](https://github.com/pypeclub/OpenPype/pull/2130)
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)

## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)
@@ -10,6 +10,7 @@ import tempfile
from pathlib import Path
from typing import Union, Callable, List, Tuple
import hashlib
import platform

from zipfile import ZipFile, BadZipFile
@@ -196,21 +197,23 @@ class OpenPypeVersion(semver.VersionInfo):
return str(self.finalize_version())

@staticmethod
def version_in_str(string: str) -> Tuple:
def version_in_str(string: str) -> Union[None, OpenPypeVersion]:
"""Find OpenPype version in given string.

Args:
    string (str): string to search.

Returns:
    tuple: True/False and OpenPypeVersion if found.
    OpenPypeVersion: of detected or None.

"""
m = re.search(OpenPypeVersion._VERSION_REGEX, string)
if not m:
    return False, None
    return None
version = OpenPypeVersion.parse(string[m.start():m.end()])
return True, version
if "staging" in string[m.start():m.end()]:
    version.staging = True
return version

@classmethod
def parse(cls, version):
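The change above turns `version_in_str` from a `(bool, OpenPypeVersion)` tuple into a plain return of the detected version (or `None`), and it now flags staging builds. A minimal sketch of the new calling pattern, with a made-up file name for illustration:

    # Hypothetical caller - the result itself is now either a version or None.
    detected = OpenPypeVersion.version_in_str("openpype-v3.7.0-staging.zip")
    if detected:
        # the staging flag is set when "staging" appears in the matched string
        print(detected, detected.staging)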
@@ -531,6 +534,7 @@ class BootstrapRepos:
processed_path = file
self._print(f"- processing {processed_path}")

checksums.append(
(
sha256sum(file.as_posix()),

@@ -542,7 +546,10 @@ class BootstrapRepos:
checksums_str = ""
for c in checksums:
checksums_str += "{}:{}\n".format(c[0], c[1])
file_str = c[1]
if platform.system().lower() == "windows":
file_str = c[1].as_posix().replace("\\", "/")
checksums_str += "{}:{}\n".format(c[0], file_str)
zip_file.writestr("checksums", checksums_str)
# test if zip is ok
zip_file.testzip()
@@ -563,6 +570,8 @@ class BootstrapRepos:
and string with reason as second.

"""
if os.getenv("OPENPYPE_DONT_VALIDATE_VERSION"):
return True, "Disabled validation"
if not path.exists():
return False, "Path doesn't exist"
@@ -589,13 +598,16 @@ class BootstrapRepos:
# calculate and compare checksums in the zip file
for file in checksums:
file_name = file[1]
if platform.system().lower() == "windows":
file_name = file_name.replace("/", "\\")
h = hashlib.sha256()
try:
h.update(zip_file.read(file[1]))
h.update(zip_file.read(file_name))
except FileNotFoundError:
return False, f"Missing file [ {file[1]} ]"
return False, f"Missing file [ {file_name} ]"
if h.hexdigest() != file[0]:
return False, f"Invalid checksum on {file[1]}"
return False, f"Invalid checksum on {file_name}"

# get list of files in zip minus `checksums` file itself
# and turn in to set to compare against list of files
@@ -604,7 +616,7 @@ class BootstrapRepos:
files_in_zip = zip_file.namelist()
files_in_zip.remove("checksums")
files_in_zip = set(files_in_zip)
files_in_checksum = set([file[1] for file in checksums])
files_in_checksum = {file[1] for file in checksums}
diff = files_in_zip.difference(files_in_checksum)
if diff:
return False, f"Missing files {diff}"
@@ -628,16 +640,19 @@ class BootstrapRepos:
]
files_in_dir.remove("checksums")
files_in_dir = set(files_in_dir)
files_in_checksum = set([file[1] for file in checksums])
files_in_checksum = {file[1] for file in checksums}

for file in checksums:
file_name = file[1]
if platform.system().lower() == "windows":
file_name = file_name.replace("/", "\\")
try:
current = sha256sum((path / file[1]).as_posix())
current = sha256sum((path / file_name).as_posix())
except FileNotFoundError:
return False, f"Missing file [ {file[1]} ]"
return False, f"Missing file [ {file_name} ]"

if file[0] != current:
return False, f"Invalid checksum on {file[1]}"
return False, f"Invalid checksum on {file_name}"
diff = files_in_dir.difference(files_in_checksum)
if diff:
return False, f"Missing files {diff}"
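Both validation paths above normalize the file names stored in the `checksums` file (always forward slashes) back to the platform's native separators before hashing. A standalone sketch of that idea, not the exact `BootstrapRepos` code:

    import hashlib
    import platform
    from pathlib import Path

    def verify_checksum(root: Path, stored_name: str, expected_hex: str) -> bool:
        """Compare a recorded sha256 against the file on disk, fixing separators."""
        name = stored_name
        if platform.system().lower() == "windows":
            name = name.replace("/", "\\")
        digest = hashlib.sha256((root / name).read_bytes()).hexdigest()
        return digest == expected_hex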
@@ -1161,9 +1176,9 @@ class BootstrapRepos:
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)

if result[0]:
if result:
detected_version: OpenPypeVersion
detected_version = result[1]
detected_version = result

if item.is_dir() and not self._is_openpype_in_dir(
item, detected_version
@@ -1,4 +1,4 @@
# -*- coding: utf-8 -*-
"""Definition of Igniter version."""

__version__ = "1.0.1"
__version__ = "1.0.2"
@@ -17,6 +17,7 @@ from .lib import (
version_up,
get_asset,
get_hierarchy,
get_workdir_data,
get_version_from_path,
get_last_version_from_path,
get_app_environments_for_context,
@@ -384,3 +384,15 @@ def syncserver(debug, active_site):
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands().syncserver(active_site)

@main.command()
@click.argument("directory")
def repack_version(directory):
"""Repack OpenPype version from directory.

This command will re-create zip file from specified directory,
recalculating file checksums. It will try to use version detected in
directory name.
"""
PypeCommands().repack_version(directory)
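Assuming the usual OpenPype console entry point (the executable name depends on how the build is launched, so this is illustrative), the new command would be used roughly like:

    openpype_console repack_version /path/to/openpype-3.7.0-nightly.2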
@@ -126,7 +126,8 @@ class CollectFarmRender(openpype.lib.abstract_collect_render.
# because of using 'renderFarm' as a family, replace 'Farm' with
# capitalized task name - issue of avalon-core Creator app
subset_name = node.split("/")[1]
task_name = context.data["anatomyData"]["task"].capitalize()
task_name = context.data["anatomyData"]["task"][
"name"].capitalize()
replace_str = ""
if task_name.lower() not in subset_name.lower():
replace_str = task_name
@@ -28,7 +28,7 @@ class CollectPalettes(pyblish.api.ContextPlugin):

# skip collecting if not in allowed task
if self.allowed_tasks:
task_name = context.data["anatomyData"]["task"].lower()
task_name = context.data["anatomyData"]["task"]["name"].lower()
if (not any([re.search(pattern, task_name)
for pattern in self.allowed_tasks])):
return
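Both Harmony collectors are touched because `anatomyData["task"]` is now a dictionary rather than a plain string. A hedged sketch of the shape the plugins now read (only the `name` key is shown in this diff, anything else would be an assumption):

    # Illustrative only:
    anatomy_data = {"task": {"name": "Compositing"}}   # was: {"task": "Compositing"}
    task_name = anatomy_data["task"]["name"].lower()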
@@ -30,6 +30,7 @@ self = sys.modules[__name__]
self._has_been_setup = False
self._has_menu = False
self._registered_gui = None
self._parent = None
self.pype_tag_name = "openpypeData"
self.default_sequence_name = "openpypeSequence"
self.default_bin_name = "openpypeBin"

@@ -1029,3 +1030,15 @@ def before_project_save(event):

# also mark old versions of loaded containers
check_inventory_versions()

def get_main_window():
"""Acquire Nuke's main window"""
if self._parent is None:
top_widgets = QtWidgets.QApplication.topLevelWidgets()
name = "Foundry::UI::DockMainWindow"
main_window = next(widget for widget in top_widgets if
widget.inherits("QMainWindow") and
widget.metaObject().className() == name)
self._parent = main_window
return self._parent
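`get_main_window()` looks up the `Foundry::UI::DockMainWindow` top-level widget once and caches it on the module, so the host tools opened later in this diff can be parented to it instead of floating unparented. The call pattern, as used in the menu and workfiles changes below:

    from openpype.tools.utils import host_tools

    main_window = get_main_window()                 # cached main-window lookup
    host_tools.show_workfiles(parent=main_window)   # dialog keeps a proper parent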
@ -37,12 +37,16 @@ def menu_install():
|
|||
Installing menu into Hiero
|
||||
|
||||
"""
|
||||
from Qt import QtGui
|
||||
from . import (
|
||||
publish, launch_workfiles_app, reload_config,
|
||||
apply_colorspace_project, apply_colorspace_clips
|
||||
)
|
||||
from .lib import get_main_window
|
||||
|
||||
main_window = get_main_window()
|
||||
|
||||
# here is the best place to add menu
|
||||
from avalon.vendor.Qt import QtGui
|
||||
|
||||
menu_name = os.environ['AVALON_LABEL']
|
||||
|
||||
|
|
@ -86,15 +90,21 @@ def menu_install():
|
|||
|
||||
creator_action = menu.addAction("Create ...")
|
||||
creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
|
||||
creator_action.triggered.connect(host_tools.show_creator)
|
||||
creator_action.triggered.connect(
|
||||
lambda: host_tools.show_creator(parent=main_window)
|
||||
)
|
||||
|
||||
loader_action = menu.addAction("Load ...")
|
||||
loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
|
||||
loader_action.triggered.connect(host_tools.show_loader)
|
||||
loader_action.triggered.connect(
|
||||
lambda: host_tools.show_loader(parent=main_window)
|
||||
)
|
||||
|
||||
sceneinventory_action = menu.addAction("Manage ...")
|
||||
sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
|
||||
sceneinventory_action.triggered.connect(host_tools.show_scene_inventory)
|
||||
sceneinventory_action.triggered.connect(
|
||||
lambda: host_tools.show_scene_inventory(parent=main_window)
|
||||
)
|
||||
menu.addSeparator()
|
||||
|
||||
if os.getenv("OPENPYPE_DEVELOP"):
|
||||
|
|
|
|||
|
|
@ -209,9 +209,11 @@ def update_container(track_item, data=None):
|
|||
|
||||
def launch_workfiles_app(*args):
|
||||
''' Wrapping function for workfiles launcher '''
|
||||
from .lib import get_main_window
|
||||
|
||||
main_window = get_main_window()
|
||||
# show workfile gui
|
||||
host_tools.show_workfiles()
|
||||
host_tools.show_workfiles(parent=main_window)
|
||||
|
||||
|
||||
def publish(parent):
|
||||
|
|
|
|||
|
|
@ -3,9 +3,10 @@
|
|||
import contextlib
|
||||
|
||||
import logging
|
||||
from Qt import QtCore, QtGui
|
||||
from openpype.tools.utils.widgets import AssetWidget
|
||||
from avalon import style, io
|
||||
from Qt import QtWidgets, QtCore, QtGui
|
||||
from avalon import io
|
||||
from openpype import style
|
||||
from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget
|
||||
|
||||
from pxr import Sdf
|
||||
|
||||
|
|
@ -13,6 +14,60 @@ from pxr import Sdf
|
|||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class SelectAssetDialog(QtWidgets.QWidget):
|
||||
"""Frameless assets dialog to select asset with double click.
|
||||
|
||||
Args:
|
||||
parm: Parameter where selected asset name is set.
|
||||
"""
|
||||
def __init__(self, parm):
|
||||
self.setWindowTitle("Pick Asset")
|
||||
self.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.Popup)
|
||||
|
||||
assets_widget = SingleSelectAssetsWidget(io, parent=self)
|
||||
|
||||
layout = QtWidgets.QHBoxLayout(self)
|
||||
layout.addWidget(assets_widget)
|
||||
|
||||
assets_widget.double_clicked.connect(self._set_parameter)
|
||||
self._assets_widget = assets_widget
|
||||
self._parm = parm
|
||||
|
||||
def _set_parameter(self):
|
||||
name = self._assets_widget.get_selected_asset_name()
|
||||
self._parm.set(name)
|
||||
self.close()
|
||||
|
||||
def _on_show(self):
|
||||
pos = QtGui.QCursor.pos()
|
||||
# Select the current asset if there is any
|
||||
select_id = None
|
||||
name = self._parm.eval()
|
||||
if name:
|
||||
db_asset = io.find_one(
|
||||
{"name": name, "type": "asset"},
|
||||
{"_id": True}
|
||||
)
|
||||
if db_asset:
|
||||
select_id = db_asset["_id"]
|
||||
|
||||
# Set stylesheet
|
||||
self.setStyleSheet(style.load_stylesheet())
|
||||
# Refresh assets (is threaded)
|
||||
self._assets_widget.refresh()
|
||||
# Select asset - must be done after refresh
|
||||
if select_id is not None:
|
||||
self._assets_widget.select_asset(select_id)
|
||||
|
||||
# Show cursor (top right of window) near cursor
|
||||
self.resize(250, 400)
|
||||
self.move(self.mapFromGlobal(pos) - QtCore.QPoint(self.width(), 0))
|
||||
|
||||
def showEvent(self, event):
|
||||
super(SelectAssetDialog, self).showEvent(event)
|
||||
self._on_show()
|
||||
|
||||
|
||||
def pick_asset(node):
|
||||
"""Show a user interface to select an Asset in the project
|
||||
|
||||
|
|
@ -21,43 +76,15 @@ def pick_asset(node):
|
|||
|
||||
"""
|
||||
|
||||
pos = QtGui.QCursor.pos()
|
||||
|
||||
parm = node.parm("asset_name")
|
||||
if not parm:
|
||||
log.error("Node has no 'asset' parameter: %s", node)
|
||||
return
|
||||
|
||||
# Construct the AssetWidget as a frameless popup so it automatically
|
||||
# Construct a frameless popup so it automatically
|
||||
# closes when clicked outside of it.
|
||||
global tool
|
||||
tool = AssetWidget(io)
|
||||
tool.setContentsMargins(5, 5, 5, 5)
|
||||
tool.setWindowTitle("Pick Asset")
|
||||
tool.setStyleSheet(style.load_stylesheet())
|
||||
tool.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.Popup)
|
||||
tool.refresh()
|
||||
|
||||
# Select the current asset if there is any
|
||||
name = parm.eval()
|
||||
if name:
|
||||
db_asset = io.find_one({"name": name, "type": "asset"})
|
||||
if db_asset:
|
||||
silo = db_asset.get("silo")
|
||||
if silo:
|
||||
tool.set_silo(silo)
|
||||
tool.select_assets([name], expand=True)
|
||||
|
||||
# Show cursor (top right of window) near cursor
|
||||
tool.resize(250, 400)
|
||||
tool.move(tool.mapFromGlobal(pos) - QtCore.QPoint(tool.width(), 0))
|
||||
|
||||
def set_parameter_callback(index):
|
||||
name = index.data(tool.model.DocumentRole)["name"]
|
||||
parm.set(name)
|
||||
tool.close()
|
||||
|
||||
tool.view.doubleClicked.connect(set_parameter_callback)
|
||||
tool = SelectAssetDialog(parm)
|
||||
tool.show()
|
||||
|
||||
|
||||
|
|
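With the widget logic moved into `SelectAssetDialog`, `pick_asset` only needs the node's `asset_name` parameter. A rough usage sketch from the Houdini side (the node lookup is illustrative):

    import hou

    node = hou.pwd()    # any node carrying an "asset_name" string parm
    pick_asset(node)    # pops the frameless picker; double-click writes the parm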
|
|||
|
|
@ -67,6 +67,16 @@ from avalon.houdini import pipeline
|
|||
pipeline.reload_pipeline()]]></scriptCode>
|
||||
</scriptItem>
|
||||
</subMenu>
|
||||
|
||||
<separatorItem/>
|
||||
<scriptItem id="experimental_tools">
|
||||
<label>Experimental tools...</label>
|
||||
<scriptCode><![CDATA[
|
||||
import hou
|
||||
from openpype.tools.utils import host_tools
|
||||
parent = hou.qt.mainWindow()
|
||||
host_tools.show_experimental_tools_dialog(parent)]]></scriptCode>
|
||||
</scriptItem>
|
||||
</subMenu>
|
||||
</menuBar>
|
||||
</mainMenu>
|
@ -180,6 +180,7 @@ class ARenderProducts:
|
|||
self.layer = layer
|
||||
self.render_instance = render_instance
|
||||
self.multipart = False
|
||||
self.aov_separator = render_instance.data.get("aovSeparator", "_")
|
||||
|
||||
# Initialize
|
||||
self.layer_data = self._get_layer_data()
|
||||
|
|
@ -676,7 +677,7 @@ class RenderProductsVray(ARenderProducts):
|
|||
|
||||
"""
|
||||
prefix = super(RenderProductsVray, self).get_renderer_prefix()
|
||||
prefix = "{}.<aov>".format(prefix)
|
||||
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
|
||||
return prefix
|
||||
|
||||
def _get_layer_data(self):
|
||||
|
|
|
|||
|
|
@ -21,6 +21,7 @@ from openpype.api import (
|
|||
from openpype.modules import ModulesManager
|
||||
|
||||
from avalon.api import Session
|
||||
from avalon.api import CreatorError
|
||||
|
||||
|
||||
class CreateRender(plugin.Creator):
|
||||
|
|
@ -81,13 +82,21 @@ class CreateRender(plugin.Creator):
|
|||
}
|
||||
|
||||
_image_prefixes = {
|
||||
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
|
||||
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
|
||||
'vray': 'maya/<scene>/<Layer>/<Layer>',
|
||||
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
|
||||
'renderman': 'maya/<Scene>/<layer>/<layer>_<aov>',
|
||||
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>'
|
||||
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
|
||||
'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
|
||||
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>' # noqa
|
||||
}
|
||||
|
||||
_aov_chars = {
|
||||
"dot": ".",
|
||||
"dash": "-",
|
||||
"underscore": "_"
|
||||
}
|
||||
|
||||
_project_settings = None
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
"""Constructor."""
|
||||
super(CreateRender, self).__init__(*args, **kwargs)
|
||||
|
|
@ -95,12 +104,24 @@ class CreateRender(plugin.Creator):
|
|||
if not deadline_settings["enabled"]:
|
||||
self.deadline_servers = {}
|
||||
return
|
||||
project_settings = get_project_settings(Session["AVALON_PROJECT"])
|
||||
self._project_settings = get_project_settings(
|
||||
Session["AVALON_PROJECT"])
|
||||
|
||||
# project_settings/maya/create/CreateRender/aov_separator
|
||||
try:
|
||||
self.aov_separator = self._aov_chars[(
|
||||
self._project_settings["maya"]
|
||||
["create"]
|
||||
["CreateRender"]
|
||||
["aov_separator"]
|
||||
)]
|
||||
except KeyError:
|
||||
self.aov_separator = "_"
|
||||
|
||||
try:
|
||||
default_servers = deadline_settings["deadline_urls"]
|
||||
project_servers = (
|
||||
project_settings["deadline"]
|
||||
["deadline_servers"]
|
||||
self._project_settings["deadline"]["deadline_servers"]
|
||||
)
|
||||
self.deadline_servers = {
|
||||
k: default_servers[k]
|
||||
|
|
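In the constructor above, the AOV separator character is resolved from `project_settings/maya/create/CreateRender/aov_separator` and falls back to an underscore when any key in the chain is missing. The same lookup as a standalone sketch:

    _AOV_CHARS = {"dot": ".", "dash": "-", "underscore": "_"}

    def resolve_aov_separator(project_settings):
        """Map the settings enum to the actual character, defaulting to '_'."""
        try:
            key = project_settings["maya"]["create"]["CreateRender"]["aov_separator"]
            return _AOV_CHARS[key]
        except KeyError:
            return "_"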
@ -409,8 +430,10 @@ class CreateRender(plugin.Creator):
|
|||
renderer (str): Renderer name.
|
||||
|
||||
"""
|
||||
prefix = self._image_prefixes[renderer]
|
||||
prefix = prefix.replace("{aov_separator}", self.aov_separator)
|
||||
cmds.setAttr(self._image_prefix_nodes[renderer],
|
||||
self._image_prefixes[renderer],
|
||||
prefix,
|
||||
type="string")
|
||||
|
||||
asset = get_asset()
|
||||
|
|
@ -446,37 +469,37 @@ class CreateRender(plugin.Creator):
|
|||
|
||||
self._set_global_output_settings()
|
||||
|
||||
@staticmethod
|
||||
def _set_renderer_option(renderer_node, arg=None, value=None):
|
||||
# type: (str, str, str) -> str
|
||||
"""Set option on renderer node.
|
||||
|
||||
If renderer settings node doesn't exists, it is created first.
|
||||
|
||||
Args:
|
||||
renderer_node (str): Renderer name.
|
||||
arg (str, optional): Argument name.
|
||||
value (str, optional): Argument value.
|
||||
|
||||
Returns:
|
||||
str: Renderer settings node.
|
||||
|
||||
"""
|
||||
settings = cmds.ls(type=renderer_node)
|
||||
result = settings[0] if settings else cmds.createNode(renderer_node)
|
||||
cmds.setAttr(arg.format(result), value)
|
||||
return result
|
||||
|
||||
def _set_vray_settings(self, asset):
|
||||
# type: (dict) -> None
|
||||
"""Sets important settings for Vray."""
|
||||
node = self._set_renderer_option(
|
||||
"VRaySettingsNode", "{}.fileNameRenderElementSeparator", "_"
|
||||
)
|
||||
settings = cmds.ls(type="VRaySettingsNode")
|
||||
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
|
||||
|
||||
# set separator
|
||||
# set it in vray menu
|
||||
if cmds.optionMenuGrp("vrayRenderElementSeparator", exists=True,
|
||||
q=True):
|
||||
items = cmds.optionMenuGrp(
|
||||
"vrayRenderElementSeparator", ill=True, query=True)
|
||||
|
||||
separators = [cmds.menuItem(i, label=True, query=True) for i in items] # noqa: E501
|
||||
try:
|
||||
sep_idx = separators.index(self.aov_separator)
|
||||
except ValueError:
|
||||
raise CreatorError(
|
||||
"AOV character {} not in {}".format(
|
||||
self.aov_separator, separators))
|
||||
|
||||
cmds.optionMenuGrp(
|
||||
"vrayRenderElementSeparator", sl=sep_idx + 1, edit=True)
|
||||
cmds.setAttr(
|
||||
"{}.fileNameRenderElementSeparator".format(node),
|
||||
self.aov_separator,
|
||||
type="string"
|
||||
)
|
||||
# set format to exr
|
||||
cmds.setAttr(
|
||||
"{}.imageFormatStr".format(node), 5)
|
||||
"{}.imageFormatStr".format(node), "exr", type="string")
|
||||
|
||||
# animType
|
||||
cmds.setAttr(
|
||||
|
|
|
|||
|
|
@ -532,7 +532,7 @@ class CollectLook(pyblish.api.InstancePlugin):
|
|||
color_space = cmds.getAttr(color_space_attr)
|
||||
except ValueError:
|
||||
# node doesn't have colorspace attribute
|
||||
color_space = "raw"
|
||||
color_space = "Raw"
|
||||
# Compare with the computed file path, e.g. the one with the <UDIM>
|
||||
# pattern in it, to generate some logging information about this
|
||||
# difference
|
||||
|
|
|
|||
|
|
@ -41,6 +41,7 @@ Provides:
|
|||
|
||||
import re
|
||||
import os
|
||||
import platform
|
||||
import json
|
||||
|
||||
from maya import cmds
|
||||
|
|
@ -61,6 +62,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
|
|||
label = "Collect Render Layers"
|
||||
sync_workfile_version = False
|
||||
|
||||
_aov_chars = {
|
||||
"dot": ".",
|
||||
"dash": "-",
|
||||
"underscore": "_"
|
||||
}
|
||||
|
||||
def process(self, context):
|
||||
"""Entry point to collector."""
|
||||
render_instance = None
|
||||
|
|
@ -166,6 +173,18 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
|
|||
if renderer.startswith("renderman"):
|
||||
renderer = "renderman"
|
||||
|
||||
try:
|
||||
aov_separator = self._aov_chars[(
|
||||
context.data["project_settings"]
|
||||
["create"]
|
||||
["CreateRender"]
|
||||
["aov_separator"]
|
||||
)]
|
||||
except KeyError:
|
||||
aov_separator = "_"
|
||||
|
||||
render_instance.data["aovSeparator"] = aov_separator
|
||||
|
||||
# return all expected files for all cameras and aovs in given
|
||||
# frame range
|
||||
layer_render_products = get_layer_render_products(
|
||||
|
|
@ -255,12 +274,28 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
|
|||
common_publish_meta_path, part)
|
||||
if part == expected_layer_name:
|
||||
break
|
||||
|
||||
# TODO: replace this terrible linux hotfix with real solution :)
|
||||
if platform.system().lower() in ["linux", "darwin"]:
|
||||
common_publish_meta_path = "/" + common_publish_meta_path
|
||||
|
||||
self.log.info(
|
||||
"Publish meta path: {}".format(common_publish_meta_path))
|
||||
|
||||
self.log.info(full_exp_files)
|
||||
self.log.info("collecting layer: {}".format(layer_name))
|
||||
# Get layer specific settings, might be overrides
|
||||
|
||||
try:
|
||||
aov_separator = self._aov_chars[(
|
||||
context.data["project_settings"]
|
||||
["create"]
|
||||
["CreateRender"]
|
||||
["aov_separator"]
|
||||
)]
|
||||
except KeyError:
|
||||
aov_separator = "_"
|
||||
|
||||
data = {
|
||||
"subset": expected_layer_name,
|
||||
"attachTo": attach_to,
|
||||
|
|
@ -302,7 +337,8 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
|
|||
"convertToScanline") or False,
|
||||
"useReferencedAovs": render_instance.data.get(
|
||||
"useReferencedAovs") or render_instance.data.get(
|
||||
"vrayUseReferencedAovs") or False
|
||||
"vrayUseReferencedAovs") or False,
|
||||
"aovSeparator": aov_separator
|
||||
}
|
||||
|
||||
if deadline_url:
|
||||
|
|
|
|||
|
|
@ -332,10 +332,10 @@ class ExtractLook(openpype.api.Extractor):
|
|||
if do_maketx and files_metadata[filepath]["color_space"].lower() == "srgb": # noqa: E501
|
||||
linearize = True
|
||||
# set its file node to 'raw' as tx will be linearized
|
||||
files_metadata[filepath]["color_space"] = "raw"
|
||||
files_metadata[filepath]["color_space"] = "Raw"
|
||||
|
||||
if do_maketx:
|
||||
color_space = "raw"
|
||||
# if do_maketx:
|
||||
# color_space = "Raw"
|
||||
|
||||
source, mode, texture_hash = self._process_texture(
|
||||
filepath,
|
||||
|
|
@ -383,11 +383,11 @@ class ExtractLook(openpype.api.Extractor):
|
|||
color_space = cmds.getAttr(color_space_attr)
|
||||
except ValueError:
|
||||
# node doesn't have color space attribute
|
||||
color_space = "raw"
|
||||
color_space = "Raw"
|
||||
else:
|
||||
if files_metadata[source]["color_space"] == "raw":
|
||||
if files_metadata[source]["color_space"] == "Raw":
|
||||
# set color space to raw if we linearized it
|
||||
color_space = "raw"
|
||||
color_space = "Raw"
|
||||
# Remap file node filename to destination
|
||||
remap[color_space_attr] = color_space
|
||||
attr = resource["attribute"]
|
||||
|
|
|
|||
|
|
@ -55,13 +55,19 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
|
||||
ImagePrefixTokens = {
|
||||
|
||||
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
|
||||
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
|
||||
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
|
||||
'vray': 'maya/<Scene>/<Layer>/<Layer>',
|
||||
'renderman': '<layer>_<aov>.<f4>.<ext>'
|
||||
'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>' # noqa
|
||||
}
|
||||
|
||||
redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>_<RenderPass>"
|
||||
_aov_chars = {
|
||||
"dot": ".",
|
||||
"dash": "-",
|
||||
"underscore": "_"
|
||||
}
|
||||
|
||||
redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>{aov_separator}<RenderPass>" # noqa: E501
|
||||
|
||||
# WARNING: There is bug? in renderman, translating <scene> token
|
||||
# to something left behind mayas default image prefix. So instead
|
||||
|
|
@ -107,6 +113,9 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
|
||||
anim_override = lib.get_attr_in_layer("defaultRenderGlobals.animation",
|
||||
layer=layer)
|
||||
|
||||
prefix = prefix.replace(
|
||||
"{aov_separator}", instance.data.get("aovSeparator", "_"))
|
||||
if not anim_override:
|
||||
invalid = True
|
||||
cls.log.error("Animation needs to be enabled. Use the same "
|
||||
|
|
@ -138,12 +147,16 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
else:
|
||||
node = vray_settings[0]
|
||||
|
||||
if cmds.getAttr(
|
||||
"{}.fileNameRenderElementSeparator".format(node)) != "_":
|
||||
invalid = False
|
||||
scene_sep = cmds.getAttr(
|
||||
"{}.fileNameRenderElementSeparator".format(node))
|
||||
if scene_sep != instance.data.get("aovSeparator", "_"):
|
||||
cls.log.error("AOV separator is not set correctly.")
|
||||
invalid = True
|
||||
|
||||
if renderer == "redshift":
|
||||
redshift_AOV_prefix = cls.redshift_AOV_prefix.replace(
|
||||
"{aov_separator}", instance.data.get("aovSeparator", "_")
|
||||
)
|
||||
if re.search(cls.R_AOV_TOKEN, prefix):
|
||||
invalid = True
|
||||
cls.log.error(("Do not use AOV token [ {} ] - "
|
||||
|
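The expected prefix templates now carry an `{aov_separator}` placeholder, which the validator expands with the separator collected on the instance before comparing against the scene. A minimal sketch of the substitution:

    def expand_prefix(template, instance_data):
        """Fill the {aov_separator} token, defaulting to an underscore."""
        return template.replace(
            "{aov_separator}", instance_data.get("aovSeparator", "_"))

    expand_prefix("<RenderLayer>{aov_separator}<RenderPass>", {"aovSeparator": "."})
    # -> "<RenderLayer>.<RenderPass>"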
|
@ -155,7 +168,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
for aov in rs_aovs:
|
||||
aov_prefix = cmds.getAttr("{}.filePrefix".format(aov))
|
||||
# check their image prefix
|
||||
if aov_prefix != cls.redshift_AOV_prefix:
|
||||
if aov_prefix != redshift_AOV_prefix:
|
||||
cls.log.error(("AOV ({}) image prefix is not set "
|
||||
"correctly {} != {}").format(
|
||||
cmds.getAttr("{}.name".format(aov)),
|
||||
|
|
@ -181,7 +194,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
file_prefix = cmds.getAttr("rmanGlobals.imageFileFormat")
|
||||
dir_prefix = cmds.getAttr("rmanGlobals.imageOutputDir")
|
||||
|
||||
if file_prefix.lower() != cls.ImagePrefixTokens[renderer].lower():
|
||||
if file_prefix.lower() != prefix.lower():
|
||||
invalid = True
|
||||
cls.log.error("Wrong image prefix [ {} ]".format(file_prefix))
|
||||
|
||||
|
|
@ -198,18 +211,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
cls.log.error("Wrong image prefix [ {} ] - "
|
||||
"You can't use '<renderpass>' token "
|
||||
"with merge AOVs turned on".format(prefix))
|
||||
else:
|
||||
if not re.search(cls.R_AOV_TOKEN, prefix):
|
||||
invalid = True
|
||||
cls.log.error("Wrong image prefix [ {} ] - "
|
||||
"doesn't have: '<renderpass>' or "
|
||||
"token".format(prefix))
|
||||
elif not re.search(cls.R_AOV_TOKEN, prefix):
|
||||
invalid = True
|
||||
cls.log.error("Wrong image prefix [ {} ] - "
|
||||
"doesn't have: '<renderpass>' or "
|
||||
"token".format(prefix))
|
||||
|
||||
# prefix check
|
||||
if prefix.lower() != cls.ImagePrefixTokens[renderer].lower():
|
||||
default_prefix = cls.ImagePrefixTokens[renderer]
|
||||
default_prefix = default_prefix.replace(
|
||||
"{aov_separator}", instance.data.get("aovSeparator", "_"))
|
||||
if prefix.lower() != default_prefix.lower():
|
||||
cls.log.warning("warning: prefix differs from "
|
||||
"recommended {}".format(
|
||||
cls.ImagePrefixTokens[renderer]))
|
||||
default_prefix))
|
||||
|
||||
if padding != cls.DEFAULT_PADDING:
|
||||
invalid = True
|
||||
|
|
@ -257,9 +272,14 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
|
||||
@classmethod
|
||||
def repair(cls, instance):
|
||||
|
||||
renderer = instance.data['renderer']
|
||||
layer_node = instance.data['setMembers']
|
||||
redshift_AOV_prefix = cls.redshift_AOV_prefix.replace(
|
||||
"{aov_separator}", instance.data.get("aovSeparator", "_")
|
||||
)
|
||||
default_prefix = cls.ImagePrefixTokens[renderer].replace(
|
||||
"{aov_separator}", instance.data.get("aovSeparator", "_")
|
||||
)
|
||||
|
||||
with lib.renderlayer(layer_node):
|
||||
default = lib.RENDER_ATTRS['default']
|
||||
|
|
@ -270,7 +290,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
node = render_attrs["node"]
|
||||
prefix_attr = render_attrs["prefix"]
|
||||
|
||||
fname_prefix = cls.ImagePrefixTokens[renderer]
|
||||
fname_prefix = default_prefix
|
||||
cmds.setAttr("{}.{}".format(node, prefix_attr),
|
||||
fname_prefix, type="string")
|
||||
|
||||
|
|
@ -281,7 +301,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
else:
|
||||
# renderman handles stuff differently
|
||||
cmds.setAttr("rmanGlobals.imageFileFormat",
|
||||
cls.ImagePrefixTokens[renderer],
|
||||
default_prefix,
|
||||
type="string")
|
||||
cmds.setAttr("rmanGlobals.imageOutputDir",
|
||||
cls.RendermanDirPrefix,
|
||||
|
|
@ -294,10 +314,13 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
else:
|
||||
node = vray_settings[0]
|
||||
|
||||
cmds.optionMenuGrp("vrayRenderElementSeparator",
|
||||
v=instance.data.get("aovSeparator", "_"))
|
||||
cmds.setAttr(
|
||||
"{}.fileNameRenderElementSeparator".format(
|
||||
node),
|
||||
"_"
|
||||
instance.data.get("aovSeparator", "_"),
|
||||
type="string"
|
||||
)
|
||||
|
||||
if renderer == "redshift":
|
||||
|
|
@ -306,7 +329,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
|
|||
for aov in rs_aovs:
|
||||
# fix AOV prefixes
|
||||
cmds.setAttr(
|
||||
"{}.filePrefix".format(aov), cls.redshift_AOV_prefix)
|
||||
"{}.filePrefix".format(aov), redshift_AOV_prefix)
|
||||
# fix AOV file format
|
||||
default_ext = cmds.getAttr(
|
||||
"redshiftOptions.imageFormat", asString=True)
|
||||
|
|
|
|||
|
|
@ -18,7 +18,7 @@ from openpype.api import (
|
|||
BuildWorkfile,
|
||||
get_version_from_path,
|
||||
get_anatomy_settings,
|
||||
get_hierarchy,
|
||||
get_workdir_data,
|
||||
get_asset,
|
||||
get_current_project_settings,
|
||||
ApplicationManager
|
||||
|
|
@ -41,6 +41,10 @@ opnl.workfiles_launched = False
|
|||
opnl._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")
|
||||
|
||||
|
||||
def get_nuke_imageio_settings():
|
||||
return get_anatomy_settings(opnl.project_name)["imageio"]["nuke"]
|
||||
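The repeated `get_anatomy_settings(...)["imageio"]["nuke"]` lookups in this file are collapsed into the helper above, so the callers further down only pick the sub-key they need, for example:

    imageio_nodes = get_nuke_imageio_settings()["nodes"]["requiredNodes"]
    regex_inputs = get_nuke_imageio_settings()["regexInputs"]["inputs"]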
|
||||
|
||||
def get_created_node_imageio_setting(**kwarg):
|
||||
''' Get preset data for dataflow (fileType, compression, bitDepth)
|
||||
'''
|
||||
|
|
@ -51,8 +55,7 @@ def get_created_node_imageio_setting(**kwarg):
|
|||
assert any([creator, nodeclass]), nuke.message(
|
||||
"`{}`: Missing mandatory kwargs `host`, `cls`".format(__file__))
|
||||
|
||||
imageio = get_anatomy_settings(opnl.project_name)["imageio"]
|
||||
imageio_nodes = imageio["nuke"]["nodes"]["requiredNodes"]
|
||||
imageio_nodes = get_nuke_imageio_settings()["nodes"]["requiredNodes"]
|
||||
|
||||
imageio_node = None
|
||||
for node in imageio_nodes:
|
||||
|
|
@ -70,8 +73,7 @@ def get_imageio_input_colorspace(filename):
|
|||
''' Get input file colorspace based on regex in settings.
|
||||
'''
|
||||
imageio_regex_inputs = (
|
||||
get_anatomy_settings(opnl.project_name)
|
||||
["imageio"]["nuke"]["regexInputs"]["inputs"])
|
||||
get_nuke_imageio_settings()["regexInputs"]["inputs"])
|
||||
|
||||
preset_clrsp = None
|
||||
for regexInput in imageio_regex_inputs:
|
||||
|
|
@ -268,15 +270,21 @@ def format_anatomy(data):
|
|||
if not version:
|
||||
file = script_name()
|
||||
data["version"] = get_version_from_path(file)
|
||||
project_document = io.find_one({"type": "project"})
|
||||
|
||||
project_doc = io.find_one({"type": "project"})
|
||||
asset_doc = io.find_one({
|
||||
"type": "asset",
|
||||
"name": data["avalon"]["asset"]
|
||||
})
|
||||
task_name = os.environ["AVALON_TASK"]
|
||||
host_name = os.environ["AVALON_APP"]
|
||||
context_data = get_workdir_data(
|
||||
project_doc, asset_doc, task_name, host_name
|
||||
)
|
||||
data.update(context_data)
|
||||
data.update({
|
||||
"subset": data["avalon"]["subset"],
|
||||
"asset": data["avalon"]["asset"],
|
||||
"task": os.environ["AVALON_TASK"],
|
||||
"family": data["avalon"]["family"],
|
||||
"project": {"name": project_document["name"],
|
||||
"code": project_document["data"].get("code", '')},
|
||||
"hierarchy": get_hierarchy(),
|
||||
"frame": "#" * padding,
|
||||
})
|
||||
return anatomy.format(data)
|
||||
|
|
@ -547,8 +555,7 @@ def add_rendering_knobs(node, farm=True):
|
|||
Return:
|
||||
node (obj): with added knobs
|
||||
'''
|
||||
knob_options = [
|
||||
"Use existing frames", "Local"]
|
||||
knob_options = ["Use existing frames", "Local"]
|
||||
if farm:
|
||||
knob_options.append("On farm")
|
||||
|
||||
|
|
@ -906,8 +913,7 @@ class WorkfileSettings(object):
|
|||
''' Setting colorpace following presets
|
||||
'''
|
||||
# get imageio
|
||||
imageio = get_anatomy_settings(opnl.project_name)["imageio"]
|
||||
nuke_colorspace = imageio["nuke"]
|
||||
nuke_colorspace = get_nuke_imageio_settings()
|
||||
|
||||
try:
|
||||
self.set_root_colorspace(nuke_colorspace["workfile"])
|
||||
|
|
@ -1164,386 +1170,6 @@ def get_write_node_template_attr(node):
|
|||
return anlib.fix_data_for_node_create(correct_data)
|
||||
|
||||
|
||||
class ExporterReview:
|
||||
"""
|
||||
Base class object for generating review data from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
_temp_nodes = []
|
||||
data = dict({
|
||||
"representations": list()
|
||||
})
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance
|
||||
):
|
||||
|
||||
self.log = klass.log
|
||||
self.instance = instance
|
||||
self.path_in = self.instance.data.get("path", None)
|
||||
self.staging_dir = self.instance.data["stagingDir"]
|
||||
self.collection = self.instance.data.get("collection", None)
|
||||
|
||||
def get_file_info(self):
|
||||
if self.collection:
|
||||
self.log.debug("Collection: `{}`".format(self.collection))
|
||||
# get path
|
||||
self.fname = os.path.basename(self.collection.format(
|
||||
"{head}{padding}{tail}"))
|
||||
self.fhead = self.collection.format("{head}")
|
||||
|
||||
# get first and last frame
|
||||
self.first_frame = min(self.collection.indexes)
|
||||
self.last_frame = max(self.collection.indexes)
|
||||
if "slate" in self.instance.data["families"]:
|
||||
self.first_frame += 1
|
||||
else:
|
||||
self.fname = os.path.basename(self.path_in)
|
||||
self.fhead = os.path.splitext(self.fname)[0] + "."
|
||||
self.first_frame = self.instance.data.get("frameStartHandle", None)
|
||||
self.last_frame = self.instance.data.get("frameEndHandle", None)
|
||||
|
||||
if "#" in self.fhead:
|
||||
self.fhead = self.fhead.replace("#", "")[:-1]
|
||||
|
||||
def get_representation_data(self, tags=None, range=False):
|
||||
add_tags = []
|
||||
if tags:
|
||||
add_tags = tags
|
||||
|
||||
repre = {
|
||||
'name': self.name,
|
||||
'ext': self.ext,
|
||||
'files': self.file,
|
||||
"stagingDir": self.staging_dir,
|
||||
"tags": [self.name.replace("_", "-")] + add_tags
|
||||
}
|
||||
|
||||
if range:
|
||||
repre.update({
|
||||
"frameStart": self.first_frame,
|
||||
"frameEnd": self.last_frame,
|
||||
})
|
||||
|
||||
self.data["representations"].append(repre)
|
||||
|
||||
def get_view_process_node(self):
|
||||
"""
|
||||
Will get any active view process.
|
||||
|
||||
Arguments:
|
||||
self (class): in object definition
|
||||
|
||||
Returns:
|
||||
nuke.Node: copy node of Input Process node
|
||||
"""
|
||||
anlib.reset_selection()
|
||||
ipn_orig = None
|
||||
for v in nuke.allNodes(filter="Viewer"):
|
||||
ip = v['input_process'].getValue()
|
||||
ipn = v['input_process_node'].getValue()
|
||||
if "VIEWER_INPUT" not in ipn and ip:
|
||||
ipn_orig = nuke.toNode(ipn)
|
||||
ipn_orig.setSelected(True)
|
||||
|
||||
if ipn_orig:
|
||||
# copy selected to clipboard
|
||||
nuke.nodeCopy('%clipboard%')
|
||||
# reset selection
|
||||
anlib.reset_selection()
|
||||
# paste node and selection is on it only
|
||||
nuke.nodePaste('%clipboard%')
|
||||
# assign to variable
|
||||
ipn = nuke.selectedNode()
|
||||
|
||||
return ipn
|
||||
|
||||
def clean_nodes(self):
|
||||
for node in self._temp_nodes:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
|
||||
class ExporterReviewLut(ExporterReview):
|
||||
"""
|
||||
Generator object for review lut from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
cube_size=None,
|
||||
lut_size=None,
|
||||
lut_style=None):
|
||||
# initialize parent class
|
||||
ExporterReview.__init__(self, klass, instance)
|
||||
self._temp_nodes = []
|
||||
|
||||
# deal with now lut defined in viewer lut
|
||||
if hasattr(klass, "viewer_lut_raw"):
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
else:
|
||||
self.viewer_lut_raw = False
|
||||
|
||||
self.name = name or "baked_lut"
|
||||
self.ext = ext or "cube"
|
||||
self.cube_size = cube_size or 32
|
||||
self.lut_size = lut_size or 1024
|
||||
self.lut_style = lut_style or "linear"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def generate_lut(self):
|
||||
# ---------- start nodes creation
|
||||
|
||||
# CMSTestPattern
|
||||
cms_node = nuke.createNode("CMSTestPattern")
|
||||
cms_node["cube_size"].setValue(self.cube_size)
|
||||
# connect
|
||||
self._temp_nodes.append(cms_node)
|
||||
self.previous_node = cms_node
|
||||
self.log.debug("CMSTestPattern... `{}`".format(self._temp_nodes))
|
||||
|
||||
# Node View Process
|
||||
ipn = self.get_view_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug("ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# GenerateLUT
|
||||
gen_lut_node = nuke.createNode("GenerateLUT")
|
||||
gen_lut_node["file"].setValue(self.path)
|
||||
gen_lut_node["file_type"].setValue(".{}".format(self.ext))
|
||||
gen_lut_node["lut1d"].setValue(self.lut_size)
|
||||
gen_lut_node["style1d"].setValue(self.lut_style)
|
||||
# connect
|
||||
gen_lut_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(gen_lut_node)
|
||||
self.log.debug("GenerateLUT... `{}`".format(self._temp_nodes))
|
||||
|
||||
# ---------- end nodes creation
|
||||
|
||||
# Export lut file
|
||||
nuke.execute(
|
||||
gen_lut_node.name(),
|
||||
int(self.first_frame),
|
||||
int(self.first_frame))
|
||||
|
||||
self.log.info("Exported...")
|
||||
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data()
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
|
||||
return self.data
|
||||
|
||||
|
||||
class ExporterReviewMov(ExporterReview):
|
||||
"""
|
||||
Metaclass for generating review mov files
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
):
|
||||
# initialize parent class
|
||||
ExporterReview.__init__(self, klass, instance)
|
||||
|
||||
# passing presets for nodes to self
|
||||
if hasattr(klass, "nodes"):
|
||||
self.nodes = klass.nodes
|
||||
else:
|
||||
self.nodes = {}
|
||||
|
||||
# deal with now lut defined in viewer lut
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
self.bake_colorspace_fallback = klass.bake_colorspace_fallback
|
||||
self.bake_colorspace_main = klass.bake_colorspace_main
|
||||
self.write_colorspace = instance.data["colorspace"]
|
||||
|
||||
self.name = name or "baked"
|
||||
self.ext = ext or "mov"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def render(self, render_node_name):
|
||||
self.log.info("Rendering... ")
|
||||
# Render Write node
|
||||
nuke.execute(
|
||||
render_node_name,
|
||||
int(self.first_frame),
|
||||
int(self.last_frame))
|
||||
|
||||
self.log.info("Rendered...")
|
||||
|
||||
def save_file(self):
|
||||
import shutil
|
||||
with anlib.maintained_selection():
|
||||
self.log.info("Saving nodes as file... ")
|
||||
# create nk path
|
||||
path = os.path.splitext(self.path)[0] + ".nk"
|
||||
# save file to the path
|
||||
shutil.copyfile(self.instance.context.data["currentFile"], path)
|
||||
|
||||
self.log.info("Nodes exported...")
|
||||
return path
|
||||
|
||||
def generate_mov(self, farm=False):
|
||||
# ---------- start nodes creation
|
||||
|
||||
# Read node
|
||||
r_node = nuke.createNode("Read")
|
||||
r_node["file"].setValue(self.path_in)
|
||||
r_node["first"].setValue(self.first_frame)
|
||||
r_node["origfirst"].setValue(self.first_frame)
|
||||
r_node["last"].setValue(self.last_frame)
|
||||
r_node["origlast"].setValue(self.last_frame)
|
||||
r_node["colorspace"].setValue(self.write_colorspace)
|
||||
|
||||
# connect
|
||||
self._temp_nodes.append(r_node)
|
||||
self.previous_node = r_node
|
||||
self.log.debug("Read... `{}`".format(self._temp_nodes))
|
||||
|
||||
# View Process node
|
||||
ipn = self.get_view_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug("ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
colorspaces = [
|
||||
self.bake_colorspace_main, self.bake_colorspace_fallback
|
||||
]
|
||||
|
||||
if any(colorspaces):
|
||||
# OCIOColorSpace with controled output
|
||||
dag_node = nuke.createNode("OCIOColorSpace")
|
||||
self._temp_nodes.append(dag_node)
|
||||
for c in colorspaces:
|
||||
test = dag_node["out_colorspace"].setValue(str(c))
|
||||
if test:
|
||||
self.log.info(
|
||||
"Baking in colorspace... `{}`".format(c))
|
||||
break
|
||||
|
||||
if not test:
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
else:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# Write node
|
||||
write_node = nuke.createNode("Write")
|
||||
self.log.debug("Path: {}".format(self.path))
|
||||
write_node["file"].setValue(self.path)
|
||||
write_node["file_type"].setValue(self.ext)
|
||||
|
||||
# Knobs `meta_codec` and `mov64_codec` are not available on centos.
|
||||
# TODO change this to use conditions, if possible.
|
||||
try:
|
||||
write_node["meta_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`meta_codec` knob was not found")
|
||||
|
||||
try:
|
||||
write_node["mov64_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`mov64_codec` knob was not found")
|
||||
write_node["mov64_write_timecode"].setValue(1)
|
||||
write_node["raw"].setValue(1)
|
||||
# connect
|
||||
write_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(write_node)
|
||||
self.log.debug("Write... `{}`".format(self._temp_nodes))
|
||||
# ---------- end nodes creation
|
||||
|
||||
# ---------- render or save to nk
|
||||
if farm:
|
||||
nuke.scriptSave()
|
||||
path_nk = self.save_file()
|
||||
self.data.update({
|
||||
"bakeScriptPath": path_nk,
|
||||
"bakeWriteNodeName": write_node.name(),
|
||||
"bakeRenderPath": self.path
|
||||
})
|
||||
else:
|
||||
self.render(write_node.name())
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"],
|
||||
range=True
|
||||
)
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
nuke.scriptSave()
|
||||
return self.data
|
||||
|
||||
|
||||
def get_dependent_nodes(nodes):
|
||||
"""Get all dependent nodes connected to the list of nodes.
|
||||
|
||||
|
|
@ -1654,6 +1280,8 @@ def launch_workfiles_app():
|
|||
from openpype.lib import (
|
||||
env_value_to_bool
|
||||
)
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
# get all imortant settings
|
||||
open_at_start = env_value_to_bool(
|
||||
env_key="OPENPYPE_WORKFILE_TOOL_ON_START",
|
||||
|
|
@ -1665,7 +1293,8 @@ def launch_workfiles_app():
|
|||
|
||||
if not opnl.workfiles_launched:
|
||||
opnl.workfiles_launched = True
|
||||
host_tools.show_workfiles()
|
||||
main_window = get_main_window()
|
||||
host_tools.show_workfiles(parent=main_window)
|
||||
|
||||
|
||||
def process_workfile_builder():
|
||||
@ -1,16 +1,21 @@
|
|||
import os
|
||||
import nuke
|
||||
from avalon.api import Session
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
from .lib import WorkfileSettings
|
||||
from openpype.api import Logger, BuildWorkfile, get_current_project_settings
|
||||
from openpype.tools.utils import host_tools
|
||||
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
menu_label = os.environ["AVALON_LABEL"]
|
||||
|
||||
|
||||
def install():
|
||||
main_window = get_main_window()
|
||||
menubar = nuke.menu("Nuke")
|
||||
menu = menubar.findItem(menu_label)
|
||||
|
||||
|
|
@ -25,7 +30,7 @@ def install():
|
|||
menu.removeItem(rm_item[1].name())
|
||||
menu.addCommand(
|
||||
name,
|
||||
host_tools.show_workfiles,
|
||||
lambda: host_tools.show_workfiles(parent=main_window),
|
||||
index=2
|
||||
)
|
||||
menu.addSeparator(index=3)
|
||||
|
|
@ -88,7 +93,7 @@ def install():
|
|||
menu.addSeparator()
|
||||
menu.addCommand(
|
||||
"Experimental tools...",
|
||||
host_tools.show_experimental_tools_dialog
|
||||
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
|
||||
)
|
||||
|
||||
# adding shortcuts
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import os
|
||||
import random
|
||||
import string
|
||||
|
||||
|
|
@ -49,8 +50,11 @@ def get_review_presets_config():
|
|||
|
||||
class NukeLoader(api.Loader):
|
||||
container_id_knob = "containerId"
|
||||
container_id = ''.join(random.choice(
|
||||
string.ascii_uppercase + string.digits) for _ in range(10))
|
||||
container_id = None
|
||||
|
||||
def reset_container_id(self):
|
||||
self.container_id = ''.join(random.choice(
|
||||
string.ascii_uppercase + string.digits) for _ in range(10))
|
||||
|
||||
def get_container_id(self, node):
|
||||
id_knob = node.knobs().get(self.container_id_knob)
|
||||
|
|
@ -67,13 +71,16 @@ class NukeLoader(api.Loader):
|
|||
source_id = self.get_container_id(node)
|
||||
|
||||
if source_id:
|
||||
node[self.container_id_knob].setValue(self.container_id)
|
||||
node[self.container_id_knob].setValue(source_id)
|
||||
else:
|
||||
HIDEN_FLAG = 0x00040000
|
||||
_knob = anlib.Knobby(
|
||||
"String_Knob",
|
||||
self.container_id,
|
||||
flags=[nuke.READ_ONLY, HIDEN_FLAG])
|
||||
flags=[
|
||||
nuke.READ_ONLY,
|
||||
HIDEN_FLAG
|
||||
])
|
||||
knob = _knob.create(self.container_id_knob)
|
||||
node.addKnob(knob)
|
||||
|
||||
|
|
@ -94,3 +101,415 @@ class NukeLoader(api.Loader):
|
|||
nuke.delete(member)
|
||||
|
||||
return dependent_nodes
|
||||
|
||||
|
||||
class ExporterReview(object):
|
||||
"""
|
||||
Base class object for generating review data from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
data = None
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance
|
||||
):
|
||||
|
||||
self.log = klass.log
|
||||
self.instance = instance
|
||||
self.path_in = self.instance.data.get("path", None)
|
||||
self.staging_dir = self.instance.data["stagingDir"]
|
||||
self.collection = self.instance.data.get("collection", None)
|
||||
self.data = dict({
|
||||
"representations": list()
|
||||
})
|
||||
|
||||
def get_file_info(self):
|
||||
if self.collection:
|
||||
self.log.debug("Collection: `{}`".format(self.collection))
|
||||
# get path
|
||||
self.fname = os.path.basename(self.collection.format(
|
||||
"{head}{padding}{tail}"))
|
||||
self.fhead = self.collection.format("{head}")
|
||||
|
||||
# get first and last frame
|
||||
self.first_frame = min(self.collection.indexes)
|
||||
self.last_frame = max(self.collection.indexes)
|
||||
if "slate" in self.instance.data["families"]:
|
||||
self.first_frame += 1
|
||||
else:
|
||||
self.fname = os.path.basename(self.path_in)
|
||||
self.fhead = os.path.splitext(self.fname)[0] + "."
|
||||
self.first_frame = self.instance.data.get("frameStartHandle", None)
|
||||
self.last_frame = self.instance.data.get("frameEndHandle", None)
|
||||
|
||||
if "#" in self.fhead:
|
||||
self.fhead = self.fhead.replace("#", "")[:-1]
|
||||
|
||||
def get_representation_data(self, tags=None, range=False):
|
||||
add_tags = tags or []
|
||||
|
||||
repre = {
|
||||
'outputName': self.name,
|
||||
'name': self.name,
|
||||
'ext': self.ext,
|
||||
'files': self.file,
|
||||
"stagingDir": self.staging_dir,
|
||||
"tags": [self.name.replace("_", "-")] + add_tags
|
||||
}
|
||||
|
||||
if range:
|
||||
repre.update({
|
||||
"frameStart": self.first_frame,
|
||||
"frameEnd": self.last_frame,
|
||||
})
|
||||
|
||||
self.data["representations"].append(repre)
|
||||
|
||||
def get_view_input_process_node(self):
|
||||
"""
|
||||
Will get any active view process.
|
||||
|
||||
Arguments:
|
||||
self (class): in object definition
|
||||
|
||||
Returns:
|
||||
nuke.Node: copy node of Input Process node
|
||||
"""
|
||||
anlib.reset_selection()
|
||||
ipn_orig = None
|
||||
for v in nuke.allNodes(filter="Viewer"):
|
||||
ip = v['input_process'].getValue()
|
||||
ipn = v['input_process_node'].getValue()
|
||||
if "VIEWER_INPUT" not in ipn and ip:
|
||||
ipn_orig = nuke.toNode(ipn)
|
||||
ipn_orig.setSelected(True)
|
||||
|
||||
if ipn_orig:
|
||||
# copy selected to clipboard
|
||||
nuke.nodeCopy('%clipboard%')
|
||||
# reset selection
|
||||
anlib.reset_selection()
|
||||
# paste node and selection is on it only
|
||||
nuke.nodePaste('%clipboard%')
|
||||
# assign to variable
|
||||
ipn = nuke.selectedNode()
|
||||
|
||||
return ipn
|
||||
|
||||
def get_imageio_baking_profile(self):
|
||||
from . import lib as opnlib
|
||||
nuke_imageio = opnlib.get_nuke_imageio_settings()
|
||||
|
||||
# TODO: this is only for backward compatibility; let's remove
|
||||
# this once all projects' anatomy is updated to the newer config
|
||||
if "baking" in nuke_imageio.keys():
|
||||
return nuke_imageio["baking"]["viewerProcess"]
|
||||
else:
|
||||
return nuke_imageio["viewer"]["viewerProcess"]
|
||||
|
||||
|
||||
|
||||
|
||||
class ExporterReviewLut(ExporterReview):
|
||||
"""
|
||||
Generator object for review lut from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
|
||||
"""
|
||||
_temp_nodes = []
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
cube_size=None,
|
||||
lut_size=None,
|
||||
lut_style=None):
|
||||
# initialize parent class
|
||||
super(ExporterReviewLut, self).__init__(klass, instance)
|
||||
|
||||
# use raw viewer lut setting from the plugin when it is defined
|
||||
if hasattr(klass, "viewer_lut_raw"):
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
else:
|
||||
self.viewer_lut_raw = False
|
||||
|
||||
self.name = name or "baked_lut"
|
||||
self.ext = ext or "cube"
|
||||
self.cube_size = cube_size or 32
|
||||
self.lut_size = lut_size or 1024
|
||||
self.lut_style = lut_style or "linear"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def clean_nodes(self):
|
||||
for node in self._temp_nodes:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
def generate_lut(self, **kwargs):
|
||||
bake_viewer_process = kwargs["bake_viewer_process"]
|
||||
bake_viewer_input_process_node = kwargs[
|
||||
"bake_viewer_input_process"]
|
||||
|
||||
# ---------- start nodes creation
|
||||
|
||||
# CMSTestPattern
|
||||
cms_node = nuke.createNode("CMSTestPattern")
|
||||
cms_node["cube_size"].setValue(self.cube_size)
|
||||
# connect
|
||||
self._temp_nodes.append(cms_node)
|
||||
self.previous_node = cms_node
|
||||
self.log.debug("CMSTestPattern... `{}`".format(self._temp_nodes))
|
||||
|
||||
if bake_viewer_process:
|
||||
# Node View Process
|
||||
if bake_viewer_input_process_node:
|
||||
ipn = self.get_view_input_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug(
|
||||
"ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# GenerateLUT
|
||||
gen_lut_node = nuke.createNode("GenerateLUT")
|
||||
gen_lut_node["file"].setValue(self.path)
|
||||
gen_lut_node["file_type"].setValue(".{}".format(self.ext))
|
||||
gen_lut_node["lut1d"].setValue(self.lut_size)
|
||||
gen_lut_node["style1d"].setValue(self.lut_style)
|
||||
# connect
|
||||
gen_lut_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(gen_lut_node)
|
||||
self.log.debug("GenerateLUT... `{}`".format(self._temp_nodes))
|
||||
|
||||
# ---------- end nodes creation
|
||||
|
||||
# Export lut file
|
||||
nuke.execute(
|
||||
gen_lut_node.name(),
|
||||
int(self.first_frame),
|
||||
int(self.first_frame))
|
||||
|
||||
self.log.info("Exported...")
|
||||
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data()
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
|
||||
return self.data
|
||||
|
||||
|
||||
class ExporterReviewMov(ExporterReview):
|
||||
"""
|
||||
Generator object for review mov files from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
_temp_nodes = {}
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
):
|
||||
# initialize parent class
|
||||
super(ExporterReviewMov, self).__init__(klass, instance)
|
||||
# passing presets for nodes to self
|
||||
self.nodes = klass.nodes if hasattr(klass, "nodes") else {}
|
||||
|
||||
# pass raw viewer lut setting from the plugin
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
self.write_colorspace = instance.data["colorspace"]
|
||||
|
||||
self.name = name or "baked"
|
||||
self.ext = ext or "mov"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def clean_nodes(self, node_name):
|
||||
for node in self._temp_nodes[node_name]:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes[node_name] = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
def render(self, render_node_name):
|
||||
self.log.info("Rendering... ")
|
||||
# Render Write node
|
||||
nuke.execute(
|
||||
render_node_name,
|
||||
int(self.first_frame),
|
||||
int(self.last_frame))
|
||||
|
||||
self.log.info("Rendered...")
|
||||
|
||||
def save_file(self):
|
||||
import shutil
|
||||
with anlib.maintained_selection():
|
||||
self.log.info("Saving nodes as file... ")
|
||||
# create nk path
|
||||
path = os.path.splitext(self.path)[0] + ".nk"
|
||||
# save file to the path
|
||||
shutil.copyfile(self.instance.context.data["currentFile"], path)
|
||||
|
||||
self.log.info("Nodes exported...")
|
||||
return path
|
||||
|
||||
def generate_mov(self, farm=False, **kwargs):
|
||||
bake_viewer_process = kwargs["bake_viewer_process"]
|
||||
bake_viewer_input_process_node = kwargs[
|
||||
"bake_viewer_input_process"]
|
||||
viewer_process_override = kwargs[
|
||||
"viewer_process_override"]
|
||||
|
||||
baking_view_profile = (
|
||||
viewer_process_override or self.get_imageio_baking_profile())
|
||||
|
||||
fps = self.instance.context.data["fps"]
|
||||
|
||||
self.log.debug(">> baking_view_profile `{}`".format(
|
||||
baking_view_profile))
|
||||
|
||||
add_tags = kwargs.get("add_tags", [])
|
||||
|
||||
self.log.info(
|
||||
"__ add_tags: `{0}`".format(add_tags))
|
||||
|
||||
subset = self.instance.data["subset"]
|
||||
self._temp_nodes[subset] = []
|
||||
# ---------- start nodes creation
|
||||
|
||||
# Read node
|
||||
r_node = nuke.createNode("Read")
|
||||
r_node["file"].setValue(self.path_in)
|
||||
r_node["first"].setValue(self.first_frame)
|
||||
r_node["origfirst"].setValue(self.first_frame)
|
||||
r_node["last"].setValue(self.last_frame)
|
||||
r_node["origlast"].setValue(self.last_frame)
|
||||
r_node["colorspace"].setValue(self.write_colorspace)
|
||||
|
||||
# connect
|
||||
self._temp_nodes[subset].append(r_node)
|
||||
self.previous_node = r_node
|
||||
self.log.debug("Read... `{}`".format(self._temp_nodes[subset]))
|
||||
|
||||
# only create colorspace baking if toggled on
|
||||
if bake_viewer_process:
|
||||
if bake_viewer_input_process_node:
|
||||
# View Process node
|
||||
ipn = self.get_view_input_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug(
|
||||
"ViewProcess... `{}`".format(
|
||||
self._temp_nodes[subset]))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
dag_node["view"].setValue(str(baking_view_profile))
|
||||
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(
|
||||
self._temp_nodes[subset]))
|
||||
|
||||
# Write node
|
||||
write_node = nuke.createNode("Write")
|
||||
self.log.debug("Path: {}".format(self.path))
|
||||
write_node["file"].setValue(str(self.path))
|
||||
write_node["file_type"].setValue(str(self.ext))
|
||||
|
||||
# Knobs `meta_codec` and `mov64_codec` are not available on CentOS.
|
||||
# TODO shouldn't this come from settings on outputs?
|
||||
try:
|
||||
write_node["meta_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`meta_codec` knob was not found")
|
||||
|
||||
try:
|
||||
write_node["mov64_codec"].setValue("ap4h")
|
||||
write_node["mov64_fps"].setValue(float(fps))
|
||||
except Exception:
|
||||
self.log.info("`mov64_codec` knob was not found")
|
||||
|
||||
write_node["mov64_write_timecode"].setValue(1)
|
||||
write_node["raw"].setValue(1)
|
||||
# connect
|
||||
write_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(write_node)
|
||||
self.log.debug("Write... `{}`".format(self._temp_nodes[subset]))
|
||||
# ---------- end nodes creation
|
||||
|
||||
# ---------- render or save to nk
|
||||
if farm:
|
||||
nuke.scriptSave()
|
||||
path_nk = self.save_file()
|
||||
self.data.update({
|
||||
"bakeScriptPath": path_nk,
|
||||
"bakeWriteNodeName": write_node.name(),
|
||||
"bakeRenderPath": self.path
|
||||
})
|
||||
else:
|
||||
self.render(write_node.name())
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"] + add_tags,
|
||||
range=True
|
||||
)
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
self.clean_nodes(subset)
|
||||
nuke.scriptSave()
|
||||
|
||||
return self.data
|
||||
|
|
|
|||
|
|
@ -4,7 +4,6 @@ import nukescripts
|
|||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from avalon.nuke import lib as anlib
|
||||
from avalon.nuke import containerise, update_container
|
||||
reload(pnlib)
|
||||
|
||||
class LoadBackdropNodes(api.Loader):
|
||||
"""Loading Published Backdrop nodes (workfile, nukenodes)"""
|
||||
|
|
|
|||
|
|
@ -67,6 +67,9 @@ class LoadClip(plugin.NukeLoader):
|
|||
|
||||
def load(self, context, name, namespace, options):
|
||||
|
||||
# reset container id so it is always unique for each instance
|
||||
self.reset_container_id()
|
||||
|
||||
is_sequence = len(context["representation"]["files"]) > 1
|
||||
|
||||
file = self.fname.replace("\\", "/")
|
||||
|
|
@ -251,8 +254,7 @@ class LoadClip(plugin.NukeLoader):
|
|||
"handleStart": str(self.handle_start),
|
||||
"handleEnd": str(self.handle_end),
|
||||
"fps": str(version_data.get("fps")),
|
||||
"author": version_data.get("author"),
|
||||
"outputDir": version_data.get("outputDir"),
|
||||
"author": version_data.get("author")
|
||||
}
|
||||
|
||||
# change color of read_node
|
||||
|
|
|
|||
|
|
@ -217,8 +217,7 @@ class LoadImage(api.Loader):
|
|||
"colorspace": version_data.get("colorspace"),
|
||||
"source": version_data.get("source"),
|
||||
"fps": str(version_data.get("fps")),
|
||||
"author": version_data.get("author"),
|
||||
"outputDir": version_data.get("outputDir"),
|
||||
"author": version_data.get("author")
|
||||
})
|
||||
|
||||
# change color of node
|
||||
|
|
|
|||
|
|
@ -135,8 +135,7 @@ class LinkAsGroup(api.Loader):
|
|||
"source": version["data"].get("source"),
|
||||
"handles": version["data"].get("handles"),
|
||||
"fps": version["data"].get("fps"),
|
||||
"author": version["data"].get("author"),
|
||||
"outputDir": version["data"].get("outputDir"),
|
||||
"author": version["data"].get("author")
|
||||
})
|
||||
|
||||
# Update the imprinted representation
|
||||
|
|
|
|||
|
|
@ -4,7 +4,6 @@ from openpype.hosts.nuke.api import lib as pnlib
|
|||
import nuke
|
||||
import os
|
||||
import openpype
|
||||
reload(pnlib)
|
||||
|
||||
class ExtractBackdropNode(openpype.api.Extractor):
|
||||
"""Extracting content of backdrop nodes
|
||||
|
|
|
|||
|
|
@ -1,16 +1,9 @@
|
|||
import os
|
||||
import pyblish.api
|
||||
from avalon.nuke import lib as anlib
|
||||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from openpype.hosts.nuke.api import plugin
|
||||
import openpype
|
||||
|
||||
try:
|
||||
from __builtin__ import reload
|
||||
except ImportError:
|
||||
from importlib import reload
|
||||
|
||||
reload(pnlib)
|
||||
|
||||
|
||||
class ExtractReviewDataLut(openpype.api.Extractor):
|
||||
"""Extracts movie and thumbnail with baked in luts
|
||||
|
|
@ -45,7 +38,7 @@ class ExtractReviewDataLut(openpype.api.Extractor):
|
|||
|
||||
# generate data
|
||||
with anlib.maintained_selection():
|
||||
exporter = pnlib.ExporterReviewLut(
|
||||
exporter = plugin.ExporterReviewLut(
|
||||
self, instance
|
||||
)
|
||||
data = exporter.generate_lut()
|
||||
|
|
|
|||
|
|
@ -1,16 +1,9 @@
|
|||
import os
|
||||
import pyblish.api
|
||||
from avalon.nuke import lib as anlib
|
||||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from openpype.hosts.nuke.api import plugin
|
||||
import openpype
|
||||
|
||||
try:
|
||||
from __builtin__ import reload
|
||||
except ImportError:
|
||||
from importlib import reload
|
||||
|
||||
reload(pnlib)
|
||||
|
||||
|
||||
class ExtractReviewDataMov(openpype.api.Extractor):
|
||||
"""Extracts movie and thumbnail with baked in luts
|
||||
|
|
@ -27,15 +20,15 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
|
||||
# presets
|
||||
viewer_lut_raw = None
|
||||
bake_colorspace_fallback = None
|
||||
bake_colorspace_main = None
|
||||
outputs = {}
|
||||
|
||||
def process(self, instance):
|
||||
families = instance.data["families"]
|
||||
task_type = instance.context.data["taskType"]
|
||||
self.log.info("Creating staging dir...")
|
||||
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = list()
|
||||
instance.data["representations"] = []
|
||||
|
||||
staging_dir = os.path.normpath(
|
||||
os.path.dirname(instance.data['path']))
|
||||
|
|
@ -45,28 +38,80 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
self.log.info(
|
||||
"StagingDir `{0}`...".format(instance.data["stagingDir"]))
|
||||
|
||||
self.log.info(self.outputs)
|
||||
|
||||
# generate data
|
||||
with anlib.maintained_selection():
|
||||
exporter = pnlib.ExporterReviewMov(
|
||||
self, instance)
|
||||
for o_name, o_data in self.outputs.items():
|
||||
f_families = o_data["filter"]["families"]
|
||||
f_task_types = o_data["filter"]["task_types"]
|
||||
|
||||
if "render.farm" in families:
|
||||
instance.data["families"].remove("review")
|
||||
data = exporter.generate_mov(farm=True)
|
||||
# test if family found in context
|
||||
test_families = any([
|
||||
# first check if the exact family set is matching
|
||||
# make sure only the intersection of the lists is checked
|
||||
bool(set(families).intersection(f_families)),
|
||||
# and if families are set at all
|
||||
# if not then return True because we want this preset
|
||||
# to be active if nothing is set
|
||||
bool(not f_families)
|
||||
])
|
||||
|
||||
self.log.debug(
|
||||
"_ data: {}".format(data))
|
||||
# test task types from filter
|
||||
test_task_types = any([
|
||||
# check if actual task type is defined in task types
|
||||
# set in preset's filter
|
||||
bool(task_type in f_task_types),
|
||||
# and if taskTypes are defined in preset filter
|
||||
# if not then return True, because we want this filter
|
||||
# to be active if no taskType is set
|
||||
bool(not f_task_types)
|
||||
])
|
||||
|
||||
instance.data.update({
|
||||
"bakeRenderPath": data.get("bakeRenderPath"),
|
||||
"bakeScriptPath": data.get("bakeScriptPath"),
|
||||
"bakeWriteNodeName": data.get("bakeWriteNodeName")
|
||||
})
|
||||
else:
|
||||
data = exporter.generate_mov()
|
||||
# we need all filters to be positive for this
|
||||
# preset to be activated
|
||||
test_all = all([
|
||||
test_families,
|
||||
test_task_types
|
||||
])
|
||||
|
||||
# assign to representations
|
||||
instance.data["representations"] += data["representations"]
|
||||
# if it is not positive then skip this preset
|
||||
if not test_all:
|
||||
continue
|
||||
|
||||
self.log.info(
|
||||
"Baking output `{}` with settings: {}".format(
|
||||
o_name, o_data))
|
||||
|
||||
# create exporter instance
|
||||
exporter = plugin.ExporterReviewMov(
|
||||
self, instance, o_name, o_data["extension"])
|
||||
|
||||
if "render.farm" in families:
|
||||
if "review" in instance.data["families"]:
|
||||
instance.data["families"].remove("review")
|
||||
|
||||
data = exporter.generate_mov(farm=True, **o_data)
|
||||
|
||||
self.log.debug(
|
||||
"_ data: {}".format(data))
|
||||
|
||||
if not instance.data.get("bakingNukeScripts"):
|
||||
instance.data["bakingNukeScripts"] = []
|
||||
|
||||
instance.data["bakingNukeScripts"].append({
|
||||
"bakeRenderPath": data.get("bakeRenderPath"),
|
||||
"bakeScriptPath": data.get("bakeScriptPath"),
|
||||
"bakeWriteNodeName": data.get("bakeWriteNodeName")
|
||||
})
|
||||
else:
|
||||
data = exporter.generate_mov(**o_data)
|
||||
|
||||
self.log.info(data["representations"])
|
||||
|
||||
# assign to representations
|
||||
instance.data["representations"] += data["representations"]
|
||||
|
||||
self.log.debug(
|
||||
"_ representations: {}".format(instance.data["representations"]))
|
||||
"_ representations: {}".format(
|
||||
instance.data["representations"]))
|
||||
|
|
|
|||
|
|
@ -238,7 +238,7 @@ class CollectInstanceResources(pyblish.api.InstancePlugin):
|
|||
})
|
||||
|
||||
# exception for mp4 preview
|
||||
if ".mp4" in _reminding_file:
|
||||
if ext in ["mp4", "mov"]:
|
||||
frame_start = 0
|
||||
frame_end = (
|
||||
(instance_data["frameEnd"] - instance_data["frameStart"])
|
||||
|
|
@ -255,6 +255,7 @@ class CollectInstanceResources(pyblish.api.InstancePlugin):
|
|||
"step": 1,
|
||||
"fps": self.context.data.get("fps"),
|
||||
"name": "review",
|
||||
"thumbnail": True,
|
||||
"tags": ["review", "ftrackreview", "delete"],
|
||||
})
|
||||
|
||||
|
|
|
|||
|
|
@ -49,10 +49,22 @@ class CollectHarmonyScenes(pyblish.api.InstancePlugin):
|
|||
|
||||
# fix anatomy data
|
||||
anatomy_data_new = copy.deepcopy(anatomy_data)
|
||||
|
||||
project_entity = context.data["projectEntity"]
|
||||
asset_entity = context.data["assetEntity"]
|
||||
|
||||
task_type = asset_entity["data"]["tasks"].get(task, {}).get("type")
|
||||
project_task_types = project_entity["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
# updating hierarchy data
|
||||
anatomy_data_new.update({
|
||||
"asset": asset_data["name"],
|
||||
"task": task,
|
||||
"task": {
|
||||
"name": task,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"subset": subset_name
|
||||
})
|
||||
|
||||
|
|
|
|||
|
|
@ -27,6 +27,7 @@ class CollectHarmonyZips(pyblish.api.InstancePlugin):
|
|||
anatomy_data = instance.context.data["anatomyData"]
|
||||
repres = instance.data["representations"]
|
||||
files = repres[0]["files"]
|
||||
project_entity = context.data["projectEntity"]
|
||||
|
||||
if files.endswith(".zip"):
|
||||
# A zip file was dropped
|
||||
|
|
@ -45,14 +46,24 @@ class CollectHarmonyZips(pyblish.api.InstancePlugin):
|
|||
|
||||
self.log.info("Copied data: {}".format(new_instance.data))
|
||||
|
||||
task_type = asset_data["data"]["tasks"].get(task, {}).get("type")
|
||||
project_task_types = project_entity["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
# fix anatomy data
|
||||
anatomy_data_new = copy.deepcopy(anatomy_data)
|
||||
# updating hierarchy data
|
||||
anatomy_data_new.update({
|
||||
"asset": asset_data["name"],
|
||||
"task": task,
|
||||
"subset": subset_name
|
||||
})
|
||||
anatomy_data_new.update(
|
||||
{
|
||||
"asset": asset_data["name"],
|
||||
"task": {
|
||||
"name": task,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"subset": subset_name
|
||||
}
|
||||
)
|
||||
|
||||
new_instance.data["label"] = f"{instance_name}"
|
||||
new_instance.data["subset"] = subset_name
|
||||
|
|
|
|||
|
|
@ -1,415 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Extract Harmony scene from zip file."""
|
||||
import glob
|
||||
import os
|
||||
import shutil
|
||||
import six
|
||||
import sys
|
||||
import tempfile
|
||||
import zipfile
|
||||
|
||||
import pyblish.api
|
||||
from avalon import api, io
|
||||
import openpype.api
|
||||
from openpype.lib import get_workfile_template_key_from_context
|
||||
|
||||
|
||||
class ExtractHarmonyZip(openpype.api.Extractor):
|
||||
"""Extract Harmony zip."""
|
||||
|
||||
# Pyblish settings
|
||||
label = "Extract Harmony zip"
|
||||
order = pyblish.api.ExtractorOrder + 0.02
|
||||
hosts = ["standalonepublisher"]
|
||||
families = ["scene"]
|
||||
|
||||
# Properties
|
||||
session = None
|
||||
task_types = None
|
||||
task_statuses = None
|
||||
assetversion_statuses = None
|
||||
|
||||
# Presets
|
||||
create_workfile = True
|
||||
default_task = "harmonyIngest"
|
||||
default_task_type = "Ingest"
|
||||
default_task_status = "Ingested"
|
||||
assetversion_status = "Ingested"
|
||||
|
||||
def process(self, instance):
|
||||
"""Plugin entry point."""
|
||||
context = instance.context
|
||||
self.session = context.data["ftrackSession"]
|
||||
asset_doc = context.data["assetEntity"]
|
||||
# asset_name = instance.data["asset"]
|
||||
subset_name = instance.data["subset"]
|
||||
instance_name = instance.data["name"]
|
||||
family = instance.data["family"]
|
||||
task = context.data["anatomyData"]["task"] or self.default_task
|
||||
project_entity = instance.context.data["projectEntity"]
|
||||
ftrack_id = asset_doc["data"]["ftrackId"]
|
||||
repres = instance.data["representations"]
|
||||
submitted_staging_dir = repres[0]["stagingDir"]
|
||||
submitted_files = repres[0]["files"]
|
||||
|
||||
# Get all the ftrack entities needed
|
||||
|
||||
# Asset Entity
|
||||
query = 'AssetBuild where id is "{}"'.format(ftrack_id)
|
||||
asset_entity = self.session.query(query).first()
|
||||
|
||||
# Project Entity
|
||||
query = 'Project where full_name is "{}"'.format(
|
||||
project_entity["name"]
|
||||
)
|
||||
project_entity = self.session.query(query).one()
|
||||
|
||||
# Get Task types and Statuses for creation if needed
|
||||
self.task_types = self._get_all_task_types(project_entity)
|
||||
self.task_statuses = self._get_all_task_statuses(project_entity)
|
||||
|
||||
# Get Statuses of AssetVersions
|
||||
self.assetversion_statuses = self._get_all_assetversion_statuses(
|
||||
project_entity
|
||||
)
|
||||
|
||||
# Setup the status that we want for the AssetVersion
|
||||
if self.assetversion_status:
|
||||
instance.data["assetversion_status"] = self.assetversion_status
|
||||
|
||||
# Create the default_task if it does not exist
|
||||
if task == self.default_task:
|
||||
existing_tasks = []
|
||||
entity_children = asset_entity.get('children', [])
|
||||
for child in entity_children:
|
||||
if child.entity_type.lower() == 'task':
|
||||
existing_tasks.append(child['name'].lower())
|
||||
|
||||
if task.lower() in existing_tasks:
|
||||
print("Task {} already exists".format(task))
|
||||
|
||||
else:
|
||||
self.create_task(
|
||||
name=task,
|
||||
task_type=self.default_task_type,
|
||||
task_status=self.default_task_status,
|
||||
parent=asset_entity,
|
||||
)
|
||||
|
||||
# Find latest version
|
||||
latest_version = self._find_last_version(subset_name, asset_doc)
|
||||
version_number = 1
|
||||
if latest_version is not None:
|
||||
version_number += latest_version
|
||||
|
||||
self.log.info(
|
||||
"Next version of instance \"{}\" will be {}".format(
|
||||
instance_name, version_number
|
||||
)
|
||||
)
|
||||
|
||||
# update instance info
|
||||
instance.data["task"] = task
|
||||
instance.data["version_name"] = "{}_{}".format(subset_name, task)
|
||||
instance.data["family"] = family
|
||||
instance.data["subset"] = subset_name
|
||||
instance.data["version"] = version_number
|
||||
instance.data["latestVersion"] = latest_version
|
||||
instance.data["anatomyData"].update({
|
||||
"subset": subset_name,
|
||||
"family": family,
|
||||
"version": version_number
|
||||
})
|
||||
|
||||
# Copy `families` and check if `family` is not in current families
|
||||
families = instance.data.get("families") or list()
|
||||
if families:
|
||||
families = list(set(families))
|
||||
|
||||
instance.data["families"] = families
|
||||
|
||||
# Prepare staging dir for new instance and zip + sanitize scene name
|
||||
staging_dir = tempfile.mkdtemp(prefix="pyblish_tmp_")
|
||||
|
||||
# Handle if the representation is a .zip and not an .xstage
|
||||
pre_staged = False
|
||||
if submitted_files.endswith(".zip"):
|
||||
submitted_zip_file = os.path.join(submitted_staging_dir,
|
||||
submitted_files
|
||||
).replace("\\", "/")
|
||||
|
||||
pre_staged = self.sanitize_prezipped_project(instance,
|
||||
submitted_zip_file,
|
||||
staging_dir)
|
||||
|
||||
# Get the file to work with
|
||||
source_dir = str(repres[0]["stagingDir"])
|
||||
source_file = str(repres[0]["files"])
|
||||
|
||||
staging_scene_dir = os.path.join(staging_dir, "scene")
|
||||
staging_scene = os.path.join(staging_scene_dir, source_file)
|
||||
|
||||
# If the file is an .xstage / directory, we must stage it
|
||||
if not pre_staged:
|
||||
shutil.copytree(source_dir, staging_scene_dir)
|
||||
|
||||
# Rename this latest file as 'scene.xstage'
|
||||
# This is determined in the collector from the latest scene in a
|
||||
# submitted directory / directory the submitted .xstage is in.
|
||||
# In the case of a zip file being submitted, this is determined within
|
||||
# the self.sanitize_project() method in this extractor.
|
||||
os.rename(staging_scene,
|
||||
os.path.join(staging_scene_dir, "scene.xstage")
|
||||
)
|
||||
|
||||
# Required to set the current directory where the zip will end up
|
||||
os.chdir(staging_dir)
|
||||
|
||||
# Create the zip file
|
||||
zip_filepath = shutil.make_archive(os.path.basename(source_dir),
|
||||
"zip",
|
||||
staging_scene_dir
|
||||
)
|
||||
|
||||
zip_filename = os.path.basename(zip_filepath)
|
||||
|
||||
self.log.info("Zip file: {}".format(zip_filepath))
|
||||
|
||||
# Setup representation
|
||||
new_repre = {
|
||||
"name": "zip",
|
||||
"ext": "zip",
|
||||
"files": zip_filename,
|
||||
"stagingDir": staging_dir
|
||||
}
|
||||
|
||||
self.log.debug(
|
||||
"Creating new representation: {}".format(new_repre)
|
||||
)
|
||||
instance.data["representations"] = [new_repre]
|
||||
|
||||
self.log.debug("Completed prep of zipped Harmony scene: {}"
|
||||
.format(zip_filepath)
|
||||
)
|
||||
|
||||
# If this extractor is setup to also extract a workfile...
|
||||
if self.create_workfile:
|
||||
workfile_path = self.extract_workfile(instance,
|
||||
staging_scene
|
||||
)
|
||||
|
||||
self.log.debug("Extracted Workfile to: {}".format(workfile_path))
|
||||
|
||||
def extract_workfile(self, instance, staging_scene):
|
||||
"""Extract a valid workfile for this corresponding publish.
|
||||
|
||||
Args:
|
||||
instance (:class:`pyblish.api.Instance`): Instance data.
|
||||
staging_scene (str): path of staging scene.
|
||||
|
||||
Returns:
|
||||
str: Path to workdir.
|
||||
|
||||
"""
|
||||
# Since the staging scene was renamed to "scene.xstage" for publish
|
||||
# rename the staging scene in the temp stagingdir
|
||||
staging_scene = os.path.join(os.path.dirname(staging_scene),
|
||||
"scene.xstage")
|
||||
|
||||
# Setup the data needed to form a valid work path filename
|
||||
anatomy = openpype.api.Anatomy()
|
||||
project_entity = instance.context.data["projectEntity"]
|
||||
|
||||
data = {
|
||||
"root": api.registered_root(),
|
||||
"project": {
|
||||
"name": project_entity["name"],
|
||||
"code": project_entity["data"].get("code", '')
|
||||
},
|
||||
"asset": instance.data["asset"],
|
||||
"hierarchy": openpype.api.get_hierarchy(instance.data["asset"]),
|
||||
"family": instance.data["family"],
|
||||
"task": instance.data.get("task"),
|
||||
"subset": instance.data["subset"],
|
||||
"version": 1,
|
||||
"ext": "zip",
|
||||
}
|
||||
host_name = "harmony"
|
||||
template_name = get_workfile_template_key_from_context(
|
||||
instance.data["asset"],
|
||||
instance.data.get("task"),
|
||||
host_name,
|
||||
project_name=project_entity["name"],
|
||||
dbcon=io
|
||||
)
|
||||
|
||||
# Get a valid work filename first with version 1
|
||||
file_template = anatomy.templates[template_name]["file"]
|
||||
anatomy_filled = anatomy.format(data)
|
||||
work_path = anatomy_filled[template_name]["path"]
|
||||
|
||||
# Get the final work filename with the proper version
|
||||
data["version"] = api.last_workfile_with_version(
|
||||
os.path.dirname(work_path),
|
||||
file_template,
|
||||
data,
|
||||
api.HOST_WORKFILE_EXTENSIONS[host_name]
|
||||
)[1]
|
||||
|
||||
base_name = os.path.splitext(os.path.basename(work_path))[0]
|
||||
|
||||
staging_work_path = os.path.join(os.path.dirname(staging_scene),
|
||||
base_name + ".xstage"
|
||||
)
|
||||
|
||||
# Rename this latest file after the workfile path filename
|
||||
os.rename(staging_scene, staging_work_path)
|
||||
|
||||
# Required to set the current directory where the zip will end up
|
||||
os.chdir(os.path.dirname(os.path.dirname(staging_scene)))
|
||||
|
||||
# Create the zip file
|
||||
zip_filepath = shutil.make_archive(base_name,
|
||||
"zip",
|
||||
os.path.dirname(staging_scene)
|
||||
)
|
||||
self.log.info(staging_scene)
|
||||
self.log.info(work_path)
|
||||
self.log.info(staging_work_path)
|
||||
self.log.info(os.path.dirname(os.path.dirname(staging_scene)))
|
||||
self.log.info(base_name)
|
||||
self.log.info(zip_filepath)
|
||||
|
||||
# Create the work path on disk if it does not exist
|
||||
os.makedirs(os.path.dirname(work_path), exist_ok=True)
|
||||
shutil.copy(zip_filepath, work_path)
|
||||
|
||||
return work_path
|
||||
|
||||
def sanitize_prezipped_project(
|
||||
self, instance, zip_filepath, staging_dir):
|
||||
"""Fix when a zip contains a folder.
|
||||
|
||||
Handle the case when the zip file root contains a folder instead of the project.
|
||||
|
||||
Args:
|
||||
instance (:class:`pyblish.api.Instance`): Instance data.
|
||||
zip_filepath (str): Path to zip.
|
||||
staging_dir (str): Path to staging directory.
|
||||
|
||||
"""
|
||||
zip = zipfile.ZipFile(zip_filepath)
|
||||
zip_contents = zipfile.ZipFile.namelist(zip)
|
||||
|
||||
# Determine if any xstage file is in root of zip
|
||||
project_in_root = [pth for pth in zip_contents
|
||||
if "/" not in pth and pth.endswith(".xstage")]
|
||||
|
||||
staging_scene_dir = os.path.join(staging_dir, "scene")
|
||||
|
||||
# The project is nested, so we must extract and move it
|
||||
if not project_in_root:
|
||||
|
||||
staging_tmp_dir = os.path.join(staging_dir, "tmp")
|
||||
|
||||
with zipfile.ZipFile(zip_filepath, "r") as zip_ref:
|
||||
zip_ref.extractall(staging_tmp_dir)
|
||||
|
||||
nested_project_folder = os.path.join(staging_tmp_dir,
|
||||
zip_contents[0]
|
||||
)
|
||||
|
||||
shutil.copytree(nested_project_folder, staging_scene_dir)
|
||||
|
||||
else:
|
||||
# The project is not nested, so we just extract to scene folder
|
||||
with zipfile.ZipFile(zip_filepath, "r") as zip_ref:
|
||||
zip_ref.extractall(staging_scene_dir)
|
||||
|
||||
latest_file = max(glob.iglob(staging_scene_dir + "/*.xstage"),
|
||||
key=os.path.getctime).replace("\\", "/")
|
||||
|
||||
instance.data["representations"][0]["stagingDir"] = staging_scene_dir
|
||||
instance.data["representations"][0]["files"] = os.path.basename(
|
||||
latest_file)
|
||||
|
||||
# We have staged the scene already so return True
|
||||
return True
|
||||
|
||||
def _find_last_version(self, subset_name, asset_doc):
|
||||
"""Find last version of subset."""
|
||||
subset_doc = io.find_one({
|
||||
"type": "subset",
|
||||
"name": subset_name,
|
||||
"parent": asset_doc["_id"]
|
||||
})
|
||||
|
||||
if subset_doc is None:
|
||||
self.log.debug("Subset entity does not exist yet.")
|
||||
else:
|
||||
version_doc = io.find_one(
|
||||
{
|
||||
"type": "version",
|
||||
"parent": subset_doc["_id"]
|
||||
},
|
||||
sort=[("name", -1)]
|
||||
)
|
||||
if version_doc:
|
||||
return int(version_doc["name"])
|
||||
return None
|
||||
|
||||
def _get_all_task_types(self, project):
|
||||
"""Get all task types."""
|
||||
tasks = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_types = proj_template['_task_type_schema']['types']
|
||||
|
||||
for type in temp_task_types:
|
||||
if type['name'] not in tasks:
|
||||
tasks[type['name']] = type
|
||||
|
||||
return tasks
|
||||
|
||||
def _get_all_task_statuses(self, project):
|
||||
"""Get all statuses of tasks."""
|
||||
statuses = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_statuses = proj_template.get_statuses("Task")
|
||||
|
||||
for status in temp_task_statuses:
|
||||
if status['name'] not in statuses:
|
||||
statuses[status['name']] = status
|
||||
|
||||
return statuses
|
||||
|
||||
def _get_all_assetversion_statuses(self, project):
|
||||
"""Get statuses of all asset versions."""
|
||||
statuses = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_statuses = proj_template.get_statuses("AssetVersion")
|
||||
|
||||
for status in temp_task_statuses:
|
||||
if status['name'] not in statuses:
|
||||
statuses[status['name']] = status
|
||||
|
||||
return statuses
|
||||
|
||||
def _create_task(self, name, task_type, parent, task_status):
|
||||
"""Create task."""
|
||||
task_data = {
|
||||
'name': name,
|
||||
'parent': parent,
|
||||
}
|
||||
self.log.info(task_type)
|
||||
task_data['type'] = self.task_types[task_type]
|
||||
task_data['status'] = self.task_statuses[task_status]
|
||||
self.log.info(task_data)
|
||||
task = self.session.create('Task', task_data)
|
||||
try:
|
||||
self.session.commit()
|
||||
except Exception:
|
||||
tp, value, tb = sys.exc_info()
|
||||
self.session.rollback()
|
||||
six.reraise(tp, value, tb)
|
||||
|
||||
return task
|
||||
openpype/hosts/tvpaint/lib.py (new file, 682 lines)
|
|
@ -0,0 +1,682 @@
|
|||
import os
|
||||
import shutil
|
||||
import collections
|
||||
from PIL import Image, ImageDraw
|
||||
|
||||
|
||||
def backwards_id_conversion(data_by_layer_id):
|
||||
"""Convert layer ids to strings from integers."""
|
||||
for key in tuple(data_by_layer_id.keys()):
|
||||
if not isinstance(key, str):
|
||||
data_by_layer_id[str(key)] = data_by_layer_id.pop(key)
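A quick example of the helper above converting integer layer ids in place:
```
data = {42: {"pre": "hold"}, "7": {"pre": "none"}}
backwards_id_conversion(data)
print(data)  # {'7': {'pre': 'none'}, '42': {'pre': 'hold'}}
```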
|
||||
|
||||
|
||||
def get_frame_filename_template(frame_end, filename_prefix=None, ext=None):
|
||||
"""Get file template with frame key for rendered files.
|
||||
|
||||
This simple template contains `{frame}{ext}` for sequential outputs
and `single_file{ext}` for single file output. Output is rendered to a
temporary folder so the filename should not matter, as the integrator
renames the files anyway.
|
||||
"""
|
||||
frame_padding = 4
|
||||
frame_end_str_len = len(str(frame_end))
|
||||
if frame_end_str_len > frame_padding:
|
||||
frame_padding = frame_end_str_len
|
||||
|
||||
ext = ext or ".png"
|
||||
filename_prefix = filename_prefix or ""
|
||||
|
||||
return "{}{{frame:0>{}}}{}".format(filename_prefix, frame_padding, ext)
|
||||
|
||||
|
||||
def get_layer_pos_filename_template(range_end, filename_prefix=None, ext=None):
|
||||
filename_prefix = filename_prefix or ""
|
||||
new_filename_prefix = filename_prefix + "pos_{pos}."
|
||||
return get_frame_filename_template(range_end, new_filename_prefix, ext)
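An illustration of the filename templates the two helpers above produce (frame range values are hypothetical):
```
template = get_frame_filename_template(frame_end=1200)
print(template)                   # {frame:0>4}.png
print(template.format(frame=25))  # 0025.png

layer_template = get_layer_pos_filename_template(1200)
print(layer_template)                          # pos_{pos}.{frame:0>4}.png
print(layer_template.format(pos=2, frame=25))  # pos_2.0025.png
```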
|
||||
|
||||
|
||||
def _calculate_pre_behavior_copy(
|
||||
range_start, exposure_frames, pre_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frames before first exposure frame based on pre behavior.
|
||||
|
||||
Function may skip whole processing if the layer or its first exposure
frame starts before the rendered range. In that case pre behavior does
not make sense.
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
pre_beh(str): Pre behavior of layer (enum of 4 strings).
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Skip if the layer starts before the rendered range (nothing to fill)
|
||||
if layer_frame_start < range_start:
|
||||
return
|
||||
|
||||
first_exposure_frame = min(exposure_frames)
|
||||
# Skip if the first exposure frame is before the range start
|
||||
if first_exposure_frame < range_start:
|
||||
return
|
||||
|
||||
# Calculate frame count of layer
|
||||
frame_count = layer_frame_end - layer_frame_start + 1
|
||||
|
||||
if pre_beh == "none":
|
||||
# Just fill all frames from last exposure frame to range end with None
|
||||
for frame_idx in range(range_start, layer_frame_start):
|
||||
output_idx_by_frame_idx[frame_idx] = None
|
||||
|
||||
elif pre_beh == "hold":
|
||||
# Keep first frame for whole time
|
||||
for frame_idx in range(range_start, layer_frame_start):
|
||||
output_idx_by_frame_idx[frame_idx] = first_exposure_frame
|
||||
|
||||
elif pre_beh in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in reversed(range(range_start, layer_frame_start)):
|
||||
eq_frame_idx_offset = (
|
||||
(layer_frame_end - frame_idx) % frame_count
|
||||
)
|
||||
eq_frame_idx = layer_frame_end - eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
|
||||
|
||||
elif pre_beh == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in reversed(range(range_start, layer_frame_start)):
|
||||
eq_frame_idx_offset = (layer_frame_start - frame_idx) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = (seq_len - eq_frame_idx_offset)
|
||||
eq_frame_idx = layer_frame_start + eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
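A worked example of the "loop" pre behavior handled above, with made-up layer values:
```
refs = {}
_calculate_pre_behavior_copy(
    range_start=6,
    exposure_frames=[10, 12],
    pre_beh="loop",
    layer_frame_start=10,
    layer_frame_end=13,
    output_idx_by_frame_idx=refs,
)
# Frames before the layer start repeat the layer backwards from its last frame.
print(refs)  # {9: 13, 8: 12, 7: 11, 6: 10}
```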
|
||||
|
||||
|
||||
def _calculate_post_behavior_copy(
|
||||
range_end, exposure_frames, post_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frames after last frame of layer based on post behavior.
|
||||
|
||||
Function may skip whole processing if last layer frame is after range_end.
|
||||
In that case post behavior does not make sense.
|
||||
|
||||
Args:
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
post_beh(str): Post behavior of layer (enum of 4 strings).
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Check if last layer frame is after range end
|
||||
if layer_frame_end >= range_end:
|
||||
return
|
||||
|
||||
last_exposure_frame = max(exposure_frames)
|
||||
# Skip if last exposure frame is after range end
|
||||
# - this is probably irrelevant with layer frame end check?
|
||||
if last_exposure_frame >= range_end:
|
||||
return
|
||||
|
||||
# Calculate frame count of layer
|
||||
frame_count = layer_frame_end - layer_frame_start + 1
|
||||
|
||||
if post_beh == "none":
|
||||
# Just fill all frames from last exposure frame to range end with None
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
output_idx_by_frame_idx[frame_idx] = None
|
||||
|
||||
elif post_beh == "hold":
|
||||
# Keep last exposure frame to the end
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
output_idx_by_frame_idx[frame_idx] = last_exposure_frame
|
||||
|
||||
elif post_beh in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
eq_frame_idx = frame_idx % frame_count
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
|
||||
|
||||
elif post_beh == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
eq_frame_idx_offset = (frame_idx - layer_frame_end) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = seq_len - eq_frame_idx_offset
|
||||
eq_frame_idx = layer_frame_end - eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
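Similarly, a worked example of the "pingpong" post behavior above, again with made-up values:
```
refs = {}
_calculate_post_behavior_copy(
    range_end=10,
    exposure_frames=[1, 3],
    post_beh="pingpong",
    layer_frame_start=1,
    layer_frame_end=4,
    output_idx_by_frame_idx=refs,
)
# Frames after the layer end bounce back and forth over the layer range.
print(refs)  # {5: 3, 6: 2, 7: 1, 8: 2, 9: 3, 10: 4}
```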
|
||||
|
||||
|
||||
def _calculate_in_range_frames(
|
||||
range_start, range_end,
|
||||
exposure_frames, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frame references in defined range.
|
||||
|
||||
Fills references for exposure frames inside the rendered range and for
the frames between and around them, up to the layer's last frame.
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Calculate in range frames
|
||||
in_range_frames = []
|
||||
for frame_idx in exposure_frames:
|
||||
if range_start <= frame_idx <= range_end:
|
||||
output_idx_by_frame_idx[frame_idx] = frame_idx
|
||||
in_range_frames.append(frame_idx)
|
||||
|
||||
if in_range_frames:
|
||||
first_in_range_frame = min(in_range_frames)
|
||||
# Calculate frames from first exposure frames to range end or last
|
||||
# frame of layer (post behavior should be calculated since that time)
|
||||
previous_exposure = first_in_range_frame
|
||||
for frame_idx in range(first_in_range_frame, range_end + 1):
|
||||
if frame_idx > layer_frame_end:
|
||||
break
|
||||
|
||||
if frame_idx in exposure_frames:
|
||||
previous_exposure = frame_idx
|
||||
else:
|
||||
output_idx_by_frame_idx[frame_idx] = previous_exposure
|
||||
|
||||
# There can be frames before first exposure frame in range
|
||||
# First check if we don't already have the first range frame filled
|
||||
if range_start in output_idx_by_frame_idx:
|
||||
return
|
||||
|
||||
first_exposure_frame = min(exposure_frames)
last_exposure_frame = max(exposure_frames)
# Check if the first exposure frame is before the defined range start
# if not then there is nothing to fill, so skip
|
||||
if first_exposure_frame >= range_start:
|
||||
return
|
||||
|
||||
# Check if the last exposure frame is also before the range start
# in that case we can't fill frames before the range
|
||||
if last_exposure_frame < range_start:
|
||||
return
|
||||
|
||||
closest_exposure_frame = first_exposure_frame
|
||||
for frame_idx in exposure_frames:
|
||||
if frame_idx >= range_start:
|
||||
break
|
||||
if frame_idx > closest_exposure_frame:
|
||||
closest_exposure_frame = frame_idx
|
||||
|
||||
output_idx_by_frame_idx[closest_exposure_frame] = closest_exposure_frame
|
||||
for frame_idx in range(range_start, range_end + 1):
|
||||
if frame_idx in output_idx_by_frame_idx:
|
||||
break
|
||||
output_idx_by_frame_idx[frame_idx] = closest_exposure_frame
|
||||
|
||||
|
||||
def _cleanup_frame_references(output_idx_by_frame_idx):
|
||||
"""Cleanup frame references to frame reference.
|
||||
|
||||
Replace indirect references with direct references to the rendered frame.
|
||||
```
|
||||
// Example input
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 2
|
||||
}
|
||||
// Result
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 1 // Changed reference to final rendered frame
|
||||
}
|
||||
```
|
||||
Result is dictionary where keys leads to frame that should be rendered.
|
||||
"""
|
||||
for frame_idx in tuple(output_idx_by_frame_idx.keys()):
|
||||
reference_idx = output_idx_by_frame_idx[frame_idx]
|
||||
# Skip transparent frames
|
||||
if reference_idx is None or reference_idx == frame_idx:
|
||||
continue
|
||||
|
||||
real_reference_idx = reference_idx
|
||||
_tmp_reference_idx = reference_idx
|
||||
while True:
|
||||
_temp = output_idx_by_frame_idx[_tmp_reference_idx]
|
||||
if _temp == _tmp_reference_idx:
|
||||
real_reference_idx = _tmp_reference_idx
|
||||
break
|
||||
_tmp_reference_idx = _temp
|
||||
|
||||
if real_reference_idx != reference_idx:
|
||||
output_idx_by_frame_idx[frame_idx] = real_reference_idx
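The example from the docstring above can be verified directly:
```
refs = {1: 1, 2: 1, 3: 2}
_cleanup_frame_references(refs)
print(refs)  # {1: 1, 2: 1, 3: 1}
```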
|
||||
|
||||
|
||||
def _cleanup_out_range_frames(output_idx_by_frame_idx, range_start, range_end):
|
||||
"""Cleanup frame references to frames out of passed range.
|
||||
|
||||
First available frame in range is used
|
||||
```
|
||||
// Example input. Range 2-3
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 1
|
||||
}
|
||||
// Result
|
||||
{
|
||||
2: 2, // Redirect to self as it is the first frame referencing out of range
|
||||
3: 2 // Redirect to first redirected frame
|
||||
}
|
||||
```
|
||||
Result is dictionary where keys leads to frame that should be rendered.
|
||||
"""
|
||||
in_range_frames_by_out_frames = collections.defaultdict(set)
|
||||
out_range_frames = set()
|
||||
for frame_idx in tuple(output_idx_by_frame_idx.keys()):
|
||||
# Skip frames that are already out of range
|
||||
if frame_idx < range_start or frame_idx > range_end:
|
||||
out_range_frames.add(frame_idx)
|
||||
continue
|
||||
|
||||
reference_idx = output_idx_by_frame_idx[frame_idx]
|
||||
# Skip transparent frames
|
||||
if reference_idx is None:
|
||||
continue
|
||||
|
||||
# Skip references in range
|
||||
if reference_idx < range_start or reference_idx > range_end:
|
||||
in_range_frames_by_out_frames[reference_idx].add(frame_idx)
|
||||
|
||||
for reference_idx in tuple(in_range_frames_by_out_frames.keys()):
|
||||
frame_indexes = in_range_frames_by_out_frames.pop(reference_idx)
|
||||
new_reference = None
|
||||
for frame_idx in frame_indexes:
|
||||
if new_reference is None:
|
||||
new_reference = frame_idx
|
||||
output_idx_by_frame_idx[frame_idx] = new_reference
|
||||
|
||||
# Finally remove out of range frames
|
||||
for frame_idx in out_range_frames:
|
||||
output_idx_by_frame_idx.pop(frame_idx)
|
||||
|
||||
|
||||
def calculate_layer_frame_references(
|
||||
range_start, range_end,
|
||||
layer_frame_start,
|
||||
layer_frame_end,
|
||||
exposure_frames,
|
||||
pre_beh, post_beh
|
||||
):
|
||||
"""Calculate frame references for one layer based on it's data.
|
||||
|
||||
Output is a dictionary where each key is a frame index referencing the rendered
frame index. A frame that should be rendered references itself.
|
||||
|
||||
```
|
||||
// Example output
|
||||
{
|
||||
1: 1, // Reference to self - will be rendered
|
||||
2: 1, // Reference to frame 1 - will be copied
|
||||
3: 1, // Reference to frame 1 - will be copied
|
||||
4: 4, // Reference to self - will be rendered
|
||||
...
|
||||
20: 4 // Reference to frame 4 - will be copied
|
||||
21: None // Has reference to None - transparent image
|
||||
}
|
||||
```
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
pre_beh(str): Pre behavior of layer (enum of 4 strings).
|
||||
post_beh(str): Post behavior of layer (enum of 4 strings).
|
||||
"""
|
||||
# Output variable
|
||||
output_idx_by_frame_idx = {}
|
||||
# Skip if layer does not have any exposure frames
|
||||
if not exposure_frames:
|
||||
return output_idx_by_frame_idx
|
||||
|
||||
# First calculate in range frames
|
||||
_calculate_in_range_frames(
|
||||
range_start, range_end,
|
||||
exposure_frames, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Calculate frames by pre behavior of layer
|
||||
_calculate_pre_behavior_copy(
|
||||
range_start, exposure_frames, pre_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Calculate frames by post behavior of layer
|
||||
_calculate_post_behavior_copy(
|
||||
range_end, exposure_frames, post_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Cleanup of referenced frames
|
||||
_cleanup_frame_references(output_idx_by_frame_idx)
|
||||
|
||||
# Remove frames out of range
|
||||
_cleanup_out_range_frames(output_idx_by_frame_idx, range_start, range_end)
|
||||
|
||||
return output_idx_by_frame_idx
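An end-to-end example of the function above, using "hold" on both sides and made-up frame values:
```
refs = calculate_layer_frame_references(
    range_start=1, range_end=5,
    layer_frame_start=2, layer_frame_end=3,
    exposure_frames=[2, 3],
    pre_beh="hold", post_beh="hold",
)
# Frames 2 and 3 are rendered; frame 1 holds the first exposure frame,
# frames 4 and 5 hold the last one.
print(refs)  # {2: 2, 3: 3, 1: 2, 4: 3, 5: 3}
```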
|
||||
|
||||
|
||||
def calculate_layers_extraction_data(
|
||||
layers_data,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
range_start,
|
||||
range_end,
|
||||
skip_not_visible=True,
|
||||
filename_prefix=None,
|
||||
ext=None
|
||||
):
|
||||
"""Calculate extraction data for passed layers data.
|
||||
|
||||
```
|
||||
{
|
||||
<layer_id>: {
|
||||
"frame_references": {...},
|
||||
"filenames_by_frame_index": {...}
|
||||
},
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
Frame references map each frame index to the rendered frame index.
|
||||
|
||||
Filenames by frame index represent the filename under which each frame should
be stored. The directory is not handled here because each usage may need a
different approach.
|
||||
|
||||
Args:
|
||||
layers_data(list): Layers data loaded from TVPaint.
|
||||
exposure_frames_by_layer_id(dict): Exposure frames of layers stored by
|
||||
layer id.
|
||||
behavior_by_layer_id(dict): Pre and Post behavior of layers stored by
|
||||
layer id.
|
||||
range_start(int): First frame of rendered range.
|
||||
range_end(int): Last frame of rendered range.
|
||||
skip_not_visible(bool): Skip calculations for hidden layers (Skipped
|
||||
by default).
|
||||
filename_prefix(str): Prefix before filename.
|
||||
ext(str): Extension which filenames will have ('.png' is default).
|
||||
|
||||
Returns:
|
||||
dict: Prepared data for rendering by layer position.
|
||||
"""
|
||||
# Make sure layer ids are strings
|
||||
# backwards compatibility when layer ids were integers
|
||||
backwards_id_conversion(exposure_frames_by_layer_id)
|
||||
backwards_id_conversion(behavior_by_layer_id)
|
||||
|
||||
layer_template = get_layer_pos_filename_template(
|
||||
range_end, filename_prefix, ext
|
||||
)
|
||||
output = {}
|
||||
for layer_data in layers_data:
|
||||
if skip_not_visible and not layer_data["visible"]:
|
||||
continue
|
||||
|
||||
orig_layer_id = layer_data["layer_id"]
|
||||
layer_id = str(orig_layer_id)
|
||||
|
||||
# Skip if does not have any exposure frames (empty layer)
|
||||
exposure_frames = exposure_frames_by_layer_id[layer_id]
|
||||
if not exposure_frames:
|
||||
continue
|
||||
|
||||
layer_position = layer_data["position"]
|
||||
layer_frame_start = layer_data["frame_start"]
|
||||
layer_frame_end = layer_data["frame_end"]
|
||||
|
||||
layer_behavior = behavior_by_layer_id[layer_id]
|
||||
|
||||
pre_behavior = layer_behavior["pre"]
|
||||
post_behavior = layer_behavior["post"]
|
||||
|
||||
frame_references = calculate_layer_frame_references(
|
||||
range_start, range_end,
|
||||
layer_frame_start,
|
||||
layer_frame_end,
|
||||
exposure_frames,
|
||||
pre_behavior, post_behavior
|
||||
)
|
||||
# All values in 'frame_references' reference to a frame that must be
|
||||
# rendered out
|
||||
frames_to_render = set(frame_references.values())
|
||||
# Remove 'None' reference (transparent image)
|
||||
if None in frames_to_render:
|
||||
frames_to_render.remove(None)
|
||||
|
||||
# Skip layer if has nothing to render
|
||||
if not frames_to_render:
|
||||
continue
|
||||
|
||||
# All filenames that should be as output (not final output)
|
||||
filename_frames = (
|
||||
set(range(range_start, range_end + 1))
|
||||
| frames_to_render
|
||||
)
|
||||
filenames_by_frame_index = {}
|
||||
for frame_idx in filename_frames:
|
||||
filenames_by_frame_index[frame_idx] = layer_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
|
||||
# Store objects under the layer id
|
||||
output[orig_layer_id] = {
|
||||
"frame_references": frame_references,
|
||||
"filenames_by_frame_index": filenames_by_frame_index
|
||||
}
|
||||
return output
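A minimal usage sketch of the function above; the layer dictionary carries only the keys this function reads, real TVPaint layer data contains more:
```
layers_data = [{
    "layer_id": 1,
    "visible": True,
    "position": 0,
    "frame_start": 2,
    "frame_end": 3,
}]
extraction_data = calculate_layers_extraction_data(
    layers_data,
    exposure_frames_by_layer_id={"1": [2, 3]},
    behavior_by_layer_id={"1": {"pre": "hold", "post": "hold"}},
    range_start=1,
    range_end=5,
)
layer_info = extraction_data[1]
print(layer_info["frame_references"])             # {2: 2, 3: 3, 1: 2, 4: 3, 5: 3}
print(layer_info["filenames_by_frame_index"][2])  # pos_0.0002.png
```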
|
||||
|
||||
|
||||
def create_transparent_image_from_source(src_filepath, dst_filepath):
|
||||
"""Create transparent image of same type and size as source image."""
|
||||
img_obj = Image.open(src_filepath)
|
||||
painter = ImageDraw.Draw(img_obj)
|
||||
painter.rectangle((0, 0, *img_obj.size), fill=(0, 0, 0, 0))
|
||||
img_obj.save(dst_filepath)
|
||||
|
||||
|
||||
def fill_reference_frames(frame_references, filepaths_by_frame):
|
||||
# Store path to first transparent image if there is any
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# Frame referencing to self should be rendered and used as source
|
||||
# and reference indexes with None can't be filled
|
||||
if ref_idx is None or frame_idx == ref_idx:
|
||||
continue
|
||||
|
||||
# Get destination filepath
|
||||
src_filepath = filepaths_by_frame[ref_idx]
|
||||
dst_filepath = filepaths_by_frame[frame_idx]
|
||||
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_filepath, dst_filepath)
|
||||
else:
|
||||
shutil.copy(src_filepath, dst_filepath)
|
||||
|
||||
|
||||
def copy_render_file(src_path, dst_path):
|
||||
"""Create copy file of an image."""
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_path, dst_path)
|
||||
else:
|
||||
shutil.copy(src_path, dst_path)
|
||||
|
||||
|
||||
def cleanup_rendered_layers(filepaths_by_layer_id):
|
||||
"""Delete all files for each individual layer files after compositing."""
|
||||
# Collect all filepaths from data
|
||||
all_filepaths = []
|
||||
for filepaths_by_frame in filepaths_by_layer_id.values():
|
||||
all_filepaths.extend(filepaths_by_frame.values())
|
||||
|
||||
# Remove each collected file only once
|
||||
for filepath in set(all_filepaths):
|
||||
if filepath is not None and os.path.exists(filepath):
|
||||
os.remove(filepath)
|
||||
|
||||
|
||||
def composite_rendered_layers(
|
||||
layers_data, filepaths_by_layer_id,
|
||||
range_start, range_end,
|
||||
dst_filepaths_by_frame, cleanup=True
|
||||
):
|
||||
"""Composite multiple rendered layers by their position.
|
||||
|
||||
Result is a single frame sequence with transparency matching the content
|
||||
created in TVPaint. Missing source filepaths are replaced with transparent
|
||||
images but at least one image must be rendered and exist.
|
||||
|
||||
The function can be used even when only a single layer was rendered, to fill
in transparent frames.
|
||||
|
||||
Args:
|
||||
layers_data(list): Layers data loaded from TVPaint.
|
||||
filepaths_by_layer_id(dict): Rendered filepaths stored by frame index
|
||||
per layer id. Used as source for compositing.
|
||||
range_start(int): First frame of rendered range.
|
||||
range_end(int): Last frame of rendered range.
|
||||
dst_filepaths_by_frame(dict): Output filepaths by frame where final
|
||||
image after compositing will be stored. Path must not clash with
|
||||
source filepaths.
|
||||
cleanup(bool): Remove all source filepaths when done with compositing.
|
||||
"""
|
||||
# Prepare layers by their position
|
||||
# - position tells in which order will compositing happen
|
||||
layer_ids_by_position = {}
|
||||
for layer in layers_data:
|
||||
layer_position = layer["position"]
|
||||
layer_ids_by_position[layer_position] = layer["layer_id"]
|
||||
|
||||
# Sort layer positions
|
||||
sorted_positions = tuple(sorted(layer_ids_by_position.keys()))
|
||||
# Prepare variable where filepaths without any rendered content
|
||||
# - transparent will be created
|
||||
transparent_filepaths = set()
|
||||
# Store first final filepath
|
||||
first_dst_filepath = None
|
||||
for frame_idx in range(range_start, range_end + 1):
|
||||
dst_filepath = dst_filepaths_by_frame[frame_idx]
|
||||
src_filepaths = []
|
||||
for layer_position in sorted_positions:
|
||||
layer_id = layer_ids_by_position[layer_position]
|
||||
filepaths_by_frame = filepaths_by_layer_id[layer_id]
|
||||
src_filepath = filepaths_by_frame.get(frame_idx)
|
||||
if src_filepath is not None:
|
||||
src_filepaths.append(src_filepath)
|
||||
|
||||
if not src_filepaths:
|
||||
transparent_filepaths.add(dst_filepath)
|
||||
continue
|
||||
|
||||
# Store first destionation filepath to be used for transparent images
|
||||
if first_dst_filepath is None:
|
||||
first_dst_filepath = dst_filepath
|
||||
|
||||
if len(src_filepaths) == 1:
|
||||
src_filepath = src_filepaths[0]
|
||||
if cleanup:
|
||||
os.rename(src_filepath, dst_filepath)
|
||||
else:
|
||||
copy_render_file(src_filepath, dst_filepath)
|
||||
|
||||
else:
|
||||
composite_images(src_filepaths, dst_filepath)
|
||||
|
||||
# Store first transparent filepath to be able copy it
|
||||
transparent_filepath = None
|
||||
for dst_filepath in transparent_filepaths:
|
||||
if transparent_filepath is None:
|
||||
create_transparent_image_from_source(
|
||||
first_dst_filepath, dst_filepath
|
||||
)
|
||||
transparent_filepath = dst_filepath
|
||||
else:
|
||||
copy_render_file(transparent_filepath, dst_filepath)
|
||||
|
||||
# Remove all files that were used as source for compositing
|
||||
if cleanup:
|
||||
cleanup_rendered_layers(filepaths_by_layer_id)


def composite_images(input_image_paths, output_filepath):
    """Composite images in order from passed list.

    Raises:
        ValueError: When entered list is empty.
    """
    if not input_image_paths:
        raise ValueError("Nothing to composite.")

    img_obj = None
    for image_filepath in input_image_paths:
        _img_obj = Image.open(image_filepath)
        if img_obj is None:
            img_obj = _img_obj
        else:
            img_obj.alpha_composite(_img_obj)
    img_obj.save(output_filepath)
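
A small usage sketch of "composite_images" (the paths are hypothetical): images are composited in list order, so the first path is the bottom of the stack and every following image is alpha-composited on top of it.

composite_images(
    ["/tmp/pos_0.0001.png", "/tmp/pos_1.0001.png"],  # bottom to top
    "/tmp/0001.png"
)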


def rename_filepaths_by_frame_start(
    filepaths_by_frame, range_start, range_end, new_frame_start
):
    """Change frames in filenames of finished images to a new frame start."""
    # Skip renaming if source first frame is same as destination first frame
    if range_start == new_frame_start:
        return filepaths_by_frame

    # Calculate new frame end
    new_frame_end = range_end + (new_frame_start - range_start)
    # Create filename template
    filename_template = get_frame_filename_template(
        max(range_end, new_frame_end)
    )

    # Use different iteration order based on Mark In and output Frame Start
    # - this is to make sure that filename renaming won't affect files that
    #   are not renamed yet
    if range_start < new_frame_start:
        source_range = range(range_end, range_start - 1, -1)
        output_range = range(new_frame_end, new_frame_start - 1, -1)
    else:
        # This is the less likely situation as frame start will in most
        # cases be higher than Mark In.
        source_range = range(range_start, range_end + 1)
        output_range = range(new_frame_start, new_frame_end + 1)

    new_dst_filepaths = {}
    for src_frame, dst_frame in zip(source_range, output_range):
        src_filepath = filepaths_by_frame[src_frame]
        src_dirpath = os.path.dirname(src_filepath)
        dst_filename = filename_template.format(frame=dst_frame)
        dst_filepath = os.path.join(src_dirpath, dst_filename)

        os.rename(src_filepath, dst_filepath)

        new_dst_filepaths[dst_frame] = dst_filepath
    return new_dst_filepaths

102 openpype/hosts/tvpaint/plugins/load/load_workfile.py Normal file
@@ -0,0 +1,102 @@
import getpass
import os

from avalon.tvpaint import lib, pipeline, get_current_workfile_context
from avalon import api, io
from openpype.lib import (
    get_workfile_template_key_from_context,
    get_workdir_data
)
from openpype.api import Anatomy


class LoadWorkfile(pipeline.Loader):
    """Load workfile."""

    families = ["workfile"]
    representations = ["tvpp"]

    label = "Load Workfile"

    def load(self, context, name, namespace, options):
        # Load context of the current workfile as the first thing
        # - which context and extension it has
        host = api.registered_host()
        current_file = host.current_file()

        context = get_current_workfile_context()

        filepath = self.fname.replace("\\", "/")

        if not os.path.exists(filepath):
            raise FileExistsError(
                "The loaded file does not exist. Try downloading it first."
            )

        george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
            filepath
        )
        lib.execute_george_through_file(george_script)

        # Save workfile.
        host_name = "tvpaint"
        asset_name = context.get("asset")
        task_name = context.get("task")
        # For cases when the workfile is without context
        if not asset_name:
            asset_name = io.Session["AVALON_ASSET"]
            task_name = io.Session["AVALON_TASK"]

        project_doc = io.find_one({
            "type": "project"
        })
        asset_doc = io.find_one({
            "type": "asset",
            "name": asset_name
        })
        project_name = project_doc["name"]

        template_key = get_workfile_template_key_from_context(
            asset_name,
            task_name,
            host_name,
            project_name=project_name,
            dbcon=io
        )
        anatomy = Anatomy(project_name)

        data = get_workdir_data(project_doc, asset_doc, task_name, host_name)
        data["root"] = anatomy.roots
        data["user"] = getpass.getuser()

        template = anatomy.templates[template_key]["file"]

        # Define saving file extension
        if current_file:
            # Match the extension of the current file
            _, extension = os.path.splitext(current_file)
        else:
            # Fall back to the first extension supported for this host.
            extension = host.file_extensions()[0]

        data["ext"] = extension

        work_root = api.format_template_with_optional_keys(
            data, anatomy.templates[template_key]["folder"]
        )
        version = api.last_workfile_with_version(
            work_root, template, data, host.file_extensions()
        )[1]

        if version is None:
            version = 1
        else:
            version += 1

        data["version"] = version

        path = os.path.join(
            work_root,
            api.format_template_with_optional_keys(data, template)
        )
        host.save_file(path)
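
As a rough illustration of the quoting used in the "tv_LoadProject" call above (the path is a placeholder), the format call produces a George line where the project path stays intact even with spaces:

george_script = "tv_LoadProject '\"'\"{}\"'\"'".format("C:/work/sh010 v001.tvpp")
# -> tv_LoadProject '"'"C:/work/sh010 v001.tvpp"'"'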

@@ -1,12 +1,18 @@
import os
|
||||
import shutil
|
||||
import copy
|
||||
import tempfile
|
||||
|
||||
import pyblish.api
|
||||
from avalon.tvpaint import lib
|
||||
from openpype.hosts.tvpaint.api.lib import composite_images
|
||||
from PIL import Image, ImageDraw
|
||||
from openpype.hosts.tvpaint.lib import (
|
||||
calculate_layers_extraction_data,
|
||||
get_frame_filename_template,
|
||||
fill_reference_frames,
|
||||
composite_rendered_layers,
|
||||
rename_filepaths_by_frame_start
|
||||
)
|
||||
from PIL import Image
|
||||
|
||||
|
||||
class ExtractSequence(pyblish.api.Extractor):
|
||||
|
|
@ -111,14 +117,6 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
# -------------------------------------------------------------------
|
||||
|
||||
filename_template = self._get_filename_template(
|
||||
# Use the biggest number
|
||||
max(mark_out, frame_end)
|
||||
)
|
||||
ext = os.path.splitext(filename_template)[1].replace(".", "")
|
||||
|
||||
self.log.debug("Using file template \"{}\"".format(filename_template))
|
||||
|
||||
# Save to staging dir
|
||||
output_dir = instance.data.get("stagingDir")
|
||||
if not output_dir:
|
||||
|
|
@ -133,30 +131,30 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
)
|
||||
|
||||
if instance.data["family"] == "review":
|
||||
output_filenames, thumbnail_fullpath = self.render_review(
|
||||
filename_template, output_dir, mark_in, mark_out,
|
||||
scene_bg_color
|
||||
result = self.render_review(
|
||||
output_dir, mark_in, mark_out, scene_bg_color
|
||||
)
|
||||
else:
|
||||
# Render output
|
||||
output_filenames, thumbnail_fullpath = self.render(
|
||||
filename_template, output_dir,
|
||||
mark_in, mark_out,
|
||||
filtered_layers
|
||||
result = self.render(
|
||||
output_dir, mark_in, mark_out, filtered_layers
|
||||
)
|
||||
|
||||
output_filepaths_by_frame_idx, thumbnail_fullpath = result
|
||||
|
||||
# Change scene frame Start back to previous value
|
||||
lib.execute_george("tv_startframe {}".format(scene_start_frame))
|
||||
|
||||
# Sequence of one frame
|
||||
if not output_filenames:
|
||||
if not output_filepaths_by_frame_idx:
|
||||
self.log.warning("Extractor did not create any output.")
|
||||
return
|
||||
|
||||
repre_files = self._rename_output_files(
|
||||
filename_template, output_dir,
|
||||
mark_in, mark_out,
|
||||
output_frame_start, output_frame_end
|
||||
output_filepaths_by_frame_idx,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_frame_start
|
||||
)
|
||||
|
||||
# Fill tags and new families
|
||||
|
|
@ -169,9 +167,11 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
if single_file:
|
||||
repre_files = repre_files[0]
|
||||
|
||||
# Extension is hardcoded
# - changing the extension would require code changes
|
||||
new_repre = {
|
||||
"name": ext,
|
||||
"ext": ext,
|
||||
"name": "png",
|
||||
"ext": "png",
|
||||
"files": repre_files,
|
||||
"stagingDir": output_dir,
|
||||
"tags": tags
|
||||
|
|
@ -206,69 +206,28 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
}
|
||||
instance.data["representations"].append(thumbnail_repre)
|
||||
|
||||
def _get_filename_template(self, frame_end):
|
||||
"""Get filetemplate for rendered files.
|
||||
|
||||
This is simple template contains `{frame}{ext}` for sequential outputs
|
||||
and `single_file{ext}` for single file output. Output is rendered to
|
||||
temporary folder so filename should not matter as integrator change
|
||||
them.
|
||||
"""
|
||||
frame_padding = 4
|
||||
frame_end_str_len = len(str(frame_end))
|
||||
if frame_end_str_len > frame_padding:
|
||||
frame_padding = frame_end_str_len
|
||||
|
||||
return "{{frame:0>{}}}".format(frame_padding) + ".png"
|
||||
|
||||
def _rename_output_files(
|
||||
self, filename_template, output_dir,
|
||||
mark_in, mark_out, output_frame_start, output_frame_end
|
||||
self, filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
):
|
||||
# Use different ranges based on Mark In and output Frame Start values
# - this is to make sure that filename renaming won't affect files that
# are not renamed yet
|
||||
mark_start_is_less = bool(mark_in < output_frame_start)
|
||||
if mark_start_is_less:
|
||||
marks_range = range(mark_out, mark_in - 1, -1)
|
||||
frames_range = range(output_frame_end, output_frame_start - 1, -1)
|
||||
else:
|
||||
# This is less possible situation as frame start will be in most
|
||||
# cases higher than Mark In.
|
||||
marks_range = range(mark_in, mark_out + 1)
|
||||
frames_range = range(output_frame_start, output_frame_end + 1)
|
||||
new_filepaths_by_frame = rename_filepaths_by_frame_start(
|
||||
filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
)
|
||||
|
||||
repre_filepaths = []
|
||||
for mark, frame in zip(marks_range, frames_range):
|
||||
new_filename = filename_template.format(frame=frame)
|
||||
new_filepath = os.path.join(output_dir, new_filename)
|
||||
repre_filenames = []
|
||||
for filepath in new_filepaths_by_frame.values():
|
||||
repre_filenames.append(os.path.basename(filepath))
|
||||
|
||||
repre_filepaths.append(new_filepath)
|
||||
if mark_in < output_frame_start:
|
||||
repre_filenames = list(reversed(repre_filenames))
|
||||
|
||||
if mark != frame:
|
||||
old_filename = filename_template.format(frame=mark)
|
||||
old_filepath = os.path.join(output_dir, old_filename)
|
||||
os.rename(old_filepath, new_filepath)
|
||||
|
||||
# Reverse repre files order if output
|
||||
if mark_start_is_less:
|
||||
repre_filepaths = list(reversed(repre_filepaths))
|
||||
|
||||
return [
|
||||
os.path.basename(path)
|
||||
for path in repre_filepaths
|
||||
]
|
||||
return repre_filenames
|
||||
|
||||
def render_review(
|
||||
self, filename_template, output_dir, mark_in, mark_out, scene_bg_color
|
||||
self, output_dir, mark_in, mark_out, scene_bg_color
|
||||
):
|
||||
""" Export images from TVPaint using `tv_savesequence` command.
|
||||
|
||||
Args:
|
||||
filename_template (str): Filename template of an output. Template
|
||||
should already contain extension. Template may contain only
|
||||
keyword argument `{frame}` or index argument (for same value).
|
||||
Extension in template must match `save_mode`.
|
||||
output_dir (str): Directory where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
|
|
@ -279,6 +238,8 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
tuple: With 2 items first is list of filenames second is path to
|
||||
thumbnail.
|
||||
"""
|
||||
filename_template = get_frame_filename_template(mark_out)
|
||||
|
||||
self.log.debug("Preparing data for rendering.")
|
||||
first_frame_filepath = os.path.join(
|
||||
output_dir,
|
||||
|
|
@ -313,12 +274,13 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
lib.execute_george_through_file("\n".join(george_script_lines))
|
||||
|
||||
first_frame_filepath = None
|
||||
output_filenames = []
|
||||
for frame in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame)
|
||||
output_filenames.append(filename)
|
||||
|
||||
output_filepaths_by_frame_idx = {}
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame_idx)
|
||||
filepath = os.path.join(output_dir, filename)
|
||||
|
||||
output_filepaths_by_frame_idx[frame_idx] = filepath
|
||||
|
||||
if not os.path.exists(filepath):
|
||||
raise AssertionError(
|
||||
"Output was not rendered. File was not found {}".format(
|
||||
|
|
@ -337,16 +299,12 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
source_img = source_img.convert("RGB")
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
return output_filenames, thumbnail_filepath
|
||||
return output_filepaths_by_frame_idx, thumbnail_filepath
|
||||
|
||||
def render(self, filename_template, output_dir, mark_in, mark_out, layers):
|
||||
def render(self, output_dir, mark_in, mark_out, layers):
|
||||
""" Export images from TVPaint.
|
||||
|
||||
Args:
|
||||
filename_template (str): Filename template of an output. Template
|
||||
should already contain extension. Template may contain only
|
||||
keyword argument `{frame}` or index argument (for same value).
|
||||
Extension in template must match `save_mode`.
|
||||
output_dir (str): Directory where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
|
|
@ -360,12 +318,15 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
# Map layers by position
|
||||
layers_by_position = {}
|
||||
layers_by_id = {}
|
||||
layer_ids = []
|
||||
for layer in layers:
|
||||
layer_id = layer["layer_id"]
|
||||
position = layer["position"]
|
||||
layers_by_position[position] = layer
|
||||
layers_by_id[layer_id] = layer
|
||||
|
||||
layer_ids.append(layer["layer_id"])
|
||||
layer_ids.append(layer_id)
|
||||
|
||||
# Sort layer positions in reverse order
|
||||
sorted_positions = list(reversed(sorted(layers_by_position.keys())))
|
||||
|
|
@ -374,59 +335,45 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
self.log.debug("Collecting pre/post behavior of individual layers.")
|
||||
behavior_by_layer_id = lib.get_layers_pre_post_behavior(layer_ids)
|
||||
|
||||
tmp_filename_template = "pos_{pos}." + filename_template
|
||||
|
||||
files_by_position = {}
|
||||
for position in sorted_positions:
|
||||
layer = layers_by_position[position]
|
||||
behavior = behavior_by_layer_id[layer["layer_id"]]
|
||||
|
||||
files_by_frames = self._render_layer(
|
||||
layer,
|
||||
tmp_filename_template,
|
||||
output_dir,
|
||||
behavior,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
if files_by_frames:
|
||||
files_by_position[position] = files_by_frames
|
||||
else:
|
||||
self.log.warning((
|
||||
"Skipped layer \"{}\". Probably out of Mark In/Out range."
|
||||
).format(layer["name"]))
|
||||
|
||||
if not files_by_position:
|
||||
layer_names = set(layer["name"] for layer in layers)
|
||||
joined_names = ", ".join(
|
||||
["\"{}\"".format(name) for name in layer_names]
|
||||
)
|
||||
self.log.warning(
|
||||
"Layers {} do not have content in range {} - {}".format(
|
||||
joined_names, mark_in, mark_out
|
||||
)
|
||||
)
|
||||
return [], None
|
||||
|
||||
output_filepaths = self._composite_files(
|
||||
files_by_position,
|
||||
mark_in,
|
||||
mark_out,
|
||||
filename_template,
|
||||
output_dir
|
||||
exposure_frames_by_layer_id = lib.get_layers_exposure_frames(
|
||||
layer_ids, layers
|
||||
)
|
||||
self._cleanup_tmp_files(files_by_position)
|
||||
|
||||
output_filenames = [
|
||||
os.path.basename(filepath)
|
||||
for filepath in output_filepaths
|
||||
]
|
||||
extraction_data_by_layer_id = calculate_layers_extraction_data(
|
||||
layers,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
# Render layers
|
||||
filepaths_by_layer_id = {}
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
layer = layers_by_id[layer_id]
|
||||
filepaths_by_layer_id[layer_id] = self._render_layer(
|
||||
render_data, layer, output_dir
|
||||
)
|
||||
|
||||
# Prepare final filepaths where compositing should store result
|
||||
output_filepaths_by_frame = {}
|
||||
thumbnail_src_filepath = None
|
||||
if output_filepaths:
|
||||
thumbnail_src_filepath = output_filepaths[0]
|
||||
finale_template = get_frame_filename_template(mark_out)
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = finale_template.format(frame=frame_idx)
|
||||
|
||||
filepath = os.path.join(output_dir, filename)
|
||||
output_filepaths_by_frame[frame_idx] = filepath
|
||||
|
||||
if thumbnail_src_filepath is None:
|
||||
thumbnail_src_filepath = filepath
|
||||
|
||||
self.log.info("Started compositing of layer frames.")
|
||||
composite_rendered_layers(
|
||||
layers, filepaths_by_layer_id,
|
||||
mark_in, mark_out,
|
||||
output_filepaths_by_frame
|
||||
)
|
||||
|
||||
self.log.info("Compositing finished")
|
||||
thumbnail_filepath = None
|
||||
if thumbnail_src_filepath and os.path.exists(thumbnail_src_filepath):
|
||||
source_img = Image.open(thumbnail_src_filepath)
|
||||
|
|
@ -449,7 +396,7 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
).format(source_img.mode))
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
return output_filenames, thumbnail_filepath
|
||||
return output_filepaths_by_frame, thumbnail_filepath
|
||||
|
||||
def _get_review_bg_color(self):
|
||||
red = green = blue = 255
|
||||
|
|
@ -460,338 +407,43 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
red, green, blue = self.review_bg
|
||||
return (red, green, blue)
|
||||
|
||||
def _render_layer(
|
||||
self,
|
||||
layer,
|
||||
tmp_filename_template,
|
||||
output_dir,
|
||||
behavior,
|
||||
mark_in_index,
|
||||
mark_out_index
|
||||
):
|
||||
def _render_layer(self, render_data, layer, output_dir):
|
||||
frame_references = render_data["frame_references"]
|
||||
filenames_by_frame_index = render_data["filenames_by_frame_index"]
|
||||
|
||||
layer_id = layer["layer_id"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
|
||||
pre_behavior = behavior["pre"]
|
||||
post_behavior = behavior["post"]
|
||||
|
||||
# Check if layer is before mark in
|
||||
if frame_end_index < mark_in_index:
|
||||
# Skip layer if post behavior is "none"
|
||||
if post_behavior == "none":
|
||||
return {}
|
||||
|
||||
# Check if layer is after mark out
|
||||
elif frame_start_index > mark_out_index:
|
||||
# Skip layer if pre behavior is "none"
|
||||
if pre_behavior == "none":
|
||||
return {}
|
||||
|
||||
exposure_frames = lib.get_exposure_frames(
|
||||
layer_id, frame_start_index, frame_end_index
|
||||
)
|
||||
|
||||
if frame_start_index not in exposure_frames:
|
||||
exposure_frames.append(frame_start_index)
|
||||
|
||||
layer_files_by_frame = {}
|
||||
george_script_lines = [
|
||||
"tv_layerset {}".format(layer_id),
|
||||
"tv_SaveMode \"PNG\""
|
||||
]
|
||||
layer_position = layer["position"]
|
||||
|
||||
for frame_idx in exposure_frames:
|
||||
filename = tmp_filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
filepaths_by_frame = {}
|
||||
frames_to_render = []
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# None reference is skipped because does not have source
|
||||
if ref_idx is None:
|
||||
filepaths_by_frame[frame_idx] = None
|
||||
continue
|
||||
filename = filenames_by_frame_index[frame_idx]
|
||||
dst_path = "/".join([output_dir, filename])
|
||||
layer_files_by_frame[frame_idx] = os.path.normpath(dst_path)
|
||||
filepaths_by_frame[frame_idx] = dst_path
|
||||
if frame_idx != ref_idx:
|
||||
continue
|
||||
|
||||
frames_to_render.append(str(frame_idx))
|
||||
# Go to frame
|
||||
george_script_lines.append("tv_layerImage {}".format(frame_idx))
|
||||
# Store image to output
|
||||
george_script_lines.append("tv_saveimage \"{}\"".format(dst_path))
|
||||
|
||||
self.log.debug("Rendering Exposure frames {} of layer {} ({})".format(
|
||||
str(exposure_frames), layer_id, layer["name"]
|
||||
",".join(frames_to_render), layer_id, layer["name"]
|
||||
))
|
||||
# Let TVPaint render layer's image
|
||||
lib.execute_george_through_file("\n".join(george_script_lines))
|
||||
|
||||
# Fill frames between `frame_start_index` and `frame_end_index`
|
||||
self.log.debug((
|
||||
"Filling frames between first and last frame of layer ({} - {})."
|
||||
).format(frame_start_index + 1, frame_end_index + 1))
|
||||
self.log.debug("Filling frames not rendered frames.")
|
||||
fill_reference_frames(frame_references, filepaths_by_frame)
|
||||
|
||||
_debug_filled_frames = []
|
||||
prev_filepath = None
|
||||
for frame_idx in range(frame_start_index, frame_end_index + 1):
|
||||
if frame_idx in layer_files_by_frame:
|
||||
prev_filepath = layer_files_by_frame[frame_idx]
|
||||
continue
|
||||
|
||||
if prev_filepath is None:
|
||||
raise ValueError("BUG: First frame of layer was not rendered!")
|
||||
_debug_filled_frames.append(frame_idx)
|
||||
filename = tmp_filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(prev_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
self.log.debug("Filled frames {}".format(str(_debug_filled_frames)))
|
||||
|
||||
# Fill frames by pre/post behavior of layer
|
||||
self.log.debug((
|
||||
"Completing image sequence of layer by pre/post behavior."
|
||||
" PRE: {} | POST: {}"
|
||||
).format(pre_behavior, post_behavior))
|
||||
|
||||
# Pre behavior
|
||||
self._fill_frame_by_pre_behavior(
|
||||
layer,
|
||||
pre_behavior,
|
||||
mark_in_index,
|
||||
layer_files_by_frame,
|
||||
tmp_filename_template,
|
||||
output_dir
|
||||
)
|
||||
self._fill_frame_by_post_behavior(
|
||||
layer,
|
||||
post_behavior,
|
||||
mark_out_index,
|
||||
layer_files_by_frame,
|
||||
tmp_filename_template,
|
||||
output_dir
|
||||
)
|
||||
return layer_files_by_frame
|
||||
|
||||
def _fill_frame_by_pre_behavior(
|
||||
self,
|
||||
layer,
|
||||
pre_behavior,
|
||||
mark_in_index,
|
||||
layer_files_by_frame,
|
||||
filename_template,
|
||||
output_dir
|
||||
):
|
||||
layer_position = layer["position"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
frame_count = frame_end_index - frame_start_index + 1
|
||||
if mark_in_index >= frame_start_index:
|
||||
self.log.debug((
|
||||
"Skipping pre-behavior."
|
||||
" All frames after Mark In are rendered."
|
||||
))
|
||||
return
|
||||
|
||||
if pre_behavior == "none":
|
||||
# Empty frames are handled during `_composite_files`
|
||||
pass
|
||||
|
||||
elif pre_behavior == "hold":
|
||||
# Keep first frame for whole time
|
||||
eq_frame_filepath = layer_files_by_frame[frame_start_index]
|
||||
for frame_idx in range(mark_in_index, frame_start_index):
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif pre_behavior in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in reversed(range(mark_in_index, frame_start_index)):
|
||||
eq_frame_idx_offset = (
|
||||
(frame_end_index - frame_idx) % frame_count
|
||||
)
|
||||
eq_frame_idx = frame_end_index - eq_frame_idx_offset
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif pre_behavior == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in reversed(range(mark_in_index, frame_start_index)):
|
||||
eq_frame_idx_offset = (frame_start_index - frame_idx) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = (seq_len - eq_frame_idx_offset)
|
||||
eq_frame_idx = frame_start_index + eq_frame_idx_offset
|
||||
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
def _fill_frame_by_post_behavior(
|
||||
self,
|
||||
layer,
|
||||
post_behavior,
|
||||
mark_out_index,
|
||||
layer_files_by_frame,
|
||||
filename_template,
|
||||
output_dir
|
||||
):
|
||||
layer_position = layer["position"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
frame_count = frame_end_index - frame_start_index + 1
|
||||
if mark_out_index <= frame_end_index:
|
||||
self.log.debug((
|
||||
"Skipping post-behavior."
|
||||
" All frames up to Mark Out are rendered."
|
||||
))
|
||||
return
|
||||
|
||||
if post_behavior == "none":
|
||||
# Empty frames are handled during `_composite_files`
|
||||
pass
|
||||
|
||||
elif post_behavior == "hold":
|
||||
# Keep first frame for whole time
|
||||
eq_frame_filepath = layer_files_by_frame[frame_end_index]
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif post_behavior in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
eq_frame_idx = frame_idx % frame_count
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif post_behavior == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
eq_frame_idx_offset = (frame_idx - frame_end_index) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = seq_len - eq_frame_idx_offset
|
||||
eq_frame_idx = frame_end_index - eq_frame_idx_offset
|
||||
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
def _composite_files(
|
||||
self, files_by_position, frame_start, frame_end,
|
||||
filename_template, output_dir
|
||||
):
|
||||
"""Composite frames when more that one layer was exported.
|
||||
|
||||
This method is used when more than one layer is rendered out so and
|
||||
output should be composition of each frame of rendered layers.
|
||||
Missing frames are filled with transparent images.
|
||||
"""
|
||||
self.log.debug("Preparing files for compisiting.")
|
||||
# Prepare paths to images by frames into list where are stored
|
||||
# in order of compositing.
|
||||
images_by_frame = {}
|
||||
for frame_idx in range(frame_start, frame_end + 1):
|
||||
images_by_frame[frame_idx] = []
|
||||
for position in sorted(files_by_position.keys(), reverse=True):
|
||||
position_data = files_by_position[position]
|
||||
if frame_idx in position_data:
|
||||
filepath = position_data[frame_idx]
|
||||
images_by_frame[frame_idx].append(filepath)
|
||||
|
||||
output_filepaths = []
|
||||
missing_frame_paths = []
|
||||
random_frame_path = None
|
||||
for frame_idx in sorted(images_by_frame.keys()):
|
||||
image_filepaths = images_by_frame[frame_idx]
|
||||
output_filename = filename_template.format(frame=frame_idx)
|
||||
output_filepath = os.path.join(output_dir, output_filename)
|
||||
output_filepaths.append(output_filepath)
|
||||
|
||||
# Store information about missing frame and skip
|
||||
if not image_filepaths:
|
||||
missing_frame_paths.append(output_filepath)
|
||||
continue
|
||||
|
||||
# Just rename the file if is no need of compositing
|
||||
if len(image_filepaths) == 1:
|
||||
os.rename(image_filepaths[0], output_filepath)
|
||||
|
||||
# Composite images
|
||||
else:
|
||||
composite_images(image_filepaths, output_filepath)
|
||||
|
||||
# Store path of random output image that will 100% exist after all
|
||||
# multiprocessing as mockup for missing frames
|
||||
if random_frame_path is None:
|
||||
random_frame_path = output_filepath
|
||||
|
||||
self.log.debug(
|
||||
"Creating transparent images for frames without render {}.".format(
|
||||
str(missing_frame_paths)
|
||||
)
|
||||
)
|
||||
# Fill the sequence with transparent frames
|
||||
transparent_filepath = None
|
||||
for filepath in missing_frame_paths:
|
||||
if transparent_filepath is None:
|
||||
img_obj = Image.open(random_frame_path)
|
||||
painter = ImageDraw.Draw(img_obj)
|
||||
painter.rectangle((0, 0, *img_obj.size), fill=(0, 0, 0, 0))
|
||||
img_obj.save(filepath)
|
||||
transparent_filepath = filepath
|
||||
else:
|
||||
self._copy_image(transparent_filepath, filepath)
|
||||
return output_filepaths
|
||||
|
||||
def _cleanup_tmp_files(self, files_by_position):
|
||||
"""Remove temporary files that were used for compositing."""
|
||||
for data in files_by_position.values():
|
||||
for filepath in data.values():
|
||||
if os.path.exists(filepath):
|
||||
os.remove(filepath)
|
||||
|
||||
def _copy_image(self, src_path, dst_path):
|
||||
"""Create a copy of an image.
|
||||
|
||||
This was added to be able easier change copy method.
|
||||
"""
|
||||
# Create hardlink of image instead of copying if possible
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_path, dst_path)
|
||||
else:
|
||||
shutil.copy(src_path, dst_path)
|
||||
return filepaths_by_frame

21 openpype/hosts/tvpaint/worker/__init__.py Normal file
@@ -0,0 +1,21 @@
from .worker_job import (
    JobFailed,
    ExecuteSimpleGeorgeScript,
    ExecuteGeorgeScript,
    CollectSceneData,
    SenderTVPaintCommands,
    ProcessTVPaintCommands
)

from .worker import main

__all__ = (
    "JobFailed",
    "ExecuteSimpleGeorgeScript",
    "ExecuteGeorgeScript",
    "CollectSceneData",
    "SenderTVPaintCommands",
    "ProcessTVPaintCommands",

    "main"
)

133 openpype/hosts/tvpaint/worker/worker.py Normal file
@@ -0,0 +1,133 @@
import signal
import time
import asyncio

from avalon.tvpaint.communication_server import (
    BaseCommunicator,
    CommunicationWrapper
)
from openpype_modules.job_queue.job_workers import WorkerJobsConnection

from .worker_job import ProcessTVPaintCommands


class TVPaintWorkerCommunicator(BaseCommunicator):
    """Modified communicator which takes care of processing jobs.

    Received jobs are sent to TVPaint by parsing 'ProcessTVPaintCommands'.
    """
    def __init__(self, server_url):
        super().__init__()

        self.return_code = 1
        self._server_url = server_url
        self._worker_connection = None

    def _start_webserver(self):
        """Create connection to the workers server before the TVPaint server."""
        loop = self.websocket_server.loop
        self._worker_connection = WorkerJobsConnection(
            self._server_url, "tvpaint", loop
        )
        asyncio.ensure_future(
            self._worker_connection.main_loop(register_worker=False),
            loop=loop
        )

        super()._start_webserver()

    def _on_client_connect(self, *args, **kwargs):
        super()._on_client_connect(*args, **kwargs)
        # Register as "ready to work" worker
        self._worker_connection.register_as_worker()

    def stop(self):
        """Stop worker connection and TVPaint server."""
        self._worker_connection.stop()
        self.return_code = 0
        super().stop()

    @property
    def current_job(self):
        """Retrieve job which should be processed."""
        if self._worker_connection:
            return self._worker_connection.current_job
        return None

    def _check_process(self):
        if self.process is None:
            return True

        if self.process.poll() is not None:
            asyncio.ensure_future(
                self._worker_connection.disconnect(),
                loop=self.websocket_server.loop
            )
            self._exit()
            return False
        return True

    def _process_job(self):
        job = self.current_job
        if job is None:
            return

        # Prepare variables used for sending the response
        success = False
        message = "Unknown function"
        data = None
        job_data = job["data"]
        workfile = job_data["workfile"]
        # Currently only the "commands" function can be processed
        if job_data.get("function") == "commands":
            try:
                commands = ProcessTVPaintCommands(
                    workfile, job_data["commands"], self
                )
                commands.execute()
                data = commands.response_data()
                success = True
                message = "Executed"

            except Exception as exc:
                message = "Error on worker: {}".format(str(exc))

        self._worker_connection.finish_job(success, message, data)

    def main_loop(self):
        """Main loop where jobs are processed.

        The server is stopped by killing this process or the TVPaint process.
        """
        while self.server_is_running:
            if self._check_process():
                self._process_job()
            time.sleep(1)

        return self.return_code


def _start_tvpaint(tvpaint_executable_path, server_url):
    communicator = TVPaintWorkerCommunicator(server_url)
    CommunicationWrapper.set_communicator(communicator)
    communicator.launch([tvpaint_executable_path])


def main(tvpaint_executable_path, server_url):
    # Register termination signal handler
    def signal_handler(*_args):
        print("Termination signal received. Stopping.")
        if CommunicationWrapper.communicator is not None:
            CommunicationWrapper.communicator.stop()

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    _start_tvpaint(tvpaint_executable_path, server_url)

    communicator = CommunicationWrapper.communicator
    if communicator is None:
        print("Communicator is not set")
        return 1

    return communicator.main_loop()
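
A hedged sketch of how this module's "main" entry point might be wired from a small launcher script; the executable path and server URL below are placeholders, not values from the repository.

import sys

from openpype.hosts.tvpaint.worker import main

if __name__ == "__main__":
    sys.exit(main(
        "C:/Program Files/TVPaint Animation/TVPaint.exe",  # placeholder path
        "http://localhost:8079"  # placeholder job queue server URL
    ))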

537 openpype/hosts/tvpaint/worker/worker_job.py Normal file
@@ -0,0 +1,537 @@
import os
|
||||
import tempfile
|
||||
import inspect
|
||||
import copy
|
||||
import json
|
||||
import time
|
||||
from uuid import uuid4
|
||||
from abc import ABCMeta, abstractmethod, abstractproperty
|
||||
|
||||
import six
|
||||
|
||||
from openpype.api import PypeLogger
|
||||
from openpype.modules import ModulesManager
|
||||
|
||||
|
||||
TMP_FILE_PREFIX = "opw_tvp_"
|
||||
|
||||
|
||||
class JobFailed(Exception):
|
||||
"""Raised when job was sent and finished unsuccessfully."""
|
||||
def __init__(self, job_status):
|
||||
job_state = job_status["state"]
|
||||
job_message = job_status["message"] or "Unknown issue"
|
||||
error_msg = (
|
||||
"Job didn't finish properly."
|
||||
" Job state: \"{}\" | Job message: \"{}\""
|
||||
).format(job_state, job_message)
|
||||
|
||||
self.job_status = job_status
|
||||
|
||||
super().__init__(error_msg)
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class BaseCommand:
|
||||
"""Abstract TVPaint command which can be executed through worker.
|
||||
|
||||
Each command must have unique name and implemented 'execute' and
|
||||
'from_existing' methods.
|
||||
|
||||
Command also have id which is created on command creation.
|
||||
|
||||
The idea is that command is just a data container on sender side send
|
||||
througth server to a worker where is replicated one by one, executed and
|
||||
result sent back to sender through server.
|
||||
"""
|
||||
@abstractproperty
|
||||
def name(self):
|
||||
"""Command name (must be unique)."""
|
||||
pass
|
||||
|
||||
def __init__(self, data=None):
|
||||
if data is None:
|
||||
data = {}
|
||||
else:
|
||||
data = copy.deepcopy(data)
|
||||
|
||||
# Use 'id' from data when replicating on process side
|
||||
command_id = data.get("id")
|
||||
if command_id is None:
|
||||
command_id = str(uuid4())
|
||||
data["id"] = command_id
|
||||
data["command"] = self.name
|
||||
|
||||
self._parent = None
|
||||
self._result = None
|
||||
self._command_data = data
|
||||
self._done = False
|
||||
|
||||
def job_queue_root(self):
|
||||
"""Access to job queue root.
|
||||
|
||||
Job queue root is shared access point to files shared across senders
|
||||
and workers.
|
||||
"""
|
||||
if self._parent is None:
|
||||
return None
|
||||
return self._parent.job_queue_root()
|
||||
|
||||
def set_parent(self, parent):
|
||||
self._parent = parent
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
"""Command id."""
|
||||
return self._command_data["id"]
|
||||
|
||||
@property
|
||||
def parent(self):
|
||||
"""Parent of command expected type of 'TVPaintCommands'."""
|
||||
return self._parent
|
||||
|
||||
@property
|
||||
def communicator(self):
|
||||
"""TVPaint communicator.
|
||||
|
||||
Available only on worker side.
|
||||
"""
|
||||
return self._parent.communicator
|
||||
|
||||
@property
|
||||
def done(self):
|
||||
"""Is command done."""
|
||||
return self._done
|
||||
|
||||
def set_done(self):
|
||||
"""Change state of done."""
|
||||
self._done = True
|
||||
|
||||
def set_result(self, result):
|
||||
"""Set result of executed command."""
|
||||
self._result = result
|
||||
|
||||
def result(self):
|
||||
"""Result of command."""
|
||||
return copy.deepcopy(self._result)
|
||||
|
||||
def response_data(self):
|
||||
"""Data send as response to sender."""
|
||||
return {
|
||||
"id": self.id,
|
||||
"result": self._result,
|
||||
"done": self._done
|
||||
}
|
||||
|
||||
def command_data(self):
|
||||
"""Raw command data."""
|
||||
return copy.deepcopy(self._command_data)
|
||||
|
||||
@abstractmethod
|
||||
def execute(self):
|
||||
"""Execute command on worker side."""
|
||||
pass
|
||||
|
||||
@classmethod
|
||||
@abstractmethod
|
||||
def from_existing(cls, data):
|
||||
"""Recreate object based on passed data."""
|
||||
pass
|
||||
|
||||
def execute_george(self, george_script):
|
||||
"""Execute george script in TVPaint."""
|
||||
return self.parent.execute_george(george_script)
|
||||
|
||||
def execute_george_through_file(self, george_script):
|
||||
"""Execute george script through temp file in TVPaint."""
|
||||
return self.parent.execute_george_through_file(george_script)
|
||||
|
||||
|
||||
class ExecuteSimpleGeorgeScript(BaseCommand):
|
||||
"""Execute simple george script in TVPaint.
|
||||
|
||||
Args:
|
||||
script(str): Script that will be executed.
|
||||
"""
|
||||
name = "execute_george_simple"
|
||||
|
||||
def __init__(self, script, data=None):
|
||||
data = data or {}
|
||||
data["script"] = script
|
||||
self._script = script
|
||||
super().__init__(data)
|
||||
|
||||
def execute(self):
|
||||
self._result = self.execute_george(self._script)
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
script = data.pop("script")
|
||||
return cls(script, data)
|
||||
|
||||
|
||||
class ExecuteGeorgeScript(BaseCommand):
|
||||
"""Execute multiline george script in TVPaint.
|
||||
|
||||
Args:
|
||||
script_lines(list): Lines that will be executed in george script
|
||||
through temp george file.
|
||||
tmp_file_keys(list): List of formatting keys in george script that
|
||||
require replacement with path to a temp file where result will be
|
||||
stored. The content of file is stored to result by the key.
|
||||
root_dir_key(str): Formatting key that will be replaced in george
|
||||
script with job queue root which can be different on worker side.
|
||||
data(dict): Raw data about command.
|
||||
"""
|
||||
name = "execute_george_through_file"
|
||||
|
||||
def __init__(
|
||||
self, script_lines, tmp_file_keys=None, root_dir_key=None, data=None
|
||||
):
|
||||
data = data or {}
|
||||
if not tmp_file_keys:
|
||||
tmp_file_keys = data.get("tmp_file_keys") or []
|
||||
|
||||
data["script_lines"] = script_lines
|
||||
data["tmp_file_keys"] = tmp_file_keys
|
||||
data["root_dir_key"] = root_dir_key
|
||||
self._script_lines = script_lines
|
||||
self._tmp_file_keys = tmp_file_keys
|
||||
self._root_dir_key = root_dir_key
|
||||
super().__init__(data)
|
||||
|
||||
def execute(self):
|
||||
filepath_by_key = {}
|
||||
script = self._script_lines
|
||||
if isinstance(script, list):
|
||||
script = "\n".join(script)
|
||||
|
||||
# Replace temporary files in george script
|
||||
for key in self._tmp_file_keys:
|
||||
output_file = tempfile.NamedTemporaryFile(
|
||||
mode="w", prefix=TMP_FILE_PREFIX, suffix=".txt", delete=False
|
||||
)
|
||||
output_file.close()
|
||||
format_key = "{" + key + "}"
|
||||
output_path = output_file.name.replace("\\", "/")
|
||||
script = script.replace(format_key, output_path)
|
||||
filepath_by_key[key] = output_path
|
||||
|
||||
# Replace job queue root in script
|
||||
if self._root_dir_key:
|
||||
job_queue_root = self.job_queue_root()
|
||||
format_key = "{" + self._root_dir_key + "}"
|
||||
script = script.replace(
|
||||
format_key, job_queue_root.replace("\\", "/")
|
||||
)
|
||||
|
||||
# Execute the script
|
||||
self.execute_george_through_file(script)
|
||||
|
||||
# Store result of temporary files
|
||||
result = {}
|
||||
for key, filepath in filepath_by_key.items():
|
||||
with open(filepath, "r") as stream:
|
||||
data = stream.read()
|
||||
result[key] = data
|
||||
os.remove(filepath)
|
||||
|
||||
self._result = result
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
"""Recreate the object from data."""
|
||||
script_lines = data.pop("script_lines")
|
||||
tmp_file_keys = data.pop("tmp_file_keys", None)
|
||||
root_dir_key = data.pop("root_dir_key", None)
|
||||
return cls(script_lines, tmp_file_keys, root_dir_key, data)
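
A minimal sketch of how "tmp_file_keys" is meant to be used (the George line and key name are illustrative, not from the repository): the "{output_path}" placeholder is replaced with a temporary file path on the worker, and the file's content comes back in the command result under the same key.

command = ExecuteGeorgeScript(
    ["george line writing its output into \"{output_path}\""],  # placeholder
    tmp_file_keys=["output_path"]
)
# after execute() the result maps "output_path" to the temp file content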
|
||||
|
||||
|
||||
class CollectSceneData(BaseCommand):
|
||||
"""Helper command which will collect all usefull info about workfile.
|
||||
|
||||
Result is dictionary with all layers data, exposure frames by layer ids
|
||||
pre/post behavior of layers by their ids, group information and scene data.
|
||||
"""
|
||||
name = "collect_scene_data"
|
||||
|
||||
def execute(self):
|
||||
from avalon.tvpaint.lib import (
|
||||
get_layers_data,
|
||||
get_groups_data,
|
||||
get_layers_pre_post_behavior,
|
||||
get_layers_exposure_frames,
|
||||
get_scene_data
|
||||
)
|
||||
|
||||
groups_data = get_groups_data(communicator=self.communicator)
|
||||
layers_data = get_layers_data(communicator=self.communicator)
|
||||
layer_ids = [
|
||||
layer_data["layer_id"]
|
||||
for layer_data in layers_data
|
||||
]
|
||||
pre_post_beh_by_layer_id = get_layers_pre_post_behavior(
|
||||
layer_ids, communicator=self.communicator
|
||||
)
|
||||
exposure_frames_by_layer_id = get_layers_exposure_frames(
|
||||
layer_ids, layers_data, communicator=self.communicator
|
||||
)
|
||||
|
||||
self._result = {
|
||||
"layers_data": layers_data,
|
||||
"exposure_frames_by_layer_id": exposure_frames_by_layer_id,
|
||||
"pre_post_beh_by_layer_id": pre_post_beh_by_layer_id,
|
||||
"groups_data": groups_data,
|
||||
"scene_data": get_scene_data(self.communicator)
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
return cls(data)
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class TVPaintCommands:
|
||||
"""Wrapper around TVPaint commands to be able send multiple commands.
|
||||
|
||||
Commands may send one or multiple commands at once. Also gives api access
|
||||
for commands info.
|
||||
|
||||
Base for sender and receiver which are extending the logic for their
|
||||
purposes. One of differences is preparation of workfile path.
|
||||
|
||||
Args:
|
||||
workfile(str): Path to workfile.
|
||||
job_queue_module(JobQueueModule): Object of OpenPype module JobQueue.
|
||||
"""
|
||||
def __init__(self, workfile, job_queue_module=None):
|
||||
self._log = None
|
||||
self._commands = []
|
||||
self._command_classes_by_name = None
|
||||
if job_queue_module is None:
|
||||
manager = ModulesManager()
|
||||
job_queue_module = manager.modules_by_name["job_queue"]
|
||||
self._job_queue_module = job_queue_module
|
||||
|
||||
self._workfile = self._prepare_workfile(workfile)
|
||||
|
||||
@abstractmethod
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Modification of workfile path on initialization to match platorm."""
|
||||
pass
|
||||
|
||||
def job_queue_root(self):
|
||||
"""Job queue root for current platform using current settings."""
|
||||
return self._job_queue_module.get_jobs_root_from_settings()
|
||||
|
||||
@property
|
||||
def log(self):
|
||||
"""Access to logger object."""
|
||||
if self._log is None:
|
||||
self._log = PypeLogger.get_logger(self.__class__.__name__)
|
||||
return self._log
|
||||
|
||||
@property
|
||||
def classes_by_name(self):
|
||||
"""Prepare commands classes for validation and recreation of commands.
|
||||
|
||||
It is expected that all commands are defined in this python file so
|
||||
we're looking for all implementation of BaseCommand in globals.
|
||||
"""
|
||||
if self._command_classes_by_name is None:
|
||||
command_classes_by_name = {}
|
||||
for attr in globals().values():
|
||||
if (
|
||||
not inspect.isclass(attr)
|
||||
or not issubclass(attr, BaseCommand)
|
||||
or attr is BaseCommand
|
||||
):
|
||||
continue
|
||||
|
||||
if inspect.isabstract(attr):
|
||||
self.log.debug(
|
||||
"Skipping abstract class {}".format(attr.__name__)
|
||||
)
|
||||
command_classes_by_name[attr.name] = attr
|
||||
self._command_classes_by_name = command_classes_by_name
|
||||
|
||||
return self._command_classes_by_name
|
||||
|
||||
def add_command(self, command):
|
||||
"""Add command to process."""
|
||||
command.set_parent(self)
|
||||
self._commands.append(command)
|
||||
|
||||
def result(self):
|
||||
"""Result of commands in list in which they were processed."""
|
||||
return [
|
||||
command.result()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
def response_data(self):
|
||||
"""Data which should be send from worker."""
|
||||
return [
|
||||
command.response_data()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
|
||||
class SenderTVPaintCommands(TVPaintCommands):
|
||||
"""Sender implementation of TVPaint Commands."""
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Remove job queue root from workfile path.
|
||||
|
||||
It is expected that worker will add it's root before passed workfile.
|
||||
"""
|
||||
new_workfile = workfile.replace("\\", "/")
|
||||
job_queue_root = self.job_queue_root().replace("\\", "/")
|
||||
if job_queue_root not in new_workfile:
|
||||
raise ValueError((
|
||||
"Workfile is not located in JobQueue root."
|
||||
" Workfile path: \"{}\". JobQueue root: \"{}\""
|
||||
).format(workfile, job_queue_root))
|
||||
return new_workfile.replace(job_queue_root, "")
|
||||
|
||||
def commands_data(self):
|
||||
"""Commands data to be able recreate them."""
|
||||
return [
|
||||
command.command_data()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
def to_job_data(self):
|
||||
"""Convert commands to job data before sending to workers server."""
|
||||
return {
|
||||
"workfile": self._workfile,
|
||||
"function": "commands",
|
||||
"commands": self.commands_data()
|
||||
}
|
||||
|
||||
def set_result(self, result):
|
||||
commands_by_id = {
|
||||
command.id: command
|
||||
for command in self._commands
|
||||
}
|
||||
|
||||
for item in result:
|
||||
command = commands_by_id[item["id"]]
|
||||
command.set_result(item["result"])
|
||||
command.set_done()
|
||||
|
||||
def _send_job(self):
|
||||
"""Send job to a workers server."""
|
||||
# Send job data to job queue server
|
||||
job_data = self.to_job_data()
|
||||
self.log.debug("Sending job to JobQueue server.\n{}".format(
|
||||
json.dumps(job_data, indent=4)
|
||||
))
|
||||
job_id = self._job_queue_module.send_job("tvpaint", job_data)
|
||||
self.log.info((
|
||||
"Job sent to JobQueue server and got id \"{}\"."
|
||||
" Waiting for finishing the job."
|
||||
).format(job_id))
|
||||
|
||||
return job_id
|
||||
|
||||
def send_job_and_wait(self):
|
||||
"""Send job to workers server and wait for response.
|
||||
|
||||
Result of job is stored into the object.
|
||||
|
||||
Raises:
|
||||
JobFailed: When job was finished but not successfully.
|
||||
"""
|
||||
job_id = self._send_job()
|
||||
while True:
|
||||
job_status = self._job_queue_module.get_job_status(job_id)
|
||||
if job_status["done"]:
|
||||
break
|
||||
time.sleep(1)
|
||||
|
||||
# Check if job state is done
|
||||
if job_status["state"] != "done":
|
||||
raise JobFailed(job_status)
|
||||
|
||||
self.set_result(job_status["result"])
|
||||
|
||||
self.log.debug("Job is done and result is stored.")
|
||||
|
||||
|
||||
class ProcessTVPaintCommands(TVPaintCommands):
|
||||
"""Worker side of TVPaint Commands.
|
||||
|
||||
It is expected this object is created only on worker's side from existing
|
||||
data loaded from job.
|
||||
|
||||
Workfile path logic is based on 'SenderTVPaintCommands'.
|
||||
"""
|
||||
def __init__(self, workfile, commands, communicator):
|
||||
super(ProcessTVPaintCommands, self).__init__(workfile)
|
||||
|
||||
self._communicator = communicator
|
||||
|
||||
self.commands_from_data(commands)
|
||||
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Preprend job queue root before passed workfile."""
|
||||
workfile = workfile.replace("\\", "/")
|
||||
job_queue_root = self.job_queue_root().replace("\\", "/")
|
||||
new_workfile = "/".join([job_queue_root, workfile])
|
||||
while "//" in new_workfile:
|
||||
new_workfile = new_workfile.replace("//", "/")
|
||||
return os.path.normpath(new_workfile)
|
||||
|
||||
@property
|
||||
def communicator(self):
|
||||
"""Access to TVPaint communicator."""
|
||||
return self._communicator
|
||||
|
||||
def commands_from_data(self, commands_data):
|
||||
"""Recreate command from passed data."""
|
||||
for command_data in commands_data:
|
||||
command_name = command_data["command"]
|
||||
|
||||
klass = self.classes_by_name[command_name]
|
||||
command = klass.from_existing(command_data)
|
||||
self.add_command(command)
|
||||
|
||||
def execute_george(self, george_script):
|
||||
"""Helper method to execute george script."""
|
||||
return self.communicator.execute_george(george_script)
|
||||
|
||||
def execute_george_through_file(self, george_script):
|
||||
"""Helper method to execute george script through temp file."""
|
||||
temporary_file = tempfile.NamedTemporaryFile(
|
||||
mode="w", prefix=TMP_FILE_PREFIX, suffix=".grg", delete=False
|
||||
)
|
||||
temporary_file.write(george_script)
|
||||
temporary_file.close()
|
||||
temp_file_path = temporary_file.name.replace("\\", "/")
|
||||
self.execute_george("tv_runscript {}".format(temp_file_path))
|
||||
os.remove(temp_file_path)
|
||||
|
||||
def _open_workfile(self):
|
||||
"""Open workfile in TVPaint."""
|
||||
workfile = self._workfile
|
||||
print("Opening workfile {}".format(workfile))
|
||||
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(workfile)
|
||||
self.execute_george_through_file(george_script)
|
||||
|
||||
def _close_workfile(self):
|
||||
"""Close workfile in TVPaint."""
|
||||
print("Closing workfile")
|
||||
self.execute_george_through_file("tv_projectclose")
|
||||
|
||||
def execute(self):
|
||||
"""Execute commands."""
|
||||
# First open the workfile
|
||||
self._open_workfile()
|
||||
# Execute commands one by one
|
||||
# TODO maybe stop processing when command fails?
|
||||
print("Commands execution started ({})".format(len(self._commands)))
|
||||
for command in self._commands:
|
||||
command.execute()
|
||||
command.set_done()
|
||||
# Finally close workfile
|
||||
self._close_workfile()

@@ -0,0 +1,255 @@
"""
|
||||
Requires:
|
||||
CollectTVPaintWorkfileData
|
||||
|
||||
Provides:
|
||||
Instances
|
||||
"""
|
||||
import os
|
||||
import re
|
||||
import copy
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
|
||||
|
||||
class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
||||
label = "Collect TVPaint Instances"
|
||||
order = pyblish.api.CollectorOrder + 0.2
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
workfile_family = "workfile"
|
||||
workfile_variant = ""
|
||||
review_family = "review"
|
||||
review_variant = "Main"
|
||||
render_pass_family = "renderPass"
|
||||
render_layer_family = "renderLayer"
|
||||
render_layer_pass_name = "beauty"
|
||||
|
||||
# Set by settings
|
||||
# Regex must constain 'layer' and 'variant' groups which are extracted from
|
||||
# name when instances are created
|
||||
layer_name_regex = r"(?P<layer>L[0-9]{3}_\w+)_(?P<pass>.+)"
|
||||
|
||||
def process(self, context):
|
||||
# Prepare compiled regex
|
||||
layer_name_regex = re.compile(self.layer_name_regex)
|
||||
|
||||
layers_data = context.data["layersData"]
|
||||
|
||||
host_name = "tvpaint"
|
||||
task_name = context.data.get("task")
|
||||
asset_doc = context.data["assetEntity"]
|
||||
project_doc = context.data["projectEntity"]
|
||||
project_name = project_doc["name"]
|
||||
|
||||
new_instances = []
|
||||
|
||||
# Workfile instance
|
||||
workfile_subset_name = get_subset_name_with_asset_doc(
|
||||
self.workfile_family,
|
||||
self.workfile_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
)
|
||||
workfile_instance = self._create_workfile_instance(
|
||||
context, workfile_subset_name
|
||||
)
|
||||
new_instances.append(workfile_instance)
|
||||
|
||||
# Review instance
|
||||
review_subset_name = get_subset_name_with_asset_doc(
|
||||
self.review_family,
|
||||
self.review_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
)
|
||||
review_instance = self._create_review_instance(
|
||||
context, review_subset_name
|
||||
)
|
||||
new_instances.append(review_instance)
|
||||
|
||||
# Get render layers and passes from TVPaint layers
|
||||
# - it's based on regex extraction
|
||||
layers_by_layer_and_pass = {}
|
||||
for layer in layers_data:
|
||||
# Filter only visible layers
|
||||
if not layer["visible"]:
|
||||
continue
|
||||
|
||||
result = layer_name_regex.search(layer["name"])
|
||||
# Layer name not matching layer name regex
|
||||
# should raise an exception?
|
||||
if result is None:
|
||||
continue
|
||||
render_layer = result.group("layer")
|
||||
render_pass = result.group("pass")
|
||||
|
||||
render_pass_mapping = layers_by_layer_and_pass.get(
|
||||
render_layer
|
||||
)
|
||||
if render_pass_mapping is None:
|
||||
render_pass_mapping = {}
|
||||
layers_by_layer_and_pass[render_layer] = render_pass_mapping
|
||||
|
||||
if render_pass not in render_pass_mapping:
|
||||
render_pass_mapping[render_pass] = []
|
||||
render_pass_mapping[render_pass].append(copy.deepcopy(layer))
|
||||
|
||||
layers_by_render_layer = {}
|
||||
for render_layer, render_passes in layers_by_layer_and_pass.items():
|
||||
render_layer_layers = []
|
||||
layers_by_render_layer[render_layer] = render_layer_layers
|
||||
for render_pass, layers in render_passes.items():
|
||||
render_layer_layers.extend(copy.deepcopy(layers))
|
||||
dynamic_data = {
|
||||
"render_pass": render_pass,
|
||||
"render_layer": render_layer,
|
||||
# Override family for subset name
|
||||
"family": "render"
|
||||
}
|
||||
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
self.render_pass_family,
|
||||
render_pass,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
)
|
||||
|
||||
instance = self._create_render_pass_instance(
|
||||
context, layers, subset_name
|
||||
)
|
||||
new_instances.append(instance)
|
||||
|
||||
for render_layer, layers in layers_by_render_layer.items():
|
||||
variant = render_layer
|
||||
dynamic_data = {
|
||||
"render_pass": self.render_layer_pass_name,
|
||||
"render_layer": render_layer,
|
||||
# Override family for subset name
|
||||
"family": "render"
|
||||
}
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
self.render_pass_family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
)
|
||||
instance = self._create_render_layer_instance(
|
||||
context, layers, subset_name
|
||||
)
|
||||
new_instances.append(instance)
|
||||
|
||||
# Set data same for all instances
|
||||
frame_start = context.data.get("frameStart")
|
||||
frame_end = context.data.get("frameEnd")
|
||||
|
||||
for instance in new_instances:
|
||||
if (
|
||||
instance.data.get("frameStart") is None
|
||||
or instance.data.get("frameEnd") is None
|
||||
):
|
||||
instance.data["frameStart"] = frame_start
|
||||
instance.data["frameEnd"] = frame_end
|
||||
|
||||
if instance.data.get("asset") is None:
|
||||
instance.data["asset"] = asset_doc["name"]
|
||||
|
||||
if instance.data.get("task") is None:
|
||||
instance.data["task"] = task_name
|
||||
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = []
|
||||
|
||||
if "source" not in instance.data:
|
||||
instance.data["source"] = "webpublisher"
|
||||
|
||||
def _create_workfile_instance(self, context, subset_name):
|
||||
workfile_path = context.data["workfilePath"]
|
||||
staging_dir = os.path.dirname(workfile_path)
|
||||
filename = os.path.basename(workfile_path)
|
||||
ext = os.path.splitext(filename)[-1]
|
||||
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"label": subset_name,
|
||||
"subset": subset_name,
|
||||
"family": self.workfile_family,
|
||||
"families": [],
|
||||
"stagingDir": staging_dir,
|
||||
"representations": [{
|
||||
"name": ext.lstrip("."),
|
||||
"ext": ext.lstrip("."),
|
||||
"files": filename,
|
||||
"stagingDir": staging_dir
|
||||
}]
|
||||
})
|
||||
|
||||
def _create_review_instance(self, context, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
layers_data = context.data["layersData"]
|
||||
# Filter hidden layers
|
||||
filtered_layers_data = [
|
||||
copy.deepcopy(layer)
|
||||
for layer in layers_data
|
||||
if layer["visible"]
|
||||
]
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"label": subset_name,
|
||||
"subset": subset_name,
|
||||
"family": self.review_family,
|
||||
"families": [],
|
||||
"layers": filtered_layers_data,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_render_pass_instance(self, context, layers, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
# Global instance data modifications
|
||||
# Fill families
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"subset": subset_name,
|
||||
"label": subset_name,
|
||||
"family": self.render_pass_family,
|
||||
# Add `review` family for thumbnail integration
|
||||
"families": [self.render_pass_family, "review"],
|
||||
"representations": [],
|
||||
"layers": layers,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_render_layer_instance(self, context, layers, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
# Global instance data modifications
|
||||
# Fill families
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"subset": subset_name,
|
||||
"label": subset_name,
|
||||
"family": self.render_pass_family,
|
||||
# Add `review` family for thumbnail integration
|
||||
"families": [self.render_pass_family, "review"],
|
||||
"representations": [],
|
||||
"layers": layers,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_staging_dir(self, context, subset_name):
|
||||
context_staging_dir = context.data["contextStagingDir"]
|
||||
staging_dir = os.path.join(context_staging_dir, subset_name)
|
||||
if not os.path.exists(staging_dir):
|
||||
os.makedirs(staging_dir)
|
||||
return staging_dir
|
||||
|
|
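To illustrate how the collector above splits TVPaint layer names into render layers and passes, here is a small self-contained sketch using the default `layer_name_regex` (the sample layer names are made up):

```python
import re

layer_name_regex = re.compile(r"(?P<layer>L[0-9]{3}_\w+)_(?P<pass>.+)")

layer_names = [
    "L010_CHARA_beauty",
    "L010_CHARA_shadow",
    "L020_BG_beauty",
    "background_sketch",  # does not match the regex and is skipped
]

layers_by_layer_and_pass = {}
for name in layer_names:
    result = layer_name_regex.search(name)
    if result is None:
        continue
    render_layer = result.group("layer")
    render_pass = result.group("pass")
    layers_by_layer_and_pass.setdefault(render_layer, {}).setdefault(
        render_pass, []
    ).append(name)

print(layers_by_layer_and_pass)
# {'L010_CHARA': {'beauty': ['L010_CHARA_beauty'], 'shadow': ['L010_CHARA_shadow']},
#  'L020_BG': {'beauty': ['L020_BG_beauty']}}
```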
@ -0,0 +1,142 @@
|
|||
"""
|
||||
Requires:
|
||||
CollectPublishedFiles
|
||||
CollectModules
|
||||
|
||||
Provides:
|
||||
workfilePath - Path to tvpaint workfile
|
||||
sceneData - Scene data loaded from the workfile
|
||||
groupsData - Groups data loaded from the workfile
|
||||
layersData - Layers data loaded from the workfile
|
||||
layersExposureFrames - Exposure frames by layer id
|
||||
layersPrePostBehavior - Pre/Post behavior by layer id
|
||||
"""
|
||||
import os
|
||||
import uuid
|
||||
import json
|
||||
import shutil
|
||||
import pyblish.api
|
||||
from openpype.lib.plugin_tools import parse_json
|
||||
from openpype.hosts.tvpaint.worker import (
|
||||
SenderTVPaintCommands,
|
||||
CollectSceneData
|
||||
)
|
||||
|
||||
|
||||
class CollectTVPaintWorkfileData(pyblish.api.ContextPlugin):
|
||||
label = "Collect TVPaint Workfile data"
|
||||
order = pyblish.api.CollectorOrder - 0.4
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
def process(self, context):
|
||||
# Get JobQueue module
|
||||
modules = context.data["openPypeModules"]
|
||||
job_queue_module = modules["job_queue"]
|
||||
jobs_root = job_queue_module.get_jobs_root()
|
||||
if not jobs_root:
|
||||
raise ValueError("Job Queue root is not set.")
|
||||
|
||||
context.data["jobsRoot"] = jobs_root
|
||||
|
||||
context_staging_dir = self._create_context_staging_dir(jobs_root)
|
||||
workfile_path = self._extract_workfile_path(
|
||||
context, context_staging_dir
|
||||
)
|
||||
context.data["contextStagingDir"] = context_staging_dir
|
||||
context.data["workfilePath"] = workfile_path
|
||||
|
||||
# Prepare tvpaint command
|
||||
collect_scene_data_command = CollectSceneData()
|
||||
# Create TVPaint sender commands
|
||||
commands = SenderTVPaintCommands(workfile_path, job_queue_module)
|
||||
commands.add_command(collect_scene_data_command)
|
||||
|
||||
# Send job and wait for answer
|
||||
commands.send_job_and_wait()
|
||||
|
||||
collected_data = collect_scene_data_command.result()
|
||||
layers_data = collected_data["layers_data"]
|
||||
groups_data = collected_data["groups_data"]
|
||||
scene_data = collected_data["scene_data"]
|
||||
exposure_frames_by_layer_id = (
|
||||
collected_data["exposure_frames_by_layer_id"]
|
||||
)
|
||||
pre_post_beh_by_layer_id = (
|
||||
collected_data["pre_post_beh_by_layer_id"]
|
||||
)
|
||||
|
||||
# Store results
|
||||
# Scene data is stored the same way as in the TVPaint collector
|
||||
scene_data = {
|
||||
"sceneWidth": scene_data["width"],
|
||||
"sceneHeight": scene_data["height"],
|
||||
"scenePixelAspect": scene_data["pixel_aspect"],
|
||||
"sceneFps": scene_data["fps"],
|
||||
"sceneFieldOrder": scene_data["field_order"],
|
||||
"sceneMarkIn": scene_data["mark_in"],
|
||||
# scene_data["mark_in_state"],
|
||||
"sceneMarkInState": scene_data["mark_in_set"],
|
||||
"sceneMarkOut": scene_data["mark_out"],
|
||||
# scene_data["mark_out_state"],
|
||||
"sceneMarkOutState": scene_data["mark_out_set"],
|
||||
"sceneStartFrame": scene_data["start_frame"],
|
||||
"sceneBgColor": scene_data["bg_color"]
|
||||
}
|
||||
context.data["sceneData"] = scene_data
|
||||
# Store only raw data
|
||||
context.data["groupsData"] = groups_data
|
||||
context.data["layersData"] = layers_data
|
||||
context.data["layersExposureFrames"] = exposure_frames_by_layer_id
|
||||
context.data["layersPrePostBehavior"] = pre_post_beh_by_layer_id
|
||||
|
||||
self.log.debug(
|
||||
(
|
||||
"Collected data"
|
||||
"\nScene data: {}"
|
||||
"\nLayers data: {}"
|
||||
"\nExposure frames: {}"
|
||||
"\nPre/Post behavior: {}"
|
||||
).format(
|
||||
json.dumps(scene_data, indent=4),
|
||||
json.dumps(layers_data, indent=4),
|
||||
json.dumps(exposure_frames_by_layer_id, indent=4),
|
||||
json.dumps(pre_post_beh_by_layer_id, indent=4)
|
||||
)
|
||||
)
|
||||
|
||||
def _create_context_staging_dir(self, jobs_root):
|
||||
if not os.path.exists(jobs_root):
|
||||
os.makedirs(jobs_root)
|
||||
|
||||
random_folder_name = str(uuid.uuid4())
|
||||
full_path = os.path.join(jobs_root, random_folder_name)
|
||||
if not os.path.exists(full_path):
|
||||
os.makedirs(full_path)
|
||||
return full_path
|
||||
|
||||
def _extract_workfile_path(self, context, context_staging_dir):
|
||||
"""Find first TVPaint file in tasks and use it."""
|
||||
batch_dir = context.data["batchDir"]
|
||||
batch_data = context.data["batchData"]
|
||||
src_workfile_path = None
|
||||
for task_id in batch_data["tasks"]:
|
||||
if src_workfile_path is not None:
|
||||
break
|
||||
task_dir = os.path.join(batch_dir, task_id)
|
||||
task_manifest_path = os.path.join(task_dir, "manifest.json")
|
||||
task_data = parse_json(task_manifest_path)
|
||||
task_files = task_data["files"]
|
||||
for filename in task_files:
|
||||
_, ext = os.path.splitext(filename)
|
||||
if ext.lower() == ".tvpp":
|
||||
src_workfile_path = os.path.join(task_dir, filename)
|
||||
break
|
||||
|
||||
# Copy workfile to job queue work root
|
||||
new_workfile_path = os.path.join(
|
||||
context_staging_dir, os.path.basename(src_workfile_path)
|
||||
)
|
||||
shutil.copy(src_workfile_path, new_workfile_path)
|
||||
|
||||
return new_workfile_path
|
||||
|
|
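The collector above uses the first `.tvpp` file found in the batch tasks as the workfile. A reduced sketch of that selection logic over a task's file list (the manifest contents are made up):

```python
import os

# Hypothetical "files" list from a task manifest.json, as parsed by parse_json
task_files = ["notes.txt", "Sh010_Compositing.TVPP", "reference.mov"]

src_workfile = None
for filename in task_files:
    _, ext = os.path.splitext(filename)
    if ext.lower() == ".tvpp":
        src_workfile = filename
        break

print(src_workfile)  # Sh010_Compositing.TVPP
```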
@ -0,0 +1,535 @@
|
|||
import os
|
||||
import copy
|
||||
|
||||
from openpype.hosts.tvpaint.worker import (
|
||||
SenderTVPaintCommands,
|
||||
ExecuteSimpleGeorgeScript,
|
||||
ExecuteGeorgeScript
|
||||
)
|
||||
|
||||
import pyblish.api
|
||||
from openpype.hosts.tvpaint.lib import (
|
||||
calculate_layers_extraction_data,
|
||||
get_frame_filename_template,
|
||||
fill_reference_frames,
|
||||
composite_rendered_layers,
|
||||
rename_filepaths_by_frame_start
|
||||
)
|
||||
from PIL import Image
|
||||
|
||||
|
||||
class ExtractTVPaintSequences(pyblish.api.Extractor):
|
||||
label = "Extract TVPaint Sequences"
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
# Context plugin does not have families filtering
|
||||
families_filter = ["review", "renderPass", "renderLayer"]
|
||||
|
||||
job_queue_root_key = "jobs_root"
|
||||
|
||||
# Modifiable with settings
|
||||
review_bg = [255, 255, 255, 255]
|
||||
|
||||
def process(self, context):
|
||||
# Get workfile path
|
||||
workfile_path = context.data["workfilePath"]
|
||||
jobs_root = context.data["jobsRoot"]
|
||||
jobs_root_slashed = jobs_root.replace("\\", "/")
|
||||
|
||||
# Prepare scene data
|
||||
scene_data = context.data["sceneData"]
|
||||
scene_mark_in = scene_data["sceneMarkIn"]
|
||||
scene_mark_out = scene_data["sceneMarkOut"]
|
||||
scene_start_frame = scene_data["sceneStartFrame"]
|
||||
scene_bg_color = scene_data["sceneBgColor"]
|
||||
|
||||
# Prepare layers behavior
|
||||
behavior_by_layer_id = context.data["layersPrePostBehavior"]
|
||||
exposure_frames_by_layer_id = context.data["layersExposureFrames"]
|
||||
|
||||
# Handles are not stored per instance but on Context
|
||||
handle_start = context.data["handleStart"]
|
||||
handle_end = context.data["handleEnd"]
|
||||
|
||||
# Get JobQueue module
|
||||
modules = context.data["openPypeModules"]
|
||||
job_queue_module = modules["job_queue"]
|
||||
|
||||
tvpaint_commands = SenderTVPaintCommands(
|
||||
workfile_path, job_queue_module
|
||||
)
|
||||
|
||||
# Change scene Start Frame to 0 to prevent frame index issues
|
||||
# - issue is that TVPaint versions deal with frame indexes in a
|
||||
# different way when Start Frame is not `0`
|
||||
# NOTE It will be set back after rendering
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteSimpleGeorgeScript("tv_startframe 0")
|
||||
)
|
||||
|
||||
root_key_replacement = "{" + self.job_queue_root_key + "}"
|
||||
after_render_instances = []
|
||||
for instance in context:
|
||||
instance_families = set(instance.data.get("families", []))
|
||||
instance_families.add(instance.data["family"])
|
||||
valid = False
|
||||
for family in instance_families:
|
||||
if family in self.families_filter:
|
||||
valid = True
|
||||
break
|
||||
|
||||
if not valid:
|
||||
continue
|
||||
|
||||
self.log.info("* Preparing commands for instance \"{}\"".format(
|
||||
instance.data["label"]
|
||||
))
|
||||
# Get all layers and filter out not visible
|
||||
layers = instance.data["layers"]
|
||||
filtered_layers = [layer for layer in layers if layer["visible"]]
|
||||
if not filtered_layers:
|
||||
self.log.info(
|
||||
"None of the layers from the instance"
|
||||
" are visible. Extraction skipped."
|
||||
)
|
||||
continue
|
||||
|
||||
joined_layer_names = ", ".join([
|
||||
"\"{}\"".format(str(layer["name"]))
|
||||
for layer in filtered_layers
|
||||
])
|
||||
self.log.debug(
|
||||
"Instance has {} layers with names: {}".format(
|
||||
len(filtered_layers), joined_layer_names
|
||||
)
|
||||
)
|
||||
|
||||
# Staging dir must be created during collection
|
||||
staging_dir = instance.data["stagingDir"].replace("\\", "/")
|
||||
|
||||
job_root_template = staging_dir.replace(
|
||||
jobs_root_slashed, root_key_replacement
|
||||
)
|
||||
|
||||
# Frame start/end may be stored as float
|
||||
frame_start = int(instance.data["frameStart"])
|
||||
frame_end = int(instance.data["frameEnd"])
|
||||
|
||||
# Prepare output frames
|
||||
output_frame_start = frame_start - handle_start
|
||||
output_frame_end = frame_end + handle_end
|
||||
|
||||
# Change output frame start to 0 if handles make it a negative
|
||||
# number
|
||||
if output_frame_start < 0:
|
||||
self.log.warning((
|
||||
"Frame start with handles has negative value."
|
||||
" Changed to \"0\". Frames start: {}, Handle Start: {}"
|
||||
).format(frame_start, handle_start))
|
||||
output_frame_start = 0
|
||||
|
||||
# Create copy of scene Mark In/Out
|
||||
mark_in, mark_out = scene_mark_in, scene_mark_out
|
||||
|
||||
# Fix possible changes of output frame
|
||||
mark_out, output_frame_end = self._fix_range_changes(
|
||||
mark_in, mark_out, output_frame_start, output_frame_end
|
||||
)
|
||||
filename_template = get_frame_filename_template(
|
||||
max(scene_mark_out, output_frame_end)
|
||||
)
|
||||
|
||||
# -----------------------------------------------------------------
|
||||
self.log.debug(
|
||||
"Files will be rendered to folder: {}".format(staging_dir)
|
||||
)
|
||||
|
||||
output_filepaths_by_frame_idx = {}
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame_idx)
|
||||
filepath = os.path.join(staging_dir, filename)
|
||||
output_filepaths_by_frame_idx[frame_idx] = filepath
|
||||
|
||||
# Prepare data for post render processing
|
||||
post_render_data = {
|
||||
"output_dir": staging_dir,
|
||||
"layers": filtered_layers,
|
||||
"output_filepaths_by_frame_idx": output_filepaths_by_frame_idx,
|
||||
"instance": instance,
|
||||
"is_layers_render": False,
|
||||
"output_frame_start": output_frame_start,
|
||||
"output_frame_end": output_frame_end
|
||||
}
|
||||
# Store them to list
|
||||
after_render_instances.append(post_render_data)
|
||||
|
||||
# Review rendering
|
||||
if instance.data["family"] == "review":
|
||||
self.add_render_review_command(
|
||||
tvpaint_commands, mark_in, mark_out, scene_bg_color,
|
||||
job_root_template, filename_template
|
||||
)
|
||||
continue
|
||||
|
||||
# Layers rendering
|
||||
extraction_data_by_layer_id = calculate_layers_extraction_data(
|
||||
filtered_layers,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
filepaths_by_layer_id = self.add_render_command(
|
||||
tvpaint_commands,
|
||||
job_root_template,
|
||||
staging_dir,
|
||||
filtered_layers,
|
||||
extraction_data_by_layer_id
|
||||
)
|
||||
# Add more data to post render processing
|
||||
post_render_data.update({
|
||||
"is_layers_render": True,
|
||||
"extraction_data_by_layer_id": extraction_data_by_layer_id,
|
||||
"filepaths_by_layer_id": filepaths_by_layer_id
|
||||
})
|
||||
|
||||
# Change scene frame Start back to previous value
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteSimpleGeorgeScript(
|
||||
"tv_startframe {}".format(scene_start_frame)
|
||||
)
|
||||
)
|
||||
self.log.info("Sending the job and waiting for response...")
|
||||
tvpaint_commands.send_job_and_wait()
|
||||
self.log.info("Render job finished")
|
||||
|
||||
for post_render_data in after_render_instances:
|
||||
self._post_render_processing(post_render_data, mark_in, mark_out)
|
||||
|
||||
def _fix_range_changes(
|
||||
self, mark_in, mark_out, output_frame_start, output_frame_end
|
||||
):
|
||||
# Check Marks range and output range
|
||||
output_range = output_frame_end - output_frame_start
|
||||
marks_range = mark_out - mark_in
|
||||
|
||||
# Lower Mark Out if mark range is bigger than output
|
||||
# - do not render unused frames
|
||||
if output_range < marks_range:
|
||||
new_mark_out = mark_out - (marks_range - output_range)
|
||||
self.log.warning((
|
||||
"Lowering render range to {} frames. Changed Mark Out {} -> {}"
|
||||
).format(marks_range + 1, mark_out, new_mark_out))
|
||||
# Assign new mark out to variable
|
||||
mark_out = new_mark_out
|
||||
|
||||
# Lower output frame end so representation has right `frameEnd` value
|
||||
elif output_range > marks_range:
|
||||
new_output_frame_end = (
|
||||
output_frame_end - (output_range - marks_range)
|
||||
)
|
||||
self.log.warning((
|
||||
"Lowering representation range to {} frames."
|
||||
" Changed frame end {} -> {}"
|
||||
).format(output_range + 1, mark_out, new_output_frame_end))
|
||||
output_frame_end = new_output_frame_end
|
||||
return mark_out, output_frame_end
|
||||
|
||||
def _post_render_processing(self, post_render_data, mark_in, mark_out):
|
||||
# Unpack values
|
||||
instance = post_render_data["instance"]
|
||||
output_filepaths_by_frame_idx = (
|
||||
post_render_data["output_filepaths_by_frame_idx"]
|
||||
)
|
||||
is_layers_render = post_render_data["is_layers_render"]
|
||||
output_dir = post_render_data["output_dir"]
|
||||
layers = post_render_data["layers"]
|
||||
output_frame_start = post_render_data["output_frame_start"]
|
||||
output_frame_end = post_render_data["output_frame_end"]
|
||||
|
||||
# Trigger post processing of layers rendering
|
||||
# - only a few frames were rendered; this will complete the sequence
|
||||
# - multiple layers can be in a single instance; they must be composited
|
||||
# over each other
|
||||
if is_layers_render:
|
||||
self._finish_layer_render(
|
||||
layers,
|
||||
post_render_data["extraction_data_by_layer_id"],
|
||||
post_render_data["filepaths_by_layer_id"],
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
)
|
||||
|
||||
# Create thumbnail
|
||||
thumbnail_filepath = os.path.join(output_dir, "thumbnail.jpg")
|
||||
thumbnail_src_path = output_filepaths_by_frame_idx[mark_in]
|
||||
self._create_thumbnail(thumbnail_src_path, thumbnail_filepath)
|
||||
|
||||
# Rename filepaths to final frames
|
||||
repre_files = self._rename_output_files(
|
||||
output_filepaths_by_frame_idx,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_frame_start
|
||||
)
|
||||
|
||||
# Fill tags and new families
|
||||
family_lowered = instance.data["family"].lower()
|
||||
tags = []
|
||||
if family_lowered in ("review", "renderlayer"):
|
||||
tags.append("review")
|
||||
|
||||
# Sequence of one frame
|
||||
single_file = len(repre_files) == 1
|
||||
if single_file:
|
||||
repre_files = repre_files[0]
|
||||
|
||||
# Extension is hardcoded
|
||||
# - changing the extension would require code changes
|
||||
new_repre = {
|
||||
"name": "png",
|
||||
"ext": "png",
|
||||
"files": repre_files,
|
||||
"stagingDir": output_dir,
|
||||
"tags": tags
|
||||
}
|
||||
|
||||
if not single_file:
|
||||
new_repre["frameStart"] = output_frame_start
|
||||
new_repre["frameEnd"] = output_frame_end
|
||||
|
||||
self.log.debug("Creating new representation: {}".format(new_repre))
|
||||
|
||||
instance.data["representations"].append(new_repre)
|
||||
|
||||
if family_lowered in ("renderpass", "renderlayer"):
|
||||
# Change family to render
|
||||
instance.data["family"] = "render"
|
||||
|
||||
thumbnail_ext = os.path.splitext(thumbnail_filepath)[1]
|
||||
# Create thumbnail representation
|
||||
thumbnail_repre = {
|
||||
"name": "thumbnail",
|
||||
"ext": thumbnail_ext.replace(".", ""),
|
||||
"outputName": "thumb",
|
||||
"files": os.path.basename(thumbnail_filepath),
|
||||
"stagingDir": output_dir,
|
||||
"tags": ["thumbnail"]
|
||||
}
|
||||
instance.data["representations"].append(thumbnail_repre)
|
||||
|
||||
def _rename_output_files(
|
||||
self, filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
):
|
||||
new_filepaths_by_frame = rename_filepaths_by_frame_start(
|
||||
filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
)
|
||||
|
||||
repre_filenames = []
|
||||
for filepath in new_filepaths_by_frame.values():
|
||||
repre_filenames.append(os.path.basename(filepath))
|
||||
|
||||
if mark_in < output_frame_start:
|
||||
repre_filenames = list(reversed(repre_filenames))
|
||||
|
||||
return repre_filenames
|
||||
|
||||
def add_render_review_command(
|
||||
self,
|
||||
tvpaint_commands,
|
||||
mark_in,
|
||||
mark_out,
|
||||
scene_bg_color,
|
||||
job_root_template,
|
||||
filename_template
|
||||
):
|
||||
""" Export images from TVPaint using `tv_savesequence` command.
|
||||
|
||||
Args:
|
||||
job_root_template (str): Output directory template where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
scene_bg_color (list): Bg color set in scene. Result of george
|
||||
script command `tv_background`.
|
||||
"""
|
||||
self.log.debug("Preparing data for rendering.")
|
||||
bg_color = self._get_review_bg_color()
|
||||
first_frame_filepath = "/".join([
|
||||
job_root_template,
|
||||
filename_template.format(frame=mark_in)
|
||||
])
|
||||
|
||||
george_script_lines = [
|
||||
# Change bg color to color from settings
|
||||
"tv_background \"color\" {} {} {}".format(*bg_color),
|
||||
"tv_SaveMode \"PNG\"",
|
||||
"export_path = \"{}\"".format(
|
||||
first_frame_filepath.replace("\\", "/")
|
||||
),
|
||||
"tv_savesequence '\"'export_path'\"' {} {}".format(
|
||||
mark_in, mark_out
|
||||
)
|
||||
]
|
||||
if scene_bg_color:
|
||||
# Change bg color back to previous scene bg color
|
||||
_scene_bg_color = copy.deepcopy(scene_bg_color)
|
||||
bg_type = _scene_bg_color.pop(0)
|
||||
orig_color_command = [
|
||||
"tv_background",
|
||||
"\"{}\"".format(bg_type)
|
||||
]
|
||||
orig_color_command.extend(_scene_bg_color)
|
||||
|
||||
george_script_lines.append(" ".join(orig_color_command))
|
||||
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteGeorgeScript(
|
||||
george_script_lines,
|
||||
root_dir_key=self.job_queue_root_key
|
||||
)
|
||||
)
|
||||
|
||||
def add_render_command(
|
||||
self,
|
||||
tvpaint_commands,
|
||||
job_root_template,
|
||||
staging_dir,
|
||||
layers,
|
||||
extraction_data_by_layer_id
|
||||
):
|
||||
""" Export images from TVPaint.
|
||||
|
||||
Args:
|
||||
staging_dir (str): Directory where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
layers (list): List of layers to be exported.
|
||||
|
||||
Returns:
|
||||
dict: Mapping of layer id to rendered filepaths by frame
|
||||
index.
|
||||
"""
|
||||
# Map layers by position
|
||||
layers_by_id = {
|
||||
layer["layer_id"]: layer
|
||||
for layer in layers
|
||||
}
|
||||
|
||||
# Render layers
|
||||
filepaths_by_layer_id = {}
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
layer = layers_by_id[layer_id]
|
||||
frame_references = render_data["frame_references"]
|
||||
filenames_by_frame_index = render_data["filenames_by_frame_index"]
|
||||
|
||||
filepaths_by_frame = {}
|
||||
command_filepath_by_frame = {}
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# None reference is skipped because does not have source
|
||||
if ref_idx is None:
|
||||
filepaths_by_frame[frame_idx] = None
|
||||
continue
|
||||
filename = filenames_by_frame_index[frame_idx]
|
||||
|
||||
filepaths_by_frame[frame_idx] = os.path.join(
|
||||
staging_dir, filename
|
||||
)
|
||||
if frame_idx == ref_idx:
|
||||
command_filepath_by_frame[frame_idx] = "/".join(
|
||||
[job_root_template, filename]
|
||||
)
|
||||
|
||||
self._add_render_layer_command(
|
||||
tvpaint_commands, layer, command_filepath_by_frame
|
||||
)
|
||||
filepaths_by_layer_id[layer_id] = filepaths_by_frame
|
||||
|
||||
return filepaths_by_layer_id
|
||||
|
||||
def _add_render_layer_command(
|
||||
self, tvpaint_commands, layer, filepaths_by_frame
|
||||
):
|
||||
george_script_lines = [
|
||||
# Set current layer by position
|
||||
"tv_layergetid {}".format(layer["position"]),
|
||||
"layer_id = result",
|
||||
"tv_layerset layer_id",
|
||||
"tv_SaveMode \"PNG\""
|
||||
]
|
||||
|
||||
for frame_idx, filepath in filepaths_by_frame.items():
|
||||
if filepath is None:
|
||||
continue
|
||||
|
||||
# Go to frame
|
||||
george_script_lines.append("tv_layerImage {}".format(frame_idx))
|
||||
# Store image to output
|
||||
george_script_lines.append(
|
||||
"tv_saveimage \"{}\"".format(filepath.replace("\\", "/"))
|
||||
)
|
||||
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteGeorgeScript(
|
||||
george_script_lines,
|
||||
root_dir_key=self.job_queue_root_key
|
||||
)
|
||||
)
|
||||
|
||||
def _finish_layer_render(
|
||||
self,
|
||||
layers,
|
||||
extraction_data_by_layer_id,
|
||||
filepaths_by_layer_id,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
):
|
||||
# Fill frames between `frame_start_index` and `frame_end_index`
|
||||
self.log.debug("Filling frames not rendered frames.")
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
frame_references = render_data["frame_references"]
|
||||
filepaths_by_frame = filepaths_by_layer_id[layer_id]
|
||||
fill_reference_frames(frame_references, filepaths_by_frame)
|
||||
|
||||
# Prepare final filepaths where compositing should store result
|
||||
self.log.info("Started compositing of layer frames.")
|
||||
composite_rendered_layers(
|
||||
layers, filepaths_by_layer_id,
|
||||
mark_in, mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
)
|
||||
|
||||
def _create_thumbnail(self, thumbnail_src_path, thumbnail_filepath):
|
||||
if not os.path.exists(thumbnail_src_path):
|
||||
return
|
||||
|
||||
source_img = Image.open(thumbnail_src_path)
|
||||
|
||||
# Composite background only on rgba images
|
||||
# - just making sure
|
||||
if source_img.mode.lower() == "rgba":
|
||||
bg_color = self._get_review_bg_color()
|
||||
self.log.debug("Adding thumbnail background color {}.".format(
|
||||
" ".join([str(val) for val in bg_color])
|
||||
))
|
||||
bg_image = Image.new("RGBA", source_img.size, bg_color)
|
||||
thumbnail_obj = Image.alpha_composite(bg_image, source_img)
|
||||
thumbnail_obj.convert("RGB").save(thumbnail_filepath)
|
||||
|
||||
else:
|
||||
self.log.info((
|
||||
"Source for thumbnail has mode \"{}\" (Expected: RGBA)."
|
||||
" Can't use thubmanail background color."
|
||||
).format(source_img.mode))
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
def _get_review_bg_color(self):
|
||||
red = green = blue = 255
|
||||
if self.review_bg:
|
||||
if len(self.review_bg) == 4:
|
||||
red, green, blue, _ = self.review_bg
|
||||
elif len(self.review_bg) == 3:
|
||||
red, green, blue = self.review_bg
|
||||
return (red, green, blue)
|
||||
|
|
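The thumbnail step above composites RGBA renders over a solid background before saving. A minimal sketch of the same PIL calls, assuming the placeholder file paths exist:

```python
from PIL import Image

bg_color = (255, 255, 255, 255)  # review background from settings
source_img = Image.open("render.0001.png")

if source_img.mode.lower() == "rgba":
    # alpha_composite needs two RGBA images of the same size
    bg_image = Image.new("RGBA", source_img.size, bg_color)
    thumbnail = Image.alpha_composite(bg_image, source_img)
    # JPEG cannot store alpha, so convert to RGB before saving
    thumbnail.convert("RGB").save("thumbnail.jpg")
else:
    source_img.save("thumbnail.jpg")
```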
@ -0,0 +1,31 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Cleanup leftover files from publish."""
|
||||
import os
|
||||
import shutil
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class CleanUpJobRoot(pyblish.api.ContextPlugin):
|
||||
"""Cleans up the job root directory after a successful publish.
|
||||
|
||||
Remove all files in job root as all of them should be published.
|
||||
"""
|
||||
|
||||
order = pyblish.api.IntegratorOrder + 1
|
||||
label = "Clean Up Job Root"
|
||||
optional = True
|
||||
active = True
|
||||
|
||||
def process(self, context):
|
||||
context_staging_dir = context.data.get("contextStagingDir")
|
||||
if not context_staging_dir:
|
||||
self.log.info("Key 'contextStagingDir' is empty.")
|
||||
|
||||
elif not os.path.exists(context_staging_dir):
|
||||
self.log.info((
|
||||
"Job root directory for this publish does not"
|
||||
" exists anymore \"{}\"."
|
||||
).format(context_staging_dir))
|
||||
else:
|
||||
self.log.info("Deleting job root with all files.")
|
||||
shutil.rmtree(context_staging_dir)
|
||||
|
|
@ -0,0 +1,35 @@
|
|||
import pyblish.api
|
||||
|
||||
|
||||
class ValidateWorkfileData(pyblish.api.ContextPlugin):
|
||||
"""Validate mark in and out are enabled and it's duration.
|
||||
|
||||
Mark In/Out does not have to match frameStart and frameEnd but duration is
|
||||
important.
|
||||
"""
|
||||
|
||||
label = "Validate Workfile Data"
|
||||
order = pyblish.api.ValidatorOrder
|
||||
|
||||
def process(self, context):
|
||||
# Data collected in `CollectAvalonEntities`
|
||||
frame_start = context.data["frameStart"]
|
||||
frame_end = context.data["frameEnd"]
|
||||
handle_start = context.data["handleStart"]
|
||||
handle_end = context.data["handleEnd"]
|
||||
|
||||
scene_data = context.data["sceneData"]
|
||||
scene_mark_in = scene_data["sceneMarkIn"]
|
||||
scene_mark_out = scene_data["sceneMarkOut"]
|
||||
|
||||
expected_range = (
|
||||
(frame_end - frame_start + 1)
|
||||
+ handle_start
|
||||
+ handle_end
|
||||
)
|
||||
marks_range = scene_mark_out - scene_mark_in + 1
|
||||
if expected_range != marks_range:
|
||||
raise AssertionError((
|
||||
"Wrong Mark In/Out range."
|
||||
" Expected range is {} frames got {} frames"
|
||||
).format(expected_range, marks_range))
|
||||
|
|
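The validator above compares only durations, not absolute frame numbers. A worked example of the arithmetic it performs (the numbers are illustrative):

```python
# Asset data from the database
frame_start, frame_end = 1001, 1010
handle_start, handle_end = 5, 5

# Mark In/Out collected from the TVPaint scene (0-based clip range)
scene_mark_in, scene_mark_out = 0, 19

expected_range = (frame_end - frame_start + 1) + handle_start + handle_end
marks_range = scene_mark_out - scene_mark_in + 1

print(expected_range, marks_range)  # 20 20 -> validation passes
```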
@ -198,6 +198,15 @@ class WebpublisherBatchPublishEndpoint(_RestApiEndpoint):
|
|||
# - filter defines command and can extend arguments dictionary
|
||||
# This is used only if 'studio_processing' is enabled on batch
|
||||
studio_processing_filters = [
|
||||
# TVPaint filter
|
||||
{
|
||||
"extensions": [".tvpp"],
|
||||
"command": "remotepublish",
|
||||
"arguments": {
|
||||
"targets": ["tvpaint_worker"]
|
||||
},
|
||||
"add_to_queue": False
|
||||
},
|
||||
# Photoshop filter
|
||||
{
|
||||
"extensions": [".psd", ".psb"],
|
||||
|
|
|
|||
|
|
@ -989,6 +989,14 @@ class Templates:
|
|||
invalid_required = []
|
||||
missing_required = []
|
||||
replace_keys = []
|
||||
|
||||
task_data = data.get("task")
|
||||
if (
|
||||
isinstance(task_data, StringType)
|
||||
and "{task[name]}" in orig_template
|
||||
):
|
||||
data["task"] = {"name": task_data}
|
||||
|
||||
for group in self.key_pattern.findall(template):
|
||||
orig_key = group[1:-1]
|
||||
key = str(orig_key)
|
||||
|
|
@ -1074,6 +1082,10 @@ class Templates:
|
|||
output = collections.defaultdict(dict)
|
||||
for key, orig_value in templates.items():
|
||||
if isinstance(orig_value, StringType):
|
||||
# Replace {task} by '{task[name]}' for backward compatibility
|
||||
if '{task}' in orig_value:
|
||||
orig_value = orig_value.replace('{task}', '{task[name]}')
|
||||
|
||||
output[key] = self._format(orig_value, data)
|
||||
continue
|
||||
|
||||
|
|
|
|||
|
|
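The backward-compatibility replacement above turns the legacy flat `{task}` key into a dictionary lookup so old templates keep working with the new fill data. A small sketch of how the remapped template formats (the values are made up):

```python
template = "{asset}_{task}_v{version:0>3}"

# Legacy templates used '{task}' as a plain string key; new fill data passes
# a dictionary, so remap the key before formatting
if "{task}" in template:
    template = template.replace("{task}", "{task[name]}")

filled = template.format(
    asset="sh010",
    task={"name": "compositing", "type": "Compositing", "short": "comp"},
    version=1,
)
print(filled)  # sh010_compositing_v001
```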
@ -1280,23 +1280,12 @@ def prepare_context_environments(data):
|
|||
|
||||
anatomy = data["anatomy"]
|
||||
|
||||
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
|
||||
task_info = asset_tasks.get(task_name) or {}
|
||||
task_type = task_info.get("type")
|
||||
task_type = workdir_data["task"]["type"]
|
||||
# Temp solution how to pass task type to `_prepare_last_workfile`
|
||||
data["task_type"] = task_type
|
||||
|
||||
workfile_template_key = get_workfile_template_key(
|
||||
task_type,
|
||||
app.host_name,
|
||||
project_name=project_name,
|
||||
project_settings=project_settings
|
||||
)
|
||||
|
||||
try:
|
||||
workdir = get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy, template_key=workfile_template_key
|
||||
)
|
||||
workdir = get_workdir_with_workdir_data(workdir_data, anatomy)
|
||||
|
||||
except Exception as exc:
|
||||
raise ApplicationLaunchFailed(
|
||||
|
|
@ -1329,10 +1318,10 @@ def prepare_context_environments(data):
|
|||
)
|
||||
data["env"].update(context_env)
|
||||
|
||||
_prepare_last_workfile(data, workdir, workfile_template_key)
|
||||
_prepare_last_workfile(data, workdir)
|
||||
|
||||
|
||||
def _prepare_last_workfile(data, workdir, workfile_template_key):
|
||||
def _prepare_last_workfile(data, workdir):
|
||||
"""last workfile workflow preparation.
|
||||
|
||||
Function checks if it should care about the last workfile workflow and tries
|
||||
|
|
@ -1395,6 +1384,10 @@ def _prepare_last_workfile(data, workdir, workfile_template_key):
|
|||
anatomy = data["anatomy"]
|
||||
# Find last workfile
|
||||
file_template = anatomy.templates["work"]["file"]
|
||||
# Replace {task} by '{task[name]}' for backward compatibility
|
||||
if '{task}' in file_template:
|
||||
file_template = file_template.replace('{task}', '{task[name]}')
|
||||
|
||||
workdir_data.update({
|
||||
"version": 1,
|
||||
"user": get_openpype_username(),
|
||||
|
|
|
|||
|
|
@ -7,6 +7,7 @@ import platform
|
|||
import logging
|
||||
import collections
|
||||
import functools
|
||||
import getpass
|
||||
|
||||
from openpype.settings import get_project_settings
|
||||
from .anatomy import Anatomy
|
||||
|
|
@ -257,19 +258,40 @@ def get_hierarchy(asset_name=None):
|
|||
return "/".join(hierarchy_items)
|
||||
|
||||
|
||||
@with_avalon
|
||||
def get_linked_assets(asset_entity):
|
||||
"""Return linked assets for `asset_entity` from DB
|
||||
def get_linked_asset_ids(asset_doc):
|
||||
"""Return linked asset ids for `asset_doc` from DB
|
||||
|
||||
Args:
|
||||
asset_entity (dict): asset document from DB
|
||||
Args:
|
||||
asset_doc (dict): Asset document from DB.
|
||||
|
||||
Returns:
|
||||
(list) of MongoDB documents
|
||||
Returns:
|
||||
(list): MongoDB ids of input links.
|
||||
"""
|
||||
inputs = asset_entity["data"].get("inputs", [])
|
||||
inputs = [avalon.io.find_one({"_id": x}) for x in inputs]
|
||||
return inputs
|
||||
output = []
|
||||
if not asset_doc:
|
||||
return output
|
||||
|
||||
input_links = asset_doc["data"].get("inputsLinks") or []
|
||||
if input_links:
|
||||
output = [item["_id"] for item in input_links]
|
||||
return output
|
||||
|
||||
|
||||
@with_avalon
|
||||
def get_linked_assets(asset_doc):
|
||||
"""Return linked assets for `asset_doc` from DB
|
||||
|
||||
Args:
|
||||
asset_doc (dict): Asset document from DB
|
||||
|
||||
Returns:
|
||||
(list) Asset documents of input links for passed asset doc.
|
||||
"""
|
||||
link_ids = get_linked_asset_ids(asset_doc)
|
||||
if not link_ids:
|
||||
return []
|
||||
|
||||
return list(avalon.io.find({"_id": {"$in": link_ids}}))
|
||||
|
||||
|
||||
@with_avalon
|
||||
|
|
@ -464,6 +486,7 @@ def get_workfile_template_key(
|
|||
return default
|
||||
|
||||
|
||||
# TODO rename function as is not just "work" specific
|
||||
def get_workdir_data(project_doc, asset_doc, task_name, host_name):
|
||||
"""Prepare data for workdir template filling from entered information.
|
||||
|
||||
|
|
@ -479,22 +502,31 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
|
|||
"""
|
||||
hierarchy = "/".join(asset_doc["data"]["parents"])
|
||||
|
||||
task_type = asset_doc['data']['tasks'].get(task_name, {}).get('type')
|
||||
|
||||
project_task_types = project_doc["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
data = {
|
||||
"project": {
|
||||
"name": project_doc["name"],
|
||||
"code": project_doc["data"].get("code")
|
||||
},
|
||||
"task": task_name,
|
||||
"task": {
|
||||
"name": task_name,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"asset": asset_doc["name"],
|
||||
"app": host_name,
|
||||
"hierarchy": hierarchy
|
||||
"user": getpass.getuser(),
|
||||
"hierarchy": hierarchy,
|
||||
}
|
||||
return data
|
||||
|
||||
|
||||
def get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy=None, project_name=None,
|
||||
template_key=None, dbcon=None
|
||||
workdir_data, anatomy=None, project_name=None, template_key=None
|
||||
):
|
||||
"""Fill workdir path from entered data and project's anatomy.
|
||||
|
||||
|
|
@ -529,12 +561,10 @@ def get_workdir_with_workdir_data(
|
|||
anatomy = Anatomy(project_name)
|
||||
|
||||
if not template_key:
|
||||
template_key = get_workfile_template_key_from_context(
|
||||
workdir_data["asset"],
|
||||
workdir_data["task"],
|
||||
template_key = get_workfile_template_key(
|
||||
workdir_data["task"]["type"],
|
||||
workdir_data["app"],
|
||||
project_name=workdir_data["project"]["name"],
|
||||
dbcon=dbcon
|
||||
project_name=workdir_data["project"]["name"]
|
||||
)
|
||||
|
||||
anatomy_filled = anatomy.format(workdir_data)
|
||||
|
|
@ -648,7 +678,7 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
|
|||
anatomy = Anatomy(project_doc["name"])
|
||||
# Get workdir path (result is anatomy.TemplateResult)
|
||||
template_workdir = get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy, dbcon=dbcon
|
||||
workdir_data, anatomy
|
||||
)
|
||||
template_workdir_path = str(template_workdir).replace("\\", "/")
|
||||
|
||||
|
|
|
|||
|
|
@ -124,7 +124,7 @@ def run_subprocess(*args, **kwargs):
|
|||
if full_output:
|
||||
full_output += "\n"
|
||||
full_output += _stderr
|
||||
logger.warning(_stderr)
|
||||
logger.info(_stderr)
|
||||
|
||||
if proc.returncode != 0:
|
||||
exc_msg = "Executing arguments was not successful: \"{}\"".format(args)
|
||||
|
|
|
|||
|
|
@ -531,12 +531,20 @@ def should_decompress(file_url):
|
|||
and we can decompress (oiiotool supported)
|
||||
"""
|
||||
if oiio_supported():
|
||||
output = run_subprocess([
|
||||
get_oiio_tools_path(),
|
||||
"--info", "-v", file_url])
|
||||
return "compression: \"dwaa\"" in output or \
|
||||
"compression: \"dwab\"" in output
|
||||
|
||||
try:
|
||||
output = run_subprocess([
|
||||
get_oiio_tools_path(),
|
||||
"--info", "-v", file_url])
|
||||
return "compression: \"dwaa\"" in output or \
|
||||
"compression: \"dwab\"" in output
|
||||
except RuntimeError:
|
||||
_name, ext = os.path.splitext(file_url)
|
||||
# TODO: shouldn't the list of allowed extensions be
|
||||
# taken from an OIIO variable of supported formats
|
||||
if ext not in [".mxf"]:
|
||||
# Reraise exception
|
||||
raise
|
||||
return False
|
||||
return False
|
||||
|
||||
|
||||
|
|
|
|||
|
|
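The change above tolerates `oiiotool --info` failures for formats it cannot read (currently only `.mxf`) instead of letting the whole publish fail. A reduced sketch of that control flow; the `run_oiio_info` helper here is hypothetical and stands in for `run_subprocess` calling oiiotool:

```python
import os


def run_oiio_info(file_url):
    # Placeholder for run_subprocess([oiiotool, "--info", "-v", file_url]);
    # raises RuntimeError when the tool exits with a non-zero return code.
    raise RuntimeError("oiiotool cannot read this format")


def should_decompress(file_url):
    try:
        output = run_oiio_info(file_url)
    except RuntimeError:
        _, ext = os.path.splitext(file_url)
        # Only known-unreadable extensions are tolerated; anything else
        # is a real error and is re-raised
        if ext not in [".mxf"]:
            raise
        return False
    return "compression: \"dwaa\"" in output or "compression: \"dwab\"" in output


print(should_decompress("/renders/plate_0001.mxf"))  # False
```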
@ -71,18 +71,24 @@ def ffprobe_streams(path_to_file, logger=None):
|
|||
"Getting information about input \"{}\".".format(path_to_file)
|
||||
)
|
||||
args = [
|
||||
"\"{}\"".format(get_ffmpeg_tool_path("ffprobe")),
|
||||
"-v quiet",
|
||||
"-print_format json",
|
||||
get_ffmpeg_tool_path("ffprobe"),
|
||||
"-hide_banner",
|
||||
"-loglevel", "fatal",
|
||||
"-show_error",
|
||||
"-show_format",
|
||||
"-show_streams",
|
||||
"\"{}\"".format(path_to_file)
|
||||
"-show_programs",
|
||||
"-show_chapters",
|
||||
"-show_private_data",
|
||||
"-print_format", "json",
|
||||
path_to_file
|
||||
]
|
||||
command = " ".join(args)
|
||||
logger.debug("FFprobe command: \"{}\"".format(command))
|
||||
|
||||
logger.debug("FFprobe command: {}".format(
|
||||
subprocess.list2cmdline(args)
|
||||
))
|
||||
popen = subprocess.Popen(
|
||||
command,
|
||||
shell=True,
|
||||
args,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE
|
||||
)
|
||||
|
|
|
|||
|
|
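The ffprobe fix above passes an argument list to `subprocess` instead of a shell string, which avoids quoting problems with spaces in paths. A minimal sketch of the same call; the `ffprobe` executable is assumed to be on PATH here, while the real code resolves it with `get_ffmpeg_tool_path`:

```python
import json
import subprocess


def ffprobe_streams(path_to_file, ffprobe_path="ffprobe"):
    args = [
        ffprobe_path,
        "-hide_banner",
        "-loglevel", "fatal",
        "-show_error", "-show_format", "-show_streams",
        "-print_format", "json",
        path_to_file,
    ]
    # No shell=True: the list is passed to the process as-is, no quoting needed
    proc = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(err.decode("utf-8", errors="replace"))
    return json.loads(out.decode("utf-8"))["streams"]
```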
@ -288,6 +288,22 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
"pluginInfo", {})
|
||||
)
|
||||
|
||||
self.limit_groups = (
|
||||
context.data["project_settings"].get(
|
||||
"deadline", {}).get(
|
||||
"publish", {}).get(
|
||||
"MayaSubmitDeadline", {}).get(
|
||||
"limit", [])
|
||||
)
|
||||
|
||||
self.group = (
|
||||
context.data["project_settings"].get(
|
||||
"deadline", {}).get(
|
||||
"publish", {}).get(
|
||||
"MayaSubmitDeadline", {}).get(
|
||||
"group", "none")
|
||||
)
|
||||
|
||||
context = instance.context
|
||||
workspace = context.data["workspaceDir"]
|
||||
anatomy = context.data['anatomy']
|
||||
|
|
|
|||
|
|
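The Deadline fix above reads `limit` and `group` from nested project settings with chained `.get()` calls so missing keys fall back to defaults instead of raising. A tiny sketch with made-up settings:

```python
project_settings = {
    "deadline": {
        "publish": {
            "MayaSubmitDeadline": {"limit": ["arnold", "yeti"], "group": "farm"}
        }
    }
}

limit_groups = (
    project_settings.get("deadline", {})
    .get("publish", {})
    .get("MayaSubmitDeadline", {})
    .get("limit", [])
)
group = (
    project_settings.get("deadline", {})
    .get("publish", {})
    .get("MayaSubmitDeadline", {})
    .get("group", "none")
)
print(limit_groups, group)  # ['arnold', 'yeti'] farm
```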
@ -94,24 +94,27 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
render_path).replace("\\", "/")
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
|
||||
if instance.data.get("bakeScriptPath"):
|
||||
render_path = instance.data.get("bakeRenderPath")
|
||||
script_path = instance.data.get("bakeScriptPath")
|
||||
exe_node_name = instance.data.get("bakeWriteNodeName")
|
||||
if instance.data.get("bakingNukeScripts"):
|
||||
for baking_script in instance.data["bakingNukeScripts"]:
|
||||
render_path = baking_script["bakeRenderPath"]
|
||||
script_path = baking_script["bakeScriptPath"]
|
||||
exe_node_name = baking_script["bakeWriteNodeName"]
|
||||
|
||||
# exception for slate workflow
|
||||
if "slate" in instance.data["families"]:
|
||||
self._frame_start += 1
|
||||
# exception for slate workflow
|
||||
if "slate" in instance.data["families"]:
|
||||
self._frame_start += 1
|
||||
|
||||
resp = self.payload_submit(instance,
|
||||
script_path,
|
||||
render_path,
|
||||
exe_node_name,
|
||||
response.json()
|
||||
)
|
||||
# Store output dir for unified publisher (filesequence)
|
||||
instance.data["deadlineSubmissionJob"] = resp.json()
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
resp = self.payload_submit(
|
||||
instance,
|
||||
script_path,
|
||||
render_path,
|
||||
exe_node_name,
|
||||
response.json()
|
||||
)
|
||||
|
||||
# Store output dir for unified publisher (filesequence)
|
||||
instance.data["deadlineSubmissionJob"] = resp.json()
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
|
||||
# redefinition of families
|
||||
if "render.farm" in families:
|
||||
|
|
|
|||
|
|
@ -104,7 +104,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
families = ["render.farm", "prerender.farm",
|
||||
"renderlayer", "imagesequence", "vrayscene"]
|
||||
|
||||
aov_filter = {"maya": [r".*(?:\.|_)*([Bb]eauty)(?:\.|_)*.*"],
|
||||
aov_filter = {"maya": [r".*(?:[\._-])*([Bb]eauty)(?:[\.|_])*.*"],
|
||||
"aftereffects": [r".*"], # for everything from AE
|
||||
"harmony": [r".*"], # for everything from AE
|
||||
"celaction": [r".*"]}
|
||||
|
|
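The updated `aov_filter` pattern above accepts dots, underscores, and dashes around the "beauty" token. A quick check of what it matches (the AOV names are made up):

```python
import re

aov_filter = {"maya": [r".*(?:[\._-])*([Bb]eauty)(?:[\.|_])*.*"]}

for name in ("sh010_beauty.1001.exr", "sh010-Beauty.1001.exr", "sh010_diffuse.1001.exr"):
    matched = any(re.match(pattern, name) for pattern in aov_filter["maya"])
    print(name, "->", matched)
# sh010_beauty.1001.exr -> True
# sh010-Beauty.1001.exr -> True
# sh010_diffuse.1001.exr -> False
```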
@ -142,8 +142,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
instance_transfer = {
|
||||
"slate": ["slateFrame"],
|
||||
"review": ["lutPath"],
|
||||
"render2d": ["bakeScriptPath", "bakeRenderPath",
|
||||
"bakeWriteNodeName", "version"],
|
||||
"render2d": ["bakingNukeScripts", "version"],
|
||||
"renderlayer": ["convertToScanline"]
|
||||
}
|
||||
|
||||
|
|
@ -231,7 +230,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
args = [
|
||||
'publish',
|
||||
roothless_metadata_path,
|
||||
"--targets {}".format("deadline")
|
||||
"--targets", "deadline",
|
||||
"--targets", "filesequence"
|
||||
]
|
||||
|
||||
# Generate the payload for Deadline submission
|
||||
|
|
@ -505,9 +505,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
representations = []
|
||||
collections, remainders = clique.assemble(exp_files)
|
||||
bake_render_path = instance.get("bakeRenderPath", [])
|
||||
bake_renders = instance.get("bakingNukeScripts", [])
|
||||
|
||||
# create representation for every collected sequence
|
||||
# create representation for every collected sequence
|
||||
for collection in collections:
|
||||
ext = collection.tail.lstrip(".")
|
||||
preview = False
|
||||
|
|
@ -523,7 +523,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
preview = True
|
||||
break
|
||||
|
||||
if bake_render_path:
|
||||
if bake_renders:
|
||||
preview = False
|
||||
|
||||
staging = os.path.dirname(list(collection)[0])
|
||||
|
|
@ -595,7 +595,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
})
|
||||
self._solve_families(instance, True)
|
||||
|
||||
if remainder in bake_render_path:
|
||||
if (bake_renders
|
||||
and remainder in bake_renders[0]["bakeRenderPath"]):
|
||||
rep.update({
|
||||
"fps": instance.get("fps"),
|
||||
"tags": ["review", "delete"]
|
||||
|
|
|
|||
|
|
@ -0,0 +1,147 @@
|
|||
from pymongo import UpdateOne
|
||||
from bson.objectid import ObjectId
|
||||
|
||||
from avalon.api import AvalonMongoDB
|
||||
|
||||
from openpype_modules.ftrack.lib import (
|
||||
CUST_ATTR_ID_KEY,
|
||||
query_custom_attributes,
|
||||
|
||||
BaseEvent
|
||||
)
|
||||
|
||||
|
||||
class SyncLinksToAvalon(BaseEvent):
|
||||
"""Synchronize inpug linkts to avalon documents."""
|
||||
# Run after sync to avalon event handler
|
||||
priority = 110
|
||||
|
||||
def __init__(self, session):
|
||||
self.dbcon = AvalonMongoDB()
|
||||
|
||||
super(SyncLinksToAvalon, self).__init__(session)
|
||||
|
||||
def launch(self, session, event):
|
||||
# Filter entity changes and keep only dependency (link) changes
|
||||
entities_info = event["data"]["entities"]
|
||||
dependency_changes = []
|
||||
removed_entities = set()
|
||||
for entity_info in entities_info:
|
||||
action = entity_info.get("action")
|
||||
entityType = entity_info.get("entityType")
|
||||
if action not in ("remove", "add"):
|
||||
continue
|
||||
|
||||
if entityType == "task":
|
||||
removed_entities.add(entity_info["entityId"])
|
||||
elif entityType == "dependency":
|
||||
dependency_changes.append(entity_info)
|
||||
|
||||
# Care only about dependency changes
|
||||
if not dependency_changes:
|
||||
return
|
||||
|
||||
project_id = None
|
||||
for entity_info in dependency_changes:
|
||||
for parent_info in entity_info["parents"]:
|
||||
if parent_info["entityType"] == "show":
|
||||
project_id = parent_info["entityId"]
|
||||
if project_id is not None:
|
||||
break
|
||||
|
||||
changed_to_ids = set()
|
||||
for entity_info in dependency_changes:
|
||||
to_id_change = entity_info["changes"]["to_id"]
|
||||
if to_id_change["new"] is not None:
|
||||
changed_to_ids.add(to_id_change["new"])
|
||||
|
||||
if to_id_change["old"] is not None:
|
||||
changed_to_ids.add(to_id_change["old"])
|
||||
|
||||
self._update_in_links(session, changed_to_ids, project_id)
|
||||
|
||||
def _update_in_links(self, session, ftrack_ids, project_id):
|
||||
if not ftrack_ids or project_id is None:
|
||||
return
|
||||
|
||||
attr_def = session.query((
|
||||
"select id from CustomAttributeConfiguration where key is \"{}\""
|
||||
).format(CUST_ATTR_ID_KEY)).first()
|
||||
if attr_def is None:
|
||||
return
|
||||
|
||||
project_entity = session.query((
|
||||
"select full_name from Project where id is \"{}\""
|
||||
).format(project_id)).first()
|
||||
if not project_entity:
|
||||
return
|
||||
|
||||
project_name = project_entity["full_name"]
|
||||
mongo_id_by_ftrack_id = self._get_mongo_ids_by_ftrack_ids(
|
||||
session, attr_def["id"], ftrack_ids
|
||||
)
|
||||
|
||||
filtered_ftrack_ids = tuple(mongo_id_by_ftrack_id.keys())
|
||||
context_links = session.query((
|
||||
"select from_id, to_id from TypedContextLink where to_id in ({})"
|
||||
).format(self.join_query_keys(filtered_ftrack_ids))).all()
|
||||
|
||||
mapping_by_to_id = {
|
||||
ftrack_id: set()
|
||||
for ftrack_id in filtered_ftrack_ids
|
||||
}
|
||||
all_from_ids = set()
|
||||
for context_link in context_links:
|
||||
to_id = context_link["to_id"]
|
||||
from_id = context_link["from_id"]
|
||||
if from_id == to_id:
|
||||
continue
|
||||
all_from_ids.add(from_id)
|
||||
mapping_by_to_id[to_id].add(from_id)
|
||||
|
||||
mongo_id_by_ftrack_id.update(self._get_mongo_ids_by_ftrack_ids(
|
||||
session, attr_def["id"], all_from_ids
|
||||
))
|
||||
self.log.info(mongo_id_by_ftrack_id)
|
||||
bulk_writes = []
|
||||
for to_id, from_ids in mapping_by_to_id.items():
|
||||
dst_mongo_id = mongo_id_by_ftrack_id[to_id]
|
||||
links = []
|
||||
for ftrack_id in from_ids:
|
||||
link_mongo_id = mongo_id_by_ftrack_id.get(ftrack_id)
|
||||
if link_mongo_id is None:
|
||||
continue
|
||||
|
||||
links.append({
|
||||
"_id": ObjectId(link_mongo_id),
|
||||
"linkedBy": "ftrack",
|
||||
"type": "breakdown"
|
||||
})
|
||||
|
||||
bulk_writes.append(UpdateOne(
|
||||
{"_id": ObjectId(dst_mongo_id)},
|
||||
{"$set": {"data.inputLinks": links}}
|
||||
))
|
||||
|
||||
if bulk_writes:
|
||||
self.dbcon.database[project_name].bulk_write(bulk_writes)
|
||||
|
||||
def _get_mongo_ids_by_ftrack_ids(self, session, attr_id, ftrack_ids):
|
||||
output = query_custom_attributes(
|
||||
session, [attr_id], ftrack_ids
|
||||
)
|
||||
mongo_id_by_ftrack_id = {}
|
||||
for item in output:
|
||||
mongo_id = item["value"]
|
||||
if not mongo_id:
|
||||
continue
|
||||
|
||||
ftrack_id = item["entity_id"]
|
||||
|
||||
mongo_id_by_ftrack_id[ftrack_id] = mongo_id
|
||||
return mongo_id_by_ftrack_id
|
||||
|
||||
|
||||
def register(session):
|
||||
'''Register plugin. Called when used as a plugin.'''
|
||||
SyncLinksToAvalon(session).register()
|
||||
|
|
@ -22,7 +22,7 @@ from .custom_attributes import get_openpype_attr
|
|||
|
||||
from bson.objectid import ObjectId
|
||||
from bson.errors import InvalidId
|
||||
from pymongo import UpdateOne
|
||||
from pymongo import UpdateOne, ReplaceOne
|
||||
import ftrack_api
|
||||
|
||||
log = Logger.get_logger(__name__)
|
||||
|
|
@ -328,7 +328,7 @@ class SyncEntitiesFactory:
|
|||
server_url=self._server_url,
|
||||
api_key=self._api_key,
|
||||
api_user=self._api_user,
|
||||
auto_connect_event_hub=True
|
||||
auto_connect_event_hub=False
|
||||
)
|
||||
|
||||
self.duplicates = {}
|
||||
|
|
@ -341,6 +341,7 @@ class SyncEntitiesFactory:
|
|||
}
|
||||
|
||||
self.create_list = []
|
||||
self.unarchive_list = []
|
||||
self.updates = collections.defaultdict(dict)
|
||||
|
||||
self.avalon_project = None
|
||||
|
|
@ -1169,16 +1170,43 @@ class SyncEntitiesFactory:
|
|||
entity
|
||||
)
|
||||
|
||||
def _get_input_links(self, ftrack_ids):
|
||||
tupled_ids = tuple(ftrack_ids)
|
||||
mapping_by_to_id = {
|
||||
ftrack_id: set()
|
||||
for ftrack_id in tupled_ids
|
||||
}
|
||||
ids_len = len(tupled_ids)
|
||||
chunk_size = int(5000 / ids_len)
|
||||
all_links = []
|
||||
for idx in range(0, ids_len, chunk_size):
|
||||
entity_ids_joined = join_query_keys(
|
||||
tupled_ids[idx:idx + chunk_size]
|
||||
)
|
||||
|
||||
all_links.extend(self.session.query((
|
||||
"select from_id, to_id from"
|
||||
" TypedContextLink where to_id in ({})"
|
||||
).format(entity_ids_joined)).all())
|
||||
|
||||
for context_link in all_links:
|
||||
to_id = context_link["to_id"]
|
||||
from_id = context_link["from_id"]
|
||||
if from_id == to_id:
|
||||
continue
|
||||
mapping_by_to_id[to_id].add(from_id)
|
||||
return mapping_by_to_id
|
||||
|
||||
def prepare_ftrack_ent_data(self):
|
||||
not_set_ids = []
|
||||
for id, entity_dict in self.entities_dict.items():
|
||||
for ftrack_id, entity_dict in self.entities_dict.items():
|
||||
entity = entity_dict["entity"]
|
||||
if entity is None:
|
||||
not_set_ids.append(id)
|
||||
not_set_ids.append(ftrack_id)
|
||||
continue
|
||||
|
||||
self.entities_dict[id]["final_entity"] = {}
|
||||
self.entities_dict[id]["final_entity"]["name"] = (
|
||||
self.entities_dict[ftrack_id]["final_entity"] = {}
|
||||
self.entities_dict[ftrack_id]["final_entity"]["name"] = (
|
||||
entity_dict["name"]
|
||||
)
|
||||
data = {}
|
||||
|
|
@ -1191,58 +1219,59 @@ class SyncEntitiesFactory:
|
|||
for key, val in entity_dict.get("hier_attrs", []).items():
|
||||
data[key] = val
|
||||
|
||||
if id == self.ft_project_id:
|
||||
project_name = entity["full_name"]
|
||||
data["code"] = entity["name"]
|
||||
self.entities_dict[id]["final_entity"]["data"] = data
|
||||
self.entities_dict[id]["final_entity"]["type"] = "project"
|
||||
if ftrack_id != self.ft_project_id:
|
||||
ent_path_items = [ent["name"] for ent in entity["link"]]
|
||||
parents = ent_path_items[1:len(ent_path_items) - 1:]
|
||||
|
||||
proj_schema = entity["project_schema"]
|
||||
task_types = proj_schema["_task_type_schema"]["types"]
|
||||
proj_apps, warnings = get_project_apps(
|
||||
data.pop("applications", [])
|
||||
)
|
||||
for msg, items in warnings.items():
|
||||
if not msg or not items:
|
||||
continue
|
||||
self.report_items["warning"][msg] = items
|
||||
|
||||
current_project_anatomy_data = get_anatomy_settings(
|
||||
project_name, exclude_locals=True
|
||||
)
|
||||
anatomy_tasks = current_project_anatomy_data["tasks"]
|
||||
tasks = {}
|
||||
default_type_data = {
|
||||
"short_name": ""
|
||||
}
|
||||
for task_type in task_types:
|
||||
task_type_name = task_type["name"]
|
||||
tasks[task_type_name] = copy.deepcopy(
|
||||
anatomy_tasks.get(task_type_name)
|
||||
or default_type_data
|
||||
)
|
||||
|
||||
project_config = {
|
||||
"tasks": tasks,
|
||||
"apps": proj_apps
|
||||
}
|
||||
for key, value in current_project_anatomy_data.items():
|
||||
if key in project_config or key == "attributes":
|
||||
continue
|
||||
project_config[key] = value
|
||||
|
||||
self.entities_dict[id]["final_entity"]["config"] = (
|
||||
project_config
|
||||
)
|
||||
data["parents"] = parents
|
||||
data["tasks"] = self.entities_dict[ftrack_id].pop("tasks", {})
|
||||
self.entities_dict[ftrack_id]["final_entity"]["data"] = data
|
||||
self.entities_dict[ftrack_id]["final_entity"]["type"] = "asset"
|
||||
continue
|
||||
project_name = entity["full_name"]
|
||||
data["code"] = entity["name"]
|
||||
self.entities_dict[ftrack_id]["final_entity"]["data"] = data
|
||||
self.entities_dict[ftrack_id]["final_entity"]["type"] = (
|
||||
"project"
|
||||
)
|
||||
|
||||
ent_path_items = [ent["name"] for ent in entity["link"]]
|
||||
parents = ent_path_items[1:len(ent_path_items) - 1:]
|
||||
proj_schema = entity["project_schema"]
|
||||
task_types = proj_schema["_task_type_schema"]["types"]
|
||||
proj_apps, warnings = get_project_apps(
|
||||
data.pop("applications", [])
|
||||
)
|
||||
for msg, items in warnings.items():
|
||||
if not msg or not items:
|
||||
continue
|
||||
self.report_items["warning"][msg] = items
|
||||
|
||||
data["parents"] = parents
|
||||
data["tasks"] = self.entities_dict[id].pop("tasks", {})
|
||||
self.entities_dict[id]["final_entity"]["data"] = data
|
||||
self.entities_dict[id]["final_entity"]["type"] = "asset"
|
||||
current_project_anatomy_data = get_anatomy_settings(
|
||||
project_name, exclude_locals=True
|
||||
)
|
||||
anatomy_tasks = current_project_anatomy_data["tasks"]
|
||||
tasks = {}
|
||||
default_type_data = {
|
||||
"short_name": ""
|
||||
}
|
||||
for task_type in task_types:
|
||||
task_type_name = task_type["name"]
|
||||
tasks[task_type_name] = copy.deepcopy(
|
||||
anatomy_tasks.get(task_type_name)
|
||||
or default_type_data
|
||||
)
|
||||
|
||||
project_config = {
|
||||
"tasks": tasks,
|
||||
"apps": proj_apps
|
||||
}
|
||||
for key, value in current_project_anatomy_data.items():
|
||||
if key in project_config or key == "attributes":
|
||||
continue
|
||||
project_config[key] = value
|
||||
|
||||
self.entities_dict[ftrack_id]["final_entity"]["config"] = (
|
||||
project_config
|
||||
)
|
||||
|
||||
if not_set_ids:
|
||||
self.log.debug((
|
||||
|
|
@ -1433,6 +1462,28 @@ class SyncEntitiesFactory:
|
|||
for child_id in entity_dict["children"]:
|
||||
children_queue.append(child_id)
|
||||
|
||||
def set_input_links(self):
|
||||
ftrack_ids = set(self.create_ftrack_ids) | set(self.update_ftrack_ids)
|
||||
|
||||
input_links_by_ftrack_id = self._get_input_links(ftrack_ids)
|
||||
|
||||
for ftrack_id in ftrack_ids:
|
||||
input_links = []
|
||||
final_entity = self.entities_dict[ftrack_id]["final_entity"]
|
||||
final_entity["data"]["inputLinks"] = input_links
|
||||
link_ids = input_links_by_ftrack_id[ftrack_id]
|
||||
if not link_ids:
|
||||
continue
|
||||
|
||||
for ftrack_link_id in link_ids:
|
||||
mongo_id = self.ftrack_avalon_mapper.get(ftrack_link_id)
|
||||
if mongo_id is not None:
|
||||
input_links.append({
|
||||
"_id": ObjectId(mongo_id),
|
||||
"linkedBy": "ftrack",
|
||||
"type": "breakdown"
|
||||
})
|
||||
|
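For reference, each entry that `set_input_links` appends to `final_entity["data"]["inputLinks"]` has this shape; the entity name and the ObjectId below are hypothetical, only the keys come from the code above:

from bson.objectid import ObjectId

final_entity = {
    "name": "sh010",
    "data": {
        "inputLinks": [
            {
                "_id": ObjectId("614a1b2c3d4e5f6a7b8c9d0e"),
                "linkedBy": "ftrack",
                "type": "breakdown"
            }
        ]
    }
}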
||||
def prepare_changes(self):
|
||||
self.log.debug("* Preparing changes for avalon/ftrack")
|
||||
hierarchy_changing_ids = []
|
||||
|
|
@ -1806,9 +1857,28 @@ class SyncEntitiesFactory:
|
|||
for ftrack_id in self.create_ftrack_ids:
|
||||
# CHECK it is possible that entity was already created
|
||||
# because it is the parent of another entity which was processed first
|
||||
if ftrack_id in self.ftrack_avalon_mapper:
|
||||
continue
|
||||
self.create_avalon_entity(ftrack_id)
|
||||
if ftrack_id not in self.ftrack_avalon_mapper:
|
||||
self.create_avalon_entity(ftrack_id)
|
||||
|
||||
self.set_input_links()
|
||||
|
||||
unarchive_writes = []
|
||||
for item in self.unarchive_list:
|
||||
mongo_id = item["_id"]
|
||||
unarchive_writes.append(ReplaceOne(
|
||||
{"_id": mongo_id},
|
||||
item
|
||||
))
|
||||
av_ent_path_items = item["data"]["parents"]
|
||||
av_ent_path_items.append(item["name"])
|
||||
av_ent_path = "/".join(av_ent_path_items)
|
||||
self.log.debug(
|
||||
"Entity was unarchived <{}>".format(av_ent_path)
|
||||
)
|
||||
self.remove_from_archived(mongo_id)
|
||||
|
||||
if unarchive_writes:
|
||||
self.dbcon.bulk_write(unarchive_writes)
|
||||
|
||||
if len(self.create_list) > 0:
|
||||
self.dbcon.insert_many(self.create_list)
|
||||
|
|
@ -1899,14 +1969,8 @@ class SyncEntitiesFactory:
|
|||
|
||||
if unarchive is False:
|
||||
self.create_list.append(item)
|
||||
return
|
||||
# If unarchive then replace entity data in database
|
||||
self.dbcon.replace_one({"_id": new_id}, item)
|
||||
self.remove_from_archived(mongo_id)
|
||||
av_ent_path_items = item["data"]["parents"]
|
||||
av_ent_path_items.append(item["name"])
|
||||
av_ent_path = "/".join(av_ent_path_items)
|
||||
self.log.debug("Entity was unarchived <{}>".format(av_ent_path))
|
||||
else:
|
||||
self.unarchive_list.append(item)
|
||||
|
||||
def check_unarchivation(self, ftrack_id, mongo_id, name):
|
||||
archived_by_id = self.avalon_archived_by_id.get(mongo_id)
|
||||
|
|
|
|||
|
|
@ -27,7 +27,7 @@ class CollectUsername(pyblish.api.ContextPlugin):
|
|||
order = pyblish.api.CollectorOrder - 0.488
|
||||
label = "Collect ftrack username"
|
||||
hosts = ["webpublisher", "photoshop"]
|
||||
targets = ["remotepublish", "filespublish"]
|
||||
targets = ["remotepublish", "filespublish", "tvpaint_worker"]
|
||||
|
||||
_context = None
|
||||
|
||||
|
|
|
|||
|
|
@ -1,208 +1,266 @@
|
|||
import pyblish.api
|
||||
import json
|
||||
import os
|
||||
import json
|
||||
import copy
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
||||
"""Collect ftrack component data
|
||||
"""Collect ftrack component data (not integrate yet).
|
||||
|
||||
Add ftrack component list to instance.
|
||||
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.IntegratorOrder + 0.48
|
||||
label = 'Integrate Ftrack Component'
|
||||
label = "Integrate Ftrack Component"
|
||||
families = ["ftrack"]
|
||||
|
||||
family_mapping = {'camera': 'cam',
|
||||
'look': 'look',
|
||||
'mayaascii': 'scene',
|
||||
'model': 'geo',
|
||||
'rig': 'rig',
|
||||
'setdress': 'setdress',
|
||||
'pointcache': 'cache',
|
||||
'render': 'render',
|
||||
'render2d': 'render',
|
||||
'nukescript': 'comp',
|
||||
'write': 'render',
|
||||
'review': 'mov',
|
||||
'plate': 'img',
|
||||
'audio': 'audio',
|
||||
'workfile': 'scene',
|
||||
'animation': 'cache',
|
||||
'image': 'img',
|
||||
'reference': 'reference'
|
||||
}
|
||||
family_mapping = {
|
||||
"camera": "cam",
|
||||
"look": "look",
|
||||
"mayaascii": "scene",
|
||||
"model": "geo",
|
||||
"rig": "rig",
|
||||
"setdress": "setdress",
|
||||
"pointcache": "cache",
|
||||
"render": "render",
|
||||
"render2d": "render",
|
||||
"nukescript": "comp",
|
||||
"write": "render",
|
||||
"review": "mov",
|
||||
"plate": "img",
|
||||
"audio": "audio",
|
||||
"workfile": "scene",
|
||||
"animation": "cache",
|
||||
"image": "img",
|
||||
"reference": "reference"
|
||||
}
|
||||
|
||||
def process(self, instance):
|
||||
self.ftrack_locations = {}
|
||||
self.log.debug('instance {}'.format(instance))
|
||||
self.log.debug("instance {}".format(instance))
|
||||
|
||||
if instance.data.get('version'):
|
||||
version_number = int(instance.data.get('version'))
|
||||
else:
|
||||
instance_version = instance.data.get("version")
|
||||
if instance_version is None:
|
||||
raise ValueError("Instance version not set")
|
||||
|
||||
family = instance.data['family'].lower()
|
||||
version_number = int(instance_version)
|
||||
|
||||
family = instance.data["family"]
|
||||
family_low = instance.data["family"].lower()
|
||||
|
||||
asset_type = instance.data.get("ftrackFamily")
|
||||
if not asset_type and family in self.family_mapping:
|
||||
asset_type = self.family_mapping[family]
|
||||
if not asset_type and family_low in self.family_mapping:
|
||||
asset_type = self.family_mapping[family_low]
|
||||
|
||||
# Ignore this instance if neither "ftrackFamily" nor a family mapping is
|
||||
# found.
|
||||
if not asset_type:
|
||||
self.log.info((
|
||||
"Family \"{}\" does not match any asset type mapping"
|
||||
).format(family))
|
||||
return
|
||||
|
||||
componentList = []
|
||||
instance_repres = instance.data.get("representations")
|
||||
if not instance_repres:
|
||||
self.log.info((
|
||||
"Skipping instance. Does not have any representations {}"
|
||||
).format(str(instance)))
|
||||
return
|
||||
|
||||
# Prepare FPS
|
||||
instance_fps = instance.data.get("fps")
|
||||
if instance_fps is None:
|
||||
instance_fps = instance.context.data["fps"]
|
||||
|
||||
# Base of component item data
|
||||
# - create a copy of this object when want to use it
|
||||
base_component_item = {
|
||||
"assettype_data": {
|
||||
"short": asset_type,
|
||||
},
|
||||
"asset_data": {
|
||||
"name": instance.data["subset"],
|
||||
},
|
||||
"assetversion_data": {
|
||||
"version": version_number,
|
||||
"comment": instance.context.data.get("comment") or ""
|
||||
},
|
||||
"component_overwrite": False,
|
||||
# This can be changed optionally
|
||||
"thumbnail": False,
|
||||
# These must be changed for each component
|
||||
"component_data": None,
|
||||
"component_path": None,
|
||||
"component_location": None
|
||||
}
|
||||
|
||||
ft_session = instance.context.data["ftrackSession"]
|
||||
|
||||
for comp in instance.data['representations']:
|
||||
self.log.debug('component {}'.format(comp))
|
||||
# Filter types of representations
|
||||
review_representations = []
|
||||
thumbnail_representations = []
|
||||
other_representations = []
|
||||
for repre in instance_repres:
|
||||
self.log.debug("Representation {}".format(repre))
|
||||
repre_tags = repre.get("tags") or []
|
||||
if repre.get("thumbnail") or "thumbnail" in repre_tags:
|
||||
thumbnail_representations.append(repre)
|
||||
|
||||
if comp.get('thumbnail') or ("thumbnail" in comp.get('tags', [])):
|
||||
location = self.get_ftrack_location(
|
||||
'ftrack.server', ft_session
|
||||
)
|
||||
component_data = {
|
||||
"name": "thumbnail" # Default component name is "main".
|
||||
}
|
||||
comp['thumbnail'] = True
|
||||
comp_files = comp["files"]
|
||||
elif "ftrackreview" in repre_tags:
|
||||
review_representations.append(repre)
|
||||
|
||||
else:
|
||||
other_representations.append(repre)
|
||||
|
||||
# Prepare ftrack locations
|
||||
unmanaged_location = ft_session.query(
|
||||
"Location where name is \"ftrack.unmanaged\""
|
||||
).one()
|
||||
ftrack_server_location = ft_session.query(
|
||||
"Location where name is \"ftrack.server\""
|
||||
).one()
|
||||
|
||||
# Components data
|
||||
component_list = []
|
||||
# Components that will be duplicated to unmanaged location
|
||||
src_components_to_add = []
|
||||
|
||||
# Create thumbnail components
|
||||
# TODO what if there is multiple thumbnails?
|
||||
first_thumbnail_component = None
|
||||
for repre in thumbnail_representations:
|
||||
published_path = repre.get("published_path")
|
||||
if not published_path:
|
||||
comp_files = repre["files"]
|
||||
if isinstance(comp_files, (tuple, list, set)):
|
||||
filename = comp_files[0]
|
||||
else:
|
||||
filename = comp_files
|
||||
|
||||
comp['published_path'] = os.path.join(
|
||||
comp['stagingDir'], filename
|
||||
)
|
||||
|
||||
elif comp.get('ftrackreview') or ("ftrackreview" in comp.get('tags', [])):
|
||||
'''
|
||||
Ftrack bug requirement:
|
||||
- Start frame must be 0
|
||||
- End frame must be {duration}
|
||||
EXAMPLE: When mov has 55 frames:
|
||||
- Start frame should be 0
|
||||
- End frame should be 55 (do not ask why please!)
|
||||
'''
|
||||
start_frame = 0
|
||||
end_frame = 1
|
||||
if 'frameEndFtrack' in comp and 'frameStartFtrack' in comp:
|
||||
end_frame += (
|
||||
comp['frameEndFtrack'] - comp['frameStartFtrack']
|
||||
)
|
||||
else:
|
||||
end_frame += (
|
||||
instance.data["frameEnd"] - instance.data["frameStart"]
|
||||
)
|
||||
|
||||
fps = comp.get('fps')
|
||||
if fps is None:
|
||||
fps = instance.data.get(
|
||||
"fps", instance.context.data['fps']
|
||||
)
|
||||
|
||||
comp['fps'] = fps
|
||||
|
||||
location = self.get_ftrack_location(
|
||||
'ftrack.server', ft_session
|
||||
published_path = os.path.join(
|
||||
repre["stagingDir"], filename
|
||||
)
|
||||
component_data = {
|
||||
# Default component name is "main".
|
||||
"name": "ftrackreview-mp4",
|
||||
"metadata": {'ftr_meta': json.dumps({
|
||||
'frameIn': int(start_frame),
|
||||
'frameOut': int(end_frame),
|
||||
'frameRate': float(comp['fps'])})}
|
||||
}
|
||||
comp['thumbnail'] = False
|
||||
else:
|
||||
component_data = {
|
||||
"name": comp['name']
|
||||
}
|
||||
location = self.get_ftrack_location(
|
||||
'ftrack.unmanaged', ft_session
|
||||
)
|
||||
comp['thumbnail'] = False
|
||||
if not os.path.exists(published_path):
|
||||
continue
|
||||
repre["published_path"] = published_path
|
||||
|
||||
self.log.debug('location {}'.format(location))
|
||||
|
||||
component_item = {
|
||||
"assettype_data": {
|
||||
"short": asset_type,
|
||||
},
|
||||
"asset_data": {
|
||||
"name": instance.data["subset"],
|
||||
},
|
||||
"assetversion_data": {
|
||||
"version": version_number,
|
||||
"comment": instance.context.data.get("comment", "")
|
||||
},
|
||||
"component_data": component_data,
|
||||
"component_path": comp['published_path'],
|
||||
'component_location': location,
|
||||
"component_overwrite": False,
|
||||
"thumbnail": comp['thumbnail']
|
||||
# Create copy of base comp item and append it
|
||||
thumbnail_item = copy.deepcopy(base_component_item)
|
||||
thumbnail_item["component_path"] = repre["published_path"]
|
||||
thumbnail_item["component_data"] = {
|
||||
"name": "thumbnail"
|
||||
}
|
||||
thumbnail_item["thumbnail"] = True
|
||||
# Create copy of item before setting location
|
||||
src_components_to_add.append(copy.deepcopy(thumbnail_item))
|
||||
# Create copy of first thumbnail
|
||||
if first_thumbnail_component is None:
|
||||
first_thumbnail_component = copy.deepcopy(thumbnail_item)
|
||||
# Set location
|
||||
thumbnail_item["component_location"] = ftrack_server_location
|
||||
# Add item to component list
|
||||
component_list.append(thumbnail_item)
|
||||
|
||||
# Add custom attributes for AssetVersion
|
||||
assetversion_cust_attrs = {}
|
||||
intent_val = instance.context.data.get("intent")
|
||||
if intent_val and isinstance(intent_val, dict):
|
||||
intent_val = intent_val.get("value")
|
||||
# Create review components
|
||||
# Change asset name of each new component for review
|
||||
is_first_review_repre = True
|
||||
not_first_components = []
|
||||
for repre in review_representations:
|
||||
frame_start = repre.get("frameStartFtrack")
|
||||
frame_end = repre.get("frameEndFtrack")
|
||||
if frame_start is None or frame_end is None:
|
||||
frame_start = instance.data["frameStart"]
|
||||
frame_end = instance.data["frameEnd"]
|
||||
|
||||
if intent_val:
|
||||
assetversion_cust_attrs["intent"] = intent_val
|
||||
# Frame end of uploaded video file should be duration in frames
|
||||
# - frame start is always 0
|
||||
# - frame end is duration in frames
|
||||
duration = frame_end - frame_start + 1
|
||||
|
||||
component_item["assetversion_data"]["custom_attributes"] = (
|
||||
assetversion_cust_attrs
|
||||
)
|
||||
fps = repre.get("fps")
|
||||
if fps is None:
|
||||
fps = instance_fps
|
||||
|
||||
componentList.append(component_item)
|
||||
# Create copy with ftrack.unmanaged location if thumb or prev
|
||||
if comp.get('thumbnail') or comp.get('preview') \
|
||||
or ("preview" in comp.get('tags', [])) \
|
||||
or ("review" in comp.get('tags', [])) \
|
||||
or ("thumbnail" in comp.get('tags', [])):
|
||||
unmanaged_loc = self.get_ftrack_location(
|
||||
'ftrack.unmanaged', ft_session
|
||||
)
|
||||
|
||||
component_data_src = component_data.copy()
|
||||
name = component_data['name'] + '_src'
|
||||
component_data_src['name'] = name
|
||||
|
||||
component_item_src = {
|
||||
"assettype_data": {
|
||||
"short": asset_type,
|
||||
},
|
||||
"asset_data": {
|
||||
"name": instance.data["subset"],
|
||||
},
|
||||
"assetversion_data": {
|
||||
"version": version_number,
|
||||
},
|
||||
"component_data": component_data_src,
|
||||
"component_path": comp['published_path'],
|
||||
'component_location': unmanaged_loc,
|
||||
"component_overwrite": False,
|
||||
"thumbnail": False
|
||||
# Create copy of base comp item and append it
|
||||
review_item = copy.deepcopy(base_component_item)
|
||||
# Change location
|
||||
review_item["component_path"] = repre["published_path"]
|
||||
# Change component data
|
||||
review_item["component_data"] = {
|
||||
# Default component name is "main".
|
||||
"name": "ftrackreview-mp4",
|
||||
"metadata": {
|
||||
"ftr_meta": json.dumps({
|
||||
"frameIn": 0,
|
||||
"frameOut": int(duration),
|
||||
"frameRate": float(fps)
|
||||
})
|
||||
}
|
||||
}
|
||||
# Create copy of item before setting location or changing asset
|
||||
src_components_to_add.append(copy.deepcopy(review_item))
|
||||
if is_first_review_repre:
|
||||
is_first_review_repre = False
|
||||
else:
|
||||
# Add representation name to asset name of "not first" review
|
||||
asset_name = review_item["asset_data"]["name"]
|
||||
review_item["asset_data"]["name"] = "_".join(
|
||||
(asset_name, repre["name"])
|
||||
)
|
||||
not_first_components.append(review_item)
|
||||
|
||||
componentList.append(component_item_src)
|
||||
# Set location
|
||||
review_item["component_location"] = ftrack_server_location
|
||||
# Add item to component list
|
||||
component_list.append(review_item)
|
||||
|
||||
self.log.debug('componentsList: {}'.format(str(componentList)))
|
||||
instance.data["ftrackComponentsList"] = componentList
|
||||
# Duplicate thumbnail component for all not first reviews
|
||||
if first_thumbnail_component is not None:
|
||||
for component_item in not_first_components:
|
||||
asset_name = component_item["asset_data"]["name"]
|
||||
new_thumbnail_component = copy.deepcopy(
|
||||
first_thumbnail_component
|
||||
)
|
||||
new_thumbnail_component["asset_data"]["name"] = asset_name
|
||||
new_thumbnail_component["component_location"] = (
|
||||
ftrack_server_location
|
||||
)
|
||||
component_list.append(new_thumbnail_component)
|
||||
|
||||
def get_ftrack_location(self, name, session):
|
||||
if name in self.ftrack_locations:
|
||||
return self.ftrack_locations[name]
|
||||
# Add source components for review and thumbnail components
|
||||
for copy_src_item in src_components_to_add:
|
||||
# Make sure thumbnail is disabled
|
||||
copy_src_item["thumbnail"] = False
|
||||
# Set location
|
||||
copy_src_item["component_location"] = unmanaged_location
|
||||
# Modify name of component to have suffix "_src"
|
||||
component_data = copy_src_item["component_data"]
|
||||
component_name = component_data["name"]
|
||||
component_data["name"] = component_name + "_src"
|
||||
component_list.append(copy_src_item)
|
||||
|
||||
location = session.query(
|
||||
'Location where name is "{}"'.format(name)
|
||||
).one()
|
||||
self.ftrack_locations[name] = location
|
||||
return location
|
||||
# Add other representations as components
|
||||
for repre in other_representations:
|
||||
published_path = repre.get("published_path")
|
||||
if not published_path:
|
||||
continue
|
||||
# Create copy of base comp item and append it
|
||||
other_item = copy.deepcopy(base_component_item)
|
||||
other_item["component_data"] = {
|
||||
"name": repre["name"]
|
||||
}
|
||||
other_item["component_location"] = unmanaged_location
|
||||
other_item["component_path"] = published_path
|
||||
component_list.append(other_item)
|
||||
|
||||
def json_obj_parser(obj):
|
||||
return str(obj)
|
||||
|
||||
self.log.debug("Components list: {}".format(
|
||||
json.dumps(
|
||||
component_list,
|
||||
sort_keys=True,
|
||||
indent=4,
|
||||
default=json_obj_parser
|
||||
)
|
||||
))
|
||||
instance.data["ftrackComponentsList"] = component_list
|
||||
|
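To make the review metadata built above concrete, a small sketch of the `ftr_meta` payload for an `ftrackreview-mp4` component; the frame range and fps are hypothetical, the rule is simply that frameIn is 0 and frameOut is the duration in frames:

import json

frame_start, frame_end, fps = 1001, 1055, 25.0   # hypothetical clip
duration = frame_end - frame_start + 1           # 55 frames

review_component_data = {
    "name": "ftrackreview-mp4",
    "metadata": {
        "ftr_meta": json.dumps({
            "frameIn": 0,
            "frameOut": int(duration),
            "frameRate": float(fps)
        })
    }
}
print(review_component_data["metadata"]["ftr_meta"])
# {"frameIn": 0, "frameOut": 55, "frameRate": 25.0}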
|
|
|||
|
|
@ -1,30 +0,0 @@
|
|||
import pyblish.api
|
||||
import os
|
||||
|
||||
|
||||
class IntegrateCleanComponentData(pyblish.api.InstancePlugin):
|
||||
"""
|
||||
Cleaning up thumbnail and mov files after they have been integrated
|
||||
"""
|
||||
|
||||
order = pyblish.api.IntegratorOrder + 0.5
|
||||
label = 'Clean component data'
|
||||
families = ["ftrack"]
|
||||
optional = True
|
||||
active = False
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
for comp in instance.data['representations']:
|
||||
self.log.debug('component {}'.format(comp))
|
||||
|
||||
if "%" in comp['published_path'] or "#" in comp['published_path']:
|
||||
continue
|
||||
|
||||
if comp.get('thumbnail') or ("thumbnail" in comp.get('tags', [])):
|
||||
os.remove(comp['published_path'])
|
||||
self.log.info('Thumbnail image was erased')
|
||||
|
||||
elif comp.get('preview') or ("preview" in comp.get('tags', [])):
|
||||
os.remove(comp['published_path'])
|
||||
self.log.info('Preview mov file was erased')
|
||||
openpype/modules/default_modules/job_queue/__init__.py (new file, 6 lines)
|
|
@ -0,0 +1,6 @@
|
|||
from .module import JobQueueModule
|
||||
|
||||
|
||||
__all__ = (
|
||||
"JobQueueModule",
|
||||
)
|
||||
|
|
@ -0,0 +1,8 @@
|
|||
from .server import WebServerManager
|
||||
from .utils import main
|
||||
|
||||
|
||||
__all__ = (
|
||||
"WebServerManager",
|
||||
"main"
|
||||
)
|
||||
|
|
@ -0,0 +1,62 @@
|
|||
import json
|
||||
|
||||
from aiohttp.web_response import Response
|
||||
|
||||
|
||||
class JobQueueResource:
|
||||
def __init__(self, job_queue, server_manager):
|
||||
self.server_manager = server_manager
|
||||
|
||||
self._prefix = "/api"
|
||||
|
||||
self._job_queue = job_queue
|
||||
|
||||
self.endpoint_defs = (
|
||||
("POST", "/jobs", self.post_job),
|
||||
("GET", "/jobs", self.get_jobs),
|
||||
("GET", "/jobs/{job_id}", self.get_job)
|
||||
)
|
||||
|
||||
self.register()
|
||||
|
||||
def register(self):
|
||||
for methods, url, callback in self.endpoint_defs:
|
||||
final_url = self._prefix + url
|
||||
self.server_manager.add_route(
|
||||
methods, final_url, callback
|
||||
)
|
||||
|
||||
async def get_jobs(self, request):
|
||||
jobs_data = []
|
||||
for job in self._job_queue.get_jobs():
|
||||
jobs_data.append(job.status())
|
||||
return Response(status=200, body=self.encode(jobs_data))
|
||||
|
||||
async def post_job(self, request):
|
||||
data = await request.json()
|
||||
host_name = data.get("host_name")
|
||||
if not host_name:
|
||||
return Response(
|
||||
status=400, message="Key \"host_name\" not filled."
|
||||
)
|
||||
|
||||
job = self._job_queue.create_job(host_name, data)
|
||||
return Response(status=201, text=job.id)
|
||||
|
||||
async def get_job(self, request):
|
||||
job_id = request.match_info["job_id"]
|
||||
content = self._job_queue.get_job_status(job_id)
|
||||
if content is None:
|
||||
content = {}
|
||||
return Response(
|
||||
status=200,
|
||||
body=self.encode(content),
|
||||
content_type="application/json"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def encode(cls, data):
|
||||
return json.dumps(
|
||||
data,
|
||||
indent=4
|
||||
).encode("utf-8")
|
||||
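A hedged sketch of how a client could exercise these endpoints with `requests`, assuming the job server runs on the default `localhost:8079`; the payload keys besides "host_name" are host specific and only illustrative:

import json
import requests

server = "http://localhost:8079"

# POST /api/jobs returns the new job id as plain text (status 201)
payload = {"host_name": "tvpaint", "workfile": "/path/to/scene.tvpp"}
job_id = requests.post(server + "/api/jobs", data=json.dumps(payload)).text

# GET /api/jobs/{job_id} returns the job status as JSON
status = requests.get(server + "/api/jobs/" + job_id).json()
print(status["state"])

# GET /api/jobs lists the status of all known jobs
all_jobs = requests.get(server + "/api/jobs").json()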
openpype/modules/default_modules/job_queue/job_server/jobs.py (new file, 240 lines)
|
|
@ -0,0 +1,240 @@
|
|||
import datetime
|
||||
import collections
|
||||
from uuid import uuid4
|
||||
|
||||
|
||||
class Job:
|
||||
"""Job related to specific host name.
|
||||
|
||||
Data must contain everything needed to finish the job.
|
||||
"""
|
||||
# Remove done jobs each n days to clear memory
|
||||
keep_in_memory_days = 3
|
||||
|
||||
def __init__(self, host_name, data, job_id=None, created_time=None):
|
||||
if job_id is None:
|
||||
job_id = str(uuid4())
|
||||
self._id = job_id
|
||||
if created_time is None:
|
||||
created_time = datetime.datetime.now()
|
||||
self._created_time = created_time
|
||||
self._started_time = None
|
||||
self._done_time = None
|
||||
self.host_name = host_name
|
||||
self.data = data
|
||||
self._result_data = None
|
||||
|
||||
self._started = False
|
||||
self._done = False
|
||||
self._errored = False
|
||||
self._message = None
|
||||
self._deleted = False
|
||||
|
||||
self._worker = None
|
||||
|
||||
def keep_in_memory(self):
|
||||
if self._done_time is None:
|
||||
return True
|
||||
|
||||
now = datetime.datetime.now()
|
||||
delta = now - self._done_time
|
||||
return delta.days < self.keep_in_memory_days
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
return self._id
|
||||
|
||||
@property
|
||||
def done(self):
|
||||
return self._done
|
||||
|
||||
def reset(self):
|
||||
self._started = False
|
||||
self._started_time = None
|
||||
self._done = False
|
||||
self._done_time = None
|
||||
self._errored = False
|
||||
self._message = None
|
||||
|
||||
self._worker = None
|
||||
|
||||
@property
|
||||
def started(self):
|
||||
return self._started
|
||||
|
||||
@property
|
||||
def deleted(self):
|
||||
return self._deleted
|
||||
|
||||
def set_deleted(self):
|
||||
self._deleted = True
|
||||
self.set_worker(None)
|
||||
|
||||
def set_worker(self, worker):
|
||||
if worker is self._worker:
|
||||
return
|
||||
|
||||
if self._worker is not None:
|
||||
self._worker.set_current_job(None)
|
||||
|
||||
self._worker = worker
|
||||
if worker is not None:
|
||||
worker.set_current_job(self)
|
||||
|
||||
def set_started(self):
|
||||
self._started_time = datetime.datetime.now()
|
||||
self._started = True
|
||||
|
||||
def set_done(self, success=True, message=None, data=None):
|
||||
self._done = True
|
||||
self._done_time = datetime.datetime.now()
|
||||
self._errored = not success
|
||||
self._message = message
|
||||
self._result_data = data
|
||||
if self._worker is not None:
|
||||
self._worker.set_current_job(None)
|
||||
|
||||
def status(self):
|
||||
worker_id = None
|
||||
if self._worker is not None:
|
||||
worker_id = self._worker.id
|
||||
output = {
|
||||
"id": self.id,
|
||||
"worker_id": worker_id,
|
||||
"done": self._done
|
||||
}
|
||||
output["message"] = self._message or None
|
||||
|
||||
state = "waiting"
|
||||
if self._deleted:
|
||||
state = "deleted"
|
||||
elif self._errored:
|
||||
state = "error"
|
||||
elif self._done:
|
||||
state = "done"
|
||||
elif self._started:
|
||||
state = "started"
|
||||
|
||||
output["result"] = self._result_data
|
||||
|
||||
output["state"] = state
|
||||
|
||||
return output
|
||||
|
||||
|
||||
class JobQueue:
|
||||
"""Queue holds jobs that should be done and workers that can do them.
|
||||
|
||||
Also assigns jobs to workers.
|
||||
"""
|
||||
old_jobs_check_minutes_interval = 30
|
||||
|
||||
def __init__(self):
|
||||
self._last_old_jobs_check = datetime.datetime.now()
|
||||
self._jobs_by_id = {}
|
||||
self._job_queue_by_host_name = collections.defaultdict(
|
||||
collections.deque
|
||||
)
|
||||
self._workers_by_id = {}
|
||||
self._workers_by_host_name = collections.defaultdict(list)
|
||||
|
||||
def workers(self):
|
||||
"""All currently registered workers."""
|
||||
return self._workers_by_id.values()
|
||||
|
||||
def add_worker(self, worker):
|
||||
host_name = worker.host_name
|
||||
print("Added new worker for \"{}\"".format(host_name))
|
||||
self._workers_by_id[worker.id] = worker
|
||||
self._workers_by_host_name[host_name].append(worker)
|
||||
|
||||
def get_worker(self, worker_id):
|
||||
return self._workers_by_id.get(worker_id)
|
||||
|
||||
def remove_worker(self, worker):
|
||||
# Look if worker had assigned job to do
|
||||
job = worker.current_job
|
||||
if job is not None and not job.done:
|
||||
# Reset job
|
||||
job.set_worker(None)
|
||||
job.reset()
|
||||
# Add job back to queue
|
||||
self._job_queue_by_host_name[job.host_name].appendleft(job)
|
||||
|
||||
# Remove worker from registered workers
|
||||
self._workers_by_id.pop(worker.id, None)
|
||||
host_name = worker.host_name
|
||||
if worker in self._workers_by_host_name[host_name]:
|
||||
self._workers_by_host_name[host_name].remove(worker)
|
||||
|
||||
print("Removed worker for \"{}\"".format(host_name))
|
||||
|
||||
def assign_jobs(self):
|
||||
"""Try to assign job for each idle worker.
|
||||
|
||||
Mark jobs as errored when no worker for their host is available.
|
||||
"""
|
||||
available_host_names = set()
|
||||
for worker in self._workers_by_id.values():
|
||||
host_name = worker.host_name
|
||||
available_host_names.add(host_name)
|
||||
if worker.is_idle():
|
||||
jobs = self._job_queue_by_host_name[host_name]
|
||||
while jobs:
|
||||
job = jobs.popleft()
|
||||
if not job.deleted:
|
||||
worker.set_current_job(job)
|
||||
break
|
||||
|
||||
for host_name in tuple(self._job_queue_by_host_name.keys()):
|
||||
if host_name in available_host_names:
|
||||
continue
|
||||
|
||||
jobs_deque = self._job_queue_by_host_name[host_name]
|
||||
message = ("Not available workers for \"{}\"").format(host_name)
|
||||
while jobs_deque:
|
||||
job = jobs_deque.popleft()
|
||||
if not job.deleted:
|
||||
job.set_done(False, message)
|
||||
self._remove_old_jobs()
|
||||
|
||||
def get_jobs(self):
|
||||
return self._jobs_by_id.values()
|
||||
|
||||
def get_job(self, job_id):
|
||||
"""Job by it's id."""
|
||||
return self._jobs_by_id.get(job_id)
|
||||
|
||||
def create_job(self, host_name, job_data):
|
||||
"""Create new job from passed data and add it to queue."""
|
||||
job = Job(host_name, job_data)
|
||||
self._jobs_by_id[job.id] = job
|
||||
self._job_queue_by_host_name[host_name].append(job)
|
||||
return job
|
||||
|
||||
def _remove_old_jobs(self):
|
||||
"""Once in specific time look if should remove old finished jobs."""
|
||||
delta = datetime.datetime.now() - self._last_old_jobs_check
|
||||
if delta.seconds < self.old_jobs_check_minutes_interval:
|
||||
return
|
||||
|
||||
for job_id in tuple(self._jobs_by_id.keys()):
|
||||
job = self._jobs_by_id[job_id]
|
||||
if not job.keep_in_memory():
|
||||
self._jobs_by_id.pop(job_id)
|
||||
|
||||
def remove_job(self, job_id):
|
||||
"""Delete job and eventually stop it."""
|
||||
job = self._jobs_by_id.get(job_id)
|
||||
if job is None:
|
||||
return
|
||||
|
||||
job.set_deleted()
|
||||
self._jobs_by_id.pop(job.id)
|
||||
|
||||
def get_job_status(self, job_id):
|
||||
"""Job's status based on id."""
|
||||
job = self._jobs_by_id.get(job_id)
|
||||
if job is None:
|
||||
return {}
|
||||
return job.status()
|
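A small hedged sketch of the queue in isolation, without the web server, just to show the job lifecycle the two classes above implement (the host name and payload are placeholders):

queue = JobQueue()

# Workstation side: enqueue work for a host
job = queue.create_job("tvpaint", {"workfile": "/path/to/scene.tvpp"})
print(queue.get_job_status(job.id)["state"])   # "waiting"

# Worker side (normally driven through the websocket route)
job.set_started()
job.set_done(success=True, data={"output": "/path/to/render"})
print(queue.get_job_status(job.id)["state"])   # "done"
print(queue.get_job_status(job.id)["result"])  # {"output": "/path/to/render"}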
||||
openpype/modules/default_modules/job_queue/job_server/server.py (new file, 154 lines)
|
|
@ -0,0 +1,154 @@
|
|||
import threading
|
||||
import asyncio
|
||||
import logging
|
||||
|
||||
from aiohttp import web
|
||||
|
||||
from .jobs import JobQueue
|
||||
from .job_queue_route import JobQueueResource
|
||||
from .workers_rpc_route import WorkerRpc
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class WebServerManager:
|
||||
"""Manger that care about web server thread."""
|
||||
def __init__(self, port, host, loop=None):
|
||||
self.port = port
|
||||
self.host = host
|
||||
self.app = web.Application()
|
||||
if loop is None:
|
||||
loop = asyncio.new_event_loop()
|
||||
|
||||
# add route with multiple methods for single "external app"
|
||||
self.webserver_thread = WebServerThread(self, loop)
|
||||
|
||||
@property
|
||||
def url(self):
|
||||
return "http://{}:{}".format(self.host, self.port)
|
||||
|
||||
def add_route(self, *args, **kwargs):
|
||||
self.app.router.add_route(*args, **kwargs)
|
||||
|
||||
def add_static(self, *args, **kwargs):
|
||||
self.app.router.add_static(*args, **kwargs)
|
||||
|
||||
def start_server(self):
|
||||
if self.webserver_thread and not self.webserver_thread.is_alive():
|
||||
self.webserver_thread.start()
|
||||
|
||||
def stop_server(self):
|
||||
if not self.is_running:
|
||||
return
|
||||
|
||||
try:
|
||||
log.debug("Stopping Web server")
|
||||
self.webserver_thread.stop()
|
||||
|
||||
except Exception as exc:
|
||||
print("Errored", str(exc))
|
||||
log.warning(
|
||||
"Error has happened during Killing Web server",
|
||||
exc_info=True
|
||||
)
|
||||
|
||||
@property
|
||||
def is_running(self):
|
||||
if self.webserver_thread is not None:
|
||||
return self.webserver_thread.is_running
|
||||
return False
|
||||
|
||||
|
||||
class WebServerThread(threading.Thread):
|
||||
""" Listener for requests in thread."""
|
||||
def __init__(self, manager, loop):
|
||||
super(WebServerThread, self).__init__()
|
||||
|
||||
self._is_running = False
|
||||
self._stopped = False
|
||||
self.manager = manager
|
||||
self.loop = loop
|
||||
self.runner = None
|
||||
self.site = None
|
||||
|
||||
job_queue = JobQueue()
|
||||
self.job_queue_route = JobQueueResource(job_queue, manager)
|
||||
self.workers_route = WorkerRpc(job_queue, manager, loop=loop)
|
||||
|
||||
@property
|
||||
def port(self):
|
||||
return self.manager.port
|
||||
|
||||
@property
|
||||
def host(self):
|
||||
return self.manager.host
|
||||
|
||||
@property
|
||||
def stopped(self):
|
||||
return self._stopped
|
||||
|
||||
@property
|
||||
def is_running(self):
|
||||
return self._is_running
|
||||
|
||||
def run(self):
|
||||
self._is_running = True
|
||||
|
||||
try:
|
||||
log.info("Starting WebServer server")
|
||||
asyncio.set_event_loop(self.loop)
|
||||
self.loop.run_until_complete(self.start_server())
|
||||
|
||||
asyncio.ensure_future(self.check_shutdown(), loop=self.loop)
|
||||
self.loop.run_forever()
|
||||
|
||||
except Exception:
|
||||
log.warning(
|
||||
"Web Server service has failed", exc_info=True
|
||||
)
|
||||
finally:
|
||||
self.loop.close()
|
||||
|
||||
self._is_running = False
|
||||
log.info("Web server stopped")
|
||||
|
||||
async def start_server(self):
|
||||
""" Starts runner and TCPsite """
|
||||
self.runner = web.AppRunner(self.manager.app)
|
||||
await self.runner.setup()
|
||||
self.site = web.TCPSite(self.runner, self.host, self.port)
|
||||
await self.site.start()
|
||||
|
||||
def stop(self):
|
||||
"""Sets _stopped flag to True, 'check_shutdown' shuts server down"""
|
||||
self._stopped = True
|
||||
|
||||
async def check_shutdown(self):
|
||||
""" Future that is running and checks if server should be running
|
||||
periodically.
|
||||
"""
|
||||
while not self._stopped:
|
||||
await asyncio.sleep(0.5)
|
||||
|
||||
print("Starting shutdown")
|
||||
if self.workers_route:
|
||||
await self.workers_route.stop()
|
||||
|
||||
print("Stopping site")
|
||||
await self.site.stop()
|
||||
print("Site stopped")
|
||||
await self.runner.cleanup()
|
||||
|
||||
print("Runner stopped")
|
||||
tasks = [
|
||||
task
|
||||
for task in asyncio.all_tasks()
|
||||
if task is not asyncio.current_task()
|
||||
]
|
||||
list(map(lambda task: task.cancel(), tasks)) # cancel all the tasks
|
||||
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||
log.debug(f'Finished awaiting cancelled tasks, results: {results}...')
|
||||
await self.loop.shutdown_asyncgens()
|
||||
# to really make sure everything else has time to stop
|
||||
await asyncio.sleep(0.07)
|
||||
self.loop.stop()
|
||||
|
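For completeness, the manager above can also be driven directly from Python instead of through `utils.main()`; a minimal hedged sketch (port and host are assumptions):

import time

manager = WebServerManager(port=8079, host="localhost")
manager.start_server()      # runs the aiohttp app in a background thread

time.sleep(60)              # ... do other work while the server runs ...

manager.stop_server()       # sets the stop flag, check_shutdown tears it down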
|
@ -0,0 +1,51 @@
|
|||
import sys
|
||||
import signal
|
||||
import time
|
||||
import socket
|
||||
|
||||
from .server import WebServerManager
|
||||
|
||||
|
||||
class SharedObjects:
|
||||
stopped = False
|
||||
|
||||
@classmethod
|
||||
def stop(cls):
|
||||
cls.stopped = True
|
||||
|
||||
|
||||
def main(port=None, host=None):
|
||||
def signal_handler(sig, frame):
|
||||
print("Signal to kill process received. Termination starts.")
|
||||
SharedObjects.stop()
|
||||
|
||||
signal.signal(signal.SIGINT, signal_handler)
|
||||
signal.signal(signal.SIGTERM, signal_handler)
|
||||
|
||||
port = int(port or 8079)
|
||||
host = str(host or "localhost")
|
||||
|
||||
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as con:
|
||||
result_of_check = con.connect_ex((host, port))
|
||||
|
||||
if result_of_check == 0:
|
||||
print((
|
||||
"Server {}:{} is already running or address is occupied."
|
||||
).format(host, port))
|
||||
return 1
|
||||
|
||||
print("Running server {}:{}".format(host, port))
|
||||
manager = WebServerManager(port, host)
|
||||
manager.start_server()
|
||||
|
||||
stopped = False
|
||||
while manager.is_running:
|
||||
if not stopped and SharedObjects.stopped:
|
||||
stopped = True
|
||||
manager.stop_server()
|
||||
time.sleep(0.1)
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
sys.exit(main())
|
||||
openpype/modules/default_modules/job_queue/job_server/workers.py (new file, 122 lines)
|
|
@ -0,0 +1,122 @@
|
|||
import asyncio
|
||||
from uuid import uuid4
|
||||
from aiohttp import WSCloseCode
|
||||
from aiohttp_json_rpc.protocol import encode_request
|
||||
|
||||
|
||||
class WorkerState:
|
||||
IDLE = object()
|
||||
JOB_ASSIGNED = object()
|
||||
JOB_SENT = object()
|
||||
|
||||
|
||||
class Worker:
|
||||
"""Worker that can handle jobs of specific host."""
|
||||
def __init__(self, host_name, http_request):
|
||||
self._id = None
|
||||
self.host_name = host_name
|
||||
self._http_request = http_request
|
||||
self._state = WorkerState.IDLE
|
||||
self._job = None
|
||||
|
||||
# Give ability to send requests to worker
|
||||
http_request.request_id = str(uuid4())
|
||||
http_request.pending_requests = {}
|
||||
|
||||
async def send_job(self):
|
||||
if self._job is not None:
|
||||
data = {
|
||||
"job_id": self._job.id,
|
||||
"worker_id": self.id,
|
||||
"data": self._job.data
|
||||
}
|
||||
return await self.call("start_job", data)
|
||||
return False
|
||||
|
||||
async def call(self, method, params=None, timeout=None):
|
||||
"""Call method on worker's side."""
|
||||
request_id = self._http_request.request_id
|
||||
self._http_request.request_id = str(uuid4())
|
||||
pending_requests = self._http_request.pending_requests
|
||||
pending_requests[request_id] = asyncio.Future()
|
||||
|
||||
request = encode_request(method, id=request_id, params=params)
|
||||
|
||||
await self._http_request.ws.send_str(request)
|
||||
|
||||
if timeout:
|
||||
await asyncio.wait_for(
|
||||
pending_requests[request_id],
|
||||
timeout=timeout
|
||||
)
|
||||
|
||||
else:
|
||||
await pending_requests[request_id]
|
||||
|
||||
result = pending_requests[request_id].result()
|
||||
del pending_requests[request_id]
|
||||
|
||||
return result
|
||||
|
||||
async def close(self):
|
||||
return await self.ws.close(
|
||||
code=WSCloseCode.GOING_AWAY,
|
||||
message="Server shutdown"
|
||||
)
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
if self._id is None:
|
||||
self._id = str(uuid4())
|
||||
return self._id
|
||||
|
||||
@property
|
||||
def state(self):
|
||||
return self._state
|
||||
|
||||
@property
|
||||
def current_job(self):
|
||||
return self._job
|
||||
|
||||
@property
|
||||
def http_request(self):
|
||||
return self._http_request
|
||||
|
||||
@property
|
||||
def ws(self):
|
||||
return self.http_request.ws
|
||||
|
||||
def connection_is_alive(self):
|
||||
if self.ws.closed or self.ws._writer.transport.is_closing():
|
||||
return False
|
||||
return True
|
||||
|
||||
def is_idle(self):
|
||||
return self._state is WorkerState.IDLE
|
||||
|
||||
def job_assigned(self):
|
||||
return (
|
||||
self._state is WorkerState.JOB_ASSIGNED
|
||||
or self._state is WorkerState.JOB_SENT
|
||||
)
|
||||
|
||||
def is_working(self):
|
||||
return self._state is WorkerState.JOB_SENT
|
||||
|
||||
def set_current_job(self, job):
|
||||
if job is self._job:
|
||||
return
|
||||
|
||||
self._job = job
|
||||
if job is None:
|
||||
self._set_idle()
|
||||
else:
|
||||
self._state = WorkerState.JOB_ASSIGNED
|
||||
job.set_worker(self)
|
||||
|
||||
def _set_idle(self):
|
||||
self._job = None
|
||||
self._state = WorkerState.IDLE
|
||||
|
||||
def set_working(self):
|
||||
self._state = WorkerState.JOB_SENT
|
||||
|
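The worker above is a small state machine (IDLE -> JOB_ASSIGNED -> JOB_SENT -> IDLE). A hedged sketch of the transitions, using simple stand-ins instead of a real aiohttp request and Job instance:

from types import SimpleNamespace

fake_request = SimpleNamespace()                            # stand-in for http_request
fake_job = SimpleNamespace(set_worker=lambda worker: None)  # stand-in for Job

worker = Worker("tvpaint", fake_request)
print(worker.is_idle())        # True  (WorkerState.IDLE)

worker.set_current_job(fake_job)
print(worker.job_assigned())   # True  (WorkerState.JOB_ASSIGNED)

worker.set_working()
print(worker.is_working())     # True  (WorkerState.JOB_SENT)

worker.set_current_job(None)
print(worker.is_idle())        # True again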
|
@ -0,0 +1,124 @@
|
|||
import asyncio
|
||||
|
||||
import aiohttp
|
||||
from aiohttp_json_rpc import JsonRpc
|
||||
from aiohttp_json_rpc.protocol import (
|
||||
encode_error, decode_msg, JsonRpcMsgTyp
|
||||
)
|
||||
from aiohttp_json_rpc.exceptions import RpcError
|
||||
from .workers import Worker
|
||||
|
||||
|
||||
class WorkerRpc(JsonRpc):
|
||||
def __init__(self, job_queue, manager, **kwargs):
|
||||
super().__init__(**kwargs)
|
||||
|
||||
self._job_queue = job_queue
|
||||
self._manager = manager
|
||||
|
||||
self._stopped = False
|
||||
|
||||
# Register methods
|
||||
self.add_methods(
|
||||
("", self.register_worker),
|
||||
("", self.job_done)
|
||||
)
|
||||
asyncio.ensure_future(self._rpc_loop(), loop=self.loop)
|
||||
|
||||
self._manager.add_route(
|
||||
"*", "/ws", self.handle_request
|
||||
)
|
||||
|
||||
# Panel routes for tools
|
||||
async def register_worker(self, request, host_name):
|
||||
worker = Worker(host_name, request.http_request)
|
||||
self._job_queue.add_worker(worker)
|
||||
return worker.id
|
||||
|
||||
async def _rpc_loop(self):
|
||||
while self.loop.is_running():
|
||||
if self._stopped:
|
||||
break
|
||||
|
||||
for worker in tuple(self._job_queue.workers()):
|
||||
if not worker.connection_is_alive():
|
||||
self._job_queue.remove_worker(worker)
|
||||
self._job_queue.assign_jobs()
|
||||
|
||||
await self.send_jobs()
|
||||
await asyncio.sleep(5)
|
||||
|
||||
async def job_done(self, worker_id, job_id, success, message, data):
|
||||
worker = self._job_queue.get_worker(worker_id)
|
||||
if worker is not None:
|
||||
worker.set_current_job(None)
|
||||
|
||||
job = self._job_queue.get_job(job_id)
|
||||
if job is not None:
|
||||
job.set_done(success, message, data)
|
||||
return True
|
||||
|
||||
async def send_jobs(self):
|
||||
invalid_workers = []
|
||||
for worker in self._job_queue.workers():
|
||||
if worker.job_assigned() and not worker.is_working():
|
||||
try:
|
||||
await worker.send_job()
|
||||
|
||||
except ConnectionResetError:
|
||||
invalid_workers.append(worker)
|
||||
|
||||
for worker in invalid_workers:
|
||||
self._job_queue.remove_worker(worker)
|
||||
|
||||
async def handle_websocket_request(self, http_request):
|
||||
"""Overide this method to catch CLOSING messages."""
|
||||
http_request.msg_id = 0
|
||||
http_request.pending = {}
|
||||
|
||||
# prepare and register websocket
|
||||
ws = aiohttp.web_ws.WebSocketResponse()
|
||||
await ws.prepare(http_request)
|
||||
http_request.ws = ws
|
||||
self.clients.append(http_request)
|
||||
|
||||
while not ws.closed:
|
||||
self.logger.debug('waiting for messages')
|
||||
raw_msg = await ws.receive()
|
||||
|
||||
if raw_msg.type == aiohttp.WSMsgType.TEXT:
|
||||
self.logger.debug('raw msg received: %s', raw_msg.data)
|
||||
self.loop.create_task(
|
||||
self._handle_rpc_msg(http_request, raw_msg)
|
||||
)
|
||||
|
||||
elif raw_msg.type == aiohttp.WSMsgType.CLOSING:
|
||||
break
|
||||
|
||||
self.clients.remove(http_request)
|
||||
return ws
|
||||
|
||||
async def _handle_rpc_msg(self, http_request, raw_msg):
|
||||
# This is duplicated code from super, but there is no other way
|
||||
# to be able to handle server->client requests
|
||||
try:
|
||||
_raw_message = raw_msg.data
|
||||
msg = decode_msg(_raw_message)
|
||||
|
||||
except RpcError as error:
|
||||
await self._ws_send_str(http_request, encode_error(error))
|
||||
return
|
||||
|
||||
if msg.type in (JsonRpcMsgTyp.RESULT, JsonRpcMsgTyp.ERROR):
|
||||
request_id = msg.data["id"]
|
||||
if request_id in http_request.pending_requests:
|
||||
future = http_request.pending_requests[request_id]
|
||||
future.set_result(msg.data["result"])
|
||||
return
|
||||
|
||||
return await super()._handle_rpc_msg(http_request, raw_msg)
|
||||
|
||||
async def stop(self):
|
||||
self._stopped = True
|
||||
for worker in tuple(self._job_queue.workers()):
|
||||
await worker.close()
|
||||
|
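What lets the server call methods on the worker over the same websocket is the `pending_requests` map of futures filled in `Worker.call()` and resolved in `_handle_rpc_msg` above. Stripped of aiohttp, the correlation pattern boils down to this:

import asyncio

async def demo():
    pending_requests = {}

    # "call" side: remember a future under the outgoing request id
    request_id = "request-1"
    pending_requests[request_id] = asyncio.get_running_loop().create_future()

    # "response" side: the RESULT message with that id resolves the future
    def on_rpc_result(msg_id, result):
        pending_requests[msg_id].set_result(result)

    asyncio.get_running_loop().call_later(0.1, on_rpc_result, request_id, "worker-id")

    print(await pending_requests.pop(request_id))  # "worker-id"

asyncio.run(demo())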
|
@ -0,0 +1,5 @@
|
|||
from .base_worker import WorkerJobsConnection
|
||||
|
||||
__all__ = (
|
||||
"WorkerJobsConnection",
|
||||
)
|
||||
|
|
@ -0,0 +1,190 @@
|
|||
import sys
|
||||
import datetime
|
||||
import asyncio
|
||||
import traceback
|
||||
|
||||
from aiohttp_json_rpc import JsonRpcClient
|
||||
|
||||
|
||||
class WorkerClient(JsonRpcClient):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
self.add_methods(
|
||||
("", self.start_job),
|
||||
)
|
||||
self.current_job = None
|
||||
self._id = None
|
||||
|
||||
def set_id(self, worker_id):
|
||||
self._id = worker_id
|
||||
|
||||
async def start_job(self, job_data):
|
||||
if self.current_job is not None:
|
||||
return False
|
||||
|
||||
print("Got new job {}".format(str(job_data)))
|
||||
self.current_job = job_data
|
||||
return True
|
||||
|
||||
def finish_job(self, success, message, data):
|
||||
asyncio.ensure_future(
|
||||
self._finish_job(success, message, data),
|
||||
loop=self._loop
|
||||
)
|
||||
|
||||
async def _finish_job(self, success, message, data):
|
||||
print("Current job", self.current_job)
|
||||
job_id = self.current_job["job_id"]
|
||||
self.current_job = None
|
||||
|
||||
return await self.call(
|
||||
"job_done", [self._id, job_id, success, message, data]
|
||||
)
|
||||
|
||||
|
||||
class WorkerJobsConnection:
|
||||
"""WS connection to Job server.
|
||||
|
||||
Helper class to create a connection to process jobs from job server.
|
||||
|
||||
To be able to receive jobs it is necessary to create a connection and then register
|
||||
as a worker for a specific host.
|
||||
"""
|
||||
retry_time_seconds = 5
|
||||
|
||||
def __init__(self, server_url, host_name, loop=None):
|
||||
self.client = None
|
||||
self._loop = loop
|
||||
|
||||
self._host_name = host_name
|
||||
self._server_url = server_url
|
||||
|
||||
self._is_running = False
|
||||
self._connecting = False
|
||||
self._connected = False
|
||||
self._stopped = False
|
||||
|
||||
def stop(self):
|
||||
print("Stopping worker")
|
||||
self._stopped = True
|
||||
|
||||
@property
|
||||
def is_running(self):
|
||||
return self._is_running
|
||||
|
||||
@property
|
||||
def current_job(self):
|
||||
if self.client is not None:
|
||||
return self.client.current_job
|
||||
return None
|
||||
|
||||
def finish_job(self, success=True, message=None, data=None):
|
||||
"""Worker finished job and sets the result which is send to server."""
|
||||
if self.client is None:
|
||||
print((
|
||||
"Couldn't sent job status to server because"
|
||||
" client is not connected."
|
||||
))
|
||||
else:
|
||||
self.client.finish_job(success, message, data)
|
||||
|
||||
async def main_loop(self, register_worker=True):
|
||||
"""Main loop of connection which keep connection to server alive."""
|
||||
self._is_running = True
|
||||
|
||||
while not self._stopped:
|
||||
start_time = datetime.datetime.now()
|
||||
await self._connection_loop(register_worker)
|
||||
delta = datetime.datetime.now() - start_time
|
||||
print("Connection loop took {}s".format(str(delta)))
|
||||
# Check if it was stopped and exit the while loop in that case
|
||||
if self._stopped:
|
||||
break
|
||||
|
||||
if delta.seconds < 60:
|
||||
print((
|
||||
"Can't connect to server will try in {} seconds."
|
||||
).format(self.retry_time_seconds))
|
||||
|
||||
await asyncio.sleep(self.retry_time_seconds)
|
||||
self._is_running = False
|
||||
|
||||
async def _connect(self):
|
||||
self.client = WorkerClient()
|
||||
print("Connecting to {}".format(self._server_url))
|
||||
try:
|
||||
await self.client.connect_url(self._server_url)
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except Exception:
|
||||
traceback.print_exception(*sys.exc_info())
|
||||
|
||||
async def _connection_loop(self, register_worker):
|
||||
self._connecting = True
|
||||
future = asyncio.run_coroutine_threadsafe(
|
||||
self._connect(), loop=self._loop
|
||||
)
|
||||
|
||||
while self._connecting:
|
||||
if not future.done():
|
||||
await asyncio.sleep(0.07)
|
||||
continue
|
||||
|
||||
session = getattr(self.client, "_session", None)
|
||||
ws = getattr(self.client, "_ws", None)
|
||||
if session is not None:
|
||||
if session.closed:
|
||||
self._connecting = False
|
||||
self._connected = False
|
||||
break
|
||||
|
||||
elif ws is not None:
|
||||
self._connecting = False
|
||||
self._connected = True
|
||||
|
||||
if self._stopped:
|
||||
break
|
||||
|
||||
await asyncio.sleep(0.07)
|
||||
|
||||
if not self._connected:
|
||||
self.client = None
|
||||
return
|
||||
|
||||
print("Connected to job queue server")
|
||||
if register_worker:
|
||||
self.register_as_worker()
|
||||
|
||||
while self._connected and self._loop.is_running():
|
||||
if self._stopped or ws.closed:
|
||||
break
|
||||
|
||||
await asyncio.sleep(0.3)
|
||||
|
||||
await self._stop_cleanup()
|
||||
|
||||
def register_as_worker(self):
|
||||
"""Register as worker ready to work on server side."""
|
||||
asyncio.ensure_future(self._register_as_worker(), loop=self._loop)
|
||||
|
||||
async def _register_as_worker(self):
|
||||
worker_id = await self.client.call(
|
||||
"register_worker", [self._host_name]
|
||||
)
|
||||
self.client.set_id(worker_id)
|
||||
print(
|
||||
"Registered as worker with id {}".format(worker_id)
|
||||
)
|
||||
|
||||
async def disconnect(self):
|
||||
await self._stop_cleanup()
|
||||
|
||||
async def _stop_cleanup(self):
|
||||
print("Cleanup after stop")
|
||||
if self.client is not None and hasattr(self.client, "_ws"):
|
||||
await self.client.disconnect()
|
||||
|
||||
self.client = None
|
||||
self._connecting = False
|
||||
self._connected = False
|
||||
openpype/modules/default_modules/job_queue/module.py (new file, 241 lines)
|
|
@ -0,0 +1,241 @@
|
|||
"""Job queue OpenPype module was created for remote execution of commands.
|
||||
|
||||
## Why is needed
|
||||
Primarily created for hosts which are not easilly controlled from command line
|
||||
or in headless mode and is easier to keep one process of host running listening
|
||||
for jobs to do.
|
||||
|
||||
### Example
|
||||
One of examples is TVPaint which does not have headless mode, can run only one
|
||||
process at one time and it's impossible to know what should be executed inside
|
||||
TVPaint before we know all data about the file that should be processed.
|
||||
|
||||
## Idea
|
||||
Idea is that there is a server, workers and workstation/s which need to process
|
||||
something on a worker.
|
||||
|
||||
Workers and workstation/s must have access to server through adress to it's
|
||||
running instance. Workers use WebSockets and workstations are using HTTP calls.
|
||||
Also both of them must have access to job queue root which is set in
|
||||
settings. Root is used as temp where files needed for job can be stored before
|
||||
sending the job or where result files are stored when job is done.
|
||||
|
||||
Server's address must be set in settings when is running so workers and
|
||||
workstations know where to send or receive jobs.
|
||||
|
||||
## Command line commands
|
||||
### start_server
|
||||
- start server which is handles jobs
|
||||
- it is possible to specify port and host address (default is localhost:8079)
|
||||
|
||||
### start_worker
|
||||
- start worker which will process jobs
|
||||
- has required possitional argument which is application name from OpenPype
|
||||
settings e.g. 'tvpaint/11-5' ('tvpaint' is group '11-5' is variant)
|
||||
- it is possible to specify server url but url from settings is used when not
|
||||
passed (this is added mainly for developing purposes)
|
||||
"""
|
||||
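As a hedged illustration of the workstation side described in the docstring (the server and a matching worker are assumed to be already running, the payload keys are placeholders, and `job_queue_module` stands for the initialized JobQueueModule instance obtained from OpenPype's modules manager):

import time

job_id = job_queue_module.send_job(
    "tvpaint", {"workfile": "/path/to/scene.tvpp"}
)

status = job_queue_module.get_job_status(job_id)
while status.get("state") not in ("done", "error", "deleted"):
    time.sleep(1)
    status = job_queue_module.get_job_status(job_id)
print(status.get("result"))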
|
||||
import sys
|
||||
import json
|
||||
import copy
|
||||
import platform
|
||||
|
||||
import click
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype.api import get_system_settings
|
||||
|
||||
|
||||
class JobQueueModule(OpenPypeModule):
|
||||
name = "job_queue"
|
||||
|
||||
def initialize(self, modules_settings):
|
||||
server_url = modules_settings.get("server_url") or ""
|
||||
|
||||
self._server_url = self.url_conversion(server_url)
|
||||
jobs_root_mapping = self._roots_mapping_conversion(
|
||||
modules_settings.get("jobs_root")
|
||||
)
|
||||
|
||||
self._jobs_root_mapping = jobs_root_mapping
|
||||
|
||||
# Is always enabled
|
||||
# - the module does nothing until it is used
|
||||
self.enabled = True
|
||||
|
||||
@classmethod
|
||||
def _root_conversion(cls, root_path):
|
||||
"""Make sure root path does not end with slash."""
|
||||
# Return empty string if path is invalid
|
||||
if not root_path:
|
||||
return ""
|
||||
|
||||
# Remove trailing slashes
|
||||
while root_path.endswith("/") or root_path.endswith("\\"):
|
||||
root_path = root_path[:-1]
|
||||
return root_path
|
||||
|
||||
@classmethod
|
||||
def _roots_mapping_conversion(cls, roots_mapping):
|
||||
roots_mapping = roots_mapping or {}
|
||||
for platform_name in ("windows", "linux", "darwin"):
|
||||
roots_mapping[platform_name] = cls._root_conversion(
|
||||
roots_mapping.get(platform_name)
|
||||
)
|
||||
return roots_mapping
|
||||
|
||||
@staticmethod
|
||||
def url_conversion(url, ws=False):
|
||||
if sys.version_info[0] == 2:
|
||||
from urlparse import urlsplit, urlunsplit
|
||||
else:
|
||||
from urllib.parse import urlsplit, urlunsplit
|
||||
|
||||
if not url:
|
||||
return url
|
||||
|
||||
url_parts = list(urlsplit(url))
|
||||
scheme = url_parts[0]
|
||||
if not scheme:
|
||||
if ws:
|
||||
url = "ws://{}".format(url)
|
||||
else:
|
||||
url = "http://{}".format(url)
|
||||
url_parts = list(urlsplit(url))
|
||||
|
||||
elif ws:
|
||||
if scheme not in ("ws", "wss"):
|
||||
if scheme == "https":
|
||||
url_parts[0] = "wss"
|
||||
else:
|
||||
url_parts[0] = "ws"
|
||||
|
||||
elif scheme not in ("http", "https"):
|
||||
if scheme == "wss":
|
||||
url_parts[0] = "https"
|
||||
else:
|
||||
url_parts[0] = "http"
|
||||
|
||||
return urlunsplit(url_parts)
|
||||
|
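A quick illustration of what `url_conversion` produces, with the expected values reasoned from the branches above (the hostnames are hypothetical):

print(JobQueueModule.url_conversion("http://localhost:8079"))
# http://localhost:8079  (kept as is)
print(JobQueueModule.url_conversion("http://localhost:8079", ws=True))
# ws://localhost:8079
print(JobQueueModule.url_conversion("https://jobs.example.com", ws=True))
# wss://jobs.example.com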
||||
def get_jobs_root_mapping(self):
|
||||
return copy.deepcopy(self._jobs_root_mapping)
|
||||
|
||||
def get_jobs_root(self):
|
||||
return self._jobs_root_mapping.get(platform.system().lower())
|
||||
|
||||
@classmethod
|
||||
def get_jobs_root_from_settings(cls):
|
||||
module_settings = get_system_settings()["modules"]
|
||||
jobs_root_mapping = module_settings.get(cls.name, {}).get("jobs_root")
|
||||
converted_mapping = cls._roots_mapping_conversion(jobs_root_mapping)
|
||||
|
||||
return converted_mapping[platform.system().lower()]
|
||||
|
||||
@property
|
||||
def server_url(self):
|
||||
return self._server_url
|
||||
|
||||
def send_job(self, host_name, job_data):
|
||||
import requests
|
||||
|
||||
job_data = job_data or {}
|
||||
job_data["host_name"] = host_name
|
||||
api_path = "{}/api/jobs".format(self._server_url)
|
||||
post_request = requests.post(api_path, data=json.dumps(job_data))
|
||||
return str(post_request.content.decode())
|
||||
|
||||
def get_job_status(self, job_id):
|
||||
import requests
|
||||
|
||||
api_path = "{}/api/jobs/{}".format(self._server_url, job_id)
|
||||
return requests.get(api_path).json()
|
||||
|
||||
def cli(self, click_group):
|
||||
click_group.add_command(cli_main)
|
||||
|
||||
@classmethod
|
||||
def get_server_url_from_settings(cls):
|
||||
module_settings = get_system_settings()["modules"]
|
||||
return cls.url_conversion(
|
||||
module_settings
|
||||
.get(cls.name, {})
|
||||
.get("server_url")
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def start_server(cls, port=None, host=None):
|
||||
from .job_server import main
|
||||
|
||||
return main(port, host)
|
||||
|
||||
@classmethod
|
||||
def start_worker(cls, app_name, server_url=None):
|
||||
import requests
|
||||
from openpype.lib import ApplicationManager
|
||||
|
||||
if not server_url:
|
||||
server_url = cls.get_server_url_from_settings()
|
||||
|
||||
if not server_url:
|
||||
raise ValueError("Server url is not set.")
|
||||
|
||||
http_server_url = cls.url_conversion(server_url)
|
||||
|
||||
# Validate url
|
||||
requests.get(http_server_url)
|
||||
|
||||
ws_server_url = cls.url_conversion(server_url) + "/ws"
|
||||
|
||||
app_manager = ApplicationManager()
|
||||
app = app_manager.applications.get(app_name)
|
||||
if app is None:
|
||||
raise ValueError(
|
||||
"Didn't find application \"{}\" in settings.".format(app_name)
|
||||
)
|
||||
|
||||
if app.host_name == "tvpaint":
|
||||
return cls._start_tvpaint_worker(app, ws_server_url)
|
||||
raise ValueError("Unknown host \"{}\"".format(app.host_name))
|
||||
|
||||
@classmethod
|
||||
def _start_tvpaint_worker(cls, app, server_url):
|
||||
from openpype.hosts.tvpaint.worker import main
|
||||
|
||||
executable = app.find_executable()
|
||||
if not executable:
|
||||
raise ValueError((
|
||||
"Executable for app \"{}\" is not set"
|
||||
" or accessible on this workstation."
|
||||
).format(app.full_name))
|
||||
|
||||
return main(str(executable), server_url)
|
||||
|
||||
|
||||
@click.group(
|
||||
JobQueueModule.name,
|
||||
help="Application job server. Can be used as render farm."
|
||||
)
|
||||
def cli_main():
|
||||
pass
|
||||
|
||||
|
||||
@cli_main.command(
|
||||
"start_server",
|
||||
help="Start server handling workers and their jobs."
|
||||
)
|
||||
@click.option("--port", help="Server port")
|
||||
@click.option("--host", help="Server host (ip address)")
|
||||
def cli_start_server(port, host):
|
||||
JobQueueModule.start_server(port, host)
|
||||
|
||||
|
||||
@cli_main.command(
|
||||
"start_worker", help=(
|
||||
"Start a worker for a specific application. (e.g. \"tvpaint/11.5\")"
|
||||
)
|
||||
)
|
||||
@click.argument("app_name")
|
||||
@click.option("--server_url", help="Server url which handle workers and jobs.")
|
||||
def cli_start_worker(app_name, server_url):
|
||||
JobQueueModule.start_worker(app_name, server_url)
|
||||
|
|
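A minimal usage sketch (not part of this commit) of the HTTP endpoints the methods above talk to; the server URL, the extra payload keys, and the assumption that the POST body returns the job id are all illustrative, only "host_name" is guaranteed by send_job itself:

    # Hypothetical example: a job queue server is assumed to run at this URL.
    import json
    import requests

    server_url = "http://localhost:8079"

    # Mirrors JobQueueModule.send_job: POST a job payload for a given host.
    job_data = {"host_name": "tvpaint", "data": {}}  # payload shape is made up
    job_id = requests.post(
        "{}/api/jobs".format(server_url), data=json.dumps(job_data)
    ).content.decode()

    # Mirrors JobQueueModule.get_job_status: GET the job state by its id.
    print(requests.get("{}/api/jobs/{}".format(server_url, job_id)).json())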
@@ -176,6 +176,7 @@ class PythonCodeEditor(QtWidgets.QPlainTextEdit):


class PythonTabWidget(QtWidgets.QWidget):
    add_tab_requested = QtCore.Signal()
    before_execute = QtCore.Signal(str)

    def __init__(self, parent):

@@ -185,11 +186,15 @@ class PythonTabWidget(QtWidgets.QWidget):
        self.setFocusProxy(code_input)

        add_tab_btn = QtWidgets.QPushButton("Add tab...", self)
        add_tab_btn.setToolTip("Add new tab")

        execute_btn = QtWidgets.QPushButton("Execute", self)
        execute_btn.setToolTip("Execute command (Ctrl + Enter)")

        btns_layout = QtWidgets.QHBoxLayout()
        btns_layout.setContentsMargins(0, 0, 0, 0)
        btns_layout.addWidget(add_tab_btn)
        btns_layout.addStretch(1)
        btns_layout.addWidget(execute_btn)

@@ -198,12 +203,16 @@ class PythonTabWidget(QtWidgets.QWidget):
        layout.addWidget(code_input, 1)
        layout.addLayout(btns_layout, 0)

        add_tab_btn.clicked.connect(self._on_add_tab_clicked)
        execute_btn.clicked.connect(self._on_execute_clicked)
        code_input.execute_requested.connect(self.execute)

        self._code_input = code_input
        self._interpreter = InteractiveInterpreter()

    def _on_add_tab_clicked(self):
        self.add_tab_requested.emit()

    def _on_execute_clicked(self):
        self.execute()

@@ -352,9 +361,6 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
        tab_widget.setTabsClosable(False)
        tab_widget.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)

        add_tab_btn = QtWidgets.QPushButton("+", tab_widget)
        tab_widget.setCornerWidget(add_tab_btn, QtCore.Qt.TopLeftCorner)

        widgets_splitter = QtWidgets.QSplitter(self)
        widgets_splitter.setOrientation(QtCore.Qt.Vertical)
        widgets_splitter.addWidget(output_widget)

@@ -371,14 +377,12 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
        line_check_timer.setInterval(200)

        line_check_timer.timeout.connect(self._on_timer_timeout)
        add_tab_btn.clicked.connect(self._on_add_clicked)
        tab_bar.right_clicked.connect(self._on_tab_right_click)
        tab_bar.double_clicked.connect(self._on_tab_double_click)
        tab_bar.mid_clicked.connect(self._on_tab_mid_click)
        tab_widget.tabCloseRequested.connect(self._on_tab_close_req)

        self._widgets_splitter = widgets_splitter
        self._add_tab_btn = add_tab_btn
        self._output_widget = output_widget
        self._tab_widget = tab_widget
        self._line_check_timer = line_check_timer

@@ -459,14 +463,41 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
            return

        menu = QtWidgets.QMenu(self._tab_widget)
        menu.addAction("Rename")

        add_tab_action = QtWidgets.QAction("Add tab...", menu)
        add_tab_action.setToolTip("Add new tab")

        rename_tab_action = QtWidgets.QAction("Rename...", menu)
        rename_tab_action.setToolTip("Rename tab")

        duplicate_tab_action = QtWidgets.QAction("Duplicate...", menu)
        duplicate_tab_action.setToolTip("Duplicate code to new tab")

        close_tab_action = QtWidgets.QAction("Close", menu)
        close_tab_action.setToolTip("Close tab and lose content")
        close_tab_action.setEnabled(self._tab_widget.tabsClosable())

        menu.addAction(add_tab_action)
        menu.addAction(rename_tab_action)
        menu.addAction(duplicate_tab_action)
        menu.addAction(close_tab_action)

        result = menu.exec_(global_point)
        if result is None:
            return

        if result.text() == "Rename":
        if result is rename_tab_action:
            self._rename_tab_req(tab_idx)

        elif result is add_tab_action:
            self._on_add_requested()

        elif result is duplicate_tab_action:
            self._duplicate_requested(tab_idx)

        elif result is close_tab_action:
            self._on_tab_close_req(tab_idx)

    def _rename_tab_req(self, tab_idx):
        dialog = TabNameDialog(self)
        dialog.set_tab_name(self._tab_widget.tabText(tab_idx))

@@ -475,6 +506,16 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
        if tab_name:
            self._tab_widget.setTabText(tab_idx, tab_name)

    def _duplicate_requested(self, tab_idx=None):
        if tab_idx is None:
            tab_idx = self._tab_widget.currentIndex()

        src_widget = self._tab_widget.widget(tab_idx)
        dst_widget = self._add_tab()
        if dst_widget is None:
            return
        dst_widget.set_code(src_widget.get_code())

    def _on_tab_mid_click(self, global_point):
        point = self._tab_widget.mapFromGlobal(global_point)
        tab_bar = self._tab_widget.tabBar()

@@ -525,12 +566,17 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
            lines.append(self.ansi_escape.sub("", line))
        self._append_lines(lines)

    def _on_add_clicked(self):
    def _on_add_requested(self):
        self._add_tab()

    def _add_tab(self):
        dialog = TabNameDialog(self)
        dialog.exec_()
        tab_name = dialog.result()
        if tab_name:
            self.add_tab(tab_name)
            return self.add_tab(tab_name)

        return None

    def _on_before_execute(self, code_text):
        at_max = self._output_widget.vertical_scroll_at_max()

@@ -562,6 +608,7 @@ class PythonInterpreterWidget(QtWidgets.QWidget):
    def add_tab(self, tab_name, index=None):
        widget = PythonTabWidget(self)
        widget.before_execute.connect(self._on_before_execute)
        widget.add_tab_requested.connect(self._on_add_requested)
        if index is None:
            if self._tab_widget.count() > 0:
                index = self._tab_widget.currentIndex() + 1
@@ -148,12 +148,27 @@ class OpenPypeContextSelector:
        for k, v in env.items():
            print(" {}: {}".format(k, v))

        publishing_paths = [os.path.join(self.job.imageDir,
                                         os.path.dirname(
                                             self.job.imageFileName))]

        # add additional channels
        channel_idx = 0
        channel = self.job.channelFileName(channel_idx)
        while channel:
            channel_path = os.path.dirname(
                os.path.join(self.job.imageDir, channel))
            if channel_path not in publishing_paths:
                publishing_paths.append(channel_path)
            channel_idx += 1
            channel = self.job.channelFileName(channel_idx)

        args = [os.path.join(self.openpype_root, self.openpype_executable),
                'publish', '-t', "rr_control", "--gui",
                os.path.join(self.job.imageDir,
                             os.path.dirname(self.job.imageFileName))
                'publish', '-t', "rr_control", "--gui"
                ]

        args += publishing_paths

        print(">>> running {}".format(" ".join(args)))
        orig = os.environ.copy()
        orig.update(env)
@@ -192,7 +192,7 @@ class SFTPHandler(AbstractProvider):
            Format is importing for usage of python's format ** approach
        """
        # roots cannot be locally overridden
        return self.presets['roots']
        return self.presets['root']

    def get_tree(self):
        """

@@ -421,7 +421,8 @@ class SFTPHandler(AbstractProvider):

        try:
            return pysftp.Connection(**conn_params)
        except paramiko.ssh_exception.SSHException:
        except (paramiko.ssh_exception.SSHException,
                pysftp.exceptions.ConnectionException):
            log.warning("Couldn't connect", exc_info=True)

    def _mark_progress(self, collection, file, representation, server, site,
@@ -1574,6 +1574,7 @@ class SyncServerModule(OpenPypeModule, ITrayModule):

        Use 'force' to remove existing or raises ValueError
        """
        reseted_existing = False
        for repre_file in representation.pop().get("files"):
            if file_id and file_id != repre_file["_id"]:
                continue

@@ -1584,12 +1585,15 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
                    self._reset_site_for_file(collection, query,
                                              elem, repre_file["_id"],
                                              site_name)
                    return
                    reseted_existing = True
                else:
                    msg = "Site {} already present".format(site_name)
                    log.info(msg)
                    raise ValueError(msg)

        if reseted_existing:
            return

        if not file_id:
            update = {
                "$push": {"files.$[].sites": elem}
@@ -1,3 +1,6 @@
from .constants import (
    SUBSET_NAME_ALLOWED_SYMBOLS
)
from .creator_plugins import (
    CreatorError,

@@ -13,6 +16,8 @@ from .context import (


__all__ = (
    "SUBSET_NAME_ALLOWED_SYMBOLS",

    "CreatorError",

    "BaseCreator",

openpype/pipeline/create/constants.py (new file, 6 lines)
@@ -0,0 +1,6 @@
SUBSET_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_."


__all__ = (
    "SUBSET_NAME_ALLOWED_SYMBOLS",
)
@@ -54,6 +54,12 @@ class CollectAnatomyContextData(pyblish.api.ContextPlugin):
        if hierarchy_items:
            hierarchy = os.path.join(*hierarchy_items)

        asset_tasks = asset_entity["data"]["tasks"]
        task_type = asset_tasks.get(task_name, {}).get("type")

        project_task_types = project_entity["config"]["tasks"]
        task_code = project_task_types.get(task_type, {}).get("short_name")

        context_data = {
            "project": {
                "name": project_entity["name"],

@@ -61,7 +67,11 @@ class CollectAnatomyContextData(pyblish.api.ContextPlugin):
            },
            "asset": asset_entity["name"],
            "hierarchy": hierarchy.replace("\\", "/"),
            "task": task_name,
            "task": {
                "name": task_name,
                "type": task_type,
                "short": task_code,
            },
            "username": context.data["user"],
            "app": context.data["hostName"]
        }
@@ -214,6 +214,8 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
        project_doc = context.data["projectEntity"]
        context_asset_doc = context.data["assetEntity"]

        project_task_types = project_doc["config"]["tasks"]

        for instance in context:
            if self.follow_workfile_version:
                version_number = context.data('version')

@@ -245,7 +247,18 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
            # Task
            task_name = instance.data.get("task")
            if task_name:
                anatomy_updates["task"] = task_name
                asset_tasks = asset_doc["data"]["tasks"]
                task_type = asset_tasks.get(task_name, {}).get("type")
                task_code = (
                    project_task_types
                    .get(task_type, {})
                    .get("short_name")
                )
                anatomy_updates["task"] = {
                    "name": task_name,
                    "type": task_type,
                    "short": task_code
                }

            # Additional data
            resolution_width = instance.data.get("resolutionWidth")
@@ -7,7 +7,7 @@ import pyblish.api
class CollectModules(pyblish.api.ContextPlugin):
    """Collect OpenPype modules."""

    order = pyblish.api.CollectorOrder
    order = pyblish.api.CollectorOrder - 0.45
    label = "OpenPype Modules"

    def process(self, context):
openpype/plugins/publish/collect_scene_loaded_versions.py (new file, 55 lines)
@@ -0,0 +1,55 @@

import pyblish.api
from avalon import api, io


class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):

    order = pyblish.api.CollectorOrder + 0.0001
    label = "Collect Versions Loaded in Scene"
    hosts = [
        "aftereffects",
        "blender",
        "celaction",
        "fusion",
        "harmony",
        "hiero",
        "houdini",
        "maya",
        "nuke",
        "photoshop",
        "resolve",
        "tvpaint"
    ]

    def process(self, context):
        host = api.registered_host()
        if host is None:
            self.log.warn("No registered host.")
            return

        if not hasattr(host, "ls"):
            host_name = host.__name__
            self.log.warn("Host %r doesn't have ls() implemented." % host_name)
            return

        loaded_versions = []
        _containers = list(host.ls())
        _repr_ids = [io.ObjectId(c["representation"]) for c in _containers]
        version_by_repr = {
            str(doc["_id"]): doc["parent"] for doc in
            io.find({"_id": {"$in": _repr_ids}}, projection={"parent": 1})
        }

        for con in _containers:
            # NOTE:
            # may have more then one representation that are same version
            version = {
                "objectName": con["objectName"],  # container node name
                "subsetName": con["name"],
                "representation": io.ObjectId(con["representation"]),
                "version": version_by_repr[con["representation"]],  # _id
            }
            loaded_versions.append(version)

        context.data["loadedVersions"] = loaded_versions
@@ -10,7 +10,7 @@ class CollectSceneVersion(pyblish.api.ContextPlugin):
    """

    order = pyblish.api.CollectorOrder
    label = 'Collect Version'
    label = 'Collect Scene Version'
    hosts = [
        "aftereffects",
        "blender",
@@ -110,6 +110,9 @@ class ExtractBurnin(openpype.api.Extractor):
            ).format(host_name, family, task_name))
            return

        self.log.debug("profile: {}".format(
            profile))

        # Pre-filter burnin definitions by instance families
        burnin_defs = self.filter_burnins_defs(profile, instance)
        if not burnin_defs:

@@ -126,18 +129,41 @@ class ExtractBurnin(openpype.api.Extractor):

        anatomy = instance.context.data["anatomy"]
        scriptpath = self.burnin_script_path()

        # Executable args that will execute the script
        # [pype executable, *pype script, "run"]
        executable_args = get_pype_execute_args("run", scriptpath)

        for idx, repre in enumerate(tuple(instance.data["representations"])):
            self.log.debug("repre ({}): `{}`".format(idx + 1, repre["name"]))

            repre_burnin_links = repre.get("burnins", [])

            if not self.repres_is_valid(repre):
                continue

            self.log.debug("repre_burnin_links: {}".format(
                repre_burnin_links))

            self.log.debug("burnin_defs.keys(): {}".format(
                burnin_defs.keys()))

            # Filter output definition by `burnin` represetation key
            repre_linked_burnins = {
                name: output for name, output in burnin_defs.items()
                if name in repre_burnin_links
            }
            self.log.debug("repre_linked_burnins: {}".format(
                repre_linked_burnins))

            # if any match then replace burnin defs and follow tag filtering
            _burnin_defs = copy.deepcopy(burnin_defs)
            if repre_linked_burnins:
                _burnin_defs = repre_linked_burnins

            # Filter output definition by representation tags (optional)
            repre_burnin_defs = self.filter_burnins_by_tags(
                burnin_defs, repre["tags"]
                _burnin_defs, repre["tags"]
            )
            if not repre_burnin_defs:
                self.log.info((

@@ -184,7 +210,9 @@ class ExtractBurnin(openpype.api.Extractor):
            for key in self.positions:
                value = burnin_def.get(key)
                if value:
                    burnin_values[key] = value
                    burnin_values[key] = value.replace(
                        "{task}", "{task[name]}"
                    )

            # Remove "delete" tag from new representation
            if "delete" in new_repre["tags"]:

@@ -281,6 +309,8 @@ class ExtractBurnin(openpype.api.Extractor):
            # NOTE we maybe can keep source representation if necessary
            instance.data["representations"].remove(repre)

        self.log.debug("Files to delete: {}".format(files_to_delete))

        # Delete input files
        for filepath in files_to_delete:
            if os.path.exists(filepath):
@@ -180,6 +180,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
            if "tags" not in output_def:
                output_def["tags"] = []

            if "burnins" not in output_def:
                output_def["burnins"] = []

            # Create copy of representation
            new_repre = copy.deepcopy(repre)

@@ -192,8 +195,20 @@ class ExtractReview(pyblish.api.InstancePlugin):
                if tag not in new_repre["tags"]:
                    new_repre["tags"].append(tag)

            # Add burnin link from output definition to representation
            for burnin in output_def["burnins"]:
                if burnin not in new_repre.get("burnins", []):
                    if not new_repre.get("burnins"):
                        new_repre["burnins"] = []
                    new_repre["burnins"].append(str(burnin))

            self.log.debug(
                "New representation tags: `{}`".format(new_repre["tags"])
                "Linked burnins: `{}`".format(new_repre.get("burnins"))
            )

            self.log.debug(
                "New representation tags: `{}`".format(
                    new_repre.get("tags"))
            )

            temp_data = self.prepare_temp_data(

@@ -232,12 +247,16 @@ class ExtractReview(pyblish.api.InstancePlugin):
                for f in files_to_clean:
                    os.unlink(f)

            output_name = output_def["filename_suffix"]
            output_name = new_repre.get("outputName", "")
            output_ext = new_repre["ext"]
            if output_name:
                output_name += "_"
            output_name += output_def["filename_suffix"]
            if temp_data["without_handles"]:
                output_name += "_noHandles"

            new_repre.update({
                "name": output_def["filename_suffix"],
                "name": "{}_{}".format(output_name, output_ext),
                "outputName": output_name,
                "outputDef": output_def,
                "frameStartFtrack": temp_data["output_frame_start"],
openpype/plugins/publish/integrate_inputlinks.py (new file, 130 lines)
@@ -0,0 +1,130 @@

from collections import OrderedDict
from avalon import io
import pyblish.api


class IntegrateInputLinks(pyblish.api.ContextPlugin):
    """Connecting version level dependency links"""

    order = pyblish.api.IntegratorOrder + 0.2
    label = "Connect Dependency InputLinks"

    def process(self, context):
        """Connect dependency links for all instances, globally

        Code steps:
        * filter out instances that has "versionEntity" entry in data
        * find workfile instance within context
        * if workfile found:
            - link all `loadedVersions` as input of the workfile
            - link workfile as input of all publishing instances
        * else:
            - show "no workfile" warning
        * link instances' inputs if it's data has "inputVersions" entry
        * Write into database

        inputVersions:
            The "inputVersions" in instance.data should be a list of
            version document's Id (str or ObjectId), which are the
            dependencies of the publishing instance that should be
            extracted from working scene by the DCC specific publish
            plugin.

        """
        workfile = None
        publishing = []

        for instance in context:
            if not instance.data.get("publish", True):
                # Skip inactive instances
                continue

            version_doc = instance.data.get("versionEntity")
            if not version_doc:
                self.log.debug("Instance %s doesn't have version." % instance)
                continue

            version_data = version_doc.get("data", {})
            families = version_data.get("families", [])

            if "workfile" in families:
                workfile = instance
            else:
                publishing.append(instance)

        if workfile is None:
            self.log.warn("No workfile in this publish session.")
        else:
            workfile_version_doc = workfile.data["versionEntity"]
            # link all loaded versions in scene into workfile
            for version in context.data.get("loadedVersions", []):
                self.add_link(
                    link_type="reference",
                    input_id=version["version"],
                    version_doc=workfile_version_doc,
                )
            # link workfile to all publishing versions
            for instance in publishing:
                self.add_link(
                    link_type="generative",
                    input_id=workfile_version_doc["_id"],
                    version_doc=instance.data["versionEntity"],
                )

        # link versions as dependencies to the instance
        for instance in publishing:
            for input_version in instance.data.get("inputVersions") or []:
                self.add_link(
                    link_type="generative",
                    input_id=input_version,
                    version_doc=instance.data["versionEntity"],
                )

        publishing.append(workfile)
        self.write_links_to_database(publishing)

    def add_link(self, link_type, input_id, version_doc):
        """Add dependency link data into version document

        Args:
            link_type (str): Type of link, one of 'reference' or 'generative'
            input_id (str or ObjectId): Document Id of input version
            version_doc (dict): The version document that takes the input

        Returns:
            None

        """
        # NOTE:
        # using OrderedDict() here is just for ensuring field order between
        # python versions, if we ever need to use mongodb operation '$addToSet'
        # to update and avoid duplicating elements in 'inputLinks' array in the
        # future.
        link = OrderedDict()
        link["type"] = link_type
        link["input"] = io.ObjectId(input_id)
        link["linkedBy"] = "publish"

        if "inputLinks" not in version_doc["data"]:
            version_doc["data"]["inputLinks"] = []
        version_doc["data"]["inputLinks"].append(link)

    def write_links_to_database(self, instances):
        """Iter instances in context to update database

        If `versionEntity.data.inputLinks` not None in `instance.data`, doc
        in database will be updated.

        """
        for instance in instances:
            version_doc = instance.data.get("versionEntity")
            if version_doc is None:
                continue

            input_links = version_doc["data"].get("inputLinks")
            if input_links is None:
                continue

            io.update_one({"_id": version_doc["_id"]},
                          {"$set": {"data.inputLinks": input_links}})
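As a rough illustration (not part of this commit), a version document touched by the plugin above ends up carrying dependency links shaped roughly like this; the ids are placeholders:

    # Hypothetical document fragment written by write_links_to_database().
    version_doc = {
        "_id": "<version id>",
        "data": {
            "inputLinks": [
                {
                    # "reference" for loaded versions linked to the workfile,
                    # "generative" for workfile/instance dependency links.
                    "type": "reference",
                    "input": "<input version id>",  # stored as ObjectId in MongoDB
                    "linkedBy": "publish",
                },
            ],
        },
    }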
@@ -172,21 +172,26 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        anatomy_data["hierarchy"] = hierarchy

        # Make sure task name in anatomy data is same as on instance.data
        task_name = instance.data.get("task")
        if task_name:
            anatomy_data["task"] = task_name
        else:
            # Just set 'task_name' variable to context task
            task_name = anatomy_data["task"]

        # Find task type for current task name
        # - this should be already prepared on instance
        asset_tasks = (
            asset_entity.get("data", {}).get("tasks")
        ) or {}
        task_info = asset_tasks.get(task_name) or {}
        task_type = task_info.get("type")
        instance.data["task_type"] = task_type
        task_name = instance.data.get("task")
        if task_name:
            task_info = asset_tasks.get(task_name) or {}
            task_type = task_info.get("type")

            project_task_types = project_entity["config"]["tasks"]
            task_code = project_task_types.get(task_type, {}).get("short_name")
            anatomy_data["task"] = {
                "name": task_name,
                "type": task_type,
                "short": task_code
            }

        else:
            # Just set 'task_name' variable to context task
            task_name = anatomy_data["task"]["name"]
            task_type = anatomy_data["task"]["type"]

        # Fill family in anatomy data
        anatomy_data["family"] = instance.data.get("family")

@@ -804,11 +809,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
        # - is there a chance that task name is not filled in anatomy
        # data?
        # - should we use context task in that case?
        task_name = (
            instance.data["anatomyData"]["task"]
            or io.Session["AVALON_TASK"]
        )
        task_type = instance.data["task_type"]
        task_name = instance.data["anatomyData"]["task"]["name"]
        task_type = instance.data["anatomyData"]["task"]["type"]
        filtering_criteria = {
            "families": instance.data["family"],
            "hosts": instance.context.data["hostName"],

@@ -1069,10 +1071,12 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
                already_attached_sites[meta["name"]] = meta["created_dt"]

            if sync_project_presets and sync_project_presets["enabled"]:
                # add remote
                meta = {"name": remote_site.strip()}
                rec["sites"].append(meta)
                already_attached_sites[meta["name"]] = None
                if remote_site and \
                        remote_site not in already_attached_sites.keys():
                    # add remote
                    meta = {"name": remote_site.strip()}
                    rec["sites"].append(meta)
                    already_attached_sites[meta["name"]] = None

                # add skeleton for site where it should be always synced to
                for always_on_site in always_accesible:

@@ -1100,8 +1104,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
            local_site = local_site_id

        remote_site = sync_project_presets["config"].get("remote_site")
        if remote_site == local_site:
            remote_site = None

        if remote_site == 'local':
            remote_site = local_site_id
@@ -12,6 +12,12 @@ class ValidateEditorialAssetName(pyblish.api.ContextPlugin):

    order = pyblish.api.ValidatorOrder
    label = "Validate Editorial Asset Name"
    hosts = [
        "hiero",
        "standalonepublisher",
        "resolve",
        "flame"
    ]

    def process(self, context):
@@ -392,3 +392,10 @@ class PypeCommands:
        import time
        while True:
            time.sleep(1.0)

    def repack_version(self, directory):
        """Repacking OpenPype version."""
        from openpype.tools.repack_version import VersionRepacker

        version_packer = VersionRepacker(directory)
        version_packer.process()
@@ -28,6 +28,9 @@
    "viewer": {
        "viewerProcess": "sRGB"
    },
    "baking": {
        "viewerProcess": "rec709"
    },
    "workfile": {
        "colorManagement": "Nuke",
        "OCIO_config": "nuke-default",
@@ -6,8 +6,8 @@
        "frame": "{frame:0>{@frame_padding}}"
    },
    "work": {
        "folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task}",
        "file": "{project[code]}_{asset}_{task}_{@version}<_{comment}>.{ext}",
        "folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}",
        "file": "{project[code]}_{asset}_{task[name]}_{@version}<_{comment}>.{ext}",
        "path": "{@folder}/{@file}"
    },
    "render": {
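A small standalone sketch (not from the repository) of how the new "{task[name]}" placeholders resolve now that anatomy data carries a task dictionary instead of a plain string; the values are made up:

    # Python's str.format resolves "{task[name]}" against a nested dict key.
    anatomy_data = {"task": {"name": "compositing", "type": "Compositing", "short": "comp"}}
    template = "wf_{task[name]}_{task[short]}"
    print(template.format(**anatomy_data))  # -> wf_compositing_comp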
@@ -52,6 +52,7 @@
                "burnin",
                "ftrackreview"
            ],
            "burnins": [],
            "ffmpeg_args": {
                "video_filters": [],
                "audio_filters": [],
@@ -42,7 +42,8 @@
        "enabled": true,
        "defaults": [
            "Main"
        ]
        ],
        "aov_separator": "underscore"
    },
    "CreateAnimation": {
        "enabled": true,
@@ -110,7 +110,20 @@
    },
    "ExtractReviewDataMov": {
        "enabled": true,
        "viewer_lut_raw": false
        "viewer_lut_raw": false,
        "outputs": {
            "baking": {
                "filter": {
                    "task_types": [],
                    "families": []
                },
                "extension": "mov",
                "viewer_process_override": "",
                "bake_viewer_process": true,
                "bake_viewer_input_process": true,
                "add_tags": []
            }
        }
    },
    "ExtractSlateFrame": {
        "viewer_lut_raw": false
@@ -173,9 +173,9 @@
        "workfile_families": [],
        "texture_families": [],
        "color_space": [
            "linsRGB",
            "raw",
            "acesg"
            "sRGB",
            "Raw",
            "ACEScg"
        ],
        "input_naming_patterns": {
            "workfile": [
@@ -115,6 +115,9 @@
                "default_task_type": "Default task type"
            }
        }
        },
        "CollectTVPaintInstances": {
            "layer_name_regex": "(?P<layer>L[0-9]{3}_\\w+)_(?P<pass>.+)"
        }
    }
}
@@ -188,5 +188,13 @@
    },
    "slack": {
        "enabled": false
    },
    "job_queue": {
        "server_url": "",
        "jobs_root": {
            "windows": "",
            "darwin": "",
            "linux": ""
        }
    }
}
@@ -62,8 +62,25 @@
                            }
                        }
                    ]
                },
                {
                    "type": "dict",
                    "collapsible": true,
                    "key": "CollectTVPaintInstances",
                    "label": "Collect TVPaint Instances",
                    "children": [
                        {
                            "type": "label",
                            "label": "Regex helps to extract render layer and pass names from TVPaint layer name.<br>The regex must contain named groups <b>'layer'</b> and <b>'pass'</b> which are used for creation of RenderPass instances.<hr><br>Example layer name: <b>\"L001_Person_Hand\"</b><br>Example regex: <b>\"(?P<layer>L[0-9]{3}_\\w+)_(?P<pass>.+)\"</b><br>Extracted layer: <b>\"L001_Person\"</b><br>Extracted pass: <b>\"Hand\"</b>"
                        },
                        {
                            "type": "text",
                            "key": "layer_name_regex",
                            "label": "Layer name regex"
                        }
                    ]
                }
            ]
        }
    ]
}
}
@@ -131,6 +131,19 @@
            }
        ]
    },
    {
        "key": "baking",
        "type": "dict",
        "label": "Extract-review baking profile",
        "collapsible": false,
        "children": [
            {
                "type": "text",
                "key": "viewerProcess",
                "label": "Viewer Process"
            }
        ]
    },
    {
        "key": "workfile",
        "type": "dict",

@@ -363,7 +376,7 @@
    "key": "maya",
    "type": "dict",
    "label": "Maya",
    "children": [
        {
            "key": "colorManagementPreference",
            "type": "dict",
@@ -11,6 +11,10 @@
    "type": "dict",
    "key": "defaults",
    "children": [
        {
            "type": "label",
            "label": "The list of existing placeholders is available here:<br> https://openpype.io/docs/admin_settings_project_anatomy/#available-template-keys "
        },
        {
            "type": "number",
            "key": "version_padding",
@@ -212,6 +212,12 @@
        "type": "schema",
        "name": "schema_representation_tags"
    },
    {
        "key": "burnins",
        "label": "Link to a burnin by name",
        "type": "list",
        "object_type": "text"
    },
    {
        "key": "ffmpeg_args",
        "label": "FFmpeg arguments",
@@ -46,6 +46,18 @@
        "key": "defaults",
        "label": "Default Subsets",
        "object_type": "text"
    },
    {
        "key": "aov_separator",
        "label": "AOV Separator character",
        "type": "enum",
        "multiselection": false,
        "default": "underscore",
        "enum_items": [
            {"dash": "- (dash)"},
            {"underscore": "_ (underscore)"},
            {"dot": ". (dot)"}
        ]
    }
]
},
@@ -167,7 +167,67 @@
    "type": "boolean",
    "key": "viewer_lut_raw",
    "label": "Viewer LUT raw"
},
{
    "key": "outputs",
    "label": "Output Definitions",
    "type": "dict-modifiable",
    "highlight_content": true,
    "object_type": {
        "type": "dict",
        "children": [
            {
                "type": "dict",
                "collapsible": false,
                "key": "filter",
                "label": "Filtering",
                "children": [
                    {
                        "key": "task_types",
                        "label": "Task types",
                        "type": "task-types-enum"
                    },
                    {
                        "key": "families",
                        "label": "Families",
                        "type": "list",
                        "object_type": "text"
                    }
                ]
            },
            {
                "type": "separator"
            },
            {
                "type": "text",
                "key": "extension",
                "label": "File extension"
            },
            {
                "type": "text",
                "key": "viewer_process_override",
                "label": "Viewer Process colorspace profile override"
            },
            {
                "type": "boolean",
                "key": "bake_viewer_process",
                "label": "Bake Viewer Process"
            },
            {
                "type": "boolean",
                "key": "bake_viewer_input_process",
                "label": "Bake Viewer Input Process (LUTs)"
            },
            {
                "key": "add_tags",
                "label": "Add additional tags to representations",
                "type": "list",
                "object_type": "text"
            }
        ]
    }
}

]
},
{
|||
Some files were not shown because too many files have changed in this diff Show more
Loading…
Add table
Add a link
Reference in a new issue