diff --git a/.all-contributorsrc b/.all-contributorsrc
index b30f3b2499..60812cdb3c 100644
--- a/.all-contributorsrc
+++ b/.all-contributorsrc
@@ -1,6 +1,6 @@
{
"projectName": "OpenPype",
- "projectOwner": "pypeclub",
+ "projectOwner": "ynput",
"repoType": "github",
"repoHost": "https://github.com",
"files": [
@@ -319,8 +319,18 @@
"code",
"doc"
]
+ },
+ {
+ "login": "movalex",
+ "name": "Alexey Bogomolov",
+ "avatar_url": "https://avatars.githubusercontent.com/u/11698866?v=4",
+ "profile": "http://abogomolov.com",
+ "contributions": [
+ "code"
+ ]
}
],
"contributorsPerLine": 7,
- "skipCi": true
+ "skipCi": true,
+ "commitType": "docs"
}
diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 4d7d06a2c8..aa5b8decdc 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -35,6 +35,9 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
+ - 3.15.9
+ - 3.15.9-nightly.2
+ - 3.15.9-nightly.1
- 3.15.8
- 3.15.8-nightly.3
- 3.15.8-nightly.2
@@ -132,9 +135,6 @@ body:
- 3.14.3-nightly.1
- 3.14.2
- 3.14.2-nightly.5
- - 3.14.2-nightly.4
- - 3.14.2-nightly.3
- - 3.14.2-nightly.2
validations:
required: true
- type: dropdown
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a33904735b..ec6544e659 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,341 @@
# Changelog
+## [3.15.9](https://github.com/ynput/OpenPype/tree/3.15.9)
+
+
+[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.8...3.15.9)
+
+### **🆕 New features**
+
+
+
+Blender: Implemented Loading of Alembic Camera #4990
+
+Implemented loading of Alembic cameras in Blender.
+
+
+___
+
+
+
+
+
+Unreal: Implemented Creator, Loader and Extractor for Levels #5008
+
+Creator, Loader and Extractor for Unreal Levels have been implemented.
+
+
+___
+
+
+
+### **🚀 Enhancements**
+
+
+
+Blender: Added setting for base unit scale #4987
+
+A setting for the base unit scale has been added for Blender. The unit scale is automatically applied when opening a file or creating a new one.
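+
+For reference, a minimal sketch of how the new setting is read and applied (key names match the implementation further below in this diff; the project name is a placeholder):
+
+```python
+import bpy
+
+from openpype.settings import get_project_settings
+
+settings = get_project_settings("my_project")  # placeholder project name
+unit_scale_settings = settings["blender"]["unit_scale_settings"]
+if unit_scale_settings["enabled"]:
+    scale = unit_scale_settings["base_file_unit_scale"]
+    bpy.context.scene.unit_settings.scale_length = scale
+```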
+
+
+___
+
+
+
+
+
+Unreal: Changed naming and path of Camera Levels #5010
+
+The levels created for the camera in Unreal now include `_camera` in the name, to make them easier to identify, and are placed in the camera folder.
+
+
+___
+
+
+
+
+
+Settings: Added option to nest settings templates #5022
+
+It is now possible to nest settings templates inside other templates.
+
+
+___
+
+
+
+
+
+Enhancement/publisher: Remove "hit play to continue" label on continue #5029
+
+Remove "hit play to continue" message on continue so that it doesn't show anymore when play was clicked.
+
+
+___
+
+
+
+
+
+Ftrack: Limit number of ftrack events to query at once #5033
+
+Limits the number of ftrack events fetched from MongoDB at once to 100.
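+
+Roughly, this amounts to capping the event query, along these lines (collection and field names are illustrative, not the actual module):
+
+```python
+from pymongo import MongoClient
+
+# Illustrative sketch only: fetch at most 100 unprocessed ftrack events at once.
+client = MongoClient("mongodb://localhost:27017")
+events_collection = client["openpype"]["ftrack_events"]
+events = list(
+    events_collection.find({"is_processed": False}).limit(100)
+)
+```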
+
+
+___
+
+
+
+
+
+General: Small code cleanups #5034
+
+Small code cleanup and updates.
+
+
+___
+
+
+
+
+
+Global: collect frames to fix with settings #5036
+
+Settings for `Collect Frames to Fix` allow the plugin to be disabled per project. The `Rewriting latest version` attribute can also be hidden via settings.
+
+
+___
+
+
+
+
+
+General: Publish plugin apply settings can expect only project settings #5037
+
+Only project settings are passed to the optional `apply_settings` method if the method expects a single argument.
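+
+In practice a publish plugin may declare either signature; a minimal sketch with a hypothetical plugin and settings path:
+
+```python
+import pyblish.api
+
+
+class CollectExample(pyblish.api.InstancePlugin):
+    """Hypothetical plugin using the one-argument form."""
+
+    label = "Collect Example"
+    order = pyblish.api.CollectorOrder
+
+    @classmethod
+    def apply_settings(cls, project_settings):
+        # Only project settings are passed when a single argument is expected;
+        # the two-argument (project_settings, system_settings) form still works.
+        plugin_settings = project_settings["example_host"]["publish"]
+        cls.enabled = plugin_settings["CollectExample"]["enabled"]
+
+    def process(self, instance):
+        pass
+```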
+
+
+___
+
+
+
+### **🐛 Bug fixes**
+
+
+
+Maya: Load Assembly fix invalid imports #4859
+
+Refactors imports so they are now correct.
+
+
+___
+
+
+
+
+
+Maya: Skipping rendersetup for members. #4973
+
+When publishing a `rendersetup`, the object set is, and should be, empty, so the instance-members validation now skips it.
+
+
+___
+
+
+
+
+
+Maya: Validate Rig Output IDs #5016
+
+Absolute node names were not used, so the plugin did not fetch the nodes properly. A leftover pymel call was also removed.
+
+
+___
+
+
+
+
+
+Deadline: escape rootless path in publish job #4910
+
+If the publish path on a Deadline job contained spaces or other special characters, the command failed because the path wasn't properly escaped. This fixes it.
+
+
+___
+
+
+
+
+
+General: Company name and URL changed #4974
+
+The company name and URL records in inno_setup were obsolete and have been updated.
+___
+
+
+
+
+
+Unreal: Fix usage of 'get_full_path' function #5014
+
+This PR replaces all occurrences of the `get_full_path` function with alternatives for getting the paths of objects.
+
+
+___
+
+
+
+
+
+Unreal: Fix sequence frames validator to use correct data #5021
+
+Fix sequence frames validator to use clipIn and clipOut data instead of frameStart and frameEnd.
+
+
+___
+
+
+
+
+
+Unreal: Fix render instances collection to use correct data #5023
+
+Fix render instances collection to use `frameStart` and `frameEnd` from the Project Manager, instead of the sequence's ones.
+
+
+___
+
+
+
+
+
+Resolve: loader is opening even if no timeline in project #5025
+
+The loader now opens even when no timeline is available in the project.
+
+
+___
+
+
+
+
+
+nuke: callback for dirmapping is on demand #5030
+
+Nuke processing was slowed down by this callback. Since it is disabled by default, it is now only added on demand.
+
+
+___
+
+
+
+
+
+Publisher: UI works with instances without label #5032
+
+The Publisher UI no longer crashes if an instance doesn't have the 'label' key filled in its instance data.
+
+
+___
+
+
+
+
+
+Publisher: Call explicitly prepared tab methods #5044
+
+It is no longer possible to switch to the Create tab while publishing from the OpenPype menu.
+
+
+___
+
+
+
+
+
+Ftrack: Role names are not case sensitive in ftrack event server status action #5058
+
+The event server status action is no longer case sensitive for user role names.
+
+
+___
+
+
+
+
+
+Publisher: Fix border widget #5063
+
+Fixed border lines in the Publisher UI so they are painted with the correct indentation and size.
+
+
+___
+
+
+
+
+
+Unreal: Fix Commandlet Project and Permissions #5066
+
+Fixes a problem when creating an Unreal project while the Commandlet Project is in a protected location.
+
+
+___
+
+
+
+
+
+Unreal: Added verification for Unreal app name format #5070
+
+The Unreal app name is used to determine the Unreal version folder, so it is necessary that it follows the format `x-x`, where `x` is any integer. This PR adds a verification that the app name follows that format.
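+
+A minimal sketch of such a check (the helper name is illustrative):
+
+```python
+import re
+
+
+def is_valid_unreal_app_name(app_name):
+    """Return True when the app name matches the `x-x` format, e.g. "5-1"."""
+    return bool(re.fullmatch(r"\d+-\d+", app_name))
+
+
+assert is_valid_unreal_app_name("5-1")
+assert not is_valid_unreal_app_name("5.1")
+```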
+
+
+___
+
+
+
+### **📃 Documentation**
+
+
+
+Docs: Display wrong image in ExtractOIIOTranscode #5045
+
+Fixes the wrong image displayed at `https://openpype.io/docs/project_settings/settings_project_global#extract-oiio-transcode`.
+
+
+___
+
+
+
+### **Merged pull requests**
+
+
+
+Drop-down menu to list all families in create placeholder #4928
+
+Previously, in the create placeholder window, the family had to be typed manually. This replaces the text field with an enum field listing all families for the current software.
+
+
+___
+
+
+
+
+
+add sync to specific projects or listen only #4919
+
+Extends the Kitsu sync service with additional arguments to sync specific projects or to listen only.
+
+
+___
+
+
+
+
+
+
## [3.15.8](https://github.com/ynput/OpenPype/tree/3.15.8)
diff --git a/README.md b/README.md
index 514ffb62c0..8757e3db92 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
-[](#contributors-)
+[](#contributors-)
OpenPype
====
@@ -303,41 +303,44 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
diff --git a/inno_setup.iss b/inno_setup.iss
index 3adde52a8b..418bedbd4d 100644
--- a/inno_setup.iss
+++ b/inno_setup.iss
@@ -14,10 +14,10 @@ AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
AppName={#MyAppName}
AppVersion={#AppVer}
AppVerName={#MyAppName} version {#AppVer}
-AppPublisher=Orbi Tools s.r.o
-AppPublisherURL=http://pype.club
-AppSupportURL=http://pype.club
-AppUpdatesURL=http://pype.club
+AppPublisher=Ynput s.r.o
+AppPublisherURL=https://ynput.io
+AppSupportURL=https://ynput.io
+AppUpdatesURL=https://ynput.io
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py
index c2aee1e653..9cc557c01a 100644
--- a/openpype/hosts/blender/api/pipeline.py
+++ b/openpype/hosts/blender/api/pipeline.py
@@ -26,6 +26,8 @@ from openpype.lib import (
emit_event
)
import openpype.hosts.blender
+from openpype.settings import get_project_settings
+
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
@@ -83,6 +85,31 @@ def uninstall():
ops.unregister()
+def show_message(title, message):
+ from openpype.widgets.message_window import Window
+ from .ops import BlenderApplication
+
+ BlenderApplication.get_app()
+
+ Window(
+ parent=None,
+ title=title,
+ message=message,
+ level="warning")
+
+
+def message_window(title, message):
+ from .ops import (
+ MainThreadItem,
+ execute_in_main_thread,
+ _process_app_events
+ )
+
+ mti = MainThreadItem(show_message, title, message)
+ execute_in_main_thread(mti)
+ _process_app_events()
+
+
def set_start_end_frames():
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
@@ -125,10 +152,36 @@ def set_start_end_frames():
def on_new():
set_start_end_frames()
+ project = os.environ.get("AVALON_PROJECT")
+ settings = get_project_settings(project)
+
+ unit_scale_settings = settings.get("blender").get("unit_scale_settings")
+ unit_scale_enabled = unit_scale_settings.get("enabled")
+ if unit_scale_enabled:
+ unit_scale = unit_scale_settings.get("base_file_unit_scale")
+ bpy.context.scene.unit_settings.scale_length = unit_scale
+
def on_open():
set_start_end_frames()
+ project = os.environ.get("AVALON_PROJECT")
+ settings = get_project_settings(project)
+
+ unit_scale_settings = settings.get("blender").get("unit_scale_settings")
+ unit_scale_enabled = unit_scale_settings.get("enabled")
+ apply_on_opening = unit_scale_settings.get("apply_on_opening")
+ if unit_scale_enabled and apply_on_opening:
+ unit_scale = unit_scale_settings.get("base_file_unit_scale")
+ prev_unit_scale = bpy.context.scene.unit_settings.scale_length
+
+ if unit_scale != prev_unit_scale:
+ bpy.context.scene.unit_settings.scale_length = unit_scale
+
+ message_window(
+ "Base file unit scale changed",
+ "Base file unit scale changed to match the project settings.")
+
@bpy.app.handlers.persistent
def _on_save_pre(*args):
diff --git a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py
new file mode 100644
index 0000000000..559e9ae0ce
--- /dev/null
+++ b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py
@@ -0,0 +1,55 @@
+from pathlib import Path
+
+from openpype.lib import PreLaunchHook
+
+
+class AddPythonScriptToLaunchArgs(PreLaunchHook):
+ """Add python script to be executed before Blender launch."""
+
+ # Append after file argument
+ order = 15
+ app_groups = [
+ "blender",
+ ]
+
+ def execute(self):
+ if not self.launch_context.data.get("python_scripts"):
+ return
+
+ # Add path to workfile to arguments
+ for python_script_path in self.launch_context.data["python_scripts"]:
+ self.log.info(
+ f"Adding python script {python_script_path} to launch"
+ )
+ # Test script path exists
+ python_script_path = Path(python_script_path)
+ if not python_script_path.exists():
+ self.log.warning(
+ f"Python script {python_script_path} doesn't exist. "
+ "Skipped..."
+ )
+ continue
+
+ if "--" in self.launch_context.launch_args:
+ # Insert before separator
+ separator_index = self.launch_context.launch_args.index("--")
+ self.launch_context.launch_args.insert(
+ separator_index,
+ "-P",
+ )
+ self.launch_context.launch_args.insert(
+ separator_index + 1,
+ python_script_path.as_posix(),
+ )
+ else:
+ self.launch_context.launch_args.extend(
+ ["-P", python_script_path.as_posix()]
+ )
+
+ # Ensure separator
+ if "--" not in self.launch_context.launch_args:
+ self.launch_context.launch_args.append("--")
+
+ self.launch_context.launch_args.extend(
+ [*self.launch_context.data.get("script_args", [])]
+ )
diff --git a/openpype/hosts/blender/plugins/load/load_camera_abc.py b/openpype/hosts/blender/plugins/load/load_camera_abc.py
new file mode 100644
index 0000000000..21b48f409f
--- /dev/null
+++ b/openpype/hosts/blender/plugins/load/load_camera_abc.py
@@ -0,0 +1,209 @@
+"""Load an asset in Blender from an Alembic file."""
+
+from pathlib import Path
+from pprint import pformat
+from typing import Dict, List, Optional
+
+import bpy
+
+from openpype.pipeline import (
+ get_representation_path,
+ AVALON_CONTAINER_ID,
+)
+from openpype.hosts.blender.api import plugin, lib
+from openpype.hosts.blender.api.pipeline import (
+ AVALON_CONTAINERS,
+ AVALON_PROPERTY,
+)
+
+
+class AbcCameraLoader(plugin.AssetLoader):
+ """Load a camera from Alembic file.
+
+ Stores the imported asset in an empty named after the asset.
+ """
+
+ families = ["camera"]
+ representations = ["abc"]
+
+ label = "Load Camera (ABC)"
+ icon = "code-fork"
+ color = "orange"
+
+ def _remove(self, asset_group):
+ objects = list(asset_group.children)
+
+ for obj in objects:
+ if obj.type == "CAMERA":
+ bpy.data.cameras.remove(obj.data)
+ elif obj.type == "EMPTY":
+ objects.extend(obj.children)
+ bpy.data.objects.remove(obj)
+
+ def _process(self, libpath, asset_group, group_name):
+ plugin.deselect_all()
+
+ bpy.ops.wm.alembic_import(filepath=libpath)
+
+ objects = lib.get_selection()
+
+ for obj in objects:
+ obj.parent = asset_group
+
+ for obj in objects:
+ name = obj.name
+ obj.name = f"{group_name}:{name}"
+ if obj.type != "EMPTY":
+ name_data = obj.data.name
+ obj.data.name = f"{group_name}:{name_data}"
+
+ if not obj.get(AVALON_PROPERTY):
+ obj[AVALON_PROPERTY] = dict()
+
+ avalon_info = obj[AVALON_PROPERTY]
+ avalon_info.update({"container_name": group_name})
+
+ plugin.deselect_all()
+
+ return objects
+
+ def process_asset(
+ self,
+ context: dict,
+ name: str,
+ namespace: Optional[str] = None,
+ options: Optional[Dict] = None,
+ ) -> Optional[List]:
+ """
+ Arguments:
+ name: Use pre-defined name
+ namespace: Use pre-defined namespace
+ context: Full parenthood of representation to load
+ options: Additional settings dictionary
+ """
+ libpath = self.fname
+ asset = context["asset"]["name"]
+ subset = context["subset"]["name"]
+
+ asset_name = plugin.asset_name(asset, subset)
+ unique_number = plugin.get_unique_number(asset, subset)
+ group_name = plugin.asset_name(asset, subset, unique_number)
+ namespace = namespace or f"{asset}_{unique_number}"
+
+ avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
+ if not avalon_container:
+ avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
+ bpy.context.scene.collection.children.link(avalon_container)
+
+ asset_group = bpy.data.objects.new(group_name, object_data=None)
+ avalon_container.objects.link(asset_group)
+
+ objects = self._process(libpath, asset_group, group_name)
+
+ objects = []
+ nodes = list(asset_group.children)
+
+ for obj in nodes:
+ objects.append(obj)
+ nodes.extend(list(obj.children))
+
+ bpy.context.scene.collection.objects.link(asset_group)
+
+ asset_group[AVALON_PROPERTY] = {
+ "schema": "openpype:container-2.0",
+ "id": AVALON_CONTAINER_ID,
+ "name": name,
+ "namespace": namespace or "",
+ "loader": str(self.__class__.__name__),
+ "representation": str(context["representation"]["_id"]),
+ "libpath": libpath,
+ "asset_name": asset_name,
+ "parent": str(context["representation"]["parent"]),
+ "family": context["representation"]["context"]["family"],
+ "objectName": group_name,
+ }
+
+ self[:] = objects
+ return objects
+
+ def exec_update(self, container: Dict, representation: Dict):
+ """Update the loaded asset.
+
+ This will remove all objects of the current collection, load the new
+ ones and add them to the collection.
+ If the objects of the collection are used in another collection they
+ will not be removed, only unlinked. Normally this should not be the
+ case though.
+
+ Warning:
+ No nested collections are supported at the moment!
+ """
+ object_name = container["objectName"]
+ asset_group = bpy.data.objects.get(object_name)
+ libpath = Path(get_representation_path(representation))
+ extension = libpath.suffix.lower()
+
+ self.log.info(
+ "Container: %s\nRepresentation: %s",
+ pformat(container, indent=2),
+ pformat(representation, indent=2),
+ )
+
+ assert asset_group, (
+ f"The asset is not loaded: {container['objectName']}")
+ assert libpath, (
+ f"No existing library file found for {container['objectName']}")
+ assert libpath.is_file(), f"The file doesn't exist: {libpath}"
+ assert extension in plugin.VALID_EXTENSIONS, (
+ f"Unsupported file: {libpath}")
+
+ metadata = asset_group.get(AVALON_PROPERTY)
+ group_libpath = metadata["libpath"]
+
+ normalized_group_libpath = str(
+ Path(bpy.path.abspath(group_libpath)).resolve())
+ normalized_libpath = str(
+ Path(bpy.path.abspath(str(libpath))).resolve())
+ self.log.debug(
+ "normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
+ normalized_group_libpath,
+ normalized_libpath,
+ )
+ if normalized_group_libpath == normalized_libpath:
+ self.log.info("Library already loaded, not updating...")
+ return
+
+ mat = asset_group.matrix_basis.copy()
+
+ self._remove(asset_group)
+ self._process(str(libpath), asset_group, object_name)
+
+ asset_group.matrix_basis = mat
+
+ metadata["libpath"] = str(libpath)
+ metadata["representation"] = str(representation["_id"])
+
+ def exec_remove(self, container: Dict) -> bool:
+ """Remove an existing container from a Blender scene.
+
+ Arguments:
+ container (openpype:container-1.0): Container to remove,
+ from `host.ls()`.
+
+ Returns:
+ bool: Whether the container was deleted.
+
+ Warning:
+ No nested collections are supported at the moment!
+ """
+ object_name = container["objectName"]
+ asset_group = bpy.data.objects.get(object_name)
+
+ if not asset_group:
+ return False
+
+ self._remove(asset_group)
+
+ bpy.data.objects.remove(asset_group)
+
+ return True
diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py
index 77844d2448..c9bebfa8b2 100644
--- a/openpype/hosts/hiero/plugins/load/load_clip.py
+++ b/openpype/hosts/hiero/plugins/load/load_clip.py
@@ -41,8 +41,8 @@ class LoadClip(phiero.SequenceLoader):
clip_name_template = "{asset}_{subset}_{representation}"
+ @classmethod
def apply_settings(cls, project_settings, system_settings):
-
plugin_type_settings = (
project_settings
.get("hiero", {})
diff --git a/openpype/hosts/max/api/colorspace.py b/openpype/hosts/max/api/colorspace.py
new file mode 100644
index 0000000000..fafee4ee04
--- /dev/null
+++ b/openpype/hosts/max/api/colorspace.py
@@ -0,0 +1,50 @@
+import attr
+from pymxs import runtime as rt
+
+
+@attr.s
+class LayerMetadata(object):
+ """Data class for Render Layer metadata."""
+ frameStart = attr.ib()
+ frameEnd = attr.ib()
+
+
+@attr.s
+class RenderProduct(object):
+    """Colorspace data for a specific render product.
+
+    Used as a parameter when submitting the publish job.
+    """
+ colorspace = attr.ib() # colorspace
+ view = attr.ib()
+ productName = attr.ib(default=None)
+
+
+class ARenderProduct(object):
+
+ def __init__(self):
+ """Constructor."""
+ # Initialize
+ self.layer_data = self._get_layer_data()
+ self.layer_data.products = self.get_colorspace_data()
+
+ def _get_layer_data(self):
+ return LayerMetadata(
+ frameStart=int(rt.rendStart),
+ frameEnd=int(rt.rendEnd),
+ )
+
+ def get_colorspace_data(self):
+        """To be implemented by renderer class.
+
+        This should return a list of RenderProducts.
+
+        Returns:
+            list: List of RenderProduct
+        """
+ colorspace_data = [
+ RenderProduct(
+ colorspace="sRGB",
+ view="ACES 1.0",
+ productName=""
+ )
+ ]
+ return colorspace_data
diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py
index d9213863b1..e2af0720ec 100644
--- a/openpype/hosts/max/api/lib.py
+++ b/openpype/hosts/max/api/lib.py
@@ -128,7 +128,14 @@ def get_all_children(parent, node_type=None):
def get_current_renderer():
- """get current renderer"""
+    """Get the current production renderer in 3ds Max.
+
+    Returns:
+        The production renderer instance. Its string form is
+        "{Current Renderer}:{Current Renderer}",
+        e.g. "Redshift_Renderer:Redshift_Renderer".
+    """
return rt.renderers.production
diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py
index 8224d589ad..94b0aeb913 100644
--- a/openpype/hosts/max/api/lib_renderproducts.py
+++ b/openpype/hosts/max/api/lib_renderproducts.py
@@ -3,94 +3,126 @@
# arnold
# https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_3ds_max_ax_maxscript_commands_ax_renderview_commands_html
import os
+
from pymxs import runtime as rt
-from openpype.hosts.max.api.lib import (
- get_current_renderer,
- get_default_render_folder
-)
-from openpype.pipeline.context_tools import get_current_project_asset
-from openpype.settings import get_project_settings
+
+from openpype.hosts.max.api.lib import get_current_renderer
from openpype.pipeline import legacy_io
+from openpype.settings import get_project_settings
class RenderProducts(object):
def __init__(self, project_settings=None):
- self._project_settings = project_settings
- if not self._project_settings:
- self._project_settings = get_project_settings(
- legacy_io.Session["AVALON_PROJECT"]
- )
+ self._project_settings = project_settings or get_project_settings(
+ legacy_io.Session["AVALON_PROJECT"])
+
+ def get_beauty(self, container):
+ render_dir = os.path.dirname(rt.rendOutputFilename)
+
+ output_file = os.path.join(render_dir, container)
- def render_product(self, container):
- folder = rt.maxFilePath
- file = rt.maxFileName
- folder = folder.replace("\\", "/")
setting = self._project_settings
- render_folder = get_default_render_folder(setting)
- filename, ext = os.path.splitext(file)
+ img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa
- output_file = os.path.join(folder,
- render_folder,
- filename,
+ start_frame = int(rt.rendStart)
+ end_frame = int(rt.rendEnd) + 1
+
+ return {
+ "beauty": self.get_expected_beauty(
+ output_file, start_frame, end_frame, img_fmt
+ )
+ }
+
+ def get_aovs(self, container):
+ render_dir = os.path.dirname(rt.rendOutputFilename)
+
+ output_file = os.path.join(render_dir,
container)
- context = get_current_project_asset()
- # TODO: change the frame range follows the current render setting
- startFrame = int(rt.rendStart)
- endFrame = int(rt.rendEnd) + 1
-
- img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
- full_render_list = self.beauty_render_product(output_file,
- startFrame,
- endFrame,
- img_fmt)
+ setting = self._project_settings
+ img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa
+ start_frame = int(rt.rendStart)
+ end_frame = int(rt.rendEnd) + 1
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
-
-
- if renderer == "VUE_File_Renderer":
- return full_render_list
+ render_dict = {}
if renderer in [
"ART_Renderer",
- "Redshift_Renderer",
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3",
"Default_Scanline_Renderer",
"Quicksilver_Hardware_Renderer",
]:
- render_elem_list = self.render_elements_product(output_file,
- startFrame,
- endFrame,
- img_fmt)
- if render_elem_list:
- full_render_list.extend(iter(render_elem_list))
- return full_render_list
+ render_name = self.get_render_elements_name()
+ if render_name:
+ for name in render_name:
+ render_dict.update({
+ name: self.get_expected_render_elements(
+ output_file, name, start_frame,
+ end_frame, img_fmt)
+ })
+ elif renderer == "Redshift_Renderer":
+ render_name = self.get_render_elements_name()
+ if render_name:
+ rs_aov_files = rt.Execute("renderers.current.separateAovFiles")
+ # this doesn't work, always returns False
+ # rs_AovFiles = rt.RedShift_Renderer().separateAovFiles
+ if img_fmt == "exr" and not rs_aov_files:
+ for name in render_name:
+ if name == "RsCryptomatte":
+ render_dict.update({
+ name: self.get_expected_render_elements(
+ output_file, name, start_frame,
+ end_frame, img_fmt)
+ })
+ else:
+ for name in render_name:
+ render_dict.update({
+ name: self.get_expected_render_elements(
+ output_file, name, start_frame,
+ end_frame, img_fmt)
+ })
- if renderer == "Arnold":
- aov_list = self.arnold_render_product(output_file,
- startFrame,
- endFrame,
- img_fmt)
- if aov_list:
- full_render_list.extend(iter(aov_list))
- return full_render_list
+ elif renderer == "Arnold":
+ render_name = self.get_arnold_product_name()
+ if render_name:
+ for name in render_name:
+ render_dict.update({
+ name: self.get_expected_arnold_product(
+ output_file, name, start_frame, end_frame, img_fmt)
+ })
+ elif renderer in [
+ "V_Ray_6_Hotfix_3",
+ "V_Ray_GPU_6_Hotfix_3"
+ ]:
+ if img_fmt != "exr":
+ render_name = self.get_render_elements_name()
+ if render_name:
+ for name in render_name:
+ render_dict.update({
+ name: self.get_expected_render_elements(
+ output_file, name, start_frame,
+ end_frame, img_fmt) # noqa
+ })
- def beauty_render_product(self, folder, startFrame, endFrame, fmt):
+ return render_dict
+
+ def get_expected_beauty(self, folder, start_frame, end_frame, fmt):
beauty_frame_range = []
- for f in range(startFrame, endFrame):
- beauty_output = f"{folder}.{f}.{fmt}"
+ for f in range(start_frame, end_frame):
+ frame = "%04d" % f
+ beauty_output = f"{folder}.{frame}.{fmt}"
beauty_output = beauty_output.replace("\\", "/")
beauty_frame_range.append(beauty_output)
return beauty_frame_range
- # TODO: Get the arnold render product
- def arnold_render_product(self, folder, startFrame, endFrame, fmt):
- """Get all the Arnold AOVs"""
- aovs = []
+ def get_arnold_product_name(self):
+        """Get the names of all Arnold AOVs."""
+ aov_name = []
amw = rt.MaxtoAOps.AOVsManagerWindow()
aov_mgr = rt.renderers.current.AOVManager
@@ -100,34 +132,51 @@ class RenderProducts(object):
return
for i in range(aov_group_num):
# get the specific AOV group
- for aov in aov_mgr.drivers[i].aov_list:
- for f in range(startFrame, endFrame):
- render_element = f"{folder}_{aov.name}.{f}.{fmt}"
- render_element = render_element.replace("\\", "/")
- aovs.append(render_element)
-
+ aov_name.extend(aov.name for aov in aov_mgr.drivers[i].aov_list)
# close the AOVs manager window
amw.close()
- return aovs
+ return aov_name
- def render_elements_product(self, folder, startFrame, endFrame, fmt):
- """Get all the render element output files. """
- render_dirname = []
+ def get_expected_arnold_product(self, folder, name,
+ start_frame, end_frame, fmt):
+ """Get all the expected Arnold AOVs"""
+ aov_list = []
+ for f in range(start_frame, end_frame):
+ frame = "%04d" % f
+ render_element = f"{folder}_{name}.{frame}.{fmt}"
+ render_element = render_element.replace("\\", "/")
+ aov_list.append(render_element)
+ return aov_list
+
+ def get_render_elements_name(self):
+        """Get all the render element names for general renderers."""
+ render_name = []
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
+ if render_elem_num < 1:
+ return
# get render elements from the renders
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
- target, renderpass = str(renderlayer_name).split(":")
if renderlayer_name.enabled:
- for f in range(startFrame, endFrame):
- render_element = f"{folder}_{renderpass}.{f}.{fmt}"
- render_element = render_element.replace("\\", "/")
- render_dirname.append(render_element)
+ target, renderpass = str(renderlayer_name).split(":")
+ render_name.append(renderpass)
- return render_dirname
+ return render_name
+
+ def get_expected_render_elements(self, folder, name,
+ start_frame, end_frame, fmt):
+ """Get all the expected render element output files. """
+ render_elements = []
+ for f in range(start_frame, end_frame):
+ frame = "%04d" % f
+ render_element = f"{folder}_{name}.{frame}.{fmt}"
+ render_element = render_element.replace("\\", "/")
+ render_elements.append(render_element)
+
+ return render_elements
def image_format(self):
return self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
diff --git a/openpype/hosts/max/plugins/create/create_redshift_proxy.py b/openpype/hosts/max/plugins/create/create_redshift_proxy.py
new file mode 100644
index 0000000000..698ea82b69
--- /dev/null
+++ b/openpype/hosts/max/plugins/create/create_redshift_proxy.py
@@ -0,0 +1,18 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating camera."""
+from openpype.hosts.max.api import plugin
+from openpype.pipeline import CreatedInstance
+
+
+class CreateRedshiftProxy(plugin.MaxCreator):
+ identifier = "io.openpype.creators.max.redshiftproxy"
+ label = "Redshift Proxy"
+ family = "redshiftproxy"
+ icon = "gear"
+
+ def create(self, subset_name, instance_data, pre_create_data):
+
+ _ = super(CreateRedshiftProxy, self).create(
+ subset_name,
+ instance_data,
+ pre_create_data) # type: CreatedInstance
diff --git a/openpype/hosts/max/plugins/create/create_render.py b/openpype/hosts/max/plugins/create/create_render.py
index 68ae5eac72..5ad895b86e 100644
--- a/openpype/hosts/max/plugins/create/create_render.py
+++ b/openpype/hosts/max/plugins/create/create_render.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating camera."""
+import os
from openpype.hosts.max.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
@@ -14,6 +15,10 @@ class CreateRender(plugin.MaxCreator):
def create(self, subset_name, instance_data, pre_create_data):
from pymxs import runtime as rt
sel_obj = list(rt.selection)
+ file = rt.maxFileName
+ filename, _ = os.path.splitext(file)
+ instance_data["AssetName"] = filename
+
instance = super(CreateRender, self).create(
subset_name,
instance_data,
diff --git a/openpype/hosts/max/plugins/load/load_redshift_proxy.py b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
new file mode 100644
index 0000000000..31692f6367
--- /dev/null
+++ b/openpype/hosts/max/plugins/load/load_redshift_proxy.py
@@ -0,0 +1,63 @@
+import os
+import clique
+
+from openpype.pipeline import (
+ load,
+ get_representation_path
+)
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api import lib
+
+
+class RedshiftProxyLoader(load.LoaderPlugin):
+ """Load rs files with Redshift Proxy"""
+
+ label = "Load Redshift Proxy"
+ families = ["redshiftproxy"]
+ representations = ["rs"]
+ order = -9
+ icon = "code-fork"
+ color = "white"
+
+ def load(self, context, name=None, namespace=None, data=None):
+ from pymxs import runtime as rt
+
+ filepath = self.filepath_from_context(context)
+ rs_proxy = rt.RedshiftProxy()
+ rs_proxy.file = filepath
+ files_in_folder = os.listdir(os.path.dirname(filepath))
+ collections, remainder = clique.assemble(files_in_folder)
+ if collections:
+ rs_proxy.is_sequence = True
+
+ container = rt.container()
+ container.name = name
+ rs_proxy.Parent = container
+
+ asset = rt.getNodeByName(name)
+
+ return containerise(
+ name, [asset], context, loader=self.__class__.__name__)
+
+ def update(self, container, representation):
+ from pymxs import runtime as rt
+
+ path = get_representation_path(representation)
+ node = rt.getNodeByName(container["instance_node"])
+ for children in node.Children:
+ children_node = rt.getNodeByName(children.name)
+ for proxy in children_node.Children:
+ proxy.file = path
+
+ lib.imprint(container["instance_node"], {
+ "representation": str(representation["_id"])
+ })
+
+ def switch(self, container, representation):
+ self.update(container, representation)
+
+ def remove(self, container):
+ from pymxs import runtime as rt
+
+ node = rt.getNodeByName(container["instance_node"])
+ rt.delete(node)
diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py
index 00e00a8eb5..db5c84fad9 100644
--- a/openpype/hosts/max/plugins/publish/collect_render.py
+++ b/openpype/hosts/max/plugins/publish/collect_render.py
@@ -5,7 +5,8 @@ import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import get_current_asset_name
-from openpype.hosts.max.api.lib import get_max_version
+from openpype.hosts.max.api import colorspace
+from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
from openpype.client import get_last_version_by_subset_name
@@ -28,8 +29,16 @@ class CollectRender(pyblish.api.InstancePlugin):
context.data['currentFile'] = current_file
asset = get_current_asset_name()
- render_layer_files = RenderProducts().render_product(instance.name)
+ files_by_aov = RenderProducts().get_beauty(instance.name)
folder = folder.replace("\\", "/")
+ aovs = RenderProducts().get_aovs(instance.name)
+ files_by_aov.update(aovs)
+
+ if "expectedFiles" not in instance.data:
+ instance.data["expectedFiles"] = list()
+ instance.data["files"] = list()
+ instance.data["expectedFiles"].append(files_by_aov)
+ instance.data["files"].append(files_by_aov)
img_format = RenderProducts().image_format()
project_name = context.data["projectName"]
@@ -38,7 +47,6 @@ class CollectRender(pyblish.api.InstancePlugin):
version_doc = get_last_version_by_subset_name(project_name,
instance.name,
asset_id)
-
self.log.debug("version_doc: {0}".format(version_doc))
version_int = 1
if version_doc:
@@ -46,22 +54,42 @@ class CollectRender(pyblish.api.InstancePlugin):
self.log.debug(f"Setting {version_int} to context.")
context.data["version"] = version_int
- # setup the plugin as 3dsmax for the internal renderer
+        # OCIO config is not supported in
+        # most of the 3dsmax renderers,
+        # so this is currently hard coded
+ # TODO: add options for redshift/vray ocio config
+ instance.data["colorspaceConfig"] = ""
+ instance.data["colorspaceDisplay"] = "sRGB"
+ instance.data["colorspaceView"] = "ACES 1.0 SDR-video"
+ instance.data["renderProducts"] = colorspace.ARenderProduct()
+ instance.data["publishJobState"] = "Suspended"
+ instance.data["attachTo"] = []
+ renderer_class = get_current_renderer()
+ renderer = str(renderer_class).split(":")[0]
+ # also need to get the render dir for conversion
data = {
- "subset": instance.name,
"asset": asset,
+ "subset": str(instance.name),
"publish": True,
"maxversion": str(get_max_version()),
"imageFormat": img_format,
"family": 'maxrender',
"families": ['maxrender'],
+ "renderer": renderer,
"source": filepath,
- "expectedFiles": render_layer_files,
"plugin": "3dsmax",
"frameStart": int(rt.rendStart),
"frameEnd": int(rt.rendEnd),
"version": version_int,
"farm": True
}
- self.log.info("data: {0}".format(data))
instance.data.update(data)
+
+ # TODO: this should be unified with maya and its "multipart" flag
+ # on instance.
+ if renderer == "Redshift_Renderer":
+ instance.data.update(
+ {"separateAovFiles": rt.Execute(
+ "renderers.current.separateAovFiles")})
+
+ self.log.info("data: {0}".format(data))
diff --git a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
new file mode 100644
index 0000000000..3b44099609
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py
@@ -0,0 +1,62 @@
+import os
+import pyblish.api
+from openpype.pipeline import publish
+from pymxs import runtime as rt
+from openpype.hosts.max.api import maintained_selection
+
+
+class ExtractRedshiftProxy(publish.Extractor):
+ """
+ Extract Redshift Proxy with rsProxy
+ """
+
+ order = pyblish.api.ExtractorOrder - 0.1
+ label = "Extract RedShift Proxy"
+ hosts = ["max"]
+ families = ["redshiftproxy"]
+
+ def process(self, instance):
+ container = instance.data["instance_node"]
+ start = int(instance.context.data.get("frameStart"))
+ end = int(instance.context.data.get("frameEnd"))
+
+ self.log.info("Extracting Redshift Proxy...")
+ stagingdir = self.staging_dir(instance)
+ rs_filename = "{name}.rs".format(**instance.data)
+ rs_filepath = os.path.join(stagingdir, rs_filename)
+ rs_filepath = rs_filepath.replace("\\", "/")
+
+ rs_filenames = self.get_rsfiles(instance, start, end)
+
+ with maintained_selection():
+ # select and export
+ con = rt.getNodeByName(container)
+ rt.select(con.Children)
+ # Redshift rsProxy command
+ # rsProxy fp selected compress connectivity startFrame endFrame
+ # camera warnExisting transformPivotToOrigin
+ rt.rsProxy(rs_filepath, 1, 0, 0, start, end, 0, 1, 1)
+
+ self.log.info("Performing Extraction ...")
+
+ if "representations" not in instance.data:
+ instance.data["representations"] = []
+
+ representation = {
+ 'name': 'rs',
+ 'ext': 'rs',
+ 'files': rs_filenames if len(rs_filenames) > 1 else rs_filenames[0], # noqa
+ "stagingDir": stagingdir,
+ }
+ instance.data["representations"].append(representation)
+ self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+ stagingdir))
+
+ def get_rsfiles(self, instance, startFrame, endFrame):
+ rs_filenames = []
+ rs_name = instance.data["name"]
+ for frame in range(startFrame, endFrame + 1):
+ rs_filename = "%s.%04d.rs" % (rs_name, frame)
+ rs_filenames.append(rs_filename)
+
+ return rs_filenames
diff --git a/openpype/hosts/max/plugins/publish/save_scene.py b/openpype/hosts/max/plugins/publish/save_scene.py
new file mode 100644
index 0000000000..a40788ab41
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/save_scene.py
@@ -0,0 +1,21 @@
+import pyblish.api
+import os
+
+
+class SaveCurrentScene(pyblish.api.ContextPlugin):
+ """Save current scene
+
+ """
+
+ label = "Save current file"
+ order = pyblish.api.ExtractorOrder - 0.49
+ hosts = ["max"]
+ families = ["maxrender", "workfile"]
+
+ def process(self, context):
+ from pymxs import runtime as rt
+ folder = rt.maxFilePath
+ file = rt.maxFileName
+ current = os.path.join(folder, file)
+ assert context.data["currentFile"] == current
+ rt.saveMaxFile(current)
diff --git a/openpype/hosts/max/plugins/publish/validate_deadline_publish.py b/openpype/hosts/max/plugins/publish/validate_deadline_publish.py
new file mode 100644
index 0000000000..b2f0e863f4
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_deadline_publish.py
@@ -0,0 +1,43 @@
+import os
+import pyblish.api
+from pymxs import runtime as rt
+from openpype.pipeline.publish import (
+ RepairAction,
+ ValidateContentsOrder,
+ PublishValidationError,
+ OptionalPyblishPluginMixin
+)
+from openpype.hosts.max.api.lib_rendersettings import RenderSettings
+
+
+class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
+ OptionalPyblishPluginMixin):
+    """Validate that the render output directory matches the current
+    scene name, so it is not the same in every submission.
+    """
+
+ order = ValidateContentsOrder
+ families = ["maxrender"]
+ hosts = ["max"]
+ label = "Render Output for Deadline"
+ optional = True
+ actions = [RepairAction]
+
+ def process(self, instance):
+ if not self.is_active(instance.data):
+ return
+ file = rt.maxFileName
+ filename, ext = os.path.splitext(file)
+ if filename not in rt.rendOutputFilename:
+ raise PublishValidationError(
+                "Render output folder "
+                "doesn't match the max scene name! "
+                "Use the Repair action to "
+                "fix the output file path."
+ )
+
+ @classmethod
+ def repair(cls, instance):
+ container = instance.data.get("instance_node")
+ RenderSettings().render_output(container)
+ cls.log.debug("Reset the render output folder...")
diff --git a/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py b/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py
new file mode 100644
index 0000000000..bc82f82f3b
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py
@@ -0,0 +1,54 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from pymxs import runtime as rt
+from openpype.pipeline.publish import RepairAction
+from openpype.hosts.max.api.lib import get_current_renderer
+
+
+class ValidateRendererRedshiftProxy(pyblish.api.InstancePlugin):
+ """
+ Validates Redshift as the current renderer for creating
+ Redshift Proxy
+ """
+
+ order = pyblish.api.ValidatorOrder
+ families = ["redshiftproxy"]
+ hosts = ["max"]
+ label = "Redshift Renderer"
+ actions = [RepairAction]
+
+ def process(self, instance):
+ invalid = self.get_redshift_renderer(instance)
+ if invalid:
+ raise PublishValidationError("Please install Redshift for 3dsMax"
+ " before using the Redshift proxy instance") # noqa
+ invalid = self.get_current_renderer(instance)
+ if invalid:
+            raise PublishValidationError(
+                "The Redshift proxy extraction was aborted because "
+                "the current renderer is not Redshift")
+
+ def get_redshift_renderer(self, instance):
+ invalid = list()
+ max_renderers_list = str(rt.RendererClass.classes)
+ if "Redshift_Renderer" not in max_renderers_list:
+ invalid.append(max_renderers_list)
+
+ return invalid
+
+ def get_current_renderer(self, instance):
+ invalid = list()
+ renderer_class = get_current_renderer()
+ current_renderer = str(renderer_class).split(":")[0]
+ if current_renderer != "Redshift_Renderer":
+ invalid.append(current_renderer)
+
+ return invalid
+
+ @classmethod
+ def repair(cls, instance):
+ for Renderer in rt.RendererClass.classes:
+ renderer = Renderer()
+ if "Redshift_Renderer" in str(renderer):
+ rt.renderers.production = renderer
+ break
diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py
index 159bfe9eb3..0bb1f186eb 100644
--- a/openpype/hosts/maya/api/setdress.py
+++ b/openpype/hosts/maya/api/setdress.py
@@ -28,7 +28,9 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.api.lib import (
matrix_equals,
- unique_namespace
+ unique_namespace,
+ get_container_transforms,
+ DEFAULT_MATRIX
)
log = logging.getLogger("PackageLoader")
@@ -183,8 +185,6 @@ def _add(instance, representation_id, loaders, namespace, root="|"):
"""
- from openpype.hosts.maya.lib import get_container_transforms
-
# Process within the namespace
with namespaced(namespace, new=False) as namespace:
@@ -379,8 +379,6 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
"""
- from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms
-
set_namespace = set_container['namespace']
project_name = legacy_io.active_project()
diff --git a/openpype/hosts/maya/plugins/load/load_assembly.py b/openpype/hosts/maya/plugins/load/load_assembly.py
index 902f38695c..275f21be5d 100644
--- a/openpype/hosts/maya/plugins/load/load_assembly.py
+++ b/openpype/hosts/maya/plugins/load/load_assembly.py
@@ -1,8 +1,14 @@
+import maya.cmds as cmds
+
from openpype.pipeline import (
load,
remove_container
)
+from openpype.hosts.maya.api.pipeline import containerise
+from openpype.hosts.maya.api.lib import unique_namespace
+from openpype.hosts.maya.api import setdress
+
class AssemblyLoader(load.LoaderPlugin):
@@ -16,9 +22,6 @@ class AssemblyLoader(load.LoaderPlugin):
def load(self, context, name, namespace, data):
- from openpype.hosts.maya.api.pipeline import containerise
- from openpype.hosts.maya.api.lib import unique_namespace
-
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
@@ -26,8 +29,6 @@ class AssemblyLoader(load.LoaderPlugin):
suffix="_",
)
- from openpype.hosts.maya.api import setdress
-
containers = setdress.load_package(
filepath=self.fname,
name=name,
@@ -50,15 +51,11 @@ class AssemblyLoader(load.LoaderPlugin):
def update(self, container, representation):
- from openpype import setdress
return setdress.update_package(container, representation)
def remove(self, container):
"""Remove all sub containers"""
- from openpype import setdress
- import maya.cmds as cmds
-
# Remove all members
member_containers = setdress.get_contained_containers(container)
for member_container in member_containers:
diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
index 4870f27bff..63849cfd12 100644
--- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
+++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py
@@ -13,7 +13,6 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
-
invalid = list()
if not instance.data["setMembers"]:
objectset_name = instance.data['name']
@@ -22,6 +21,10 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
+ # Allow renderlayer and workfile to be empty
+ skip_families = ["workfile", "renderlayer", "rendersetup"]
+ if instance.data.get("family") in skip_families:
+ return
invalid = self.get_invalid(instance)
if invalid:
diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
index 499bfd4e37..cba70a21b7 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py
@@ -55,7 +55,8 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
if shapes:
instance_nodes.extend(shapes)
- scene_nodes = cmds.ls(type="transform") + cmds.ls(type="mesh")
+ scene_nodes = cmds.ls(type="transform", long=True)
+ scene_nodes += cmds.ls(type="mesh", long=True)
scene_nodes = set(scene_nodes) - set(instance_nodes)
scene_nodes_by_basename = defaultdict(list)
@@ -76,7 +77,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
if len(ids) > 1:
cls.log.error(
"\"{}\" id mismatch to: {}".format(
- instance_node.longName(), matches
+ instance_node, matches
)
)
invalid[instance_node] = matches
diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py
index d649ffae7f..75b0f80d21 100644
--- a/openpype/hosts/nuke/api/pipeline.py
+++ b/openpype/hosts/nuke/api/pipeline.py
@@ -151,6 +151,7 @@ class NukeHost(
def add_nuke_callbacks():
""" Adding all available nuke callbacks
"""
+ nuke_settings = get_current_project_settings()["nuke"]
workfile_settings = WorkfileSettings()
# Set context settings.
nuke.addOnCreate(
@@ -169,7 +170,10 @@ def add_nuke_callbacks():
# # set apply all workfile settings on script load and save
nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
- nuke.addFilenameFilter(dirmap_file_name_filter)
+ if nuke_settings["nuke-dirmap"]["enabled"]:
+ log.info("Added Nuke's dirmaping callback ...")
+ # Add dirmap for file paths.
+ nuke.addFilenameFilter(dirmap_file_name_filter)
log.info("Added Nuke callbacks ...")
diff --git a/openpype/hosts/nuke/startup/__init__.py b/openpype/hosts/nuke/startup/__init__.py
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py b/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py
new file mode 100644
index 0000000000..f0cbabe20f
--- /dev/null
+++ b/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py
@@ -0,0 +1,47 @@
+""" OpenPype custom script for resetting read nodes start frame values """
+
+import nuke
+import nukescripts
+
+
+class FrameSettingsPanel(nukescripts.PythonPanel):
+ """ Frame Settings Panel """
+ def __init__(self):
+ nukescripts.PythonPanel.__init__(self, "Set Frame Start (Read Node)")
+
+ # create knobs
+ self.frame = nuke.Int_Knob(
+ 'frame', 'Frame Number')
+ self.selected = nuke.Boolean_Knob("selection")
+ # add knobs to panel
+ self.addKnob(self.selected)
+ self.addKnob(self.frame)
+
+ # set values
+ self.selected.setValue(False)
+ self.frame.setValue(nuke.root().firstFrame())
+
+ def process(self):
+ """ Process the panel values. """
+ # get values
+ frame = self.frame.value()
+ if self.selected.value():
+ # selected nodes processing
+ if not nuke.selectedNodes():
+ return
+ for rn_ in nuke.selectedNodes():
+ if rn_.Class() != "Read":
+ continue
+ rn_["frame_mode"].setValue("start_at")
+ rn_["frame"].setValue(str(frame))
+ else:
+ # all nodes processing
+ for rn_ in nuke.allNodes(filter="Read"):
+ rn_["frame_mode"].setValue("start_at")
+ rn_["frame"].setValue(str(frame))
+
+
+def main():
+ p_ = FrameSettingsPanel()
+ if p_.showModalDialog():
+ print(p_.process())
diff --git a/openpype/hosts/resolve/api/__init__.py b/openpype/hosts/resolve/api/__init__.py
index 00a598548e..2b4546f8d6 100644
--- a/openpype/hosts/resolve/api/__init__.py
+++ b/openpype/hosts/resolve/api/__init__.py
@@ -24,6 +24,8 @@ from .lib import (
get_project_manager,
get_current_project,
get_current_timeline,
+ get_any_timeline,
+ get_new_timeline,
create_bin,
get_media_pool_item,
create_media_pool_item,
@@ -95,6 +97,8 @@ __all__ = [
"get_project_manager",
"get_current_project",
"get_current_timeline",
+ "get_any_timeline",
+ "get_new_timeline",
"create_bin",
"get_media_pool_item",
"create_media_pool_item",
diff --git a/openpype/hosts/resolve/api/lib.py b/openpype/hosts/resolve/api/lib.py
index b3ad20df39..a44c527f13 100644
--- a/openpype/hosts/resolve/api/lib.py
+++ b/openpype/hosts/resolve/api/lib.py
@@ -15,6 +15,7 @@ log = Logger.get_logger(__name__)
self = sys.modules[__name__]
self.project_manager = None
self.media_storage = None
+self.current_project = None
# OpenPype sequential rename variables
self.rename_index = 0
@@ -85,22 +86,60 @@ def get_media_storage():
def get_current_project():
- # initialize project manager
- get_project_manager()
+ """Get current project object.
+ """
+ if not self.current_project:
+ self.current_project = get_project_manager().GetCurrentProject()
- return self.project_manager.GetCurrentProject()
+ return self.current_project
def get_current_timeline(new=False):
- # get current project
+ """Get current timeline object.
+
+ Args:
+ new (bool)[optional]: [DEPRECATED] if True it will create
+ new timeline if none exists
+
+ Returns:
+ TODO: will need to reflect future `None`
+ object: resolve.Timeline
+ """
project = get_current_project()
+ timeline = project.GetCurrentTimeline()
+ # return current timeline if any
+ if timeline:
+ return timeline
+
+ # TODO: [deprecated] and will be removed in future
if new:
- media_pool = project.GetMediaPool()
- new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
- project.SetCurrentTimeline(new_timeline)
+ return get_new_timeline()
- return project.GetCurrentTimeline()
+
+def get_any_timeline():
+ """Get any timeline object.
+
+ Returns:
+ object | None: resolve.Timeline
+ """
+ project = get_current_project()
+ timeline_count = project.GetTimelineCount()
+ if timeline_count > 0:
+ return project.GetTimelineByIndex(1)
+
+
+def get_new_timeline():
+ """Get new timeline object.
+
+ Returns:
+ object: resolve.Timeline
+ """
+ project = get_current_project()
+ media_pool = project.GetMediaPool()
+ new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
+ project.SetCurrentTimeline(new_timeline)
+ return new_timeline
def create_bin(name: str, root: object = None) -> object:
@@ -312,7 +351,13 @@ def get_current_timeline_items(
track_type = track_type or "video"
selecting_color = selecting_color or "Chocolate"
project = get_current_project()
- timeline = get_current_timeline()
+
+ # get timeline anyhow
+ timeline = (
+ get_current_timeline() or
+ get_any_timeline() or
+ get_new_timeline()
+ )
selected_clips = []
# get all tracks count filtered by track type
diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py
index 609cff60f7..e5846c2fc2 100644
--- a/openpype/hosts/resolve/api/plugin.py
+++ b/openpype/hosts/resolve/api/plugin.py
@@ -327,7 +327,10 @@ class ClipLoader:
self.active_timeline = options["timeline"]
else:
# create new sequence
- self.active_timeline = lib.get_current_timeline(new=True)
+ self.active_timeline = (
+ lib.get_current_timeline() or
+ lib.get_new_timeline()
+ )
else:
self.active_timeline = lib.get_current_timeline()
diff --git a/openpype/hosts/resolve/api/workio.py b/openpype/hosts/resolve/api/workio.py
index 5ce73eea53..5966fa6a43 100644
--- a/openpype/hosts/resolve/api/workio.py
+++ b/openpype/hosts/resolve/api/workio.py
@@ -43,18 +43,22 @@ def open_file(filepath):
"""
Loading project
"""
+
+ from . import bmdvr
+
pm = get_project_manager()
+ page = bmdvr.GetCurrentPage()
+ if page is not None:
+ # Save current project only if Resolve has an active page, otherwise
+ # we consider Resolve being in a pre-launch state (no open UI yet)
+ project = pm.GetCurrentProject()
+ print(f"Saving current project: {project}")
+ pm.SaveProject()
+
file = os.path.basename(filepath)
fname, _ = os.path.splitext(file)
dname, _ = fname.split("_v")
-
- # deal with current project
- project = pm.GetCurrentProject()
- log.info(f"Test `pm`: {pm}")
- pm.SaveProject()
-
try:
- log.info(f"Test `dname`: {dname}")
if not set_project_manager_to_folder_name(dname):
raise
# load project from input path
@@ -72,6 +76,7 @@ def open_file(filepath):
return False
return True
+
def current_file():
pm = get_project_manager()
current_dir = os.getenv("AVALON_WORKDIR")
diff --git a/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py b/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py
new file mode 100644
index 0000000000..0e27ddb8c3
--- /dev/null
+++ b/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py
@@ -0,0 +1,45 @@
+import os
+
+from openpype.lib import PreLaunchHook
+import openpype.hosts.resolve
+
+
+class ResolveLaunchLastWorkfile(PreLaunchHook):
+ """Special hook to open last workfile for Resolve.
+
+ Checks 'start_last_workfile', if set to False, it will not open last
+ workfile. This property is set explicitly in Launcher.
+ """
+
+ # Execute after workfile template copy
+ order = 10
+ app_groups = ["resolve"]
+
+ def execute(self):
+ if not self.data.get("start_last_workfile"):
+ self.log.info("It is set to not start last workfile on start.")
+ return
+
+ last_workfile = self.data.get("last_workfile_path")
+ if not last_workfile:
+ self.log.warning("Last workfile was not collected.")
+ return
+
+ if not os.path.exists(last_workfile):
+ self.log.info("Current context does not have any workfile yet.")
+ return
+
+ # Add path to launch environment for the startup script to pick up
+ self.log.info(f"Setting OPENPYPE_RESOLVE_OPEN_ON_LAUNCH to launch "
+ f"last workfile: {last_workfile}")
+ key = "OPENPYPE_RESOLVE_OPEN_ON_LAUNCH"
+ self.launch_context.env[key] = last_workfile
+
+ # Set the openpype prelaunch startup script path for easy access
+ # in the LUA .scriptlib code
+ op_resolve_root = os.path.dirname(openpype.hosts.resolve.__file__)
+ script_path = os.path.join(op_resolve_root, "startup.py")
+ key = "OPENPYPE_RESOLVE_STARTUP_SCRIPT"
+ self.launch_context.env[key] = script_path
+ self.log.info("Setting OPENPYPE_RESOLVE_STARTUP_SCRIPT to: "
+ f"{script_path}")
diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py
index d30a7ea272..05bfb003d6 100644
--- a/openpype/hosts/resolve/plugins/load/load_clip.py
+++ b/openpype/hosts/resolve/plugins/load/load_clip.py
@@ -19,6 +19,7 @@ from openpype.lib.transcoding import (
IMAGE_EXTENSIONS
)
+
class LoadClip(plugin.TimelineItemLoader):
"""Load a subset to timeline as clip
diff --git a/openpype/hosts/resolve/startup.py b/openpype/hosts/resolve/startup.py
new file mode 100644
index 0000000000..79a64e0fbf
--- /dev/null
+++ b/openpype/hosts/resolve/startup.py
@@ -0,0 +1,62 @@
+"""This script is used as a startup script in Resolve through a .scriptlib file
+
+It triggers directly after the launch of Resolve and it's recommended to keep
+it optimized for fast performance since the Resolve UI is actually interactive
+while this is running. As such, there's nothing ensuring the user isn't
+continuing manually before any of the logic here runs. For that reason we also
+try to delay any imports as much as possible.
+
+This code runs in a separate process to the main Resolve process.
+
+"""
+import os
+
+import openpype.hosts.resolve.api
+
+
+def ensure_installed_host():
+ """Install resolve host with openpype and return the registered host.
+
+ This function can be called multiple times without triggering an
+ additional install.
+ """
+ from openpype.pipeline import install_host, registered_host
+ host = registered_host()
+ if host:
+ return host
+
+ install_host(openpype.hosts.resolve.api)
+ return registered_host()
+
+
+def launch_menu():
+ print("Launching Resolve OpenPype menu..")
+ ensure_installed_host()
+ openpype.hosts.resolve.api.launch_pype_menu()
+
+
+def open_file(path):
+ # Avoid the need to "install" the host
+ host = ensure_installed_host()
+ host.open_file(path)
+
+
+def main():
+ # Open last workfile
+ workfile_path = os.environ.get("OPENPYPE_RESOLVE_OPEN_ON_LAUNCH")
+ if workfile_path:
+ open_file(workfile_path)
+ else:
+ print("No last workfile set to open. Skipping..")
+
+ # Launch OpenPype menu
+ from openpype.settings import get_project_settings
+ from openpype.pipeline.context_tools import get_current_project_name
+ project_name = get_current_project_name()
+ settings = get_project_settings(project_name)
+ if settings.get("resolve", {}).get("launch_openpype_menu_on_start", True):
+ launch_menu()
+
+
+if __name__ == "__main__":
+ main()
diff --git a/openpype/hosts/resolve/utility_scripts/__OpenPype__Menu__.py b/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py
similarity index 100%
rename from openpype/hosts/resolve/utility_scripts/__OpenPype__Menu__.py
rename to openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py
diff --git a/openpype/hosts/resolve/utility_scripts/README.markdown b/openpype/hosts/resolve/utility_scripts/README.markdown
deleted file mode 100644
index 8b13789179..0000000000
--- a/openpype/hosts/resolve/utility_scripts/README.markdown
+++ /dev/null
@@ -1 +0,0 @@
-
diff --git a/openpype/hosts/resolve/utility_scripts/OTIO_export.py b/openpype/hosts/resolve/utility_scripts/develop/OTIO_export.py
similarity index 100%
rename from openpype/hosts/resolve/utility_scripts/OTIO_export.py
rename to openpype/hosts/resolve/utility_scripts/develop/OTIO_export.py
diff --git a/openpype/hosts/resolve/utility_scripts/OTIO_import.py b/openpype/hosts/resolve/utility_scripts/develop/OTIO_import.py
similarity index 100%
rename from openpype/hosts/resolve/utility_scripts/OTIO_import.py
rename to openpype/hosts/resolve/utility_scripts/develop/OTIO_import.py
diff --git a/openpype/hosts/resolve/utility_scripts/OpenPype_sync_util_scripts.py b/openpype/hosts/resolve/utility_scripts/develop/OpenPype_sync_util_scripts.py
similarity index 100%
rename from openpype/hosts/resolve/utility_scripts/OpenPype_sync_util_scripts.py
rename to openpype/hosts/resolve/utility_scripts/develop/OpenPype_sync_util_scripts.py
diff --git a/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib b/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib
new file mode 100644
index 0000000000..ec9b30a18d
--- /dev/null
+++ b/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib
@@ -0,0 +1,21 @@
+-- Run OpenPype's Python launch script for Resolve
+function file_exists(name)
+ local f = io.open(name, "r")
+ return f ~= nil and io.close(f)
+end
+
+
+openpype_startup_script = os.getenv("OPENPYPE_RESOLVE_STARTUP_SCRIPT")
+if openpype_startup_script ~= nil then
+ script = fusion:MapPath(openpype_startup_script)
+
+ if file_exists(script) then
+ -- We must use RunScript to ensure it runs in a separate
+ -- process to Resolve itself to avoid a deadlock for
+ -- certain imports of OpenPype libraries or Qt
+ print("Running launch script: " .. script)
+ fusion:RunScript(script)
+ else
+ print("Launch script not found at: " .. script)
+ end
+end
\ No newline at end of file
diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py
new file mode 100644
index 0000000000..8270496f64
--- /dev/null
+++ b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py
@@ -0,0 +1,13 @@
+#! python3
+from openpype.pipeline import install_host
+from openpype.hosts.resolve import api as bmdvr
+from openpype.hosts.resolve.api.lib import get_current_project
+
+if __name__ == "__main__":
+ install_host(bmdvr)
+ project = get_current_project()
+ timeline_count = project.GetTimelineCount()
+ print(f"Timeline count: {timeline_count}")
+ timeline = project.GetTimelineByIndex(timeline_count)
+ print(f"Timeline name: {timeline.GetName()}")
+ print(timeline.GetTrackCount("video"))
diff --git a/openpype/hosts/resolve/utils.py b/openpype/hosts/resolve/utils.py
index 8e5dd9a188..5e3003862f 100644
--- a/openpype/hosts/resolve/utils.py
+++ b/openpype/hosts/resolve/utils.py
@@ -1,6 +1,6 @@
import os
import shutil
-from openpype.lib import Logger
+from openpype.lib import Logger, is_running_from_build
RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -29,6 +29,9 @@ def setup(env):
log.info("Utility Scripts Dir: `{}`".format(util_scripts_paths))
log.info("Utility Scripts: `{}`".format(scripts))
+ # Make sure scripts dir exists
+ os.makedirs(util_scripts_dir, exist_ok=True)
+
# make sure no script file is in folder
for script in os.listdir(util_scripts_dir):
path = os.path.join(util_scripts_dir, script)
@@ -41,8 +44,23 @@ def setup(env):
# copy scripts into Resolve's utility scripts dir
for directory, scripts in scripts.items():
for script in scripts:
+ if (
+ is_running_from_build() and
+ script in ["tests", "develop"]
+ ):
+ # skip dev/test scripts when running from build
+ continue
+
src = os.path.join(directory, script)
dst = os.path.join(util_scripts_dir, script)
+
+ # TODO: Make this a less hacky workaround
+ if script == "openpype_startup.scriptlib":
+ # Handle special case for the scriptlib, which needs to live one
+ # folder up from the Comp folder in the Fusion scripts
+ dst = os.path.join(os.path.dirname(util_scripts_dir),
+ script)
+
log.info("Copying `{}` to `{}`...".format(src, dst))
if os.path.isdir(src):
shutil.copytree(
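For reference, a minimal sketch of where the copied files end up. The utility scripts directory below is hypothetical (the real location depends on platform and Resolve installation); the point is that only `openpype_startup.scriptlib` is redirected one folder above the Comp scripts folder:

```python
import os

# Hypothetical Fusion "Comp" utility scripts directory; the actual path
# depends on the platform and the Resolve installation.
util_scripts_dir = (
    "C:/ProgramData/Blackmagic Design/DaVinci Resolve/Fusion/Scripts/Comp"
)

for script in ("OpenPype__Menu.py", "openpype_startup.scriptlib"):
    if script == "openpype_startup.scriptlib":
        # The scriptlib must live one folder above the Comp scripts folder
        # so Resolve evaluates it at startup.
        dst = os.path.join(os.path.dirname(util_scripts_dir), script)
    else:
        dst = os.path.join(util_scripts_dir, script)
    print(f"{script} -> {dst}")
```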
diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py
index 1119b5c16c..ed23950b35 100644
--- a/openpype/hosts/unreal/addon.py
+++ b/openpype/hosts/unreal/addon.py
@@ -1,5 +1,7 @@
import os
+import re
from openpype.modules import IHostAddon, OpenPypeModule
+from openpype.widgets.message_window import Window
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@@ -19,6 +21,20 @@ class UnrealAddon(OpenPypeModule, IHostAddon):
from .lib import get_compatible_integration
+ pattern = re.compile(r'^\d+-\d+$')
+
+ if not pattern.match(app.name):
+ msg = (
+ "Unreal application key in the settings must be in the "
+ "format '5-0' or '5-1'."
+ )
+ Window(
+ parent=None,
+ title="Unreal application name format",
+ message=msg,
+ level="critical")
+ raise ValueError(msg)
+
ue_version = app.name.replace("-", ".")
unreal_plugin_path = os.path.join(
UNREAL_ROOT_DIR, "integration", "UE_{}".format(ue_version), "Ayon"
diff --git a/openpype/hosts/unreal/api/__init__.py b/openpype/hosts/unreal/api/__init__.py
index de0fce13d5..ac6a91eae9 100644
--- a/openpype/hosts/unreal/api/__init__.py
+++ b/openpype/hosts/unreal/api/__init__.py
@@ -22,6 +22,8 @@ from .pipeline import (
show_tools_popup,
instantiate,
UnrealHost,
+ set_sequence_hierarchy,
+ generate_sequence,
maintained_selection
)
@@ -41,5 +43,7 @@ __all__ = [
"show_tools_popup",
"instantiate",
"UnrealHost",
+ "set_sequence_hierarchy",
+ "generate_sequence",
"maintained_selection"
]
diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py
index bb45fa8c01..72816c9b81 100644
--- a/openpype/hosts/unreal/api/pipeline.py
+++ b/openpype/hosts/unreal/api/pipeline.py
@@ -9,12 +9,14 @@ import time
import pyblish.api
+from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AYON_CONTAINER_ID,
+ legacy_io,
)
from openpype.tools.utils import host_tools
import openpype.hosts.unreal
@@ -512,6 +514,141 @@ def get_subsequences(sequence: unreal.LevelSequence):
return []
+def set_sequence_hierarchy(
+ seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths
+):
+ # Get existing sequencer tracks or create them if they don't exist
+ tracks = seq_i.get_master_tracks()
+ subscene_track = None
+ visibility_track = None
+ for t in tracks:
+ if t.get_class() == unreal.MovieSceneSubTrack.static_class():
+ subscene_track = t
+ if (t.get_class() ==
+ unreal.MovieSceneLevelVisibilityTrack.static_class()):
+ visibility_track = t
+ if not subscene_track:
+ subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
+ if not visibility_track:
+ visibility_track = seq_i.add_master_track(
+ unreal.MovieSceneLevelVisibilityTrack)
+
+ # Create the sub-scene section
+ subscenes = subscene_track.get_sections()
+ subscene = None
+ for s in subscenes:
+ if s.get_editor_property('sub_sequence') == seq_j:
+ subscene = s
+ break
+ if not subscene:
+ subscene = subscene_track.add_section()
+ subscene.set_row_index(len(subscene_track.get_sections()))
+ subscene.set_editor_property('sub_sequence', seq_j)
+ subscene.set_range(
+ min_frame_j,
+ max_frame_j + 1)
+
+ # Create the visibility section
+ ar = unreal.AssetRegistryHelpers.get_asset_registry()
+ maps = []
+ for m in map_paths:
+ # Unreal requires the level to be loaded to get the map name
+ unreal.EditorLevelLibrary.save_all_dirty_levels()
+ unreal.EditorLevelLibrary.load_level(m)
+ maps.append(str(ar.get_asset_by_object_path(m).asset_name))
+
+ vis_section = visibility_track.add_section()
+ index = len(visibility_track.get_sections())
+
+ vis_section.set_range(
+ min_frame_j,
+ max_frame_j + 1)
+ vis_section.set_visibility(unreal.LevelVisibility.VISIBLE)
+ vis_section.set_row_index(index)
+ vis_section.set_level_names(maps)
+
+ if min_frame_j > 1:
+ hid_section = visibility_track.add_section()
+ hid_section.set_range(
+ 1,
+ min_frame_j)
+ hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
+ hid_section.set_row_index(index)
+ hid_section.set_level_names(maps)
+ if max_frame_j < max_frame_i:
+ hid_section = visibility_track.add_section()
+ hid_section.set_range(
+ max_frame_j + 1,
+ max_frame_i + 1)
+ hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
+ hid_section.set_row_index(index)
+ hid_section.set_level_names(maps)
+
+
+def generate_sequence(h, h_dir):
+ tools = unreal.AssetToolsHelpers().get_asset_tools()
+
+ sequence = tools.create_asset(
+ asset_name=h,
+ package_path=h_dir,
+ asset_class=unreal.LevelSequence,
+ factory=unreal.LevelSequenceFactoryNew()
+ )
+
+ project_name = legacy_io.active_project()
+ asset_data = get_asset_by_name(
+ project_name,
+ h_dir.split('/')[-1],
+ fields=["_id", "data.fps"]
+ )
+
+ start_frames = []
+ end_frames = []
+
+ elements = list(get_assets(
+ project_name,
+ parent_ids=[asset_data["_id"]],
+ fields=["_id", "data.clipIn", "data.clipOut"]
+ ))
+ for e in elements:
+ start_frames.append(e.get('data').get('clipIn'))
+ end_frames.append(e.get('data').get('clipOut'))
+
+ elements.extend(get_assets(
+ project_name,
+ parent_ids=[e["_id"]],
+ fields=["_id", "data.clipIn", "data.clipOut"]
+ ))
+
+ min_frame = min(start_frames)
+ max_frame = max(end_frames)
+
+ fps = asset_data.get('data').get("fps")
+
+ sequence.set_display_rate(
+ unreal.FrameRate(fps, 1.0))
+ sequence.set_playback_start(min_frame)
+ sequence.set_playback_end(max_frame)
+
+ sequence.set_work_range_start(min_frame / fps)
+ sequence.set_work_range_end(max_frame / fps)
+ sequence.set_view_range_start(min_frame / fps)
+ sequence.set_view_range_end(max_frame / fps)
+
+ tracks = sequence.get_master_tracks()
+ track = None
+ for t in tracks:
+ if (t.get_class() ==
+ unreal.MovieSceneCameraCutTrack.static_class()):
+ track = t
+ break
+ if not track:
+ track = sequence.add_master_track(
+ unreal.MovieSceneCameraCutTrack)
+
+ return sequence, (min_frame, max_frame)
+
+
@contextmanager
def maintained_selection():
"""Stub to be either implemented or replaced.
diff --git a/openpype/hosts/unreal/plugins/create/create_uasset.py b/openpype/hosts/unreal/plugins/create/create_uasset.py
index c78518e86b..f70ecc55b3 100644
--- a/openpype/hosts/unreal/plugins/create/create_uasset.py
+++ b/openpype/hosts/unreal/plugins/create/create_uasset.py
@@ -17,6 +17,8 @@ class CreateUAsset(UnrealAssetCreator):
family = "uasset"
icon = "cube"
+ extension = ".uasset"
+
def create(self, subset_name, instance_data, pre_create_data):
if pre_create_data.get("use_selection"):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
@@ -37,10 +39,28 @@ class CreateUAsset(UnrealAssetCreator):
f"{Path(obj).name} is not on the disk. Likely it needs to"
"be saved first.")
- if Path(sys_path).suffix != ".uasset":
- raise CreatorError(f"{Path(sys_path).name} is not a UAsset.")
+ if Path(sys_path).suffix != self.extension:
+ raise CreatorError(
+ f"{Path(sys_path).name} is not a {self.label}.")
super(CreateUAsset, self).create(
subset_name,
instance_data,
pre_create_data)
+
+
+class CreateUMap(CreateUAsset):
+ """Create Level."""
+
+ identifier = "io.ayon.creators.unreal.umap"
+ label = "Level"
+ family = "uasset"
+ extension = ".umap"
+
+ def create(self, subset_name, instance_data, pre_create_data):
+ instance_data["families"] = ["umap"]
+
+ super(CreateUMap, self).create(
+ subset_name,
+ instance_data,
+ pre_create_data)
diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py
index 072b3b1467..59ea14697d 100644
--- a/openpype/hosts/unreal/plugins/load/load_camera.py
+++ b/openpype/hosts/unreal/plugins/load/load_camera.py
@@ -3,16 +3,24 @@
from pathlib import Path
import unreal
-from unreal import EditorAssetLibrary
-from unreal import EditorLevelLibrary
-from unreal import EditorLevelUtils
-from openpype.client import get_assets, get_asset_by_name
+from unreal import (
+ EditorAssetLibrary,
+ EditorLevelLibrary,
+ EditorLevelUtils,
+ LevelSequenceEditorBlueprintLibrary as LevelSequenceLib,
+)
+from openpype.client import get_asset_by_name
from openpype.pipeline import (
AYON_CONTAINER_ID,
legacy_io,
)
from openpype.hosts.unreal.api import plugin
-from openpype.hosts.unreal.api import pipeline as unreal_pipeline
+from openpype.hosts.unreal.api.pipeline import (
+ generate_sequence,
+ set_sequence_hierarchy,
+ create_container,
+ imprint,
+)
class CameraLoader(plugin.Loader):
@@ -24,32 +32,6 @@ class CameraLoader(plugin.Loader):
icon = "cube"
color = "orange"
- def _set_sequence_hierarchy(
- self, seq_i, seq_j, min_frame_j, max_frame_j
- ):
- tracks = seq_i.get_master_tracks()
- track = None
- for t in tracks:
- if t.get_class() == unreal.MovieSceneSubTrack.static_class():
- track = t
- break
- if not track:
- track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
-
- subscenes = track.get_sections()
- subscene = None
- for s in subscenes:
- if s.get_editor_property('sub_sequence') == seq_j:
- subscene = s
- break
- if not subscene:
- subscene = track.add_section()
- subscene.set_row_index(len(track.get_sections()))
- subscene.set_editor_property('sub_sequence', seq_j)
- subscene.set_range(
- min_frame_j,
- max_frame_j + 1)
-
def _import_camera(
self, world, sequence, bindings, import_fbx_settings, import_filename
):
@@ -110,10 +92,7 @@ class CameraLoader(plugin.Loader):
hierarchy_dir_list.append(hierarchy_dir)
asset = context.get('asset').get('name')
suffix = "_CON"
- if asset:
- asset_name = "{}_{}".format(asset, name)
- else:
- asset_name = "{}".format(name)
+ asset_name = f"{asset}_{name}" if asset else f"{name}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
@@ -127,23 +106,15 @@ class CameraLoader(plugin.Loader):
# Get highest number to make a unique name
folders = [a for a in asset_content
if a[-1] == "/" and f"{name}_" in a]
- f_numbers = []
- for f in folders:
- # Get number from folder name. Splits the string by "_" and
- # removes the last element (which is a "/").
- f_numbers.append(int(f.split("_")[-1][:-1]))
+ # Get number from folder name. Splits the string by "_" and
+ # removes the last element (which is a "/").
+ f_numbers = [int(f.split("_")[-1][:-1]) for f in folders]
f_numbers.sort()
- if not f_numbers:
- unique_number = 1
- else:
- unique_number = f_numbers[-1] + 1
+ unique_number = f_numbers[-1] + 1 if f_numbers else 1
asset_dir, container_name = tools.create_unique_asset_name(
f"{hierarchy_dir}/{asset}/{name}_{unique_number:02d}", suffix="")
- asset_path = Path(asset_dir)
- asset_path_parent = str(asset_path.parent.as_posix())
-
container_name += suffix
EditorAssetLibrary.make_directory(asset_dir)
@@ -156,9 +127,9 @@ class CameraLoader(plugin.Loader):
if not EditorAssetLibrary.does_asset_exist(master_level):
EditorLevelLibrary.new_level(f"{h_dir}/{h_asset}_map")
- level = f"{asset_path_parent}/{asset}_map.{asset}_map"
+ level = f"{asset_dir}/{asset}_map_camera.{asset}_map_camera"
if not EditorAssetLibrary.does_asset_exist(level):
- EditorLevelLibrary.new_level(f"{asset_path_parent}/{asset}_map")
+ EditorLevelLibrary.new_level(f"{asset_dir}/{asset}_map_camera")
EditorLevelLibrary.load_level(master_level)
EditorLevelUtils.add_level_to_world(
@@ -169,27 +140,13 @@ class CameraLoader(plugin.Loader):
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(level)
- project_name = legacy_io.active_project()
- # TODO refactor
- # - Creating of hierarchy should be a function in unreal integration
- # - it's used in multiple loaders but must not be loader's logic
- # - hard to say what is purpose of the loop
- # - variables does not match their meaning
- # - why scene is stored to sequences?
- # - asset documents vs. elements
- # - cleanup variable names in whole function
- # - e.g. 'asset', 'asset_name', 'asset_data', 'asset_doc'
- # - really inefficient queries of asset documents
- # - existing asset in scene is considered as "with correct values"
- # - variable 'elements' is modified during it's loop
# Get all the sequences in the hierarchy. It will create them, if
# they don't exist.
- sequences = []
frame_ranges = []
- i = 0
- for h in hierarchy_dir_list:
+ sequences = []
+ for (h_dir, h) in zip(hierarchy_dir_list, hierarchy):
root_content = EditorAssetLibrary.list_assets(
- h, recursive=False, include_folder=False)
+ h_dir, recursive=False, include_folder=False)
existing_sequences = [
EditorAssetLibrary.find_asset_data(asset)
@@ -198,57 +155,17 @@ class CameraLoader(plugin.Loader):
asset).get_class().get_name() == 'LevelSequence'
]
- if not existing_sequences:
- scene = tools.create_asset(
- asset_name=hierarchy[i],
- package_path=h,
- asset_class=unreal.LevelSequence,
- factory=unreal.LevelSequenceFactoryNew()
- )
-
- asset_data = get_asset_by_name(
- project_name,
- h.split('/')[-1],
- fields=["_id", "data.fps"]
- )
-
- start_frames = []
- end_frames = []
-
- elements = list(get_assets(
- project_name,
- parent_ids=[asset_data["_id"]],
- fields=["_id", "data.clipIn", "data.clipOut"]
- ))
-
- for e in elements:
- start_frames.append(e.get('data').get('clipIn'))
- end_frames.append(e.get('data').get('clipOut'))
-
- elements.extend(get_assets(
- project_name,
- parent_ids=[e["_id"]],
- fields=["_id", "data.clipIn", "data.clipOut"]
- ))
-
- min_frame = min(start_frames)
- max_frame = max(end_frames)
-
- scene.set_display_rate(
- unreal.FrameRate(asset_data.get('data').get("fps"), 1.0))
- scene.set_playback_start(min_frame)
- scene.set_playback_end(max_frame)
-
- sequences.append(scene)
- frame_ranges.append((min_frame, max_frame))
- else:
- for e in existing_sequences:
- sequences.append(e.get_asset())
+ if existing_sequences:
+ for seq in existing_sequences:
+ sequences.append(seq.get_asset())
frame_ranges.append((
- e.get_asset().get_playback_start(),
- e.get_asset().get_playback_end()))
+ seq.get_asset().get_playback_start(),
+ seq.get_asset().get_playback_end()))
+ else:
+ sequence, frame_range = generate_sequence(h, h_dir)
- i += 1
+ sequences.append(sequence)
+ frame_ranges.append(frame_range)
EditorAssetLibrary.make_directory(asset_dir)
@@ -260,19 +177,24 @@ class CameraLoader(plugin.Loader):
)
# Add sequences data to hierarchy
- for i in range(0, len(sequences) - 1):
- self._set_sequence_hierarchy(
+ for i in range(len(sequences) - 1):
+ set_sequence_hierarchy(
sequences[i], sequences[i + 1],
- frame_ranges[i + 1][0], frame_ranges[i + 1][1])
+ frame_ranges[i][1],
+ frame_ranges[i + 1][0], frame_ranges[i + 1][1],
+ [level])
+ project_name = legacy_io.active_project()
data = get_asset_by_name(project_name, asset)["data"]
cam_seq.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
cam_seq.set_playback_start(data.get('clipIn'))
cam_seq.set_playback_end(data.get('clipOut') + 1)
- self._set_sequence_hierarchy(
+ set_sequence_hierarchy(
sequences[-1], cam_seq,
- data.get('clipIn'), data.get('clipOut'))
+ frame_ranges[-1][1],
+ data.get('clipIn'), data.get('clipOut'),
+ [level])
settings = unreal.MovieSceneUserImportFBXSettings()
settings.set_editor_property('reduce_keys', False)
@@ -307,7 +229,7 @@ class CameraLoader(plugin.Loader):
key.set_time(unreal.FrameNumber(value=new_time))
# Create Asset Container
- unreal_pipeline.create_container(
+ create_container(
container=container_name, path=asset_dir)
data = {
@@ -322,14 +244,14 @@ class CameraLoader(plugin.Loader):
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
- unreal_pipeline.imprint(
- "{}/{}".format(asset_dir, container_name), data)
+ imprint(f"{asset_dir}/{container_name}", data)
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(master_level)
+ # Save all assets in the hierarchy
asset_content = EditorAssetLibrary.list_assets(
- asset_dir, recursive=True, include_folder=True
+ hierarchy_dir_list[0], recursive=True, include_folder=False
)
for a in asset_content:
@@ -340,29 +262,27 @@ class CameraLoader(plugin.Loader):
def update(self, container, representation):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
- root = "/Game/ayon"
+ curr_level_sequence = LevelSequenceLib.get_current_level_sequence()
+ curr_time = LevelSequenceLib.get_current_time()
+ is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport()
+
+ editor_subsystem = unreal.UnrealEditorSubsystem()
+ vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info()
asset_dir = container.get('namespace')
- context = representation.get("context")
-
- hierarchy = context.get('hierarchy').split("/")
- h_dir = f"{root}/{hierarchy[0]}"
- h_asset = hierarchy[0]
- master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
-
EditorLevelLibrary.save_current_level()
- filter = unreal.ARFilter(
+ _filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[asset_dir],
recursive_paths=False)
- sequences = ar.get_assets(filter)
- filter = unreal.ARFilter(
+ sequences = ar.get_assets(_filter)
+ _filter = unreal.ARFilter(
class_names=["World"],
- package_paths=[str(Path(asset_dir).parent.as_posix())],
+ package_paths=[asset_dir],
recursive_paths=True)
- maps = ar.get_assets(filter)
+ maps = ar.get_assets(_filter)
# There should be only one map in the list
EditorLevelLibrary.load_level(maps[0].get_asset().get_path_name())
@@ -401,12 +321,18 @@ class CameraLoader(plugin.Loader):
root = "/Game/Ayon"
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
- filter = unreal.ARFilter(
+ _filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
- sequences = ar.get_assets(filter)
+ sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
+ _filter = unreal.ARFilter(
+ class_names=["World"],
+ package_paths=[f"{root}/{ms_asset}"],
+ recursive_paths=False)
+ levels = ar.get_assets(_filter)
+ master_level = levels[0].get_asset().get_path_name()
sequences = [master_sequence]
@@ -418,26 +344,20 @@ class CameraLoader(plugin.Loader):
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
- break
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
if ss.get_sequence().get_name() == sequence_name:
parent = s
sub_scene = ss
- # subscene_track.remove_section(ss)
break
sequences.append(ss.get_sequence())
- # Update subscenes indexes.
- i = 0
- for ss in sections:
+ for i, ss in enumerate(sections):
ss.set_row_index(i)
- i += 1
-
if parent:
break
- assert parent, "Could not find the parent sequence"
+ assert parent, "Could not find the parent sequence"
EditorAssetLibrary.delete_asset(level_sequence.get_path_name())
@@ -466,33 +386,63 @@ class CameraLoader(plugin.Loader):
str(representation["data"]["path"])
)
+ # Set range of all sections
+ # Changing the range of the section is not enough. We need to change
+ # the frame of all the keys in the section.
+ project_name = legacy_io.active_project()
+ asset = container.get('asset')
+ data = get_asset_by_name(project_name, asset)["data"]
+
+ for possessable in new_sequence.get_possessables():
+ for tracks in possessable.get_tracks():
+ for section in tracks.get_sections():
+ section.set_range(
+ data.get('clipIn'),
+ data.get('clipOut') + 1)
+ for channel in section.get_all_channels():
+ for key in channel.get_keys():
+ old_time = key.get_time().get_editor_property(
+ 'frame_number')
+ old_time_value = old_time.get_editor_property(
+ 'value')
+ new_time = old_time_value + (
+ data.get('clipIn') - data.get('frameStart')
+ )
+ key.set_time(unreal.FrameNumber(value=new_time))
+
data = {
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
}
- unreal_pipeline.imprint(
- "{}/{}".format(asset_dir, container.get('container_name')), data)
+ imprint(f"{asset_dir}/{container.get('container_name')}", data)
EditorLevelLibrary.save_current_level()
asset_content = EditorAssetLibrary.list_assets(
- asset_dir, recursive=True, include_folder=False)
+ f"{root}/{ms_asset}", recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
EditorLevelLibrary.load_level(master_level)
+ if curr_level_sequence:
+ LevelSequenceLib.open_level_sequence(curr_level_sequence)
+ LevelSequenceLib.set_current_time(curr_time)
+ LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock)
+
+ editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot)
+
def remove(self, container):
- path = Path(container.get("namespace"))
- parent_path = str(path.parent.as_posix())
+ asset_dir = container.get('namespace')
+ path = Path(asset_dir)
ar = unreal.AssetRegistryHelpers.get_asset_registry()
- filter = unreal.ARFilter(
+ _filter = unreal.ARFilter(
class_names=["LevelSequence"],
- package_paths=[f"{str(path.as_posix())}"],
+ package_paths=[asset_dir],
recursive_paths=False)
- sequences = ar.get_assets(filter)
+ sequences = ar.get_assets(_filter)
if not sequences:
raise Exception("Could not find sequence.")
@@ -500,11 +450,11 @@ class CameraLoader(plugin.Loader):
world = ar.get_asset_by_object_path(
EditorLevelLibrary.get_editor_world().get_path_name())
- filter = unreal.ARFilter(
+ _filter = unreal.ARFilter(
class_names=["World"],
- package_paths=[f"{parent_path}"],
+ package_paths=[asset_dir],
recursive_paths=True)
- maps = ar.get_assets(filter)
+ maps = ar.get_assets(_filter)
# There should be only one map in the list
if not maps:
@@ -534,12 +484,18 @@ class CameraLoader(plugin.Loader):
root = "/Game/Ayon"
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
- filter = unreal.ARFilter(
+ _filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
- sequences = ar.get_assets(filter)
+ sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
+ _filter = unreal.ARFilter(
+ class_names=["World"],
+ package_paths=[f"{root}/{ms_asset}"],
+ recursive_paths=False)
+ levels = ar.get_assets(_filter)
+ master_level = levels[0].get_full_name()
sequences = [master_sequence]
@@ -547,10 +503,13 @@ class CameraLoader(plugin.Loader):
for s in sequences:
tracks = s.get_master_tracks()
subscene_track = None
+ visibility_track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
- break
+ if (t.get_class() ==
+ unreal.MovieSceneLevelVisibilityTrack.static_class()):
+ visibility_track = t
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
@@ -560,23 +519,48 @@ class CameraLoader(plugin.Loader):
break
sequences.append(ss.get_sequence())
# Update subscenes indexes.
- i = 0
- for ss in sections:
+ for i, ss in enumerate(sections):
ss.set_row_index(i)
- i += 1
+ if visibility_track:
+ sections = visibility_track.get_sections()
+ for ss in sections:
+ if (unreal.Name(f"{container.get('asset')}_map_camera")
+ in ss.get_level_names()):
+ visibility_track.remove_section(ss)
+ # Update visibility sections indexes.
+ i = -1
+ prev_name = []
+ for ss in sections:
+ if prev_name != ss.get_level_names():
+ i += 1
+ ss.set_row_index(i)
+ prev_name = ss.get_level_names()
if parent:
break
assert parent, "Could not find the parent sequence"
- EditorAssetLibrary.delete_directory(str(path.as_posix()))
+ # Create a temporary level so the camera level can be deleted.
+ EditorLevelLibrary.save_all_dirty_levels()
+ EditorAssetLibrary.make_directory(f"{root}/tmp")
+ tmp_level = f"{root}/tmp/temp_map"
+ if not EditorAssetLibrary.does_asset_exist(f"{tmp_level}.temp_map"):
+ EditorLevelLibrary.new_level(tmp_level)
+ else:
+ EditorLevelLibrary.load_level(tmp_level)
+
+ # Delete the camera directory.
+ EditorAssetLibrary.delete_directory(asset_dir)
+
+ EditorLevelLibrary.load_level(master_level)
+ EditorAssetLibrary.delete_directory(f"{root}/tmp")
# Check if there isn't any more assets in the parent folder, and
# delete it if not.
asset_content = EditorAssetLibrary.list_assets(
- parent_path, recursive=False, include_folder=True
+ path.parent.as_posix(), recursive=False, include_folder=True
)
if len(asset_content) == 0:
- EditorAssetLibrary.delete_directory(parent_path)
+ EditorAssetLibrary.delete_directory(path.parent.as_posix())
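A minimal sketch of the sequencer and viewport save-and-restore pattern the loader now applies around updates, again for the Unreal Editor's Python environment; the try/finally wrapper is added here only for illustration:

```python
import unreal
from unreal import LevelSequenceEditorBlueprintLibrary as LevelSequenceLib

# Snapshot the current sequencer and viewport state.
curr_level_sequence = LevelSequenceLib.get_current_level_sequence()
curr_time = LevelSequenceLib.get_current_time()
is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport()

editor_subsystem = unreal.UnrealEditorSubsystem()
vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info()

try:
    pass  # ... update that loads other levels and sequences goes here ...
finally:
    # Restore what the artist had open before the update.
    if curr_level_sequence:
        LevelSequenceLib.open_level_sequence(curr_level_sequence)
        LevelSequenceLib.set_current_time(curr_time)
        LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock)
    editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot)
```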
diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py
index d94e6e5837..86b2e1456c 100644
--- a/openpype/hosts/unreal/plugins/load/load_layout.py
+++ b/openpype/hosts/unreal/plugins/load/load_layout.py
@@ -5,15 +5,18 @@ import collections
from pathlib import Path
import unreal
-from unreal import EditorAssetLibrary
-from unreal import EditorLevelLibrary
-from unreal import EditorLevelUtils
-from unreal import AssetToolsHelpers
-from unreal import FBXImportType
-from unreal import MovieSceneLevelVisibilityTrack
-from unreal import MovieSceneSubTrack
+from unreal import (
+ EditorAssetLibrary,
+ EditorLevelLibrary,
+ EditorLevelUtils,
+ AssetToolsHelpers,
+ FBXImportType,
+ MovieSceneLevelVisibilityTrack,
+ MovieSceneSubTrack,
+ LevelSequenceEditorBlueprintLibrary as LevelSequenceLib,
+)
-from openpype.client import get_asset_by_name, get_assets, get_representations
+from openpype.client import get_asset_by_name, get_representations
from openpype.pipeline import (
discover_loader_plugins,
loaders_from_representation,
@@ -25,7 +28,13 @@ from openpype.pipeline import (
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.settings import get_current_project_settings
from openpype.hosts.unreal.api import plugin
-from openpype.hosts.unreal.api import pipeline as unreal_pipeline
+from openpype.hosts.unreal.api.pipeline import (
+ generate_sequence,
+ set_sequence_hierarchy,
+ create_container,
+ imprint,
+ ls,
+)
class LayoutLoader(plugin.Loader):
@@ -91,77 +100,6 @@ class LayoutLoader(plugin.Loader):
return None
- @staticmethod
- def _set_sequence_hierarchy(
- seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths
- ):
- # Get existing sequencer tracks or create them if they don't exist
- tracks = seq_i.get_master_tracks()
- subscene_track = None
- visibility_track = None
- for t in tracks:
- if t.get_class() == unreal.MovieSceneSubTrack.static_class():
- subscene_track = t
- if (t.get_class() ==
- unreal.MovieSceneLevelVisibilityTrack.static_class()):
- visibility_track = t
- if not subscene_track:
- subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
- if not visibility_track:
- visibility_track = seq_i.add_master_track(
- unreal.MovieSceneLevelVisibilityTrack)
-
- # Create the sub-scene section
- subscenes = subscene_track.get_sections()
- subscene = None
- for s in subscenes:
- if s.get_editor_property('sub_sequence') == seq_j:
- subscene = s
- break
- if not subscene:
- subscene = subscene_track.add_section()
- subscene.set_row_index(len(subscene_track.get_sections()))
- subscene.set_editor_property('sub_sequence', seq_j)
- subscene.set_range(
- min_frame_j,
- max_frame_j + 1)
-
- # Create the visibility section
- ar = unreal.AssetRegistryHelpers.get_asset_registry()
- maps = []
- for m in map_paths:
- # Unreal requires to load the level to get the map name
- EditorLevelLibrary.save_all_dirty_levels()
- EditorLevelLibrary.load_level(m)
- maps.append(str(ar.get_asset_by_object_path(m).asset_name))
-
- vis_section = visibility_track.add_section()
- index = len(visibility_track.get_sections())
-
- vis_section.set_range(
- min_frame_j,
- max_frame_j + 1)
- vis_section.set_visibility(unreal.LevelVisibility.VISIBLE)
- vis_section.set_row_index(index)
- vis_section.set_level_names(maps)
-
- if min_frame_j > 1:
- hid_section = visibility_track.add_section()
- hid_section.set_range(
- 1,
- min_frame_j)
- hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
- hid_section.set_row_index(index)
- hid_section.set_level_names(maps)
- if max_frame_j < max_frame_i:
- hid_section = visibility_track.add_section()
- hid_section.set_range(
- max_frame_j + 1,
- max_frame_i + 1)
- hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
- hid_section.set_row_index(index)
- hid_section.set_level_names(maps)
-
def _transform_from_basis(self, transform, basis):
"""Transform a transform from a basis to a new basis."""
# Get the basis matrix
@@ -352,63 +290,6 @@ class LayoutLoader(plugin.Loader):
sec_params = section.get_editor_property('params')
sec_params.set_editor_property('animation', animation)
- @staticmethod
- def _generate_sequence(h, h_dir):
- tools = unreal.AssetToolsHelpers().get_asset_tools()
-
- sequence = tools.create_asset(
- asset_name=h,
- package_path=h_dir,
- asset_class=unreal.LevelSequence,
- factory=unreal.LevelSequenceFactoryNew()
- )
-
- project_name = legacy_io.active_project()
- asset_data = get_asset_by_name(
- project_name,
- h_dir.split('/')[-1],
- fields=["_id", "data.fps"]
- )
-
- start_frames = []
- end_frames = []
-
- elements = list(get_assets(
- project_name,
- parent_ids=[asset_data["_id"]],
- fields=["_id", "data.clipIn", "data.clipOut"]
- ))
- for e in elements:
- start_frames.append(e.get('data').get('clipIn'))
- end_frames.append(e.get('data').get('clipOut'))
-
- elements.extend(get_assets(
- project_name,
- parent_ids=[e["_id"]],
- fields=["_id", "data.clipIn", "data.clipOut"]
- ))
-
- min_frame = min(start_frames)
- max_frame = max(end_frames)
-
- sequence.set_display_rate(
- unreal.FrameRate(asset_data.get('data').get("fps"), 1.0))
- sequence.set_playback_start(min_frame)
- sequence.set_playback_end(max_frame)
-
- tracks = sequence.get_master_tracks()
- track = None
- for t in tracks:
- if (t.get_class() ==
- unreal.MovieSceneCameraCutTrack.static_class()):
- track = t
- break
- if not track:
- track = sequence.add_master_track(
- unreal.MovieSceneCameraCutTrack)
-
- return sequence, (min_frame, max_frame)
-
def _get_repre_docs_by_version_id(self, data):
version_ids = {
element.get("version")
@@ -696,7 +577,7 @@ class LayoutLoader(plugin.Loader):
]
if not existing_sequences:
- sequence, frame_range = self._generate_sequence(h, h_dir)
+ sequence, frame_range = generate_sequence(h, h_dir)
sequences.append(sequence)
frame_ranges.append(frame_range)
@@ -716,7 +597,7 @@ class LayoutLoader(plugin.Loader):
# sequences and frame_ranges have the same length
for i in range(0, len(sequences) - 1):
- self._set_sequence_hierarchy(
+ set_sequence_hierarchy(
sequences[i], sequences[i + 1],
frame_ranges[i][1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
@@ -729,7 +610,7 @@ class LayoutLoader(plugin.Loader):
shot.set_playback_start(0)
shot.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1)
if sequences:
- self._set_sequence_hierarchy(
+ set_sequence_hierarchy(
sequences[-1], shot,
frame_ranges[-1][1],
data.get('clipIn'), data.get('clipOut'),
@@ -745,7 +626,7 @@ class LayoutLoader(plugin.Loader):
EditorLevelLibrary.save_current_level()
# Create Asset Container
- unreal_pipeline.create_container(
+ create_container(
container=container_name, path=asset_dir)
data = {
@@ -761,11 +642,13 @@ class LayoutLoader(plugin.Loader):
"family": context["representation"]["context"]["family"],
"loaded_assets": loaded_assets
}
- unreal_pipeline.imprint(
+ imprint(
"{}/{}".format(asset_dir, container_name), data)
+ save_dir = hierarchy_dir_list[0] if create_sequences else asset_dir
+
asset_content = EditorAssetLibrary.list_assets(
- asset_dir, recursive=True, include_folder=False)
+ save_dir, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
@@ -781,16 +664,24 @@ class LayoutLoader(plugin.Loader):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
+ curr_level_sequence = LevelSequenceLib.get_current_level_sequence()
+ curr_time = LevelSequenceLib.get_current_time()
+ is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport()
+
+ editor_subsystem = unreal.UnrealEditorSubsystem()
+ vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info()
+
root = "/Game/Ayon"
asset_dir = container.get('namespace')
context = representation.get("context")
+ hierarchy = context.get('hierarchy').split("/")
+
sequence = None
master_level = None
if create_sequences:
- hierarchy = context.get('hierarchy').split("/")
h_dir = f"{root}/{hierarchy[0]}"
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
@@ -843,13 +734,15 @@ class LayoutLoader(plugin.Loader):
"parent": str(representation["parent"]),
"loaded_assets": loaded_assets
}
- unreal_pipeline.imprint(
+ imprint(
"{}/{}".format(asset_dir, container.get('container_name')), data)
EditorLevelLibrary.save_current_level()
+ save_dir = f"{root}/{hierarchy[0]}" if create_sequences else asset_dir
+
asset_content = EditorAssetLibrary.list_assets(
- asset_dir, recursive=True, include_folder=False)
+ save_dir, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
@@ -859,6 +752,13 @@ class LayoutLoader(plugin.Loader):
elif prev_level:
EditorLevelLibrary.load_level(prev_level)
+ if curr_level_sequence:
+ LevelSequenceLib.open_level_sequence(curr_level_sequence)
+ LevelSequenceLib.set_current_time(curr_time)
+ LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock)
+
+ editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot)
+
def remove(self, container):
"""
Delete the layout. First, check if the assets loaded with the layout
@@ -870,7 +770,7 @@ class LayoutLoader(plugin.Loader):
root = "/Game/Ayon"
path = Path(container.get("namespace"))
- containers = unreal_pipeline.ls()
+ containers = ls()
layout_containers = [
c for c in containers
if (c.get('asset_name') != container.get('asset_name') and
diff --git a/openpype/hosts/unreal/plugins/load/load_uasset.py b/openpype/hosts/unreal/plugins/load/load_uasset.py
index 7606bc14e4..30f63abe39 100644
--- a/openpype/hosts/unreal/plugins/load/load_uasset.py
+++ b/openpype/hosts/unreal/plugins/load/load_uasset.py
@@ -21,6 +21,8 @@ class UAssetLoader(plugin.Loader):
icon = "cube"
color = "orange"
+ extension = "uasset"
+
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
@@ -42,26 +44,29 @@ class UAssetLoader(plugin.Loader):
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
- if asset:
- asset_name = "{}_{}".format(asset, name)
- else:
- asset_name = "{}".format(name)
-
+ asset_name = f"{asset}_{name}" if asset else f"{name}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name}", suffix=""
)
- container_name += suffix
+ unique_number = 1
+ while unreal.EditorAssetLibrary.does_directory_exist(
+ f"{asset_dir}_{unique_number:02}"
+ ):
+ unique_number += 1
+
+ asset_dir = f"{asset_dir}_{unique_number:02}"
+ container_name = f"{container_name}_{unique_number:02}{suffix}"
unreal.EditorAssetLibrary.make_directory(asset_dir)
destination_path = asset_dir.replace(
- "/Game",
- Path(unreal.Paths.project_content_dir()).as_posix(),
- 1)
+ "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1)
- shutil.copy(self.fname, f"{destination_path}/{name}.uasset")
+ shutil.copy(
+ self.fname,
+ f"{destination_path}/{name}_{unique_number:02}.{self.extension}")
# Create Asset Container
unreal_pipeline.create_container(
@@ -77,7 +82,7 @@ class UAssetLoader(plugin.Loader):
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
- "family": context["representation"]["context"]["family"]
+ "family": context["representation"]["context"]["family"],
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
@@ -96,10 +101,10 @@ class UAssetLoader(plugin.Loader):
asset_dir = container["namespace"]
name = representation["context"]["subset"]
+ unique_number = container["container_name"].split("_")[-2]
+
destination_path = asset_dir.replace(
- "/Game",
- Path(unreal.Paths.project_content_dir()).as_posix(),
- 1)
+ "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=False, include_folder=True
@@ -107,22 +112,24 @@ class UAssetLoader(plugin.Loader):
for asset in asset_content:
obj = ar.get_asset_by_object_path(asset).get_asset()
- if not obj.get_class().get_name() == 'AyonAssetContainer':
+ if obj.get_class().get_name() != "AyonAssetContainer":
unreal.EditorAssetLibrary.delete_asset(asset)
update_filepath = get_representation_path(representation)
- shutil.copy(update_filepath, f"{destination_path}/{name}.uasset")
+ shutil.copy(
+ update_filepath,
+ f"{destination_path}/{name}_{unique_number}.{self.extension}")
- container_path = "{}/{}".format(container["namespace"],
- container["objectName"])
+ container_path = f'{container["namespace"]}/{container["objectName"]}'
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
- "parent": str(representation["parent"])
- })
+ "parent": str(representation["parent"]),
+ }
+ )
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@@ -143,3 +150,13 @@ class UAssetLoader(plugin.Loader):
if len(asset_content) == 0:
unreal.EditorAssetLibrary.delete_directory(parent_path)
+
+
+class UMapLoader(UAssetLoader):
+ """Load Level."""
+
+ families = ["uasset"]
+ label = "Load Level"
+ representations = ["umap"]
+
+ extension = "umap"
diff --git a/openpype/hosts/unreal/plugins/publish/collect_instance_members.py b/openpype/hosts/unreal/plugins/publish/collect_instance_members.py
index 46ca51ab7e..de10e7b119 100644
--- a/openpype/hosts/unreal/plugins/publish/collect_instance_members.py
+++ b/openpype/hosts/unreal/plugins/publish/collect_instance_members.py
@@ -24,7 +24,7 @@ class CollectInstanceMembers(pyblish.api.InstancePlugin):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
inst_path = instance.data.get('instance_path')
- inst_name = instance.data.get('objectName')
+ inst_name = inst_path.split('/')[-1]
pub_instance = ar.get_asset_by_object_path(
f"{inst_path}.{inst_name}").get_asset()
diff --git a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py
index a352b2c3f3..dad0310dfc 100644
--- a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py
+++ b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py
@@ -103,8 +103,8 @@ class CollectRenderInstances(pyblish.api.InstancePlugin):
new_instance.data["representations"] = []
repr = {
- 'frameStart': s.get('frame_range')[0],
- 'frameEnd': s.get('frame_range')[1],
+ 'frameStart': instance.data["frameStart"],
+ 'frameEnd': instance.data["frameEnd"],
'name': 'png',
'ext': 'png',
'files': frames,
diff --git a/openpype/hosts/unreal/plugins/publish/extract_uasset.py b/openpype/hosts/unreal/plugins/publish/extract_uasset.py
index f719df2a82..48b62faa97 100644
--- a/openpype/hosts/unreal/plugins/publish/extract_uasset.py
+++ b/openpype/hosts/unreal/plugins/publish/extract_uasset.py
@@ -11,16 +11,17 @@ class ExtractUAsset(publish.Extractor):
label = "Extract UAsset"
hosts = ["unreal"]
- families = ["uasset"]
+ families = ["uasset", "umap"]
optional = True
def process(self, instance):
+ extension = (
+ "umap" if "umap" in instance.data.get("families") else "uasset")
ar = unreal.AssetRegistryHelpers.get_asset_registry()
self.log.info("Performing extraction..")
-
staging_dir = self.staging_dir(instance)
- filename = "{}.uasset".format(instance.name)
+ filename = f"{instance.name}.{extension}"
members = instance.data.get("members", [])
@@ -36,13 +37,15 @@ class ExtractUAsset(publish.Extractor):
shutil.copy(sys_path, staging_dir)
+ self.log.info(f"instance.data: {instance.data}")
+
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
- 'name': 'uasset',
- 'ext': 'uasset',
- 'files': filename,
+ "name": extension,
+ "ext": extension,
+ "files": filename,
"stagingDir": staging_dir,
}
instance.data["representations"].append(representation)
diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py
index e6584e130f..76bb25fac3 100644
--- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py
+++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py
@@ -31,8 +31,8 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
frames = list(collection.indexes)
current_range = (frames[0], frames[-1])
- required_range = (data["frameStart"],
- data["frameEnd"])
+ required_range = (data["clipIn"],
+ data["clipOut"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "
diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py
index e7a690ac9c..2b7e1375e6 100644
--- a/openpype/hosts/unreal/ue_workers.py
+++ b/openpype/hosts/unreal/ue_workers.py
@@ -6,6 +6,8 @@ import subprocess
from distutils import dir_util
from pathlib import Path
from typing import List, Union
+import tempfile
+from distutils.dir_util import copy_tree
import openpype.hosts.unreal.lib as ue_lib
@@ -90,9 +92,20 @@ class UEProjectGenerationWorker(QtCore.QObject):
("Generating a new UE project ... 1 out of "
f"{stage_count}"))
+ # Copy the commandlet project to a temporary folder so that users
+ # do not need admin rights to write to it.
+ cmdlet_tmp = tempfile.TemporaryDirectory()
+ cmdlet_filename = cmdlet_project.name
+ cmdlet_dir = cmdlet_project.parent.as_posix()
+ cmdlet_tmp_name = Path(cmdlet_tmp.name)
+ cmdlet_tmp_file = cmdlet_tmp_name.joinpath(cmdlet_filename)
+ copy_tree(
+ cmdlet_dir,
+ cmdlet_tmp_name.as_posix())
+
commandlet_cmd = [
f"{ue_editor_exe.as_posix()}",
- f"{cmdlet_project.as_posix()}",
+ f"{cmdlet_tmp_file.as_posix()}",
"-run=AyonGenerateProject",
f"{project_file.resolve().as_posix()}",
]
@@ -111,6 +124,8 @@ class UEProjectGenerationWorker(QtCore.QObject):
gen_process.stdout.close()
return_code = gen_process.wait()
+ cmdlet_tmp.cleanup()
+
if return_code and return_code != 0:
msg = (
f"Failed to generate {self.project_name} "
diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py
index 558a637e4b..7938c27233 100644
--- a/openpype/modules/deadline/abstract_submit_deadline.py
+++ b/openpype/modules/deadline/abstract_submit_deadline.py
@@ -582,7 +582,6 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
metadata_folder = metadata_folder.replace(orig_scene,
new_scene)
instance.data["publishRenderMetadataFolder"] = metadata_folder
-
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
index c728b6b9c7..b6a30e36b7 100644
--- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py
@@ -78,7 +78,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
job_info.BatchName = src_filename
job_info.Plugin = instance.data["plugin"]
job_info.UserName = context.data.get("deadlineUser", getpass.getuser())
-
+ job_info.EnableAutoTimeout = True
# Deadline requires integers in frame range
frames = "{start}-{end}".format(
start=int(instance.data["frameStart"]),
@@ -133,7 +133,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
# Add list of expected files to job
# ---------------------------------
exp = instance.data.get("expectedFiles")
- for filepath in exp:
+
+ for filepath in self._iter_expected_files(exp):
job_info.OutputDirectory += os.path.dirname(filepath)
job_info.OutputFilename += os.path.basename(filepath)
@@ -162,10 +163,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
instance = self._instance
filepath = self.scene_path
- expected_files = instance.data["expectedFiles"]
- if not expected_files:
+ files = instance.data["expectedFiles"]
+ if not files:
raise RuntimeError("No Render Elements found!")
- output_dir = os.path.dirname(expected_files[0])
+ first_file = next(self._iter_expected_files(files))
+ output_dir = os.path.dirname(first_file)
instance.data["outputDir"] = output_dir
instance.data["toBeRenderedOn"] = "deadline"
@@ -196,25 +198,22 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
else:
plugin_data["DisableMultipass"] = 1
- expected_files = instance.data.get("expectedFiles")
- if not expected_files:
+ files = instance.data.get("expectedFiles")
+ if not files:
raise RuntimeError("No render elements found")
- old_output_dir = os.path.dirname(expected_files[0])
+ first_file = next(self._iter_expected_files(files))
+ old_output_dir = os.path.dirname(first_file)
output_beauty = RenderSettings().get_render_output(instance.name,
old_output_dir)
- filepath = self.from_published_scene()
-
- def _clean_name(path):
- return os.path.splitext(os.path.basename(path))[0]
-
- new_scene = _clean_name(filepath)
- orig_scene = _clean_name(instance.context.data["currentFile"])
-
- output_beauty = output_beauty.replace(orig_scene, new_scene)
- output_beauty = output_beauty.replace("\\", "/")
- plugin_data["RenderOutput"] = output_beauty
-
+ rgb_bname = os.path.basename(output_beauty)
+ dir = os.path.dirname(first_file)
+ beauty_name = f"{dir}/{rgb_bname}"
+ beauty_name = beauty_name.replace("\\", "/")
+ plugin_data["RenderOutput"] = beauty_name
+ # 3ds Max ships in versions with different languages, so force English (ENU)
+ plugin_data["Language"] = "ENU"
renderer_class = get_current_renderer()
+
renderer = str(renderer_class).split(":")[0]
if renderer in [
"ART_Renderer",
@@ -226,14 +225,37 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
]:
render_elem_list = RenderSettings().get_render_element()
for i, element in enumerate(render_elem_list):
- element = element.replace(orig_scene, new_scene)
- plugin_data["RenderElementOutputFilename%d" % i] = element # noqa
+ elem_bname = os.path.basename(element)
+ new_elem = f"{dir}/{elem_bname}"
+ new_elem = new_elem.replace("/", "\\")
+ plugin_data["RenderElementOutputFilename%d" % i] = new_elem # noqa
+
+ if renderer == "Redshift_Renderer":
+ plugin_data["redshift_SeparateAovFiles"] = instance.data.get(
+ "separateAovFiles")
self.log.debug("plugin data:{}".format(plugin_data))
plugin_info.update(plugin_data)
return job_info, plugin_info
+ def from_published_scene(self, replace_in_path=True):
+ instance = self._instance
+ if instance.data["renderer"] == "Redshift_Renderer":
+ self.log.debug("Using Redshift... published scene won't be used.")
+ replace_in_path = False
+ return replace_in_path
+
+ @staticmethod
+ def _iter_expected_files(exp):
+ if isinstance(exp[0], dict):
+ for _aov, files in exp[0].items():
+ for file in files:
+ yield file
+ else:
+ for file in exp:
+ yield file
+
@classmethod
def get_attribute_defs(cls):
defs = super(MaxSubmitDeadline, cls).get_attribute_defs()
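A standalone sketch of the flattening `_iter_expected_files` performs, with made-up paths covering both shapes `expectedFiles` can take (a flat list, or a single dict mapping AOV names to file lists):

```python
def iter_expected_files(exp):
    # Mirrors the helper above: yield every file path regardless of shape.
    if isinstance(exp[0], dict):
        for _aov, files in exp[0].items():
            for path in files:
                yield path
    else:
        for path in exp:
            yield path


# Hypothetical expected-files data in both supported shapes.
flat = ["C:/renders/shot010.0001.exr", "C:/renders/shot010.0002.exr"]
per_aov = [{
    "beauty": ["C:/renders/beauty.0001.exr"],
    "diffuse": ["C:/renders/diffuse.0001.exr"],
}]

print(list(iter_expected_files(flat)))
print(list(iter_expected_files(per_aov)))
```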
diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
index 68eb0a437d..f646551a07 100644
--- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py
+++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
@@ -275,7 +275,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
args = [
"--headless",
'publish',
- rootless_metadata_path,
+ '"{}"'.format(rootless_metadata_path),
"--targets", "deadline",
"--targets", "farm"
]
diff --git a/openpype/modules/ftrack/ftrack_server/lib.py b/openpype/modules/ftrack/ftrack_server/lib.py
index eb64063fab..2226c85ef9 100644
--- a/openpype/modules/ftrack/ftrack_server/lib.py
+++ b/openpype/modules/ftrack/ftrack_server/lib.py
@@ -196,7 +196,7 @@ class ProcessEventHub(SocketBaseEventHub):
{"pype_data.is_processed": False}
).sort(
[("pype_data.stored", pymongo.ASCENDING)]
- )
+ ).limit(100)
found = False
for event_data in not_processed_events:
diff --git a/openpype/modules/ftrack/lib/ftrack_action_handler.py b/openpype/modules/ftrack/lib/ftrack_action_handler.py
index 07b3a780a2..1be4353b26 100644
--- a/openpype/modules/ftrack/lib/ftrack_action_handler.py
+++ b/openpype/modules/ftrack/lib/ftrack_action_handler.py
@@ -234,6 +234,10 @@ class BaseAction(BaseHandler):
if not settings_roles:
return default
+ user_roles = {
+ role_name.lower()
+ for role_name in user_roles
+ }
for role_name in settings_roles:
if role_name.lower() in user_roles:
return True
@@ -264,8 +268,15 @@ class BaseAction(BaseHandler):
return user_entity
@classmethod
- def get_user_roles_from_event(cls, session, event):
- """Query user entity from event."""
+ def get_user_roles_from_event(cls, session, event, lower=True):
+ """Get user roles based on data in event.
+
+ Args:
+ session (ftrack_api.Session): Prepared ftrack session.
+ event (ftrack_api.event.Event): Event which is processed.
+ lower (Optional[bool]): Lowercase the role names. Default 'True'.
+ """
+
not_set = object()
user_roles = event["data"].get("user_roles", not_set)
@@ -273,7 +284,10 @@ class BaseAction(BaseHandler):
user_roles = []
user_entity = cls.get_user_entity_from_event(session, event)
for role in user_entity["user_security_roles"]:
- user_roles.append(role["security_role"]["name"].lower())
+ role_name = role["security_role"]["name"]
+ if lower:
+ role_name = role_name.lower()
+ user_roles.append(role_name)
event["data"]["user_roles"] = user_roles
return user_roles
@@ -322,7 +336,8 @@ class BaseAction(BaseHandler):
if not settings.get(self.settings_enabled_key, True):
return False
- user_role_list = self.get_user_roles_from_event(session, event)
+ user_role_list = self.get_user_roles_from_event(
+ session, event, lower=False)
if not self.roles_check(settings.get("role_list"), user_role_list):
return False
return True
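A minimal standalone sketch of the case-insensitive role comparison this change enables; the function below only mirrors the logic and is not the class method itself:

```python
def roles_check(settings_roles, user_roles, default=True):
    # No roles configured in settings means the action is allowed by default.
    if not settings_roles:
        return default
    # Compare case-insensitively, as the handler now does.
    user_roles = {role_name.lower() for role_name in user_roles}
    return any(role_name.lower() in user_roles for role_name in settings_roles)


print(roles_check(["Administrator"], ["administrator", "API"]))  # True
print(roles_check(["Pypeclub"], ["Artist"]))                     # False
```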
diff --git a/openpype/modules/ftrack/scripts/sub_event_status.py b/openpype/modules/ftrack/scripts/sub_event_status.py
index dc5836e7f2..c6c2e9e1f6 100644
--- a/openpype/modules/ftrack/scripts/sub_event_status.py
+++ b/openpype/modules/ftrack/scripts/sub_event_status.py
@@ -296,9 +296,9 @@ def server_activity_validate_user(event):
if not user_ent:
return False
- role_list = ["Pypeclub", "Administrator"]
+ role_list = {"pypeclub", "administrator"}
for role in user_ent["user_security_roles"]:
- if role["security_role"]["name"] in role_list:
+ if role["security_role"]["name"].lower() in role_list:
return True
return False
diff --git a/openpype/modules/ftrack/tray/login_dialog.py b/openpype/modules/ftrack/tray/login_dialog.py
index f374a71178..a8abdaf191 100644
--- a/openpype/modules/ftrack/tray/login_dialog.py
+++ b/openpype/modules/ftrack/tray/login_dialog.py
@@ -1,5 +1,3 @@
-import os
-
import requests
from qtpy import QtCore, QtGui, QtWidgets
diff --git a/openpype/modules/kitsu/kitsu_module.py b/openpype/modules/kitsu/kitsu_module.py
index b91373af20..8d2d5ccd60 100644
--- a/openpype/modules/kitsu/kitsu_module.py
+++ b/openpype/modules/kitsu/kitsu_module.py
@@ -94,7 +94,7 @@ class KitsuModule(OpenPypeModule, IPluginPaths, ITrayAction):
return {
"publish": [os.path.join(current_dir, "plugins", "publish")],
- "actions": [os.path.join(current_dir, "actions")]
+ "actions": [os.path.join(current_dir, "actions")],
}
def cli(self, click_group):
@@ -128,15 +128,35 @@ def push_to_zou(login, password):
@click.option(
"-p", "--password", envvar="KITSU_PWD", help="Password for kitsu username"
)
-def sync_service(login, password):
+@click.option(
+ "-prj",
+ "--project",
+ "projects",
+ multiple=True,
+ default=[],
+ help="Sync specific kitsu projects",
+)
+@click.option(
+ "-lo",
+ "--listen-only",
+ "listen_only",
+ is_flag=True,
+ default=False,
+ help="Listen to events only without any syncing",
+)
+def sync_service(login, password, projects, listen_only):
"""Synchronize openpype database from Zou sever database.
Args:
login (str): Kitsu user login
password (str): Kitsu user password
+ projects (tuple): Specific Kitsu projects to sync.
+ listen_only (bool): Only listen to events, without any syncing.
"""
from .utils.update_op_with_zou import sync_all_projects
from .utils.sync_service import start_listeners
- sync_all_projects(login, password)
+ if not listen_only:
+ sync_all_projects(login, password, filter_projects=projects)
+
start_listeners(login, password)
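A small sketch of the same control flow outside the CLI, with hypothetical credentials and project names; actually running it requires a configured OpenPype/Kitsu environment:

```python
from openpype.modules.kitsu.utils.update_op_with_zou import sync_all_projects
from openpype.modules.kitsu.utils.sync_service import start_listeners

# Hypothetical credentials and project filter.
login, password = "sync-bot@studio.tld", "secret"
projects = ("CharacterShow", "CommercialX")
listen_only = False

if not listen_only:
    # Only the named Kitsu projects are synchronized; an empty filter syncs all.
    sync_all_projects(login, password, filter_projects=projects)

start_listeners(login, password)
```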
diff --git a/openpype/modules/kitsu/utils/update_op_with_zou.py b/openpype/modules/kitsu/utils/update_op_with_zou.py
index 4f4f0810bc..b495cd1bea 100644
--- a/openpype/modules/kitsu/utils/update_op_with_zou.py
+++ b/openpype/modules/kitsu/utils/update_op_with_zou.py
@@ -94,9 +94,7 @@ def update_op_assets(
if not item_doc: # Create asset
op_asset = create_op_asset(item)
insert_result = dbcon.insert_one(op_asset)
- item_doc = get_asset_by_id(
- project_name, insert_result.inserted_id
- )
+ item_doc = get_asset_by_id(project_name, insert_result.inserted_id)
# Update asset
item_data = deepcopy(item_doc["data"])
@@ -329,7 +327,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
"code": project_code,
"fps": float(project["fps"]),
"zou_id": project["id"],
- "active": project['project_status_name'] != "Closed",
+ "active": project["project_status_name"] != "Closed",
}
)
@@ -359,7 +357,10 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
def sync_all_projects(
- login: str, password: str, ignore_projects: list = None
+ login: str,
+ password: str,
+ ignore_projects: list = None,
+ filter_projects: tuple = None,
):
"""Update all OP projects in DB with Zou data.
@@ -367,6 +368,7 @@ def sync_all_projects(
login (str): Kitsu user login
password (str): Kitsu user password
ignore_projects (list): List of unsynced project names
+ filter_projects (tuple): Tuple of project names to restrict the sync to
Raises:
gazu.exception.AuthFailedException: Wrong user login and/or password
"""
@@ -381,7 +383,24 @@ def sync_all_projects(
dbcon = AvalonMongoDB()
dbcon.install()
all_projects = gazu.project.all_projects()
- for project in all_projects:
+
+ project_to_sync = []
+
+ if filter_projects:
+ all_kitsu_projects = {p["name"]: p for p in all_projects}
+ for proj_name in filter_projects:
+ if proj_name in all_kitsu_projects:
+ project_to_sync.append(all_kitsu_projects[proj_name])
+ else:
+ log.info(
+ f"`{proj_name}` project does not exist in Kitsu."
+ f" Please make sure the project is spelled correctly."
+ )
+ else:
+ # sync all projects
+ project_to_sync = all_projects
+
+ for project in project_to_sync:
if ignore_projects and project["name"] in ignore_projects:
continue
sync_project_from_kitsu(dbcon, project)
@@ -408,14 +427,13 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
# Get all statuses for projects from Kitsu
all_status = gazu.project.all_project_status()
for status in all_status:
- if project['project_status_id'] == status['id']:
- project['project_status_name'] = status['name']
+ if project["project_status_id"] == status["id"]:
+ project["project_status_name"] = status["name"]
break
# Do not sync closed kitsu project that is not found in openpype
- if (
- project['project_status_name'] == "Closed"
- and not get_project(project['name'])
+ if project["project_status_name"] == "Closed" and not get_project(
+ project["name"]
):
return
@@ -444,7 +462,7 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
log.info("Project created: {}".format(project_name))
bulk_writes.append(write_project_to_op(project, dbcon))
- if project['project_status_name'] == "Closed":
+ if project["project_status_name"] == "Closed":
return
# Try to find project document
diff --git a/openpype/modules/muster/muster.py b/openpype/modules/muster/muster.py
index 77b9214a5a..0cdb1230c8 100644
--- a/openpype/modules/muster/muster.py
+++ b/openpype/modules/muster/muster.py
@@ -1,7 +1,9 @@
import os
import json
+
import appdirs
import requests
+
from openpype.modules import OpenPypeModule, ITrayModule
@@ -110,16 +112,10 @@ class MusterModule(OpenPypeModule, ITrayModule):
self.save_credentials(token)
def save_credentials(self, token):
- """
- Save credentials to JSON file
- """
- data = {
- 'token': token
- }
+ """Save credentials to JSON file."""
- file = open(self.cred_path, 'w')
- file.write(json.dumps(data))
- file.close()
+ with open(self.cred_path, "w") as f:
+ json.dump({'token': token}, f)
def show_login(self):
"""
diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py
index 72f3774e1a..0c57915c05 100644
--- a/openpype/pipeline/publish/__init__.py
+++ b/openpype/pipeline/publish/__init__.py
@@ -39,6 +39,7 @@ from .lib import (
apply_plugin_settings_automatically,
get_plugin_settings,
+ get_publish_instance_label,
)
from .abstract_expected_files import ExpectedFiles
@@ -85,6 +86,7 @@ __all__ = (
"apply_plugin_settings_automatically",
"get_plugin_settings",
+ "get_publish_instance_label",
"ExpectedFiles",
diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py
index b55f813b5e..471be5ddb8 100644
--- a/openpype/pipeline/publish/lib.py
+++ b/openpype/pipeline/publish/lib.py
@@ -12,7 +12,8 @@ import pyblish.api
from openpype.lib import (
Logger,
import_filepath,
- filter_profiles
+ filter_profiles,
+ is_func_signature_supported,
)
from openpype.settings import (
get_project_settings,
@@ -496,12 +497,26 @@ def filter_pyblish_plugins(plugins):
# iterate over plugins
for plugin in plugins[:]:
# Apply settings to plugins
- if hasattr(plugin, "apply_settings"):
+
+ apply_settings_func = getattr(plugin, "apply_settings", None)
+ if apply_settings_func is not None:
# Use classmethod 'apply_settings'
# - can be used to target settings from custom settings place
# - skip default behavior when successful
try:
- plugin.apply_settings(project_settings, system_settings)
+ # Support to pass only project settings
+ # - make sure that both settings are passed, when can be
+ # - that covers cases when *args are in method parameters
+ both_supported = is_func_signature_supported(
+ apply_settings_func, project_settings, system_settings
+ )
+ project_supported = is_func_signature_supported(
+ apply_settings_func, project_settings
+ )
+ if not both_supported and project_supported:
+ plugin.apply_settings(project_settings)
+ else:
+ plugin.apply_settings(project_settings, system_settings)
except Exception:
log.warning(
@@ -866,3 +881,26 @@ def add_repre_files_for_cleanup(instance, repre):
for file_name in files:
expected_file = os.path.join(staging_dir, file_name)
instance.context.data["cleanupFullPaths"].append(expected_file)
+
+
+def get_publish_instance_label(instance):
+ """Try to get label from pyblish instance.
+
+ Values in instance data under the 'label' and 'name' keys are used first.
+ Then string conversion of the instance object is used ('instance._name').
+
+ Todos:
+ Maybe 'subset' key could be used too.
+
+ Args:
+ instance (pyblish.api.Instance): Pyblish instance.
+
+ Returns:
+ str: Instance label.
+ """
+
+ return (
+ instance.data.get("label")
+ or instance.data.get("name")
+ or str(instance)
+ )
diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py
index a3d7340367..896ed40f2d 100644
--- a/openpype/pipeline/workfile/workfile_template_builder.py
+++ b/openpype/pipeline/workfile/workfile_template_builder.py
@@ -43,6 +43,7 @@ from openpype.pipeline.load import (
get_contexts_for_repre_docs,
load_with_repre_context,
)
+
from openpype.pipeline.create import (
discover_legacy_creator_plugins,
CreateContext,
@@ -1246,6 +1247,16 @@ class PlaceholderLoadMixin(object):
loader_items = list(sorted(loader_items, key=lambda i: i["label"]))
options = options or {}
+
+ # Get families from all loaders excluding "*"
+ families = set()
+ for loader in loaders_by_name.values():
+ families.update(loader.families)
+ families.discard("*")
+
+ # Sort for readability
+ families = list(sorted(families))
+
return [
attribute_definitions.UISeparatorDef(),
attribute_definitions.UILabelDef("Main attributes"),
@@ -1272,11 +1283,11 @@ class PlaceholderLoadMixin(object):
" field \"inputLinks\""
)
),
- attribute_definitions.TextDef(
+ attribute_definitions.EnumDef(
"family",
label="Family",
default=options.get("family"),
- placeholder="model, look, ..."
+ items=families
),
attribute_definitions.TextDef(
"representation",
diff --git a/openpype/plugins/publish/collect_frames_fix.py b/openpype/plugins/publish/collect_frames_fix.py
index bdd49585a5..86e727b053 100644
--- a/openpype/plugins/publish/collect_frames_fix.py
+++ b/openpype/plugins/publish/collect_frames_fix.py
@@ -26,55 +26,72 @@ class CollectFramesFixDef(
targets = ["local"]
hosts = ["nuke"]
families = ["render", "prerender"]
- enabled = True
+
+ rewrite_version_enable = False
def process(self, instance):
attribute_values = self.get_attr_values_from_data(instance.data)
frames_to_fix = attribute_values.get("frames_to_fix")
+
rewrite_version = attribute_values.get("rewrite_version")
- if frames_to_fix:
- instance.data["frames_to_fix"] = frames_to_fix
+ if not frames_to_fix:
+ return
- subset_name = instance.data["subset"]
- asset_name = instance.data["asset"]
+ instance.data["frames_to_fix"] = frames_to_fix
- project_entity = instance.data["projectEntity"]
- project_name = project_entity["name"]
+ subset_name = instance.data["subset"]
+ asset_name = instance.data["asset"]
- version = get_last_version_by_subset_name(project_name,
- subset_name,
- asset_name=asset_name)
- if not version:
- self.log.warning("No last version found, "
- "re-render not possible")
- return
+ project_entity = instance.data["projectEntity"]
+ project_name = project_entity["name"]
- representations = get_representations(project_name,
- version_ids=[version["_id"]])
- published_files = []
- for repre in representations:
- if repre["context"]["family"] not in self.families:
- continue
+ version = get_last_version_by_subset_name(
+ project_name,
+ subset_name,
+ asset_name=asset_name
+ )
+ if not version:
+ self.log.warning(
+ "No last version found, re-render not possible"
+ )
+ return
- for file_info in repre.get("files"):
- published_files.append(file_info["path"])
+ representations = get_representations(
+ project_name, version_ids=[version["_id"]]
+ )
+ published_files = []
+ for repre in representations:
+ if repre["context"]["family"] not in self.families:
+ continue
- instance.data["last_version_published_files"] = published_files
- self.log.debug("last_version_published_files::{}".format(
- instance.data["last_version_published_files"]))
+ for file_info in repre.get("files"):
+ published_files.append(file_info["path"])
- if rewrite_version:
- instance.data["version"] = version["name"]
- # limits triggering version validator
- instance.data.pop("latestVersion")
+ instance.data["last_version_published_files"] = published_files
+ self.log.debug("last_version_published_files::{}".format(
+ instance.data["last_version_published_files"]))
+
+ if self.rewrite_version_enable and rewrite_version:
+ instance.data["version"] = version["name"]
+ # limits triggering version validator
+ instance.data.pop("latestVersion")
@classmethod
def get_attribute_defs(cls):
- return [
+ attributes = [
TextDef("frames_to_fix", label="Frames to fix",
placeholder="5,10-15",
- regex="[0-9,-]+"),
- BoolDef("rewrite_version", label="Rewrite latest version",
- default=False),
+ regex="[0-9,-]+")
]
+
+ if cls.rewrite_version_enable:
+ attributes.append(
+ BoolDef(
+ "rewrite_version",
+ label="Rewrite latest version",
+ default=False
+ )
+ )
+
+ return attributes
diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py
index fa58c03df1..d04893fa7e 100644
--- a/openpype/plugins/publish/extract_review.py
+++ b/openpype/plugins/publish/extract_review.py
@@ -23,7 +23,10 @@ from openpype.lib.transcoding import (
convert_input_paths_for_ffmpeg,
get_transcode_temp_directory,
)
-from openpype.pipeline.publish import KnownPublishError
+from openpype.pipeline.publish import (
+ KnownPublishError,
+ get_publish_instance_label,
+)
from openpype.pipeline.publish.lib import add_repre_files_for_cleanup
@@ -203,17 +206,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
return filtered_defs
- @staticmethod
- def get_instance_label(instance):
- return (
- getattr(instance, "label", None)
- or instance.data.get("label")
- or instance.data.get("name")
- or str(instance)
- )
-
def main_process(self, instance):
- instance_label = self.get_instance_label(instance)
+ instance_label = get_publish_instance_label(instance)
self.log.debug("Processing instance \"{}\"".format(instance_label))
profile_outputs = self._get_outputs_for_instance(instance)
if not profile_outputs:
diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py
index f6d4f654f5..2e87d8fc86 100644
--- a/openpype/plugins/publish/integrate_thumbnail.py
+++ b/openpype/plugins/publish/integrate_thumbnail.py
@@ -20,6 +20,7 @@ import pyblish.api
from openpype.client import get_versions
from openpype.client.operations import OperationsSession, new_thumbnail_doc
+from openpype.pipeline.publish import get_publish_instance_label
InstanceFilterResult = collections.namedtuple(
"InstanceFilterResult",
@@ -133,7 +134,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
filtered_instances = []
for instance in context:
- instance_label = self._get_instance_label(instance)
+ instance_label = get_publish_instance_label(instance)
# Skip instances without published representations
# - there is no place where to put the thumbnail
published_repres = instance.data.get("published_representations")
@@ -248,7 +249,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
for instance_item in filtered_instance_items:
instance, thumbnail_path, version_id = instance_item
- instance_label = self._get_instance_label(instance)
+ instance_label = get_publish_instance_label(instance)
version_doc = version_docs_by_str_id.get(version_id)
if not version_doc:
self.log.warning((
@@ -339,10 +340,3 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin):
))
op_session.commit()
-
- def _get_instance_label(self, instance):
- return (
- instance.data.get("label")
- or instance.data.get("name")
- or "N/A"
- )
diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json
index 20eec0c09d..41aebfa537 100644
--- a/openpype/settings/defaults/project_settings/blender.json
+++ b/openpype/settings/defaults/project_settings/blender.json
@@ -1,4 +1,9 @@
{
+ "unit_scale_settings": {
+ "enabled": true,
+ "apply_on_opening": false,
+ "base_file_unit_scale": 0.01
+ },
"imageio": {
"ocio_config": {
"enabled": false,
diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json
index 75f335f1de..002e547feb 100644
--- a/openpype/settings/defaults/project_settings/global.json
+++ b/openpype/settings/defaults/project_settings/global.json
@@ -46,6 +46,10 @@
"enabled": false,
"families": []
},
+ "CollectFramesFixDef": {
+ "enabled": true,
+ "rewrite_version_enable": true
+ },
"ValidateEditorialAssetName": {
"enabled": true,
"optional": false
@@ -252,7 +256,9 @@
}
},
{
- "families": ["review"],
+ "families": [
+ "review"
+ ],
"hosts": [
"maya",
"houdini"
diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json
index f01bdf7d50..3f8be4c872 100644
--- a/openpype/settings/defaults/project_settings/nuke.json
+++ b/openpype/settings/defaults/project_settings/nuke.json
@@ -222,6 +222,13 @@
"title": "OpenPype Docs",
"command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')",
"tooltip": "Open the OpenPype Nuke user doc page"
+ },
+ {
+ "type": "action",
+ "sourcetype": "python",
+ "title": "Set Frame Start (Read Node)",
+ "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();",
+ "tooltip": "Set frame start for read node(s)"
}
]
},
diff --git a/openpype/settings/defaults/project_settings/resolve.json b/openpype/settings/defaults/project_settings/resolve.json
index 264f3bd902..56efa78e89 100644
--- a/openpype/settings/defaults/project_settings/resolve.json
+++ b/openpype/settings/defaults/project_settings/resolve.json
@@ -1,4 +1,5 @@
{
+ "launch_openpype_menu_on_start": false,
"imageio": {
"ocio_config": {
"enabled": false,
diff --git a/openpype/settings/defaults/project_settings/unreal.json b/openpype/settings/defaults/project_settings/unreal.json
index 737a17d289..92bdb468ba 100644
--- a/openpype/settings/defaults/project_settings/unreal.json
+++ b/openpype/settings/defaults/project_settings/unreal.json
@@ -15,6 +15,6 @@
"preroll_frames": 0,
"render_format": "png",
"project_setup": {
- "dev_mode": true
+ "dev_mode": false
}
}
diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json
index 725d9bfb08..5b40169872 100644
--- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json
+++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json
@@ -5,6 +5,32 @@
"label": "Blender",
"is_file": true,
"children": [
+ {
+ "key": "unit_scale_settings",
+ "type": "dict",
+ "label": "Set Unit Scale",
+ "collapsible": true,
+ "is_group": true,
+ "checkbox_key": "enabled",
+ "children": [
+ {
+ "type": "boolean",
+ "key": "enabled",
+ "label": "Enabled"
+ },
+ {
+ "key": "apply_on_opening",
+ "type": "boolean",
+ "label": "Apply on Opening Existing Files"
+ },
+ {
+ "key": "base_file_unit_scale",
+ "type": "number",
+ "label": "Base File Unit Scale",
+ "decimal": 10
+ }
+ ]
+ },
{
"key": "imageio",
"type": "dict",
diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json b/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json
index b326f22394..6f98bdd3bd 100644
--- a/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json
+++ b/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json
@@ -5,6 +5,11 @@
"label": "DaVinci Resolve",
"is_file": true,
"children": [
+ {
+ "type": "boolean",
+ "key": "launch_openpype_menu_on_start",
+ "label": "Launch OpenPype menu on start of Resolve"
+ },
{
"key": "imageio",
"type": "dict",
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
index a7617918a3..3164cfb62d 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json
@@ -81,6 +81,26 @@
}
]
},
+ {
+ "type": "dict",
+ "collapsible": true,
+ "checkbox_key": "enabled",
+ "key": "CollectFramesFixDef",
+ "label": "Collect Frames to Fix",
+ "is_group": true,
+ "children": [
+ {
+ "type": "boolean",
+ "key": "enabled",
+ "label": "Enabled"
+ },
+ {
+ "type": "boolean",
+ "key": "rewrite_version_enable",
+ "label": "Show 'Rewrite latest version' toggle"
+ }
+ ]
+ },
{
"type": "dict",
"collapsible": true,
diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py
index 8095d00103..89c2343ef7 100644
--- a/openpype/tools/publisher/control.py
+++ b/openpype/tools/publisher/control.py
@@ -40,6 +40,7 @@ from openpype.pipeline.create.context import (
CreatorsOperationFailed,
ConvertorsOperationFailed,
)
+from openpype.pipeline.publish import get_publish_instance_label
# Define constant for plugin orders offset
PLUGIN_ORDER_OFFSET = 0.5
@@ -346,7 +347,7 @@ class PublishReportMaker:
def _extract_instance_data(self, instance, exists):
return {
"name": instance.data.get("name"),
- "label": instance.data.get("label"),
+ "label": get_publish_instance_label(instance),
"family": instance.data["family"],
"families": instance.data.get("families") or [],
"exists": exists,
diff --git a/openpype/tools/publisher/widgets/border_label_widget.py b/openpype/tools/publisher/widgets/border_label_widget.py
index 5617e159cd..e5693368b1 100644
--- a/openpype/tools/publisher/widgets/border_label_widget.py
+++ b/openpype/tools/publisher/widgets/border_label_widget.py
@@ -14,32 +14,44 @@ class _VLineWidget(QtWidgets.QWidget):
It is expected that parent widget will set width.
"""
- def __init__(self, color, left, parent):
+ def __init__(self, color, line_size, left, parent):
super(_VLineWidget, self).__init__(parent)
self._color = color
self._left = left
+ self._line_size = line_size
+
+ def set_line_size(self, line_size):
+ self._line_size = line_size
def paintEvent(self, event):
if not self.isVisible():
return
- if self._left:
- pos_x = 0
- else:
- pos_x = self.width()
+ pos_x = self._line_size * 0.5
+ if not self._left:
+ pos_x = self.width() - pos_x
+
painter = QtGui.QPainter(self)
painter.setRenderHints(
QtGui.QPainter.Antialiasing
| QtGui.QPainter.SmoothPixmapTransform
)
+
if self._color:
pen = QtGui.QPen(self._color)
else:
pen = painter.pen()
- pen.setWidth(1)
+ pen.setWidth(self._line_size)
painter.setPen(pen)
painter.setBrush(QtCore.Qt.transparent)
- painter.drawLine(pos_x, 0, pos_x, self.height())
+ painter.drawRect(
+ QtCore.QRectF(
+ pos_x,
+ -self._line_size,
+ pos_x + (self.width() * 2),
+ self.height() + (self._line_size * 2)
+ )
+ )
painter.end()
@@ -56,34 +68,46 @@ class _HBottomLineWidget(QtWidgets.QWidget):
It is expected that parent widget will set height and radius.
"""
- def __init__(self, color, parent):
+ def __init__(self, color, line_size, parent):
super(_HBottomLineWidget, self).__init__(parent)
self._color = color
self._radius = 0
+ self._line_size = line_size
def set_radius(self, radius):
self._radius = radius
+ def set_line_size(self, line_size):
+ self._line_size = line_size
+
def paintEvent(self, event):
if not self.isVisible():
return
- rect = QtCore.QRect(
- 0, -self._radius, self.width(), self.height() + self._radius
+ x_offset = self._line_size * 0.5
+ rect = QtCore.QRectF(
+ x_offset,
+ -self._radius,
+ self.width() - (2 * x_offset),
+ (self.height() + self._radius) - x_offset
)
painter = QtGui.QPainter(self)
painter.setRenderHints(
QtGui.QPainter.Antialiasing
| QtGui.QPainter.SmoothPixmapTransform
)
+
if self._color:
pen = QtGui.QPen(self._color)
else:
pen = painter.pen()
- pen.setWidth(1)
+ pen.setWidth(self._line_size)
painter.setPen(pen)
painter.setBrush(QtCore.Qt.transparent)
- painter.drawRoundedRect(rect, self._radius, self._radius)
+ if self._radius:
+ painter.drawRoundedRect(rect, self._radius, self._radius)
+ else:
+ painter.drawRect(rect)
painter.end()
@@ -102,30 +126,38 @@ class _HTopCornerLineWidget(QtWidgets.QWidget):
It is expected that parent widget will set height and radius.
"""
- def __init__(self, color, left_side, parent):
+
+ def __init__(self, color, line_size, left_side, parent):
super(_HTopCornerLineWidget, self).__init__(parent)
self._left_side = left_side
+ self._line_size = line_size
self._color = color
self._radius = 0
def set_radius(self, radius):
self._radius = radius
+ def set_line_size(self, line_size):
+ self._line_size = line_size
+
def paintEvent(self, event):
if not self.isVisible():
return
- pos_y = self.height() / 2
-
+ pos_y = self.height() * 0.5
+ x_offset = self._line_size * 0.5
if self._left_side:
- rect = QtCore.QRect(
- 0, pos_y, self.width() + self._radius, self.height()
+ rect = QtCore.QRectF(
+ x_offset,
+ pos_y,
+ self.width() + self._radius + x_offset,
+ self.height()
)
else:
- rect = QtCore.QRect(
- -self._radius,
+ rect = QtCore.QRectF(
+ (-self._radius),
pos_y,
- self.width() + self._radius,
+ (self.width() + self._radius) - x_offset,
self.height()
)
@@ -138,10 +170,13 @@ class _HTopCornerLineWidget(QtWidgets.QWidget):
pen = QtGui.QPen(self._color)
else:
pen = painter.pen()
- pen.setWidth(1)
+ pen.setWidth(self._line_size)
painter.setPen(pen)
painter.setBrush(QtCore.Qt.transparent)
- painter.drawRoundedRect(rect, self._radius, self._radius)
+ if self._radius:
+ painter.drawRoundedRect(rect, self._radius, self._radius)
+ else:
+ painter.drawRect(rect)
painter.end()
@@ -163,8 +198,10 @@ class BorderedLabelWidget(QtWidgets.QFrame):
if color_value:
color = color_value.get_qcolor()
- top_left_w = _HTopCornerLineWidget(color, True, self)
- top_right_w = _HTopCornerLineWidget(color, False, self)
+ line_size = 1
+
+ top_left_w = _HTopCornerLineWidget(color, line_size, True, self)
+ top_right_w = _HTopCornerLineWidget(color, line_size, False, self)
label_widget = QtWidgets.QLabel(label, self)
@@ -175,10 +212,10 @@ class BorderedLabelWidget(QtWidgets.QFrame):
top_layout.addWidget(label_widget, 0)
top_layout.addWidget(top_right_w, 1)
- left_w = _VLineWidget(color, True, self)
- right_w = _VLineWidget(color, False, self)
+ left_w = _VLineWidget(color, line_size, True, self)
+ right_w = _VLineWidget(color, line_size, False, self)
- bottom_w = _HBottomLineWidget(color, self)
+ bottom_w = _HBottomLineWidget(color, line_size, self)
center_layout = QtWidgets.QHBoxLayout()
center_layout.setContentsMargins(5, 5, 5, 5)
@@ -201,6 +238,7 @@ class BorderedLabelWidget(QtWidgets.QFrame):
self._widget = None
self._radius = 0
+ self._line_size = line_size
self._top_left_w = top_left_w
self._top_right_w = top_right_w
@@ -216,14 +254,38 @@ class BorderedLabelWidget(QtWidgets.QFrame):
value, value, value, value
)
+ def set_line_size(self, line_size):
+ if self._line_size == line_size:
+ return
+ self._line_size = line_size
+ for widget in (
+ self._top_left_w,
+ self._top_right_w,
+ self._left_w,
+ self._right_w,
+ self._bottom_w
+ ):
+ widget.set_line_size(line_size)
+ self._recalculate_sizes()
+
def showEvent(self, event):
super(BorderedLabelWidget, self).showEvent(event)
+ self._recalculate_sizes()
+ def _recalculate_sizes(self):
height = self._label_widget.height()
- radius = (height + (height % 2)) / 2
+ radius = int((height + (height % 2)) / 2)
self._radius = radius
- side_width = 1 + radius
+ radius_size = self._line_size + 1
+ if radius_size < radius:
+ radius_size = radius
+
+ if radius:
+ side_width = self._line_size + radius
+ else:
+ side_width = self._line_size + 1
+
# Don't use fixed width/height as that would set also set
# the other size (When fixed width is set then is also set
# fixed height).
@@ -231,8 +293,8 @@ class BorderedLabelWidget(QtWidgets.QFrame):
self._left_w.setMaximumWidth(side_width)
self._right_w.setMinimumWidth(side_width)
self._right_w.setMaximumWidth(side_width)
- self._bottom_w.setMinimumHeight(radius)
- self._bottom_w.setMaximumHeight(radius)
+ self._bottom_w.setMinimumHeight(radius_size)
+ self._bottom_w.setMaximumHeight(radius_size)
self._bottom_w.set_radius(radius)
self._top_right_w.set_radius(radius)
self._top_left_w.set_radius(radius)
diff --git a/openpype/tools/publisher/widgets/publish_frame.py b/openpype/tools/publisher/widgets/publish_frame.py
index d21130deff..d423f97047 100644
--- a/openpype/tools/publisher/widgets/publish_frame.py
+++ b/openpype/tools/publisher/widgets/publish_frame.py
@@ -310,7 +310,7 @@ class PublishFrame(QtWidgets.QWidget):
self._set_success_property()
self._set_progress_visibility(True)
- self._main_label.setText("Hit publish (play button)! If you want")
+ self._main_label.setText("")
self._message_label_top.setText("")
self._reset_btn.setEnabled(True)
@@ -331,6 +331,7 @@ class PublishFrame(QtWidgets.QWidget):
self._set_success_property(3)
self._set_progress_visibility(True)
self._set_main_label("Publishing...")
+ self._message_label_top.setText("")
self._reset_btn.setEnabled(False)
self._stop_btn.setEnabled(True)
diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py
index fc90e66f21..6ab444109e 100644
--- a/openpype/tools/publisher/window.py
+++ b/openpype/tools/publisher/window.py
@@ -676,7 +676,15 @@ class PublisherWindow(QtWidgets.QDialog):
self._tabs_widget.set_current_tab(identifier)
def set_current_tab(self, tab):
- self._set_current_tab(tab)
+ if tab == "create":
+ self._go_to_create_tab()
+ elif tab == "publish":
+ self._go_to_publish_tab()
+ elif tab == "report":
+ self._go_to_report_tab()
+ elif tab == "details":
+ self._go_to_details_tab()
+
if not self._window_is_visible:
self.set_tab_on_reset(tab)
@@ -686,6 +694,12 @@ class PublisherWindow(QtWidgets.QDialog):
def _go_to_create_tab(self):
if self._create_tab.isEnabled():
self._set_current_tab("create")
+ return
+
+ self._overlay_object.add_message(
+ "Can't switch to Create tab because publishing is paused.",
+ message_type="info"
+ )
def _go_to_publish_tab(self):
self._set_current_tab("publish")
diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py
index 950c782727..58ece7c68f 100644
--- a/openpype/tools/utils/lib.py
+++ b/openpype/tools/utils/lib.py
@@ -872,7 +872,6 @@ class WrappedCallbackItem:
self.log.warning("- item is already processed")
return
- self.log.debug("Running callback: {}".format(str(self._callback)))
try:
result = self._callback(*self._args, **self._kwargs)
self._result = result
diff --git a/openpype/tools/utils/overlay_messages.py b/openpype/tools/utils/overlay_messages.py
index 180d7eae97..4da266bcf7 100644
--- a/openpype/tools/utils/overlay_messages.py
+++ b/openpype/tools/utils/overlay_messages.py
@@ -127,8 +127,7 @@ class OverlayMessageWidget(QtWidgets.QFrame):
if timeout:
self._timeout_timer.setInterval(timeout)
- if message_type:
- set_style_property(self, "type", message_type)
+ set_style_property(self, "type", message_type)
self._timeout_timer.start()
diff --git a/openpype/version.py b/openpype/version.py
index 342bbfc85a..dd23138dee 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.15.8"
+__version__ = "3.15.9"
diff --git a/pyproject.toml b/pyproject.toml
index a72a3d66d7..633899d3a0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.15.8" # OpenPype
+version = "3.15.9" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"
diff --git a/tests/README.md b/tests/README.md
index d36b6534f8..20847b2449 100644
--- a/tests/README.md
+++ b/tests/README.md
@@ -15,16 +15,16 @@ Structure:
- openpype/modules/MODULE_NAME - structure follow directory structure in code base
- fixture - sample data `(MongoDB dumps, test files etc.)`
- `tests.py` - single or more pytest files for MODULE_NAME
-- unit - quick unit test
- - MODULE_NAME
+- unit - quick unit test
+ - MODULE_NAME
- fixture
- `tests.py`
-
+
How to run:
----------
- use Openpype command 'runtests' from command line (`.venv` in ${OPENPYPE_ROOT} must be activated to use configured Python!)
-- `python ${OPENPYPE_ROOT}/start.py runtests`
-
+
By default, this command will run all tests in ${OPENPYPE_ROOT}/tests.
Specific location could be provided to this command as an argument, either as absolute path, or relative path to ${OPENPYPE_ROOT}.
@@ -41,17 +41,15 @@ In some cases your tests might be so localized, that you don't care about all en
In that case you might add this dummy configuration BEFORE any imports in your test file
```
import os
-os.environ["AVALON_MONGO"] = "mongodb://localhost:27017"
+os.environ["OPENPYPE_DEBUG"] = "1"
os.environ["OPENPYPE_MONGO"] = "mongodb://localhost:27017"
-os.environ["AVALON_DB"] = "avalon"
os.environ["OPENPYPE_DATABASE_NAME"] = "openpype"
-os.environ["AVALON_TIMEOUT"] = '3000'
-os.environ["OPENPYPE_DEBUG"] = "3"
-os.environ["AVALON_CONFIG"] = "pype"
+os.environ["AVALON_DB"] = "avalon"
+os.environ["AVALON_TIMEOUT"] = "3000"
os.environ["AVALON_ASSET"] = "Asset"
os.environ["AVALON_PROJECT"] = "test_project"
```
(AVALON_ASSET and AVALON_PROJECT values should exist in your environment)
This might be enough to run your test file separately. Do not commit this skeleton though.
-Use only when you know what you are doing!
\ No newline at end of file
+Use only when you know what you are doing!
diff --git a/website/docs/artist_hosts_3dsmax.md b/website/docs/artist_hosts_3dsmax.md
index 12c1f40181..fffab8ca5d 100644
--- a/website/docs/artist_hosts_3dsmax.md
+++ b/website/docs/artist_hosts_3dsmax.md
@@ -30,7 +30,7 @@ By clicking the icon ```OpenPype Menu``` rolls out.
Choose ```OpenPype Menu > Launcher``` to open the ```Launcher``` window.
-When opened you can **choose** the **project** to work in from the list. Then choose the particular **asset** you want to work on then choose **task**
+When opened you can **choose** the **project** to work in from the list. Then choose the particular **asset** you want to work on, then choose the **task**,
and finally **run 3dsmax by its icon** in the tools.

@@ -65,13 +65,13 @@ If not any workfile present simply hit ```Save As``` and keep ```Subversion``` e

-OpenPype correctly names it and add version to the workfile. This basically happens whenever user trigger ```Save As``` action. Resulting into incremental version numbers like
+OpenPype correctly names it and adds a version to the workfile. This basically happens whenever the user triggers the ```Save As``` action, resulting in incremental version numbers like
```workfileName_v001```
```workfileName_v002```
- etc.
+ etc.
Basically meaning user is free of guessing what is the correct naming and other necessities to keep everything in order and managed.
@@ -105,13 +105,13 @@ Before proceeding further please check [Glossary](artist_concepts.md) and [What
### Intro
-Current OpenPype integration (ver 3.15.0) supports only ```PointCache``` and ```Camera``` families now.
+The current OpenPype integration (ver 3.15.0) supports only the ```PointCache```, ```Camera```, ```Geometry``` and ```Redshift Proxy``` families.
**Pointcache** family being basically any geometry outputted as Alembic cache (.abc) format
**Camera** family being 3dsmax Camera object with/without animation outputted as native .max, FBX, Alembic format
-
+**Redshift Proxy** family being a Redshift Proxy object with/without animation outputted as .rs format (Redshift Proxy's own format)
---
:::note Work in progress
@@ -119,7 +119,3 @@ This part of documentation is still work in progress.
:::
## ...to be added
-
-
-
-
diff --git a/website/docs/dev_blender.md b/website/docs/dev_blender.md
new file mode 100644
index 0000000000..bed0e4a09d
--- /dev/null
+++ b/website/docs/dev_blender.md
@@ -0,0 +1,61 @@
+---
+id: dev_blender
+title: Blender integration
+sidebar_label: Blender integration
+toc_max_heading_level: 4
+---
+
+## Run python script at launch
+In case you need to execute a Python script when Blender is started (aka [`-P`](https://docs.blender.org/manual/en/latest/advanced/command_line/arguments.html#python-options)), for example to programmatically conform a Blender file, you can create an OpenPype hook as follows:
+
+```python
+from openpype.hosts.blender.hooks import pre_add_run_python_script_arg
+from openpype.lib import PreLaunchHook
+
+
+class MyHook(PreLaunchHook):
+ """Add python script to be executed before Blender launch."""
+
+ order = pre_add_run_python_script_arg.AddPythonScriptToLaunchArgs.order - 1
+ app_groups = [
+ "blender",
+ ]
+
+ def execute(self):
+ self.launch_context.data.setdefault("python_scripts", []).append(
+ "/path/to/my_script.py"
+ )
+```
+
+You can write a bare Python script, just like one you would run in the [Text Editor](https://docs.blender.org/manual/en/latest/editors/text_editor.html).
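+
+As a minimal sketch, such a bare script could look like the following (the scene iteration below is only an illustration, not part of the integration):
+
+```python
+import bpy
+
+# Print the name of every object in the current scene
+for obj in bpy.context.scene.objects:
+    print(obj.name)
+```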
+
+### Python script with arguments
+#### Adding arguments
+In case you need to pass arguments to your script, you can add them to `self.launch_context.data["script_args"]`:
+
+```python
+self.launch_context.data.setdefault("script_args", []).extend(
+ ["--my-arg", "value"]
+)
+```
+
+#### Parsing arguments
+You can parse arguments in your script using [argparse](https://docs.python.org/3/library/argparse.html) as follows:
+
+```python
+import argparse
+import sys
+
+parser = argparse.ArgumentParser(
+ description="Parsing arguments for my_script.py"
+)
+parser.add_argument(
+ "--my-arg",
+ nargs="?",
+ help="My argument",
+)
+args, unknown = parser.parse_known_args(
+ sys.argv[sys.argv.index("--") + 1 :]
+)
+print(args.my_arg)
+```
diff --git a/website/docs/module_kitsu.md b/website/docs/module_kitsu.md
index d79c78fecf..9695542723 100644
--- a/website/docs/module_kitsu.md
+++ b/website/docs/module_kitsu.md
@@ -18,9 +18,20 @@ This setting is available for all the users of the OpenPype instance.
## Synchronize
Updating OP with Kitsu data is executed by running the `sync-service`, which requires you to provide your Kitsu credentials with `-l, --login` and `-p, --password`, or by setting the environment variables `KITSU_LOGIN` and `KITSU_PWD`. This process will request data from Kitsu and create/delete/update OP assets.
Once this sync is done, the thread will automatically start a loop to listen to Kitsu events.
+- `-prj, --project` This flag accepts multiple project names to sync only specific projects; by default all projects are synced.
+- `-lo, --listen-only` This flag only listens to Kitsu events, without any syncing.
+
+Note: Use either `-prj` or `-lo`, not both, because the listen-only flag overrides syncing.
```bash
+# sync all projects, then start listening
openpype_console module kitsu sync-service -l me@domain.ext -p my_password
+
+# sync specific projects, then start listening
+openpype_console module kitsu sync-service -l me@domain.ext -p my_password -prj project_name01 -prj project_name02
+
+# start listening only, without syncing
+openpype_console module kitsu sync-service -l me@domain.ext -p my_password -lo
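+
+# credentials can also come from the KITSU_LOGIN / KITSU_PWD environment
+# variables (illustrative example, assuming a bash-like shell)
+export KITSU_LOGIN=me@domain.ext
+export KITSU_PWD=my_password
+openpype_console module kitsu sync-service -lo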
```
### Events listening
diff --git a/website/docs/project_settings/settings_project_global.md b/website/docs/project_settings/settings_project_global.md
index c17f707830..7bd24a5773 100644
--- a/website/docs/project_settings/settings_project_global.md
+++ b/website/docs/project_settings/settings_project_global.md
@@ -63,7 +63,7 @@ Example here describes use case for creation of new color coded review of png im

Another use case is to transcode in Maya only `beauty` render layers and use collected `Display` and `View` colorspaces from DCC.
-n
+
## Profile filters
diff --git a/website/sidebars.js b/website/sidebars.js
index 4874782197..267cc7f6d7 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -180,6 +180,7 @@ module.exports = {
]
},
"dev_deadline",
+ "dev_blender",
"dev_colorspace"
]
};