diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 0762eb2f20..e3ca8262e5 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,12 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.17.2-nightly.2 + - 3.17.2-nightly.1 + - 3.17.1 + - 3.17.1-nightly.3 + - 3.17.1-nightly.2 + - 3.17.1-nightly.1 - 3.17.0 - 3.16.7 - 3.16.7-nightly.2 @@ -129,12 +135,6 @@ body: - 3.14.10-nightly.9 - 3.14.10-nightly.8 - 3.14.10-nightly.7 - - 3.14.10-nightly.6 - - 3.14.10-nightly.5 - - 3.14.10-nightly.4 - - 3.14.10-nightly.3 - - 3.14.10-nightly.2 - - 3.14.10-nightly.1 validations: required: true - type: dropdown diff --git a/CHANGELOG.md b/CHANGELOG.md index 4bcf66a210..8f14340348 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,270 @@ # Changelog +## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.0...3.17.1) + +### **🆕 New features** + + +
+Unreal: Yeti support #5643 + +Implemented Yeti support for Unreal. + + +___ + +
+ + +
+Houdini: Add Static Mesh product-type (family) #5481 + +This PR adds support to publish Unreal Static Mesh in Houdini as FBX. Quick recap: +- [x] Add UE Static Mesh Creator +- [x] Dynamic subset name like in Maya +- [x] Collect Static Mesh Type +- [x] Update collect output node +- [x] Validate FBX output node +- [x] Validate mesh is static +- [x] Validate Unreal Static Mesh Name +- [x] Validate Subset Name +- [x] FBX Extractor +- [x] FBX Loader +- [x] Update OP Settings +- [x] Update AYON Settings + + +___ + +
+ + +
+Launcher tool: Refactor launcher tool (for AYON) #5612 + +Refactored the launcher tool into a new tool, separating backend and frontend logic. The refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: Use custom staging dir function for Maya renders - OP-5265 #5186 + +Check for a custom staging dir when setting the render output folder in Maya. + + +___ + +
+ + +
+Colorspace: updating file path detection methods #5273 + +Support for OCIO v2 file rules is integrated into the available color management API. + + +___ + +
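As an illustration of what OCIO v2 file rules expose (not the OpenPype color management API itself), here is a minimal PyOpenColorIO sketch; the config path is a placeholder:

```python
# Sketch only: inspect OCIO v2 file rules with PyOpenColorIO.
# The config path below is a placeholder, not an OpenPype setting.
import PyOpenColorIO as OCIO

config = OCIO.Config.CreateFromFile("/path/to/config.ocio")
rules = config.getFileRules()

for index in range(rules.getNumEntries()):
    print(
        rules.getName(index),        # rule name, e.g. "Default"
        rules.getPattern(index),     # filename glob the rule matches
        rules.getExtension(index),   # extension glob, e.g. "exr"
        rules.getColorSpace(index),  # colorspace assigned by the rule
    )
```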
+ + +
+Chore: add default isort config #5572 + +Add default configuration for isort tool + + +___ + +
+ + +
+Deadline: set PATH environment in deadline jobs by GlobalJobPreLoad #5622 + +This PR makes `GlobalJobPreLoad` set the `PATH` environment variable in Deadline jobs so that we don't have to use the full executable path for Deadline to launch the DCC app. This trick should save us from adding logic to pass the Houdini patch version and from modifying the Houdini Deadline plugin, and it should work with other DCCs as well. A sketch of the idea is shown below. + + +___ + +
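A minimal sketch of the idea, assuming a hypothetical job environment key (`OPENPYPE_APP_ROOT`) that the submitter would set; `GetJob`, `GetJobEnvironmentKeyValue` and `SetProcessEnvironmentVariable` are standard Deadline scripting API calls:

```python
# Hypothetical excerpt of a GlobalJobPreLoad.py hook (Deadline scripting API).
# The "OPENPYPE_APP_ROOT" job environment key is an assumption for illustration.
import os


def __main__(deadlinePlugin):
    job = deadlinePlugin.GetJob()

    # Directory of the DCC executable, passed along with the job on submission.
    app_dir = job.GetJobEnvironmentKeyValue("OPENPYPE_APP_ROOT")
    if not app_dir:
        return

    # Prepend it to PATH so the Deadline plugin can launch the executable by
    # name instead of needing the full, patch-version-specific path.
    new_path = os.pathsep.join([app_dir, os.environ.get("PATH", "")])
    deadlinePlugin.SetProcessEnvironmentVariable("PATH", new_path)
```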
+ + +
+nuke: extract review data mov read node with expression #5635 + +Some productions might have set default values for read nodes; those settings no longer collide. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Support new publisher for colorsets validation. #5630 + +Fix `validate_color_sets` for the new publisher. In current `develop` the repair option does not appear because the wrong error type is raised (see the sketch below). + + +___ + +
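For context, a schematic validator (not the actual plugin) showing why the error type matters: the new publisher only offers the Repair action when a `PublishValidationError` is raised, using the same classes imported elsewhere in this diff:

```python
import pyblish.api
from openpype.pipeline.publish import (
    PublishValidationError,
    RepairAction,
    ValidateContentsOrder,
)


class ValidateSomething(pyblish.api.InstancePlugin):
    """Schematic validator: raise PublishValidationError so Repair shows up."""

    order = ValidateContentsOrder
    hosts = ["maya"]
    families = ["model"]
    label = "Validate Something (sketch)"
    actions = [RepairAction]

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            # A plain RuntimeError here would surface as a crash in the new
            # publisher and the Repair action button would not be offered.
            raise PublishValidationError(
                "Found invalid nodes: {}".format(invalid))

    @classmethod
    def get_invalid(cls, instance):
        return []  # placeholder check

    @classmethod
    def repair(cls, instance):
        pass  # placeholder fix
```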
+ + +
+Houdini: Camera Loader fix mismatch for Maya cameras #5584 + +This PR adds: +- A workaround to match the Maya render mask in Houdini +- A `SetCameraResolution` inventory action +- Setting the camera resolution when loading or updating a camera + + +___ + +
+ + +
+Nuke: fix set colorspace on writes #5634 + +The colorspace is now set correctly on any write node created from the publisher. + + +___ + +
+ + +
+TVPaint: Fix review family extraction #5637 + +The extractor now marks the representation of the review instance with the review tag. + + +___ + +
+ + +
+AYON settings: Extract OIIO transcode settings #5639 + +Output definitions of Extract OIIO transcode have a name to match OpenPype settings, and the settings are converted to a dictionary during settings conversion. + + +___ + +
+ + +
+AYON: Fix task type short name conversion #5641 + +Convert AYON task type short name for OpenPype correctly. + + +___ + +
+ + +
+colorspace: missing `allowed_exts` fix #5646 + +The colorspace module no longer fails due to a missing `allowed_exts` attribute. + + +___ + +
+ + +
+Photoshop: remove trailing underscore in subset name #5647 + +If the {layer} placeholder is at the end of the subset name template and is not used (for example in `auto_image`, where separating by layer doesn't make sense), a trailing '_' was kept. This updates the cleaning logic and extracts it out, as the same situation can occur for a regular `image` instance. The sketch below illustrates the idea. + + +___ + +
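A tiny sketch of the kind of cleanup meant here; the template and names are made up for illustration, not the actual plugin code:

```python
# Illustration only: strip the separator left behind when the trailing
# {layer} placeholder resolves to an empty string.
def format_subset_name(template, **data):
    data.setdefault("layer", "")
    subset_name = template.format(**data)
    # "imageMain_" -> "imageMain" when no layer name is filled in.
    return subset_name.rstrip("_")


print(format_subset_name("image{variant}_{layer}", variant="Main"))              # imageMain
print(format_subset_name("image{variant}_{layer}", variant="Main", layer="BG"))  # imageMain_BG
```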
+ + +
+traypublisher: missing `assetEntity` in context data #5648 + +The issue with a missing `assetEntity` key in context data is no longer a problem. + + +___ + +
+ + +
+AYON: Workfiles tool save button works #5653 + +Fix the Save As button in the workfiles tool. (It is a mystery why this stopped working.) + + +___ + +
+ + +
+Max: bug fix delete items from container #5658 + +Fix the bug that appeared when clicking "Delete Items from Container", selecting nothing and pressing OK. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Chore: Remove unused functions from Fusion integration #5617 + +Clean up unused code from the Fusion integration. + + +___ + +
+ +### **Merged pull requests** + + +
+Increase timeout for deadline test #5654 + +Deadline picks up jobs quite slowly, so the delay was bumped up. + + +___ + +
+ + + + ## [3.17.0](https://github.com/ynput/OpenPype/tree/3.17.0) diff --git a/openpype/cli.py b/openpype/cli.py index 0df277fb0a..7422f32f13 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -290,11 +290,15 @@ def run(script): "--setup_only", help="Only create dbs, do not run tests", default=None) +@click.option("--mongo_url", + help="MongoDB for testing.", + default=None) def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant, - timeout, setup_only): + timeout, setup_only, mongo_url): """Run all automatic tests after proper initialization via start.py""" PypeCommands().run_tests(folder, mark, pyargs, test_data_folder, - persist, app_variant, timeout, setup_only) + persist, app_variant, timeout, setup_only, + mongo_url) @main.command(help="DEPRECATED - run sync server") diff --git a/openpype/client/server/conversion_utils.py b/openpype/client/server/conversion_utils.py index f67a1ef9c4..8c18cb1c13 100644 --- a/openpype/client/server/conversion_utils.py +++ b/openpype/client/server/conversion_utils.py @@ -235,6 +235,8 @@ def convert_v4_project_to_v3(project): new_task_types = {} for task_type in task_types: name = task_type.pop("name") + # Change 'shortName' to 'short_name' + task_type["short_name"] = task_type.pop("shortName", None) new_task_types[name] = task_type config["tasks"] = new_task_types diff --git a/openpype/hooks/pre_ocio_hook.py b/openpype/hooks/pre_ocio_hook.py index add3a0adaf..e695cf3fe8 100644 --- a/openpype/hooks/pre_ocio_hook.py +++ b/openpype/hooks/pre_ocio_hook.py @@ -13,7 +13,7 @@ class OCIOEnvHook(PreLaunchHook): "fusion", "blender", "aftereffects", - "max", + "3dsmax", "houdini", "maya", "nuke", diff --git a/openpype/hosts/blender/api/__init__.py b/openpype/hosts/blender/api/__init__.py index 75a11affde..e15f1193a5 100644 --- a/openpype/hosts/blender/api/__init__.py +++ b/openpype/hosts/blender/api/__init__.py @@ -38,6 +38,8 @@ from .lib import ( from .capture import capture +from .render_lib import prepare_rendering + __all__ = [ "install", @@ -66,4 +68,5 @@ __all__ = [ "get_selection", "capture", # "unique_name", + "prepare_rendering", ] diff --git a/openpype/hosts/blender/api/colorspace.py b/openpype/hosts/blender/api/colorspace.py new file mode 100644 index 0000000000..4521612b7d --- /dev/null +++ b/openpype/hosts/blender/api/colorspace.py @@ -0,0 +1,51 @@ +import attr + +import bpy + + +@attr.s +class LayerMetadata(object): + """Data class for Render Layer metadata.""" + frameStart = attr.ib() + frameEnd = attr.ib() + + +@attr.s +class RenderProduct(object): + """ + Getting Colorspace as Specific Render Product Parameter for submitting + publish job. + """ + colorspace = attr.ib() # colorspace + view = attr.ib() # OCIO view transform + productName = attr.ib(default=None) + + +class ARenderProduct(object): + def __init__(self): + """Constructor.""" + # Initialize + self.layer_data = self._get_layer_data() + self.layer_data.products = self.get_render_products() + + def _get_layer_data(self): + scene = bpy.context.scene + + return LayerMetadata( + frameStart=int(scene.frame_start), + frameEnd=int(scene.frame_end), + ) + + def get_render_products(self): + """To be implemented by renderer class. + This should return a list of RenderProducts. 
+ Returns: + list: List of RenderProduct + """ + return [ + RenderProduct( + colorspace="sRGB", + view="ACES 1.0", + productName="" + ) + ] diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 62d7987b47..0eb90eeff9 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -16,6 +16,7 @@ import bpy import bpy.utils.previews from openpype import style +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import get_current_asset_name, get_current_task_name from openpype.tools.utils import host_tools @@ -331,10 +332,11 @@ class LaunchWorkFiles(LaunchQtApp): def execute(self, context): result = super().execute(context) - self._window.set_context({ - "asset": get_current_asset_name(), - "task": get_current_task_name() - }) + if not AYON_SERVER_ENABLED: + self._window.set_context({ + "asset": get_current_asset_name(), + "task": get_current_task_name() + }) return result def before_window_show(self): diff --git a/openpype/hosts/blender/api/render_lib.py b/openpype/hosts/blender/api/render_lib.py new file mode 100644 index 0000000000..d564b5ebcb --- /dev/null +++ b/openpype/hosts/blender/api/render_lib.py @@ -0,0 +1,255 @@ +import os + +import bpy + +from openpype.settings import get_project_settings +from openpype.pipeline import get_current_project_name + + +def get_default_render_folder(settings): + """Get default render folder from blender settings.""" + + return (settings["blender"] + ["RenderSettings"] + ["default_render_image_folder"]) + + +def get_aov_separator(settings): + """Get aov separator from blender settings.""" + + aov_sep = (settings["blender"] + ["RenderSettings"] + ["aov_separator"]) + + if aov_sep == "dash": + return "-" + elif aov_sep == "underscore": + return "_" + elif aov_sep == "dot": + return "." + else: + raise ValueError(f"Invalid aov separator: {aov_sep}") + + +def get_image_format(settings): + """Get image format from blender settings.""" + + return (settings["blender"] + ["RenderSettings"] + ["image_format"]) + + +def get_multilayer(settings): + """Get multilayer from blender settings.""" + + return (settings["blender"] + ["RenderSettings"] + ["multilayer_exr"]) + + +def get_render_product(output_path, name, aov_sep): + """ + Generate the path to the render product. Blender interprets the `#` + as the frame number, when it renders. + + Args: + file_path (str): The path to the blender scene. + render_folder (str): The render folder set in settings. + file_name (str): The name of the blender scene. + instance (pyblish.api.Instance): The instance to publish. + ext (str): The image format to render. 
+ """ + filepath = os.path.join(output_path, name) + render_product = f"{filepath}{aov_sep}beauty.####" + render_product = render_product.replace("\\", "/") + + return render_product + + +def set_render_format(ext, multilayer): + # Set Blender to save the file with the right extension + bpy.context.scene.render.use_file_extension = True + + image_settings = bpy.context.scene.render.image_settings + + if ext == "exr": + image_settings.file_format = ( + "OPEN_EXR_MULTILAYER" if multilayer else "OPEN_EXR") + elif ext == "bmp": + image_settings.file_format = "BMP" + elif ext == "rgb": + image_settings.file_format = "IRIS" + elif ext == "png": + image_settings.file_format = "PNG" + elif ext == "jpeg": + image_settings.file_format = "JPEG" + elif ext == "jp2": + image_settings.file_format = "JPEG2000" + elif ext == "tga": + image_settings.file_format = "TARGA" + elif ext == "tif": + image_settings.file_format = "TIFF" + + +def set_render_passes(settings): + aov_list = (settings["blender"] + ["RenderSettings"] + ["aov_list"]) + + custom_passes = (settings["blender"] + ["RenderSettings"] + ["custom_passes"]) + + vl = bpy.context.view_layer + + vl.use_pass_combined = "combined" in aov_list + vl.use_pass_z = "z" in aov_list + vl.use_pass_mist = "mist" in aov_list + vl.use_pass_normal = "normal" in aov_list + vl.use_pass_diffuse_direct = "diffuse_light" in aov_list + vl.use_pass_diffuse_color = "diffuse_color" in aov_list + vl.use_pass_glossy_direct = "specular_light" in aov_list + vl.use_pass_glossy_color = "specular_color" in aov_list + vl.eevee.use_pass_volume_direct = "volume_light" in aov_list + vl.use_pass_emit = "emission" in aov_list + vl.use_pass_environment = "environment" in aov_list + vl.use_pass_shadow = "shadow" in aov_list + vl.use_pass_ambient_occlusion = "ao" in aov_list + + cycles = vl.cycles + + cycles.denoising_store_passes = "denoising" in aov_list + cycles.use_pass_volume_direct = "volume_direct" in aov_list + cycles.use_pass_volume_indirect = "volume_indirect" in aov_list + + aovs_names = [aov.name for aov in vl.aovs] + for cp in custom_passes: + cp_name = cp[0] + if cp_name not in aovs_names: + aov = vl.aovs.add() + aov.name = cp_name + else: + aov = vl.aovs[cp_name] + aov.type = cp[1].get("type", "VALUE") + + return aov_list, custom_passes + + +def set_node_tree(output_path, name, aov_sep, ext, multilayer): + # Set the scene to use the compositor node tree to render + bpy.context.scene.use_nodes = True + + tree = bpy.context.scene.node_tree + + # Get the Render Layers node + rl_node = None + for node in tree.nodes: + if node.bl_idname == "CompositorNodeRLayers": + rl_node = node + break + + # If there's not a Render Layers node, we create it + if not rl_node: + rl_node = tree.nodes.new("CompositorNodeRLayers") + + # Get the enabled output sockets, that are the active passes for the + # render. + # We also exclude some layers. + exclude_sockets = ["Image", "Alpha", "Noisy Image"] + passes = [ + socket + for socket in rl_node.outputs + if socket.enabled and socket.name not in exclude_sockets + ] + + # Remove all output nodes + for node in tree.nodes: + if node.bl_idname == "CompositorNodeOutputFile": + tree.nodes.remove(node) + + # Create a new output node + output = tree.nodes.new("CompositorNodeOutputFile") + + image_settings = bpy.context.scene.render.image_settings + output.format.file_format = image_settings.file_format + + # In case of a multilayer exr, we don't need to use the output node, + # because the blender render already outputs a multilayer exr. 
+ if ext == "exr" and multilayer: + output.layer_slots.clear() + return [] + + output.file_slots.clear() + output.base_path = output_path + + aov_file_products = [] + + # For each active render pass, we add a new socket to the output node + # and link it + for render_pass in passes: + filepath = f"{name}{aov_sep}{render_pass.name}.####" + + output.file_slots.new(filepath) + + aov_file_products.append( + (render_pass.name, os.path.join(output_path, filepath))) + + node_input = output.inputs[-1] + + tree.links.new(render_pass, node_input) + + return aov_file_products + + +def imprint_render_settings(node, data): + RENDER_DATA = "render_data" + if not node.get(RENDER_DATA): + node[RENDER_DATA] = {} + for key, value in data.items(): + if value is None: + continue + node[RENDER_DATA][key] = value + + +def prepare_rendering(asset_group): + name = asset_group.name + + filepath = bpy.data.filepath + assert filepath, "Workfile not saved. Please save the file first." + + file_path = os.path.dirname(filepath) + file_name = os.path.basename(filepath) + file_name, _ = os.path.splitext(file_name) + + project = get_current_project_name() + settings = get_project_settings(project) + + render_folder = get_default_render_folder(settings) + aov_sep = get_aov_separator(settings) + ext = get_image_format(settings) + multilayer = get_multilayer(settings) + + set_render_format(ext, multilayer) + aov_list, custom_passes = set_render_passes(settings) + + output_path = os.path.join(file_path, render_folder, file_name) + + render_product = get_render_product(output_path, name, aov_sep) + aov_file_product = set_node_tree( + output_path, name, aov_sep, ext, multilayer) + + bpy.context.scene.render.filepath = render_product + + render_settings = { + "render_folder": render_folder, + "aov_separator": aov_sep, + "image_format": ext, + "multilayer_exr": multilayer, + "aov_list": aov_list, + "custom_passes": custom_passes, + "render_product": render_product, + "aov_file_product": aov_file_product, + "review": True, + } + + imprint_render_settings(asset_group, render_settings) diff --git a/openpype/hosts/blender/plugins/create/create_render.py b/openpype/hosts/blender/plugins/create/create_render.py new file mode 100644 index 0000000000..f938a21808 --- /dev/null +++ b/openpype/hosts/blender/plugins/create/create_render.py @@ -0,0 +1,53 @@ +"""Create render.""" +import bpy + +from openpype.pipeline import get_current_task_name +from openpype.hosts.blender.api import plugin, lib +from openpype.hosts.blender.api.render_lib import prepare_rendering +from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES + + +class CreateRenderlayer(plugin.Creator): + """Single baked camera""" + + name = "renderingMain" + label = "Render" + family = "render" + icon = "eye" + + def process(self): + # Get Instance Container or create it if it does not exist + instances = bpy.data.collections.get(AVALON_INSTANCES) + if not instances: + instances = bpy.data.collections.new(name=AVALON_INSTANCES) + bpy.context.scene.collection.children.link(instances) + + # Create instance object + asset = self.data["asset"] + subset = self.data["subset"] + name = plugin.asset_name(asset, subset) + asset_group = bpy.data.collections.new(name=name) + + try: + instances.children.link(asset_group) + self.data['task'] = get_current_task_name() + lib.imprint(asset_group, self.data) + + prepare_rendering(asset_group) + except Exception: + # Remove the instance if there was an error + bpy.data.collections.remove(asset_group) + raise + + # TODO: this is 
undesiderable, but it's the only way to be sure that + # the file is saved before the render starts. + # Blender, by design, doesn't set the file as dirty if modifications + # happen by script. So, when creating the instance and setting the + # render settings, the file is not marked as dirty. This means that + # there is the risk of sending to deadline a file without the right + # settings. Even the validator to check that the file is saved will + # detect the file as saved, even if it isn't. The only solution for + # now it is to force the file to be saved. + bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath) + + return asset_group diff --git a/openpype/hosts/blender/plugins/load/load_blend.py b/openpype/hosts/blender/plugins/load/load_blend.py index fa41f4374b..25d6568889 100644 --- a/openpype/hosts/blender/plugins/load/load_blend.py +++ b/openpype/hosts/blender/plugins/load/load_blend.py @@ -244,7 +244,7 @@ class BlendLoader(plugin.AssetLoader): for parent in parent_containers: parent.get(AVALON_PROPERTY)["members"] = list(filter( lambda i: i not in members, - parent.get(AVALON_PROPERTY)["members"])) + parent.get(AVALON_PROPERTY).get("members", []))) for attr in attrs: for data in getattr(bpy.data, attr): diff --git a/openpype/hosts/blender/plugins/publish/collect_render.py b/openpype/hosts/blender/plugins/publish/collect_render.py new file mode 100644 index 0000000000..92e2473a95 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/collect_render.py @@ -0,0 +1,123 @@ +# -*- coding: utf-8 -*- +"""Collect render data.""" + +import os +import re + +import bpy + +from openpype.hosts.blender.api import colorspace +import pyblish.api + + +class CollectBlenderRender(pyblish.api.InstancePlugin): + """Gather all publishable render layers from renderSetup.""" + + order = pyblish.api.CollectorOrder + 0.01 + hosts = ["blender"] + families = ["render"] + label = "Collect Render Layers" + sync_workfile_version = False + + @staticmethod + def generate_expected_beauty( + render_product, frame_start, frame_end, frame_step, ext + ): + """ + Generate the expected files for the render product for the beauty + render. This returns a list of files that should be rendered. It + replaces the sequence of `#` with the frame number. + """ + path = os.path.dirname(render_product) + file = os.path.basename(render_product) + + expected_files = [] + + for frame in range(frame_start, frame_end + 1, frame_step): + frame_str = str(frame).rjust(4, "0") + filename = re.sub("#+", frame_str, file) + expected_file = f"{os.path.join(path, filename)}.{ext}" + expected_files.append(expected_file.replace("\\", "/")) + + return { + "beauty": expected_files + } + + @staticmethod + def generate_expected_aovs( + aov_file_product, frame_start, frame_end, frame_step, ext + ): + """ + Generate the expected files for the render product for the beauty + render. This returns a list of files that should be rendered. It + replaces the sequence of `#` with the frame number. 
+ """ + expected_files = {} + + for aov_name, aov_file in aov_file_product: + path = os.path.dirname(aov_file) + file = os.path.basename(aov_file) + + aov_files = [] + + for frame in range(frame_start, frame_end + 1, frame_step): + frame_str = str(frame).rjust(4, "0") + filename = re.sub("#+", frame_str, file) + expected_file = f"{os.path.join(path, filename)}.{ext}" + aov_files.append(expected_file.replace("\\", "/")) + + expected_files[aov_name] = aov_files + + return expected_files + + def process(self, instance): + context = instance.context + + render_data = bpy.data.collections[str(instance)].get("render_data") + + assert render_data, "No render data found." + + self.log.info(f"render_data: {dict(render_data)}") + + render_product = render_data.get("render_product") + aov_file_product = render_data.get("aov_file_product") + ext = render_data.get("image_format") + multilayer = render_data.get("multilayer_exr") + + frame_start = context.data["frameStart"] + frame_end = context.data["frameEnd"] + frame_handle_start = context.data["frameStartHandle"] + frame_handle_end = context.data["frameEndHandle"] + + expected_beauty = self.generate_expected_beauty( + render_product, int(frame_start), int(frame_end), + int(bpy.context.scene.frame_step), ext) + + expected_aovs = self.generate_expected_aovs( + aov_file_product, int(frame_start), int(frame_end), + int(bpy.context.scene.frame_step), ext) + + expected_files = expected_beauty | expected_aovs + + instance.data.update({ + "family": "render.farm", + "frameStart": frame_start, + "frameEnd": frame_end, + "frameStartHandle": frame_handle_start, + "frameEndHandle": frame_handle_end, + "fps": context.data["fps"], + "byFrameStep": bpy.context.scene.frame_step, + "review": render_data.get("review", False), + "multipartExr": ext == "exr" and multilayer, + "farm": True, + "expectedFiles": [expected_files], + # OCIO not currently implemented in Blender, but the following + # settings are required by the schema, so it is hardcoded. 
+ # TODO: Implement OCIO in Blender + "colorspaceConfig": "", + "colorspaceDisplay": "sRGB", + "colorspaceView": "ACES 1.0 SDR-video", + "renderProducts": colorspace.ARenderProduct(), + }) + + self.log.info(f"data: {instance.data}") diff --git a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py index 27fa4baf28..3d176f9c30 100644 --- a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py +++ b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py @@ -9,7 +9,8 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin): label = "Increment Workfile Version" optional = True hosts = ["blender"] - families = ["animation", "model", "rig", "action", "layout", "blendScene"] + families = ["animation", "model", "rig", "action", "layout", "blendScene", + "render"] def process(self, context): diff --git a/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py b/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py new file mode 100644 index 0000000000..14220b5c9c --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/validate_deadline_publish.py @@ -0,0 +1,47 @@ +import os + +import bpy + +import pyblish.api +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.hosts.blender.api.render_lib import prepare_rendering + + +class ValidateDeadlinePublish(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validates Render File Directory is + not the same in every submission + """ + + order = ValidateContentsOrder + families = ["render.farm"] + hosts = ["blender"] + label = "Validate Render Output for Deadline" + optional = True + actions = [RepairAction] + + def process(self, instance): + if not self.is_active(instance.data): + return + filepath = bpy.data.filepath + file = os.path.basename(filepath) + filename, ext = os.path.splitext(file) + if filename not in bpy.context.scene.render.filepath: + raise PublishValidationError( + "Render output folder " + "doesn't match the blender scene name! " + "Use Repair action to " + "fix the folder file path.." 
+ ) + + @classmethod + def repair(cls, instance): + container = bpy.data.collections[str(instance)] + prepare_rendering(container) + bpy.ops.wm.save_as_mainfile(filepath=bpy.data.filepath) + cls.log.debug("Reset the render output folder...") diff --git a/openpype/hosts/blender/plugins/publish/validate_file_saved.py b/openpype/hosts/blender/plugins/publish/validate_file_saved.py new file mode 100644 index 0000000000..e191585c55 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/validate_file_saved.py @@ -0,0 +1,20 @@ +import bpy + +import pyblish.api + + +class ValidateFileSaved(pyblish.api.InstancePlugin): + """Validate that the workfile has been saved.""" + + order = pyblish.api.ValidatorOrder - 0.01 + hosts = ["blender"] + label = "Validate File Saved" + optional = False + exclude_families = [] + + def process(self, instance): + if [ef for ef in self.exclude_families + if instance.data["family"] in ef]: + return + if bpy.data.is_dirty: + raise RuntimeError("Workfile is not saved.") diff --git a/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py b/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py new file mode 100644 index 0000000000..ba3a796f35 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/validate_render_camera_is_set.py @@ -0,0 +1,17 @@ +import bpy + +import pyblish.api + + +class ValidateRenderCameraIsSet(pyblish.api.InstancePlugin): + """Validate that there is a camera set as active for rendering.""" + + order = pyblish.api.ValidatorOrder + hosts = ["blender"] + families = ["render"] + label = "Validate Render Camera Is Set" + optional = False + + def process(self, instance): + if not bpy.context.scene.camera: + raise RuntimeError("No camera is active for rendering.") diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index 39edca4de3..fccd8b2965 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -123,6 +123,9 @@ class CreateSaver(NewCreator): def _imprint(self, tool, data): # Save all data in a "openpype.{key}" = value data + # Instance id is the tool's name so we don't need to imprint as data + data.pop("instance_id", None) + active = data.pop("active", None) if active is not None: # Use active value to set the passthrough state @@ -188,6 +191,10 @@ class CreateSaver(NewCreator): passthrough = attrs["TOOLB_PassThrough"] data["active"] = not passthrough + # Override publisher's UUID generation because tool names are + # already unique in Fusion in a comp + data["instance_id"] = tool.Name + return data def get_pre_create_attr_defs(self): diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 2cd7ff83e3..c82ba11114 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -188,13 +188,14 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): self.customize_node_look(instance_node) instance_data["instance_node"] = instance_node.path() + instance_data["instance_id"] = instance_node.path() instance = CreatedInstance( self.family, subset_name, instance_data, self) self._add_instance_to_context(instance) - imprint(instance_node, instance.data_to_store()) + self.imprint(instance_node, instance.data_to_store()) if self.add_publish_button: add_self_publish_button(instance_node) @@ -227,25 +228,41 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): self.cache_subsets(self.collection_shared_data) for 
instance in self.collection_shared_data[ "houdini_cached_subsets"].get(self.identifier, []): + + node_data = read(instance) + + # Node paths are always the full node path since that is unique + # Because it's the node's path it's not written into attributes + # but explicitly collected + node_path = instance.path() + node_data["instance_id"] = node_path + node_data["instance_node"] = node_path + created_instance = CreatedInstance.from_existing( - read(instance), self + node_data, self ) self._add_instance_to_context(created_instance) def update_instances(self, update_list): for created_inst, changes in update_list: instance_node = hou.node(created_inst.get("instance_node")) - new_values = { key: changes[key].new_value for key in changes.changed_keys } - imprint( + self.imprint( instance_node, new_values, update=True ) + def imprint(self, node, values, update=False): + # Never store instance node and instance id since that data comes + # from the node's path + values.pop("instance_node", None) + values.pop("instance_id", None) + imprint(node, values, update=update) + def remove_instances(self, instances): """Remove specified instance from the scene. diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py index 4a150067e1..8b70b3ced7 100644 --- a/openpype/hosts/max/api/lib.py +++ b/openpype/hosts/max/api/lib.py @@ -1,15 +1,35 @@ # -*- coding: utf-8 -*- """Library of functions useful for 3dsmax pipeline.""" import contextlib +import logging import json from typing import Any, Dict, Union import six +from openpype.pipeline import get_current_project_name, colorspace +from openpype.settings import get_project_settings from openpype.pipeline.context_tools import ( get_current_project, get_current_project_asset) +from openpype.style import load_stylesheet from pymxs import runtime as rt + JSON_PREFIX = "JSON::" +log = logging.getLogger("openpype.hosts.max") + + +def get_main_window(): + """Acquire Max's main window""" + from qtpy import QtWidgets + top_widgets = QtWidgets.QApplication.topLevelWidgets() + name = "QmaxApplicationWindow" + for widget in top_widgets: + if ( + widget.inherits("QMainWindow") + and widget.metaObject().className() == name + ): + return widget + raise RuntimeError('Count not find 3dsMax main window.') def imprint(node_name: str, data: dict) -> bool: @@ -277,6 +297,7 @@ def set_context_setting(): """ reset_scene_resolution() reset_frame_range() + reset_colorspace() def get_max_version(): @@ -292,6 +313,14 @@ def get_max_version(): return max_info[7] +def is_headless(): + """Check if 3dsMax runs in batch mode. 
+ If it returns True, it runs in 3dsbatch.exe + If it returns False, it runs in 3dsmax.exe + """ + return rt.maxops.isInNonInteractiveMode() + + @contextlib.contextmanager def viewport_camera(camera): original = rt.viewport.getCamera() @@ -314,6 +343,51 @@ def set_timeline(frameStart, frameEnd): return rt.animationRange +def reset_colorspace(): + """OCIO Configuration + Supports in 3dsMax 2024+ + + """ + if int(get_max_version()) < 2024: + return + project_name = get_current_project_name() + colorspace_mgr = rt.ColorPipelineMgr + project_settings = get_project_settings(project_name) + + max_config_data = colorspace.get_imageio_config( + project_name, "max", project_settings) + if max_config_data: + ocio_config_path = max_config_data["path"] + colorspace_mgr = rt.ColorPipelineMgr + colorspace_mgr.Mode = rt.Name("OCIO_Custom") + colorspace_mgr.OCIOConfigPath = ocio_config_path + + colorspace_mgr.OCIOConfigPath = ocio_config_path + + +def check_colorspace(): + parent = get_main_window() + if parent is None: + log.info("Skipping outdated pop-up " + "because Max main window can't be found.") + if int(get_max_version()) >= 2024: + color_mgr = rt.ColorPipelineMgr + project_name = get_current_project_name() + project_settings = get_project_settings(project_name) + max_config_data = colorspace.get_imageio_config( + project_name, "max", project_settings) + if max_config_data and color_mgr.Mode != rt.Name("OCIO_Custom"): + if not is_headless(): + from openpype.widgets import popup + dialog = popup.Popup(parent=parent) + dialog.setWindowTitle("Warning: Wrong OCIO Mode") + dialog.setMessage("This scene has wrong OCIO " + "Mode setting.") + dialog.setButtonText("Fix") + dialog.setStyleSheet(load_stylesheet()) + dialog.on_clicked.connect(reset_colorspace) + dialog.show() + def unique_namespace(namespace, format="%02d", prefix="", suffix="", con_suffix="CON"): """Return unique namespace diff --git a/openpype/hosts/max/api/menu.py b/openpype/hosts/max/api/menu.py index 066cc90039..364f9cd5c5 100644 --- a/openpype/hosts/max/api/menu.py +++ b/openpype/hosts/max/api/menu.py @@ -119,6 +119,10 @@ class OpenPypeMenu(object): frame_action.triggered.connect(self.frame_range_callback) openpype_menu.addAction(frame_action) + colorspace_action = QtWidgets.QAction("Set Colorspace", openpype_menu) + colorspace_action.triggered.connect(self.colorspace_callback) + openpype_menu.addAction(colorspace_action) + return openpype_menu def load_callback(self): @@ -148,3 +152,7 @@ class OpenPypeMenu(object): def frame_range_callback(self): """Callback to reset frame range""" return lib.reset_frame_range() + + def colorspace_callback(self): + """Callback to reset colorspace""" + return lib.reset_colorspace() diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py index bd680a3d84..e46c4cabe7 100644 --- a/openpype/hosts/max/api/pipeline.py +++ b/openpype/hosts/max/api/pipeline.py @@ -57,6 +57,9 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): rt.callbacks.addScript(rt.Name('systemPostNew'), context_setting) + rt.callbacks.addScript(rt.Name('filePostOpen'), + lib.check_colorspace) + def has_unsaved_changes(self): # TODO: how to get it from 3dsmax? 
return True diff --git a/openpype/hosts/max/api/plugin.py b/openpype/hosts/max/api/plugin.py index 3389447cb0..fa6db073db 100644 --- a/openpype/hosts/max/api/plugin.py +++ b/openpype/hosts/max/api/plugin.py @@ -65,12 +65,12 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" on button_add pressed do ( - current_selection = selectByName title:"Select Objects to add to + current_sel = selectByName title:"Select Objects to add to the Container" buttontext:"Add" filter:nodes_to_add - if current_selection == undefined then return False + if current_sel == undefined then return False temp_arr = #() i_node_arr = #() - for c in current_selection do + for c in current_sel do ( handle_name = node_to_name c node_ref = NodeTransformMonitor node:c @@ -89,15 +89,18 @@ MS_CUSTOM_ATTRIB = """attributes "openPypeData" on button_del pressed do ( - current_selection = selectByName title:"Select Objects to remove + current_sel = selectByName title:"Select Objects to remove from the Container" buttontext:"Remove" filter: nodes_to_rmv - if current_selection == undefined then return False + if current_sel == undefined or current_sel.count == 0 then + ( + return False + ) temp_arr = #() i_node_arr = #() new_i_node_arr = #() new_temp_arr = #() - for c in current_selection do + for c in current_sel do ( node_ref = NodeTransformMonitor node:c as string handle_name = node_to_name c diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py index 2dfa1520a9..a359e61921 100644 --- a/openpype/hosts/max/plugins/publish/collect_render.py +++ b/openpype/hosts/max/plugins/publish/collect_render.py @@ -34,6 +34,12 @@ class CollectRender(pyblish.api.InstancePlugin): files_by_aov.update(aovs) camera = rt.viewport.GetCamera() + if instance.data.get("members"): + camera_list = [member for member in instance.data["members"] + if rt.ClassOf(member) == rt.Camera.Classes] + if camera_list: + camera = camera_list[-1] + instance.data["cameras"] = [camera.name] if camera else None # noqa if "expectedFiles" not in instance.data: @@ -63,6 +69,17 @@ class CollectRender(pyblish.api.InstancePlugin): instance.data["colorspaceConfig"] = "" instance.data["colorspaceDisplay"] = "sRGB" instance.data["colorspaceView"] = "ACES 1.0 SDR-video" + + if int(get_max_version()) >= 2024: + colorspace_mgr = rt.ColorPipelineMgr # noqa + display = next( + (display for display in colorspace_mgr.GetDisplayList())) + view_transform = next( + (view for view in colorspace_mgr.GetViewList(display))) + instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath + instance.data["colorspaceDisplay"] = display + instance.data["colorspaceView"] = view_transform + instance.data["renderProducts"] = colorspace.ARenderProduct() instance.data["publishJobState"] = "Suspended" instance.data["attachTo"] = [] diff --git a/openpype/hosts/max/plugins/publish/collect_review.py b/openpype/hosts/max/plugins/publish/collect_review.py index 7aeb45f46b..8e27a857d7 100644 --- a/openpype/hosts/max/plugins/publish/collect_review.py +++ b/openpype/hosts/max/plugins/publish/collect_review.py @@ -4,6 +4,7 @@ import pyblish.api from pymxs import runtime as rt from openpype.lib import BoolDef +from openpype.hosts.max.api.lib import get_max_version from openpype.pipeline.publish import OpenPypePyblishPluginMixin @@ -43,6 +44,17 @@ class CollectReview(pyblish.api.InstancePlugin, "dspSafeFrame": attr_values.get("dspSafeFrame"), "dspFrameNums": attr_values.get("dspFrameNums") } + + if int(get_max_version()) >= 2024: + 
colorspace_mgr = rt.ColorPipelineMgr # noqa + display = next( + (display for display in colorspace_mgr.GetDisplayList())) + view_transform = next( + (view for view in colorspace_mgr.GetViewList(display))) + instance.data["colorspaceConfig"] = colorspace_mgr.OCIOConfigPath + instance.data["colorspaceDisplay"] = display + instance.data["colorspaceView"] = view_transform + # Enable ftrack functionality instance.data.setdefault("families", []).append('ftrack') @@ -54,7 +66,6 @@ class CollectReview(pyblish.api.InstancePlugin, @classmethod def get_attribute_defs(cls): - return [ BoolDef("dspGeometry", label="Geometry", diff --git a/openpype/hosts/maya/api/fbx.py b/openpype/hosts/maya/api/fbx.py index 260241f5fc..dbb3578f08 100644 --- a/openpype/hosts/maya/api/fbx.py +++ b/openpype/hosts/maya/api/fbx.py @@ -6,6 +6,7 @@ from pyblish.api import Instance from maya import cmds # noqa import maya.mel as mel # noqa +from openpype.hosts.maya.api.lib import maintained_selection class FBXExtractor: @@ -53,7 +54,6 @@ class FBXExtractor: "bakeComplexEnd": int, "bakeComplexStep": int, "bakeResampleAnimation": bool, - "animationOnly": bool, "useSceneName": bool, "quaternion": str, # "euler" "shapes": bool, @@ -63,7 +63,10 @@ class FBXExtractor: "embeddedTextures": bool, "inputConnections": bool, "upAxis": str, # x, y or z, - "triangulate": bool + "triangulate": bool, + "fileVersion": str, + "skeletonDefinitions": bool, + "referencedAssetsContent": bool } @property @@ -94,7 +97,6 @@ class FBXExtractor: "bakeComplexEnd": end_frame, "bakeComplexStep": 1, "bakeResampleAnimation": True, - "animationOnly": False, "useSceneName": False, "quaternion": "euler", "shapes": True, @@ -104,7 +106,10 @@ class FBXExtractor: "embeddedTextures": False, "inputConnections": True, "upAxis": "y", - "triangulate": False + "triangulate": False, + "fileVersion": "FBX202000", + "skeletonDefinitions": False, + "referencedAssetsContent": False } def __init__(self, log=None): @@ -198,5 +203,9 @@ class FBXExtractor: path (str): Path to use for export. """ - cmds.select(members, r=True, noExpand=True) - mel.eval('FBXExport -f "{}" -s'.format(path)) + # The export requires forward slashes because we need + # to format it into a string in a mel expression + path = path.replace("\\", "/") + with maintained_selection(): + cmds.select(members, r=True, noExpand=True) + mel.eval('FBXExport -f "{}" -s'.format(path)) diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 40b3419e73..510d4ecc85 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -183,6 +183,51 @@ def maintained_selection(): cmds.select(clear=True) +def get_namespace(node): + """Return namespace of given node""" + node_name = node.rsplit("|", 1)[-1] + if ":" in node_name: + return node_name.rsplit(":", 1)[0] + else: + return "" + + +def strip_namespace(node, namespace): + """Strip given namespace from node path. + + The namespace will only be stripped from names + if it starts with that namespace. If the namespace + occurs within another namespace it's not removed. + + Examples: + >>> strip_namespace("namespace:node", namespace="namespace:") + "node" + >>> strip_namespace("hello:world:node", namespace="hello:world") + "node" + >>> strip_namespace("hello:world:node", namespace="hello") + "world:node" + >>> strip_namespace("hello:world:node", namespace="world") + "hello:world:node" + >>> strip_namespace("ns:group|ns:node", namespace="ns") + "group|node" + + Returns: + str: Node name without given starting namespace. 
+ + """ + + # Ensure namespace ends with `:` + if not namespace.endswith(":"): + namespace = "{}:".format(namespace) + + # The long path for a node can also have the namespace + # in its parents so we need to remove it from each + return "|".join( + name[len(namespace):] if name.startswith(namespace) else name + for name in node.split("|") + ) + + def get_custom_namespace(custom_namespace): """Return unique namespace. @@ -922,7 +967,7 @@ def no_display_layers(nodes): @contextlib.contextmanager -def namespaced(namespace, new=True): +def namespaced(namespace, new=True, relative_names=None): """Work inside namespace during context Args: @@ -934,15 +979,19 @@ def namespaced(namespace, new=True): """ original = cmds.namespaceInfo(cur=True, absoluteName=True) + original_relative_names = cmds.namespace(query=True, relativeNames=True) if new: namespace = unique_namespace(namespace) cmds.namespace(add=namespace) - + if relative_names is not None: + cmds.namespace(relativeNames=relative_names) try: cmds.namespace(set=namespace) yield namespace finally: cmds.namespace(set=original) + if relative_names is not None: + cmds.namespace(relativeNames=original_relative_names) @contextlib.contextmanager @@ -2571,7 +2620,7 @@ def bake_to_world_space(nodes, new_name = "{0}_baked".format(short_name) new_node = cmds.duplicate(node, name=new_name, - renameChildren=True)[0] + renameChildren=True)[0] # noqa # Connect all attributes on the node except for transform # attributes @@ -4100,14 +4149,19 @@ def create_rig_animation_instance( """ if options is None: options = {} - + name = context["representation"]["name"] output = next((node for node in nodes if node.endswith("out_SET")), None) controls = next((node for node in nodes if node.endswith("controls_SET")), None) + if name != "fbx": + assert output, "No out_SET in rig, this is a bug." + assert controls, "No controls_SET in rig, this is a bug." - assert output, "No out_SET in rig, this is a bug." - assert controls, "No controls_SET in rig, this is a bug." 
+ anim_skeleton = next((node for node in nodes if + node.endswith("skeletonAnim_SET")), None) + skeleton_mesh = next((node for node in nodes if + node.endswith("skeletonMesh_SET")), None) # Find the roots amongst the loaded nodes roots = ( @@ -4119,9 +4173,7 @@ def create_rig_animation_instance( custom_subset = options.get("animationSubsetName") if custom_subset: formatting_data = { - # TODO remove 'asset_type' and replace 'asset_name' with 'asset' - "asset_name": context['asset']['name'], - "asset_type": context['asset']['type'], + "asset": context["asset"], "subset": context['subset']['name'], "family": ( context['subset']['data'].get('family') or @@ -4142,10 +4194,12 @@ def create_rig_animation_instance( host = registered_host() create_context = CreateContext(host) - # Create the animation instance + rig_sets = [output, controls, anim_skeleton, skeleton_mesh] + # Remove sets that this particular rig does not have + rig_sets = [s for s in rig_sets if s is not None] with maintained_selection(): - cmds.select([output, controls] + roots, noExpand=True) + cmds.select(rig_sets + roots, noExpand=True) create_context.create( creator_identifier=creator_identifier, variant=namespace, diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py index 715f54686c..18a4ea0e9a 100644 --- a/openpype/hosts/maya/api/menu.py +++ b/openpype/hosts/maya/api/menu.py @@ -1,14 +1,13 @@ import os import logging +from functools import partial from qtpy import QtWidgets, QtGui import maya.utils import maya.cmds as cmds -from openpype.settings import get_project_settings from openpype.pipeline import ( - get_current_project_name, get_current_asset_name, get_current_task_name ) @@ -46,12 +45,12 @@ def get_context_label(): ) -def install(): +def install(project_settings): if cmds.about(batch=True): log.info("Skipping openpype.menu initialization in batch mode..") return - def deferred(): + def add_menu(): pyblish_icon = host_tools.get_pyblish_icon() parent_widget = get_main_window() cmds.menu( @@ -191,7 +190,7 @@ def install(): cmds.setParent(MENU_NAME, menu=True) - def add_scripts_menu(): + def add_scripts_menu(project_settings): try: import scriptsmenu.launchformaya as launchformaya except ImportError: @@ -201,9 +200,6 @@ def install(): ) return - # load configuration of custom menu - project_name = get_current_project_name() - project_settings = get_project_settings(project_name) config = project_settings["maya"]["scriptsmenu"]["definition"] _menu = project_settings["maya"]["scriptsmenu"]["name"] @@ -225,8 +221,9 @@ def install(): # so that it only gets called after Maya UI has initialized too. 
# This is crucial with Maya 2020+ which initializes without UI # first as a QCoreApplication - maya.utils.executeDeferred(deferred) - cmds.evalDeferred(add_scripts_menu, lowestPriority=True) + maya.utils.executeDeferred(add_menu) + cmds.evalDeferred(partial(add_scripts_menu, project_settings), + lowestPriority=True) def uninstall(): diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index 3647ec0b6b..38d7ae08c1 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -28,8 +28,6 @@ from openpype.lib import ( from openpype.pipeline import ( legacy_io, get_current_project_name, - get_current_asset_name, - get_current_task_name, register_loader_plugin_path, register_inventory_action_path, register_creator_plugin_path, @@ -108,7 +106,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): _set_project() self._register_callbacks() - menu.install() + menu.install(project_settings) register_event_callback("save", on_save) register_event_callback("open", on_open) diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 79fcf9bc8b..3b54954c8a 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -151,6 +151,7 @@ class MayaCreatorBase(object): # We never store the instance_node as value on the node since # it's the node name itself data.pop("instance_node", None) + data.pop("instance_id", None) # Don't store `families` since it's up to the creator itself # to define the initial publish families - not a stored attribute of @@ -227,6 +228,7 @@ class MayaCreatorBase(object): # Explicitly re-parse the node name node_data["instance_node"] = node + node_data["instance_id"] = node # If the creator plug-in specifies families = self.get_publish_families() @@ -601,6 +603,13 @@ class RenderlayerCreator(NewCreator, MayaCreatorBase): class Loader(LoaderPlugin): hosts = ["maya"] + load_settings = {} # defined in settings + + @classmethod + def apply_settings(cls, project_settings, system_settings): + super(Loader, cls).apply_settings(project_settings, system_settings) + cls.load_settings = project_settings['maya']['load'] + def get_custom_namespace_and_group(self, context, options, loader_key): """Queries Settings to get custom template for namespace and group. 
@@ -613,12 +622,9 @@ class Loader(LoaderPlugin): loader_key (str): key to get separate configuration from Settings ('reference_loader'|'import_loader') """ - options["attach_to_root"] = True - asset = context['asset'] - subset = context['subset'] - settings = get_project_settings(context['project']['name']) - custom_naming = settings['maya']['load'][loader_key] + options["attach_to_root"] = True + custom_naming = self.load_settings[loader_key] if not custom_naming['namespace']: raise LoadError("No namespace specified in " @@ -627,6 +633,8 @@ class Loader(LoaderPlugin): self.log.debug("No custom group_name, no group will be created.") options["attach_to_root"] = False + asset = context['asset'] + subset = context['subset'] formatting_data = { "asset_name": asset['name'], "asset_type": asset['type'], diff --git a/openpype/hosts/maya/plugins/create/create_matchmove.py b/openpype/hosts/maya/plugins/create/create_matchmove.py new file mode 100644 index 0000000000..e64eb6a471 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_matchmove.py @@ -0,0 +1,32 @@ +from openpype.hosts.maya.api import ( + lib, + plugin +) +from openpype.lib import BoolDef + + +class CreateMatchmove(plugin.MayaCreator): + """Instance for more complex setup of cameras. + + Might contain multiple cameras, geometries etc. + + It is expected to be extracted into .abc or .ma + """ + + identifier = "io.openpype.creators.maya.matchmove" + label = "Matchmove" + family = "matchmove" + icon = "video-camera" + + def get_instance_attr_defs(self): + + defs = lib.collect_animation_defs() + + defs.extend([ + BoolDef("bakeToWorldSpace", + label="Bake Cameras to World-Space", + tooltip="Bake Cameras to World-Space", + default=True), + ]) + + return defs diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py index 345ab6c00d..acd5c98f89 100644 --- a/openpype/hosts/maya/plugins/create/create_rig.py +++ b/openpype/hosts/maya/plugins/create/create_rig.py @@ -20,6 +20,13 @@ class CreateRig(plugin.MayaCreator): instance_node = instance.get("instance_node") self.log.info("Creating Rig instance set up ...") + # TODO:change name (_controls_SET -> _rigs_SET) controls = cmds.sets(name=subset_name + "_controls_SET", empty=True) + # TODO:change name (_out_SET -> _geo_SET) pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True) - cmds.sets([controls, pointcache], forceElement=instance_node) + skeleton = cmds.sets( + name=subset_name + "_skeletonAnim_SET", empty=True) + skeleton_mesh = cmds.sets( + name=subset_name + "_skeletonMesh_SET", empty=True) + cmds.sets([controls, pointcache, + skeleton, skeleton_mesh], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py b/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py new file mode 100644 index 0000000000..c9f9cd9ba8 --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_unreal_yeticache.py @@ -0,0 +1,39 @@ +from openpype.hosts.maya.api import ( + lib, + plugin +) +from openpype.lib import NumberDef + + +class CreateYetiCache(plugin.MayaCreator): + """Output for procedural plugin nodes of Yeti """ + + identifier = "io.openpype.creators.maya.unrealyeticache" + label = "Unreal - Yeti Cache" + family = "yeticacheUE" + icon = "pagelines" + + def get_instance_attr_defs(self): + + defs = [ + NumberDef("preroll", + label="Preroll", + minimum=0, + default=0, + decimals=0) + ] + + # Add animation data without step and handles + defs.extend(lib.collect_animation_defs()) + 
remove = {"step", "handleStart", "handleEnd"} + defs = [attr_def for attr_def in defs if attr_def.key not in remove] + + # Add samples after frame range + defs.append( + NumberDef("samples", + label="Samples", + default=3, + decimals=0) + ) + + return defs diff --git a/openpype/hosts/maya/plugins/load/_load_animation.py b/openpype/hosts/maya/plugins/load/_load_animation.py index 981b9ef434..0781735bc4 100644 --- a/openpype/hosts/maya/plugins/load/_load_animation.py +++ b/openpype/hosts/maya/plugins/load/_load_animation.py @@ -1,4 +1,46 @@ import openpype.hosts.maya.api.plugin +import maya.cmds as cmds + + +def _process_reference(file_url, name, namespace, options): + """Load files by referencing scene in Maya. + + Args: + file_url (str): fileapth of the objects to be loaded + name (str): subset name + namespace (str): namespace + options (dict): dict of storing the param + + Returns: + list: list of object nodes + """ + from openpype.hosts.maya.api.lib import unique_namespace + # Get name from asset being loaded + # Assuming name is subset name from the animation, we split the number + # suffix from the name to ensure the namespace is unique + name = name.split("_")[0] + ext = file_url.split(".")[-1] + namespace = unique_namespace( + "{}_".format(name), + format="%03d", + suffix="_{}".format(ext) + ) + + attach_to_root = options.get("attach_to_root", True) + group_name = options["group_name"] + + # no group shall be created + if not attach_to_root: + group_name = namespace + + nodes = cmds.file(file_url, + namespace=namespace, + sharedReferenceFile=False, + groupReference=attach_to_root, + groupName=group_name, + reference=True, + returnNewNodes=True) + return nodes class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): @@ -16,44 +58,42 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): def process_reference(self, context, name, namespace, options): - import maya.cmds as cmds - from openpype.hosts.maya.api.lib import unique_namespace - cmds.loadPlugin("AbcImport.mll", quiet=True) - # Prevent identical alembic nodes from being shared - # Create unique namespace for the cameras - - # Get name from asset being loaded - # Assuming name is subset name from the animation, we split the number - # suffix from the name to ensure the namespace is unique - name = name.split("_")[0] - namespace = unique_namespace( - "{}_".format(name), - format="%03d", - suffix="_abc" - ) - - attach_to_root = options.get("attach_to_root", True) - group_name = options["group_name"] - - # no group shall be created - if not attach_to_root: - group_name = namespace - # hero_001 (abc) # asset_counter{optional} path = self.filepath_from_context(context) file_url = self.prepare_root_value(path, context["project"]["name"]) - nodes = cmds.file(file_url, - namespace=namespace, - sharedReferenceFile=False, - groupReference=attach_to_root, - groupName=group_name, - reference=True, - returnNewNodes=True) + nodes = _process_reference(file_url, name, namespace, options) # load colorbleed ID attribute self[:] = nodes return nodes + + +class FbxLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): + """Loader to reference an Fbx files""" + + families = ["animation", + "camera"] + representations = ["fbx"] + + label = "Reference animation" + order = -10 + icon = "code-fork" + color = "orange" + + def process_reference(self, context, name, namespace, options): + + cmds.loadPlugin("fbx4maya.mll", quiet=True) + + path = self.filepath_from_context(context) + file_url = self.prepare_root_value(path, + 
context["project"]["name"]) + + nodes = _process_reference(file_url, name, namespace, options) + + self[:] = nodes + + return nodes diff --git a/openpype/hosts/maya/plugins/load/load_audio.py b/openpype/hosts/maya/plugins/load/load_audio.py index 265b15f4ae..90cadb31b1 100644 --- a/openpype/hosts/maya/plugins/load/load_audio.py +++ b/openpype/hosts/maya/plugins/load/load_audio.py @@ -1,12 +1,6 @@ from maya import cmds, mel -from openpype.client import ( - get_asset_by_id, - get_subset_by_id, - get_version_by_id, -) from openpype.pipeline import ( - get_current_project_name, load, get_representation_path, ) @@ -18,7 +12,7 @@ class AudioLoader(load.LoaderPlugin): """Specific loader of audio.""" families = ["audio"] - label = "Import audio" + label = "Load audio" representations = ["wav"] icon = "volume-up" color = "orange" @@ -27,10 +21,10 @@ class AudioLoader(load.LoaderPlugin): start_frame = cmds.playbackOptions(query=True, min=True) sound_node = cmds.sound( - file=context["representation"]["data"]["path"], offset=start_frame + file=self.filepath_from_context(context), offset=start_frame ) cmds.timeControl( - mel.eval("$tmpVar=$gPlayBackSlider"), + mel.eval("$gPlayBackSlider=$gPlayBackSlider"), edit=True, sound=sound_node, displaySound=True @@ -59,32 +53,50 @@ class AudioLoader(load.LoaderPlugin): assert audio_nodes is not None, "Audio node not found." audio_node = audio_nodes[0] + current_sound = cmds.timeControl( + mel.eval("$gPlayBackSlider=$gPlayBackSlider"), + query=True, + sound=True + ) + activate_sound = current_sound == audio_node + path = get_representation_path(representation) - cmds.setAttr("{}.filename".format(audio_node), path, type="string") + + cmds.sound( + audio_node, + edit=True, + file=path + ) + + # The source start + end does not automatically update itself to the + # length of thew new audio file, even though maya does do that when + # creating a new audio node. So to update we compute it manually. + # This would however override any source start and source end a user + # might have done on the original audio node after load. + audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node)) + audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node)) + duration_in_seconds = audio_frame_count / audio_sample_rate + fps = mel.eval('currentTimeUnitToFPS()') # workfile FPS + source_start = 0 + source_end = (duration_in_seconds * fps) + cmds.setAttr("{}.sourceStart".format(audio_node), source_start) + cmds.setAttr("{}.sourceEnd".format(audio_node), source_end) + + if activate_sound: + # maya by default deactivates it from timeline on file change + cmds.timeControl( + mel.eval("$gPlayBackSlider=$gPlayBackSlider"), + edit=True, + sound=audio_node, + displaySound=True + ) + cmds.setAttr( container["objectName"] + ".representation", str(representation["_id"]), type="string" ) - # Set frame range. 
- project_name = get_current_project_name() - version = get_version_by_id( - project_name, representation["parent"], fields=["parent"] - ) - subset = get_subset_by_id( - project_name, version["parent"], fields=["parent"] - ) - asset = get_asset_by_id( - project_name, subset["parent"], fields=["parent"] - ) - - source_start = 1 - asset["data"]["frameStart"] - source_end = asset["data"]["frameEnd"] - - cmds.setAttr("{}.sourceStart".format(audio_node), source_start) - cmds.setAttr("{}.sourceEnd".format(audio_node), source_end) - def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index 61f337f501..4b704fa706 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -101,7 +101,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): "camerarig", "staticMesh", "skeletalMesh", - "mvLook"] + "mvLook", + "matchmove"] representations = ["ma", "abc", "fbx", "mb"] diff --git a/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py b/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py new file mode 100644 index 0000000000..aef8765e9c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_fbx_animation.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +from maya import cmds # noqa +import pyblish.api +from openpype.pipeline import OptionalPyblishPluginMixin + + +class CollectFbxAnimation(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Collect Animated Rig Data for FBX Extractor.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Fbx Animation" + hosts = ["maya"] + families = ["animation"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + skeleton_sets = [ + i for i in instance + if i.endswith("skeletonAnim_SET") + ] + if not skeleton_sets: + return + + instance.data["families"].append("animation.fbx") + instance.data["animated_skeleton"] = [] + for skeleton_set in skeleton_sets: + skeleton_content = cmds.sets(skeleton_set, query=True) + self.log.debug( + "Collected animated skeleton data: {}".format( + skeleton_content + )) + if skeleton_content: + instance.data["animated_skeleton"] = skeleton_content diff --git a/openpype/hosts/maya/plugins/publish/collect_rig_sets.py b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py index 36a4211af1..34ff26a8b8 100644 --- a/openpype/hosts/maya/plugins/publish/collect_rig_sets.py +++ b/openpype/hosts/maya/plugins/publish/collect_rig_sets.py @@ -22,7 +22,8 @@ class CollectRigSets(pyblish.api.InstancePlugin): def process(self, instance): # Find required sets by suffix - searching = {"controls_SET", "out_SET"} + searching = {"controls_SET", "out_SET", + "skeletonAnim_SET", "skeletonMesh_SET"} found = {} for node in cmds.ls(instance, exactType="objectSet"): for suffix in searching: diff --git a/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py b/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py new file mode 100644 index 0000000000..31f0eca88c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_skeleton_mesh.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +from maya import cmds # noqa +import pyblish.api + + +class CollectSkeletonMesh(pyblish.api.InstancePlugin): + """Collect Static Rig Data for FBX Extractor.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Skeleton Mesh" + hosts = 
["maya"] + families = ["rig"] + + def process(self, instance): + skeleton_mesh_set = instance.data["rig_sets"].get( + "skeletonMesh_SET") + if not skeleton_mesh_set: + self.log.debug( + "No skeletonMesh_SET found. " + "Skipping collecting of skeleton mesh..." + ) + return + + # Store current frame to ensure single frame export + frame = cmds.currentTime(query=True) + instance.data["frameStart"] = frame + instance.data["frameEnd"] = frame + + instance.data["skeleton_mesh"] = [] + + skeleton_mesh_content = cmds.sets( + skeleton_mesh_set, query=True) or [] + if not skeleton_mesh_content: + self.log.debug( + "No object nodes in skeletonMesh_SET. " + "Skipping collecting of skeleton mesh..." + ) + return + instance.data["families"] += ["rig.fbx"] + instance.data["skeleton_mesh"] = skeleton_mesh_content + self.log.debug( + "Collected skeletonMesh_SET members: {}".format( + skeleton_mesh_content + )) diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py index 4dcda29050..7a1516997a 100644 --- a/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py +++ b/openpype/hosts/maya/plugins/publish/collect_yeti_cache.py @@ -39,7 +39,7 @@ class CollectYetiCache(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.45 label = "Collect Yeti Cache" - families = ["yetiRig", "yeticache"] + families = ["yetiRig", "yeticache", "yeticacheUE"] hosts = ["maya"] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py index 4ec1399df4..43803743bc 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py @@ -6,17 +6,21 @@ from openpype.pipeline import publish from openpype.hosts.maya.api import lib -class ExtractCameraAlembic(publish.Extractor): +class ExtractCameraAlembic(publish.Extractor, + publish.OptionalPyblishPluginMixin): """Extract a Camera as Alembic. - The cameras gets baked to world space by default. Only when the instance's + The camera gets baked to world space by default. Only when the instance's `bakeToWorldSpace` is set to False it will include its full hierarchy. + 'camera' family expects only single camera, if multiple cameras are needed, + 'matchmove' is better choice. 
+ """ - label = "Camera (Alembic)" + label = "Extract Camera (Alembic)" hosts = ["maya"] - families = ["camera"] + families = ["camera", "matchmove"] bake_attributes = [] def process(self, instance): @@ -35,10 +39,11 @@ class ExtractCameraAlembic(publish.Extractor): # validate required settings assert isinstance(step, float), "Step must be a float value" - camera = cameras[0] # Define extract output file path dir_path = self.staging_dir(instance) + if not os.path.exists(dir_path): + os.makedirs(dir_path) filename = "{0}.abc".format(instance.name) path = os.path.join(dir_path, filename) @@ -64,9 +69,10 @@ class ExtractCameraAlembic(publish.Extractor): # if baked, drop the camera hierarchy to maintain # clean output and backwards compatibility - camera_root = cmds.listRelatives( - camera, parent=True, fullPath=True)[0] - job_str += ' -root {0}'.format(camera_root) + camera_roots = cmds.listRelatives( + cameras, parent=True, fullPath=True) + for camera_root in camera_roots: + job_str += ' -root {0}'.format(camera_root) for member in members: descendants = cmds.listRelatives(member, diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py index a50a8f0dfa..38cf00bbdd 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py @@ -2,11 +2,15 @@ """Extract camera as Maya Scene.""" import os import itertools +import contextlib from maya import cmds from openpype.pipeline import publish from openpype.hosts.maya.api import lib +from openpype.lib import ( + BoolDef +) def massage_ma_file(path): @@ -78,7 +82,8 @@ def unlock(plug): cmds.disconnectAttr(source, destination) -class ExtractCameraMayaScene(publish.Extractor): +class ExtractCameraMayaScene(publish.Extractor, + publish.OptionalPyblishPluginMixin): """Extract a Camera as Maya Scene. This will create a duplicate of the camera that will be baked *with* @@ -88,17 +93,22 @@ class ExtractCameraMayaScene(publish.Extractor): The cameras gets baked to world space by default. Only when the instance's `bakeToWorldSpace` is set to False it will include its full hierarchy. + 'camera' family expects only single camera, if multiple cameras are needed, + 'matchmove' is better choice. + Note: The extracted Maya ascii file gets "massaged" removing the uuid values so they are valid for older versions of Fusion (e.g. 
6.4) """ - label = "Camera (Maya Scene)" + label = "Extract Camera (Maya Scene)" hosts = ["maya"] - families = ["camera"] + families = ["camera", "matchmove"] scene_type = "ma" + keep_image_planes = True + def process(self, instance): """Plugin entry point.""" # get settings @@ -131,15 +141,15 @@ class ExtractCameraMayaScene(publish.Extractor): "bake to world space is ignored...") # get cameras - members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True, - long=True, dag=True) - cameras = cmds.ls(members, leaf=True, shapes=True, long=True, - dag=True, type="camera") + members = set(cmds.ls(instance.data['setMembers'], leaf=True, + shapes=True, long=True, dag=True)) + cameras = set(cmds.ls(members, leaf=True, shapes=True, long=True, + dag=True, type="camera")) # validate required settings assert isinstance(step, float), "Step must be a float value" - camera = cameras[0] - transform = cmds.listRelatives(camera, parent=True, fullPath=True) + transforms = cmds.listRelatives(list(cameras), + parent=True, fullPath=True) # Define extract output file path dir_path = self.staging_dir(instance) @@ -151,23 +161,21 @@ class ExtractCameraMayaScene(publish.Extractor): with lib.evaluation("off"): with lib.suspended_refresh(): if bake_to_worldspace: - self.log.debug( - "Performing camera bakes: {}".format(transform)) baked = lib.bake_to_world_space( - transform, + transforms, frame_range=[start, end], step=step ) - baked_camera_shapes = cmds.ls(baked, - type="camera", - dag=True, - shapes=True, - long=True) + baked_camera_shapes = set(cmds.ls(baked, + type="camera", + dag=True, + shapes=True, + long=True)) - members = members + baked_camera_shapes - members.remove(camera) + members.update(baked_camera_shapes) + members.difference_update(cameras) else: - baked_camera_shapes = cmds.ls(cameras, + baked_camera_shapes = cmds.ls(list(cameras), type="camera", dag=True, shapes=True, @@ -186,19 +194,28 @@ class ExtractCameraMayaScene(publish.Extractor): unlock(plug) cmds.setAttr(plug, value) - self.log.debug("Performing extraction..") - cmds.select(cmds.ls(members, dag=True, - shapes=True, long=True), noExpand=True) - cmds.file(path, - force=True, - typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 - exportSelected=True, - preserveReferences=False, - constructionHistory=False, - channels=True, # allow animation - constraints=False, - shader=False, - expressions=False) + attr_values = self.get_attr_values_from_data( + instance.data) + keep_image_planes = attr_values.get("keep_image_planes") + + with transfer_image_planes(sorted(cameras), + sorted(baked_camera_shapes), + keep_image_planes): + + self.log.info("Performing extraction..") + cmds.select(cmds.ls(list(members), dag=True, + shapes=True, long=True), + noExpand=True) + cmds.file(path, + force=True, + typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 + exportSelected=True, + preserveReferences=False, + constructionHistory=False, + channels=True, # allow animation + constraints=False, + shader=False, + expressions=False) # Delete the baked hierarchy if bake_to_worldspace: @@ -219,3 +236,62 @@ class ExtractCameraMayaScene(publish.Extractor): self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, path)) + + @classmethod + def get_attribute_defs(cls): + defs = super(ExtractCameraMayaScene, cls).get_attribute_defs() + + defs.extend([ + BoolDef("keep_image_planes", + label="Keep Image Planes", + tooltip="Preserving connected image planes on camera", + default=cls.keep_image_planes), + + 
]) + + return defs + + +@contextlib.contextmanager +def transfer_image_planes(source_cameras, target_cameras, + keep_input_connections): + """Reattaches image planes to baked or original cameras. + + Baked cameras are duplicates of original ones. + This attaches it to duplicated camera properly and after + export it reattaches it back to original to keep image plane in workfile. + """ + originals = {} + try: + for source_camera, target_camera in zip(source_cameras, + target_cameras): + image_planes = cmds.listConnections(source_camera, + type="imagePlane") or [] + + # Split of the parent path they are attached - we want + # the image plane node name. + # TODO: Does this still mean the image plane name is unique? + image_planes = [x.split("->", 1)[1] for x in image_planes] + + if not image_planes: + continue + + originals[source_camera] = [] + for image_plane in image_planes: + if keep_input_connections: + if source_camera == target_camera: + continue + _attach_image_plane(target_camera, image_plane) + else: # explicitly dettaching image planes + cmds.imagePlane(image_plane, edit=True, detach=True) + originals[source_camera].append(image_plane) + yield + finally: + for camera, image_planes in originals.items(): + for image_plane in image_planes: + _attach_image_plane(camera, image_plane) + + +def _attach_image_plane(camera, image_plane): + cmds.imagePlane(image_plane, edit=True, detach=True) + cmds.imagePlane(image_plane, edit=True, camera=camera) diff --git a/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py b/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py new file mode 100644 index 0000000000..8288bc9329 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_fbx_animation.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- +import os + +from maya import cmds # noqa +import pyblish.api + +from openpype.pipeline import publish +from openpype.hosts.maya.api import fbx +from openpype.hosts.maya.api.lib import ( + namespaced, get_namespace, strip_namespace +) + + +class ExtractFBXAnimation(publish.Extractor): + """Extract Rig in FBX format from Maya. + + This extracts the rig in fbx with the constraints + and referenced asset content included. + This also optionally extract animated rig in fbx with + geometries included. 
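The transfer_image_planes context manager above follows a plain detach-then-restore pattern. A Maya-free sketch of the same idea, with detach and attach callables standing in for the cmds.imagePlane edits; reattached is a hypothetical name, not part of the plugin.

import contextlib

@contextlib.contextmanager
def reattached(items, detach, attach):
    # whatever was detached for the export is reattached afterwards,
    # even if the body raises
    moved = []
    try:
        for item in items:
            detach(item)
            moved.append(item)
        yield moved
    finally:
        for item in moved:
            attach(item)

log = []
with reattached(["imagePlane1"], log.append, lambda x: log.append("re:" + x)):
    pass
assert log == ["imagePlane1", "re:imagePlane1"]
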
+ + """ + order = pyblish.api.ExtractorOrder + label = "Extract Animation (FBX)" + hosts = ["maya"] + families = ["animation.fbx"] + + def process(self, instance): + # Define output path + staging_dir = self.staging_dir(instance) + filename = "{0}.fbx".format(instance.name) + path = os.path.join(staging_dir, filename) + path = path.replace("\\", "/") + + fbx_exporter = fbx.FBXExtractor(log=self.log) + out_members = instance.data.get("animated_skeleton", []) + # Export + instance.data["constraints"] = True + instance.data["skeletonDefinitions"] = True + instance.data["referencedAssetsContent"] = True + fbx_exporter.set_options_from_instance(instance) + # Export from the rig's namespace so that the exported + # FBX does not include the namespace but preserves the node + # names as existing in the rig workfile + namespace = get_namespace(out_members[0]) + relative_out_members = [ + strip_namespace(node, namespace) for node in out_members + ] + with namespaced( + ":" + namespace, + new=False, + relative_names=True + ) as namespace: + fbx_exporter.export(relative_out_members, path) + + representations = instance.data.setdefault("representations", []) + representations.append({ + 'name': 'fbx', + 'ext': 'fbx', + 'files': filename, + "stagingDir": staging_dir + }) + + self.log.debug( + "Extracted FBX animation to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py b/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py new file mode 100644 index 0000000000..50c1fb3bde --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_skeleton_mesh.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +import os + +from maya import cmds # noqa +import pyblish.api + +from openpype.pipeline import publish +from openpype.pipeline.publish import OptionalPyblishPluginMixin +from openpype.hosts.maya.api import fbx + + +class ExtractSkeletonMesh(publish.Extractor, + OptionalPyblishPluginMixin): + """Extract Rig in FBX format from Maya. + + This extracts the rig in fbx with the constraints + and referenced asset content included. + This also optionally extract animated rig in fbx with + geometries included. + + """ + order = pyblish.api.ExtractorOrder + label = "Extract Skeleton Mesh" + hosts = ["maya"] + families = ["rig.fbx"] + + def process(self, instance): + if not self.is_active(instance.data): + return + # Define output path + staging_dir = self.staging_dir(instance) + filename = "{0}.fbx".format(instance.name) + path = os.path.join(staging_dir, filename) + + fbx_exporter = fbx.FBXExtractor(log=self.log) + out_set = instance.data.get("skeleton_mesh", []) + + instance.data["constraints"] = True + instance.data["skeletonDefinitions"] = True + + fbx_exporter.set_options_from_instance(instance) + + # Export + fbx_exporter.export(out_set, path) + + representations = instance.data.setdefault("representations", []) + representations.append({ + 'name': 'fbx', + 'ext': 'fbx', + 'files': filename, + "stagingDir": staging_dir + }) + + self.log.debug("Extract FBX to: {0}".format(path)) diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py b/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py new file mode 100644 index 0000000000..963285093e --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_yeticache.py @@ -0,0 +1,61 @@ +import os + +from maya import cmds + +from openpype.pipeline import publish + + +class ExtractYetiCache(publish.Extractor): + """Producing Yeti cache files using scene time range. 
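Before issuing the cache command, the extractor below widens the start frame by any preroll and chooses between explicit sample times and a sample count. A standalone restatement of that decision, assuming a hypothetical yeti_cache_args helper; no Maya or Yeti is required to follow the logic.

def yeti_cache_args(frame_start_handle, frame_end_handle, preroll, samples):
    start_frame = frame_start_handle
    if preroll > 0:
        # preroll shifts the cache start earlier so simulation can settle
        start_frame -= preroll

    kwargs = {}
    if samples == 0:
        # no sample count given: fall back to explicit sample times
        kwargs["sampleTimes"] = "0.0 1.0"
    else:
        kwargs["samples"] = samples
    return start_frame, frame_end_handle, kwargs

assert yeti_cache_args(1001, 1100, 10, 0) == (991, 1100, {"sampleTimes": "0.0 1.0"})
assert yeti_cache_args(1001, 1100, 0, 3) == (1001, 1100, {"samples": 3})
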
+ + This will extract Yeti cache file sequence and fur settings. + """ + + label = "Extract Yeti Cache" + hosts = ["maya"] + families = ["yeticacheUE"] + + def process(self, instance): + + yeti_nodes = cmds.ls(instance, type="pgYetiMaya") + if not yeti_nodes: + raise RuntimeError("No pgYetiMaya nodes found in the instance") + + # Define extract output file path + dirname = self.staging_dir(instance) + + # Collect information for writing cache + start_frame = instance.data["frameStartHandle"] + end_frame = instance.data["frameEndHandle"] + preroll = instance.data["preroll"] + if preroll > 0: + start_frame -= preroll + + kwargs = {} + samples = instance.data.get("samples", 0) + if samples == 0: + kwargs.update({"sampleTimes": "0.0 1.0"}) + else: + kwargs.update({"samples": samples}) + + self.log.debug(f"Writing out cache {start_frame} - {end_frame}") + filename = f"{instance.name}.abc" + path = os.path.join(dirname, filename) + cmds.pgYetiCommand(yeti_nodes, + writeAlembic=path, + range=(start_frame, end_frame), + asUnrealAbc=True, + **kwargs) + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + 'stagingDir': dirname + } + instance.data["representations"].append(representation) + + self.log.debug(f"Extracted {instance} to {dirname}") diff --git a/openpype/hosts/maya/plugins/publish/validate_animated_reference.py b/openpype/hosts/maya/plugins/publish/validate_animated_reference.py new file mode 100644 index 0000000000..4537892d6d --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_animated_reference.py @@ -0,0 +1,66 @@ +import pyblish.api +import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + PublishValidationError, + ValidateContentsOrder +) +from maya import cmds + + +class ValidateAnimatedReferenceRig(pyblish.api.InstancePlugin): + """Validate all nodes in skeletonAnim_SET are referenced""" + + order = ValidateContentsOrder + hosts = ["maya"] + families = ["animation.fbx"] + label = "Animated Reference Rig" + accepted_controllers = ["transform", "locator"] + actions = [openpype.hosts.maya.api.action.SelectInvalidAction] + + def process(self, instance): + animated_sets = instance.data.get("animated_skeleton", []) + if not animated_sets: + self.log.debug( + "No nodes found in skeletonAnim_SET. " + "Skipping validation of animated reference rig..." + ) + return + + for animated_reference in animated_sets: + is_referenced = cmds.referenceQuery( + animated_reference, isNodeReferenced=True) + if not bool(is_referenced): + raise PublishValidationError( + "All the content in skeletonAnim_SET" + " should be referenced nodes" + ) + invalid_controls = self.validate_controls(animated_sets) + if invalid_controls: + raise PublishValidationError( + "All the content in skeletonAnim_SET" + " should be transforms" + ) + + @classmethod + def validate_controls(self, set_members): + """Check if the controller set contains only accepted node types. 
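The two skeletonAnim_SET rules above (members must come from a reference, and only accepted node types are allowed) reduce to simple filters. A Maya-free restatement for illustration, where is_referenced and node_type stand in for cmds.referenceQuery and cmds.nodeType.

def find_unreferenced_or_wrong_type(nodes, is_referenced, node_type,
                                    accepted=("transform", "locator")):
    # first list: members that are not referenced at all
    # second list: members whose node type is not accepted
    not_referenced = [n for n in nodes if not is_referenced(n)]
    wrong_type = [n for n in nodes if node_type(n) not in accepted]
    return not_referenced, wrong_type

refs = {"rig:hips_JNT": True, "local_JNT": False}
types = {"rig:hips_JNT": "joint", "local_JNT": "transform"}
assert find_unreferenced_or_wrong_type(
    list(refs), refs.get, types.get) == (["local_JNT"], ["rig:hips_JNT"])
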
+ + Checks if all its set members are within the hierarchy of the root + Checks if the node types of the set members valid + + Args: + set_members: list of nodes of the skeleton_anim_set + hierarchy: list of nodes which reside under the root node + + Returns: + errors (list) + """ + + # Validate control types + invalid = [] + set_members = cmds.ls(set_members, long=True) + for node in set_members: + if cmds.nodeType(node) not in self.accepted_controllers: + invalid.append(node) + + return invalid diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py index 766124cd9e..173fee4179 100644 --- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py +++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py @@ -3,9 +3,10 @@ from maya import cmds import pyblish.api import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( - RepairAction, ValidateMeshOrder, - OptionalPyblishPluginMixin + OptionalPyblishPluginMixin, + PublishValidationError, + RepairAction ) @@ -22,8 +23,9 @@ class ValidateColorSets(pyblish.api.Validator, hosts = ['maya'] families = ['model'] label = 'Mesh ColorSets' - actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - RepairAction] + actions = [ + openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction + ] optional = True @staticmethod @@ -48,8 +50,9 @@ class ValidateColorSets(pyblish.api.Validator, invalid = self.get_invalid(instance) if invalid: - raise ValueError("Meshes found with " - "Color Sets: {0}".format(invalid)) + raise PublishValidationError( + message="Meshes found with Color Sets: {0}".format(invalid) + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py index 9f47bf7a3d..cb5c68e4ab 100644 --- a/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_plugin_path_attributes.py @@ -30,18 +30,21 @@ class ValidatePluginPathAttributes(pyblish.api.InstancePlugin): def get_invalid(cls, instance): invalid = list() - file_attr = cls.attribute - if not file_attr: + file_attrs = cls.attribute + if not file_attrs: return invalid # Consider only valid node types to avoid "Unknown object type" warning all_node_types = set(cmds.allNodeTypes()) - node_types = [key for key in file_attr.keys() if key in all_node_types] + node_types = [ + key for key in file_attrs.keys() + if key in all_node_types + ] for node, node_type in pairwise(cmds.ls(type=node_types, showType=True)): # get the filepath - file_attr = "{}.{}".format(node, file_attr[node_type]) + file_attr = "{}.{}".format(node, file_attrs[node_type]) filepath = cmds.getAttr(file_attr) if filepath and not os.path.exists(filepath): diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py index 23f031a5db..106b4024e2 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py @@ -1,6 +1,6 @@ import pyblish.api from maya import cmds - +import openpype.hosts.maya.api.action from openpype.pipeline.publish import ( PublishValidationError, ValidateContentsOrder @@ -20,33 +20,27 @@ class ValidateRigContents(pyblish.api.InstancePlugin): label = "Rig Contents" hosts = ["maya"] families = ["rig"] + action = 
[openpype.hosts.maya.api.action.SelectInvalidAction] accepted_output = ["mesh", "transform"] accepted_controllers = ["transform"] def process(self, instance): + invalid = self.get_invalid(instance) + if invalid: + raise PublishValidationError( + "Invalid rig content. See log for details.") + + @classmethod + def get_invalid(cls, instance): # Find required sets by suffix - required = ["controls_SET", "out_SET"] - missing = [ - key for key in required if key not in instance.data["rig_sets"] - ] - if missing: - raise PublishValidationError( - "%s is missing sets: %s" % (instance, ", ".join(missing)) - ) + required, rig_sets = cls.get_nodes(instance) - controls_set = instance.data["rig_sets"]["controls_SET"] - out_set = instance.data["rig_sets"]["out_SET"] + cls.validate_missing_objectsets(instance, required, rig_sets) - # Ensure there are at least some transforms or dag nodes - # in the rig instance - set_members = instance.data['setMembers'] - if not cmds.ls(set_members, type="dagNode", long=True): - raise PublishValidationError( - "No dag nodes in the pointcache instance. " - "(Empty instance?)" - ) + controls_set = rig_sets["controls_SET"] + out_set = rig_sets["out_SET"] # Ensure contents in sets and retrieve long path for all objects output_content = cmds.sets(out_set, query=True) or [] @@ -61,49 +55,92 @@ class ValidateRigContents(pyblish.api.InstancePlugin): ) controls_content = cmds.ls(controls_content, long=True) - # Validate members are inside the hierarchy from root node - root_nodes = cmds.ls(set_members, assemblies=True, long=True) - hierarchy = cmds.listRelatives(root_nodes, allDescendents=True, - fullPath=True) + root_nodes - hierarchy = set(hierarchy) - - invalid_hierarchy = [] - for node in output_content: - if node not in hierarchy: - invalid_hierarchy.append(node) - for node in controls_content: - if node not in hierarchy: - invalid_hierarchy.append(node) + rig_content = output_content + controls_content + invalid_hierarchy = cls.invalid_hierarchy(instance, rig_content) # Additional validations - invalid_geometry = self.validate_geometry(output_content) - invalid_controls = self.validate_controls(controls_content) + invalid_geometry = cls.validate_geometry(output_content) + invalid_controls = cls.validate_controls(controls_content) error = False if invalid_hierarchy: - self.log.error("Found nodes which reside outside of root group " + cls.log.error("Found nodes which reside outside of root group " "while they are set up for publishing." "\n%s" % invalid_hierarchy) error = True if invalid_controls: - self.log.error("Only transforms can be part of the controls_SET." + cls.log.error("Only transforms can be part of the controls_SET." "\n%s" % invalid_controls) error = True if invalid_geometry: - self.log.error("Only meshes can be part of the out_SET\n%s" + cls.log.error("Only meshes can be part of the out_SET\n%s" % invalid_geometry) error = True - if error: + return invalid_hierarchy + invalid_controls + invalid_geometry + + @classmethod + def validate_missing_objectsets(cls, instance, + required_objsets, rig_sets): + """Validate missing objectsets in rig sets + + Args: + instance (str): instance + required_objsets (list): list of objectset names + rig_sets (list): list of rig sets + + Raises: + PublishValidationError: When the error is raised, it will show + which instance has the missing object sets + """ + missing = [ + key for key in required_objsets if key not in rig_sets + ] + if missing: raise PublishValidationError( - "Invalid rig content. 
See log for details.") + "%s is missing sets: %s" % (instance, ", ".join(missing)) + ) - def validate_geometry(self, set_members): - """Check if the out set passes the validations + @classmethod + def invalid_hierarchy(cls, instance, content): + """ + Check if all rig set members are within the hierarchy of the rig root - Checks if all its set members are within the hierarchy of the root + Args: + instance (str): instance + content (list): list of content from rig sets + + Raises: + PublishValidationError: It means no dag nodes in + the rig instance + + Returns: + list: invalid hierarchy + """ + # Ensure there are at least some transforms or dag nodes + # in the rig instance + set_members = instance.data['setMembers'] + if not cmds.ls(set_members, type="dagNode", long=True): + raise PublishValidationError( + "No dag nodes in the rig instance. " + "(Empty instance?)" + ) + # Validate members are inside the hierarchy from root node + root_nodes = cmds.ls(set_members, assemblies=True, long=True) + hierarchy = cmds.listRelatives(root_nodes, allDescendents=True, + fullPath=True) + root_nodes + hierarchy = set(hierarchy) + invalid_hierarchy = [] + for node in content: + if node not in hierarchy: + invalid_hierarchy.append(node) + return invalid_hierarchy + + @classmethod + def validate_geometry(cls, set_members): + """ Checks if the node types of the set members valid Args: @@ -122,15 +159,13 @@ class ValidateRigContents(pyblish.api.InstancePlugin): fullPath=True) or [] all_shapes = cmds.ls(set_members + shapes, long=True, shapes=True) for shape in all_shapes: - if cmds.nodeType(shape) not in self.accepted_output: + if cmds.nodeType(shape) not in cls.accepted_output: invalid.append(shape) - return invalid - - def validate_controls(self, set_members): - """Check if the controller set passes the validations - - Checks if all its set members are within the hierarchy of the root + @classmethod + def validate_controls(cls, set_members): + """ + Checks if the control set members are allowed node types. 
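The invalid_hierarchy helper above boils down to a membership test against the set of long paths that live under the rig root. A small out-of-Maya restatement with a precomputed hierarchy set standing in for the cmds.listRelatives(allDescendents=True) result.

def find_out_of_hierarchy(content, hierarchy):
    # flag rig set members that are not parented under the rig root
    return [node for node in content if node not in hierarchy]

hierarchy = {"|rig_GRP", "|rig_GRP|geo_GRP", "|rig_GRP|geo_GRP|body_GEO"}
content = ["|rig_GRP|geo_GRP|body_GEO", "|stray_GEO"]
assert find_out_of_hierarchy(content, hierarchy) == ["|stray_GEO"]
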
Checks if the node types of the set members valid Args: @@ -144,7 +179,80 @@ class ValidateRigContents(pyblish.api.InstancePlugin): # Validate control types invalid = [] for node in set_members: - if cmds.nodeType(node) not in self.accepted_controllers: + if cmds.nodeType(node) not in cls.accepted_controllers: invalid.append(node) return invalid + + @classmethod + def get_nodes(cls, instance): + """Get the target objectsets and rig sets nodes + + Args: + instance (str): instance + + Returns: + tuple: 2-tuple of list of objectsets, + list of rig sets nodes + """ + objectsets = ["controls_SET", "out_SET"] + rig_sets_nodes = instance.data.get("rig_sets", []) + return objectsets, rig_sets_nodes + + +class ValidateSkeletonRigContents(ValidateRigContents): + """Ensure skeleton rigs contains pipeline-critical content + + The rigs optionally contain at least two object sets: + "skeletonMesh_SET" - Set of the skinned meshes + with bone hierarchies + + """ + + order = ValidateContentsOrder + label = "Skeleton Rig Contents" + hosts = ["maya"] + families = ["rig.fbx"] + + @classmethod + def get_invalid(cls, instance): + objectsets, skeleton_mesh_nodes = cls.get_nodes(instance) + cls.validate_missing_objectsets( + instance, objectsets, instance.data["rig_sets"]) + + # Ensure contents in sets and retrieve long path for all objects + output_content = instance.data.get("skeleton_mesh", []) + output_content = cmds.ls(skeleton_mesh_nodes, long=True) + + invalid_hierarchy = cls.invalid_hierarchy( + instance, output_content) + invalid_geometry = cls.validate_geometry(output_content) + + error = False + if invalid_hierarchy: + cls.log.error("Found nodes which reside outside of root group " + "while they are set up for publishing." + "\n%s" % invalid_hierarchy) + error = True + if invalid_geometry: + cls.log.error("Found nodes which reside outside of root group " + "while they are set up for publishing." 
+ "\n%s" % invalid_hierarchy) + error = True + if error: + return invalid_hierarchy + invalid_geometry + + @classmethod + def get_nodes(cls, instance): + """Get the target objectsets and rig sets nodes + + Args: + instance (str): instance + + Returns: + tuple: 2-tuple of list of objectsets, + list of rig sets nodes + """ + objectsets = ["skeletonMesh_SET"] + skeleton_mesh_nodes = instance.data.get("skeleton_mesh", []) + return objectsets, skeleton_mesh_nodes diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py index a3828f871b..82248c57b3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py @@ -59,7 +59,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - controls_set = instance.data["rig_sets"].get("controls_SET") + controls_set = cls.get_node(instance) if not controls_set: cls.log.error( "Must have 'controls_SET' in rig instance" @@ -189,7 +189,7 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): @classmethod def repair(cls, instance): - controls_set = instance.data["rig_sets"].get("controls_SET") + controls_set = cls.get_node(instance) if not controls_set: cls.log.error( "Unable to repair because no 'controls_SET' found in rig " @@ -228,3 +228,64 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): default = cls.CONTROLLER_DEFAULTS[attr] cls.log.info("Setting %s to %s" % (plug, default)) cmds.setAttr(plug, default) + + @classmethod + def get_node(cls, instance): + """Get target object nodes from controls_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from controls_SET + """ + return instance.data["rig_sets"].get("controls_SET") + + +class ValidateSkeletonRigControllers(ValidateRigControllers): + """Validate rig controller for skeletonAnim_SET + + Controls must have the transformation attributes on their default + values of translate zero, rotate zero and scale one when they are + unlocked attributes. + + Unlocked keyable attributes may not have any incoming connections. If + these connections are required for the rig then lock the attributes. + + The visibility attribute must be locked. 
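The default-value rule just described (translate and rotate at zero, scale at one on unlocked attributes) can be checked attribute by attribute. A standalone sketch where attr_values stands in for cmds.getAttr lookups on a control; non_default_attributes is a hypothetical helper, not part of the validator.

CONTROLLER_DEFAULTS = {
    "translateX": 0, "translateY": 0, "translateZ": 0,
    "rotateX": 0, "rotateY": 0, "rotateZ": 0,
    "scaleX": 1, "scaleY": 1, "scaleZ": 1,
}

def non_default_attributes(control, attr_values):
    # return "<control>.<attr>" plugs whose current value differs
    # from the expected default
    return [
        "{}.{}".format(control, attr)
        for attr, default in CONTROLLER_DEFAULTS.items()
        if attr in attr_values and attr_values[attr] != default
    ]

assert non_default_attributes(
    "hips_CTL", {"translateX": 0, "rotateZ": 15.0, "scaleY": 1}
) == ["hips_CTL.rotateZ"]
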
+ + Note that `repair` will: + - Lock all visibility attributes + - Reset all default values for translate, rotate, scale + - Break all incoming connections to keyable attributes + + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Controllers" + hosts = ["maya"] + families = ["rig.fbx"] + + # Default controller values + CONTROLLER_DEFAULTS = { + "translateX": 0, + "translateY": 0, + "translateZ": 0, + "rotateX": 0, + "rotateY": 0, + "rotateZ": 0, + "scaleX": 1, + "scaleY": 1, + "scaleZ": 1 + } + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get("skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py index fbd510c683..80ac0f27e6 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py @@ -46,7 +46,7 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): def get_invalid(cls, instance): """Get all nodes which do not match the criteria""" - out_set = instance.data["rig_sets"].get("out_SET") + out_set = cls.get_node(instance) if not out_set: return [] @@ -85,3 +85,45 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): continue lib.set_id(node, sibling_id, overwrite=True) + + @classmethod + def get_node(cls, instance): + """Get target object nodes from out_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from out_SET + """ + return instance.data["rig_sets"].get("out_SET") + + +class ValidateSkeletonRigOutSetNodeIds(ValidateRigOutSetNodeIds): + """Validate if deformed shapes have related IDs to the original shapes + from skeleton set. + + When a deformer is applied in the scene on a referenced mesh that already + had deformers then Maya will create a new shape node for the mesh that + does not have the original id. This validator checks whether the ids are + valid on all the shape nodes in the instance. + + """ + + order = ValidateContentsOrder + families = ["rig.fbx"] + hosts = ['maya'] + label = 'Skeleton Rig Out Set Node Ids' + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get( + "skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index 24fb36eb8b..343d8e6924 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -47,7 +47,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): invalid = {} if compute: - out_set = instance.data["rig_sets"].get("out_SET") + out_set = cls.get_node(instance) if not out_set: instance.data["mismatched_output_ids"] = invalid return invalid @@ -115,3 +115,40 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): "Multiple matched ids found. 
Please repair manually: " "{}".format(multiple_ids_match) ) + + @classmethod + def get_node(cls, instance): + """Get target object nodes from out_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from out_SET + """ + return instance.data["rig_sets"].get("out_SET") + + +class ValidateSkeletonRigOutputIds(ValidateRigOutputIds): + """Validate rig output ids from the skeleton sets. + + Ids must share the same id as similarly named nodes in the scene. This is + to ensure the id from the model is preserved through animation. + + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Output Ids" + hosts = ["maya"] + families = ["rig.fbx"] + + @classmethod + def get_node(cls, instance): + """Get target object nodes from skeletonMesh_SET + + Args: + instance (str): instance + + Returns: + list: list of object nodes from skeletonMesh_SET + """ + return instance.data["rig_sets"].get("skeletonMesh_SET") diff --git a/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py new file mode 100644 index 0000000000..1dbe1c454c --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_skeleton_top_group_hierarchy.py @@ -0,0 +1,40 @@ +# -*- coding: utf-8 -*- +"""Plugin for validating naming conventions.""" +from maya import cmds + +import pyblish.api + +from openpype.pipeline.publish import ( + ValidateContentsOrder, + OptionalPyblishPluginMixin, + PublishValidationError +) + + +class ValidateSkeletonTopGroupHierarchy(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validates top group hierarchy in the SETs + Make sure the object inside the SETs are always top + group of the hierarchy + + """ + order = ValidateContentsOrder + 0.05 + label = "Skeleton Rig Top Group Hierarchy" + families = ["rig.fbx"] + + def process(self, instance): + invalid = [] + skeleton_mesh_data = instance.data("skeleton_mesh", []) + if skeleton_mesh_data: + invalid = self.get_top_hierarchy(skeleton_mesh_data) + if invalid: + raise PublishValidationError( + "The skeletonMesh_SET includes the object which " + "is not at the top hierarchy: {}".format(invalid)) + + def get_top_hierarchy(self, targets): + targets = cmds.ls(targets, long=True) # ensure long names + non_top_hierarchy_list = [ + target for target in targets if target.count("|") > 2 + ] + return non_top_hierarchy_list diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py index 5ba256f9f5..58fa9d02bd 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py @@ -69,11 +69,8 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin, invalid = [] - project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] - ) collision_prefixes = ( - project_settings + instance.context.data["project_settings"] ["maya"] ["create"] ["CreateUnrealStaticMesh"] diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 41e6a27cef..390545b806 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -2316,27 +2316,53 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. 
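The new ValidateSkeletonTopGroupHierarchy plugin above relies on counting "|" separators in long node names: anything nested deeper than a direct child of the top group is rejected. A minimal restatement of that check, runnable without Maya.

def non_top_level_nodes(long_names):
    # a member sitting directly under the top group looks like
    # "|top_GRP|node" (two separators); deeper nesting is invalid
    return [name for name in long_names if name.count("|") > 2]

assert non_top_level_nodes(
    ["|skeleton_GRP|joints_GRP", "|skeleton_GRP|joints_GRP|hips_JNT"]
) == ["|skeleton_GRP|joints_GRP|hips_JNT"]
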
''' Adds correct colorspace to write node dict ''' - for node in nuke.allNodes(filter="Group"): + for node in nuke.allNodes(filter="Group", group=self._root_node): + log.info("Setting colorspace to `{}`".format(node.name())) # get data from avalon knob avalon_knob_data = read_avalon_data(node) + node_data = get_node_data(node, INSTANCE_DATA_KNOB) - if avalon_knob_data.get("id") != "pyblish.avalon.instance": + if ( + # backward compatibility + # TODO: remove this once old avalon data api will be removed + avalon_knob_data + and avalon_knob_data.get("id") != "pyblish.avalon.instance" + ): + continue + elif ( + node_data + and node_data.get("id") != "pyblish.avalon.instance" + ): continue - if "creator" not in avalon_knob_data: + if ( + # backward compatibility + # TODO: remove this once old avalon data api will be removed + avalon_knob_data + and "creator" not in avalon_knob_data + ): + continue + elif ( + node_data + and "creator_identifier" not in node_data + ): continue - # establish families - families = [avalon_knob_data["family"]] - if avalon_knob_data.get("families"): - families.append(avalon_knob_data.get("families")) + nuke_imageio_writes = None + if avalon_knob_data: + # establish families + families = [avalon_knob_data["family"]] + if avalon_knob_data.get("families"): + families.append(avalon_knob_data.get("families")) - nuke_imageio_writes = get_imageio_node_setting( - node_class=avalon_knob_data["families"], - plugin_name=avalon_knob_data["creator"], - subset=avalon_knob_data["subset"] - ) + nuke_imageio_writes = get_imageio_node_setting( + node_class=avalon_knob_data["families"], + plugin_name=avalon_knob_data["creator"], + subset=avalon_knob_data["subset"] + ) + elif node_data: + nuke_imageio_writes = get_write_node_template_attr(node) log.debug("nuke_imageio_writes: `{}`".format(nuke_imageio_writes)) @@ -3397,3 +3423,27 @@ def create_viewer_profile_string(viewer, display=None, path_like=False): if path_like: return "{}/{}".format(display, viewer) return "{} ({})".format(viewer, display) + + +def get_filenames_without_hash(filename, frame_start, frame_end): + """Get filenames without frame hash + i.e. 
"renderCompositingMain.baking.0001.exr" + + Args: + filename (str): filename with frame hash + frame_start (str): start of the frame + frame_end (str): end of the frame + + Returns: + list: filename per frame of the sequence + """ + filenames = [] + for frame in range(int(frame_start), (int(frame_end) + 1)): + if "#" in filename: + # use regex to convert #### to {:0>4} + def replace(match): + return "{{:0>{}}}".format(len(match.group())) + filename_without_hashes = re.sub("#+", replace, filename) + new_filename = filename_without_hashes.format(frame) + filenames.append(new_filename) + return filenames diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index a0e1525cd0..c39e3c339d 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -21,6 +21,9 @@ from openpype.pipeline import ( CreatedInstance, get_current_task_name ) +from openpype.lib.transcoding import ( + VIDEO_EXTENSIONS +) from .lib import ( INSTANCE_DATA_KNOB, Knobby, @@ -35,7 +38,8 @@ from .lib import ( get_node_data, get_view_process_node, get_viewer_config_from_string, - deprecated + deprecated, + get_filenames_without_hash ) from .pipeline import ( list_instances, @@ -580,18 +584,25 @@ class ExporterReview(object): def get_file_info(self): if self.collection: # get path - self.fname = os.path.basename(self.collection.format( - "{head}{padding}{tail}")) + self.fname = os.path.basename( + self.collection.format("{head}{padding}{tail}") + ) self.fhead = self.collection.format("{head}") # get first and last frame self.first_frame = min(self.collection.indexes) self.last_frame = max(self.collection.indexes) + + # make sure slate frame is not included + frame_start_handle = self.instance.data["frameStartHandle"] + if frame_start_handle > self.first_frame: + self.first_frame = frame_start_handle + else: self.fname = os.path.basename(self.path_in) self.fhead = os.path.splitext(self.fname)[0] + "." - self.first_frame = self.instance.data.get("frameStartHandle", None) - self.last_frame = self.instance.data.get("frameEndHandle", None) + self.first_frame = self.instance.data["frameStartHandle"] + self.last_frame = self.instance.data["frameEndHandle"] if "#" in self.fhead: self.fhead = self.fhead.replace("#", "")[:-1] @@ -627,6 +638,10 @@ class ExporterReview(object): "frameStart": self.first_frame, "frameEnd": self.last_frame, }) + if ".{}".format(self.ext) not in VIDEO_EXTENSIONS: + filenames = get_filenames_without_hash( + self.file, self.first_frame, self.last_frame) + repre["files"] = filenames if self.multiple_presets: repre["outputName"] = self.name @@ -800,7 +815,20 @@ class ExporterReviewMov(ExporterReview): self.log.info("File info was set...") - self.file = self.fhead + self.name + ".{}".format(self.ext) + if ".{}".format(self.ext) in VIDEO_EXTENSIONS: + self.file = "{}{}.{}".format( + self.fhead, self.name, self.ext) + else: + # Output is image (or image sequence) + # When the file is an image it's possible it + # has extra information after the `fhead` that + # we want to preserve, e.g. 
like frame numbers + # or frames hashes like `####` + filename_no_ext = os.path.splitext( + os.path.basename(self.path_in))[0] + after_head = filename_no_ext[len(self.fhead):] + self.file = "{}{}.{}.{}".format( + self.fhead, self.name, after_head, self.ext) self.path = os.path.join( self.staging_dir, self.file).replace("\\", "/") @@ -869,6 +897,11 @@ class ExporterReviewMov(ExporterReview): r_node["origlast"].setValue(self.last_frame) r_node["colorspace"].setValue(self.write_colorspace) + # do not rely on defaults, set explicitly + # to be sure it is set correctly + r_node["frame_mode"].setValue("expression") + r_node["frame"].setValue("") + if read_raw: r_node["raw"].setValue(1) @@ -921,7 +954,6 @@ class ExporterReviewMov(ExporterReview): self.log.debug("Path: {}".format(self.path)) write_node["file"].setValue(str(self.path)) write_node["file_type"].setValue(str(self.ext)) - # Knobs `meta_codec` and `mov64_codec` are not available on centos. # TODO shouldn't this come from settings on outputs? try: diff --git a/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py index b0f69e8ab8..449a1cc935 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_nuke_instance_data.py @@ -2,7 +2,7 @@ import nuke import pyblish.api -class CollectNukeInstanceData(pyblish.api.InstancePlugin): +class CollectInstanceData(pyblish.api.InstancePlugin): """Collect Nuke instance data """ diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py similarity index 83% rename from openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py rename to openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py index 956d1a54a3..9730e3b61f 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py @@ -8,15 +8,16 @@ from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api.lib import maintained_selection -class ExtractReviewDataMov(publish.Extractor): - """Extracts movie and thumbnail with baked in luts +class ExtractReviewIntermediates(publish.Extractor): + """Extracting intermediate videos or sequences with + thumbnail for transcoding. 
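The ExporterReviewMov change above keeps whatever followed the head of the source name (frame numbers or "####" hashes) when the intermediate output is an image sequence. A standalone sketch of that name assembly; the video-extension set is passed in here so the snippet runs outside of OpenPype, and build_intermediate_filename is only an illustrative name.

import os

def build_intermediate_filename(path_in, fhead, name, ext, video_exts):
    if ".{}".format(ext) in video_exts:
        # video outputs only need "<head><name>.<ext>"
        return "{}{}.{}".format(fhead, name, ext)
    # image outputs preserve whatever followed the head, e.g. frame hashes
    filename_no_ext = os.path.splitext(os.path.basename(path_in))[0]
    after_head = filename_no_ext[len(fhead):]
    return "{}{}.{}.{}".format(fhead, name, after_head, ext)

video_exts = {".mov", ".mp4"}
assert build_intermediate_filename(
    "/tmp/shot010.baking.####.exr", "shot010.", "h264", "mov", video_exts
) == "shot010.h264.mov"
assert build_intermediate_filename(
    "/tmp/shot010.baking.####.exr", "shot010.", "png", "png", video_exts
) == "shot010.png.baking.####.png"
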
must be run after extract_render_local.py """ order = pyblish.api.ExtractorOrder + 0.01 - label = "Extract Review Data Mov" + label = "Extract Review Intermediates" families = ["review"] hosts = ["nuke"] @@ -25,6 +26,24 @@ class ExtractReviewDataMov(publish.Extractor): viewer_lut_raw = None outputs = {} + @classmethod + def apply_settings(cls, project_settings): + """Apply the settings from the deprecated + ExtractReviewDataMov plugin for backwards compatibility + """ + nuke_publish = project_settings["nuke"]["publish"] + deprecated_setting = nuke_publish["ExtractReviewDataMov"] + current_setting = nuke_publish.get("ExtractReviewIntermediates") + if deprecated_setting["enabled"]: + # Use deprecated settings if they are still enabled + cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"] + cls.outputs = deprecated_setting["outputs"] + elif current_setting is None: + pass + elif current_setting["enabled"]: + cls.viewer_lut_raw = current_setting["viewer_lut_raw"] + cls.outputs = current_setting["outputs"] + def process(self, instance): families = set(instance.data["families"]) diff --git a/openpype/hosts/photoshop/lib.py b/openpype/hosts/photoshop/lib.py index ae7a33b7b6..9f603a70d2 100644 --- a/openpype/hosts/photoshop/lib.py +++ b/openpype/hosts/photoshop/lib.py @@ -1,5 +1,8 @@ +import re + import openpype.hosts.photoshop.api as api from openpype.client import get_asset_by_name +from openpype.lib import prepare_template_data from openpype.pipeline import ( AutoCreator, CreatedInstance @@ -78,3 +81,17 @@ class PSAutoCreator(AutoCreator): existing_instance["asset"] = asset_name existing_instance["task"] = task_name existing_instance["subset"] = subset_name + + +def clean_subset_name(subset_name): + """Clean all variants leftover {layer} from subset name.""" + dynamic_data = prepare_template_data({"layer": "{layer}"}) + for value in dynamic_data.values(): + if value in subset_name: + subset_name = (subset_name.replace(value, "") + .replace("__", "_") + .replace("..", ".")) + # clean trailing separator as Main_ + pattern = r'[\W_]+$' + replacement = '' + return re.sub(pattern, replacement, subset_name) diff --git a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py index e4229788bd..afde77fdb4 100644 --- a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py @@ -2,7 +2,7 @@ from openpype.pipeline import CreatedInstance from openpype.lib import BoolDef import openpype.hosts.photoshop.api as api -from openpype.hosts.photoshop.lib import PSAutoCreator +from openpype.hosts.photoshop.lib import PSAutoCreator, clean_subset_name from openpype.pipeline.create import get_subset_name from openpype.lib import prepare_template_data from openpype.client import get_asset_by_name @@ -129,14 +129,4 @@ class AutoImageCreator(PSAutoCreator): self.family, variant, task_name, asset_doc, project_name, host_name, dynamic_data=dynamic_data ) - return self._clean_subset_name(subset_name) - - def _clean_subset_name(self, subset_name): - """Clean all variants leftover {layer} from subset name.""" - dynamic_data = prepare_template_data({"layer": "{layer}"}) - for value in dynamic_data.values(): - if value in subset_name: - return (subset_name.replace(value, "") - .replace("__", "_") - .replace("..", ".")) - return subset_name + return clean_subset_name(subset_name) diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py 
b/openpype/hosts/photoshop/plugins/create/create_image.py index af20d456e0..4f2e90886a 100644 --- a/openpype/hosts/photoshop/plugins/create/create_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_image.py @@ -10,6 +10,7 @@ from openpype.pipeline import ( from openpype.lib import prepare_template_data from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances +from openpype.hosts.photoshop.lib import clean_subset_name class ImageCreator(Creator): @@ -88,6 +89,7 @@ class ImageCreator(Creator): layer_fill = prepare_template_data({"layer": layer_name}) subset_name = subset_name.format(**layer_fill) + subset_name = clean_subset_name(subset_name) if group.long_name: for directory in group.long_name[::-1]: @@ -184,7 +186,6 @@ class ImageCreator(Creator): self.mark_for_review = plugin_settings["mark_for_review"] self.enabled = plugin_settings["enabled"] - def get_detail_description(self): return """Creator for Image instances diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py b/openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py similarity index 73% rename from openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py rename to openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py index 72379ea4e1..e8a2cae16c 100644 --- a/openpype/hosts/traypublisher/plugins/publish/collect_missing_frame_range_asset_entity.py +++ b/openpype/hosts/traypublisher/plugins/publish/collect_frame_data_from_asset_entity.py @@ -1,28 +1,21 @@ import pyblish.api -from openpype.pipeline import OptionalPyblishPluginMixin -class CollectMissingFrameDataFromAssetEntity( - pyblish.api.InstancePlugin, - OptionalPyblishPluginMixin -): - """Collect Missing Frame Range data From Asset Entity +class CollectFrameDataFromAssetEntity(pyblish.api.InstancePlugin): + """Collect Frame Data From AssetEntity found in context Frame range data will only be collected if the keys are not yet collected for the instance. 
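The new clean_subset_name helper in the Photoshop lib strips an unresolved layer token and tidies the separators it leaves behind. A self-contained approximation of that behaviour; the exact token variants come from prepare_template_data, so the "{Layer}" spelling used below is an assumption of this sketch.

import re

def clean_subset_name_sketch(subset_name, layer_token="{Layer}"):
    if layer_token in subset_name:
        # drop the leftover token and collapse doubled separators
        subset_name = (subset_name.replace(layer_token, "")
                       .replace("__", "_")
                       .replace("..", "."))
    # drop a trailing separator such as the underscore in "imageMain_"
    return re.sub(r"[\W_]+$", "", subset_name)

assert clean_subset_name_sketch("imageMain_{Layer}") == "imageMain"
assert clean_subset_name_sketch("image{Layer}.Beauty") == "image.Beauty"
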
""" order = pyblish.api.CollectorOrder + 0.491 - label = "Collect Missing Frame Data From Asset Entity" + label = "Collect Missing Frame Data From Asset" families = ["plate", "pointcache", "vdbcache", "online", "render"] hosts = ["traypublisher"] - optional = True def process(self, instance): - if not self.is_active(instance.data): - return missing_keys = [] for key in ( "fps", diff --git a/openpype/plugins/publish/collect_sequence_frame_data.py b/openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py similarity index 71% rename from openpype/plugins/publish/collect_sequence_frame_data.py rename to openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py index 6c2bfbf358..db70d4fe0a 100644 --- a/openpype/plugins/publish/collect_sequence_frame_data.py +++ b/openpype/hosts/traypublisher/plugins/publish/collect_sequence_frame_data.py @@ -1,33 +1,47 @@ import pyblish.api import clique +from openpype.pipeline import OptionalPyblishPluginMixin + + +class CollectSequenceFrameData( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): + """Collect Original Sequence Frame Data -class CollectSequenceFrameData(pyblish.api.InstancePlugin): - """Collect Sequence Frame Data If the representation includes files with frame numbers, then set `frameStart` and `frameEnd` for the instance to the start and end frame respectively """ - order = pyblish.api.CollectorOrder + 0.2 - label = "Collect Sequence Frame Data" + order = pyblish.api.CollectorOrder + 0.4905 + label = "Collect Original Sequence Frame Data" families = ["plate", "pointcache", "vdbcache", "online", "render"] hosts = ["traypublisher"] + optional = True def process(self, instance): + if not self.is_active(instance.data): + return + frame_data = self.get_frame_data_from_repre_sequence(instance) + if not frame_data: # if no dict data skip collecting the frame range data return + for key, value in frame_data.items(): - if key not in instance.data: - instance.data[key] = value - self.log.debug(f"Collected Frame range data '{key}':{value} ") + instance.data[key] = value + self.log.debug(f"Collected Frame range data '{key}':{value} ") + def get_frame_data_from_repre_sequence(self, instance): repres = instance.data.get("representations") + asset_data = instance.data["assetEntity"]["data"] + if repres: first_repre = repres[0] if "ext" not in first_repre: @@ -36,7 +50,7 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin): return files = first_repre["files"] - collections, remainder = clique.assemble(files) + collections, _ = clique.assemble(files) if not collections: # No sequences detected and we can't retrieve # frame range @@ -52,5 +66,5 @@ class CollectSequenceFrameData(pyblish.api.InstancePlugin): "frameEnd": repres_frames[-1], "handleStart": 0, "handleEnd": 0, - "fps": instance.context.data["assetEntity"]["data"]["fps"] + "fps": asset_data["fps"] } diff --git a/openpype/hosts/unreal/plugins/load/load_yeticache.py b/openpype/hosts/unreal/plugins/load/load_yeticache.py new file mode 100644 index 0000000000..22f5029bac --- /dev/null +++ b/openpype/hosts/unreal/plugins/load/load_yeticache.py @@ -0,0 +1,179 @@ +# -*- coding: utf-8 -*- +"""Loader for Yeti Cache.""" +import os +import json + +from openpype.pipeline import ( + get_representation_path, + AYON_CONTAINER_ID +) +from openpype.hosts.unreal.api import plugin +from openpype.hosts.unreal.api import pipeline as unreal_pipeline +import unreal # noqa + + +class YetiLoader(plugin.Loader): + """Load Yeti Cache""" + + families = ["yeticacheUE"] + 
label = "Import Yeti" + representations = ["abc"] + icon = "pagelines" + color = "orange" + + @staticmethod + def get_task(filename, asset_dir, asset_name, replace): + task = unreal.AssetImportTask() + options = unreal.AbcImportSettings() + + task.set_editor_property('filename', filename) + task.set_editor_property('destination_path', asset_dir) + task.set_editor_property('destination_name', asset_name) + task.set_editor_property('replace_existing', replace) + task.set_editor_property('automated', True) + task.set_editor_property('save', True) + + task.options = options + + return task + + @staticmethod + def is_groom_module_active(): + """ + Check if Groom plugin is active. + + This is a workaround, because the Unreal python API don't have + any method to check if plugin is active. + """ + prj_file = unreal.Paths.get_project_file_path() + + with open(prj_file, "r") as fp: + data = json.load(fp) + + plugins = data.get("Plugins") + + if not plugins: + return False + + plugin_names = [p.get("Name") for p in plugins] + + return "HairStrands" in plugin_names + + def load(self, context, name, namespace, options): + """Load and containerise representation into Content Browser. + + This is two step process. First, import FBX to temporary path and + then call `containerise()` on it - this moves all content to new + directory and then it will create AssetContainer there and imprint it + with metadata. This will mark this path as container. + + Args: + context (dict): application context + name (str): subset name + namespace (str): in Unreal this is basically path to container. + This is not passed here, so namespace is set + by `containerise()` because only then we know + real path. + data (dict): Those would be data to be imprinted. This is not used + now, data are imprinted by `containerise()`. 
+ + Returns: + list(str): list of container content + + """ + # Check if Groom plugin is active + if not self.is_groom_module_active(): + raise RuntimeError("Groom plugin is not activated.") + + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" + asset = context.get('asset').get('name') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{root}/{asset}/{name}", suffix="") + + unique_number = 1 + while unreal.EditorAssetLibrary.does_directory_exist( + f"{asset_dir}_{unique_number:02}" + ): + unique_number += 1 + + asset_dir = f"{asset_dir}_{unique_number:02}" + container_name = f"{container_name}_{unique_number:02}{suffix}" + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + path = self.filepath_from_context(context) + task = self.get_task(path, asset_dir, asset_name, False) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 + + # Create Asset Container + unreal_pipeline.create_container( + container=container_name, path=asset_dir) + + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": context["representation"]["_id"], + "parent": context["representation"]["parent"], + "family": context["representation"]["context"]["family"] + } + unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) + + asset_content = unreal.EditorAssetLibrary.list_assets( + asset_dir, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + return asset_content + + def update(self, container, representation): + name = container["asset_name"] + source_path = get_representation_path(representation) + destination_path = container["namespace"] + + task = self.get_task(source_path, destination_path, name, True) + + # do import fbx and replace existing data + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + container_path = f'{container["namespace"]}/{container["objectName"]}' + # update metadata + unreal_pipeline.imprint( + container_path, + { + "representation": str(representation["_id"]), + "parent": str(representation["parent"]) + }) + + asset_content = unreal.EditorAssetLibrary.list_assets( + destination_path, recursive=True, include_folder=True + ) + + for a in asset_content: + unreal.EditorAssetLibrary.save_asset(a) + + def remove(self, container): + path = container["namespace"] + parent_path = os.path.dirname(path) + + unreal.EditorAssetLibrary.delete_directory(path) + + asset_content = unreal.EditorAssetLibrary.list_assets( + parent_path, recursive=False + ) + + if len(asset_content) == 0: + unreal.EditorAssetLibrary.delete_directory(parent_path) diff --git a/openpype/lib/local_settings.py b/openpype/lib/local_settings.py index 3fb35a7e7b..dae6e074af 100644 --- a/openpype/lib/local_settings.py +++ b/openpype/lib/local_settings.py @@ -494,10 +494,18 @@ class OpenPypeSettingsRegistry(JSONSettingRegistry): """ def __init__(self, name=None): - self.vendor = "pypeclub" - self.product = "openpype" + if AYON_SERVER_ENABLED: + vendor = "Ynput" + product = "AYON" + default_name = "AYON_settings" + else: + vendor = "pypeclub" + product = "openpype" + default_name = 
"openpype_settings" + self.vendor = vendor + self.product = product if not name: - name = "openpype_settings" + name = default_name path = appdirs.user_data_dir(self.product, self.vendor) super(OpenPypeSettingsRegistry, self).__init__(name, path) diff --git a/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py b/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py new file mode 100644 index 0000000000..4a7497b075 --- /dev/null +++ b/openpype/modules/deadline/plugins/publish/submit_blender_deadline.py @@ -0,0 +1,181 @@ +# -*- coding: utf-8 -*- +"""Submitting render job to Deadline.""" + +import os +import getpass +import attr +from datetime import datetime + +import bpy + +from openpype.lib import is_running_from_build +from openpype.pipeline import legacy_io +from openpype.pipeline.farm.tools import iter_expected_files +from openpype.tests.lib import is_in_tests + +from openpype_modules.deadline import abstract_submit_deadline +from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo + + +@attr.s +class BlenderPluginInfo(): + SceneFile = attr.ib(default=None) # Input + Version = attr.ib(default=None) # Mandatory for Deadline + SaveFile = attr.ib(default=True) + + +class BlenderSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): + label = "Submit Render to Deadline" + hosts = ["blender"] + families = ["render.farm"] + + use_published = True + priority = 50 + chunk_size = 1 + jobInfo = {} + pluginInfo = {} + group = None + + def get_job_info(self): + job_info = DeadlineJobInfo(Plugin="Blender") + + job_info.update(self.jobInfo) + + instance = self._instance + context = instance.context + + # Always use the original work file name for the Job name even when + # rendering is done from the published Work File. The original work + # file name is clearer because it can also have subversion strings, + # etc. which are stripped for the published file. 
+        src_filepath = context.data["currentFile"]
+        src_filename = os.path.basename(src_filepath)
+
+        if is_in_tests():
+            src_filename += datetime.now().strftime("%d%m%Y%H%M%S")
+
+        job_info.Name = f"{src_filename} - {instance.name}"
+        job_info.BatchName = src_filename
+        job_info.Plugin = instance.data.get("blenderRenderPlugin", "Blender")
+        job_info.UserName = context.data.get("deadlineUser", getpass.getuser())
+
+        # Deadline requires integers in frame range
+        frames = "{start}-{end}x{step}".format(
+            start=int(instance.data["frameStartHandle"]),
+            end=int(instance.data["frameEndHandle"]),
+            step=int(instance.data["byFrameStep"]),
+        )
+        job_info.Frames = frames
+
+        job_info.Pool = instance.data.get("primaryPool")
+        job_info.SecondaryPool = instance.data.get("secondaryPool")
+        job_info.Comment = context.data.get("comment")
+        job_info.Priority = instance.data.get("priority", self.priority)
+
+        if self.group != "none" and self.group:
+            job_info.Group = self.group
+
+        attr_values = self.get_attr_values_from_data(instance.data)
+        render_globals = instance.data.setdefault("renderGlobals", {})
+        machine_list = attr_values.get("machineList", "")
+        if machine_list:
+            if attr_values.get("whitelist", True):
+                machine_list_key = "Whitelist"
+            else:
+                machine_list_key = "Blacklist"
+            render_globals[machine_list_key] = machine_list
+
+        job_info.Priority = attr_values.get("priority")
+        job_info.ChunkSize = attr_values.get("chunkSize")
+
+        # Add options from RenderGlobals
+        render_globals = instance.data.get("renderGlobals", {})
+        job_info.update(render_globals)
+
+        keys = [
+            "FTRACK_API_KEY",
+            "FTRACK_API_USER",
+            "FTRACK_SERVER",
+            "OPENPYPE_SG_USER",
+            "AVALON_PROJECT",
+            "AVALON_ASSET",
+            "AVALON_TASK",
+            "AVALON_APP_NAME",
+            "OPENPYPE_DEV",
+            "IS_TEST"
+        ]
+
+        # Add OpenPype version if we are running from build.
+        if is_running_from_build():
+            keys.append("OPENPYPE_VERSION")
+
+        # Add mongo url if it's enabled
+        if self._instance.context.data.get("deadlinePassMongoUrl"):
+            keys.append("OPENPYPE_MONGO")
+
+        environment = dict({key: os.environ[key] for key in keys
+                            if key in os.environ}, **legacy_io.Session)
+
+        for key in keys:
+            value = environment.get(key)
+            if not value:
+                continue
+            job_info.EnvironmentKeyValue[key] = value
+
+        # to recognize job from PYPE for turning Event On/Off
+        job_info.add_render_job_env_var()
+        job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1"
+
+        # Adding file dependencies.
+ if self.asset_dependencies: + dependencies = instance.context.data["fileDependencies"] + for dependency in dependencies: + job_info.AssetDependency += dependency + + # Add list of expected files to job + # --------------------------------- + exp = instance.data.get("expectedFiles") + for filepath in iter_expected_files(exp): + job_info.OutputDirectory += os.path.dirname(filepath) + job_info.OutputFilename += os.path.basename(filepath) + + return job_info + + def get_plugin_info(self): + plugin_info = BlenderPluginInfo( + SceneFile=self.scene_path, + Version=bpy.app.version_string, + SaveFile=True, + ) + + plugin_payload = attr.asdict(plugin_info) + + # Patching with pluginInfo from settings + for key, value in self.pluginInfo.items(): + plugin_payload[key] = value + + return plugin_payload + + def process_submission(self): + instance = self._instance + + expected_files = instance.data["expectedFiles"] + if not expected_files: + raise RuntimeError("No Render Elements found!") + + first_file = next(iter_expected_files(expected_files)) + output_dir = os.path.dirname(first_file) + instance.data["outputDir"] = output_dir + instance.data["toBeRenderedOn"] = "deadline" + + payload = self.assemble_payload() + return self.submit(payload) + + def from_published_scene(self): + """ + This is needed to set the correct path for the json metadata. Because + the rendering path is set in the blend file during the collection, + and the path is adjusted to use the published scene, this ensures that + the metadata and the rendered files are in the same location. + """ + return super().from_published_scene(False) diff --git a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py index 70aa12956d..0b97582d2a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py @@ -6,6 +6,7 @@ import requests import pyblish.api +from openpype import AYON_SERVER_ENABLED from openpype.pipeline import legacy_io from openpype.pipeline.publish import ( OpenPypePyblishPluginMixin @@ -34,6 +35,8 @@ class FusionSubmitDeadline( targets = ["local"] # presets + plugin = None + priority = 50 chunk_size = 1 concurrent_tasks = 1 @@ -173,7 +176,7 @@ class FusionSubmitDeadline( "SecondaryPool": instance.data.get("secondaryPool"), "Group": self.group, - "Plugin": "Fusion", + "Plugin": self.plugin, "Frames": "{start}-{end}".format( start=int(instance.data["frameStartHandle"]), end=int(instance.data["frameEndHandle"]) @@ -216,16 +219,29 @@ class FusionSubmitDeadline( # Include critical variables with submission keys = [ - # TODO: This won't work if the slaves don't have access to - # these paths, such as if slaves are running Linux and the - # submitter is on Windows. 
- "PYTHONPATH", - "OFX_PLUGIN_PATH", - "FUSION9_MasterPrefs" + "FTRACK_API_KEY", + "FTRACK_API_USER", + "FTRACK_SERVER", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_TASK", + "AVALON_APP_NAME", + "OPENPYPE_DEV", + "OPENPYPE_LOG_NO_COLORS", + "IS_TEST" ] environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) + # to recognize render jobs + if AYON_SERVER_ENABLED: + environment["AYON_BUNDLE_NAME"] = os.environ["AYON_BUNDLE_NAME"] + render_job_label = "AYON_RENDER_JOB" + else: + render_job_label = "OPENPYPE_RENDER_JOB" + + environment[render_job_label] = "1" + payload["JobInfo"].update({ "EnvironmentKeyValue%d" % index: "{key}={value}".format( key=key, diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py index 63c6e4a0c7..073da3019a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py @@ -238,9 +238,10 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, plugin_data["redshift_SeparateAovFiles"] = instance.data.get( "separateAovFiles") if instance.data["cameras"]: - plugin_info["Camera0"] = None - plugin_info["Camera"] = instance.data["cameras"][0] - plugin_info["Camera1"] = instance.data["cameras"][0] + camera = instance.data["cameras"][0] + plugin_info["Camera0"] = camera + plugin_info["Camera"] = camera + plugin_info["Camera1"] = camera self.log.debug("plugin data:{}".format(plugin_data)) plugin_info.update(plugin_data) diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 909975f7ab..6ed5819f2b 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -96,7 +96,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, targets = ["local"] hosts = ["fusion", "max", "maya", "nuke", "houdini", - "celaction", "aftereffects", "harmony"] + "celaction", "aftereffects", "harmony", "blender"] families = ["render.farm", "render.frames_farm", "prerender.farm", "prerender.frames_farm", @@ -107,6 +107,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "redshift_rop"] aov_filter = {"maya": [r".*([Bb]eauty).*"], + "blender": [r".*([Bb]eauty).*"], "aftereffects": [r".*"], # for everything from AE "harmony": [r".*"], # for everything from AE "celaction": [r".*"], diff --git a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py index a29acf9823..2c55e7c951 100644 --- a/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py +++ b/openpype/modules/deadline/repository/custom/plugins/Ayon/Ayon.py @@ -96,7 +96,7 @@ class AyonDeadlinePlugin(DeadlinePlugin): for path in exe_list.split(";"): if path.startswith("~"): path = os.path.expanduser(path) - expanded_paths.append(path) + expanded_paths.append(path) exe = FileUtils.SearchFileList(";".join(expanded_paths)) if exe == "": diff --git a/openpype/modules/launcher_action.py b/openpype/modules/launcher_action.py index c4331b6094..5e14f25f76 100644 --- a/openpype/modules/launcher_action.py +++ b/openpype/modules/launcher_action.py @@ -1,3 +1,6 @@ +import os + +from openpype import PLUGINS_DIR, AYON_SERVER_ENABLED from openpype.modules import ( OpenPypeModule, ITrayAction, @@ -13,36 +16,66 @@ class 
LauncherAction(OpenPypeModule, ITrayAction):
         self.enabled = True
 
         # Tray attributes
-        self.window = None
+        self._window = None
 
     def tray_init(self):
-        self.create_window()
+        self._create_window()
 
-        self.add_doubleclick_callback(self.show_launcher)
+        self.add_doubleclick_callback(self._show_launcher)
 
     def tray_start(self):
         return
 
     def connect_with_modules(self, enabled_modules):
         # Register actions
-        if self.tray_initialized:
-            from openpype.tools.launcher import actions
-            actions.register_config_actions()
-            actions_paths = self.manager.collect_plugin_paths()["actions"]
-            actions.register_actions_from_paths(actions_paths)
-            actions.register_environment_actions()
-
-    def create_window(self):
-        if self.window:
+        if not self.tray_initialized:
             return
-        from openpype.tools.launcher import LauncherWindow
-        self.window = LauncherWindow()
+
+        from openpype.pipeline.actions import register_launcher_action_path
+
+        actions_dir = os.path.join(PLUGINS_DIR, "actions")
+        if os.path.exists(actions_dir):
+            register_launcher_action_path(actions_dir)
+
+        actions_paths = self.manager.collect_plugin_paths()["actions"]
+        for path in actions_paths:
+            if path and os.path.exists(path):
+                register_launcher_action_path(path)
+
+        paths_str = os.environ.get("AVALON_ACTIONS") or ""
+        if paths_str:
+            self.log.warning(
+                "WARNING: 'AVALON_ACTIONS' is deprecated. Support of this"
+                " environment variable will be removed in future versions."
+                " Please consider using 'OpenPypeModule' to define custom"
+                " action paths. Planned version to drop the support"
+                " is 3.17.2 or 3.18.0 ."
+            )
+
+        for path in paths_str.split(os.pathsep):
+            if path and os.path.exists(path):
+                register_launcher_action_path(path)
 
     def on_action_trigger(self):
-        self.show_launcher()
+        """Implementation for ITrayAction interface.
 
-    def show_launcher(self):
-        if self.window:
-            self.window.show()
-            self.window.raise_()
-            self.window.activateWindow()
+        Show launcher tool on action trigger.
+        """
+
+        self._show_launcher()
+
+    def _create_window(self):
+        if self._window:
+            return
+        if AYON_SERVER_ENABLED:
+            from openpype.tools.ayon_launcher.ui import LauncherWindow
+        else:
+            from openpype.tools.launcher import LauncherWindow
+        self._window = LauncherWindow()
+
+    def _show_launcher(self):
+        if self._window is None:
+            return
+        self._window.show()
+        self._window.raise_()
+        self._window.activateWindow()
diff --git a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py b/openpype/modules/muster/plugins/publish/submit_maya_muster.py
similarity index 99%
rename from openpype/hosts/maya/plugins/publish/submit_maya_muster.py
rename to openpype/modules/muster/plugins/publish/submit_maya_muster.py
index c174fa7a33..5c95744876 100644
--- a/openpype/hosts/maya/plugins/publish/submit_maya_muster.py
+++ b/openpype/modules/muster/plugins/publish/submit_maya_muster.py
@@ -25,6 +25,7 @@ def _get_template_id(renderer):
     :rtype: int
 
     """
+    # TODO: Use settings from context?
     templates = get_system_settings()["modules"]["muster"]["templates_mapping"]
     if not templates:
         raise RuntimeError(("Muster template mapping missing in "
diff --git a/openpype/pipeline/actions.py b/openpype/pipeline/actions.py
index b488fe3e1f..feb1bd05d2 100644
--- a/openpype/pipeline/actions.py
+++ b/openpype/pipeline/actions.py
@@ -20,7 +20,13 @@ class LauncherAction(object):
         log.propagate = True
 
     def is_compatible(self, session):
-        """Return whether the class is compatible with the Session."""
+        """Return whether the class is compatible with the Session.
+ + Args: + session (dict[str, Union[str, None]]): Session data with + AVALON_PROJECT, AVALON_ASSET and AVALON_TASK. + """ + return True def process(self, session, **kwargs): diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 44e5cb6c47..2800050496 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -2,9 +2,12 @@ from copy import deepcopy import re import os import json -import platform import contextlib +import functools +import platform import tempfile +import warnings + from openpype import PACKAGE_DIR from openpype.settings import get_project_settings from openpype.lib import ( @@ -20,12 +23,60 @@ log = Logger.get_logger(__name__) class CachedData: - remapping = {} + remapping = None + has_compatible_ocio_package = None + config_version_data = {} + ocio_config_colorspaces = {} allowed_exts = { ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS) } +class DeprecatedWarning(DeprecationWarning): + pass + + +def deprecated(new_destination): + """Mark functions as deprecated. + + It will result in a warning being emitted when the function is used. + """ + + func = None + if callable(new_destination): + func = new_destination + new_destination = None + + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement." + ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + warnings.simplefilter("always", DeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(decorated_func.__name__, warning_message), + category=DeprecatedWarning, + stacklevel=4 + ) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) + + @contextlib.contextmanager def _make_temp_json_file(): """Wrapping function for json temp file @@ -69,124 +120,264 @@ def get_ocio_config_script_path(): ) -def get_imageio_colorspace_from_filepath( - path, host_name, project_name, +def get_colorspace_name_from_filepath( + filepath, host_name, project_name, config_data=None, file_rules=None, project_settings=None, validate=True ): """Get colorspace name from filepath - ImageIO Settings file rules are tested for matching rule. - Args: - path (str): path string, file rule pattern is tested on it + filepath (str): path string, file rule pattern is tested on it host_name (str): host name project_name (str): project name - config_data (dict, optional): config path and template in dict. + config_data (Optional[dict]): config path and template in dict. Defaults to None. - file_rules (dict, optional): file rule data from settings. + file_rules (Optional[dict]): file rule data from settings. Defaults to None. - project_settings (dict, optional): project settings. Defaults to None. - validate (bool, optional): should resulting colorspace be validated - with config file? Defaults to True. + project_settings (Optional[dict]): project settings. Defaults to None. + validate (Optional[bool]): should resulting colorspace be validated + with config file? Defaults to True. 
Returns: str: name of colorspace """ - if not any([config_data, file_rules]): - project_settings = project_settings or get_project_settings( - project_name - ) - config_data = get_imageio_config( - project_name, host_name, project_settings) + project_settings, config_data, file_rules = _get_context_settings( + host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) - # in case host color management is not enabled - if not config_data: - return None + if not config_data: + # in case global or host color management is not enabled + return None - file_rules = get_imageio_file_rules( - project_name, host_name, project_settings) + # use ImageIO file rules + colorspace_name = get_imageio_file_rules_colorspace_from_filepath( + filepath, host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) - # match file rule from path - colorspace_name = None - for _frule_name, file_rule in file_rules.items(): - pattern = file_rule["pattern"] - extension = file_rule["ext"] - ext_match = re.match( - r".*(?=.{})".format(extension), path - ) - file_match = re.search( - pattern, path - ) + # try to get colorspace from OCIO v2 file rules + if ( + not colorspace_name + and compatibility_check_config_version(config_data["path"], major=2) + ): + colorspace_name = get_config_file_rules_colorspace_from_filepath( + config_data["path"], filepath) - if ext_match and file_match: - colorspace_name = file_rule["colorspace"] + # use parse colorspace from filepath as fallback + colorspace_name = colorspace_name or parse_colorspace_from_filepath( + filepath, config_path=config_data["path"] + ) if not colorspace_name: log.info("No imageio file rule matched input path: '{}'".format( - path + filepath )) return None # validate matching colorspace with config - if validate and config_data: + if validate: validate_imageio_colorspace_in_config( config_data["path"], colorspace_name) return colorspace_name -def parse_colorspace_from_filepath( - path, host_name, project_name, - config_data=None, +# TODO: remove this in future - backward compatibility +@deprecated("get_imageio_file_rules_colorspace_from_filepath") +def get_imageio_colorspace_from_filepath(*args, **kwargs): + return get_imageio_file_rules_colorspace_from_filepath(*args, **kwargs) + +# TODO: remove this in future - backward compatibility +@deprecated("get_imageio_file_rules_colorspace_from_filepath") +def get_colorspace_from_filepath(*args, **kwargs): + return get_imageio_file_rules_colorspace_from_filepath(*args, **kwargs) + + +def _get_context_settings( + host_name, project_name, + config_data=None, file_rules=None, project_settings=None +): + project_settings = project_settings or get_project_settings( + project_name + ) + + config_data = config_data or get_imageio_config( + project_name, host_name, project_settings) + + # in case host color management is not enabled + if not config_data: + return (None, None, None) + + file_rules = file_rules or get_imageio_file_rules( + project_name, host_name, project_settings) + + return project_settings, config_data, file_rules + + +def get_imageio_file_rules_colorspace_from_filepath( + filepath, host_name, project_name, + config_data=None, file_rules=None, + project_settings=None +): + """Get colorspace name from filepath + + ImageIO Settings file rules are tested for matching rule. 
+ + Args: + filepath (str): path string, file rule pattern is tested on it + host_name (str): host name + project_name (str): project name + config_data (Optional[dict]): config path and template in dict. + Defaults to None. + file_rules (Optional[dict]): file rule data from settings. + Defaults to None. + project_settings (Optional[dict]): project settings. Defaults to None. + + Returns: + str: name of colorspace + """ + project_settings, config_data, file_rules = _get_context_settings( + host_name, project_name, + config_data=config_data, file_rules=file_rules, + project_settings=project_settings + ) + + if not config_data: + # in case global or host color management is not enabled + return None + + # match file rule from path + colorspace_name = None + for file_rule in file_rules.values(): + pattern = file_rule["pattern"] + extension = file_rule["ext"] + ext_match = re.match( + r".*(?=.{})".format(extension), filepath + ) + file_match = re.search( + pattern, filepath + ) + + if ext_match and file_match: + colorspace_name = file_rule["colorspace"] + + return colorspace_name + + +def get_config_file_rules_colorspace_from_filepath(config_path, filepath): + """Get colorspace from file path wrapper. + + Wrapper function for getting colorspace from file path + with use of OCIO v2 file-rules. + + Args: + config_path (str): path leading to config.ocio file + filepath (str): path leading to a file + + Returns: + Any[str, None]: matching colorspace name + """ + if not compatibility_check(): + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + result_data = _get_wrapped_with_subprocess( + "colorspace", "get_config_file_rules_colorspace_from_filepath", + config_path=config_path, + filepath=filepath + ) + if result_data: + return result_data[0] + + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_config_file_rules_colorspace_from_filepath # noqa: E501 + + result_data = _get_config_file_rules_colorspace_from_filepath( + config_path, filepath) + + if result_data: + return result_data[0] + + +def parse_colorspace_from_filepath( + filepath, colorspaces=None, config_path=None ): """Parse colorspace name from filepath An input path can have colorspace name used as part of name or as folder name. + Example: + >>> config_path = "path/to/config.ocio" + >>> colorspaces = get_ocio_config_colorspaces(config_path) + >>> colorspace = parse_colorspace_from_filepath( + "path/to/file/acescg/file.exr", + colorspaces=colorspaces + ) + >>> print(colorspace) + acescg + Args: - path (str): path string - host_name (str): host name - project_name (str): project name - config_data (dict, optional): config path and template in dict. - Defaults to None. - project_settings (dict, optional): project settings. Defaults to None. 
+ filepath (str): path string + colorspaces (Optional[dict[str]]): list of colorspaces + config_path (Optional[str]): path to config.ocio file Returns: str: name of colorspace """ - if not config_data: - project_settings = project_settings or get_project_settings( - project_name + def _get_colorspace_match_regex(colorspaces): + """Return a regex pattern + + Allows to search a colorspace match in a filename + + Args: + colorspaces (list): List of colorspace names + + Returns: + re.Pattern: regex pattern + """ + pattern = "|".join( + # Allow to match spaces also as underscores because the + # integrator replaces spaces with underscores in filenames + re.escape(colorspace) for colorspace in + # Sort by longest first so the regex matches longer matches + # over smaller matches, e.g. matching 'Output - sRGB' over 'sRGB' + sorted(colorspaces, key=len, reverse=True) ) - config_data = get_imageio_config( - project_name, host_name, project_settings) + return re.compile(pattern) - config_path = config_data["path"] + if not colorspaces and not config_path: + raise ValueError( + "Must provide `config_path` if `colorspaces` is not provided." + ) - # match file rule from path - colorspace_name = None - colorspaces = get_ocio_config_colorspaces(config_path) - for colorspace_key in colorspaces: - # check underscored variant of colorspace name - # since we are reformatting it in integrate.py - if colorspace_key.replace(" ", "_") in path: - colorspace_name = colorspace_key - break - if colorspace_key in path: - colorspace_name = colorspace_key - break + colorspaces = colorspaces or get_ocio_config_colorspaces(config_path) + underscored_colorspaces = { + key.replace(" ", "_"): key for key in colorspaces + if " " in key + } - if not colorspace_name: - log.info("No matching colorspace in config '{}' for path: '{}'".format( - config_path, path - )) - return None + # match colorspace from filepath + regex_pattern = _get_colorspace_match_regex( + list(colorspaces) + list(underscored_colorspaces)) + match = regex_pattern.search(filepath) + colorspace = match.group(0) if match else None - return colorspace_name + if colorspace in underscored_colorspaces: + return underscored_colorspaces[colorspace] + + if colorspace: + return colorspace + + log.info("No matching colorspace in config '{}' for path: '{}'".format( + config_path, filepath + )) + return None def validate_imageio_colorspace_in_config(config_path, colorspace_name): @@ -211,49 +402,101 @@ def validate_imageio_colorspace_in_config(config_path, colorspace_name): return True +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_data_subprocess(config_path, data_type): - """Get data via subprocess + """[Deprecated] Get data via subprocess Wrapper for Python 2 hosts. Args: config_path (str): path leading to config.ocio file """ + return _get_wrapped_with_subprocess( + "config", data_type, in_path=config_path, + ) + + +def _get_wrapped_with_subprocess(command_group, command, **kwargs): + """Get data via subprocess + + Wrapper for Python 2 hosts. 
+ + Args: + command_group (str): command group name + command (str): command name + **kwargs: command arguments + + Returns: + Any[dict, None]: data + """ with _make_temp_json_file() as tmp_json_path: # Prepare subprocess arguments args = [ "run", get_ocio_config_script_path(), - "config", data_type, - "--in_path", config_path, - "--out_path", tmp_json_path - + command_group, command ] + + for key_, value_ in kwargs.items(): + args.extend(("--{}".format(key_), value_)) + + args.append("--out_path") + args.append(tmp_json_path) + log.info("Executing: {}".format(" ".join(args))) - process_kwargs = { - "logger": log - } - - run_openpype_process(*args, **process_kwargs) + run_openpype_process(*args, logger=log) # return all colorspaces - return_json_data = open(tmp_json_path).read() - return json.loads(return_json_data) + with open(tmp_json_path, "r") as f_: + return json.load(f_) +# TODO: this should be part of ocio_wrapper.py def compatibility_check(): - """checking if user has a compatible PyOpenColorIO >= 2. + """Making sure PyOpenColorIO is importable""" + if CachedData.has_compatible_ocio_package is not None: + return CachedData.has_compatible_ocio_package - It's achieved by checking if PyOpenColorIO is importable - and calling any version 2 specific function - """ try: - import PyOpenColorIO + import PyOpenColorIO # noqa: F401 + CachedData.has_compatible_ocio_package = True + except ImportError: + CachedData.has_compatible_ocio_package = False - # ocio versions lower than 2 will raise AttributeError - PyOpenColorIO.GetVersion() - except (ImportError, AttributeError): + # compatible + return CachedData.has_compatible_ocio_package + + +# TODO: this should be part of ocio_wrapper.py +def compatibility_check_config_version(config_path, major=1, minor=None): + """Making sure PyOpenColorIO config version is compatible""" + + if not CachedData.config_version_data.get(config_path): + if compatibility_check(): + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_version_data + + CachedData.config_version_data[config_path] = \ + _get_version_data(config_path) + + else: + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + CachedData.config_version_data[config_path] = \ + _get_wrapped_with_subprocess( + "config", "get_version", config_path=config_path + ) + + # check major version + if CachedData.config_version_data[config_path]["major"] != major: return False + + # check minor version + if minor and CachedData.config_version_data[config_path]["minor"] != minor: + return False + + # compatible return True @@ -269,18 +512,28 @@ def get_ocio_config_colorspaces(config_path): Returns: dict: colorspace and family in couple """ - if not compatibility_check(): - # python environment is not compatible with PyOpenColorIO - # needs to be run in subprocess - return get_colorspace_data_subprocess(config_path) + if not CachedData.ocio_config_colorspaces.get(config_path): + if not compatibility_check(): + # python environment is not compatible with PyOpenColorIO + # needs to be run in subprocess + CachedData.ocio_config_colorspaces[config_path] = \ + _get_wrapped_with_subprocess( + "config", "get_colorspace", in_path=config_path + ) + else: + # TODO: refactor this so it is not imported but part of this file + from openpype.scripts.ocio_wrapper import _get_colorspace_data - from openpype.scripts.ocio_wrapper import _get_colorspace_data + CachedData.ocio_config_colorspaces[config_path] = \ + 
_get_colorspace_data(config_path) - return _get_colorspace_data(config_path) + return CachedData.ocio_config_colorspaces[config_path] +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_colorspace_data_subprocess(config_path): - """Get colorspace data via subprocess + """[Deprecated] Get colorspace data via subprocess Wrapper for Python 2 hosts. @@ -290,7 +543,9 @@ def get_colorspace_data_subprocess(config_path): Returns: dict: colorspace and family in couple """ - return get_data_subprocess(config_path, "get_colorspace") + return _get_wrapped_with_subprocess( + "config", "get_colorspace", in_path=config_path + ) def get_ocio_config_views(config_path): @@ -308,15 +563,20 @@ def get_ocio_config_views(config_path): if not compatibility_check(): # python environment is not compatible with PyOpenColorIO # needs to be run in subprocess - return get_views_data_subprocess(config_path) + return _get_wrapped_with_subprocess( + "config", "get_views", in_path=config_path + ) + # TODO: refactor this so it is not imported but part of this file from openpype.scripts.ocio_wrapper import _get_views_data return _get_views_data(config_path) +# TODO: remove this in future - backward compatibility +@deprecated("_get_wrapped_with_subprocess") def get_views_data_subprocess(config_path): - """Get viewers data via subprocess + """[Deprecated] Get viewers data via subprocess Wrapper for Python 2 hosts. @@ -326,7 +586,9 @@ def get_views_data_subprocess(config_path): Returns: dict: `display/viewer` and viewer data """ - return get_data_subprocess(config_path, "get_views") + return _get_wrapped_with_subprocess( + "config", "get_views", in_path=config_path + ) def get_imageio_config( diff --git a/openpype/plugins/actions/open_file_explorer.py b/openpype/plugins/actions/open_file_explorer.py index e4fbd91143..1568c41fbd 100644 --- a/openpype/plugins/actions/open_file_explorer.py +++ b/openpype/plugins/actions/open_file_explorer.py @@ -83,10 +83,6 @@ class OpenTaskPath(LauncherAction): if os.path.exists(valid_workdir): return valid_workdir - # If task was selected, try to find asset path only to asset - if not task_name: - raise AssertionError("Folder does not exist.") - data.pop("task", None) workdir = anatomy.templates_obj["work"]["folder"].format(data) valid_workdir = self._find_first_filled_path(workdir) @@ -95,7 +91,7 @@ class OpenTaskPath(LauncherAction): valid_workdir = os.path.normpath(valid_workdir) if os.path.exists(valid_workdir): return valid_workdir - raise AssertionError("Folder does not exist.") + raise AssertionError("Folder does not exist yet.") @staticmethod def open_in_explorer(path): diff --git a/openpype/plugins/publish/collect_resources_path.py b/openpype/plugins/publish/collect_resources_path.py index f96dd0ae18..57a612c5ae 100644 --- a/openpype/plugins/publish/collect_resources_path.py +++ b/openpype/plugins/publish/collect_resources_path.py @@ -62,7 +62,8 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): "effect", "staticMesh", "skeletalMesh", - "xgen" + "xgen", + "yeticacheUE" ] def process(self, instance): diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index 7e48155b9e..ce24dad1b5 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -139,7 +139,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "simpleUnrealTexture", "online", "uasset", - "blendScene" + "blendScene", + "yeticacheUE" ] default_template_name = "publish" diff --git 
a/openpype/pype_commands.py b/openpype/pype_commands.py
index 7f1c3b01e2..071ecfffd2 100644
--- a/openpype/pype_commands.py
+++ b/openpype/pype_commands.py
@@ -213,7 +213,8 @@ class PypeCommands:
         pass
 
     def run_tests(self, folder, mark, pyargs,
-                  test_data_folder, persist, app_variant, timeout, setup_only):
+                  test_data_folder, persist, app_variant, timeout, setup_only,
+                  mongo_url):
         """ Runs tests from 'folder'
 
@@ -226,6 +227,10 @@ class PypeCommands:
                 end
             app_variant (str): variant (eg 2020 for AE), empty if use
                 latest installed version
+            timeout (int): explicit timeout for single test
+            setup_only (bool): if only preparation steps should be
+                triggered, no tests (useful for debugging/development)
+            mongo_url (str): url to Openpype Mongo database
         """
         print("run_tests")
         if folder:
@@ -264,6 +269,9 @@ class PypeCommands:
         if setup_only:
             args.extend(["--setup_only", setup_only])
 
+        if mongo_url:
+            args.extend(["--mongo_url", mongo_url])
+
         print("run_tests args: {}".format(args))
         import pytest
         pytest.main(args)
diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py
index 40553d30f2..c362670126 100644
--- a/openpype/scripts/ocio_wrapper.py
+++ b/openpype/scripts/ocio_wrapper.py
@@ -27,7 +27,7 @@ import PyOpenColorIO as ocio
 
 @click.group()
 def main():
-    pass
+    pass  # noqa: WPS100
 
 
 @main.group()
@@ -37,7 +37,17 @@ def config():
     Example of use:
     > pyton.exe ./ocio_wrapper.py config *args
     """
-    pass
+    pass  # noqa: WPS100
+
+
+@main.group()
+def colorspace():
+    """Colorspace related commands group
+
+    Example of use:
+    > python.exe ./ocio_wrapper.py colorspace *args
+    """
+    pass  # noqa: WPS100
 
 
 @config.command(
@@ -70,8 +80,8 @@ def get_colorspace(in_path, out_path):
 
     out_data = _get_colorspace_data(in_path)
 
-    with open(json_path, "w") as f:
-        json.dump(out_data, f)
+    with open(json_path, "w") as f_:
+        json.dump(out_data, f_)
 
     print(f"Colorspace data are saved to '{json_path}'")
 
@@ -97,8 +107,8 @@ def _get_colorspace_data(config_path):
     config = ocio.Config().CreateFromFile(str(config_path))
 
     return {
-        c.getName(): c.getFamily()
-        for c in config.getColorSpaces()
+        c_.getName(): c_.getFamily()
+        for c_ in config.getColorSpaces()
     }
 
 
@@ -132,8 +142,8 @@ def get_views(in_path, out_path):
 
     out_data = _get_views_data(in_path)
 
-    with open(json_path, "w") as f:
-        json.dump(out_data, f)
+    with open(json_path, "w") as f_:
+        json.dump(out_data, f_)
 
     print(f"Viewer data are saved to '{json_path}'")
 
@@ -157,7 +167,7 @@ def _get_views_data(config_path):
 
     config = ocio.Config().CreateFromFile(str(config_path))
 
-    data = {}
+    data_ = {}
     for display in config.getDisplays():
         for view in config.getViews(display):
             colorspace = config.getDisplayViewColorSpaceName(display, view)
@@ -165,13 +175,148 @@ def _get_views_data(config_path):
             if colorspace == "":
                 colorspace = display
 
-            data[f"{display}/{view}"] = {
+            data_[f"{display}/{view}"] = {
                 "display": display,
                 "view": view,
                 "colorspace": colorspace
             }
 
-    return data
+    return data_
+
+
+@config.command(
+    name="get_version",
+    help=(
+        "return major and minor version from config file "
+        "--config_path input arg is required "
+        "--out_path input arg is required"
+    )
+)
+@click.option("--config_path", required=True,
+              help="path where to read ocio config file",
+              type=click.Path(exists=True))
+@click.option("--out_path", required=True,
+              help="path where to write output json file",
+              type=click.Path())
+def get_version(config_path, out_path):
+    """Get version of config.
+
+    Python 2 wrapped console command
+
+    Args:
+        config_path (str): ocio config file path string
+        out_path (str): temp json file path string
+
+    Example of use:
+    > python.exe ./ocio_wrapper.py config get_version \
+        --config_path= --out_path=
+    """
+    json_path = Path(out_path)
+
+    out_data = _get_version_data(config_path)
+
+    with open(json_path, "w") as f_:
+        json.dump(out_data, f_)
+
+    print(f"Config version data are saved to '{json_path}'")
+
+
+def _get_version_data(config_path):
+    """Return major and minor version info.
+
+    Args:
+        config_path (str): path string leading to config.ocio
+
+    Raises:
+        IOError: Input config does not exist.
+
+    Returns:
+        dict: minor and major keys with values
+    """
+    config_path = Path(config_path)
+
+    if not config_path.is_file():
+        raise IOError("Input path should be `config.ocio` file")
+
+    config = ocio.Config().CreateFromFile(str(config_path))
+
+    return {
+        "major": config.getMajorVersion(),
+        "minor": config.getMinorVersion()
+    }
+
+
+@colorspace.command(
+    name="get_config_file_rules_colorspace_from_filepath",
+    help=(
+        "return colorspace from filepath "
+        "--config_path - ocio config file path (input arg is required) "
+        "--filepath - any file path (input arg is required) "
+        "--out_path - temp json file path (input arg is required)"
+    )
+)
+@click.option("--config_path", required=True,
+              help="path where to read ocio config file",
+              type=click.Path(exists=True))
+@click.option("--filepath", required=True,
+              help="path to file to get colorspace from",
+              type=click.Path())
+@click.option("--out_path", required=True,
+              help="path where to write output json file",
+              type=click.Path())
+def get_config_file_rules_colorspace_from_filepath(
+    config_path, filepath, out_path
+):
+    """Get colorspace from file path wrapper.
+
+    Python 2 wrapped console command
+
+    Args:
+        config_path (str): config file path string
+        filepath (str): path string leading to file
+        out_path (str): temp json file path string
+
+    Example of use:
+    > python.exe ./ocio_wrapper.py \
+        colorspace get_config_file_rules_colorspace_from_filepath \
+        --config_path= --filepath= --out_path=
+    """
+    json_path = Path(out_path)
+
+    colorspace = _get_config_file_rules_colorspace_from_filepath(
+        config_path, filepath)
+
+    with open(json_path, "w") as f_:
+        json.dump(colorspace, f_)
+
+    print(f"Colorspace name is saved to '{json_path}'")
+
+
+def _get_config_file_rules_colorspace_from_filepath(config_path, filepath):
+    """Return colorspace data found in OCIO v2 file rules.
+
+    Args:
+        config_path (str): path string leading to config.ocio
+        filepath (str): path of the file the rules are matched against
+
+    Raises:
+        IOError: Input config does not exist.
+ + Returns: + dict: aggregated available colorspaces + """ + config_path = Path(config_path) + + if not config_path.is_file(): + raise IOError( + f"Input path `{config_path}` should be `config.ocio` file") + + config = ocio.Config().CreateFromFile(str(config_path)) + + # TODO: use `parseColorSpaceFromString` instead if ocio v1 + colorspace = config.getColorSpaceFromFilepath(str(filepath)) + + return colorspace def _get_display_view_colorspace_name(config_path, display, view): diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py index 9a4f0607e0..d54d71e851 100644 --- a/openpype/settings/ayon_settings.py +++ b/openpype/settings/ayon_settings.py @@ -748,7 +748,21 @@ def _convert_nuke_project_settings(ayon_settings, output): ) new_review_data_outputs = {} - for item in ayon_publish["ExtractReviewDataMov"]["outputs"]: + outputs_settings = [] + # Check deprecated ExtractReviewDataMov + # settings for backwards compatibility + deprecrated_review_settings = ayon_publish["ExtractReviewDataMov"] + current_review_settings = ( + ayon_publish.get("ExtractReviewIntermediates") + ) + if deprecrated_review_settings["enabled"]: + outputs_settings = deprecrated_review_settings["outputs"] + elif current_review_settings is None: + pass + elif current_review_settings["enabled"]: + outputs_settings = current_review_settings["outputs"] + + for item in outputs_settings: item_filter = item["filter"] if "product_names" in item_filter: item_filter["subsets"] = item_filter.pop("product_names") @@ -767,7 +781,11 @@ def _convert_nuke_project_settings(ayon_settings, output): name = item.pop("name") new_review_data_outputs[name] = item - ayon_publish["ExtractReviewDataMov"]["outputs"] = new_review_data_outputs + + if deprecrated_review_settings["enabled"]: + deprecrated_review_settings["outputs"] = new_review_data_outputs + elif current_review_settings["enabled"]: + current_review_settings["outputs"] = new_review_data_outputs collect_instance_data = ayon_publish["CollectInstanceData"] if "sync_workfile_version_on_product_types" in collect_instance_data: @@ -1102,7 +1120,7 @@ def _convert_global_project_settings(ayon_settings, output, default_settings): "studio_name", "studio_code", ): - ayon_core.pop(key) + ayon_core.pop(key, None) # Publish conversion ayon_publish = ayon_core["publish"] @@ -1140,6 +1158,27 @@ def _convert_global_project_settings(ayon_settings, output, default_settings): profile["outputs"] = new_outputs + # ExtractOIIOTranscode plugin + extract_oiio_transcode = ayon_publish["ExtractOIIOTranscode"] + extract_oiio_transcode_profiles = extract_oiio_transcode["profiles"] + for profile in extract_oiio_transcode_profiles: + new_outputs = {} + name_counter = {} + for output in profile["outputs"]: + if "name" in output: + name = output.pop("name") + else: + # Backwards compatibility for setting without 'name' in model + name = output["extension"] + if name in new_outputs: + name_counter[name] += 1 + name = "{}_{}".format(name, name_counter[name]) + else: + name_counter[name] = 0 + + new_outputs[name] = output + profile["outputs"] = new_outputs + # Extract Burnin plugin extract_burnin = ayon_publish["ExtractBurnin"] extract_burnin_options = extract_burnin["options"] diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index df865adeba..f3eb31174f 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -17,6 +17,14 @@ "rules": 
{} } }, + "RenderSettings": { + "default_render_image_folder": "renders/blender", + "aov_separator": "underscore", + "image_format": "exr", + "multilayer_exr": true, + "aov_list": [], + "custom_passes": [] + }, "workfile_builder": { "create_first_version": false, "custom_templates": [] @@ -27,6 +35,22 @@ "optional": true, "active": true }, + "ValidateFileSaved": { + "enabled": true, + "optional": false, + "active": true, + "exclude_families": [] + }, + "ValidateRenderCameraIsSet": { + "enabled": true, + "optional": false, + "active": true + }, + "ValidateDeadlinePublish": { + "enabled": true, + "optional": false, + "active": true + }, "ValidateMeshHasUvs": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index 1b8c8397d7..2c5e0dc65d 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -52,7 +52,8 @@ "priority": 50, "chunk_size": 10, "concurrent_tasks": 1, - "group": "" + "group": "", + "plugin": "Fusion" }, "NukeSubmitDeadline": { "enabled": true, @@ -99,6 +100,15 @@ "deadline_chunk_size": 10, "deadline_job_delay": "00:00:00:00" }, + "BlenderSubmitDeadline": { + "enabled": true, + "optional": false, + "active": true, + "use_published": true, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, "ProcessSubmittedJobOnFarm": { "enabled": true, "deadline_department": "", @@ -112,6 +122,9 @@ "maya": [ ".*([Bb]eauty).*" ], + "blender": [ + ".*([Bb]eauty).*" + ], "aftereffects": [ ".*" ], diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 38f14ec022..300d63985b 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -707,6 +707,9 @@ "CollectMayaRender": { "sync_workfile_version": false }, + "CollectFbxAnimation": { + "enabled": true + }, "CollectFbxCamera": { "enabled": false }, @@ -1120,6 +1123,11 @@ "optional": true, "active": true }, + "ValidateAnimatedReferenceRig": { + "enabled": true, + "optional": false, + "active": true + }, "ValidateAnimationContent": { "enabled": true, "optional": false, @@ -1140,6 +1148,16 @@ "optional": false, "active": true }, + "ValidateSkeletonRigContents": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateSkeletonRigControllers": { + "enabled": false, + "optional": true, + "active": true + }, "ValidateSkinclusterDeformerSet": { "enabled": true, "optional": false, @@ -1150,6 +1168,21 @@ "optional": false, "allow_history_only": false }, + "ValidateSkeletonRigOutSetNodeIds": { + "enabled": false, + "optional": false, + "allow_history_only": false + }, + "ValidateSkeletonRigOutputIds": { + "enabled": false, + "optional": true, + "active": true + }, + "ValidateSkeletonTopGroupHierarchy": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateCameraAttributes": { "enabled": false, "optional": true, @@ -1338,6 +1371,12 @@ "active": true, "bake_attributes": [] }, + "ExtractCameraMayaScene": { + "enabled": true, + "optional": true, + "active": true, + "keep_image_planes": false + }, "ExtractGLB": { "enabled": true, "active": true, diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 7961e77113..ad9f46c8ab 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ 
b/openpype/settings/defaults/project_settings/nuke.json @@ -501,6 +501,60 @@ } } }, + "ExtractReviewIntermediates": { + "enabled": true, + "viewer_lut_raw": false, + "outputs": { + "baking": { + "filter": { + "task_types": [], + "families": [], + "subsets": [] + }, + "read_raw": false, + "viewer_process_override": "", + "bake_viewer_process": true, + "bake_viewer_input_process": true, + "reformat_nodes_config": { + "enabled": false, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "value": "to format" + }, + { + "type": "text", + "name": "format", + "value": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "value": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "value": true + }, + { + "type": "bool", + "name": "pbb", + "value": false + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + } + }, "ExtractSlateFrame": { "viewer_lut_raw": false, "key_value_mapping": { diff --git a/openpype/settings/defaults/project_settings/traypublisher.json b/openpype/settings/defaults/project_settings/traypublisher.json index 7f7b7d1452..e13de11414 100644 --- a/openpype/settings/defaults/project_settings/traypublisher.json +++ b/openpype/settings/defaults/project_settings/traypublisher.json @@ -346,10 +346,10 @@ } }, "publish": { - "CollectFrameDataFromAssetEntity": { + "CollectSequenceFrameData": { "enabled": true, "optional": true, - "active": true + "active": false }, "ValidateFrameRange": { "enabled": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index aeb70dfd8c..535d9434a3 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -54,6 +54,110 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "RenderSettings", + "label": "Render Settings", + "children": [ + { + "type": "text", + "key": "default_render_image_folder", + "label": "Default render image folder" + }, + { + "key": "aov_separator", + "label": "AOV Separator Character", + "type": "enum", + "multiselection": false, + "defaults": "underscore", + "enum_items": [ + {"dash": "- (dash)"}, + {"underscore": "_ (underscore)"}, + {"dot": ". (dot)"} + ] + }, + { + "key": "image_format", + "label": "Output Image Format", + "type": "enum", + "multiselection": false, + "defaults": "exr", + "enum_items": [ + {"exr": "OpenEXR"}, + {"bmp": "BMP"}, + {"rgb": "Iris"}, + {"png": "PNG"}, + {"jpg": "JPEG"}, + {"jp2": "JPEG 2000"}, + {"tga": "Targa"}, + {"tif": "TIFF"} + ] + }, + { + "key": "multilayer_exr", + "type": "boolean", + "label": "Multilayer (EXR)" + }, + { + "type": "label", + "label": "Note: Multilayer EXR is only used when output format type set to EXR." 
+ }, + { + "key": "aov_list", + "label": "AOVs to create", + "type": "enum", + "multiselection": true, + "defaults": "empty", + "enum_items": [ + {"empty": "< empty >"}, + {"combined": "Combined"}, + {"z": "Z"}, + {"mist": "Mist"}, + {"normal": "Normal"}, + {"diffuse_light": "Diffuse Light"}, + {"diffuse_color": "Diffuse Color"}, + {"specular_light": "Specular Light"}, + {"specular_color": "Specular Color"}, + {"volume_light": "Volume Light"}, + {"emission": "Emission"}, + {"environment": "Environment"}, + {"shadow": "Shadow"}, + {"ao": "Ambient Occlusion"}, + {"denoising": "Denoising"}, + {"volume_direct": "Direct Volumetric Scattering"}, + {"volume_indirect": "Indirect Volumetric Scattering"} + ] + }, + { + "type": "label", + "label": "Add custom AOVs. They are added to the view layer and in the Compositing Nodetree,\nbut they need to be added manually to the Shader Nodetree." + }, + { + "type": "dict-modifiable", + "store_as_list": true, + "key": "custom_passes", + "label": "Custom Passes", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "key": "type", + "label": "Type", + "type": "enum", + "multiselection": false, + "default": "COLOR", + "enum_items": [ + {"COLOR": "Color"}, + {"VALUE": "Value"} + ] + } + ] + } + } + ] + }, { "type": "schema_template", "name": "template_workfile_options", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json index 6d59b5a92b..64db852c89 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json @@ -289,6 +289,15 @@ "type": "text", "key": "group", "label": "Group Name" + }, + { + "type": "enum", + "key": "plugin", + "label": "Deadline Plugin", + "enum_items": [ + {"Fusion": "Fusion"}, + {"FusionCmd": "FusionCmd"} + ] } ] }, @@ -531,6 +540,50 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "BlenderSubmitDeadline", + "label": "Blender Submit to Deadline", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "boolean", + "key": "use_published", + "label": "Use Published scene" + }, + { + "type": "number", + "key": "priority", + "label": "Priority" + }, + { + "type": "number", + "key": "chunk_size", + "label": "Frame per Task" + }, + { + "type": "text", + "key": "group", + "label": "Group Name" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json index 184fc657be..93e6325b5a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json @@ -350,8 +350,8 @@ "name": "template_validate_plugin", "template_data": [ { - "key": "CollectFrameDataFromAssetEntity", - "label": "Collect frame range from asset entity" + "key": "CollectSequenceFrameData", + "label": "Collect Original Sequence Frame Data" }, { "key": "ValidateFrameRange", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json 
b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 2f0bf0a831..7f1a8a915b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -18,6 +18,39 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateFileSaved", + "label": "Validate File Saved", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "splitter" + }, + { + "key": "exclude_families", + "label": "Exclude Families", + "type": "list", + "object_type": "text" + } + ] + }, { "type": "collapsible-wrap", "label": "Model", @@ -46,6 +79,66 @@ } ] }, + { + "type": "collapsible-wrap", + "label": "Render", + "children": [ + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "type": "dict", + "collapsible": true, + "key": "ValidateRenderCameraIsSet", + "label": "Validate Render Camera Is Set", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateDeadlinePublish", + "label": "Validate Render Output for Deadline", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + } + ] + } + ] + } + ] + }, { "type": "splitter" }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index b115ee3faa..8a0815c185 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -21,6 +21,20 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "CollectFbxAnimation", + "label": "Collect Fbx Animation", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + } + ] + }, { "type": "dict", "collapsible": true, @@ -793,6 +807,10 @@ "key": "ValidateRigControllers", "label": "Validate Rig Controllers" }, + { + "key": "ValidateAnimatedReferenceRig", + "label": "Validate Animated Reference Rig" + }, { "key": "ValidateAnimationContent", "label": "Validate Animation Content" @@ -809,9 +827,51 @@ "key": "ValidateSkeletalMeshHierarchy", "label": "Validate Skeletal Mesh Top Node" }, - { + { + "key": "ValidateSkeletonRigContents", + "label": "Validate Skeleton Rig Contents" + }, + { + "key": "ValidateSkeletonRigControllers", + "label": "Validate Skeleton Rig Controllers" + }, + { "key": "ValidateSkinclusterDeformerSet", "label": "Validate Skincluster Deformer Relationships" + }, + { + "key": "ValidateSkeletonRigOutputIds", + "label": "Validate Skeleton Rig Output Ids" + }, + { + "key": "ValidateSkeletonTopGroupHierarchy", + "label": "Validate Skeleton Top Group Hierarchy" + } + ] + }, + + { + "type": "dict", + "collapsible": true, + 
"checkbox_key": "enabled", + "key": "ValidateRigOutSetNodeIds", + "label": "Validate Rig Out Set Node Ids", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "allow_history_only", + "label": "Allow history only" } ] }, @@ -819,8 +879,8 @@ "type": "dict", "collapsible": true, "checkbox_key": "enabled", - "key": "ValidateRigOutSetNodeIds", - "label": "Validate Rig Out Set Node Ids", + "key": "ValidateSkeletonRigOutSetNodeIds", + "label": "Validate Skeleton Rig Out Set Node Ids", "is_group": true, "children": [ { @@ -978,6 +1038,35 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractCameraMayaScene", + "label": "Extract camera to Maya scene", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "boolean", + "key": "keep_image_planes", + "label": "Export Image planes" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index f006392bef..fa08e19c63 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -371,6 +371,151 @@ ] }, + { + "type": "label", + "label": "^ Settings and for ExtractReviewDataMov is deprecated and will be soon removed.
Please use ExtractReviewIntermediates instead." + }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ExtractReviewIntermediates", + "label": "ExtractReviewIntermediates", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "viewer_lut_raw", + "label": "Viewer LUT raw" + }, + { + "key": "outputs", + "label": "Output Definitions", + "type": "dict-modifiable", + "highlight_content": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "dict", + "collapsible": false, + "key": "filter", + "label": "Filtering", + "children": [ + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subsets", + "type": "list", + "object_type": "text" + } + ] + }, + { + "type": "separator" + }, + { + "type": "boolean", + "key": "read_raw", + "label": "Read colorspace RAW", + "default": false + }, + { + "type": "text", + "key": "viewer_process_override", + "label": "Viewer Process colorspace profile override" + }, + { + "type": "boolean", + "key": "bake_viewer_process", + "label": "Bake Viewer Process" + }, + { + "type": "boolean", + "key": "bake_viewer_input_process", + "label": "Bake Viewer Input Process (LUTs)" + }, + { + "type": "separator" + }, + { + "key": "reformat_nodes_config", + "type": "dict", + "label": "Reformat Nodes", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "Reposition knobs supported only.
You can add multiple reformat nodes
and set their knobs. The nodes are applied
in the listed order: the first reformat node
is applied first and the last reformat
node will be applied last." + }, + { + "key": "reposition_nodes", + "type": "list", + "label": "Reposition nodes", + "object_type": { + "type": "dict", + "children": [ + { + "key": "node_class", + "label": "Node class", + "type": "text" + }, + { + "type": "schema_template", + "name": "template_nuke_knob_inputs", + "template_data": [ + { + "label": "Node knobs", + "key": "knobs" + } + ] + } + ] + } + } + ] + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "extension", + "label": "Write node file type" + }, + { + "key": "add_custom_tags", + "label": "Add custom tags", + "type": "list", + "object_type": "text" + } + ] + } + } + + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/tools/ayon_launcher/abstract.py b/openpype/tools/ayon_launcher/abstract.py new file mode 100644 index 0000000000..f2ef681c62 --- /dev/null +++ b/openpype/tools/ayon_launcher/abstract.py @@ -0,0 +1,297 @@ +from abc import ABCMeta, abstractmethod + +import six + + +@six.add_metaclass(ABCMeta) +class AbstractLauncherCommon(object): + @abstractmethod + def register_event_callback(self, topic, callback): + """Register event callback. + + Listen for events with given topic. + + Args: + topic (str): Name of topic. + callback (Callable): Callback that will be called when event + is triggered. + """ + + pass + + +class AbstractLauncherBackend(AbstractLauncherCommon): + @abstractmethod + def emit_event(self, topic, data=None, source=None): + """Emit event. + + Args: + topic (str): Event topic used for callbacks filtering. + data (Optional[dict[str, Any]]): Event data. + source (Optional[str]): Event source. + """ + + pass + + @abstractmethod + def get_project_settings(self, project_name): + """Project settings for current project. + + Args: + project_name (Union[str, None]): Project name. + + Returns: + dict[str, Any]: Project settings. + """ + + pass + + @abstractmethod + def get_project_entity(self, project_name): + """Get project entity by name. + + Args: + project_name (str): Project name. + + Returns: + dict[str, Any]: Project entity data. + """ + + pass + + @abstractmethod + def get_folder_entity(self, project_name, folder_id): + """Get folder entity by id. + + Args: + project_name (str): Project name. + folder_id (str): Folder id. + + Returns: + dict[str, Any]: Folder entity data. + """ + + pass + + @abstractmethod + def get_task_entity(self, project_name, task_id): + """Get task entity by id. + + Args: + project_name (str): Project name. + task_id (str): Task id. + + Returns: + dict[str, Any]: Task entity data. + """ + + pass + + +class AbstractLauncherFrontEnd(AbstractLauncherCommon): + # Entity items for UI + @abstractmethod + def get_project_items(self, sender=None): + """Project items for all projects. + + This function may trigger events 'projects.refresh.started' and + 'projects.refresh.finished' which will contain 'sender' value in data. + That may help to avoid re-refresh of project items in UI elements. + + Args: + sender (str): Who requested folder items. + + Returns: + list[ProjectItem]: Minimum possible information needed + for visualisation of folder hierarchy. + """ + + pass + + @abstractmethod + def get_folder_items(self, project_name, sender=None): + """Folder items to visualize project hierarchy. + + This function may trigger events 'folders.refresh.started' and + 'folders.refresh.finished' which will contain 'sender' value in data. + That may help to avoid re-refresh of folder items in UI elements. + + Args: + project_name (str): Project name. 
+ sender (str): Who requested folder items. + + Returns: + list[FolderItem]: Minimum possible information needed + for visualisation of folder hierarchy. + """ + + pass + + @abstractmethod + def get_task_items(self, project_name, folder_id, sender=None): + """Task items. + + This function may trigger events 'tasks.refresh.started' and + 'tasks.refresh.finished' which will contain 'sender' value in data. + That may help to avoid re-refresh of task items in UI elements. + + Args: + project_name (str): Project name. + folder_id (str): Folder ID for which are tasks requested. + sender (str): Who requested folder items. + + Returns: + list[TaskItem]: Minimum possible information needed + for visualisation of tasks. + """ + + pass + + @abstractmethod + def get_selected_project_name(self): + """Selected project name. + + Returns: + Union[str, None]: Selected project name. + """ + + pass + + @abstractmethod + def get_selected_folder_id(self): + """Selected folder id. + + Returns: + Union[str, None]: Selected folder id. + """ + + pass + + @abstractmethod + def get_selected_task_id(self): + """Selected task id. + + Returns: + Union[str, None]: Selected task id. + """ + + pass + + @abstractmethod + def get_selected_task_name(self): + """Selected task name. + + Returns: + Union[str, None]: Selected task name. + """ + + pass + + @abstractmethod + def get_selected_context(self): + """Get whole selected context. + + Example: + { + "project_name": self.get_selected_project_name(), + "folder_id": self.get_selected_folder_id(), + "task_id": self.get_selected_task_id(), + "task_name": self.get_selected_task_name(), + } + + Returns: + dict[str, Union[str, None]]: Selected context. + """ + + pass + + @abstractmethod + def set_selected_project(self, project_name): + """Change selected folder. + + Args: + project_name (Union[str, None]): Project nameor None if no project + is selected. + """ + + pass + + @abstractmethod + def set_selected_folder(self, folder_id): + """Change selected folder. + + Args: + folder_id (Union[str, None]): Folder id or None if no folder + is selected. + """ + + pass + + @abstractmethod + def set_selected_task(self, task_id, task_name): + """Change selected task. + + Args: + task_id (Union[str, None]): Task id or None if no task + is selected. + task_name (Union[str, None]): Task name or None if no task + is selected. + """ + + pass + + # Actions + @abstractmethod + def get_action_items(self, project_name, folder_id, task_id): + """Get action items for given context. + + Args: + project_name (Union[str, None]): Project name. + folder_id (Union[str, None]): Folder id. + task_id (Union[str, None]): Task id. + + Returns: + list[ActionItem]: List of action items that should be shown + for given context. + """ + + pass + + @abstractmethod + def trigger_action(self, project_name, folder_id, task_id, action_id): + """Trigger action on given context. + + Args: + project_name (Union[str, None]): Project name. + folder_id (Union[str, None]): Folder id. + task_id (Union[str, None]): Task id. + action_id (str): Action identifier. + """ + + pass + + @abstractmethod + def set_application_force_not_open_workfile( + self, project_name, folder_id, task_id, action_ids, enabled + ): + """This is application action related to force not open last workfile. + + Args: + project_name (Union[str, None]): Project name. + folder_id (Union[str, None]): Folder id. + task_id (Union[str, None]): Task id. + action_id (Iterable[str]): Action identifiers. + enabled (bool): New value of force not open workfile. 
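
A rough usage sketch of this call (the context ids and the action identifier are illustrative placeholders, not values from this patch), using the concrete controller added in this PR:

    from openpype.tools.ayon_launcher.control import BaseLauncherController

    controller = BaseLauncherController()
    # Persist "skip opening the last workfile" for one application action
    # in the given project/folder/task context.
    controller.set_application_force_not_open_workfile(
        project_name="demo_project",
        folder_id="demo_folder_id",
        task_id="demo_task_id",
        action_ids=["application.maya/2024"],
        enabled=True,
    )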
+ """ + + pass + + @abstractmethod + def refresh(self): + """Refresh everything, models, ui etc. + + Triggers 'controller.refresh.started' event at the beginning and + 'controller.refresh.finished' at the end. + """ + + pass diff --git a/openpype/tools/ayon_launcher/control.py b/openpype/tools/ayon_launcher/control.py new file mode 100644 index 0000000000..a6e528b104 --- /dev/null +++ b/openpype/tools/ayon_launcher/control.py @@ -0,0 +1,149 @@ +from openpype.lib import Logger +from openpype.lib.events import QueuedEventSystem +from openpype.settings import get_project_settings +from openpype.tools.ayon_utils.models import ProjectsModel, HierarchyModel + +from .abstract import AbstractLauncherFrontEnd, AbstractLauncherBackend +from .models import LauncherSelectionModel, ActionsModel + + +class BaseLauncherController( + AbstractLauncherFrontEnd, AbstractLauncherBackend +): + def __init__(self): + self._project_settings = {} + self._event_system = None + self._log = None + + self._selection_model = LauncherSelectionModel(self) + self._projects_model = ProjectsModel(self) + self._hierarchy_model = HierarchyModel(self) + self._actions_model = ActionsModel(self) + + @property + def log(self): + if self._log is None: + self._log = Logger.get_logger(self.__class__.__name__) + return self._log + + @property + def event_system(self): + """Inner event system for workfiles tool controller. + + Is used for communication with UI. Event system is created on demand. + + Returns: + QueuedEventSystem: Event system which can trigger callbacks + for topics. + """ + + if self._event_system is None: + self._event_system = QueuedEventSystem() + return self._event_system + + # --------------------------------- + # Implementation of abstract methods + # --------------------------------- + # Events system + def emit_event(self, topic, data=None, source=None): + """Use implemented event system to trigger event.""" + + if data is None: + data = {} + self.event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self.event_system.add_callback(topic, callback) + + # Entity items for UI + def get_project_items(self, sender=None): + return self._projects_model.get_project_items(sender) + + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_task_items(self, project_name, folder_id, sender=None): + return self._hierarchy_model.get_task_items( + project_name, folder_id, sender) + + # Project settings for applications actions + def get_project_settings(self, project_name): + if project_name in self._project_settings: + return self._project_settings[project_name] + settings = get_project_settings(project_name) + self._project_settings[project_name] = settings + return settings + + # Entity for backend + def get_project_entity(self, project_name): + return self._projects_model.get_project_entity(project_name) + + def get_folder_entity(self, project_name, folder_id): + return self._hierarchy_model.get_folder_entity( + project_name, folder_id) + + def get_task_entity(self, project_name, task_id): + return self._hierarchy_model.get_task_entity(project_name, task_id) + + # Selection methods + def get_selected_project_name(self): + return self._selection_model.get_selected_project_name() + + def set_selected_project(self, project_name): + self._selection_model.set_selected_project(project_name) + + def get_selected_folder_id(self): + return self._selection_model.get_selected_folder_id() + + def 
set_selected_folder(self, folder_id): + self._selection_model.set_selected_folder(folder_id) + + def get_selected_task_id(self): + return self._selection_model.get_selected_task_id() + + def get_selected_task_name(self): + return self._selection_model.get_selected_task_name() + + def set_selected_task(self, task_id, task_name): + self._selection_model.set_selected_task(task_id, task_name) + + def get_selected_context(self): + return { + "project_name": self.get_selected_project_name(), + "folder_id": self.get_selected_folder_id(), + "task_id": self.get_selected_task_id(), + "task_name": self.get_selected_task_name(), + } + + # Actions + def get_action_items(self, project_name, folder_id, task_id): + return self._actions_model.get_action_items( + project_name, folder_id, task_id) + + def set_application_force_not_open_workfile( + self, project_name, folder_id, task_id, action_ids, enabled + ): + self._actions_model.set_application_force_not_open_workfile( + project_name, folder_id, task_id, action_ids, enabled + ) + + def trigger_action(self, project_name, folder_id, task_id, identifier): + self._actions_model.trigger_action( + project_name, folder_id, task_id, identifier) + + # General methods + def refresh(self): + self._emit_event("controller.refresh.started") + + self._project_settings = {} + + self._projects_model.reset() + self._hierarchy_model.reset() + + self._actions_model.refresh() + self._projects_model.refresh() + + self._emit_event("controller.refresh.finished") + + def _emit_event(self, topic, data=None): + self.emit_event(topic, data, "controller") diff --git a/openpype/tools/ayon_launcher/models/__init__.py b/openpype/tools/ayon_launcher/models/__init__.py new file mode 100644 index 0000000000..1bc60c85f0 --- /dev/null +++ b/openpype/tools/ayon_launcher/models/__init__.py @@ -0,0 +1,8 @@ +from .actions import ActionsModel +from .selection import LauncherSelectionModel + + +__all__ = ( + "ActionsModel", + "LauncherSelectionModel", +) diff --git a/openpype/tools/ayon_launcher/models/actions.py b/openpype/tools/ayon_launcher/models/actions.py new file mode 100644 index 0000000000..93ec115734 --- /dev/null +++ b/openpype/tools/ayon_launcher/models/actions.py @@ -0,0 +1,509 @@ +import os + +from openpype import resources +from openpype.lib import Logger, OpenPypeSettingsRegistry +from openpype.pipeline.actions import ( + discover_launcher_actions, + LauncherAction, +) + + +# class Action: +# def __init__(self, label, icon=None, identifier=None): +# self._label = label +# self._icon = icon +# self._callbacks = [] +# self._identifier = identifier or uuid.uuid4().hex +# self._checked = True +# self._checkable = False +# +# def set_checked(self, checked): +# self._checked = checked +# +# def set_checkable(self, checkable): +# self._checkable = checkable +# +# def set_label(self, label): +# self._label = label +# +# def add_callback(self, callback): +# self._callbacks = callback +# +# +# class Menu: +# def __init__(self, label, icon=None): +# self.label = label +# self.icon = icon +# self._actions = [] +# +# def add_action(self, action): +# self._actions.append(action) + + +class ApplicationAction(LauncherAction): + """Action to launch an application. + + Application action based on 'ApplicationManager' system. + + Handling of applications in launcher is not ideal and should be completely + redone from scratch. This is just a temporary solution to keep backwards + compatibility with OpenPype launcher. + + Todos: + Move handling of errors to frontend. 
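
For orientation, a minimal sketch of the session mapping that `is_compatible()` and `process()` expect; the values are placeholders, only the three keys come from `required_session_keys` below:

    # Placeholder context values for illustration only.
    session = {
        "AVALON_PROJECT": "demo_project",
        "AVALON_ASSET": "sh010",
        "AVALON_TASK": "animation",
    }
    # is_compatible() additionally requires the project to list the
    # application in its "applications" attribute and, when the
    # "only_available" setting is enabled, that an executable can be found.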
+ """ + + # Application object + application = None + # Action attributes + name = None + label = None + label_variant = None + group = None + icon = None + color = None + order = 0 + data = {} + project_settings = {} + project_entities = {} + + _log = None + required_session_keys = ( + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_TASK" + ) + + @property + def log(self): + if self._log is None: + self._log = Logger.get_logger(self.__class__.__name__) + return self._log + + def is_compatible(self, session): + for key in self.required_session_keys: + if not session.get(key): + return False + + project_name = session["AVALON_PROJECT"] + project_entity = self.project_entities[project_name] + apps = project_entity["attrib"].get("applications") + if not apps or self.application.full_name not in apps: + return False + + project_settings = self.project_settings[project_name] + only_available = project_settings["applications"]["only_available"] + if only_available and not self.application.find_executable(): + return False + return True + + def _show_message_box(self, title, message, details=None): + from qtpy import QtWidgets, QtGui + from openpype import style + + dialog = QtWidgets.QMessageBox() + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + dialog.setWindowIcon(icon) + dialog.setStyleSheet(style.load_stylesheet()) + dialog.setWindowTitle(title) + dialog.setText(message) + if details: + dialog.setDetailedText(details) + dialog.exec_() + + def process(self, session, **kwargs): + """Process the full Application action""" + + from openpype.lib import ( + ApplictionExecutableNotFound, + ApplicationLaunchFailed, + ) + + project_name = session["AVALON_PROJECT"] + asset_name = session["AVALON_ASSET"] + task_name = session["AVALON_TASK"] + try: + self.application.launch( + project_name=project_name, + asset_name=asset_name, + task_name=task_name, + **self.data + ) + + except ApplictionExecutableNotFound as exc: + details = exc.details + msg = exc.msg + log_msg = str(msg) + if details: + log_msg += "\n" + details + self.log.warning(log_msg) + self._show_message_box( + "Application executable not found", msg, details + ) + + except ApplicationLaunchFailed as exc: + msg = str(exc) + self.log.warning(msg, exc_info=True) + self._show_message_box("Application launch failed", msg) + + +class ActionItem: + """Item representing single action to trigger. + + Todos: + Get rid of application specific logic. + + Args: + identifier (str): Unique identifier of action item. + label (str): Action label. + variant_label (Union[str, None]): Variant label, full label is + concatenated with space. Actions are grouped under single + action if it has same 'label' and have set 'variant_label'. + icon (dict[str, str]): Icon definition. + order (int): Action ordering. + is_application (bool): Is action application action. + force_not_open_workfile (bool): Force not open workfile. Application + related. + full_label (Optional[str]): Full label, if not set it is generated + from 'label' and 'variant_label'. 
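
A minimal construction sketch (all values invented for illustration) showing how `full_label` is composed and how items round-trip through plain data:

    item = ActionItem(
        identifier="application.maya/2024",
        label="Maya",
        variant_label="2024",
        icon={"type": "awesome-font", "name": "fa.cube", "color": "white"},
        order=0,
        is_application=True,
        force_not_open_workfile=False,
    )
    item.full_label                          # "Maya 2024"
    copy = ActionItem.from_data(item.to_data())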
+ """ + + def __init__( + self, + identifier, + label, + variant_label, + icon, + order, + is_application, + force_not_open_workfile, + full_label=None + ): + self.identifier = identifier + self.label = label + self.variant_label = variant_label + self.icon = icon + self.order = order + self.is_application = is_application + self.force_not_open_workfile = force_not_open_workfile + self._full_label = full_label + + def copy(self): + return self.from_data(self.to_data()) + + @property + def full_label(self): + if self._full_label is None: + if self.variant_label: + self._full_label = " ".join([self.label, self.variant_label]) + else: + self._full_label = self.label + return self._full_label + + def to_data(self): + return { + "identifier": self.identifier, + "label": self.label, + "variant_label": self.variant_label, + "icon": self.icon, + "order": self.order, + "is_application": self.is_application, + "force_not_open_workfile": self.force_not_open_workfile, + "full_label": self._full_label, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +def get_action_icon(action): + """Get action icon info. + + Args: + action (LacunherAction): Action instance. + + Returns: + dict[str, str]: Icon info. + """ + + icon = action.icon + if not icon: + return { + "type": "awesome-font", + "name": "fa.cube", + "color": "white" + } + + if isinstance(icon, dict): + return icon + + icon_path = resources.get_resource(icon) + if not os.path.exists(icon_path): + try: + icon_path = icon.format(resources.RESOURCES_DIR) + except Exception: + pass + + if os.path.exists(icon_path): + return { + "type": "path", + "path": icon_path, + } + + return { + "type": "awesome-font", + "name": icon, + "color": action.color or "white" + } + + +class ActionsModel: + """Actions model. + + Args: + controller (AbstractLauncherBackend): Controller instance. + """ + + _not_open_workfile_reg_key = "force_not_open_workfile" + + def __init__(self, controller): + self._controller = controller + + self._log = None + + self._discovered_actions = None + self._actions = None + self._action_items = {} + + self._launcher_tool_reg = OpenPypeSettingsRegistry("launcher_tool") + + @property + def log(self): + if self._log is None: + self._log = Logger.get_logger(self.__class__.__name__) + return self._log + + def refresh(self): + self._discovered_actions = None + self._actions = None + self._action_items = {} + + self._controller.emit_event("actions.refresh.started") + self._get_action_objects() + self._controller.emit_event("actions.refresh.finished") + + def get_action_items(self, project_name, folder_id, task_id): + """Get actions for project. + + Args: + project_name (Union[str, None]): Project name. + folder_id (Union[str, None]): Folder id. + task_id (Union[str, None]): Task id. + + Returns: + list[ActionItem]: List of actions. 
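
A hedged usage sketch (the ids are placeholders and `controller` stands for any object implementing AbstractLauncherBackend; the model is normally owned by the controller rather than built directly):

    model = ActionsModel(controller)
    model.refresh()
    for item in model.get_action_items("demo_project", "folder-id", "task-id"):
        print(item.order, item.full_label, item.is_application)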
+ """ + + not_open_workfile_actions = self._get_no_last_workfile_for_context( + project_name, folder_id, task_id) + session = self._prepare_session(project_name, folder_id, task_id) + output = [] + action_items = self._get_action_items(project_name) + for identifier, action in self._get_action_objects().items(): + if not action.is_compatible(session): + continue + + action_item = action_items[identifier] + # Handling of 'force_not_open_workfile' for applications + if action_item.is_application: + action_item = action_item.copy() + action_item.force_not_open_workfile = ( + not_open_workfile_actions.get(identifier, False) + ) + + output.append(action_item) + return output + + def set_application_force_not_open_workfile( + self, project_name, folder_id, task_id, action_ids, enabled + ): + no_workfile_reg_data = self._get_no_last_workfile_reg_data() + project_data = no_workfile_reg_data.setdefault(project_name, {}) + folder_data = project_data.setdefault(folder_id, {}) + task_data = folder_data.setdefault(task_id, {}) + for action_id in action_ids: + task_data[action_id] = enabled + self._launcher_tool_reg.set_item( + self._not_open_workfile_reg_key, no_workfile_reg_data + ) + + def trigger_action(self, project_name, folder_id, task_id, identifier): + session = self._prepare_session(project_name, folder_id, task_id) + failed = False + error_message = None + action_label = identifier + action_items = self._get_action_items(project_name) + try: + action = self._actions[identifier] + action_item = action_items[identifier] + action_label = action_item.full_label + self._controller.emit_event( + "action.trigger.started", + { + "identifier": identifier, + "full_label": action_label, + } + ) + if isinstance(action, ApplicationAction): + per_action = self._get_no_last_workfile_for_context( + project_name, folder_id, task_id + ) + force_not_open_workfile = per_action.get(identifier, False) + if force_not_open_workfile: + action.data["start_last_workfile"] = False + else: + action.data.pop("start_last_workfile", None) + action.process(session) + except Exception as exc: + self.log.warning("Action trigger failed.", exc_info=True) + failed = True + error_message = str(exc) + + self._controller.emit_event( + "action.trigger.finished", + { + "identifier": identifier, + "failed": failed, + "error_message": error_message, + "full_label": action_label, + } + ) + + def _get_no_last_workfile_reg_data(self): + try: + no_workfile_reg_data = self._launcher_tool_reg.get_item( + self._not_open_workfile_reg_key) + except ValueError: + no_workfile_reg_data = {} + self._launcher_tool_reg.set_item( + self._not_open_workfile_reg_key, no_workfile_reg_data) + return no_workfile_reg_data + + def _get_no_last_workfile_for_context( + self, project_name, folder_id, task_id + ): + not_open_workfile_reg_data = self._get_no_last_workfile_reg_data() + return ( + not_open_workfile_reg_data + .get(project_name, {}) + .get(folder_id, {}) + .get(task_id, {}) + ) + + def _prepare_session(self, project_name, folder_id, task_id): + folder_name = None + if folder_id: + folder = self._controller.get_folder_entity( + project_name, folder_id) + if folder: + folder_name = folder["name"] + + task_name = None + if task_id: + task = self._controller.get_task_entity(project_name, task_id) + if task: + task_name = task["name"] + + return { + "AVALON_PROJECT": project_name, + "AVALON_ASSET": folder_name, + "AVALON_TASK": task_name, + } + + def _get_discovered_action_classes(self): + if self._discovered_actions is None: + self._discovered_actions = ( 
+ discover_launcher_actions() + + self._get_applications_action_classes() + ) + return self._discovered_actions + + def _get_action_objects(self): + if self._actions is None: + actions = {} + for cls in self._get_discovered_action_classes(): + obj = cls() + identifier = getattr(obj, "identifier", None) + if identifier is None: + identifier = cls.__name__ + actions[identifier] = obj + self._actions = actions + return self._actions + + def _get_action_items(self, project_name): + action_items = self._action_items.get(project_name) + if action_items is not None: + return action_items + + project_entity = None + if project_name: + project_entity = self._controller.get_project_entity(project_name) + project_settings = self._controller.get_project_settings(project_name) + + action_items = {} + for identifier, action in self._get_action_objects().items(): + is_application = isinstance(action, ApplicationAction) + if is_application: + action.project_entities[project_name] = project_entity + action.project_settings[project_name] = project_settings + label = action.label or identifier + variant_label = getattr(action, "label_variant", None) + icon = get_action_icon(action) + item = ActionItem( + identifier, + label, + variant_label, + icon, + action.order, + is_application, + False + ) + action_items[identifier] = item + self._action_items[project_name] = action_items + return action_items + + def _get_applications_action_classes(self): + from openpype.lib.applications import ( + CUSTOM_LAUNCH_APP_GROUPS, + ApplicationManager, + ) + + actions = [] + + manager = ApplicationManager() + for full_name, application in manager.applications.items(): + if ( + application.group.name in CUSTOM_LAUNCH_APP_GROUPS + or not application.enabled + ): + continue + + action = type( + "app_{}".format(full_name), + (ApplicationAction,), + { + "identifier": "application.{}".format(full_name), + "application": application, + "name": application.name, + "label": application.group.label, + "label_variant": application.label, + "group": None, + "icon": application.icon, + "color": getattr(application, "color", None), + "order": getattr(application, "order", None) or 0, + "data": {} + } + ) + actions.append(action) + return actions diff --git a/openpype/tools/ayon_launcher/models/selection.py b/openpype/tools/ayon_launcher/models/selection.py new file mode 100644 index 0000000000..b156d2084c --- /dev/null +++ b/openpype/tools/ayon_launcher/models/selection.py @@ -0,0 +1,72 @@ +class LauncherSelectionModel(object): + """Model handling selection changes. 
+ + Triggering events: + - "selection.project.changed" + - "selection.folder.changed" + - "selection.task.changed" + """ + + event_source = "launcher.selection.model" + + def __init__(self, controller): + self._controller = controller + + self._project_name = None + self._folder_id = None + self._task_name = None + self._task_id = None + + def get_selected_project_name(self): + return self._project_name + + def set_selected_project(self, project_name): + if project_name == self._project_name: + return + + self._project_name = project_name + self._controller.emit_event( + "selection.project.changed", + {"project_name": project_name}, + self.event_source + ) + + def get_selected_folder_id(self): + return self._folder_id + + def set_selected_folder(self, folder_id): + if folder_id == self._folder_id: + return + + self._folder_id = folder_id + self._controller.emit_event( + "selection.folder.changed", + { + "project_name": self._project_name, + "folder_id": folder_id, + }, + self.event_source + ) + + def get_selected_task_name(self): + return self._task_name + + def get_selected_task_id(self): + return self._task_id + + def set_selected_task(self, task_id, task_name): + if task_id == self._task_id: + return + + self._task_name = task_name + self._task_id = task_id + self._controller.emit_event( + "selection.task.changed", + { + "project_name": self._project_name, + "folder_id": self._folder_id, + "task_name": task_name, + "task_id": task_id, + }, + self.event_source + ) diff --git a/openpype/tools/ayon_launcher/ui/__init__.py b/openpype/tools/ayon_launcher/ui/__init__.py new file mode 100644 index 0000000000..da30c84656 --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/__init__.py @@ -0,0 +1,6 @@ +from .window import LauncherWindow + + +__all__ = ( + "LauncherWindow", +) diff --git a/openpype/tools/ayon_launcher/ui/actions_widget.py b/openpype/tools/ayon_launcher/ui/actions_widget.py new file mode 100644 index 0000000000..0630d1d5b5 --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/actions_widget.py @@ -0,0 +1,484 @@ +import time +import collections + +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.flickcharm import FlickCharm +from openpype.tools.ayon_utils.widgets import get_qt_icon + +from .resources import get_options_image_path + +ANIMATION_LEN = 7 + +ACTION_ID_ROLE = QtCore.Qt.UserRole + 1 +ACTION_IS_APPLICATION_ROLE = QtCore.Qt.UserRole + 2 +ACTION_IS_GROUP_ROLE = QtCore.Qt.UserRole + 3 +ACTION_SORT_ROLE = QtCore.Qt.UserRole + 4 +ANIMATION_START_ROLE = QtCore.Qt.UserRole + 5 +ANIMATION_STATE_ROLE = QtCore.Qt.UserRole + 6 +FORCE_NOT_OPEN_WORKFILE_ROLE = QtCore.Qt.UserRole + 7 + + +def _variant_label_sort_getter(action_item): + """Get variant label value for sorting. + + Make sure the output value is a string. + + Args: + action_item (ActionItem): Action item. + + Returns: + str: Variant label or empty string. + """ + + return action_item.variant_label or "" + + +class ActionsQtModel(QtGui.QStandardItemModel): + """Qt model for actions. + + Args: + controller (AbstractLauncherFrontEnd): Controller instance. 
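
A small sketch of reading the custom item-data roles defined at the top of this file from a view index, the same way the delegate and widget below do (`index` is assumed to come from a click or selection):

    action_id = index.data(ACTION_ID_ROLE)
    is_group = index.data(ACTION_IS_GROUP_ROLE)
    skips_workfile = index.data(FORCE_NOT_OPEN_WORKFILE_ROLE)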
+ """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(ActionsQtModel, self).__init__() + + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh_finished, + ) + controller.register_event_callback( + "selection.project.changed", + self._on_selection_project_changed, + ) + controller.register_event_callback( + "selection.folder.changed", + self._on_selection_folder_changed, + ) + controller.register_event_callback( + "selection.task.changed", + self._on_selection_task_changed, + ) + + self._controller = controller + + self._items_by_id = {} + self._action_items_by_id = {} + self._groups_by_id = {} + + self._selected_project_name = None + self._selected_folder_id = None + self._selected_task_id = None + + def get_selected_project_name(self): + return self._selected_project_name + + def get_selected_folder_id(self): + return self._selected_folder_id + + def get_selected_task_id(self): + return self._selected_task_id + + def get_group_items(self, action_id): + return self._groups_by_id[action_id] + + def get_item_by_id(self, action_id): + return self._items_by_id.get(action_id) + + def get_action_item_by_id(self, action_id): + return self._action_items_by_id.get(action_id) + + def _clear_items(self): + self._items_by_id = {} + self._action_items_by_id = {} + self._groups_by_id = {} + root = self.invisibleRootItem() + root.removeRows(0, root.rowCount()) + + def refresh(self): + items = self._controller.get_action_items( + self._selected_project_name, + self._selected_folder_id, + self._selected_task_id, + ) + if not items: + self._clear_items() + self.refreshed.emit() + return + + root_item = self.invisibleRootItem() + + all_action_items_info = [] + items_by_label = collections.defaultdict(list) + for item in items: + if not item.variant_label: + all_action_items_info.append((item, False)) + else: + items_by_label[item.label].append(item) + + groups_by_id = {} + for action_items in items_by_label.values(): + action_items.sort(key=_variant_label_sort_getter, reverse=True) + first_item = next(iter(action_items)) + all_action_items_info.append((first_item, len(action_items) > 1)) + groups_by_id[first_item.identifier] = action_items + + new_items = [] + items_by_id = {} + action_items_by_id = {} + for action_item_info in all_action_items_info: + action_item, is_group = action_item_info + icon = get_qt_icon(action_item.icon) + if is_group: + label = action_item.label + else: + label = action_item.full_label + + item = self._items_by_id.get(action_item.identifier) + if item is None: + item = QtGui.QStandardItem() + item.setData(action_item.identifier, ACTION_ID_ROLE) + new_items.append(item) + + item.setFlags(QtCore.Qt.ItemIsEnabled) + item.setData(label, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(is_group, ACTION_IS_GROUP_ROLE) + item.setData(action_item.order, ACTION_SORT_ROLE) + item.setData( + action_item.is_application, ACTION_IS_APPLICATION_ROLE) + item.setData( + action_item.force_not_open_workfile, + FORCE_NOT_OPEN_WORKFILE_ROLE) + items_by_id[action_item.identifier] = item + action_items_by_id[action_item.identifier] = action_item + + if new_items: + root_item.appendRows(new_items) + + to_remove = set(self._items_by_id.keys()) - set(items_by_id.keys()) + for identifier in to_remove: + item = self._items_by_id.pop(identifier) + self._action_items_by_id.pop(identifier) + root_item.removeRow(item.row()) + + self._groups_by_id = groups_by_id + self._items_by_id = items_by_id + 
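        # These id-keyed caches back get_item_by_id, get_action_item_by_id
        # and get_group_items; they are rebuilt together on every refresh.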
self._action_items_by_id = action_items_by_id + self.refreshed.emit() + + def _on_controller_refresh_finished(self): + context = self._controller.get_selected_context() + self._selected_project_name = context["project_name"] + self._selected_folder_id = context["folder_id"] + self._selected_task_id = context["task_id"] + self.refresh() + + def _on_selection_project_changed(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_id = None + self._selected_task_id = None + self.refresh() + + def _on_selection_folder_changed(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_id = event["folder_id"] + self._selected_task_id = None + self.refresh() + + def _on_selection_task_changed(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_id = event["folder_id"] + self._selected_task_id = event["task_id"] + self.refresh() + + +class ActionDelegate(QtWidgets.QStyledItemDelegate): + _cached_extender = {} + + def __init__(self, *args, **kwargs): + super(ActionDelegate, self).__init__(*args, **kwargs) + self._anim_start_color = QtGui.QColor(178, 255, 246) + self._anim_end_color = QtGui.QColor(5, 44, 50) + + def _draw_animation(self, painter, option, index): + grid_size = option.widget.gridSize() + x_offset = int( + (grid_size.width() / 2) + - (option.rect.width() / 2) + ) + item_x = option.rect.x() - x_offset + rect_offset = grid_size.width() / 20 + size = grid_size.width() - (rect_offset * 2) + anim_rect = QtCore.QRect( + item_x + rect_offset, + option.rect.y() + rect_offset, + size, + size + ) + + painter.save() + + painter.setBrush(QtCore.Qt.transparent) + + gradient = QtGui.QConicalGradient() + gradient.setCenter(QtCore.QPointF(anim_rect.center())) + gradient.setColorAt(0, self._anim_start_color) + gradient.setColorAt(1, self._anim_end_color) + + time_diff = time.time() - index.data(ANIMATION_START_ROLE) + + # Repeat 4 times + part_anim = 2.5 + part_time = time_diff % part_anim + offset = (part_time / part_anim) * 360 + angle = (offset + 90) % 360 + + gradient.setAngle(-angle) + + pen = QtGui.QPen(QtGui.QBrush(gradient), rect_offset) + pen.setCapStyle(QtCore.Qt.RoundCap) + painter.setPen(pen) + painter.drawArc( + anim_rect, + -16 * (angle + 10), + -16 * offset + ) + + painter.restore() + + @classmethod + def _get_extender_pixmap(cls, size): + pix = cls._cached_extender.get(size) + if pix is not None: + return pix + pix = QtGui.QPixmap(get_options_image_path()).scaled( + size, size, + QtCore.Qt.KeepAspectRatio, + QtCore.Qt.SmoothTransformation + ) + cls._cached_extender[size] = pix + return pix + + def paint(self, painter, option, index): + painter.setRenderHints( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + + if index.data(ANIMATION_STATE_ROLE): + self._draw_animation(painter, option, index) + + super(ActionDelegate, self).paint(painter, option, index) + + if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE): + rect = QtCore.QRectF( + option.rect.x(), option.rect.height(), 5, 5) + painter.setPen(QtCore.Qt.NoPen) + painter.setBrush(QtGui.QColor(200, 0, 0)) + painter.drawEllipse(rect) + + if not index.data(ACTION_IS_GROUP_ROLE): + return + + grid_size = option.widget.gridSize() + x_offset = int( + (grid_size.width() / 2) + - (option.rect.width() / 2) + ) + item_x = option.rect.x() - x_offset + + tenth_size = int(grid_size.width() / 10) + extender_size = int(tenth_size * 2.4) + + extender_x = item_x + tenth_size + extender_y = option.rect.y() + tenth_size + + pix = 
self._get_extender_pixmap(extender_size) + painter.drawPixmap(extender_x, extender_y, pix) + + +class ActionsWidget(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(ActionsWidget, self).__init__(parent) + + self._controller = controller + + view = QtWidgets.QListView(self) + view.setProperty("mode", "icon") + view.setObjectName("IconView") + view.setViewMode(QtWidgets.QListView.IconMode) + view.setResizeMode(QtWidgets.QListView.Adjust) + view.setSelectionMode(QtWidgets.QListView.NoSelection) + view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + view.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers) + view.setWrapping(True) + view.setGridSize(QtCore.QSize(70, 75)) + view.setIconSize(QtCore.QSize(30, 30)) + view.setSpacing(0) + view.setWordWrap(True) + + # Make view flickable + flick = FlickCharm(parent=view) + flick.activateOn(view) + + model = ActionsQtModel(controller) + + proxy_model = QtCore.QSortFilterProxyModel() + proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + proxy_model.setSortRole(ACTION_SORT_ROLE) + + proxy_model.setSourceModel(model) + view.setModel(proxy_model) + + delegate = ActionDelegate(self) + view.setItemDelegate(delegate) + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(view) + + animation_timer = QtCore.QTimer() + animation_timer.setInterval(40) + animation_timer.timeout.connect(self._on_animation) + + view.clicked.connect(self._on_clicked) + view.customContextMenuRequested.connect(self._on_context_menu) + model.refreshed.connect(self._on_model_refresh) + + self._animated_items = set() + self._animation_timer = animation_timer + + self._context_menu = None + + self._flick = flick + self._view = view + self._model = model + self._proxy_model = proxy_model + + self._set_row_height(1) + + def _set_row_height(self, rows): + self.setMinimumHeight(rows * 75) + + def _on_model_refresh(self): + self._proxy_model.sort(0) + + def _on_animation(self): + time_now = time.time() + for action_id in tuple(self._animated_items): + item = self._model.get_item_by_id(action_id) + if item is None: + self._animated_items.discard(action_id) + continue + + start_time = item.data(ANIMATION_START_ROLE) + if start_time is None or (time_now - start_time) > ANIMATION_LEN: + item.setData(0, ANIMATION_STATE_ROLE) + self._animated_items.discard(action_id) + + if not self._animated_items: + self._animation_timer.stop() + + self.update() + + def _start_animation(self, index): + # Offset refresh timout + model_index = self._proxy_model.mapToSource(index) + if not model_index.isValid(): + return + action_id = model_index.data(ACTION_ID_ROLE) + self._model.setData(model_index, time.time(), ANIMATION_START_ROLE) + self._model.setData(model_index, 1, ANIMATION_STATE_ROLE) + self._animated_items.add(action_id) + self._animation_timer.start() + + def _on_context_menu(self, point): + """Creates menu to force skip opening last workfile.""" + index = self._view.indexAt(point) + if not index.isValid(): + return + + if not index.data(ACTION_IS_APPLICATION_ROLE): + return + + menu = QtWidgets.QMenu(self._view) + checkbox = QtWidgets.QCheckBox( + "Skip opening last workfile.", menu) + if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE): + checkbox.setChecked(True) + + action_id = index.data(ACTION_ID_ROLE) + is_group = index.data(ACTION_IS_GROUP_ROLE) + if is_group: + action_items = self._model.get_group_items(action_id) + else: + action_items = [self._model.get_action_item_by_id(action_id)] + action_ids = 
{action_item.identifier for action_item in action_items} + checkbox.stateChanged.connect( + lambda: self._on_checkbox_changed( + action_ids, checkbox.isChecked() + ) + ) + action = QtWidgets.QWidgetAction(menu) + action.setDefaultWidget(checkbox) + + menu.addAction(action) + + self._context_menu = menu + global_point = self.mapToGlobal(point) + menu.exec_(global_point) + self._context_menu = None + + def _on_checkbox_changed(self, action_ids, is_checked): + if self._context_menu is not None: + self._context_menu.close() + + project_name = self._model.get_selected_project_name() + folder_id = self._model.get_selected_folder_id() + task_id = self._model.get_selected_task_id() + self._controller.set_application_force_not_open_workfile( + project_name, folder_id, task_id, action_ids, is_checked) + self._model.refresh() + + def _on_clicked(self, index): + if not index or not index.isValid(): + return + + is_group = index.data(ACTION_IS_GROUP_ROLE) + action_id = index.data(ACTION_ID_ROLE) + + project_name = self._model.get_selected_project_name() + folder_id = self._model.get_selected_folder_id() + task_id = self._model.get_selected_task_id() + + if not is_group: + self._controller.trigger_action( + project_name, folder_id, task_id, action_id + ) + self._start_animation(index) + return + + action_items = self._model.get_group_items(action_id) + + menu = QtWidgets.QMenu(self) + actions_mapping = {} + + for action_item in action_items: + menu_action = QtWidgets.QAction(action_item.full_label) + menu.addAction(menu_action) + actions_mapping[menu_action] = action_item + + result = menu.exec_(QtGui.QCursor.pos()) + if not result: + return + + action_item = actions_mapping[result] + + self._controller.trigger_action( + project_name, folder_id, task_id, action_item.identifier + ) + self._start_animation(index) diff --git a/openpype/tools/ayon_launcher/ui/hierarchy_page.py b/openpype/tools/ayon_launcher/ui/hierarchy_page.py new file mode 100644 index 0000000000..5047cdc692 --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/hierarchy_page.py @@ -0,0 +1,102 @@ +import qtawesome +from qtpy import QtWidgets, QtCore + +from openpype.tools.utils import ( + PlaceholderLineEdit, + SquareButton, + RefreshButton, +) +from openpype.tools.ayon_utils.widgets import ( + ProjectsCombobox, + FoldersWidget, + TasksWidget, +) + + +class HierarchyPage(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(HierarchyPage, self).__init__(parent) + + # Header + header_widget = QtWidgets.QWidget(self) + + btn_back_icon = qtawesome.icon("fa.angle-left", color="white") + btn_back = SquareButton(header_widget) + btn_back.setIcon(btn_back_icon) + + projects_combobox = ProjectsCombobox(controller, header_widget) + + refresh_btn = RefreshButton(header_widget) + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(btn_back, 0) + header_layout.addWidget(projects_combobox, 1) + header_layout.addWidget(refresh_btn, 0) + + # Body - Folders + Tasks selection + content_body = QtWidgets.QSplitter(self) + content_body.setContentsMargins(0, 0, 0, 0) + content_body.setSizePolicy( + QtWidgets.QSizePolicy.Expanding, + QtWidgets.QSizePolicy.Expanding + ) + content_body.setOrientation(QtCore.Qt.Horizontal) + + # - Folders widget with filter + folders_wrapper = QtWidgets.QWidget(content_body) + + folders_filter_text = PlaceholderLineEdit(folders_wrapper) + folders_filter_text.setPlaceholderText("Filter folders...") + + folders_widget = 
FoldersWidget(controller, folders_wrapper) + + folders_wrapper_layout = QtWidgets.QVBoxLayout(folders_wrapper) + folders_wrapper_layout.setContentsMargins(0, 0, 0, 0) + folders_wrapper_layout.addWidget(folders_filter_text, 0) + folders_wrapper_layout.addWidget(folders_widget, 1) + + # - Tasks widget + tasks_widget = TasksWidget(controller, content_body) + + content_body.addWidget(folders_wrapper) + content_body.addWidget(tasks_widget) + content_body.setStretchFactor(0, 100) + content_body.setStretchFactor(1, 65) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(header_widget, 0) + main_layout.addWidget(content_body, 1) + + btn_back.clicked.connect(self._on_back_clicked) + refresh_btn.clicked.connect(self._on_refreh_clicked) + folders_filter_text.textChanged.connect(self._on_filter_text_changed) + + self._is_visible = False + self._controller = controller + + self._btn_back = btn_back + self._projects_combobox = projects_combobox + self._folders_widget = folders_widget + self._tasks_widget = tasks_widget + + # Post init + projects_combobox.set_listen_to_selection_change(self._is_visible) + + def set_page_visible(self, visible, project_name=None): + if self._is_visible == visible: + return + self._is_visible = visible + self._projects_combobox.set_listen_to_selection_change(visible) + if visible and project_name: + self._projects_combobox.set_selection(project_name) + + def _on_back_clicked(self): + self._controller.set_selected_project(None) + + def _on_refreh_clicked(self): + self._controller.refresh() + + def _on_filter_text_changed(self, text): + self._folders_widget.set_name_filer(text) diff --git a/openpype/tools/ayon_launcher/ui/projects_widget.py b/openpype/tools/ayon_launcher/ui/projects_widget.py new file mode 100644 index 0000000000..baa399d0ed --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/projects_widget.py @@ -0,0 +1,135 @@ +from qtpy import QtWidgets, QtCore + +from openpype.tools.flickcharm import FlickCharm +from openpype.tools.utils import PlaceholderLineEdit, RefreshButton +from openpype.tools.ayon_utils.widgets import ( + ProjectsModel, + ProjectSortFilterProxy, +) +from openpype.tools.ayon_utils.models import PROJECTS_MODEL_SENDER + + +class ProjectIconView(QtWidgets.QListView): + """Styled ListView that allows to toggle between icon and list mode. + + Toggling between the two modes is done by Right Mouse Click. 
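
A minimal sketch of driving the mode switch from code, which is the same toggle the right mouse button performs:

    view = ProjectIconView(mode=ProjectIconView.ListMode)
    view.set_mode(ProjectIconView.IconMode)   # icon grid
    view.set_mode(ProjectIconView.ListMode)   # back to the list layout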
+ """ + + IconMode = 0 + ListMode = 1 + + def __init__(self, parent=None, mode=ListMode): + super(ProjectIconView, self).__init__(parent=parent) + + # Workaround for scrolling being super slow or fast when + # toggling between the two visual modes + self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) + self.setObjectName("IconView") + + self._mode = None + self.set_mode(mode) + + def set_mode(self, mode): + if mode == self._mode: + return + + self._mode = mode + + if mode == self.IconMode: + self.setViewMode(QtWidgets.QListView.IconMode) + self.setResizeMode(QtWidgets.QListView.Adjust) + self.setWrapping(True) + self.setWordWrap(True) + self.setGridSize(QtCore.QSize(151, 90)) + self.setIconSize(QtCore.QSize(50, 50)) + self.setSpacing(0) + self.setAlternatingRowColors(False) + + self.setProperty("mode", "icon") + self.style().polish(self) + + self.verticalScrollBar().setSingleStep(30) + + elif self.ListMode: + self.setProperty("mode", "list") + self.style().polish(self) + + self.setViewMode(QtWidgets.QListView.ListMode) + self.setResizeMode(QtWidgets.QListView.Adjust) + self.setWrapping(False) + self.setWordWrap(False) + self.setIconSize(QtCore.QSize(20, 20)) + self.setGridSize(QtCore.QSize(100, 25)) + self.setSpacing(0) + self.setAlternatingRowColors(False) + + self.verticalScrollBar().setSingleStep(34) + + def mousePressEvent(self, event): + if event.button() == QtCore.Qt.RightButton: + self.set_mode(int(not self._mode)) + return super(ProjectIconView, self).mousePressEvent(event) + + +class ProjectsWidget(QtWidgets.QWidget): + """Projects Page""" + def __init__(self, controller, parent=None): + super(ProjectsWidget, self).__init__(parent=parent) + + header_widget = QtWidgets.QWidget(self) + + projects_filter_text = PlaceholderLineEdit(header_widget) + projects_filter_text.setPlaceholderText("Filter projects...") + + refresh_btn = RefreshButton(header_widget) + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(projects_filter_text, 1) + header_layout.addWidget(refresh_btn, 0) + + projects_view = ProjectIconView(parent=self) + projects_view.setSelectionMode(QtWidgets.QListView.NoSelection) + flick = FlickCharm(parent=self) + flick.activateOn(projects_view) + projects_model = ProjectsModel(controller) + projects_proxy_model = ProjectSortFilterProxy() + projects_proxy_model.setSourceModel(projects_model) + + projects_view.setModel(projects_proxy_model) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(header_widget, 0) + main_layout.addWidget(projects_view, 1) + + projects_view.clicked.connect(self._on_view_clicked) + projects_filter_text.textChanged.connect( + self._on_project_filter_change) + refresh_btn.clicked.connect(self._on_refresh_clicked) + + controller.register_event_callback( + "projects.refresh.finished", + self._on_projects_refresh_finished + ) + + self._controller = controller + + self._projects_view = projects_view + self._projects_model = projects_model + self._projects_proxy_model = projects_proxy_model + + def _on_view_clicked(self, index): + if index.isValid(): + project_name = index.data(QtCore.Qt.DisplayRole) + self._controller.set_selected_project(project_name) + + def _on_project_filter_change(self, text): + self._projects_proxy_model.setFilterFixedString(text) + + def _on_refresh_clicked(self): + self._controller.refresh() + + def _on_projects_refresh_finished(self, event): + if event["sender"] != 
PROJECTS_MODEL_SENDER: + self._projects_model.refresh() diff --git a/openpype/tools/ayon_launcher/ui/resources/__init__.py b/openpype/tools/ayon_launcher/ui/resources/__init__.py new file mode 100644 index 0000000000..27c59af2ba --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/resources/__init__.py @@ -0,0 +1,7 @@ +import os + +RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) + + +def get_options_image_path(): + return os.path.join(RESOURCES_DIR, "options.png") diff --git a/openpype/tools/ayon_launcher/ui/resources/options.png b/openpype/tools/ayon_launcher/ui/resources/options.png new file mode 100644 index 0000000000..a9617d0d19 Binary files /dev/null and b/openpype/tools/ayon_launcher/ui/resources/options.png differ diff --git a/openpype/tools/ayon_launcher/ui/window.py b/openpype/tools/ayon_launcher/ui/window.py new file mode 100644 index 0000000000..139da42a2e --- /dev/null +++ b/openpype/tools/ayon_launcher/ui/window.py @@ -0,0 +1,295 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype import style +from openpype import resources + +from openpype.tools.ayon_launcher.control import BaseLauncherController + +from .projects_widget import ProjectsWidget +from .hierarchy_page import HierarchyPage +from .actions_widget import ActionsWidget + + +class LauncherWindow(QtWidgets.QWidget): + """Launcher interface""" + message_interval = 5000 + refresh_interval = 10000 + page_side_anim_interval = 250 + + def __init__(self, controller=None, parent=None): + super(LauncherWindow, self).__init__(parent) + + if controller is None: + controller = BaseLauncherController() + + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + self.setWindowIcon(icon) + self.setWindowTitle("Launcher") + self.setFocusPolicy(QtCore.Qt.StrongFocus) + self.setAttribute(QtCore.Qt.WA_DeleteOnClose, False) + + self.setStyleSheet(style.load_stylesheet()) + + # Allow minimize + self.setWindowFlags( + QtCore.Qt.Window + | QtCore.Qt.CustomizeWindowHint + | QtCore.Qt.WindowTitleHint + | QtCore.Qt.WindowMinimizeButtonHint + | QtCore.Qt.WindowCloseButtonHint + ) + + self._controller = controller + + # Main content - Pages & Actions + content_body = QtWidgets.QSplitter(self) + + # Pages + pages_widget = QtWidgets.QWidget(content_body) + + # - First page - Projects + projects_page = ProjectsWidget(controller, pages_widget) + + # - Second page - Hierarchy (folders & tasks) + hierarchy_page = HierarchyPage(controller, pages_widget) + + pages_layout = QtWidgets.QHBoxLayout(pages_widget) + pages_layout.setContentsMargins(0, 0, 0, 0) + pages_layout.addWidget(projects_page, 1) + pages_layout.addWidget(hierarchy_page, 1) + + # Actions + actions_widget = ActionsWidget(controller, content_body) + + # Vertically split Pages and Actions + content_body.setContentsMargins(0, 0, 0, 0) + content_body.setSizePolicy( + QtWidgets.QSizePolicy.Expanding, + QtWidgets.QSizePolicy.Expanding + ) + content_body.setOrientation(QtCore.Qt.Vertical) + content_body.addWidget(pages_widget) + content_body.addWidget(actions_widget) + + # Set useful default sizes and set stretch + # for the pages so that is the only one that + # stretches on UI resize. 
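        # Splitter index 0 is the pages widget and index 1 the actions
        # widget; the sizes below only set the initial pixel split.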
+ content_body.setStretchFactor(0, 10) + content_body.setSizes([580, 160]) + + # Footer + footer_widget = QtWidgets.QWidget(self) + + # - Message label + message_label = QtWidgets.QLabel(footer_widget) + + # action_history = ActionHistory(footer_widget) + # action_history.setStatusTip("Show Action History") + + footer_layout = QtWidgets.QHBoxLayout(footer_widget) + footer_layout.setContentsMargins(0, 0, 0, 0) + footer_layout.addWidget(message_label, 1) + # footer_layout.addWidget(action_history, 0) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(content_body, 1) + layout.addWidget(footer_widget, 0) + + message_timer = QtCore.QTimer() + message_timer.setInterval(self.message_interval) + message_timer.setSingleShot(True) + + refresh_timer = QtCore.QTimer() + refresh_timer.setInterval(self.refresh_interval) + + page_slide_anim = QtCore.QVariantAnimation(self) + page_slide_anim.setDuration(self.page_side_anim_interval) + page_slide_anim.setStartValue(0.0) + page_slide_anim.setEndValue(1.0) + page_slide_anim.setEasingCurve(QtCore.QEasingCurve.OutQuad) + + message_timer.timeout.connect(self._on_message_timeout) + refresh_timer.timeout.connect(self._on_refresh_timeout) + page_slide_anim.valueChanged.connect( + self._on_page_slide_value_changed) + page_slide_anim.finished.connect(self._on_page_slide_finished) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_change, + ) + controller.register_event_callback( + "action.trigger.started", + self._on_action_trigger_started, + ) + controller.register_event_callback( + "action.trigger.finished", + self._on_action_trigger_finished, + ) + + self._controller = controller + + self._is_on_projects_page = True + self._window_is_active = False + self._refresh_on_activate = False + + self._pages_widget = pages_widget + self._pages_layout = pages_layout + self._projects_page = projects_page + self._hierarchy_page = hierarchy_page + self._actions_widget = actions_widget + + self._message_label = message_label + # self._action_history = action_history + + self._message_timer = message_timer + self._refresh_timer = refresh_timer + self._page_slide_anim = page_slide_anim + + hierarchy_page.setVisible(not self._is_on_projects_page) + self.resize(520, 740) + + def showEvent(self, event): + super(LauncherWindow, self).showEvent(event) + self._window_is_active = True + if not self._refresh_timer.isActive(): + self._refresh_timer.start() + self._controller.refresh() + + def closeEvent(self, event): + super(LauncherWindow, self).closeEvent(event) + self._window_is_active = False + self._refresh_timer.stop() + + def changeEvent(self, event): + if event.type() in ( + QtCore.QEvent.Type.WindowStateChange, + QtCore.QEvent.ActivationChange, + ): + is_active = self.isActiveWindow() and not self.isMinimized() + self._window_is_active = is_active + if is_active and self._refresh_on_activate: + self._refresh_on_activate = False + self._on_refresh_timeout() + self._refresh_timer.start() + + super(LauncherWindow, self).changeEvent(event) + + def _on_refresh_timeout(self): + # Stop timer if widget is not visible + if self._window_is_active: + self._controller.refresh() + else: + self._refresh_on_activate = True + + def _echo(self, message): + self._message_label.setText(str(message)) + self._message_timer.start() + + def _on_message_timeout(self): + self._message_label.setText("") + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + if not project_name: + self._go_to_projects_page() + 
+ elif self._is_on_projects_page: + self._go_to_hierarchy_page(project_name) + + def _on_action_trigger_started(self, event): + self._echo("Running action: {}".format(event["full_label"])) + + def _on_action_trigger_finished(self, event): + if not event["failed"]: + return + self._echo("Failed: {}".format(event["error_message"])) + + def _is_page_slide_anim_running(self): + return ( + self._page_slide_anim.state() == QtCore.QAbstractAnimation.Running + ) + + def _go_to_projects_page(self): + if self._is_on_projects_page: + return + self._is_on_projects_page = True + self._hierarchy_page.set_page_visible(False) + + self._start_page_slide_animation() + + def _go_to_hierarchy_page(self, project_name): + if not self._is_on_projects_page: + return + self._is_on_projects_page = False + self._hierarchy_page.set_page_visible(True, project_name) + + self._start_page_slide_animation() + + def _start_page_slide_animation(self): + if self._is_on_projects_page: + direction = QtCore.QAbstractAnimation.Backward + else: + direction = QtCore.QAbstractAnimation.Forward + self._page_slide_anim.setDirection(direction) + if self._is_page_slide_anim_running(): + return + + layout_spacing = self._pages_layout.spacing() + if self._is_on_projects_page: + hierarchy_geo = self._hierarchy_page.geometry() + projects_geo = QtCore.QRect(hierarchy_geo) + projects_geo.moveRight( + hierarchy_geo.left() - (layout_spacing + 1)) + + self._projects_page.setVisible(True) + + else: + projects_geo = self._projects_page.geometry() + hierarchy_geo = QtCore.QRect(projects_geo) + hierarchy_geo.moveLeft(projects_geo.right() + layout_spacing) + self._hierarchy_page.setVisible(True) + + while self._pages_layout.count(): + self._pages_layout.takeAt(0) + + self._projects_page.setGeometry(projects_geo) + self._hierarchy_page.setGeometry(hierarchy_geo) + + self._page_slide_anim.start() + + def _on_page_slide_value_changed(self, value): + layout_spacing = self._pages_layout.spacing() + content_width = self._pages_widget.width() - layout_spacing + content_height = self._pages_widget.height() + + # Visible widths of other widgets + hierarchy_width = int(content_width * value) + + hierarchy_geo = QtCore.QRect( + content_width - hierarchy_width, 0, content_width, content_height + ) + projects_geo = QtCore.QRect(hierarchy_geo) + projects_geo.moveRight(hierarchy_geo.left() - (layout_spacing + 1)) + + self._projects_page.setGeometry(projects_geo) + self._hierarchy_page.setGeometry(hierarchy_geo) + + def _on_page_slide_finished(self): + self._pages_layout.addWidget(self._projects_page, 1) + self._pages_layout.addWidget(self._hierarchy_page, 1) + self._projects_page.setVisible(self._is_on_projects_page) + self._hierarchy_page.setVisible(not self._is_on_projects_page) + + # def _on_history_action(self, history_data): + # action, session = history_data + # app = QtWidgets.QApplication.instance() + # modifiers = app.keyboardModifiers() + # + # is_control_down = QtCore.Qt.ControlModifier & modifiers + # if is_control_down: + # # Revert to that "session" location + # self.set_session(session) + # else: + # # User is holding control, rerun the action + # self.run_action(action, session=session) diff --git a/openpype/tools/ayon_utils/models/__init__.py b/openpype/tools/ayon_utils/models/__init__.py new file mode 100644 index 0000000000..1434282c5b --- /dev/null +++ b/openpype/tools/ayon_utils/models/__init__.py @@ -0,0 +1,29 @@ +"""Backend models that can be used in controllers.""" + +from .cache import CacheItem, NestedCacheItem +from .projects import ( + 
ProjectItem, + ProjectsModel, + PROJECTS_MODEL_SENDER, +) +from .hierarchy import ( + FolderItem, + TaskItem, + HierarchyModel, + HIERARCHY_MODEL_SENDER, +) + + +__all__ = ( + "CacheItem", + "NestedCacheItem", + + "ProjectItem", + "ProjectsModel", + "PROJECTS_MODEL_SENDER", + + "FolderItem", + "TaskItem", + "HierarchyModel", + "HIERARCHY_MODEL_SENDER", +) diff --git a/openpype/tools/ayon_utils/models/cache.py b/openpype/tools/ayon_utils/models/cache.py new file mode 100644 index 0000000000..44b97e930d --- /dev/null +++ b/openpype/tools/ayon_utils/models/cache.py @@ -0,0 +1,196 @@ +import time +import collections + +InitInfo = collections.namedtuple( + "InitInfo", + ["default_factory", "lifetime"] +) + + +def _default_factory_func(): + return None + + +class CacheItem: + """Simple cache item with lifetime and default value. + + Args: + default_factory (Optional[callable]): Function that returns default + value used on init and on reset. + lifetime (Optional[int]): Lifetime of the cache data in seconds. + """ + + def __init__(self, default_factory=None, lifetime=None): + if lifetime is None: + lifetime = 120 + self._lifetime = lifetime + self._last_update = None + if default_factory is None: + default_factory = _default_factory_func + self._default_factory = default_factory + self._data = default_factory() + + @property + def is_valid(self): + """Is cache valid to use. + + Return: + bool: True if cache is valid, False otherwise. + """ + + if self._last_update is None: + return False + + return (time.time() - self._last_update) < self._lifetime + + def set_lifetime(self, lifetime): + """Change lifetime of cache item. + + Args: + lifetime (int): Lifetime of the cache data in seconds. + """ + + self._lifetime = lifetime + + def set_invalid(self): + """Set cache as invalid.""" + + self._last_update = None + + def reset(self): + """Set cache as invalid and reset data.""" + + self._last_update = None + self._data = self._default_factory() + + def get_data(self): + """Receive cached data. + + Returns: + Any: Any data that are cached. + """ + + return self._data + + def update_data(self, data): + self._data = data + self._last_update = time.time() + + +class NestedCacheItem: + """Helper for cached items stored in nested structure. + + Example: + >>> cache = NestedCacheItem(levels=2) + >>> cache["a"]["b"].is_valid + False + >>> cache["a"]["b"].get_data() + None + >>> cache["a"]["b"] = 1 + >>> cache["a"]["b"].is_valid + True + >>> cache["a"]["b"].get_data() + 1 + >>> cache.reset() + >>> cache["a"]["b"].is_valid + False + + Args: + levels (int): Number of nested levels where read cache is stored. + default_factory (Optional[callable]): Function that returns default + value used on init and on reset. + lifetime (Optional[int]): Lifetime of the cache data in seconds. + _init_info (Optional[InitInfo]): Private argument. Init info for + nested cache where created from parent item. + """ + + def __init__( + self, levels=1, default_factory=None, lifetime=None, _init_info=None + ): + if levels < 1: + raise ValueError("Nested levels must be greater than 0") + self._data_by_key = {} + if _init_info is None: + _init_info = InitInfo(default_factory, lifetime) + self._init_info = _init_info + self._levels = levels + + def __getitem__(self, key): + """Get cached data. + + Args: + key (str): Key of the cache item. + + Returns: + Union[NestedCacheItem, CacheItem]: Cache item. 
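+
+        Example:
+            Accessing a missing key lazily creates the nested cache item,
+            which starts invalid until data is set (illustrative):
+
+            >>> cache = NestedCacheItem(levels=2)
+            >>> cache["project"]["folder"].is_valid
+            False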
+ """ + + cache = self._data_by_key.get(key) + if cache is None: + if self._levels > 1: + cache = NestedCacheItem( + levels=self._levels - 1, + _init_info=self._init_info + ) + else: + cache = CacheItem( + self._init_info.default_factory, + self._init_info.lifetime + ) + self._data_by_key[key] = cache + return cache + + def __setitem__(self, key, value): + """Update cached data. + + Args: + key (str): Key of the cache item. + value (Any): Any data that are cached. + """ + + if self._levels > 1: + raise AttributeError(( + "{} does not support '__setitem__'. Lower nested level by {}" + ).format(self.__class__.__name__, self._levels - 1)) + cache = self[key] + cache.update_data(value) + + def get(self, key): + """Get cached data. + + Args: + key (str): Key of the cache item. + + Returns: + Union[NestedCacheItem, CacheItem]: Cache item. + """ + + return self[key] + + def reset(self): + """Reset cache.""" + + self._data_by_key = {} + + def set_lifetime(self, lifetime): + """Change lifetime of all children cache items. + + Args: + lifetime (int): Lifetime of the cache data in seconds. + """ + + self._init_info.lifetime = lifetime + for cache in self._data_by_key.values(): + cache.set_lifetime(lifetime) + + @property + def is_valid(self): + """Raise reasonable error when called on wront level. + + Raises: + AttributeError: If called on nested cache item. + """ + + raise AttributeError(( + "{} does not support 'is_valid'. Lower nested level by '{}'" + ).format(self.__class__.__name__, self._levels)) diff --git a/openpype/tools/ayon_utils/models/hierarchy.py b/openpype/tools/ayon_utils/models/hierarchy.py new file mode 100644 index 0000000000..8e01c557c5 --- /dev/null +++ b/openpype/tools/ayon_utils/models/hierarchy.py @@ -0,0 +1,340 @@ +import collections +import contextlib +from abc import ABCMeta, abstractmethod + +import ayon_api +import six + +from openpype.style import get_default_entity_icon_color + +from .cache import NestedCacheItem + +HIERARCHY_MODEL_SENDER = "hierarchy.model" + + +@six.add_metaclass(ABCMeta) +class AbstractHierarchyController: + @abstractmethod + def emit_event(self, topic, data, source): + pass + + +class FolderItem: + """Item representing folder entity on a server. + + Folder can be a child of another folder or a project. + + Args: + entity_id (str): Folder id. + parent_id (Union[str, None]): Parent folder id. If 'None' then project + is parent. + name (str): Name of folder. + label (str): Folder label. + icon_name (str): Name of icon from font awesome. + icon_color (str): Hex color string that will be used for icon. + """ + + def __init__( + self, entity_id, parent_id, name, label, icon + ): + self.entity_id = entity_id + self.parent_id = parent_id + self.name = name + if not icon: + icon = { + "type": "awesome-font", + "name": "fa.folder", + "color": get_default_entity_icon_color() + } + self.icon = icon + self.label = label or name + + def to_data(self): + """Converts folder item to data. + + Returns: + dict[str, Any]: Folder item data. + """ + + return { + "entity_id": self.entity_id, + "parent_id": self.parent_id, + "name": self.name, + "label": self.label, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-creates folder item from data. + + Args: + data (dict[str, Any]): Folder item data. + + Returns: + FolderItem: Folder item. + """ + + return cls(**data) + + +class TaskItem: + """Task item representing task entity on a server. + + Task is child of a folder. + + Task item has label that is used for display in UI. 
The label is by + default using task name and type. + + Args: + task_id (str): Task id. + name (str): Name of task. + task_type (str): Type of task. + parent_id (str): Parent folder id. + icon_name (str): Name of icon from font awesome. + icon_color (str): Hex color string that will be used for icon. + """ + + def __init__( + self, task_id, name, task_type, parent_id, icon + ): + self.task_id = task_id + self.name = name + self.task_type = task_type + self.parent_id = parent_id + if icon is None: + icon = { + "type": "awesome-font", + "name": "fa.male", + "color": get_default_entity_icon_color() + } + self.icon = icon + + self._label = None + + @property + def id(self): + """Alias for task_id. + + Returns: + str: Task id. + """ + + return self.task_id + + @property + def label(self): + """Label of task item for UI. + + Returns: + str: Label of task item. + """ + + if self._label is None: + self._label = "{} ({})".format(self.name, self.task_type) + return self._label + + def to_data(self): + """Converts task item to data. + + Returns: + dict[str, Any]: Task item data. + """ + + return { + "task_id": self.task_id, + "name": self.name, + "parent_id": self.parent_id, + "task_type": self.task_type, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-create task item from data. + + Args: + data (dict[str, Any]): Task item data. + + Returns: + TaskItem: Task item. + """ + + return cls(**data) + + +def _get_task_items_from_tasks(tasks): + """ + + Returns: + TaskItem: Task item. + """ + + output = [] + for task in tasks: + folder_id = task["folderId"] + output.append(TaskItem( + task["id"], + task["name"], + task["type"], + folder_id, + None + )) + return output + + +def _get_folder_item_from_hierarchy_item(item): + return FolderItem( + item["id"], + item["parentId"], + item["name"], + item["label"], + None + ) + + +class HierarchyModel(object): + """Model for project hierarchy items. + + Hierarchy items are folders and tasks. Folders can have as parent another + folder or project. Tasks can have as parent only folder. 
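+
+    Example:
+        Rough usage sketch ('controller', "demo_project" and 'folder_id'
+        are placeholders; assumes a connected ayon_api session and a
+        controller implementing 'emit_event(topic, data, source)'):
+
+        >>> model = HierarchyModel(controller)
+        >>> folders = model.get_folder_items("demo_project", sender=None)
+        >>> tasks = model.get_task_items(
+        ...     "demo_project", folder_id, sender=None
+        ... )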
+ """ + + def __init__(self, controller): + self._folders_items = NestedCacheItem(levels=1, default_factory=dict) + self._folders_by_id = NestedCacheItem(levels=2, default_factory=dict) + + self._task_items = NestedCacheItem(levels=2, default_factory=dict) + self._tasks_by_id = NestedCacheItem(levels=2, default_factory=dict) + + self._folders_refreshing = set() + self._tasks_refreshing = set() + self._controller = controller + + def reset(self): + self._folders_items.reset() + self._folders_by_id.reset() + + self._task_items.reset() + self._tasks_by_id.reset() + + def refresh_project(self, project_name): + self._refresh_folders_cache(project_name) + + def get_folder_items(self, project_name, sender): + if not self._folders_items[project_name].is_valid: + self._refresh_folders_cache(project_name, sender) + return self._folders_items[project_name].get_data() + + def get_task_items(self, project_name, folder_id, sender): + if not project_name or not folder_id: + return [] + + task_cache = self._task_items[project_name][folder_id] + if not task_cache.is_valid: + self._refresh_tasks_cache(project_name, folder_id, sender) + return task_cache.get_data() + + def get_folder_entity(self, project_name, folder_id): + cache = self._folders_by_id[project_name][folder_id] + if not cache.is_valid: + entity = None + if folder_id: + entity = ayon_api.get_folder_by_id(project_name, folder_id) + cache.update_data(entity) + return cache.get_data() + + def get_task_entity(self, project_name, task_id): + cache = self._tasks_by_id[project_name][task_id] + if not cache.is_valid: + entity = None + if task_id: + entity = ayon_api.get_task_by_id(project_name, task_id) + cache.update_data(entity) + return cache.get_data() + + @contextlib.contextmanager + def _folder_refresh_event_manager(self, project_name, sender): + self._folders_refreshing.add(project_name) + self._controller.emit_event( + "folders.refresh.started", + {"project_name": project_name, "sender": sender}, + HIERARCHY_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "folders.refresh.finished", + {"project_name": project_name, "sender": sender}, + HIERARCHY_MODEL_SENDER + ) + self._folders_refreshing.remove(project_name) + + @contextlib.contextmanager + def _task_refresh_event_manager( + self, project_name, folder_id, sender + ): + self._tasks_refreshing.add(folder_id) + self._controller.emit_event( + "tasks.refresh.started", + { + "project_name": project_name, + "folder_id": folder_id, + "sender": sender, + }, + HIERARCHY_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "tasks.refresh.finished", + { + "project_name": project_name, + "folder_id": folder_id, + "sender": sender, + }, + HIERARCHY_MODEL_SENDER + ) + self._tasks_refreshing.discard(folder_id) + + def _refresh_folders_cache(self, project_name, sender=None): + if project_name in self._folders_refreshing: + return + + with self._folder_refresh_event_manager(project_name, sender): + folder_items = self._query_folders(project_name) + self._folders_items[project_name].update_data(folder_items) + + def _query_folders(self, project_name): + hierarchy = ayon_api.get_folders_hierarchy(project_name) + + folder_items = {} + hierachy_queue = collections.deque(hierarchy["hierarchy"]) + while hierachy_queue: + item = hierachy_queue.popleft() + folder_item = _get_folder_item_from_hierarchy_item(item) + folder_items[folder_item.entity_id] = folder_item + hierachy_queue.extend(item["children"] or []) + return folder_items + + def 
_refresh_tasks_cache(self, project_name, folder_id, sender=None): + if folder_id in self._tasks_refreshing: + return + + with self._task_refresh_event_manager( + project_name, folder_id, sender + ): + task_items = self._query_tasks(project_name, folder_id) + self._task_items[project_name][folder_id] = task_items + + def _query_tasks(self, project_name, folder_id): + tasks = list(ayon_api.get_tasks( + project_name, + folder_ids=[folder_id], + fields={"id", "name", "label", "folderId", "type"} + )) + return _get_task_items_from_tasks(tasks) diff --git a/openpype/tools/ayon_utils/models/projects.py b/openpype/tools/ayon_utils/models/projects.py new file mode 100644 index 0000000000..ae3eeecea4 --- /dev/null +++ b/openpype/tools/ayon_utils/models/projects.py @@ -0,0 +1,145 @@ +import contextlib +from abc import ABCMeta, abstractmethod + +import ayon_api +import six + +from openpype.style import get_default_entity_icon_color + +from .cache import CacheItem + +PROJECTS_MODEL_SENDER = "projects.model" + + +@six.add_metaclass(ABCMeta) +class AbstractHierarchyController: + @abstractmethod + def emit_event(self, topic, data, source): + pass + + +class ProjectItem: + """Item representing project entity on a server. + + Args: + name (str): Project name. + active (bool): Whether the project is active. + icon (Optional[dict[str, Any]]): Icon definition. Default project + icon is used when not passed. + """ + + def __init__(self, name, active, icon=None): + self.name = name + self.active = active + if icon is None: + icon = { + "type": "awesome-font", + "name": "fa.map", + "color": get_default_entity_icon_color(), + } + self.icon = icon + + def to_data(self): + """Converts project item to data. + + Returns: + dict[str, Any]: Project item data. + """ + + return { + "name": self.name, + "active": self.active, + "icon": self.icon, + } + + @classmethod + def from_data(cls, data): + """Re-creates project item from data. + + Args: + data (dict[str, Any]): Project item data. + + Returns: + ProjectItem: Project item. + """ + + return cls(**data) + + +def _get_project_items_from_entitiy(projects): + """Convert project entities to project items. + + Args: + projects (list[dict[str, Any]]): List of project entities. + + Returns: + list[ProjectItem]: Project items.
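+
+    Example:
+        Illustrative mapping from server project entities; the default
+        project icon is filled in by 'ProjectItem':
+
+        >>> items = _get_project_items_from_entitiy(
+        ...     [{"name": "demo_project", "active": True}]
+        ... )
+        >>> items[0].name
+        'demo_project'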
+ """ + + return [ + ProjectItem(project["name"], project["active"]) + for project in projects + ] + + +class ProjectsModel(object): + def __init__(self, controller): + self._projects_cache = CacheItem(default_factory=dict) + self._project_items_by_name = {} + self._projects_by_name = {} + + self._is_refreshing = False + self._controller = controller + + def reset(self): + self._projects_cache.reset() + self._project_items_by_name = {} + self._projects_by_name = {} + + def refresh(self): + self._refresh_projects_cache() + + def get_project_items(self, sender): + if not self._projects_cache.is_valid: + self._refresh_projects_cache(sender) + return self._projects_cache.get_data() + + def get_project_entity(self, project_name): + if project_name not in self._projects_by_name: + entity = None + if project_name: + entity = ayon_api.get_project(project_name) + self._projects_by_name[project_name] = entity + return self._projects_by_name[project_name] + + @contextlib.contextmanager + def _project_refresh_event_manager(self, sender): + self._is_refreshing = True + self._controller.emit_event( + "projects.refresh.started", + {"sender": sender}, + PROJECTS_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "projects.refresh.finished", + {"sender": sender}, + PROJECTS_MODEL_SENDER + ) + self._is_refreshing = False + + def _refresh_projects_cache(self, sender=None): + if self._is_refreshing: + return + + with self._project_refresh_event_manager(sender): + project_items = self._query_projects() + self._projects_cache.update_data(project_items) + + def _query_projects(self): + projects = ayon_api.get_projects(fields=["name", "active"]) + return _get_project_items_from_entitiy(projects) diff --git a/openpype/tools/ayon_utils/widgets/__init__.py b/openpype/tools/ayon_utils/widgets/__init__.py new file mode 100644 index 0000000000..59aef98faf --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/__init__.py @@ -0,0 +1,37 @@ +from .projects_widget import ( + # ProjectsWidget, + ProjectsCombobox, + ProjectsModel, + ProjectSortFilterProxy, +) + +from .folders_widget import ( + FoldersWidget, + FoldersModel, +) + +from .tasks_widget import ( + TasksWidget, + TasksModel, +) +from .utils import ( + get_qt_icon, + RefreshThread, +) + + +__all__ = ( + # "ProjectsWidget", + "ProjectsCombobox", + "ProjectsModel", + "ProjectSortFilterProxy", + + "FoldersWidget", + "FoldersModel", + + "TasksWidget", + "TasksModel", + + "get_qt_icon", + "RefreshThread", +) diff --git a/openpype/tools/ayon_utils/widgets/folders_widget.py b/openpype/tools/ayon_utils/widgets/folders_widget.py new file mode 100644 index 0000000000..3fab64f657 --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/folders_widget.py @@ -0,0 +1,364 @@ +import collections + +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + DeselectableTreeView, +) + +from .utils import RefreshThread, get_qt_icon + +SENDER_NAME = "qt_folders_model" +ITEM_ID_ROLE = QtCore.Qt.UserRole + 1 +ITEM_NAME_ROLE = QtCore.Qt.UserRole + 2 + + +class FoldersModel(QtGui.QStandardItemModel): + """Folders model which cares about refresh of folders. + + Args: + controller (AbstractWorkfilesFrontend): The control object. 
+ """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(FoldersModel, self).__init__() + + self._controller = controller + self._items_by_id = {} + self._parent_id_by_id = {} + + self._refresh_threads = {} + self._current_refresh_thread = None + self._last_project_name = None + + self._has_content = False + self._is_refreshing = False + + @property + def is_refreshing(self): + """Model is refreshing. + + Returns: + bool: True if model is refreshing. + """ + return self._is_refreshing + + @property + def has_content(self): + """Has at least one folder. + + Returns: + bool: True if model has at least one folder. + """ + + return self._has_content + + def clear(self): + self._items_by_id = {} + self._parent_id_by_id = {} + self._has_content = False + super(FoldersModel, self).clear() + + def get_index_by_id(self, item_id): + """Get index by folder id. + + Returns: + QtCore.QModelIndex: Index of the folder. Can be invalid if folder + is not available. + """ + item = self._items_by_id.get(item_id) + if item is None: + return QtCore.QModelIndex() + return self.indexFromItem(item) + + def set_project_name(self, project_name): + """Refresh folders items. + + Refresh start thread because it can cause that controller can + start query from database if folders are not cached. + """ + + if not project_name: + self._last_project_name = project_name + self._current_refresh_thread = None + self._fill_items({}) + return + + self._is_refreshing = True + + if self._last_project_name != project_name: + self.clear() + self._last_project_name = project_name + + thread = self._refresh_threads.get(project_name) + if thread is not None: + self._current_refresh_thread = thread + return + + thread = RefreshThread( + project_name, + self._controller.get_folder_items, + project_name, + SENDER_NAME + ) + self._current_refresh_thread = thread + self._refresh_threads[thread.id] = thread + thread.refresh_finished.connect(self._on_refresh_thread) + thread.start() + + def _on_refresh_thread(self, thread_id): + """Callback when refresh thread is finished. + + Technically can be running multiple refresh threads at the same time, + to avoid using values from wrong thread, we check if thread id is + current refresh thread id. + + Folders are stored by id. + + Args: + thread_id (str): Thread id. 
+ """ + + # Make sure to remove thread from '_refresh_threads' dict + thread = self._refresh_threads.pop(thread_id) + if ( + self._current_refresh_thread is None + or thread_id != self._current_refresh_thread.id + ): + return + + self._fill_items(thread.get_result()) + + def _fill_items(self, folder_items_by_id): + if not folder_items_by_id: + if folder_items_by_id is not None: + self.clear() + self._is_refreshing = False + self.refreshed.emit() + return + + self._has_content = True + + folder_ids = set(folder_items_by_id) + ids_to_remove = set(self._items_by_id) - folder_ids + + folder_items_by_parent = collections.defaultdict(dict) + for folder_item in folder_items_by_id.values(): + ( + folder_items_by_parent + [folder_item.parent_id] + [folder_item.entity_id] + ) = folder_item + + hierarchy_queue = collections.deque() + hierarchy_queue.append((self.invisibleRootItem(), None)) + + # Keep pointers to removed items until the refresh finishes + # - some children of the items could be moved and reused elsewhere + removed_items = [] + while hierarchy_queue: + item = hierarchy_queue.popleft() + parent_item, parent_id = item + folder_items = folder_items_by_parent[parent_id] + + items_by_id = {} + folder_ids_to_add = set(folder_items) + for row_idx in reversed(range(parent_item.rowCount())): + child_item = parent_item.child(row_idx) + child_id = child_item.data(ITEM_ID_ROLE) + if child_id in ids_to_remove: + removed_items.append(parent_item.takeRow(row_idx)) + else: + items_by_id[child_id] = child_item + + new_items = [] + for item_id in folder_ids_to_add: + folder_item = folder_items[item_id] + item = items_by_id.get(item_id) + if item is None: + is_new = True + item = QtGui.QStandardItem() + item.setEditable(False) + else: + is_new = self._parent_id_by_id[item_id] != parent_id + + icon = get_qt_icon(folder_item.icon) + item.setData(item_id, ITEM_ID_ROLE) + item.setData(folder_item.name, ITEM_NAME_ROLE) + item.setData(folder_item.label, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + if is_new: + new_items.append(item) + self._items_by_id[item_id] = item + self._parent_id_by_id[item_id] = parent_id + + hierarchy_queue.append((item, item_id)) + + if new_items: + parent_item.appendRows(new_items) + + for item_id in ids_to_remove: + self._items_by_id.pop(item_id) + self._parent_id_by_id.pop(item_id) + + self._is_refreshing = False + self.refreshed.emit() + + +class FoldersWidget(QtWidgets.QWidget): + """Folders widget. + + Widget that handles folders view, model and selection. + + Expected selection handling is disabled by default. If enabled, the + widget will handle the expected in predefined way. Widget is listening + to event 'expected_selection_changed' with expected event data below, + the same data must be available when called method + 'get_expected_selection_data' on controller. + + { + "folder": { + "current": bool, # Folder is what should be set now + "folder_id": Union[str, None], # Folder id that should be selected + }, + ... + } + + Selection is confirmed by calling method 'expected_folder_selected' on + controller. + + + Args: + controller (AbstractWorkfilesFrontend): The control object. + parent (QtWidgets.QWidget): The parent widget. + handle_expected_selection (bool): If True, the widget will handle + the expected selection. Defaults to False. 
+ """ + + def __init__(self, controller, parent, handle_expected_selection=False): + super(FoldersWidget, self).__init__(parent) + + folders_view = DeselectableTreeView(self) + folders_view.setHeaderHidden(True) + + folders_model = FoldersModel(controller) + folders_proxy_model = RecursiveSortFilterProxyModel() + folders_proxy_model.setSourceModel(folders_model) + + folders_view.setModel(folders_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(folders_view, 1) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_change, + ) + controller.register_event_callback( + "folders.refresh.finished", + self._on_folders_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = folders_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + folders_model.refreshed.connect(self._on_model_refresh) + + self._controller = controller + self._folders_view = folders_view + self._folders_model = folders_model + self._folders_proxy_model = folders_proxy_model + + self._handle_expected_selection = handle_expected_selection + self._expected_selection = None + + def set_name_filer(self, name): + """Set filter of folder name. + + Args: + name (str): The string filter. + """ + + self._folders_proxy_model.setFilterFixedString(name) + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self._set_project_name(project_name) + + def _set_project_name(self, project_name): + self._folders_model.set_project_name(project_name) + + def _clear(self): + self._folders_model.clear() + + def _on_folders_refresh_finished(self, event): + if event["sender"] != SENDER_NAME: + self._set_project_name(event["project_name"]) + + def _on_controller_refresh(self): + self._update_expected_selection() + + def _on_model_refresh(self): + if self._expected_selection: + self._set_expected_selection() + self._folders_proxy_model.sort(0) + + def _get_selected_item_id(self): + selection_model = self._folders_view.selectionModel() + for index in selection_model.selectedIndexes(): + item_id = index.data(ITEM_ID_ROLE) + if item_id is not None: + return item_id + return None + + def _on_selection_change(self): + item_id = self._get_selected_item_id() + self._controller.set_selected_folder(item_id) + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _update_expected_selection(self, expected_data=None): + if not self._handle_expected_selection: + return + + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + folder_data = expected_data.get("folder") + if not folder_data or not folder_data["current"]: + return + + folder_id = folder_data["id"] + self._expected_selection = folder_id + if not self._folders_model.is_refreshing: + self._set_expected_selection() + + def _set_expected_selection(self): + if not self._handle_expected_selection: + return + + folder_id = self._expected_selection + self._expected_selection = None + if ( + folder_id is not None + and folder_id != self._get_selected_item_id() + ): + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + proxy_index = 
self._folders_proxy_model.mapFromSource(index) + self._folders_view.setCurrentIndex(proxy_index) + self._controller.expected_folder_selected(folder_id) diff --git a/openpype/tools/ayon_utils/widgets/projects_widget.py b/openpype/tools/ayon_utils/widgets/projects_widget.py new file mode 100644 index 0000000000..818d574910 --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/projects_widget.py @@ -0,0 +1,325 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.ayon_utils.models import PROJECTS_MODEL_SENDER +from .utils import RefreshThread, get_qt_icon + +PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 1 +PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 2 + + +class ProjectsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(ProjectsModel, self).__init__() + self._controller = controller + + self._project_items = {} + + self._empty_item = None + self._empty_item_added = False + + self._is_refreshing = False + self._refresh_thread = None + + @property + def is_refreshing(self): + return self._is_refreshing + + def refresh(self): + self._refresh() + + def has_content(self): + return len(self._project_items) > 0 + + def _add_empty_item(self): + item = self._get_empty_item() + if not self._empty_item_added: + root_item = self.invisibleRootItem() + root_item.appendRow(item) + self._empty_item_added = True + + def _remove_empty_item(self): + if not self._empty_item_added: + return + + root_item = self.invisibleRootItem() + item = self._get_empty_item() + root_item.takeRow(item.row()) + self._empty_item_added = False + + def _get_empty_item(self): + if self._empty_item is None: + item = QtGui.QStandardItem("< No projects >") + item.setFlags(QtCore.Qt.NoItemFlags) + self._empty_item = item + return self._empty_item + + def _refresh(self): + if self._is_refreshing: + return + self._is_refreshing = True + refresh_thread = RefreshThread( + "projects", self._query_project_items + ) + refresh_thread.refresh_finished.connect(self._refresh_finished) + refresh_thread.start() + self._refresh_thread = refresh_thread + + def _query_project_items(self): + return self._controller.get_project_items() + + def _refresh_finished(self): + # TODO check if failed + result = self._refresh_thread.get_result() + self._refresh_thread = None + + self._fill_items(result) + + self._is_refreshing = False + self.refreshed.emit() + + def _fill_items(self, project_items): + items_to_remove = set(self._project_items.keys()) + new_items = [] + for project_item in project_items: + project_name = project_item.name + items_to_remove.discard(project_name) + item = self._project_items.get(project_name) + if item is None: + item = QtGui.QStandardItem() + new_items.append(item) + icon = get_qt_icon(project_item.icon) + item.setData(project_name, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(project_name, PROJECT_NAME_ROLE) + item.setData(project_item.active, PROJECT_IS_ACTIVE_ROLE) + self._project_items[project_name] = item + + root_item = self.invisibleRootItem() + if new_items: + root_item.appendRows(new_items) + + for project_name in items_to_remove: + item = self._project_items.pop(project_name) + root_item.removeRow(item.row()) + + if self.has_content(): + self._remove_empty_item() + else: + self._add_empty_item() + + +class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): + def __init__(self, *args, **kwargs): + super(ProjectSortFilterProxy, self).__init__(*args, **kwargs) + self._filter_inactive = True + # Disable case 
sensitivity + self.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + def lessThan(self, left_index, right_index): + if left_index.data(PROJECT_NAME_ROLE) is None: + return True + + if right_index.data(PROJECT_NAME_ROLE) is None: + return False + + left_is_active = left_index.data(PROJECT_IS_ACTIVE_ROLE) + right_is_active = right_index.data(PROJECT_IS_ACTIVE_ROLE) + if right_is_active == left_is_active: + return super(ProjectSortFilterProxy, self).lessThan( + left_index, right_index + ) + + if left_is_active: + return True + return False + + def filterAcceptsRow(self, source_row, source_parent): + index = self.sourceModel().index(source_row, 0, source_parent) + string_pattern = self.filterRegularExpression().pattern() + if ( + self._filter_inactive + and not index.data(PROJECT_IS_ACTIVE_ROLE) + ): + return False + + if string_pattern: + project_name = index.data(PROJECT_NAME_ROLE) + if project_name is not None: + return string_pattern.lower() in project_name.lower() + + return super(ProjectSortFilterProxy, self).filterAcceptsRow( + source_row, source_parent + ) + + def _custom_index_filter(self, index): + return bool(index.data(PROJECT_IS_ACTIVE_ROLE)) + + def is_active_filter_enabled(self): + return self._filter_inactive + + def set_active_filter_enabled(self, value): + if self._filter_inactive == value: + return + self._filter_inactive = value + self.invalidateFilter() + + +class ProjectsCombobox(QtWidgets.QWidget): + def __init__(self, controller, parent, handle_expected_selection=False): + super(ProjectsCombobox, self).__init__(parent) + + projects_combobox = QtWidgets.QComboBox(self) + combobox_delegate = QtWidgets.QStyledItemDelegate(projects_combobox) + projects_combobox.setItemDelegate(combobox_delegate) + projects_model = ProjectsModel(controller) + projects_proxy_model = ProjectSortFilterProxy() + projects_proxy_model.setSourceModel(projects_model) + projects_combobox.setModel(projects_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(projects_combobox, 1) + + projects_model.refreshed.connect(self._on_model_refresh) + + controller.register_event_callback( + "projects.refresh.finished", + self._on_projects_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + projects_combobox.currentIndexChanged.connect( + self._on_current_index_changed + ) + + self._controller = controller + self._listen_selection_change = True + + self._handle_expected_selection = handle_expected_selection + self._expected_selection = None + + self._projects_combobox = projects_combobox + self._projects_model = projects_model + self._projects_proxy_model = projects_proxy_model + self._combobox_delegate = combobox_delegate + + def refresh(self): + self._projects_model.refresh() + + def set_selection(self, project_name): + """Set selection to a given project. + + Selection change is ignored if project is not found. + + Args: + project_name (str): Name of project. + + Returns: + bool: True if selection was changed, False otherwise. NOTE: + Selection may not be changed if project is not found, or if + project is already selected.
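+
+        Example:
+            Illustrative call; "demo_project" is a placeholder name:
+
+            >>> combobox.set_selection("demo_project")
+            True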
+ """ + + idx = self._projects_combobox.findData( + project_name, PROJECT_NAME_ROLE) + if idx < 0: + return False + if idx != self._projects_combobox.currentIndex(): + self._projects_combobox.setCurrentIndex(idx) + return True + return False + + def set_listen_to_selection_change(self, listen): + """Disable listening to changes of the selection. + + Because combobox is triggering selection change when it's model + is refreshed, it's necessary to disable listening to selection for + some cases, e.g. when is on a different page of UI and should be just + refreshed. + + Args: + listen (bool): Enable or disable listening to selection changes. + """ + + self._listen_selection_change = listen + + def get_current_project_name(self): + """Name of selected project. + + Returns: + Union[str, None]: Name of selected project, or None if no project + """ + + idx = self._projects_combobox.currentIndex() + if idx < 0: + return None + return self._projects_combobox.itemData(idx, PROJECT_NAME_ROLE) + + def _on_current_index_changed(self, idx): + if not self._listen_selection_change: + return + project_name = self._projects_combobox.itemData( + idx, PROJECT_NAME_ROLE) + self._controller.set_selected_project(project_name) + + def _on_model_refresh(self): + self._projects_proxy_model.sort(0) + if self._expected_selection: + self._set_expected_selection() + + def _on_projects_refresh_finished(self, event): + if event["sender"] != PROJECTS_MODEL_SENDER: + self._projects_model.refresh() + + def _on_controller_refresh(self): + self._update_expected_selection() + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _set_expected_selection(self): + if not self._handle_expected_selection: + return + project_name = self._expected_selection + if project_name is not None: + if project_name != self.get_current_project_name(): + self.set_selection(project_name) + else: + # Fake project change + self._on_current_index_changed( + self._projects_combobox.currentIndex() + ) + + self._controller.expected_project_selected(project_name) + + def _update_expected_selection(self, expected_data=None): + if not self._handle_expected_selection: + return + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + project_data = expected_data.get("project") + if ( + not project_data + or not project_data["current"] + or project_data["selected"] + ): + return + self._expected_selection = project_data["name"] + if not self._projects_model.is_refreshing: + self._set_expected_selection() + + +class ProjectsWidget(QtWidgets.QWidget): + # TODO implement + pass diff --git a/openpype/tools/ayon_utils/widgets/tasks_widget.py b/openpype/tools/ayon_utils/widgets/tasks_widget.py new file mode 100644 index 0000000000..66ebd0b777 --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/tasks_widget.py @@ -0,0 +1,436 @@ +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.style import get_disabled_entity_icon_color +from openpype.tools.utils import DeselectableTreeView + +from .utils import RefreshThread, get_qt_icon + +SENDER_NAME = "qt_tasks_model" +ITEM_ID_ROLE = QtCore.Qt.UserRole + 1 +PARENT_ID_ROLE = QtCore.Qt.UserRole + 2 +ITEM_NAME_ROLE = QtCore.Qt.UserRole + 3 +TASK_TYPE_ROLE = QtCore.Qt.UserRole + 4 + + +class TasksModel(QtGui.QStandardItemModel): + """Tasks model which cares about refresh of tasks by folder id. + + Args: + controller (AbstractWorkfilesFrontend): The control object. 
+ """ + + refreshed = QtCore.Signal() + + def __init__(self, controller): + super(TasksModel, self).__init__() + + self._controller = controller + + self._items_by_name = {} + self._has_content = False + self._is_refreshing = False + + self._invalid_selection_item_used = False + self._invalid_selection_item = None + self._empty_tasks_item_used = False + self._empty_tasks_item = None + + self._last_project_name = None + self._last_folder_id = None + + self._refresh_threads = {} + self._current_refresh_thread = None + + # Initial state + self._add_invalid_selection_item() + + def clear(self): + self._items_by_name = {} + self._has_content = False + self._remove_invalid_items() + super(TasksModel, self).clear() + + def refresh(self, project_name, folder_id): + """Refresh tasks for folder. + + Args: + project_name (Union[str]): Name of project. + folder_id (Union[str, None]): Folder id. + """ + + self._refresh(project_name, folder_id) + + def get_index_by_name(self, task_name): + """Find item by name and return its index. + + Returns: + QtCore.QModelIndex: Index of item. Is invalid if task is not + found by name. + """ + + item = self._items_by_name.get(task_name) + if item is None: + return QtCore.QModelIndex() + return self.indexFromItem(item) + + def get_last_project_name(self): + """Get last refreshed project name. + + Returns: + Union[str, None]: Project name. + """ + + return self._last_project_name + + def get_last_folder_id(self): + """Get last refreshed folder id. + + Returns: + Union[str, None]: Folder id. + """ + + return self._last_folder_id + + def set_selected_project(self, project_name): + self._selected_project_name = project_name + + def _get_invalid_selection_item(self): + if self._invalid_selection_item is None: + item = QtGui.QStandardItem("Select a folder") + item.setFlags(QtCore.Qt.NoItemFlags) + icon = get_qt_icon({ + "type": "awesome-font", + "name": "fa.times", + "color": get_disabled_entity_icon_color(), + }) + item.setData(icon, QtCore.Qt.DecorationRole) + self._invalid_selection_item = item + return self._invalid_selection_item + + def _get_empty_task_item(self): + if self._empty_tasks_item is None: + item = QtGui.QStandardItem("No task") + icon = get_qt_icon({ + "type": "awesome-font", + "name": "fa.exclamation-circle", + "color": get_disabled_entity_icon_color(), + }) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setFlags(QtCore.Qt.NoItemFlags) + self._empty_tasks_item = item + return self._empty_tasks_item + + def _add_invalid_item(self, item): + self.clear() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_invalid_item(self, item): + root_item = self.invisibleRootItem() + root_item.takeRow(item.row()) + + def _remove_invalid_items(self): + self._remove_invalid_selection_item() + self._remove_empty_task_item() + + def _add_invalid_selection_item(self): + if not self._invalid_selection_item_used: + self._add_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = True + + def _remove_invalid_selection_item(self): + if self._invalid_selection_item: + self._remove_invalid_item(self._get_invalid_selection_item()) + self._invalid_selection_item_used = False + + def _add_empty_task_item(self): + if not self._empty_tasks_item_used: + self._add_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = True + + def _remove_empty_task_item(self): + if self._empty_tasks_item_used: + self._remove_invalid_item(self._get_empty_task_item()) + self._empty_tasks_item_used = False + + def 
_refresh(self, project_name, folder_id): + self._is_refreshing = True + self._last_project_name = project_name + self._last_folder_id = folder_id + if not folder_id: + self._add_invalid_selection_item() + self._current_refresh_thread = None + self._is_refreshing = False + self.refreshed.emit() + return + + thread = self._refresh_threads.get(folder_id) + if thread is not None: + self._current_refresh_thread = thread + return + thread = RefreshThread( + folder_id, + self._controller.get_task_items, + project_name, + folder_id + ) + self._current_refresh_thread = thread + self._refresh_threads[thread.id] = thread + thread.refresh_finished.connect(self._on_refresh_thread) + thread.start() + + def _on_refresh_thread(self, thread_id): + """Callback when refresh thread is finished. + + Technically can be running multiple refresh threads at the same time, + to avoid using values from wrong thread, we check if thread id is + current refresh thread id. + + Tasks are stored by name, so if a folder has same task name as + previously selected folder it keeps the selection. + + Args: + thread_id (str): Thread id. + """ + + # Make sure to remove thread from '_refresh_threads' dict + thread = self._refresh_threads.pop(thread_id) + if ( + self._current_refresh_thread is None + or thread_id != self._current_refresh_thread.id + ): + return + + task_items = thread.get_result() + # Task items are refreshed + if task_items is None: + return + + # No tasks are available on folder + if not task_items: + self._add_empty_task_item() + return + self._remove_invalid_items() + + new_items = [] + new_names = set() + for task_item in task_items: + name = task_item.name + new_names.add(name) + item = self._items_by_name.get(name) + if item is None: + item = QtGui.QStandardItem() + item.setEditable(False) + new_items.append(item) + self._items_by_name[name] = item + + # TODO cache locally + icon = get_qt_icon(task_item.icon) + item.setData(task_item.label, QtCore.Qt.DisplayRole) + item.setData(name, ITEM_NAME_ROLE) + item.setData(task_item.id, ITEM_ID_ROLE) + item.setData(task_item.parent_id, PARENT_ID_ROLE) + item.setData(icon, QtCore.Qt.DecorationRole) + + root_item = self.invisibleRootItem() + + for name in set(self._items_by_name) - new_names: + item = self._items_by_name.pop(name) + root_item.removeRow(item.row()) + + if new_items: + root_item.appendRows(new_items) + + self._has_content = root_item.rowCount() > 0 + self._is_refreshing = False + self.refreshed.emit() + + @property + def is_refreshing(self): + """Model is refreshing. + + Returns: + bool: Model is refreshing + """ + + return self._is_refreshing + + @property + def has_content(self): + """Model has content. + + Returns: + bools: Have at least one task. + """ + + return self._has_content + + def headerData(self, section, orientation, role): + # Show nice labels in the header + if ( + role == QtCore.Qt.DisplayRole + and orientation == QtCore.Qt.Horizontal + ): + if section == 0: + return "Tasks" + + return super(TasksModel, self).headerData( + section, orientation, role + ) + + +class TasksWidget(QtWidgets.QWidget): + """Tasks widget. + + Widget that handles tasks view, model and selection. + + Args: + controller (AbstractWorkfilesFrontend): Workfiles controller. + parent (QtWidgets.QWidget): Parent widget. + handle_expected_selection (Optional[bool]): Handle expected selection. 
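+
+    Example:
+        Construction sketch ('controller' and 'parent' are placeholders;
+        the widget reacts to 'selection.folder.changed' events emitted by
+        the controller):
+
+        >>> tasks_widget = TasksWidget(controller, parent)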
+ """ + + def __init__(self, controller, parent, handle_expected_selection=False): + super(TasksWidget, self).__init__(parent) + + tasks_view = DeselectableTreeView(self) + tasks_view.setIndentation(0) + + tasks_model = TasksModel(controller) + tasks_proxy_model = QtCore.QSortFilterProxyModel() + tasks_proxy_model.setSourceModel(tasks_model) + + tasks_view.setModel(tasks_proxy_model) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(tasks_view, 1) + + controller.register_event_callback( + "tasks.refresh.finished", + self._on_tasks_refresh_finished + ) + controller.register_event_callback( + "selection.folder.changed", + self._folder_selection_changed + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = tasks_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + tasks_model.refreshed.connect(self._on_tasks_model_refresh) + + self._controller = controller + self._tasks_view = tasks_view + self._tasks_model = tasks_model + self._tasks_proxy_model = tasks_proxy_model + + self._selected_folder_id = None + + self._handle_expected_selection = handle_expected_selection + self._expected_selection_data = None + + def _clear(self): + self._tasks_model.clear() + + def _on_tasks_refresh_finished(self, event): + """Tasks were refreshed in controller. + + Ignore if refresh was triggered by tasks model, or refreshed folder is + not the same as currently selected folder. + + Args: + event (Event): Event object. + """ + + # Refresh only if current folder id is the same + if ( + event["sender"] == SENDER_NAME + or event["folder_id"] != self._selected_folder_id + ): + return + self._tasks_model.refresh( + event["project_name"], self._selected_folder_id + ) + + def _folder_selection_changed(self, event): + self._selected_folder_id = event["folder_id"] + self._tasks_model.refresh( + event["project_name"], self._selected_folder_id + ) + + def _on_tasks_model_refresh(self): + if not self._set_expected_selection(): + self._on_selection_change() + self._tasks_proxy_model.sort(0) + + def _get_selected_item_ids(self): + selection_model = self._tasks_view.selectionModel() + for index in selection_model.selectedIndexes(): + task_id = index.data(ITEM_ID_ROLE) + task_name = index.data(ITEM_NAME_ROLE) + parent_id = index.data(PARENT_ID_ROLE) + if task_name is not None: + return parent_id, task_id, task_name + return self._selected_folder_id, None, None + + def _on_selection_change(self): + # Don't trigger task change during refresh + # - a task was deselected if that happens + # - can cause crash triggered during tasks refreshing + if self._tasks_model.is_refreshing: + return + + parent_id, task_id, task_name = self._get_selected_item_ids() + self._controller.set_selected_task(task_id, task_name) + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _set_expected_selection(self): + if not self._handle_expected_selection: + return False + + if self._expected_selection_data is None: + return False + folder_id = self._expected_selection_data["folder_id"] + task_name = self._expected_selection_data["task_name"] + self._expected_selection_data = None + model_folder_id = self._tasks_model.get_last_folder_id() + if folder_id != model_folder_id: + return False + if task_name is not None: + index = self._tasks_model.get_index_by_name(task_name) + if 
index.isValid(): + proxy_index = self._tasks_proxy_model.mapFromSource(index) + self._tasks_view.setCurrentIndex(proxy_index) + self._controller.expected_task_selected(folder_id, task_name) + return True + + def _update_expected_selection(self, expected_data=None): + if not self._handle_expected_selection: + return + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + folder_data = expected_data.get("folder") + task_data = expected_data.get("task") + if ( + not folder_data + or not task_data + or not task_data["current"] + ): + return + folder_id = folder_data["id"] + self._expected_selection_data = { + "task_name": task_data["name"], + "folder_id": folder_id, + } + model_folder_id = self._tasks_model.get_last_folder_id() + if folder_id != model_folder_id or self._tasks_model.is_refreshing: + return + self._set_expected_selection() diff --git a/openpype/tools/ayon_utils/widgets/utils.py b/openpype/tools/ayon_utils/widgets/utils.py new file mode 100644 index 0000000000..8bc3b1ea9b --- /dev/null +++ b/openpype/tools/ayon_utils/widgets/utils.py @@ -0,0 +1,98 @@ +import os +from functools import partial + +from qtpy import QtCore, QtGui + +from openpype.tools.utils.lib import get_qta_icon_by_name_and_color + + +class RefreshThread(QtCore.QThread): + refresh_finished = QtCore.Signal(str) + + def __init__(self, thread_id, func, *args, **kwargs): + super(RefreshThread, self).__init__() + self._id = thread_id + self._callback = partial(func, *args, **kwargs) + self._exception = None + self._result = None + + @property + def id(self): + return self._id + + @property + def failed(self): + return self._exception is not None + + def run(self): + try: + self._result = self._callback() + except Exception as exc: + self._exception = exc + self.refresh_finished.emit(self.id) + + def get_result(self): + return self._result + + +class _IconsCache: + """Cache for icons.""" + + _cache = {} + _default = None + + @classmethod + def _get_cache_key(cls, icon_def): + parts = [] + icon_type = icon_def["type"] + if icon_type == "path": + parts = [icon_type, icon_def["path"]] + + elif icon_type == "awesome-font": + parts = [icon_type, icon_def["name"], icon_def["color"]] + return "|".join(parts) + + @classmethod + def get_icon(cls, icon_def): + icon_type = icon_def["type"] + cache_key = cls._get_cache_key(icon_def) + cache = cls._cache.get(cache_key) + if cache is not None: + return cache + + icon = None + if icon_type == "path": + path = icon_def["path"] + if os.path.exists(path): + icon = QtGui.QIcon(path) + + elif icon_type == "awesome-font": + icon_name = icon_def["name"] + icon_color = icon_def["color"] + icon = get_qta_icon_by_name_and_color(icon_name, icon_color) + if icon is None: + icon = get_qta_icon_by_name_and_color( + "fa.{}".format(icon_name), icon_color) + if icon is None: + icon = cls.get_default() + cls._cache[cache_key] = icon + return icon + + @classmethod + def get_default(cls): + pix = QtGui.QPixmap(1, 1) + pix.fill(QtCore.Qt.transparent) + return QtGui.QIcon(pix) + + +def get_qt_icon(icon_def): + """Returns icon from cache or creates new one. + + Args: + icon_def (dict[str, Any]): Icon definition. + + Returns: + QtGui.QIcon: Icon. 
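+
+    Example:
+        Both icon definition types handled by the cache (illustrative
+        values; a QApplication must exist before icons are created):
+
+        >>> folder_icon = get_qt_icon({
+        ...     "type": "awesome-font",
+        ...     "name": "fa.folder",
+        ...     "color": "#eeeeee",
+        ... })
+        >>> disk_icon = get_qt_icon(
+        ...     {"type": "path", "path": "/path/to/icon.png"}
+        ... )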
+ """ + + return _IconsCache.get_icon(icon_def) diff --git a/openpype/tools/ayon_workfiles/abstract.py b/openpype/tools/ayon_workfiles/abstract.py index e30a2c2499..ce399fd4c6 100644 --- a/openpype/tools/ayon_workfiles/abstract.py +++ b/openpype/tools/ayon_workfiles/abstract.py @@ -442,6 +442,16 @@ class AbstractWorkfilesBackend(AbstractWorkfilesCommon): pass + @abstractmethod + def get_project_entity(self): + """Get current project entity. + + Returns: + dict[str, Any]: Project entity data. + """ + + pass + @abstractmethod def get_folder_entity(self, folder_id): """Get folder entity by id. @@ -904,10 +914,12 @@ class AbstractWorkfilesFrontend(AbstractWorkfilesCommon): # Controller actions @abstractmethod - def open_workfile(self, filepath): - """Open a workfile. + def open_workfile(self, folder_id, task_id, filepath): + """Open a workfile for context. Args: + folder_id (str): Folder id. + task_id (str): Task id. filepath (str): Workfile path. """ diff --git a/openpype/tools/ayon_workfiles/control.py b/openpype/tools/ayon_workfiles/control.py index fc8819bff3..3784959caf 100644 --- a/openpype/tools/ayon_workfiles/control.py +++ b/openpype/tools/ayon_workfiles/control.py @@ -193,6 +193,9 @@ class BaseWorkfileController( self._project_anatomy = Anatomy(self.get_current_project_name()) return self._project_anatomy + def get_project_entity(self): + return self._entities_model.get_project_entity() + def get_folder_entity(self, folder_id): return self._entities_model.get_folder_entity(folder_id) @@ -449,12 +452,12 @@ class BaseWorkfileController( self._emit_event("controller.refresh.finished") # Controller actions - def open_workfile(self, filepath): + def open_workfile(self, folder_id, task_id, filepath): self._emit_event("open_workfile.started") failed = False try: - self._host_open_workfile(filepath) + self._open_workfile(folder_id, task_id, filepath) except Exception: failed = True @@ -572,6 +575,53 @@ class BaseWorkfileController( self._expected_selection.get_expected_selection_data(), ) + def _get_event_context_data( + self, project_name, folder_id, task_id, folder=None, task=None + ): + if folder is None: + folder = self.get_folder_entity(folder_id) + if task is None: + task = self.get_task_entity(task_id) + # NOTE keys should be OpenPype compatible + return { + "project_name": project_name, + "folder_id": folder_id, + "asset_id": folder_id, + "asset_name": folder["name"], + "task_id": task_id, + "task_name": task["name"], + "host_name": self.get_host_name(), + } + + def _open_workfile(self, folder_id, task_id, filepath): + project_name = self.get_current_project_name() + event_data = self._get_event_context_data( + project_name, folder_id, task_id + ) + event_data["filepath"] = filepath + + emit_event("workfile.open.before", event_data, source="workfiles.tool") + + # Change context + task_name = event_data["task_name"] + if ( + folder_id != self.get_current_folder_id() + or task_name != self.get_current_task_name() + ): + # Use OpenPype asset-like object + asset_doc = get_asset_by_id( + event_data["project_name"], + event_data["folder_id"], + ) + change_current_context( + asset_doc, + event_data["task_name"] + ) + + self._host_open_workfile(filepath) + + emit_event("workfile.open.after", event_data, source="workfiles.tool") + def _save_as_workfile( self, folder_id, @@ -588,18 +638,14 @@ class BaseWorkfileController( task_name = task["name"] # QUESTION should the data be different for 'before' and 'after'? 
- # NOTE keys should be OpenPype compatible - event_data = { - "project_name": project_name, - "folder_id": folder_id, - "asset_id": folder_id, - "asset_name": folder["name"], - "task_id": task_id, - "task_name": task_name, - "host_name": self.get_host_name(), + event_data = self._get_event_context_data( + project_name, folder_id, task_id, folder, task + ) + event_data.update({ "filename": filename, "workdir_path": workdir, - } + }) + emit_event("workfile.save.before", event_data, source="workfiles.tool") # Create workfiles root folder diff --git a/openpype/tools/ayon_workfiles/models/hierarchy.py b/openpype/tools/ayon_workfiles/models/hierarchy.py index 948c0b8a17..a1d51525da 100644 --- a/openpype/tools/ayon_workfiles/models/hierarchy.py +++ b/openpype/tools/ayon_workfiles/models/hierarchy.py @@ -77,8 +77,11 @@ class EntitiesModel(object): event_source = "entities.model" def __init__(self, controller): + project_cache = CacheItem() + project_cache.set_invalid({}) folders_cache = CacheItem() folders_cache.set_invalid({}) + self._project_cache = project_cache self._folders_cache = folders_cache self._tasks_cache = {} @@ -90,6 +93,7 @@ class EntitiesModel(object): self._controller = controller def reset(self): + self._project_cache.set_invalid({}) self._folders_cache.set_invalid({}) self._tasks_cache = {} @@ -99,6 +103,13 @@ class EntitiesModel(object): def refresh(self): self._refresh_folders_cache() + def get_project_entity(self): + if not self._project_cache.is_valid: + project_name = self._controller.get_current_project_name() + project_entity = ayon_api.get_project(project_name) + self._project_cache.update_data(project_entity) + return self._project_cache.get_data() + def get_folder_items(self, sender): if not self._folders_cache.is_valid: self._refresh_folders_cache(sender) diff --git a/openpype/tools/ayon_workfiles/models/workfiles.py b/openpype/tools/ayon_workfiles/models/workfiles.py index eb82f62de3..4d989ed22c 100644 --- a/openpype/tools/ayon_workfiles/models/workfiles.py +++ b/openpype/tools/ayon_workfiles/models/workfiles.py @@ -43,13 +43,21 @@ def get_folder_template_data(folder): } -def get_task_template_data(task): +def get_task_template_data(project_entity, task): if not task: return {} + short_name = None + task_type_name = task["taskType"] + for task_type_info in project_entity["taskTypes"]: + if task_type_info["name"] == task_type_name: + short_name = task_type_info["shortName"] + break + return { "task": { "name": task["name"], - "type": task["taskType"] + "type": task_type_name, + "short": short_name, } } @@ -145,12 +153,13 @@ class WorkareaModel: self._fill_data_by_folder_id[folder_id] = fill_data return copy.deepcopy(fill_data) - def _get_task_data(self, folder_id, task_id): + def _get_task_data(self, project_entity, folder_id, task_id): task_data = self._task_data_by_folder_id.setdefault(folder_id, {}) if task_id not in task_data: task = self._controller.get_task_entity(task_id) if task: - task_data[task_id] = get_task_template_data(task) + task_data[task_id] = get_task_template_data( + project_entity, task) return copy.deepcopy(task_data[task_id]) def _prepare_fill_data(self, folder_id, task_id): @@ -159,7 +168,8 @@ class WorkareaModel: base_data = self._get_base_data() folder_data = self._get_folder_data(folder_id) - task_data = self._get_task_data(folder_id, task_id) + project_entity = self._controller.get_project_entity() + task_data = self._get_task_data(project_entity, folder_id, task_id) base_data.update(folder_data) base_data.update(task_data) diff --git 
a/openpype/tools/ayon_workfiles/widgets/files_widget.py b/openpype/tools/ayon_workfiles/widgets/files_widget.py index fbf4dbc593..656ddf1dd8 100644 --- a/openpype/tools/ayon_workfiles/widgets/files_widget.py +++ b/openpype/tools/ayon_workfiles/widgets/files_widget.py @@ -106,7 +106,8 @@ class FilesWidget(QtWidgets.QWidget): self._on_published_cancel_clicked) self._selected_folder_id = None - self._selected_tak_name = None + self._selected_task_id = None + self._selected_task_name = None self._pre_select_folder_id = None self._pre_select_task_name = None @@ -178,7 +179,7 @@ class FilesWidget(QtWidgets.QWidget): # ------------------------------------------------------------- # Workarea workfiles # ------------------------------------------------------------- - def _open_workfile(self, filepath): + def _open_workfile(self, folder_id, task_name, filepath): if self._controller.has_unsaved_changes(): result = self._save_changes_prompt() if result is None: @@ -186,12 +187,15 @@ class FilesWidget(QtWidgets.QWidget): if result: self._controller.save_current_workfile() - self._controller.open_workfile(filepath) + self._controller.open_workfile(folder_id, task_name, filepath) def _on_workarea_open_clicked(self): path = self._workarea_widget.get_selected_path() - if path: - self._open_workfile(path) + if not path: + return + folder_id = self._selected_folder_id + task_id = self._selected_task_id + self._open_workfile(folder_id, task_id, path) def _on_current_open_requests(self): self._on_workarea_open_clicked() @@ -238,8 +242,12 @@ class FilesWidget(QtWidgets.QWidget): } filepath = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0] - if filepath: - self._open_workfile(filepath) + if not filepath: + return + + folder_id = self._selected_folder_id + task_id = self._selected_task_id + self._open_workfile(folder_id, task_id, filepath) def _on_workarea_save_clicked(self): result = self._exec_save_as_dialog() @@ -279,10 +287,11 @@ class FilesWidget(QtWidgets.QWidget): def _on_task_changed(self, event): self._selected_folder_id = event["folder_id"] - self._selected_tak_name = event["task_name"] + self._selected_task_id = event["task_id"] + self._selected_task_name = event["task_name"] self._valid_selected_context = ( self._selected_folder_id is not None - and self._selected_tak_name is not None + and self._selected_task_id is not None ) self._update_published_btns_state() @@ -311,7 +320,7 @@ class FilesWidget(QtWidgets.QWidget): if enabled: self._pre_select_folder_id = self._selected_folder_id - self._pre_select_task_name = self._selected_tak_name + self._pre_select_task_name = self._selected_task_name else: self._pre_select_folder_id = None self._pre_select_task_name = None @@ -334,7 +343,7 @@ class FilesWidget(QtWidgets.QWidget): return True if self._pre_select_task_name is None: return False - return self._pre_select_task_name != self._selected_tak_name + return self._pre_select_task_name != self._selected_task_name def _on_published_cancel_clicked(self): folder_id = self._pre_select_folder_id diff --git a/openpype/tools/launcher/actions.py b/openpype/tools/launcher/actions.py index 61660ee9b7..285b5d04ca 100644 --- a/openpype/tools/launcher/actions.py +++ b/openpype/tools/launcher/actions.py @@ -1,8 +1,5 @@ -import os - from qtpy import QtWidgets, QtGui -from openpype import PLUGINS_DIR from openpype import style from openpype import resources from openpype.lib import ( @@ -10,46 +7,7 @@ from openpype.lib import ( ApplictionExecutableNotFound, ApplicationLaunchFailed ) -from openpype.pipeline import ( - 
LauncherAction, - register_launcher_action_path, -) - - -def register_actions_from_paths(paths): - if not paths: - return - - for path in paths: - if not path: - continue - - if path.startswith("."): - print(( - "BUG: Relative paths are not allowed for security reasons. {}" - ).format(path)) - continue - - if not os.path.exists(path): - print("Path was not found: {}".format(path)) - continue - - register_launcher_action_path(path) - - -def register_config_actions(): - """Register actions from the configuration for Launcher""" - - actions_dir = os.path.join(PLUGINS_DIR, "actions") - if os.path.exists(actions_dir): - register_actions_from_paths([actions_dir]) - - -def register_environment_actions(): - """Register actions from AVALON_ACTIONS for Launcher.""" - - paths_str = os.environ.get("AVALON_ACTIONS") or "" - register_actions_from_paths(paths_str.split(os.pathsep)) +from openpype.pipeline import LauncherAction # TODO move to 'openpype.pipeline.actions' diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index d4e0ae0453..a6264303d5 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -176,11 +176,10 @@ class PublishReportMaker: self._create_discover_result = None self._convert_discover_result = None self._publish_discover_result = None - self._plugin_data = [] - self._plugin_data_with_plugin = [] - self._stored_plugins = [] - self._current_plugin_data = [] + self._plugin_data_by_id = {} + self._current_plugin = None + self._current_plugin_data = {} self._all_instances_by_id = {} self._current_context = None @@ -192,8 +191,9 @@ class PublishReportMaker: create_context.convertor_discover_result ) self._publish_discover_result = create_context.publish_discover_result - self._plugin_data = [] - self._plugin_data_with_plugin = [] + + self._plugin_data_by_id = {} + self._current_plugin = None self._current_plugin_data = {} self._all_instances_by_id = {} self._current_context = context @@ -210,18 +210,11 @@ class PublishReportMaker: if self._current_plugin_data: self._current_plugin_data["passed"] = True + self._current_plugin = plugin self._current_plugin_data = self._add_plugin_data_item(plugin) - def _get_plugin_data_item(self, plugin): - store_item = None - for item in self._plugin_data_with_plugin: - if item["plugin"] is plugin: - store_item = item["data"] - break - return store_item - def _add_plugin_data_item(self, plugin): - if plugin in self._stored_plugins: + if plugin.id in self._plugin_data_by_id: # A plugin would be processed more than once. 
What can cause it: # - there is a bug in controller # - plugin class is imported into multiple files @@ -229,15 +222,9 @@ class PublishReportMaker: raise ValueError( "Plugin '{}' is already stored".format(str(plugin))) - self._stored_plugins.append(plugin) - plugin_data_item = self._create_plugin_data_item(plugin) + self._plugin_data_by_id[plugin.id] = plugin_data_item - self._plugin_data_with_plugin.append({ - "plugin": plugin, - "data": plugin_data_item - }) - self._plugin_data.append(plugin_data_item) return plugin_data_item def _create_plugin_data_item(self, plugin): @@ -278,7 +265,7 @@ class PublishReportMaker: """Add result of single action.""" plugin = result["plugin"] - store_item = self._get_plugin_data_item(plugin) + store_item = self._plugin_data_by_id.get(plugin.id) if store_item is None: store_item = self._add_plugin_data_item(plugin) @@ -300,14 +287,24 @@ class PublishReportMaker: instance, instance in self._current_context ) - plugins_data = copy.deepcopy(self._plugin_data) - if plugins_data and not plugins_data[-1]["passed"]: - plugins_data[-1]["passed"] = True + plugins_data_by_id = copy.deepcopy( + self._plugin_data_by_id + ) + + # Ensure the current plug-in is marked as `passed` in the result + # so that it shows on reports for paused publishes + if self._current_plugin is not None: + current_plugin_data = plugins_data_by_id.get( + self._current_plugin.id + ) + if current_plugin_data and not current_plugin_data["passed"]: + current_plugin_data["passed"] = True if publish_plugins: for plugin in publish_plugins: - if plugin not in self._stored_plugins: - plugins_data.append(self._create_plugin_data_item(plugin)) + if plugin.id not in plugins_data_by_id: + plugins_data_by_id[plugin.id] = \ + self._create_plugin_data_item(plugin) reports = [] if self._create_discover_result is not None: @@ -328,7 +325,7 @@ class PublishReportMaker: ) return { - "plugins_data": plugins_data, + "plugins_data": list(plugins_data_by_id.values()), "instances": instances_details, "context": self._extract_context_data(self._current_context), "crashed_file_paths": crashed_file_paths, diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py index d343353112..018088e916 100644 --- a/openpype/tools/utils/__init__.py +++ b/openpype/tools/utils/__init__.py @@ -15,6 +15,10 @@ from .widgets import ( IconButton, PixmapButton, SeparatorWidget, + VerticalExpandButton, + SquareButton, + RefreshButton, + GoToCurrentButton, ) from .views import DeselectableTreeView from .error_dialog import ErrorMessageBox @@ -60,6 +64,11 @@ __all__ = ( "PixmapButton", "SeparatorWidget", + "VerticalExpandButton", + "SquareButton", + "RefreshButton", + "GoToCurrentButton", + "DeselectableTreeView", "ErrorMessageBox", diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index a70437cc65..9223afecaa 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -6,10 +6,13 @@ import qtawesome from openpype.style import ( get_objected_colors, - get_style_image_path + get_style_image_path, + get_default_tools_icon_color, ) from openpype.lib.attribute_definitions import AbstractAttrDef +from .lib import get_qta_icon_by_name_and_color + log = logging.getLogger(__name__) @@ -777,3 +780,77 @@ class SeparatorWidget(QtWidgets.QFrame): self._orientation = orientation self._set_size(self._size) + + +def get_refresh_icon(): + return get_qta_icon_by_name_and_color( + "fa.refresh", get_default_tools_icon_color() + ) + + +def get_go_to_current_icon(): + return 
get_qta_icon_by_name_and_color( + "fa.arrow-down", get_default_tools_icon_color() + ) + + +class VerticalExpandButton(QtWidgets.QPushButton): + """Button which expands vertically. + + By default, a button is a little smaller than other widgets like + QLineEdit. This button expands vertically to match the height of + the widgets next to it. + """ + + def __init__(self, parent=None): + super(VerticalExpandButton, self).__init__(parent) + + sp = self.sizePolicy() + sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + self.setSizePolicy(sp) + + +class SquareButton(QtWidgets.QPushButton): + """Make the button square. + + Change width to match height on resize. + """ + + def __init__(self, *args, **kwargs): + super(SquareButton, self).__init__(*args, **kwargs) + + sp = self.sizePolicy() + sp.setVerticalPolicy(QtWidgets.QSizePolicy.Minimum) + sp.setHorizontalPolicy(QtWidgets.QSizePolicy.Minimum) + self.setSizePolicy(sp) + self._ideal_width = None + + def showEvent(self, event): + super(SquareButton, self).showEvent(event) + self._ideal_width = self.height() + self.updateGeometry() + + def resizeEvent(self, event): + super(SquareButton, self).resizeEvent(event) + self._ideal_width = self.height() + self.updateGeometry() + + def sizeHint(self): + sh = super(SquareButton, self).sizeHint() + ideal_width = self._ideal_width + if ideal_width is None: + ideal_width = sh.height() + sh.setWidth(ideal_width) + return sh + + +class RefreshButton(VerticalExpandButton): + def __init__(self, parent=None): + super(RefreshButton, self).__init__(parent) + self.setIcon(get_refresh_icon()) + + +class GoToCurrentButton(VerticalExpandButton): + def __init__(self, parent=None): + super(GoToCurrentButton, self).__init__(parent) + self.setIcon(get_go_to_current_icon()) diff --git a/openpype/version.py b/openpype/version.py index 483b70436a..399c1404b1 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.17.1-nightly.1" +__version__ = "3.17.2-nightly.2" diff --git a/pyproject.toml b/pyproject.toml index d0b1ecf589..2460185bdd 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.17.0" # OpenPype +version = "3.17.1" # OpenPype description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "] license = "MIT License" diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json index 8e5b28623e..e40b8d41f6 100644 --- a/server_addon/applications/server/applications.json +++ b/server_addon/applications/server/applications.json @@ -237,6 +237,7 @@ }, { "name": "13-0", + "label": "13.0", "use_python_2": false, "executables": { "windows": [ @@ -319,6 +320,7 @@ }, { "name": "13-0", + "label": "13.0", "use_python_2": false, "executables": { "windows": [ @@ -405,6 +407,7 @@ }, { "name": "13-0", + "label": "13.0", "use_python_2": false, "executables": { "windows": [ @@ -491,6 +494,7 @@ }, { "name": "13-0", + "label": "13.0", "use_python_2": false, "executables": { "windows": [ @@ -577,6 +581,7 @@ }, { "name": "13-0", + "label": "13.0", "use_python_2": false, "executables": { "windows": [ diff --git a/server_addon/blender/server/settings/main.py b/server_addon/blender/server/settings/main.py index f6118d39cd..4476ea709b 100644 --- a/server_addon/blender/server/settings/main.py +++ b/server_addon/blender/server/settings/main.py @@ -9,6 +9,10 @@ from .publish_plugins import ( PublishPuginsModel, DEFAULT_BLENDER_PUBLISH_SETTINGS ) +from .render_settings import ( + RenderSettingsModel, + DEFAULT_RENDER_SETTINGS +) class UnitScaleSettingsModel(BaseSettingsModel): @@ -37,6 +41,8 @@ class BlenderSettings(BaseSettingsModel): default_factory=BlenderImageIOModel, title="Color Management (ImageIO)" ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") workfile_builder: TemplateWorkfileBaseOptions = Field( default_factory=TemplateWorkfileBaseOptions, title="Workfile Builder" @@ -55,6 +61,7 @@ DEFAULT_VALUES = { }, "set_frames_startup": True, "set_resolution_startup": True, + "render_settings": DEFAULT_RENDER_SETTINGS, "publish": DEFAULT_BLENDER_PUBLISH_SETTINGS, "workfile_builder": { "create_first_version": False, diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py index 65dda78411..5e047b7013 100644 --- a/server_addon/blender/server/settings/publish_plugins.py +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -26,6 +26,16 @@ class ValidatePluginModel(BaseSettingsModel): active: bool = Field(title="Active") +class ValidateFileSavedModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFileSaved") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_families: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + class ExtractBlendModel(BaseSettingsModel): enabled: bool = Field(True) optional: bool = Field(title="Optional") @@ -53,6 +63,21 @@ class PublishPuginsModel(BaseSettingsModel): title="Validate Camera Zero Keyframe", section="Validators" ) + ValidateFileSaved: ValidateFileSavedModel = Field( + default_factory=ValidateFileSavedModel, + title="Validate File Saved", + section="Validators" + ) + ValidateRenderCameraIsSet: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Render Camera Is Set", + section="Validators" + ) + ValidateDeadlinePublish: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Render Output for Deadline", + section="Validators" + ) ValidateMeshHasUvs: ValidatePluginModel = Field( default_factory=ValidatePluginModel, title="Validate Mesh Has Uvs" @@ -118,6 +143,22 @@ DEFAULT_BLENDER_PUBLISH_SETTINGS = { 
"optional": True, "active": True }, + "ValidateFileSaved": { + "enabled": True, + "optional": False, + "active": True, + "exclude_families": [] + }, + "ValidateRenderCameraIsSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateDeadlinePublish": { + "enabled": True, + "optional": False, + "active": True + }, "ValidateMeshHasUvs": { "enabled": True, "optional": True, diff --git a/server_addon/blender/server/settings/render_settings.py b/server_addon/blender/server/settings/render_settings.py new file mode 100644 index 0000000000..f62013982e --- /dev/null +++ b/server_addon/blender/server/settings/render_settings.py @@ -0,0 +1,109 @@ +"""Providing models and values for Blender Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def image_format_enum(): + return [ + {"value": "exr", "label": "OpenEXR"}, + {"value": "bmp", "label": "BMP"}, + {"value": "rgb", "label": "Iris"}, + {"value": "png", "label": "PNG"}, + {"value": "jpg", "label": "JPEG"}, + {"value": "jp2", "label": "JPEG 2000"}, + {"value": "tga", "label": "Targa"}, + {"value": "tif", "label": "TIFF"}, + ] + + +def aov_list_enum(): + return [ + {"value": "empty", "label": "< none >"}, + {"value": "combined", "label": "Combined"}, + {"value": "z", "label": "Z"}, + {"value": "mist", "label": "Mist"}, + {"value": "normal", "label": "Normal"}, + {"value": "diffuse_light", "label": "Diffuse Light"}, + {"value": "diffuse_color", "label": "Diffuse Color"}, + {"value": "specular_light", "label": "Specular Light"}, + {"value": "specular_color", "label": "Specular Color"}, + {"value": "volume_light", "label": "Volume Light"}, + {"value": "emission", "label": "Emission"}, + {"value": "environment", "label": "Environment"}, + {"value": "shadow", "label": "Shadow"}, + {"value": "ao", "label": "Ambient Occlusion"}, + {"value": "denoising", "label": "Denoising"}, + {"value": "volume_direct", "label": "Direct Volumetric Scattering"}, + {"value": "volume_indirect", "label": "Indirect Volumetric Scattering"} + ] + + +def custom_passes_types_enum(): + return [ + {"value": "COLOR", "label": "Color"}, + {"value": "VALUE", "label": "Value"}, + ] + + +class CustomPassesModel(BaseSettingsModel): + """Custom Passes""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field( + "COLOR", + title="Type", + enum_resolver=custom_passes_types_enum + ) + + +class RenderSettingsModel(BaseSettingsModel): + default_render_image_folder: str = Field( + title="Default Render Image Folder" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator Character", + enum_resolver=aov_separators_enum + ) + image_format: str = Field( + "exr", + title="Image Format", + enum_resolver=image_format_enum + ) + multilayer_exr: bool = Field( + title="Multilayer (EXR)" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=aov_list_enum, + title="AOVs to create" + ) + custom_passes: list[CustomPassesModel] = Field( + default_factory=list, + title="Custom Passes", + description=( + "Add custom AOVs. They are added to the view layer and in the " + "Compositing Nodetree,\nbut they need to be added manually to " + "the Shader Nodetree." 
+ ) + ) + + +DEFAULT_RENDER_SETTINGS = { + "default_render_image_folder": "renders/blender", + "aov_separator": "underscore", + "image_format": "exr", + "multilayer_exr": True, + "aov_list": [], + "custom_passes": [] +} diff --git a/server_addon/blender/server/version.py b/server_addon/blender/server/version.py index 485f44ac21..ae7362549b 100644 --- a/server_addon/blender/server/version.py +++ b/server_addon/blender/server/version.py @@ -1 +1 @@ -__version__ = "0.1.1" +__version__ = "0.1.3" diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py index c012312579..69a759465e 100644 --- a/server_addon/core/server/settings/publish_plugins.py +++ b/server_addon/core/server/settings/publish_plugins.py @@ -116,6 +116,8 @@ class OIIOToolArgumentsModel(BaseSettingsModel): class ExtractOIIOTranscodeOutputModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") extension: str = Field("", title="Extension") transcoding_type: str = Field( "colorspace", @@ -164,6 +166,11 @@ class ExtractOIIOTranscodeProfileModel(BaseSettingsModel): title="Output Definitions", ) + @validator("outputs") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + class ExtractOIIOTranscodeModel(BaseSettingsModel): enabled: bool = Field(True) diff --git a/server_addon/deadline/server/settings/publish_plugins.py b/server_addon/deadline/server/settings/publish_plugins.py index 8d1b667345..8d48695a9c 100644 --- a/server_addon/deadline/server/settings/publish_plugins.py +++ b/server_addon/deadline/server/settings/publish_plugins.py @@ -124,6 +124,24 @@ class LimitGroupsSubmodel(BaseSettingsModel): ) +def fusion_deadline_plugin_enum(): + """Return a list of value/label dicts for the enumerator. + + Returning a list of dicts is used to allow for a custom label to be + displayed in the UI. 
+ """ + return [ + { + "value": "Fusion", + "label": "Fusion" + }, + { + "value": "FusionCmd", + "label": "FusionCmd" + } + ] + + class FusionSubmitDeadlineModel(BaseSettingsModel): enabled: bool = Field(True, title="Enabled") optional: bool = Field(False, title="Optional") @@ -132,6 +150,9 @@ class FusionSubmitDeadlineModel(BaseSettingsModel): chunk_size: int = Field(10, title="Frame per Task") concurrent_tasks: int = Field(1, title="Number of concurrent tasks") group: str = Field("", title="Group Name") + plugin: str = Field("Fusion", + enum_resolver=fusion_deadline_plugin_enum, + title="Deadline Plugin") class NukeSubmitDeadlineModel(BaseSettingsModel): @@ -208,6 +229,16 @@ class CelactionSubmitDeadlineModel(BaseSettingsModel): ) +class BlenderSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Frame per Task") + group: str = Field("", title="Group Name") + + class AOVFilterSubmodel(BaseSettingsModel): _layout = "expanded" name: str = Field(title="Host") @@ -276,8 +307,10 @@ class PublishPluginsModel(BaseSettingsModel): title="After Effects to deadline") CelactionSubmitDeadline: CelactionSubmitDeadlineModel = Field( default_factory=CelactionSubmitDeadlineModel, - title="Celaction Submit Deadline" - ) + title="Celaction Submit Deadline") + BlenderSubmitDeadline: BlenderSubmitDeadlineModel = Field( + default_factory=BlenderSubmitDeadlineModel, + title="Blender Submit Deadline") ProcessSubmittedJobOnFarm: ProcessSubmittedJobOnFarmModel = Field( default_factory=ProcessSubmittedJobOnFarmModel, title="Process submitted job on farm.") @@ -384,6 +417,15 @@ DEFAULT_DEADLINE_PLUGINS_SETTINGS = { "deadline_chunk_size": 10, "deadline_job_delay": "00:00:00:00" }, + "BlenderSubmitDeadline": { + "enabled": True, + "optional": False, + "active": True, + "use_published": True, + "priority": 50, + "chunk_size": 10, + "group": "none" + }, "ProcessSubmittedJobOnFarm": { "enabled": True, "deadline_department": "", @@ -400,6 +442,12 @@ DEFAULT_DEADLINE_PLUGINS_SETTINGS = { ".*([Bb]eauty).*" ] }, + { + "name": "blender", + "value": [ + ".*([Bb]eauty).*" + ] + }, { "name": "aftereffects", "value": [ diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py index 485f44ac21..b3f4756216 100644 --- a/server_addon/deadline/server/version.py +++ b/server_addon/deadline/server/version.py @@ -1 +1 @@ -__version__ = "0.1.1" +__version__ = "0.1.2" diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py index bd7ccdf4d5..6c5baa3900 100644 --- a/server_addon/maya/server/settings/publishers.py +++ b/server_addon/maya/server/settings/publishers.py @@ -129,6 +129,10 @@ class CollectMayaRenderModel(BaseSettingsModel): ) +class CollectFbxAnimationModel(BaseSettingsModel): + enabled: bool = Field(title="Collect Fbx Animation") + + class CollectFbxCameraModel(BaseSettingsModel): enabled: bool = Field(title="CollectFbxCamera") @@ -364,6 +368,10 @@ class PublishersModel(BaseSettingsModel): title="Collect Render Layers", section="Collectors" ) + CollectFbxAnimation: CollectFbxAnimationModel = Field( + default_factory=CollectFbxAnimationModel, + title="Collect FBX Animation", + ) CollectFbxCamera: CollectFbxCameraModel = Field( default_factory=CollectFbxCameraModel, title="Collect Camera for FBX export", @@ 
-644,6 +652,10 @@ class PublishersModel(BaseSettingsModel): default_factory=BasicValidateModel, title="Validate Rig Controllers", ) + ValidateAnimatedReferenceRig: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animated Reference Rig", + ) ValidateAnimationContent: BasicValidateModel = Field( default_factory=BasicValidateModel, title="Validate Animation Content", @@ -660,14 +672,34 @@ class PublishersModel(BaseSettingsModel): default_factory=BasicValidateModel, title="Validate Skeletal Mesh Top Node", ) + ValidateSkeletonRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Contents" + ) + ValidateSkeletonRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Controllers" + ) ValidateSkinclusterDeformerSet: BasicValidateModel = Field( default_factory=BasicValidateModel, title="Validate Skincluster Deformer Relationships", ) + ValidateSkeletonRigOutputIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Rig Output Ids" + ) + ValidateSkeletonTopGroupHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeleton Top Group Hierarchy", + ) ValidateRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( default_factory=ValidateRigOutSetNodeIdsModel, title="Validate Rig Out Set Node Ids", ) + ValidateSkeletonRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Skeleton Rig Out Set Node Ids", + ) # Rig - END ValidateCameraAttributes: BasicValidateModel = Field( default_factory=BasicValidateModel, @@ -748,6 +780,9 @@ DEFAULT_PUBLISH_SETTINGS = { "CollectMayaRender": { "sync_workfile_version": False }, + "CollectFbxAnimation": { + "enabled": True + }, "CollectFbxCamera": { "enabled": False }, @@ -1143,6 +1178,11 @@ DEFAULT_PUBLISH_SETTINGS = { "optional": True, "active": True }, + "ValidateAnimatedReferenceRig": { + "enabled": True, + "optional": False, + "active": True + }, "ValidateAnimationContent": { "enabled": True, "optional": False, @@ -1163,6 +1203,16 @@ DEFAULT_PUBLISH_SETTINGS = { "optional": False, "active": True }, + "ValidateSkeletonRigContents": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSkeletonRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, "ValidateSkinclusterDeformerSet": { "enabled": True, "optional": False, @@ -1173,6 +1223,21 @@ DEFAULT_PUBLISH_SETTINGS = { "optional": False, "allow_history_only": False }, + "ValidateSkeletonRigOutSetNodeIds": { + "enabled": False, + "optional": False, + "allow_history_only": False + }, + "ValidateSkeletonRigOutputIds": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateSkeletonTopGroupHierarchy": { + "enabled": True, + "optional": True, + "active": True + }, "ValidateCameraAttributes": { "enabled": False, "optional": True, diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py index e57ad00718..de699158fd 100644 --- a/server_addon/maya/server/version.py +++ b/server_addon/maya/server/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring addon version.""" -__version__ = "0.1.3" +__version__ = "0.1.4" diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py index c78685534f..19206149b6 100644 --- 
a/server_addon/nuke/server/settings/publish_plugins.py +++ b/server_addon/nuke/server/settings/publish_plugins.py @@ -149,7 +149,7 @@ class ReformatNodesConfigModel(BaseSettingsModel): ) -class BakingStreamModel(BaseSettingsModel): +class IntermediateOutputModel(BaseSettingsModel): name: str = Field(title="Output name") filter: BakingStreamFilterModel = Field( title="Filter", default_factory=BakingStreamFilterModel) @@ -166,9 +166,21 @@ class BakingStreamModel(BaseSettingsModel): class ExtractReviewDataMovModel(BaseSettingsModel): + """[deprecated] use Extract Review Data Baking + Streams instead. + """ enabled: bool = Field(title="Enabled") viewer_lut_raw: bool = Field(title="Viewer lut raw") - outputs: list[BakingStreamModel] = Field( + outputs: list[IntermediateOutputModel] = Field( + default_factory=list, + title="Baking streams" + ) + + +class ExtractReviewIntermediatesModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[IntermediateOutputModel] = Field( default_factory=list, title="Baking streams" ) @@ -270,6 +282,10 @@ class PublishPuginsModel(BaseSettingsModel): title="Extract Review Data Mov", default_factory=ExtractReviewDataMovModel ) + ExtractReviewIntermediates: ExtractReviewIntermediatesModel = Field( + title="Extract Review Intermediates", + default_factory=ExtractReviewIntermediatesModel + ) ExtractSlateFrame: ExtractSlateFrameModel = Field( title="Extract Slate Frame", default_factory=ExtractSlateFrameModel @@ -465,6 +481,61 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { } ] }, + "ExtractReviewIntermediates": { + "enabled": True, + "viewer_lut_raw": False, + "outputs": [ + { + "name": "baking", + "filter": { + "task_types": [], + "product_types": [], + "product_names": [] + }, + "read_raw": False, + "viewer_process_override": "", + "bake_viewer_process": True, + "bake_viewer_input_process": True, + "reformat_nodes_config": { + "enabled": False, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + ] + }, "ExtractSlateFrame": { "viewer_lut_raw": False, "key_value_mapping": { diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py index b3f4756216..ae7362549b 100644 --- a/server_addon/nuke/server/version.py +++ b/server_addon/nuke/server/version.py @@ -1 +1 @@ -__version__ = "0.1.2" +__version__ = "0.1.3" diff --git a/tests/conftest.py b/tests/conftest.py index 4f7c17244b..6e82c9917d 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -29,6 +29,11 @@ def pytest_addoption(parser): help="True - only setup test, do not run any tests" ) + parser.addoption( + "--mongo_url", action="store", default=None, + help="Provide url of the Mongo database." 
+ ) + + @pytest.fixture(scope="module") + def mongo_url(request): + return request.config.getoption("--mongo_url") + + @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_runtest_makereport(item, call): # execute all other hooks to obtain the report object diff --git a/tests/integration/hosts/maya/test_deadline_publish_in_maya.py b/tests/integration/hosts/maya/test_deadline_publish_in_maya.py index c5bf526f52..9332177944 100644 --- a/tests/integration/hosts/maya/test_deadline_publish_in_maya.py +++ b/tests/integration/hosts/maya/test_deadline_publish_in_maya.py @@ -32,7 +32,7 @@ class TestDeadlinePublishInMaya(MayaDeadlinePublishTestClass): # keep empty to locate latest installed variant or explicit APP_VARIANT = "" - TIMEOUT = 120 # publish timeout + TIMEOUT = 180 # publish timeout def test_db_asserts(self, dbcon, publish_finished): """Host and input data dependent expected results in DB.""" diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index 2af4af02de..e82e438e54 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -147,11 +147,11 @@ class ModuleUnitTest(BaseTest): @pytest.fixture(scope="module") def db_setup(self, download_test_data, env_var, monkeypatch_session, - request): + request, mongo_url): """Restore prepared MongoDB dumps into selected DB.""" backup_dir = os.path.join(download_test_data, "input", "dumps") - uri = os.environ.get("OPENPYPE_MONGO") + uri = mongo_url or os.environ.get("OPENPYPE_MONGO") db_handler = DBHandler(uri) db_handler.setup_from_dump(self.TEST_DB_NAME, backup_dir, overwrite=True, diff --git a/tests/unit/openpype/hosts/photoshop/test_lib.py b/tests/unit/openpype/hosts/photoshop/test_lib.py new file mode 100644 index 0000000000..ad4feb42ae --- /dev/null +++ b/tests/unit/openpype/hosts/photoshop/test_lib.py @@ -0,0 +1,25 @@ +import pytest + +from openpype.hosts.photoshop.lib import clean_subset_name + +""" +Tests cleanup of unused layer placeholder ({layer}) from subset name. +Layer differentiation might be desired in subset name, but in some cases it +might not be used (in `auto_image` - only single image without layer diff., +single image instance created without toggled use of subset name etc.)
+""" + + +def test_no_layer_placeholder(): + clean_subset = clean_subset_name("imageMain") + assert "imageMain" == clean_subset + + +@pytest.mark.parametrize("subset_name", + ["imageMain{Layer}", + "imageMain_{layer}", # trailing _ + "image{Layer}Main", + "image{LAYER}Main"]) +def test_not_used_layer_placeholder(subset_name): + clean_subset = clean_subset_name(subset_name) + assert "imageMain" == clean_subset diff --git a/tests/unit/openpype/pipeline/test_colorspace.py b/tests/unit/openpype/pipeline/test_colorspace.py index ac35a28303..85faa8ff5d 100644 --- a/tests/unit/openpype/pipeline/test_colorspace.py +++ b/tests/unit/openpype/pipeline/test_colorspace.py @@ -26,8 +26,9 @@ class TestPipelineColorspace(TestPipeline): Example: cd to OpenPype repo root dir - poetry run python ./start.py runtests ../tests/unit/openpype/pipeline - """ + poetry run python ./start.py runtests /tests/unit/openpype/pipeline/test_colorspace.py + """ # noqa: E501 + TEST_FILES = [ ( "1csqimz8bbNcNgxtEXklLz6GRv91D3KgA", @@ -131,14 +132,14 @@ class TestPipelineColorspace(TestPipeline): path_1 = "renderCompMain_ACES2065-1.####.exr" expected_1 = "ACES2065-1" ret_1 = colorspace.parse_colorspace_from_filepath( - path_1, "nuke", "test_project", project_settings=project_settings + path_1, config_path=config_path_asset ) assert ret_1 == expected_1, f"Not matching colorspace {expected_1}" path_2 = "renderCompMain_BMDFilm_WideGamut_Gen5.mov" expected_2 = "BMDFilm WideGamut Gen5" ret_2 = colorspace.parse_colorspace_from_filepath( - path_2, "nuke", "test_project", project_settings=project_settings + path_2, config_path=config_path_asset ) assert ret_2 == expected_2, f"Not matching colorspace {expected_2}" @@ -184,5 +185,70 @@ class TestPipelineColorspace(TestPipeline): assert expected_hiero == hiero_file_rules, ( f"Not matching file rules {expected_hiero}") + def test_get_imageio_colorspace_from_filepath_p3(self, project_settings): + """Test Colorspace from filepath with python 3 compatibility mode + + Also test ocio v2 file rules + """ + nuke_filepath = "renderCompMain_baking_h264.mp4" + hiero_filepath = "prerenderCompMain.mp4" + + expected_nuke = "Camera Rec.709" + expected_hiero = "Gamma 2.2 Rec.709 - Texture" + + nuke_colorspace = colorspace.get_colorspace_name_from_filepath( + nuke_filepath, + "nuke", + "test_project", + project_settings=project_settings + ) + assert expected_nuke == nuke_colorspace, ( + f"Not matching colorspace {expected_nuke}") + + hiero_colorspace = colorspace.get_colorspace_name_from_filepath( + hiero_filepath, + "hiero", + "test_project", + project_settings=project_settings + ) + assert expected_hiero == hiero_colorspace, ( + f"Not matching colorspace {expected_hiero}") + + def test_get_imageio_colorspace_from_filepath_python2mode( + self, project_settings): + """Test Colorspace from filepath with python 2 compatibility mode + + Also test ocio v2 file rules + """ + nuke_filepath = "renderCompMain_baking_h264.mp4" + hiero_filepath = "prerenderCompMain.mp4" + + expected_nuke = "Camera Rec.709" + expected_hiero = "Gamma 2.2 Rec.709 - Texture" + + # switch to python 2 compatibility mode + colorspace.CachedData.has_compatible_ocio_package = False + + nuke_colorspace = colorspace.get_colorspace_name_from_filepath( + nuke_filepath, + "nuke", + "test_project", + project_settings=project_settings + ) + assert expected_nuke == nuke_colorspace, ( + f"Not matching colorspace {expected_nuke}") + + hiero_colorspace = colorspace.get_colorspace_name_from_filepath( + hiero_filepath, + "hiero", + "test_project", + 
project_settings=project_settings + ) + assert expected_hiero == hiero_colorspace, ( + f"Not matching colorspace {expected_hiero}") + + # return to python 3 compatibility mode + colorspace.CachedData.python3compatible = None + test_case = TestPipelineColorspace() diff --git a/website/docs/artist_publish.md b/website/docs/artist_publish.md index 321eb5c56a..b1be2e629e 100644 --- a/website/docs/artist_publish.md +++ b/website/docs/artist_publish.md @@ -33,39 +33,41 @@ The Instances are categorized into ‘families’ based on what type of data the Following family definitions and requirements are OpenPype defaults and what we consider good industry practice, but most of the requirements can be easily altered to suit the studio or project needs. Here's a list of supported families -| Family | Comment | Example Subsets | -| ----------------------- | ------------------------------------------------ | ------------------------- | -| [Model](#model) | Cleaned geo without materials | main, proxy, broken | -| [Look](#look) | Package of shaders, assignments and textures | main, wet, dirty | -| [Rig](#rig) | Characters or props with animation controls | main, deform, sim | -| [Assembly](#assembly) | A complex model made from multiple other models. | main, deform, sim | -| [Layout](#layout) | Simple representation of the environment | main, | -| [Setdress](#setdress) | Environment containing only referenced assets | main, | -| [Camera](#camera) | May contain trackers or proxy geo | main, tracked, anim | -| [Animation](#animation) | Animation exported from a rig. | characterA, vehicleB | -| [Cache](#cache) | Arbitrary animated geometry or fx cache | rest, ROM , pose01 | -| MayaAscii | Maya publishes that don't fit other categories | | -| [Render](#render) | Rendered frames from CG or Comp | | -| RenderSetup | Scene render settings, AOVs and layers | | -| Plate | Ingested, transcode, conformed footage | raw, graded, imageplane | -| Write | Nuke write nodes for rendering | | -| Image | Any non-plate image to be used by artists | Reference, ConceptArt | -| LayeredImage | Software agnostic layered image with metadata | Reference, ConceptArt | -| Review | Reviewable video or image. | | -| Matchmove | Matchmoved camera, potentially with geometry | main | -| Workfile | Backup of the workfile with all its content | uses the task name | -| Nukenodes | Any collection of nuke nodes | maskSetup, usefulBackdrop | -| Yeticache | Cached out yeti fur setup | | -| YetiRig | Yeti groom ready to be applied to geometry cache | main, destroyed | -| VrayProxy | Vray proxy geometry for rendering | | -| VrayScene | Vray full scene export | | -| ArnodldStandin | All arnold .ass archives for rendering | main, wet, dirty | -| LUT | | | -| Nukenodes | | | -| Gizmo | | | -| Nukenodes | | | -| Harmony.template | | | -| Harmony.palette | | | +| Family | Comment | Example Subsets | +|-------------------------|-------------------------------------------------------| ------------------------- | +| [Model](#model) | Cleaned geo without materials | main, proxy, broken | +| [Look](#look) | Package of shaders, assignments and textures | main, wet, dirty | +| [Rig](#rig) | Characters or props with animation controls | main, deform, sim | +| [Assembly](#assembly) | A complex model made from multiple other models. 
| main, deform, sim | +| [Layout](#layout) | Simple representation of the environment | main, | +| [Setdress](#setdress) | Environment containing only referenced assets | main, | +| [Camera](#camera) | May contain trackers or proxy geo, only single camera expected. | main, tracked, anim | +| [Animation](#animation) | Animation exported from a rig. | characterA, vehicleB | +| [Cache](#cache) | Arbitrary animated geometry or fx cache | rest, ROM , pose01 | +| MayaAscii | Maya publishes that don't fit other categories | | +| [Render](#render) | Rendered frames from CG or Comp | | +| RenderSetup | Scene render settings, AOVs and layers | | +| Plate | Ingested, transcoded, conformed footage | raw, graded, imageplane | +| Write | Nuke write nodes for rendering | | +| Image | Any non-plate image to be used by artists | Reference, ConceptArt | +| LayeredImage | Software agnostic layered image with metadata | Reference, ConceptArt | +| Review | Reviewable video or image. | | +| Matchmove | Matchmoved camera, potentially with geometry, allows multiple cameras even with planes. | main | +| Workfile | Backup of the workfile with all its content | uses the task name | +| Nukenodes | Any collection of nuke nodes | maskSetup, usefulBackdrop | +| Yeticache | Cached out yeti fur setup | | +| YetiRig | Yeti groom ready to be applied to geometry cache | main, destroyed | +| VrayProxy | Vray proxy geometry for rendering | | +| VrayScene | Vray full scene export | | +| ArnoldStandin | All arnold .ass archives for rendering | main, wet, dirty | +| LUT | | | +| Nukenodes | | | +| Gizmo | | | +| Nukenodes | | | +| Harmony.template | | | +| Harmony.palette | | | @@ -161,7 +163,7 @@ Example Representations: ### Animation Published result of an animation created with a rig. Animation can be extracted -as animation curves, cached out geometry or even fully animated rig with all the controllers. +as animation curves, cached out geometry or even fully animated rig with all the controllers. Animation cache is usually defined by a rigger in the rig file of a character or by FX TD in the effects rig, to ensure consistency of outputs. diff --git a/website/docs/project_settings/settings_project_global.md b/website/docs/project_settings/settings_project_global.md index e0481a8717..27aa60a464 100644 --- a/website/docs/project_settings/settings_project_global.md +++ b/website/docs/project_settings/settings_project_global.md @@ -189,7 +189,7 @@ A profile may generate multiple outputs from a single input. Each output must de - Profile filtering defines which group of output definitions is used but output definitions may require more specific filters on their own. - They may filter by subset name (regex can be used) or publish families. Publish families are more complex as are based on knowing code base. - Filtering by custom tags -> this is used for targeting to output definitions from other extractors using settings (at this moment only Nuke bake extractor can target using custom tags). - - Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewDataMov/outputs/baking/add_custom_tags` + - Nuke extractor settings path: `project_settings/nuke/publish/ExtractReviewIntermediates/outputs/baking/add_custom_tags` - Filtering by input length. Input may be video, sequence or single image. It is possible that `.mp4` should be created only when input is video or sequence and to create review `.png` when input is single frame.
In some cases the output should be created even if it's single frame or multi frame input. diff --git a/website/docs/pype2/admin_presets_plugins.md b/website/docs/pype2/admin_presets_plugins.md index 6a057f4bb4..b5e8a3b8a8 100644 --- a/website/docs/pype2/admin_presets_plugins.md +++ b/website/docs/pype2/admin_presets_plugins.md @@ -534,8 +534,7 @@ Plugin responsible for generating thumbnails with colorspace controlled by Nuke. } ``` -### `ExtractReviewDataMov` - +### `ExtractReviewIntermediates` `viewer_lut_raw` **true** will publish the baked mov file without any colorspace conversion. It will be baked with the workfile workspace. This can happen in case the Viewer input process uses baked screen space luts. #### baking with controlled colorspace
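For orientation, below is a minimal sketch of what a baking stream with a controlled (overridden) viewer colorspace could look like. It only reuses the key names from the AYON `ExtractReviewIntermediates` defaults introduced earlier in this changeset (`viewer_lut_raw`, `outputs`, `viewer_process_override`, `bake_viewer_process`, `bake_viewer_input_process`); the exact pype2 preset layout may differ, and the override value `"sRGB (default)"` is only a placeholder for a viewer process name from your OCIO config.

```json
{
    "viewer_lut_raw": false,
    "outputs": [
        {
            "name": "baking",
            "viewer_process_override": "sRGB (default)",
            "bake_viewer_process": true,
            "bake_viewer_input_process": true,
            "extension": "mov",
            "add_custom_tags": []
        }
    ]
}
```

With `viewer_lut_raw` set to `false` and a `viewer_process_override` filled in, the baked intermediate is rendered through the named viewer process instead of the workfile viewer, which is the "controlled colorspace" case described by this heading.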