diff --git a/.all-contributorsrc b/.all-contributorsrc index b30f3b2499..60812cdb3c 100644 --- a/.all-contributorsrc +++ b/.all-contributorsrc @@ -1,6 +1,6 @@ { "projectName": "OpenPype", - "projectOwner": "pypeclub", + "projectOwner": "ynput", "repoType": "github", "repoHost": "https://github.com", "files": [ @@ -319,8 +319,18 @@ "code", "doc" ] + }, + { + "login": "movalex", + "name": "Alexey Bogomolov", + "avatar_url": "https://avatars.githubusercontent.com/u/11698866?v=4", + "profile": "http://abogomolov.com", + "contributions": [ + "code" + ] } ], "contributorsPerLine": 7, - "skipCi": true + "skipCi": true, + "commitType": "docs" } diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 54a4ee6ac0..3406ca8b65 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,9 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.15.10-nightly.1 + - 3.15.9 + - 3.15.9-nightly.2 - 3.15.9-nightly.1 - 3.15.8 - 3.15.8-nightly.3 @@ -132,9 +135,6 @@ body: - 3.14.3-nightly.2 - 3.14.3-nightly.1 - 3.14.2 - - 3.14.2-nightly.5 - - 3.14.2-nightly.4 - - 3.14.2-nightly.3 validations: required: true - type: dropdown diff --git a/CHANGELOG.md b/CHANGELOG.md index a33904735b..ec6544e659 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,341 @@ # Changelog +## [3.15.9](https://github.com/ynput/OpenPype/tree/3.15.9) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.8...3.15.9) + +### **πŸ†• New features** + + +
+Blender: Implemented Loading of Alembic Camera #4990 + +Implemented loading of Alembic cameras in Blender. + + +___ + +
+ + +
+Unreal: Implemented Creator, Loader and Extractor for Levels #5008 + +Creator, Loader and Extractor for Unreal Levels have been implemented. + + +___ + +
+ +### **πŸš€ Enhancements** + + +
+Blender: Added setting for base unit scale #4987 + +A setting for the base unit scale has been added for Blender. The unit scale is automatically applied when opening a file or creating a new one. + + +___ + +
+ + +
+Unreal: Changed naming and path of Camera Levels #5010 + +The levels created for the camera in Unreal now include `_camera` in the name, to be more easily identifiable, and are placed in the camera folder. + + +___ + +
+ + +
+Settings: Added option to nest settings templates #5022 + +It is now possible to nest settings templates inside other templates. + + +___ + +
+ + +
+Enhancement/publisher: Remove "hit play to continue" label on continue #5029 + +Removed the "hit play to continue" message on continue so that it no longer shows after play is clicked. + + +___ + +
+ + +
+Ftrack: Limit number of ftrack events to query at once #5033 + +Limit the number of ftrack events received from mongo at once to 100. + + +___ + +
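The capping described above can be sketched independently of the storage backend. A minimal illustration, assuming a `stored` timestamp field for ordering; only the 100-event limit comes from this entry, everything else is hypothetical:

```python
EVENTS_BATCH_LIMIT = 100  # cap from this changelog entry

def take_event_batch(events, limit=EVENTS_BATCH_LIMIT):
    """Return at most `limit` events, oldest first, so one large
    backlog cannot stall a single processing cycle."""
    ordered = sorted(events, key=lambda event: event["stored"])
    return ordered[:limit]
```

With pymongo this maps to chaining `.sort("stored", 1).limit(100)` on the cursor returned by `find()`.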
+ + +
+General: Small code cleanups #5034 + +Small code cleanup and updates. + + +___ + +
+ + +
+Global: collect frames to fix with settings #5036 + +Settings for `Collect Frames to Fix` allow disabling the plugin per project. The `Rewriting latest version` attribute can also be hidden via settings. + + +___ + +
+ + +
+General: Publish plugin apply settings can expect only project settings #5037 + +Only project settings are passed to the optional `apply_settings` method if the method expects only one argument. + + +___ + +
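Such arity-based dispatch might look roughly like the following. This is a sketch of the idea only; the function name and surrounding structure are assumptions, not OpenPype's actual internals:

```python
import inspect

def call_apply_settings(plugin, project_settings, system_settings):
    """Call plugin.apply_settings with one or two arguments,
    depending on how many parameters the method declares."""
    apply_settings = getattr(plugin, "apply_settings", None)
    if apply_settings is None:
        return
    params = inspect.signature(apply_settings).parameters
    if len(params) == 1:
        # Method opted in to receiving only project settings
        apply_settings(project_settings)
    else:
        apply_settings(project_settings, system_settings)
```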
+ +### **πŸ› Bug fixes** + + +
+Maya: Load Assembly fix invalid imports #4859 + +Refactors imports so they are now correct. + + +___ + +
+ + +
+Maya: Skipping rendersetup for members. #4973 + +When publishing a `rendersetup`, the objectset is, and should be, empty. + + +___ + +
+ + +
+Maya: Validate Rig Output IDs #5016 + +Absolute node names were not used, so the plugin did not fetch the nodes properly. A missing pymel command was also fixed. + + +___ + +
+ + +
+Deadline: escape rootless path in publish job #4910 + +If the publish path of a Deadline job contained spaces or other special characters, the command failed because the path wasn't properly escaped. This fixes it. + + +___ + +
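The underlying fix boils down to quoting the path before it is embedded in a command line. A POSIX-flavoured sketch with hypothetical names, not the actual Deadline plugin code:

```python
import shlex

def build_publish_command(executable, metadata_path):
    # shlex.quote wraps arguments containing spaces or shell
    # metacharacters in single quotes (POSIX shells only)
    return "{} publish {}".format(
        shlex.quote(executable), shlex.quote(metadata_path)
    )
```

On Windows, `subprocess.list2cmdline` is the closest standard-library analogue.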
+ + +
+General: Company name and URL changed #4974 + +The company name and URL records in inno_setup were obsolete; they have been updated. +___ + +
+ + +
+Unreal: Fix usage of 'get_full_path' function #5014 + +This PR changes all the occurrences of `get_full_path` functions to alternatives to get the path of the objects. + + +___ + +
+ + +
+Unreal: Fix sequence frames validator to use correct data #5021 + +Fix sequence frames validator to use clipIn and clipOut data instead of frameStart and frameEnd. + + +___ + +
+ + +
+Unreal: Fix render instances collection to use correct data #5023 + +Fix render instances collection to use `frameStart` and `frameEnd` from the Project Manager, instead of the sequence's ones. + + +___ + +
+ + +
+Resolve: loader is opening even if no timeline in project #5025 + +The Loader now opens even if no timeline is available in the project. + + +___ + +
+ + +
+nuke: callback for dirmapping is on demand #5030 + +Nuke processing was slowed down by this callback. Since it is disabled by default, it made sense to add it only on demand. + + +___ + +
+ + +
+Publisher: UI works with instances without label #5032 + +The Publisher UI no longer crashes if an instance doesn't have the 'label' key filled in its instance data. + + +___ + +
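The usual pattern for this kind of fix is falling back to another key when 'label' is missing. A guess at the approach, not the actual Publisher code:

```python
def get_instance_label(instance_data):
    # Fall back to the subset name, then a placeholder,
    # when no explicit label is set on the instance
    return (
        instance_data.get("label")
        or instance_data.get("subset")
        or "< no label >"
    )
```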
+ + +
+Publisher: Call explicitly prepared tab methods #5044 + +It is not possible to go to the Create tab during publishing from the OpenPype menu. + + +___ + +
+ + +
+Ftrack: Role names are not case sensitive in ftrack event server status action #5058 + +The event server status action is not case sensitive for user role names. + + +___ + +
+ + +
+Publisher: Fix border widget #5063 + +Fixed border lines in the Publisher UI so they are painted with correct indentation and size. + + +___ + +
+ + +
+Unreal: Fix Commandlet Project and Permissions #5066 + +Fix a problem with creating an Unreal Project when the Commandlet Project is in a protected location. + + +___ + +
+ + +
+Unreal: Added verification for Unreal app name format #5070 + +The Unreal app name is used to determine the Unreal version folder, so it is necessary that it follows the format `x-x`, where `x` is any integer. This PR adds a verification that the app name follows that format. + + +___ + +
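The check described above can be approximated with a simple regular expression. A sketch under the `x-x` rule stated in this entry; the actual validation code may differ:

```python
import re

APP_NAME_PATTERN = re.compile(r"^\d+-\d+$")

def is_valid_unreal_app_name(app_name):
    """True when the app name is two integers joined by a dash,
    e.g. "5-1", from which the version folder can be derived."""
    return bool(APP_NAME_PATTERN.match(app_name))
```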
+ +### **πŸ“ƒ Documentation** + + +
+Docs: Display wrong image in ExtractOIIOTranscode #5045 + +A wrong image was displayed in `https://openpype.io/docs/project_settings/settings_project_global#extract-oiio-transcode`. + + +___ + +
+ +### **Merged pull requests** + + +
+Drop-down menu to list all families in create placeholder #4928 + +Currently in the create placeholder window, the family has to be written manually. This replaces the text field with an enum field listing all families for the current software. + + +___ + +
+ + +
+add sync to specific projects or listen only #4919 + +Extend kitsu sync service with additional arguments to sync specific projects. + + +___ + +
+ + + + ## [3.15.8](https://github.com/ynput/OpenPype/tree/3.15.8) diff --git a/README.md b/README.md index 514ffb62c0..8757e3db92 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ -[![All Contributors](https://img.shields.io/badge/all_contributors-27-orange.svg?style=flat-square)](#contributors-) +[![All Contributors](https://img.shields.io/badge/all_contributors-28-orange.svg?style=flat-square)](#contributors-) OpenPype ==== @@ -303,41 +303,44 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

Milan Kolar

πŸ’» πŸ“– πŸš‡ πŸ’Ό πŸ–‹ πŸ” 🚧 πŸ“† πŸ‘€ πŸ§‘β€πŸ« πŸ’¬

Jakub JeΕΎek

πŸ’» πŸ“– πŸš‡ πŸ–‹ πŸ‘€ 🚧 πŸ§‘β€πŸ« πŸ“† πŸ’¬

OndΕ™ej Samohel

πŸ’» πŸ“– πŸš‡ πŸ–‹ πŸ‘€ 🚧 πŸ§‘β€πŸ« πŸ“† πŸ’¬

Jakub Trllo

πŸ’» πŸ“– πŸš‡ πŸ‘€ 🚧 πŸ’¬

Petr Kalis

πŸ’» πŸ“– πŸš‡ πŸ‘€ 🚧 πŸ’¬

64qam

πŸ’» πŸ‘€ πŸ“– πŸš‡ πŸ“† 🚧 πŸ–‹ πŸ““

Roy Nieterau

πŸ’» πŸ“– πŸ‘€ πŸ§‘β€πŸ« πŸ’¬

Toke Jepsen

πŸ’» πŸ“– πŸ‘€ πŸ§‘β€πŸ« πŸ’¬

Jiri Sindelar

πŸ’» πŸ‘€ πŸ“– πŸ–‹ βœ… πŸ““

Simone Barbieri

πŸ’» πŸ“–

karimmozilla

πŸ’»

Allan I. A.

πŸ’»

murphy

πŸ’» πŸ‘€ πŸ““ πŸ“– πŸ“†

Wijnand Koreman

πŸ’»

Bo Zhou

πŸ’»

ClΓ©ment Hector

πŸ’» πŸ‘€

David Lai

πŸ’» πŸ‘€

Derek

πŸ’» πŸ“–

GΓ‘bor Marinov

πŸ’» πŸ“–

icyvapor

πŸ’» πŸ“–

JΓ©rΓ΄me LORRAIN

πŸ’»

David Morris-Oliveros

πŸ’»

BenoitConnan

πŸ’»

Malthaldar

πŸ’»

Sven Neve

πŸ’»

zafrs

πŸ’»

FΓ©lix David

πŸ’» πŸ“–
Milan Kolar
Milan Kolar

πŸ’» πŸ“– πŸš‡ πŸ’Ό πŸ–‹ πŸ” 🚧 πŸ“† πŸ‘€ πŸ§‘β€πŸ« πŸ’¬
Jakub JeΕΎek
Jakub JeΕΎek

πŸ’» πŸ“– πŸš‡ πŸ–‹ πŸ‘€ 🚧 πŸ§‘β€πŸ« πŸ“† πŸ’¬
OndΕ™ej Samohel
OndΕ™ej Samohel

πŸ’» πŸ“– πŸš‡ πŸ–‹ πŸ‘€ 🚧 πŸ§‘β€πŸ« πŸ“† πŸ’¬
Jakub Trllo
Jakub Trllo

πŸ’» πŸ“– πŸš‡ πŸ‘€ 🚧 πŸ’¬
Petr Kalis
Petr Kalis

πŸ’» πŸ“– πŸš‡ πŸ‘€ 🚧 πŸ’¬
64qam
64qam

πŸ’» πŸ‘€ πŸ“– πŸš‡ πŸ“† 🚧 πŸ–‹ πŸ““
Roy Nieterau
Roy Nieterau

πŸ’» πŸ“– πŸ‘€ πŸ§‘β€πŸ« πŸ’¬
Toke Jepsen
Toke Jepsen

πŸ’» πŸ“– πŸ‘€ πŸ§‘β€πŸ« πŸ’¬
Jiri Sindelar
Jiri Sindelar

πŸ’» πŸ‘€ πŸ“– πŸ–‹ βœ… πŸ““
Simone Barbieri
Simone Barbieri

πŸ’» πŸ“–
karimmozilla
karimmozilla

πŸ’»
Allan I. A.
Allan I. A.

πŸ’»
murphy
murphy

πŸ’» πŸ‘€ πŸ““ πŸ“– πŸ“†
Wijnand Koreman
Wijnand Koreman

πŸ’»
Bo Zhou
Bo Zhou

πŸ’»
ClΓ©ment Hector
ClΓ©ment Hector

πŸ’» πŸ‘€
David Lai
David Lai

πŸ’» πŸ‘€
Derek
Derek

πŸ’» πŸ“–
GΓ‘bor Marinov
GΓ‘bor Marinov

πŸ’» πŸ“–
icyvapor
icyvapor

πŸ’» πŸ“–
JΓ©rΓ΄me LORRAIN
JΓ©rΓ΄me LORRAIN

πŸ’»
David Morris-Oliveros
David Morris-Oliveros

πŸ’»
BenoitConnan
BenoitConnan

πŸ’»
Malthaldar
Malthaldar

πŸ’»
Sven Neve
Sven Neve

πŸ’»
zafrs
zafrs

πŸ’»
FΓ©lix David
FΓ©lix David

πŸ’» πŸ“–
Alexey Bogomolov
Alexey Bogomolov

πŸ’»
diff --git a/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py new file mode 100644 index 0000000000..559e9ae0ce --- /dev/null +++ b/openpype/hosts/blender/hooks/pre_add_run_python_script_arg.py @@ -0,0 +1,55 @@ +from pathlib import Path + +from openpype.lib import PreLaunchHook + + +class AddPythonScriptToLaunchArgs(PreLaunchHook): + """Add python script to be executed before Blender launch.""" + + # Append after file argument + order = 15 + app_groups = [ + "blender", + ] + + def execute(self): + if not self.launch_context.data.get("python_scripts"): + return + + # Add path to workfile to arguments + for python_script_path in self.launch_context.data["python_scripts"]: + self.log.info( + f"Adding python script {python_script_path} to launch" + ) + # Test script path exists + python_script_path = Path(python_script_path) + if not python_script_path.exists(): + self.log.warning( + f"Python script {python_script_path} doesn't exist. " + "Skipped..." 
+ ) + continue + + if "--" in self.launch_context.launch_args: + # Insert before separator + separator_index = self.launch_context.launch_args.index("--") + self.launch_context.launch_args.insert( + separator_index, + "-P", + ) + self.launch_context.launch_args.insert( + separator_index + 1, + python_script_path.as_posix(), + ) + else: + self.launch_context.launch_args.extend( + ["-P", python_script_path.as_posix()] + ) + + # Ensure separator + if "--" not in self.launch_context.launch_args: + self.launch_context.launch_args.append("--") + + self.launch_context.launch_args.extend( + [*self.launch_context.data.get("script_args", [])] + ) diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py index 77844d2448..c9bebfa8b2 100644 --- a/openpype/hosts/hiero/plugins/load/load_clip.py +++ b/openpype/hosts/hiero/plugins/load/load_clip.py @@ -41,8 +41,8 @@ class LoadClip(phiero.SequenceLoader): clip_name_template = "{asset}_{subset}_{representation}" + @classmethod def apply_settings(cls, project_settings, system_settings): - plugin_type_settings = ( project_settings .get("hiero", {}) diff --git a/openpype/hosts/houdini/api/colorspace.py b/openpype/hosts/houdini/api/colorspace.py new file mode 100644 index 0000000000..7047644225 --- /dev/null +++ b/openpype/hosts/houdini/api/colorspace.py @@ -0,0 +1,56 @@ +import attr +import hou +from openpype.hosts.houdini.api.lib import get_color_management_preferences + + +@attr.s +class LayerMetadata(object): + """Data class for Render Layer metadata.""" + frameStart = attr.ib() + frameEnd = attr.ib() + + +@attr.s +class RenderProduct(object): + """Getting Colorspace as + Specific Render Product Parameter for submitting + publish job. 
+ + """ + colorspace = attr.ib() # colorspace + view = attr.ib() + productName = attr.ib(default=None) + + +class ARenderProduct(object): + + def __init__(self): + """Constructor.""" + # Initialize + self.layer_data = self._get_layer_data() + self.layer_data.products = self.get_colorspace_data() + + def _get_layer_data(self): + return LayerMetadata( + frameStart=int(hou.playbar.frameRange()[0]), + frameEnd=int(hou.playbar.frameRange()[1]), + ) + + def get_colorspace_data(self): + """To be implemented by renderer class. + + This should return a list of RenderProducts. + + Returns: + list: List of RenderProduct + + """ + data = get_color_management_preferences() + colorspace_data = [ + RenderProduct( + colorspace=data["display"], + view=data["view"], + productName="" + ) + ] + return colorspace_data diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index 2e58f3dd98..a33ba7aad2 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -1,6 +1,7 @@ # -*- coding: utf-8 -*- import sys import os +import re import uuid import logging from contextlib import contextmanager @@ -581,3 +582,74 @@ def splitext(name, allowed_multidot_extensions): return name[:-len(ext)], ext return os.path.splitext(name) + + +def get_top_referenced_parm(parm): + + processed = set() # disallow infinite loop + while True: + if parm.path() in processed: + raise RuntimeError("Parameter references result in cycle.") + + processed.add(parm.path()) + + ref = parm.getReferencedParm() + if ref.path() == parm.path(): + # It returns itself when it doesn't reference + # another parameter + return ref + else: + parm = ref + + +def evalParmNoFrame(node, parm, pad_character="#"): + + parameter = node.parm(parm) + assert parameter, "Parameter does not exist: %s.%s" % (node, parm) + + # If the parameter has a parameter reference, then get that + # parameter instead as otherwise `unexpandedString()` fails. 
+ parameter = get_top_referenced_parm(parameter) + + # Substitute out the frame numbering with padded characters + try: + raw = parameter.unexpandedString() + except hou.Error as exc: + print("Failed: %s" % parameter) + raise RuntimeError(exc) + + def replace(match): + padding = 1 + n = match.group(2) + if n and int(n): + padding = int(n) + return pad_character * padding + + expression = re.sub(r"(\$F([0-9]*))", replace, raw) + + with hou.ScriptEvalContext(parameter): + return hou.expandStringAtFrame(expression, 0) + + +def get_color_management_preferences(): + """Get default OCIO preferences""" + data = { + "config": hou.Color.ocio_configPath() + + } + + # Get default display and view from OCIO + display = hou.Color.ocio_defaultDisplay() + disp_regex = re.compile(r"^(?P.+-)(?P.+)$") + disp_match = disp_regex.match(display) + + view = hou.Color.ocio_defaultView() + view_regex = re.compile(r"^(?P.+- )(?P.+)$") + view_match = view_regex.match(view) + data.update({ + "display": disp_match.group("display"), + "view": view_match.group("view") + + }) + + return data diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py new file mode 100644 index 0000000000..bddf26dbd5 --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py @@ -0,0 +1,71 @@ +from openpype.hosts.houdini.api import plugin +from openpype.lib import EnumDef + + +class CreateArnoldRop(plugin.HoudiniCreator): + """Arnold ROP""" + + identifier = "io.openpype.creators.houdini.arnold_rop" + label = "Arnold ROP" + family = "arnold_rop" + icon = "magic" + defaults = ["master"] + + # Default extension + ext = "exr" + + def create(self, subset_name, instance_data, pre_create_data): + import hou + + # Remove the active, we are checking the bypass flag of the nodes + instance_data.pop("active", None) + instance_data.update({"node_type": "arnold"}) + + # Add chunk size attribute + instance_data["chunkSize"] = 1 + # 
Submit for job publishing + instance_data["farm"] = True + + instance = super(CreateArnoldRop, self).create( + subset_name, + instance_data, + pre_create_data) # type: plugin.CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + ext = pre_create_data.get("image_format") + + filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + ext=ext, + ) + parms = { + # Render frame range + "trange": 1, + + # Arnold ROP settings + "ar_picture": filepath, + "ar_exr_half_precision": 1 # half precision + } + + instance_node.setParms(parms) + + # Lock any parameters in this list + to_lock = ["family", "id"] + self.lock_parameters(instance_node, to_lock) + + def get_pre_create_attr_defs(self): + attrs = super(CreateArnoldRop, self).get_pre_create_attr_defs() + + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + EnumDef("image_format", + image_format_enum, + default=self.ext, + label="Image Format Options") + ] diff --git a/openpype/hosts/houdini/plugins/create/create_karma_rop.py b/openpype/hosts/houdini/plugins/create/create_karma_rop.py new file mode 100644 index 0000000000..edfb992e1a --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_karma_rop.py @@ -0,0 +1,114 @@ +# -*- coding: utf-8 -*- +"""Creator plugin to create Karma ROP.""" +from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance +from openpype.lib import BoolDef, EnumDef, NumberDef + + +class CreateKarmaROP(plugin.HoudiniCreator): + """Karma ROP""" + identifier = "io.openpype.creators.houdini.karma_rop" + label = "Karma ROP" + family = "karma_rop" + icon = "magic" + defaults = ["master"] + + def create(self, subset_name, instance_data, pre_create_data): + import hou # noqa + + instance_data.pop("active", None) + 
instance_data.update({"node_type": "karma"}) + # Add chunk size attribute + instance_data["chunkSize"] = 10 + # Submit for job publishing + instance_data["farm"] = True + + instance = super(CreateKarmaROP, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + ext = pre_create_data.get("image_format") + + filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + ext=ext, + ) + checkpoint = "{cp_dir}{subset_name}.$F4.checkpoint".format( + cp_dir=hou.text.expandString("$HIP/pyblish/"), + subset_name=subset_name + ) + + usd_directory = "{usd_dir}{subset_name}_$RENDERID".format( + usd_dir=hou.text.expandString("$HIP/pyblish/renders/usd_renders/"), # noqa + subset_name=subset_name + ) + + parms = { + # Render Frame Range + "trange": 1, + # Karma ROP Setting + "picture": filepath, + # Karma Checkpoint Setting + "productName": checkpoint, + # USD Output Directory + "savetodirectory": usd_directory, + } + + res_x = pre_create_data.get("res_x") + res_y = pre_create_data.get("res_y") + + if self.selected_nodes: + # If camera found in selection + # we will use as render camera + camera = None + for node in self.selected_nodes: + if node.type().name() == "cam": + has_camera = pre_create_data.get("cam_res") + if has_camera: + res_x = node.evalParm("resx") + res_y = node.evalParm("resy") + + if not camera: + self.log.warning("No render camera found in selection") + + parms.update({ + "camera": camera or "", + "resolutionx": res_x, + "resolutiony": res_y, + }) + + instance_node.setParms(parms) + + # Lock some Avalon attributes + to_lock = ["family", "id"] + self.lock_parameters(instance_node, to_lock) + + def get_pre_create_attr_defs(self): + attrs = super(CreateKarmaROP, self).get_pre_create_attr_defs() + + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", 
"png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + EnumDef("image_format", + image_format_enum, + default="exr", + label="Image Format Options"), + NumberDef("res_x", + label="width", + default=1920, + decimals=0), + NumberDef("res_y", + label="height", + default=720, + decimals=0), + BoolDef("cam_res", + label="Camera Resolution", + default=False) + ] diff --git a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py new file mode 100644 index 0000000000..5ca53e96de --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py @@ -0,0 +1,88 @@ +# -*- coding: utf-8 -*- +"""Creator plugin to create Mantra ROP.""" +from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance +from openpype.lib import EnumDef, BoolDef + + +class CreateMantraROP(plugin.HoudiniCreator): + """Mantra ROP""" + identifier = "io.openpype.creators.houdini.mantra_rop" + label = "Mantra ROP" + family = "mantra_rop" + icon = "magic" + defaults = ["master"] + + def create(self, subset_name, instance_data, pre_create_data): + import hou # noqa + + instance_data.pop("active", None) + instance_data.update({"node_type": "ifd"}) + # Add chunk size attribute + instance_data["chunkSize"] = 10 + # Submit for job publishing + instance_data["farm"] = True + + instance = super(CreateMantraROP, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + ext = pre_create_data.get("image_format") + + filepath = "{renders_dir}{subset_name}/{subset_name}.$F4.{ext}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + ext=ext, + ) + + parms = { + # Render Frame Range + "trange": 1, + # Mantra ROP Setting + "vm_picture": filepath, + } + + if self.selected_nodes: + # If camera found in selection + # we will use as render camera + camera = 
None + for node in self.selected_nodes: + if node.type().name() == "cam": + camera = node.path() + + if not camera: + self.log.warning("No render camera found in selection") + + parms.update({"camera": camera or ""}) + + custom_res = pre_create_data.get("override_resolution") + if custom_res: + parms.update({"override_camerares": 1}) + instance_node.setParms(parms) + + # Lock some Avalon attributes + to_lock = ["family", "id"] + self.lock_parameters(instance_node, to_lock) + + def get_pre_create_attr_defs(self): + attrs = super(CreateMantraROP, self).get_pre_create_attr_defs() + + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + EnumDef("image_format", + image_format_enum, + default="exr", + label="Image Format Options"), + BoolDef("override_resolution", + label="Override Camera Resolution", + tooltip="Override the current camera " + "resolution, recommended for IPR.", + default=False) + ] diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py index 2cbe9bfda1..e14ff15bf8 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py @@ -1,7 +1,10 @@ # -*- coding: utf-8 -*- """Creator plugin to create Redshift ROP.""" +import hou # noqa + from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +from openpype.lib import EnumDef class CreateRedshiftROP(plugin.HoudiniCreator): @@ -11,20 +14,16 @@ class CreateRedshiftROP(plugin.HoudiniCreator): family = "redshift_rop" icon = "magic" defaults = ["master"] + ext = "exr" def create(self, subset_name, instance_data, pre_create_data): - import hou # noqa instance_data.pop("active", None) instance_data.update({"node_type": "Redshift_ROP"}) # Add chunk size attribute instance_data["chunkSize"] = 10 - - # Clear the family prefix from the 
subset - subset = subset_name - subset_no_prefix = subset[len(self.family):] - subset_no_prefix = subset_no_prefix[0].lower() + subset_no_prefix[1:] - subset_name = subset_no_prefix + # Submit for job publishing + instance_data["farm"] = True instance = super(CreateRedshiftROP, self).create( subset_name, @@ -34,11 +33,10 @@ class CreateRedshiftROP(plugin.HoudiniCreator): instance_node = hou.node(instance.get("instance_node")) basename = instance_node.name() - instance_node.setName(basename + "_ROP", unique_name=True) # Also create the linked Redshift IPR Rop try: - ipr_rop = self.parent.createNode( + ipr_rop = instance_node.parent().createNode( "Redshift_IPR", node_name=basename + "_IPR" ) except hou.OperationFailed: @@ -50,19 +48,58 @@ class CreateRedshiftROP(plugin.HoudiniCreator): ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1)) # Set the linked rop to the Redshift ROP - ipr_rop.parm("linked_rop").set(ipr_rop.relativePathTo(instance)) + ipr_rop.parm("linked_rop").set(instance_node.path()) + + ext = pre_create_data.get("image_format") + filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + fmt="${aov}.$F4.{ext}".format(aov="AOV", ext=ext) + ) - prefix = '${HIP}/render/${HIPNAME}/`chs("subset")`.${AOV}.$F4.exr' parms = { # Render frame range "trange": 1, # Redshift ROP settings - "RS_outputFileNamePrefix": prefix, - "RS_outputMultilayerMode": 0, # no multi-layered exr + "RS_outputFileNamePrefix": filepath, + "RS_outputMultilayerMode": "1", # no multi-layered exr "RS_outputBeautyAOVSuffix": "beauty", } + + if self.selected_nodes: + # set up the render camera from the selected node + camera = None + for node in self.selected_nodes: + if node.type().name() == "cam": + camera = node.path() + parms.update({ + "RS_renderCamera": camera or ""}) instance_node.setParms(parms) # Lock some Avalon attributes to_lock = ["family", "id"] 
self.lock_parameters(instance_node, to_lock) + + def remove_instances(self, instances): + for instance in instances: + node = instance.data.get("instance_node") + + ipr_node = hou.node(f"{node}_IPR") + if ipr_node: + ipr_node.destroy() + + return super(CreateRedshiftROP, self).remove_instances(instances) + + def get_pre_create_attr_defs(self): + attrs = super(CreateRedshiftROP, self).get_pre_create_attr_defs() + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + EnumDef("image_format", + image_format_enum, + default=self.ext, + label="Image Format Options") + ] diff --git a/openpype/hosts/houdini/plugins/create/create_vray_rop.py b/openpype/hosts/houdini/plugins/create/create_vray_rop.py new file mode 100644 index 0000000000..1de9be4ed6 --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_vray_rop.py @@ -0,0 +1,156 @@ +# -*- coding: utf-8 -*- +"""Creator plugin to create VRay ROP.""" +import hou + +from openpype.hosts.houdini.api import plugin +from openpype.pipeline import CreatedInstance +from openpype.lib import EnumDef, BoolDef + + +class CreateVrayROP(plugin.HoudiniCreator): + """VRay ROP""" + + identifier = "io.openpype.creators.houdini.vray_rop" + label = "VRay ROP" + family = "vray_rop" + icon = "magic" + defaults = ["master"] + + ext = "exr" + + def create(self, subset_name, instance_data, pre_create_data): + + instance_data.pop("active", None) + instance_data.update({"node_type": "vray_renderer"}) + # Add chunk size attribute + instance_data["chunkSize"] = 10 + # Submit for job publishing + instance_data["farm"] = True + + instance = super(CreateVrayROP, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + + instance_node = hou.node(instance.get("instance_node")) + + # Add IPR for Vray + basename = instance_node.name() + try: + ipr_rop = instance_node.parent().createNode( + "vray", node_name=basename + "_IPR" + 
) + except hou.OperationFailed: + raise plugin.OpenPypeCreatorError( + "Cannot create Vray render node. " + "Make sure Vray installed and enabled!" + ) + + ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1)) + ipr_rop.parm("rop").set(instance_node.path()) + + parms = { + "trange": 1, + "SettingsEXR_bits_per_channel": "16" # half precision + } + + if self.selected_nodes: + # set up the render camera from the selected node + camera = None + for node in self.selected_nodes: + if node.type().name() == "cam": + camera = node.path() + parms.update({ + "render_camera": camera or "" + }) + + # Enable render element + ext = pre_create_data.get("image_format") + instance_data["RenderElement"] = pre_create_data.get("render_element_enabled") # noqa + if pre_create_data.get("render_element_enabled", True): + # Vray has its own tag for AOV file output + filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + fmt="${aov}.$F4.{ext}".format(aov="AOV", + ext=ext) + ) + filepath = "{}{}".format( + hou.text.expandString("$HIP/pyblish/renders/"), + "{}/{}.${}.$F4.{}".format(subset_name, + subset_name, + "AOV", + ext) + ) + re_rop = instance_node.parent().createNode( + "vray_render_channels", + node_name=basename + "_render_element" + ) + # move the render element node next to the vray renderer node + re_rop.setPosition(instance_node.position() + hou.Vector2(0, 1)) + re_path = re_rop.path() + parms.update({ + "use_render_channels": 1, + "SettingsOutput_img_file_path": filepath, + "render_network_render_channels": re_path + }) + + else: + filepath = "{renders_dir}{subset_name}/{subset_name}.{fmt}".format( + renders_dir=hou.text.expandString("$HIP/pyblish/renders/"), + subset_name=subset_name, + fmt="$F4.{ext}".format(ext=ext) + ) + parms.update({ + "use_render_channels": 0, + "SettingsOutput_img_file_path": filepath + }) + + custom_res = 
pre_create_data.get("override_resolution") + if custom_res: + parms.update({"override_camerares": 1}) + + instance_node.setParms(parms) + + # lock parameters from AVALON + to_lock = ["family", "id"] + self.lock_parameters(instance_node, to_lock) + + def remove_instances(self, instances): + for instance in instances: + node = instance.data.get("instance_node") + # for the extra render node from the plugins + # such as vray and redshift + ipr_node = hou.node("{}{}".format(node, "_IPR")) + if ipr_node: + ipr_node.destroy() + re_node = hou.node("{}{}".format(node, + "_render_element")) + if re_node: + re_node.destroy() + + return super(CreateVrayROP, self).remove_instances(instances) + + def get_pre_create_attr_defs(self): + attrs = super(CreateVrayROP, self).get_pre_create_attr_defs() + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + EnumDef("image_format", + image_format_enum, + default=self.ext, + label="Image Format Options"), + BoolDef("override_resolution", + label="Override Camera Resolution", + tooltip="Override the current camera " + "resolution, recommended for IPR.", + default=False), + BoolDef("render_element_enabled", + label="Render Element", + tooltip="Create Render Element Node " + "if enabled", + default=False) + ] diff --git a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py new file mode 100644 index 0000000000..614785487f --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py @@ -0,0 +1,135 @@ +import os +import re + +import hou +import pyblish.api + +from openpype.hosts.houdini.api import colorspace +from openpype.hosts.houdini.api.lib import ( + evalParmNoFrame, get_color_management_preferences) + + +class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): + """Collect Arnold ROP Render Products + + Collects the instance.data["files"] for the 
render products. + + Provides: + instance -> files + + """ + + label = "Arnold ROP Render Products" + order = pyblish.api.CollectorOrder + 0.4 + hosts = ["houdini"] + families = ["arnold_rop"] + + def process(self, instance): + + rop = hou.node(instance.data.get("instance_node")) + + # Collect chunkSize + chunk_size_parm = rop.parm("chunkSize") + if chunk_size_parm: + chunk_size = int(chunk_size_parm.eval()) + instance.data["chunkSize"] = chunk_size + self.log.debug("Chunk Size: %s" % chunk_size) + + default_prefix = evalParmNoFrame(rop, "ar_picture") + render_products = [] + + # Default beauty AOV + beauty_product = self.get_render_product_name(prefix=default_prefix, + suffix=None) + render_products.append(beauty_product) + + files_by_aov = { + "": self.generate_expected_files(instance, beauty_product) + } + + num_aovs = rop.evalParm("ar_aovs") + for index in range(1, num_aovs + 1): + # Skip disabled AOVs + if not rop.evalParm("ar_enable_aovP{}".format(index)): + continue + + if rop.evalParm("ar_aov_exr_enable_layer_name{}".format(index)): + label = rop.evalParm("ar_aov_exr_layer_name{}".format(index)) + else: + label = evalParmNoFrame(rop, "ar_aov_label{}".format(index)) + + aov_product = self.get_render_product_name(default_prefix, + suffix=label) + render_products.append(aov_product) + files_by_aov[label] = self.generate_expected_files(instance, + aov_product) + + for product in render_products: + self.log.debug("Found render product: {}".format(product)) + + instance.data["files"] = list(render_products) + instance.data["renderProducts"] = colorspace.ARenderProduct() + + # For now by default do NOT try to publish the rendered output + instance.data["publishJobState"] = "Suspended" + instance.data["attachTo"] = [] # stub required data + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["expectedFiles"].append(files_by_aov) + + # update the colorspace data + colorspace_data = get_color_management_preferences() + 
instance.data["colorspaceConfig"] = colorspace_data["config"] + instance.data["colorspaceDisplay"] = colorspace_data["display"] + instance.data["colorspaceView"] = colorspace_data["view"] + + def get_render_product_name(self, prefix, suffix): + """Return the output filename using the AOV prefix and suffix""" + + # When AOV is explicitly defined in prefix we just swap it out + # directly with the AOV suffix to embed it. + # Note: ${AOV} seems to be evaluated in the parameter as %AOV% + if "%AOV%" in prefix: + # It seems that when some special separator characters are present + # before the %AOV% token that Redshift will secretly remove it if + # there is no suffix for the current product, for example: + # foo_%AOV% -> foo.exr + pattern = "%AOV%" if suffix else "[._-]?%AOV%" + product_name = re.sub(pattern, + suffix, + prefix, + flags=re.IGNORECASE) + else: + if suffix: + # Add ".{suffix}" before the extension + prefix_base, ext = os.path.splitext(prefix) + product_name = prefix_base + "." 
+ suffix + ext + else: + product_name = prefix + + return product_name + + def generate_expected_files(self, instance, path): + """Create expected files in instance data""" + + dir = os.path.dirname(path) + file = os.path.basename(path) + + if "#" in file: + def replace(match): + return "%0{}d".format(len(match.group())) + + file = re.sub("#+", replace, file) + + if "%" not in file: + return path + + expected_files = [] + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + for i in range(int(start), (int(end) + 1)): + expected_files.append( + os.path.join(dir, (file % i)).replace("\\", "/")) + + return expected_files diff --git a/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py b/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py new file mode 100644 index 0000000000..584343cd64 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py @@ -0,0 +1,56 @@ +import hou + +import pyblish.api + + +class CollectInstanceNodeFrameRange(pyblish.api.InstancePlugin): + """Collect time range frame data for the instance node.""" + + order = pyblish.api.CollectorOrder + 0.001 + label = "Instance Node Frame Range" + hosts = ["houdini"] + + def process(self, instance): + + node_path = instance.data.get("instance_node") + node = hou.node(node_path) if node_path else None + if not node_path or not node: + self.log.debug("No instance node found for instance: " + "{}".format(instance)) + return + + frame_data = self.get_frame_data(node) + if not frame_data: + return + + self.log.info("Collected time data: {}".format(frame_data)) + instance.data.update(frame_data) + + def get_frame_data(self, node): + """Get the frame data: start frame, end frame and steps + Args: + node(hou.Node) + + Returns: + dict + + """ + + data = {} + + if node.parm("trange") is None: + self.log.debug("Node has no 'trange' parameter: " + "{}".format(node.path())) + return data + + if node.evalParm("trange") == 0: + # Ignore 
'render current frame' + self.log.debug("Node '{}' has 'Render current frame' set. " + "Time range data ignored.".format(node.path())) + return data + + data["frameStart"] = node.evalParm("f1") + data["frameEnd"] = node.evalParm("f2") + data["byFrameStep"] = node.evalParm("f3") + + return data diff --git a/openpype/hosts/houdini/plugins/publish/collect_instances.py b/openpype/hosts/houdini/plugins/publish/collect_instances.py index 5d5347f96e..ccfaf60f0c 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_instances.py +++ b/openpype/hosts/houdini/plugins/publish/collect_instances.py @@ -118,6 +118,6 @@ class CollectInstances(pyblish.api.ContextPlugin): data["frameStart"] = node.evalParm("f1") data["frameEnd"] = node.evalParm("f2") - data["steps"] = node.evalParm("f3") + data["byFrameStep"] = node.evalParm("f3") return data diff --git a/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py b/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py new file mode 100644 index 0000000000..eabb1128d8 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py @@ -0,0 +1,104 @@ +import re +import os + +import hou +import pyblish.api + +from openpype.hosts.houdini.api.lib import ( + evalParmNoFrame, + get_color_management_preferences +) +from openpype.hosts.houdini.api import ( + colorspace +) + + +class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin): + """Collect Karma Render Products + + Collects the instance.data["files"] for the multipart render product. 
+ + Provides: + instance -> files + + """ + + label = "Karma ROP Render Products" + order = pyblish.api.CollectorOrder + 0.4 + hosts = ["houdini"] + families = ["karma_rop"] + + def process(self, instance): + + rop = hou.node(instance.data.get("instance_node")) + + # Collect chunkSize + chunk_size_parm = rop.parm("chunkSize") + if chunk_size_parm: + chunk_size = int(chunk_size_parm.eval()) + instance.data["chunkSize"] = chunk_size + self.log.debug("Chunk Size: %s" % chunk_size) + + default_prefix = evalParmNoFrame(rop, "picture") + render_products = [] + + # Default beauty AOV + beauty_product = self.get_render_product_name( + prefix=default_prefix, suffix=None + ) + render_products.append(beauty_product) + + files_by_aov = { + "beauty": self.generate_expected_files(instance, + beauty_product) + } + + filenames = list(render_products) + instance.data["files"] = filenames + instance.data["renderProducts"] = colorspace.ARenderProduct() + + for product in render_products: + self.log.debug("Found render product: %s" % product) + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["expectedFiles"].append(files_by_aov) + + # update the colorspace data + colorspace_data = get_color_management_preferences() + instance.data["colorspaceConfig"] = colorspace_data["config"] + instance.data["colorspaceDisplay"] = colorspace_data["display"] + instance.data["colorspaceView"] = colorspace_data["view"] + + def get_render_product_name(self, prefix, suffix): + product_name = prefix + if suffix: + # Add ".{suffix}" before the extension + prefix_base, ext = os.path.splitext(prefix) + product_name = "{}.{}{}".format(prefix_base, suffix, ext) + + return product_name + + def generate_expected_files(self, instance, path): + """Create expected files in instance data""" + + dir = os.path.dirname(path) + file = os.path.basename(path) + + if "#" in file: + def replace(match): + return "%0{}d".format(len(match.group())) + + file = re.sub("#+", 
replace, file) + + if "%" not in file: + return path + + expected_files = [] + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + for i in range(int(start), (int(end) + 1)): + expected_files.append( + os.path.join(dir, (file % i)).replace("\\", "/")) + + return expected_files diff --git a/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py new file mode 100644 index 0000000000..c4460f5350 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py @@ -0,0 +1,127 @@ +import re +import os + +import hou +import pyblish.api + +from openpype.hosts.houdini.api.lib import ( + evalParmNoFrame, + get_color_management_preferences +) +from openpype.hosts.houdini.api import ( + colorspace +) + + +class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin): + """Collect Mantra Render Products + + Collects the instance.data["files"] for the render products. + + Provides: + instance -> files + + """ + + label = "Mantra ROP Render Products" + order = pyblish.api.CollectorOrder + 0.4 + hosts = ["houdini"] + families = ["mantra_rop"] + + def process(self, instance): + + rop = hou.node(instance.data.get("instance_node")) + + # Collect chunkSize + chunk_size_parm = rop.parm("chunkSize") + if chunk_size_parm: + chunk_size = int(chunk_size_parm.eval()) + instance.data["chunkSize"] = chunk_size + self.log.debug("Chunk Size: %s" % chunk_size) + + default_prefix = evalParmNoFrame(rop, "vm_picture") + render_products = [] + + # Default beauty AOV + beauty_product = self.get_render_product_name( + prefix=default_prefix, suffix=None + ) + render_products.append(beauty_product) + + files_by_aov = { + "beauty": self.generate_expected_files(instance, + beauty_product) + } + + aov_numbers = rop.evalParm("vm_numaux") + if aov_numbers > 0: + # get the filenames of the AOVs + for i in range(1, aov_numbers + 1): + var = rop.evalParm("vm_variable_plane%d" % i) + if var: + aov_name = 
"vm_filename_plane%d" % i + aov_boolean = "vm_usefile_plane%d" % i + aov_enabled = rop.evalParm(aov_boolean) + has_aov_path = rop.evalParm(aov_name) + if has_aov_path and aov_enabled == 1: + aov_prefix = evalParmNoFrame(rop, aov_name) + aov_product = self.get_render_product_name( + prefix=aov_prefix, suffix=None + ) + render_products.append(aov_product) + + files_by_aov[var] = self.generate_expected_files(instance, aov_product) # noqa + + for product in render_products: + self.log.debug("Found render product: %s" % product) + + filenames = list(render_products) + instance.data["files"] = filenames + instance.data["renderProducts"] = colorspace.ARenderProduct() + + # For now by default do NOT try to publish the rendered output + instance.data["publishJobState"] = "Suspended" + instance.data["attachTo"] = [] # stub required data + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["expectedFiles"].append(files_by_aov) + + # update the colorspace data + colorspace_data = get_color_management_preferences() + instance.data["colorspaceConfig"] = colorspace_data["config"] + instance.data["colorspaceDisplay"] = colorspace_data["display"] + instance.data["colorspaceView"] = colorspace_data["view"] + + def get_render_product_name(self, prefix, suffix): + product_name = prefix + if suffix: + # Add ".{suffix}" before the extension + prefix_base, ext = os.path.splitext(prefix) + product_name = prefix_base + "." 
+ suffix + ext + + return product_name + + def generate_expected_files(self, instance, path): + """Create expected files in instance data""" + + dir = os.path.dirname(path) + file = os.path.basename(path) + + if "#" in file: + def replace(match): + return "%0{}d".format(len(match.group())) + + file = re.sub("#+", replace, file) + + if "%" not in file: + return path + + expected_files = [] + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + for i in range(int(start), (int(end) + 1)): + expected_files.append( + os.path.join(dir, (file % i)).replace("\\", "/")) + + return expected_files diff --git a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py index f1d73d7523..dbb15ab88f 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py @@ -4,52 +4,13 @@ import os import hou import pyblish.api - -def get_top_referenced_parm(parm): - - processed = set() # disallow infinite loop - while True: - if parm.path() in processed: - raise RuntimeError("Parameter references result in cycle.") - - processed.add(parm.path()) - - ref = parm.getReferencedParm() - if ref.path() == parm.path(): - # It returns itself when it doesn't reference - # another parameter - return ref - else: - parm = ref - - -def evalParmNoFrame(node, parm, pad_character="#"): - - parameter = node.parm(parm) - assert parameter, "Parameter does not exist: %s.%s" % (node, parm) - - # If the parameter has a parameter reference, then get that - # parameter instead as otherwise `unexpandedString()` fails. 
- parameter = get_top_referenced_parm(parameter) - - # Substitute out the frame numbering with padded characters - try: - raw = parameter.unexpandedString() - except hou.Error as exc: - print("Failed: %s" % parameter) - raise RuntimeError(exc) - - def replace(match): - padding = 1 - n = match.group(2) - if n and int(n): - padding = int(n) - return pad_character * padding - - expression = re.sub(r"(\$F([0-9]*))", replace, raw) - - with hou.ScriptEvalContext(parameter): - return hou.expandStringAtFrame(expression, 0) +from openpype.hosts.houdini.api.lib import ( + evalParmNoFrame, + get_color_management_preferences +) +from openpype.hosts.houdini.api import ( + colorspace +) class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): @@ -87,6 +48,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): prefix=default_prefix, suffix=beauty_suffix ) render_products.append(beauty_product) + files_by_aov = { + "_": self.generate_expected_files(instance, + beauty_product)} num_aovs = rop.evalParm("RS_aov") for index in range(num_aovs): @@ -104,11 +68,29 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): aov_product = self.get_render_product_name(aov_prefix, aov_suffix) render_products.append(aov_product) + files_by_aov[aov_suffix] = self.generate_expected_files(instance, + aov_product) # noqa + for product in render_products: self.log.debug("Found render product: %s" % product) filenames = list(render_products) instance.data["files"] = filenames + instance.data["renderProducts"] = colorspace.ARenderProduct() + + # For now by default do NOT try to publish the rendered output + instance.data["publishJobState"] = "Suspended" + instance.data["attachTo"] = [] # stub required data + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["expectedFiles"].append(files_by_aov) + + # update the colorspace data + colorspace_data = get_color_management_preferences() + 
instance.data["colorspaceConfig"] = colorspace_data["config"] + instance.data["colorspaceDisplay"] = colorspace_data["display"] + instance.data["colorspaceView"] = colorspace_data["view"] def get_render_product_name(self, prefix, suffix): """Return the output filename using the AOV prefix and suffix""" @@ -133,3 +115,27 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): product_name = prefix return product_name + + def generate_expected_files(self, instance, path): + """Create expected files in instance data""" + + dir = os.path.dirname(path) + file = os.path.basename(path) + + if "#" in file: + def replace(match): + return "%0{}d".format(len(match.group())) + + file = re.sub("#+", replace, file) + + if "%" not in file: + return path + + expected_files = [] + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + for i in range(int(start), (int(end) + 1)): + expected_files.append( + os.path.join(dir, (file % i)).replace("\\", "/")) + + return expected_files diff --git a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py new file mode 100644 index 0000000000..d4fe37f993 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py @@ -0,0 +1,129 @@ +import re +import os + +import hou +import pyblish.api + +from openpype.hosts.houdini.api.lib import ( + evalParmNoFrame, + get_color_management_preferences +) +from openpype.hosts.houdini.api import ( + colorspace +) + + +class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin): + """Collect Vray Render Products + + Collects the instance.data["files"] for the render products. 
+ + Provides: + instance -> files + + """ + + label = "VRay ROP Render Products" + order = pyblish.api.CollectorOrder + 0.4 + hosts = ["houdini"] + families = ["vray_rop"] + + def process(self, instance): + + rop = hou.node(instance.data.get("instance_node")) + + # Collect chunkSize + chunk_size_parm = rop.parm("chunkSize") + if chunk_size_parm: + chunk_size = int(chunk_size_parm.eval()) + instance.data["chunkSize"] = chunk_size + self.log.debug("Chunk Size: %s" % chunk_size) + + default_prefix = evalParmNoFrame(rop, "SettingsOutput_img_file_path") + render_products = [] + # TODO: add render elements if render element + + beauty_product = self.get_beauty_render_product(default_prefix) + render_products.append(beauty_product) + files_by_aov = { + "RGB Color": self.generate_expected_files(instance, + beauty_product)} + + if instance.data.get("RenderElement", True): + render_element = self.get_render_element_name(rop, default_prefix) + if render_element: + for aov, renderpass in render_element.items(): + render_products.append(renderpass) + files_by_aov[aov] = self.generate_expected_files(instance, renderpass) # noqa + + for product in render_products: + self.log.debug("Found render product: %s" % product) + filenames = list(render_products) + instance.data["files"] = filenames + instance.data["renderProducts"] = colorspace.ARenderProduct() + + # For now by default do NOT try to publish the rendered output + instance.data["publishJobState"] = "Suspended" + instance.data["attachTo"] = [] # stub required data + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["expectedFiles"].append(files_by_aov) + self.log.debug("expectedFiles:{}".format(files_by_aov)) + + # update the colorspace data + colorspace_data = get_color_management_preferences() + instance.data["colorspaceConfig"] = colorspace_data["config"] + instance.data["colorspaceDisplay"] = colorspace_data["display"] + instance.data["colorspaceView"] = 
colorspace_data["view"] + + def get_beauty_render_product(self, prefix, suffix=""): + """Return the beauty output filename if render element enabled + """ + aov_parm = ".{}".format(suffix) + beauty_product = None + if aov_parm in prefix: + beauty_product = prefix.replace(aov_parm, "") + else: + beauty_product = prefix + + return beauty_product + + def get_render_element_name(self, node, prefix, suffix=""): + """Return the output filename using the AOV prefix and suffix + """ + render_element_dict = {} + # need a rewrite + re_path = node.evalParm("render_network_render_channels") + if re_path: + node_children = hou.node(re_path).children() + for element in node_children: + if element.shaderName() != "vray:SettingsRenderChannels": + aov = str(element) + render_product = prefix.replace(suffix, aov) + render_element_dict[aov] = render_product + return render_element_dict + + def generate_expected_files(self, instance, path): + """Create expected files in instance data""" + + dir = os.path.dirname(path) + file = os.path.basename(path) + + if "#" in file: + def replace(match): + return "%0{}d".format(len(match.group())) + + file = re.sub("#+", replace, file) + + if "%" not in file: + return path + + expected_files = [] + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + for i in range(int(start), (int(end) + 1)): + expected_files.append( + os.path.join(dir, (file % i)).replace("\\", "/")) + + return expected_files diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py index 16d9ef9aec..2493b28bc1 100644 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py +++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py @@ -2,7 +2,10 @@ import pyblish.api from openpype.lib import version_up from openpype.pipeline import registered_host +from openpype.action import get_errored_plugins_from_data from openpype.hosts.houdini.api import 
HoudiniHost +from openpype.pipeline.publish import KnownPublishError + class IncrementCurrentFile(pyblish.api.ContextPlugin): """Increment the current file. @@ -14,17 +17,32 @@ class IncrementCurrentFile(pyblish.api.ContextPlugin): label = "Increment current file" order = pyblish.api.IntegratorOrder + 9.0 hosts = ["houdini"] - families = ["workfile"] + families = ["workfile", + "redshift_rop", + "arnold_rop", + "mantra_rop", + "karma_rop", + "usdrender"] optional = True def process(self, context): + errored_plugins = get_errored_plugins_from_data(context) + if any( + plugin.__name__ == "HoudiniSubmitPublishDeadline" + for plugin in errored_plugins + ): + raise KnownPublishError( + "Skipping incrementing current file because " + "submission to deadline failed." + ) + # Filename must not have changed since collecting host = registered_host() # type: HoudiniHost current_file = host.current_file() assert ( context.data["currentFile"] == current_file - ), "Collected filename from current scene name." + ), "Collected filename mismatches from current scene name." new_filepath = version_up(current_file) host.save_workfile(new_filepath) diff --git a/openpype/hosts/max/api/colorspace.py b/openpype/hosts/max/api/colorspace.py new file mode 100644 index 0000000000..fafee4ee04 --- /dev/null +++ b/openpype/hosts/max/api/colorspace.py @@ -0,0 +1,50 @@ +import attr +from pymxs import runtime as rt + + +@attr.s +class LayerMetadata(object): + """Data class for Render Layer metadata.""" + frameStart = attr.ib() + frameEnd = attr.ib() + + +@attr.s +class RenderProduct(object): + """Getting Colorspace as + Specific Render Product Parameter for submitting + publish job. 
+ """ + colorspace = attr.ib() # colorspace + view = attr.ib() + productName = attr.ib(default=None) + + +class ARenderProduct(object): + + def __init__(self): + """Constructor.""" + # Initialize + self.layer_data = self._get_layer_data() + self.layer_data.products = self.get_colorspace_data() + + def _get_layer_data(self): + return LayerMetadata( + frameStart=int(rt.rendStart), + frameEnd=int(rt.rendEnd), + ) + + def get_colorspace_data(self): + """To be implemented by renderer class. + This should return a list of RenderProducts. + Returns: + list: List of RenderProduct + """ + colorspace_data = [ + RenderProduct( + colorspace="sRGB", + view="ACES 1.0", + productName="" + ) + ] + return colorspace_data diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py index d9213863b1..e2af0720ec 100644 --- a/openpype/hosts/max/api/lib.py +++ b/openpype/hosts/max/api/lib.py @@ -128,7 +128,14 @@ def get_all_children(parent, node_type=None): def get_current_renderer(): - """get current renderer""" + """ + Notes: + Get current renderer for Max + + Returns: + "{Current Renderer}:{Current Renderer}" + e.g. 
"Redshift_Renderer:Redshift_Renderer" + """ return rt.renderers.production diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py index 8224d589ad..94b0aeb913 100644 --- a/openpype/hosts/max/api/lib_renderproducts.py +++ b/openpype/hosts/max/api/lib_renderproducts.py @@ -3,94 +3,126 @@ # arnold # https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_3ds_max_ax_maxscript_commands_ax_renderview_commands_html import os + from pymxs import runtime as rt -from openpype.hosts.max.api.lib import ( - get_current_renderer, - get_default_render_folder -) -from openpype.pipeline.context_tools import get_current_project_asset -from openpype.settings import get_project_settings + +from openpype.hosts.max.api.lib import get_current_renderer from openpype.pipeline import legacy_io +from openpype.settings import get_project_settings class RenderProducts(object): def __init__(self, project_settings=None): - self._project_settings = project_settings - if not self._project_settings: - self._project_settings = get_project_settings( - legacy_io.Session["AVALON_PROJECT"] - ) + self._project_settings = project_settings or get_project_settings( + legacy_io.Session["AVALON_PROJECT"]) + + def get_beauty(self, container): + render_dir = os.path.dirname(rt.rendOutputFilename) + + output_file = os.path.join(render_dir, container) - def render_product(self, container): - folder = rt.maxFilePath - file = rt.maxFileName - folder = folder.replace("\\", "/") setting = self._project_settings - render_folder = get_default_render_folder(setting) - filename, ext = os.path.splitext(file) + img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa - output_file = os.path.join(folder, - render_folder, - filename, + start_frame = int(rt.rendStart) + end_frame = int(rt.rendEnd) + 1 + + return { + "beauty": self.get_expected_beauty( + output_file, start_frame, end_frame, img_fmt + ) + } + + def get_aovs(self, container): + render_dir = 
os.path.dirname(rt.rendOutputFilename) + + output_file = os.path.join(render_dir, container) - context = get_current_project_asset() - # TODO: change the frame range follows the current render setting - startFrame = int(rt.rendStart) - endFrame = int(rt.rendEnd) + 1 - - img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa - full_render_list = self.beauty_render_product(output_file, - startFrame, - endFrame, - img_fmt) + setting = self._project_settings + img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa + start_frame = int(rt.rendStart) + end_frame = int(rt.rendEnd) + 1 renderer_class = get_current_renderer() renderer = str(renderer_class).split(":")[0] - - - if renderer == "VUE_File_Renderer": - return full_render_list + render_dict = {} if renderer in [ "ART_Renderer", - "Redshift_Renderer", "V_Ray_6_Hotfix_3", "V_Ray_GPU_6_Hotfix_3", "Default_Scanline_Renderer", "Quicksilver_Hardware_Renderer", ]: - render_elem_list = self.render_elements_product(output_file, - startFrame, - endFrame, - img_fmt) - if render_elem_list: - full_render_list.extend(iter(render_elem_list)) - return full_render_list + render_name = self.get_render_elements_name() + if render_name: + for name in render_name: + render_dict.update({ + name: self.get_expected_render_elements( + output_file, name, start_frame, + end_frame, img_fmt) + }) + elif renderer == "Redshift_Renderer": + render_name = self.get_render_elements_name() + if render_name: + rs_aov_files = rt.Execute("renderers.current.separateAovFiles") + # this doesn't work, always returns False + # rs_AovFiles = rt.RedShift_Renderer().separateAovFiles + if img_fmt == "exr" and not rs_aov_files: + for name in render_name: + if name == "RsCryptomatte": + render_dict.update({ + name: self.get_expected_render_elements( + output_file, name, start_frame, + end_frame, img_fmt) + }) + else: + for name in render_name: + render_dict.update({ + name: self.get_expected_render_elements( + output_file, 
name, start_frame, + end_frame, img_fmt) + }) - if renderer == "Arnold": - aov_list = self.arnold_render_product(output_file, - startFrame, - endFrame, - img_fmt) - if aov_list: - full_render_list.extend(iter(aov_list)) - return full_render_list + elif renderer == "Arnold": + render_name = self.get_arnold_product_name() + if render_name: + for name in render_name: + render_dict.update({ + name: self.get_expected_arnold_product( + output_file, name, start_frame, end_frame, img_fmt) + }) + elif renderer in [ + "V_Ray_6_Hotfix_3", + "V_Ray_GPU_6_Hotfix_3" + ]: + if img_fmt != "exr": + render_name = self.get_render_elements_name() + if render_name: + for name in render_name: + render_dict.update({ + name: self.get_expected_render_elements( + output_file, name, start_frame, + end_frame, img_fmt) # noqa + }) - def beauty_render_product(self, folder, startFrame, endFrame, fmt): + return render_dict + + def get_expected_beauty(self, folder, start_frame, end_frame, fmt): beauty_frame_range = [] - for f in range(startFrame, endFrame): - beauty_output = f"{folder}.{f}.{fmt}" + for f in range(start_frame, end_frame): + frame = "%04d" % f + beauty_output = f"{folder}.{frame}.{fmt}" beauty_output = beauty_output.replace("\\", "/") beauty_frame_range.append(beauty_output) return beauty_frame_range - # TODO: Get the arnold render product - def arnold_render_product(self, folder, startFrame, endFrame, fmt): - """Get all the Arnold AOVs""" - aovs = [] + def get_arnold_product_name(self): + """Get all the Arnold AOVs name""" + aov_name = [] amw = rt.MaxtoAOps.AOVsManagerWindow() aov_mgr = rt.renderers.current.AOVManager @@ -100,34 +132,51 @@ class RenderProducts(object): return for i in range(aov_group_num): # get the specific AOV group - for aov in aov_mgr.drivers[i].aov_list: - for f in range(startFrame, endFrame): - render_element = f"{folder}_{aov.name}.{f}.{fmt}" - render_element = render_element.replace("\\", "/") - aovs.append(render_element) - + aov_name.extend(aov.name for 
aov in aov_mgr.drivers[i].aov_list) # close the AOVs manager window amw.close() - return aovs + return aov_name - def render_elements_product(self, folder, startFrame, endFrame, fmt): - """Get all the render element output files. """ - render_dirname = [] + def get_expected_arnold_product(self, folder, name, + start_frame, end_frame, fmt): + """Get all the expected Arnold AOVs""" + aov_list = [] + for f in range(start_frame, end_frame): + frame = "%04d" % f + render_element = f"{folder}_{name}.{frame}.{fmt}" + render_element = render_element.replace("\\", "/") + aov_list.append(render_element) + return aov_list + + def get_render_elements_name(self): + """Get all the render element names for general """ + render_name = [] render_elem = rt.maxOps.GetCurRenderElementMgr() render_elem_num = render_elem.NumRenderElements() + if render_elem_num < 1: + return # get render elements from the renders for i in range(render_elem_num): renderlayer_name = render_elem.GetRenderElement(i) - target, renderpass = str(renderlayer_name).split(":") if renderlayer_name.enabled: - for f in range(startFrame, endFrame): - render_element = f"{folder}_{renderpass}.{f}.{fmt}" - render_element = render_element.replace("\\", "/") - render_dirname.append(render_element) + target, renderpass = str(renderlayer_name).split(":") + render_name.append(renderpass) - return render_dirname + return render_name + + def get_expected_render_elements(self, folder, name, + start_frame, end_frame, fmt): + """Get all the expected render element output files. 
""" + render_elements = [] + for f in range(start_frame, end_frame): + frame = "%04d" % f + render_element = f"{folder}_{name}.{frame}.{fmt}" + render_element = render_element.replace("\\", "/") + render_elements.append(render_element) + + return render_elements def image_format(self): return self._project_settings["max"]["RenderSettings"]["image_format"] # noqa diff --git a/openpype/hosts/max/plugins/create/create_redshift_proxy.py b/openpype/hosts/max/plugins/create/create_redshift_proxy.py new file mode 100644 index 0000000000..698ea82b69 --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_redshift_proxy.py @@ -0,0 +1,18 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating camera.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreateRedshiftProxy(plugin.MaxCreator): + identifier = "io.openpype.creators.max.redshiftproxy" + label = "Redshift Proxy" + family = "redshiftproxy" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + + _ = super(CreateRedshiftProxy, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance diff --git a/openpype/hosts/max/plugins/create/create_render.py b/openpype/hosts/max/plugins/create/create_render.py index 68ae5eac72..5ad895b86e 100644 --- a/openpype/hosts/max/plugins/create/create_render.py +++ b/openpype/hosts/max/plugins/create/create_render.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Creator plugin for creating camera.""" +import os from openpype.hosts.max.api import plugin from openpype.pipeline import CreatedInstance from openpype.hosts.max.api.lib_rendersettings import RenderSettings @@ -14,6 +15,10 @@ class CreateRender(plugin.MaxCreator): def create(self, subset_name, instance_data, pre_create_data): from pymxs import runtime as rt sel_obj = list(rt.selection) + file = rt.maxFileName + filename, _ = os.path.splitext(file) + instance_data["AssetName"] = filename + instance = 
super(CreateRender, self).create( subset_name, instance_data, diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py index 95ee014e07..5f1ae3378e 100644 --- a/openpype/hosts/max/plugins/load/load_model.py +++ b/openpype/hosts/max/plugins/load/load_model.py @@ -1,8 +1,5 @@ - import os -from openpype.pipeline import ( - load, get_representation_path -) +from openpype.pipeline import load, get_representation_path from openpype.hosts.max.api.pipeline import containerise from openpype.hosts.max.api import lib from openpype.hosts.max.api.lib import maintained_selection @@ -24,24 +21,20 @@ class ModelAbcLoader(load.LoaderPlugin): file_path = os.path.normpath(self.fname) abc_before = { - c for c in rt.rootNode.Children + c + for c in rt.rootNode.Children if rt.classOf(c) == rt.AlembicContainer } - abc_import_cmd = (f""" -AlembicImport.ImportToRoot = false -AlembicImport.CustomAttributes = true -AlembicImport.UVs = true -AlembicImport.VertexColors = true - -importFile @"{file_path}" #noPrompt - """) - - self.log.debug(f"Executing command: {abc_import_cmd}") - rt.execute(abc_import_cmd) + rt.AlembicImport.ImportToRoot = False + rt.AlembicImport.CustomAttributes = True + rt.AlembicImport.UVs = True + rt.AlembicImport.VertexColors = True + rt.importFile(file_path, rt.name("noPrompt")) abc_after = { - c for c in rt.rootNode.Children + c + for c in rt.rootNode.Children if rt.classOf(c) == rt.AlembicContainer } @@ -54,10 +47,12 @@ importFile @"{file_path}" #noPrompt abc_container = abc_containers.pop() return containerise( - name, [abc_container], context, loader=self.__class__.__name__) + name, [abc_container], context, loader=self.__class__.__name__ + ) def update(self, container, representation): from pymxs import runtime as rt + path = get_representation_path(representation) node = rt.getNodeByName(container["instance_node"]) rt.select(node.Children) @@ -76,9 +71,10 @@ importFile @"{file_path}" #noPrompt with 
maintained_selection(): rt.select(node) - lib.imprint(container["instance_node"], { - "representation": str(representation["_id"]) - }) + lib.imprint( + container["instance_node"], + {"representation": str(representation["_id"])}, + ) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py index 01e6acae12..61101c482d 100644 --- a/openpype/hosts/max/plugins/load/load_model_fbx.py +++ b/openpype/hosts/max/plugins/load/load_model_fbx.py @@ -1,8 +1,5 @@ import os -from openpype.pipeline import ( - load, - get_representation_path -) +from openpype.pipeline import load, get_representation_path from openpype.hosts.max.api.pipeline import containerise from openpype.hosts.max.api import lib from openpype.hosts.max.api.lib import maintained_selection @@ -24,10 +21,7 @@ class FbxModelLoader(load.LoaderPlugin): rt.FBXImporterSetParam("Animation", False) rt.FBXImporterSetParam("Cameras", False) rt.FBXImporterSetParam("Preserveinstances", True) - rt.importFile( - filepath, - rt.name("noPrompt"), - using=rt.FBXIMP) + rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP) container = rt.getNodeByName(f"{name}") if not container: @@ -38,7 +32,8 @@ class FbxModelLoader(load.LoaderPlugin): selection.Parent = container return containerise( - name, [container], context, loader=self.__class__.__name__) + name, [container], context, loader=self.__class__.__name__ + ) def update(self, container, representation): from pymxs import runtime as rt @@ -46,24 +41,21 @@ class FbxModelLoader(load.LoaderPlugin): path = get_representation_path(representation) node = rt.getNodeByName(container["instance_node"]) rt.select(node.Children) - fbx_reimport_cmd = ( - f""" -FBXImporterSetParam "Animation" false -FBXImporterSetParam "Cameras" false -FBXImporterSetParam "AxisConversionMethod" true -FbxExporterSetParam "UpAxis" "Y" -FbxExporterSetParam 
"Preserveinstances" true -importFile @"{path}" #noPrompt using:FBXIMP - """) - rt.execute(fbx_reimport_cmd) + rt.FBXImporterSetParam("Animation", False) + rt.FBXImporterSetParam("Cameras", False) + rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("UpAxis", "Y") + rt.FBXImporterSetParam("Preserveinstances", True) + rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP) with maintained_selection(): rt.select(node) - lib.imprint(container["instance_node"], { - "representation": str(representation["_id"]) - }) + lib.imprint( + container["instance_node"], + {"representation": str(representation["_id"])}, + ) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py index b3e12adc7b..5fb9772f87 100644 --- a/openpype/hosts/max/plugins/load/load_pointcache.py +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -5,9 +5,7 @@ Because of limited api, alembics can be only loaded, but not easily updated. 
""" import os -from openpype.pipeline import ( - load, get_representation_path -) +from openpype.pipeline import load, get_representation_path from openpype.hosts.max.api.pipeline import containerise from openpype.hosts.max.api import lib @@ -15,9 +13,7 @@ from openpype.hosts.max.api import lib class AbcLoader(load.LoaderPlugin): """Alembic loader.""" - families = ["camera", - "animation", - "pointcache"] + families = ["camera", "animation", "pointcache"] label = "Load Alembic" representations = ["abc"] order = -10 @@ -30,21 +26,17 @@ class AbcLoader(load.LoaderPlugin): file_path = os.path.normpath(self.fname) abc_before = { - c for c in rt.rootNode.Children + c + for c in rt.rootNode.Children if rt.classOf(c) == rt.AlembicContainer } - abc_export_cmd = (f""" -AlembicImport.ImportToRoot = false - -importFile @"{file_path}" #noPrompt - """) - - self.log.debug(f"Executing command: {abc_export_cmd}") - rt.execute(abc_export_cmd) + rt.AlembicImport.ImportToRoot = False + rt.importFile(file_path, rt.name("noPrompt")) abc_after = { - c for c in rt.rootNode.Children + c + for c in rt.rootNode.Children if rt.classOf(c) == rt.AlembicContainer } @@ -57,7 +49,8 @@ importFile @"{file_path}" #noPrompt abc_container = abc_containers.pop() return containerise( - name, [abc_container], context, loader=self.__class__.__name__) + name, [abc_container], context, loader=self.__class__.__name__ + ) def update(self, container, representation): from pymxs import runtime as rt @@ -69,9 +62,10 @@ importFile @"{file_path}" #noPrompt for alembic_object in alembic_objects: alembic_object.source = path - lib.imprint(container["instance_node"], { - "representation": str(representation["_id"]) - }) + lib.imprint( + container["instance_node"], + {"representation": str(representation["_id"])}, + ) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/max/plugins/load/load_redshift_proxy.py 
b/openpype/hosts/max/plugins/load/load_redshift_proxy.py new file mode 100644 index 0000000000..31692f6367 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_redshift_proxy.py @@ -0,0 +1,63 @@ +import os +import clique + +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib + + +class RedshiftProxyLoader(load.LoaderPlugin): + """Load rs files with Redshift Proxy""" + + label = "Load Redshift Proxy" + families = ["redshiftproxy"] + representations = ["rs"] + order = -9 + icon = "code-fork" + color = "white" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + filepath = self.filepath_from_context(context) + rs_proxy = rt.RedshiftProxy() + rs_proxy.file = filepath + files_in_folder = os.listdir(os.path.dirname(filepath)) + collections, remainder = clique.assemble(files_in_folder) + if collections: + rs_proxy.is_sequence = True + + container = rt.container() + container.name = name + rs_proxy.Parent = container + + asset = rt.getNodeByName(name) + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + for children in node.Children: + children_node = rt.getNodeByName(children.name) + for proxy in children_node.Children: + proxy.file = path + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py index 
00e00a8eb5..db5c84fad9 100644 --- a/openpype/hosts/max/plugins/publish/collect_render.py +++ b/openpype/hosts/max/plugins/publish/collect_render.py @@ -5,7 +5,8 @@ import pyblish.api from pymxs import runtime as rt from openpype.pipeline import get_current_asset_name -from openpype.hosts.max.api.lib import get_max_version +from openpype.hosts.max.api import colorspace +from openpype.hosts.max.api.lib import get_max_version, get_current_renderer from openpype.hosts.max.api.lib_renderproducts import RenderProducts from openpype.client import get_last_version_by_subset_name @@ -28,8 +29,16 @@ class CollectRender(pyblish.api.InstancePlugin): context.data['currentFile'] = current_file asset = get_current_asset_name() - render_layer_files = RenderProducts().render_product(instance.name) + files_by_aov = RenderProducts().get_beauty(instance.name) folder = folder.replace("\\", "/") + aovs = RenderProducts().get_aovs(instance.name) + files_by_aov.update(aovs) + + if "expectedFiles" not in instance.data: + instance.data["expectedFiles"] = list() + instance.data["files"] = list() + instance.data["expectedFiles"].append(files_by_aov) + instance.data["files"].append(files_by_aov) img_format = RenderProducts().image_format() project_name = context.data["projectName"] @@ -38,7 +47,6 @@ class CollectRender(pyblish.api.InstancePlugin): version_doc = get_last_version_by_subset_name(project_name, instance.name, asset_id) - self.log.debug("version_doc: {0}".format(version_doc)) version_int = 1 if version_doc: @@ -46,22 +54,42 @@ class CollectRender(pyblish.api.InstancePlugin): self.log.debug(f"Setting {version_int} to context.") context.data["version"] = version_int - # setup the plugin as 3dsmax for the internal renderer + # OCIO config is not supported in + # most of the 3dsmax renderers + # so this is currently hard coded + # TODO: add options for redshift/vray ocio config + instance.data["colorspaceConfig"] = "" + instance.data["colorspaceDisplay"] = "sRGB" + 
instance.data["colorspaceView"] = "ACES 1.0 SDR-video" + instance.data["renderProducts"] = colorspace.ARenderProduct() + instance.data["publishJobState"] = "Suspended" + instance.data["attachTo"] = [] + renderer_class = get_current_renderer() + renderer = str(renderer_class).split(":")[0] + # also need to get the render dir for conversion data = { - "subset": instance.name, "asset": asset, + "subset": str(instance.name), "publish": True, "maxversion": str(get_max_version()), "imageFormat": img_format, "family": 'maxrender', "families": ['maxrender'], + "renderer": renderer, "source": filepath, - "expectedFiles": render_layer_files, "plugin": "3dsmax", "frameStart": int(rt.rendStart), "frameEnd": int(rt.rendEnd), "version": version_int, "farm": True } - self.log.info("data: {0}".format(data)) instance.data.update(data) + + # TODO: this should be unified with maya and its "multipart" flag + # on instance. + if renderer == "Redshift_Renderer": + instance.data.update( + {"separateAovFiles": rt.Execute( + "renderers.current.separateAovFiles")}) + + self.log.info("data: {0}".format(data)) diff --git a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py new file mode 100644 index 0000000000..3b44099609 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py @@ -0,0 +1,62 @@ +import os +import pyblish.api +from openpype.pipeline import publish +from pymxs import runtime as rt +from openpype.hosts.max.api import maintained_selection + + +class ExtractRedshiftProxy(publish.Extractor): + """ + Extract Redshift Proxy with rsProxy + """ + + order = pyblish.api.ExtractorOrder - 0.1 + label = "Extract RedShift Proxy" + hosts = ["max"] + families = ["redshiftproxy"] + + def process(self, instance): + container = instance.data["instance_node"] + start = int(instance.context.data.get("frameStart")) + end = int(instance.context.data.get("frameEnd")) + + self.log.info("Extracting Redshift 
Proxy...") + stagingdir = self.staging_dir(instance) + rs_filename = "{name}.rs".format(**instance.data) + rs_filepath = os.path.join(stagingdir, rs_filename) + rs_filepath = rs_filepath.replace("\\", "/") + + rs_filenames = self.get_rsfiles(instance, start, end) + + with maintained_selection(): + # select and export + con = rt.getNodeByName(container) + rt.select(con.Children) + # Redshift rsProxy command + # rsProxy fp selected compress connectivity startFrame endFrame + # camera warnExisting transformPivotToOrigin + rt.rsProxy(rs_filepath, 1, 0, 0, start, end, 0, 1, 1) + + self.log.info("Performing Extraction ...") + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'rs', + 'ext': 'rs', + 'files': rs_filenames if len(rs_filenames) > 1 else rs_filenames[0], # noqa + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + stagingdir)) + + def get_rsfiles(self, instance, startFrame, endFrame): + rs_filenames = [] + rs_name = instance.data["name"] + for frame in range(startFrame, endFrame + 1): + rs_filename = "%s.%04d.rs" % (rs_name, frame) + rs_filenames.append(rs_filename) + + return rs_filenames diff --git a/openpype/hosts/max/plugins/publish/save_scene.py b/openpype/hosts/max/plugins/publish/save_scene.py new file mode 100644 index 0000000000..a40788ab41 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/save_scene.py @@ -0,0 +1,21 @@ +import pyblish.api +import os + + +class SaveCurrentScene(pyblish.api.ContextPlugin): + """Save current scene + + """ + + label = "Save current file" + order = pyblish.api.ExtractorOrder - 0.49 + hosts = ["max"] + families = ["maxrender", "workfile"] + + def process(self, context): + from pymxs import runtime as rt + folder = rt.maxFilePath + file = rt.maxFileName + current = os.path.join(folder, file) + assert context.data["currentFile"] == current + 
rt.saveMaxFile(current) diff --git a/openpype/hosts/max/plugins/publish/validate_deadline_publish.py b/openpype/hosts/max/plugins/publish/validate_deadline_publish.py new file mode 100644 index 0000000000..b2f0e863f4 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_deadline_publish.py @@ -0,0 +1,43 @@ +import os +import pyblish.api +from pymxs import runtime as rt +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishValidationError, + OptionalPyblishPluginMixin +) +from openpype.hosts.max.api.lib_rendersettings import RenderSettings + + +class ValidateDeadlinePublish(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validates Render File Directory is + not the same in every submission + """ + + order = ValidateContentsOrder + families = ["maxrender"] + hosts = ["max"] + label = "Render Output for Deadline" + optional = True + actions = [RepairAction] + + def process(self, instance): + if not self.is_active(instance.data): + return + file = rt.maxFileName + filename, ext = os.path.splitext(file) + if filename not in rt.rendOutputFilename: + raise PublishValidationError( + "Render output folder " + "doesn't match the max scene name! " + "Use Repair action to " + "fix the folder file path.." 
+ ) + + @classmethod + def repair(cls, instance): + container = instance.data.get("instance_node") + RenderSettings().render_output(container) + cls.log.debug("Reset the render output folder...") diff --git a/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py b/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py new file mode 100644 index 0000000000..bc82f82f3b --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_renderer_redshift_proxy.py @@ -0,0 +1,54 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt +from openpype.pipeline.publish import RepairAction +from openpype.hosts.max.api.lib import get_current_renderer + + +class ValidateRendererRedshiftProxy(pyblish.api.InstancePlugin): + """ + Validates Redshift as the current renderer for creating + Redshift Proxy + """ + + order = pyblish.api.ValidatorOrder + families = ["redshiftproxy"] + hosts = ["max"] + label = "Redshift Renderer" + actions = [RepairAction] + + def process(self, instance): + invalid = self.get_redshift_renderer(instance) + if invalid: + raise PublishValidationError("Please install Redshift for 3dsMax" + " before using the Redshift proxy instance") # noqa + invalid = self.get_current_renderer(instance) + if invalid: + raise PublishValidationError("The Redshift proxy extraction is" + " discontinued since the current renderer is not Redshift") # noqa + + def get_redshift_renderer(self, instance): + invalid = list() + max_renderers_list = str(rt.RendererClass.classes) + if "Redshift_Renderer" not in max_renderers_list: + invalid.append(max_renderers_list) + + return invalid + + def get_current_renderer(self, instance): + invalid = list() + renderer_class = get_current_renderer() + current_renderer = str(renderer_class).split(":")[0] + if current_renderer != "Redshift_Renderer": + invalid.append(current_renderer) + + return invalid + + @classmethod + def repair(cls, 
instance): + for Renderer in rt.RendererClass.classes: + renderer = Renderer() + if "Redshift_Renderer" in str(renderer): + rt.renderers.production = renderer + break diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index 159bfe9eb3..0bb1f186eb 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -28,7 +28,9 @@ from openpype.pipeline import ( ) from openpype.hosts.maya.api.lib import ( matrix_equals, - unique_namespace + unique_namespace, + get_container_transforms, + DEFAULT_MATRIX ) log = logging.getLogger("PackageLoader") @@ -183,8 +185,6 @@ def _add(instance, representation_id, loaders, namespace, root="|"): """ - from openpype.hosts.maya.lib import get_container_transforms - # Process within the namespace with namespaced(namespace, new=False) as namespace: @@ -379,8 +379,6 @@ def update_scene(set_container, containers, current_data, new_data, new_file): """ - from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms - set_namespace = set_container['namespace'] project_name = legacy_io.active_project() diff --git a/openpype/hosts/maya/plugins/create/create_render.py b/openpype/hosts/maya/plugins/create/create_render.py index 71d42cc82f..90d9bb5652 100644 --- a/openpype/hosts/maya/plugins/create/create_render.py +++ b/openpype/hosts/maya/plugins/create/create_render.py @@ -216,16 +216,34 @@ class CreateRender(plugin.Creator): primary_pool = pool_setting["primary_pool"] sorted_pools = self._set_default_pool(list(pools), primary_pool) - cmds.addAttr(self.instance, longName="primaryPool", - attributeType="enum", - enumName=":".join(sorted_pools)) + cmds.addAttr( + self.instance, + longName="primaryPool", + attributeType="enum", + enumName=":".join(sorted_pools) + ) + cmds.setAttr( + "{}.primaryPool".format(self.instance), + 0, + keyable=False, + channelBox=True + ) pools = ["-"] + pools secondary_pool = pool_setting["secondary_pool"] sorted_pools = 
self._set_default_pool(list(pools), secondary_pool) - cmds.addAttr("{}.secondaryPool".format(self.instance), - attributeType="enum", - enumName=":".join(sorted_pools)) + cmds.addAttr( + self.instance, + longName="secondaryPool", + attributeType="enum", + enumName=":".join(sorted_pools) + ) + cmds.setAttr( + "{}.secondaryPool".format(self.instance), + 0, + keyable=False, + channelBox=True + ) @staticmethod def _rr_path_changed(): @@ -313,6 +331,12 @@ class CreateRender(plugin.Creator): default_priority) self.data["tile_priority"] = tile_priority + strict_error_checking = maya_submit_dl.get("strict_error_checking", + True) + self.data["strict_error_checking"] = strict_error_checking + + # Pool attributes should be last since they will be recreated when + # the deadline server changes. pool_setting = (self._project_settings["deadline"] ["publish"] ["CollectDeadlinePools"]) @@ -325,9 +349,6 @@ class CreateRender(plugin.Creator): secondary_pool = pool_setting["secondary_pool"] self.data["secondaryPool"] = self._set_default_pool(pool_names, secondary_pool) - strict_error_checking = maya_submit_dl.get("strict_error_checking", - True) - self.data["strict_error_checking"] = strict_error_checking if muster_enabled: self.log.info(">>> Loading Muster credentials ...") diff --git a/openpype/hosts/maya/plugins/load/load_assembly.py b/openpype/hosts/maya/plugins/load/load_assembly.py index 902f38695c..275f21be5d 100644 --- a/openpype/hosts/maya/plugins/load/load_assembly.py +++ b/openpype/hosts/maya/plugins/load/load_assembly.py @@ -1,8 +1,14 @@ +import maya.cmds as cmds + from openpype.pipeline import ( load, remove_container ) +from openpype.hosts.maya.api.pipeline import containerise +from openpype.hosts.maya.api.lib import unique_namespace +from openpype.hosts.maya.api import setdress + class AssemblyLoader(load.LoaderPlugin): @@ -16,9 +22,6 @@ class AssemblyLoader(load.LoaderPlugin): def load(self, context, name, namespace, data): - from openpype.hosts.maya.api.pipeline 
import containerise - from openpype.hosts.maya.api.lib import unique_namespace - asset = context['asset']['name'] namespace = namespace or unique_namespace( asset + "_", @@ -26,8 +29,6 @@ class AssemblyLoader(load.LoaderPlugin): suffix="_", ) - from openpype.hosts.maya.api import setdress - containers = setdress.load_package( filepath=self.fname, name=name, @@ -50,15 +51,11 @@ class AssemblyLoader(load.LoaderPlugin): def update(self, container, representation): - from openpype import setdress return setdress.update_package(container, representation) def remove(self, container): """Remove all sub containers""" - from openpype import setdress - import maya.cmds as cmds - # Remove all members member_containers = setdress.get_contained_containers(container) for member_container in member_containers: diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index f4a4a44344..74ca27ff3c 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -33,7 +33,7 @@ def preserve_modelpanel_cameras(container, log=None): panel_cameras = {} for panel in cmds.getPanel(type="modelPanel"): cam = cmds.ls(cmds.modelPanel(panel, query=True, camera=True), - long=True) + long=True)[0] # Often but not always maya returns the transform from the # modelPanel as opposed to the camera shape, so we convert it diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index 2fb55782d2..3b9f1f60f6 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -336,7 +336,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin): context.data["system_settings"]["modules"]["deadline"] ) if deadline_settings["enabled"]: - data["deadlineUrl"] = render_instance.data.get("deadlineUrl") + data["deadlineUrl"] = render_instance.data["deadlineUrl"] 
rr_settings = ( context.data["system_settings"]["modules"]["royalrender"] diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index 499bfd4e37..cba70a21b7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -55,7 +55,8 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): if shapes: instance_nodes.extend(shapes) - scene_nodes = cmds.ls(type="transform") + cmds.ls(type="mesh") + scene_nodes = cmds.ls(type="transform", long=True) + scene_nodes += cmds.ls(type="mesh", long=True) scene_nodes = set(scene_nodes) - set(instance_nodes) scene_nodes_by_basename = defaultdict(list) @@ -76,7 +77,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): if len(ids) > 1: cls.log.error( "\"{}\" id mismatch to: {}".format( - instance_node.longName(), matches + instance_node, matches ) ) invalid[instance_node] = matches diff --git a/openpype/hosts/nuke/startup/__init__.py b/openpype/hosts/nuke/startup/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py b/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py new file mode 100644 index 0000000000..f0cbabe20f --- /dev/null +++ b/openpype/hosts/nuke/startup/frame_setting_for_read_nodes.py @@ -0,0 +1,47 @@ +""" OpenPype custom script for resetting read nodes start frame values """ + +import nuke +import nukescripts + + +class FrameSettingsPanel(nukescripts.PythonPanel): + """ Frame Settings Panel """ + def __init__(self): + nukescripts.PythonPanel.__init__(self, "Set Frame Start (Read Node)") + + # create knobs + self.frame = nuke.Int_Knob( + 'frame', 'Frame Number') + self.selected = nuke.Boolean_Knob("selection") + # add knobs to panel + self.addKnob(self.selected) + self.addKnob(self.frame) + + # set values + self.selected.setValue(False) + 
self.frame.setValue(nuke.root().firstFrame()) + + def process(self): + """ Process the panel values. """ + # get values + frame = self.frame.value() + if self.selected.value(): + # selected nodes processing + if not nuke.selectedNodes(): + return + for rn_ in nuke.selectedNodes(): + if rn_.Class() != "Read": + continue + rn_["frame_mode"].setValue("start_at") + rn_["frame"].setValue(str(frame)) + else: + # all nodes processing + for rn_ in nuke.allNodes(filter="Read"): + rn_["frame_mode"].setValue("start_at") + rn_["frame"].setValue(str(frame)) + + +def main(): + p_ = FrameSettingsPanel() + if p_.showModalDialog(): + print(p_.process()) diff --git a/openpype/hosts/resolve/api/workio.py b/openpype/hosts/resolve/api/workio.py index 5ce73eea53..5966fa6a43 100644 --- a/openpype/hosts/resolve/api/workio.py +++ b/openpype/hosts/resolve/api/workio.py @@ -43,18 +43,22 @@ def open_file(filepath): """ Loading project """ + + from . import bmdvr + pm = get_project_manager() + page = bmdvr.GetCurrentPage() + if page is not None: + # Save current project only if Resolve has an active page, otherwise + # we consider Resolve being in a pre-launch state (no open UI yet) + project = pm.GetCurrentProject() + print(f"Saving current project: {project}") + pm.SaveProject() + file = os.path.basename(filepath) fname, _ = os.path.splitext(file) dname, _ = fname.split("_v") - - # deal with current project - project = pm.GetCurrentProject() - log.info(f"Test `pm`: {pm}") - pm.SaveProject() - try: - log.info(f"Test `dname`: {dname}") if not set_project_manager_to_folder_name(dname): raise # load project from input path @@ -72,6 +76,7 @@ def open_file(filepath): return False return True + def current_file(): pm = get_project_manager() current_dir = os.getenv("AVALON_WORKDIR") diff --git a/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py b/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py new file mode 100644 index 0000000000..0e27ddb8c3 --- /dev/null +++ 
b/openpype/hosts/resolve/hooks/pre_resolve_launch_last_workfile.py @@ -0,0 +1,45 @@ +import os + +from openpype.lib import PreLaunchHook +import openpype.hosts.resolve + + +class ResolveLaunchLastWorkfile(PreLaunchHook): + """Special hook to open last workfile for Resolve. + + Checks 'start_last_workfile', if set to False, it will not open last + workfile. This property is set explicitly in Launcher. + """ + + # Execute after workfile template copy + order = 10 + app_groups = ["resolve"] + + def execute(self): + if not self.data.get("start_last_workfile"): + self.log.info("It is set to not start last workfile on start.") + return + + last_workfile = self.data.get("last_workfile_path") + if not last_workfile: + self.log.warning("Last workfile was not collected.") + return + + if not os.path.exists(last_workfile): + self.log.info("Current context does not have any workfile yet.") + return + + # Add path to launch environment for the startup script to pick up + self.log.info(f"Setting OPENPYPE_RESOLVE_OPEN_ON_LAUNCH to launch " + f"last workfile: {last_workfile}") + key = "OPENPYPE_RESOLVE_OPEN_ON_LAUNCH" + self.launch_context.env[key] = last_workfile + + # Set the openpype prelaunch startup script path for easy access + # in the LUA .scriptlib code + op_resolve_root = os.path.dirname(openpype.hosts.resolve.__file__) + script_path = os.path.join(op_resolve_root, "startup.py") + key = "OPENPYPE_RESOLVE_STARTUP_SCRIPT" + self.launch_context.env[key] = script_path + self.log.info("Setting OPENPYPE_RESOLVE_STARTUP_SCRIPT to: " + f"{script_path}") diff --git a/openpype/hosts/resolve/startup.py b/openpype/hosts/resolve/startup.py new file mode 100644 index 0000000000..79a64e0fbf --- /dev/null +++ b/openpype/hosts/resolve/startup.py @@ -0,0 +1,62 @@ +"""This script is used as a startup script in Resolve through a .scriptlib file + +It triggers directly after the launch of Resolve and it's recommended to keep +it optimized for fast performance since the Resolve UI is actually 
interactive +while this is running. As such, there's nothing ensuring the user isn't +continuing manually before any of the logic here runs. As such we also try +to delay any imports as much as possible. + +This code runs in a separate process to the main Resolve process. + +""" +import os + +import openpype.hosts.resolve.api + + +def ensure_installed_host(): + """Install resolve host with openpype and return the registered host. + + This function can be called multiple times without triggering an + additional install. + """ + from openpype.pipeline import install_host, registered_host + host = registered_host() + if host: + return host + + install_host(openpype.hosts.resolve.api) + return registered_host() + + +def launch_menu(): + print("Launching Resolve OpenPype menu..") + ensure_installed_host() + openpype.hosts.resolve.api.launch_pype_menu() + + +def open_file(path): + # Avoid the need to "install" the host + host = ensure_installed_host() + host.open_file(path) + + +def main(): + # Open last workfile + workfile_path = os.environ.get("OPENPYPE_RESOLVE_OPEN_ON_LAUNCH") + if workfile_path: + open_file(workfile_path) + else: + print("No last workfile set to open. 
Skipping..") + + # Launch OpenPype menu + from openpype.settings import get_project_settings + from openpype.pipeline.context_tools import get_current_project_name + project_name = get_current_project_name() + settings = get_project_settings(project_name) + if settings.get("resolve", {}).get("launch_openpype_menu_on_start", True): + launch_menu() + + +if __name__ == "__main__": + main() diff --git a/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib b/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib new file mode 100644 index 0000000000..ec9b30a18d --- /dev/null +++ b/openpype/hosts/resolve/utility_scripts/openpype_startup.scriptlib @@ -0,0 +1,21 @@ +-- Run OpenPype's Python launch script for resolve +function file_exists(name) + local f = io.open(name, "r") + return f ~= nil and io.close(f) +end + + +openpype_startup_script = os.getenv("OPENPYPE_RESOLVE_STARTUP_SCRIPT") +if openpype_startup_script ~= nil then + script = fusion:MapPath(openpype_startup_script) + + if file_exists(script) then + -- We must use RunScript to ensure it runs in a separate + -- process to Resolve itself to avoid a deadlock for + -- certain imports of OpenPype libraries or Qt + print("Running launch script: " .. script) + fusion:RunScript(script) + else + print("Launch script not found at: " .. 
script)
+    end
+end
\ No newline at end of file
diff --git a/openpype/hosts/resolve/utils.py b/openpype/hosts/resolve/utils.py
index 9a161f4865..5e3003862f 100644
--- a/openpype/hosts/resolve/utils.py
+++ b/openpype/hosts/resolve/utils.py
@@ -29,6 +29,9 @@ def setup(env):
     log.info("Utility Scripts Dir: `{}`".format(util_scripts_paths))
     log.info("Utility Scripts: `{}`".format(scripts))
 
+    # Make sure scripts dir exists
+    os.makedirs(util_scripts_dir, exist_ok=True)
+
     # make sure no script file is in folder
     for script in os.listdir(util_scripts_dir):
         path = os.path.join(util_scripts_dir, script)
@@ -50,6 +53,14 @@ def setup(env):
             src = os.path.join(directory, script)
             dst = os.path.join(util_scripts_dir, script)
 
+            # TODO: Make this a less hacky workaround
+            if script == "openpype_startup.scriptlib":
+                # Handle special case for scriptlib that needs to be a folder
+                # up from the Comp folder in the Fusion scripts
+                dst = os.path.join(os.path.dirname(util_scripts_dir),
+                                   script)
+
             log.info("Copying `{}` to `{}`...".format(src, dst))
             if os.path.isdir(src):
                 shutil.copytree(
diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py
index 1119b5c16c..ed23950b35 100644
--- a/openpype/hosts/unreal/addon.py
+++ b/openpype/hosts/unreal/addon.py
@@ -1,5 +1,7 @@
 import os
+import re
 from openpype.modules import IHostAddon, OpenPypeModule
+from openpype.widgets.message_window import Window
 
 UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
 
@@ -19,6 +21,20 @@ class UnrealAddon(OpenPypeModule, IHostAddon):
         from .lib import get_compatible_integration
 
+        pattern = re.compile(r'^\d+-\d+$')
+
+        if not pattern.match(app.name):
+            msg = (
+                "Unreal application key in the settings must be in the "
+                "format '5-0' or '5-1'"
+            )
+            Window(
+                parent=None,
+                title="Unreal application name format",
+                message=msg,
+                level="critical")
+            raise ValueError(msg)
+
         ue_version = app.name.replace("-", ".")
         unreal_plugin_path = os.path.join(
             UNREAL_ROOT_DIR, "integration",
"UE_{}".format(ue_version), "Ayon" diff --git a/openpype/hosts/unreal/api/__init__.py b/openpype/hosts/unreal/api/__init__.py index de0fce13d5..ac6a91eae9 100644 --- a/openpype/hosts/unreal/api/__init__.py +++ b/openpype/hosts/unreal/api/__init__.py @@ -22,6 +22,8 @@ from .pipeline import ( show_tools_popup, instantiate, UnrealHost, + set_sequence_hierarchy, + generate_sequence, maintained_selection ) @@ -41,5 +43,7 @@ __all__ = [ "show_tools_popup", "instantiate", "UnrealHost", + "set_sequence_hierarchy", + "generate_sequence", "maintained_selection" ] diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index bb45fa8c01..72816c9b81 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ b/openpype/hosts/unreal/api/pipeline.py @@ -9,12 +9,14 @@ import time import pyblish.api +from openpype.client import get_asset_by_name, get_assets from openpype.pipeline import ( register_loader_plugin_path, register_creator_plugin_path, deregister_loader_plugin_path, deregister_creator_plugin_path, AYON_CONTAINER_ID, + legacy_io, ) from openpype.tools.utils import host_tools import openpype.hosts.unreal @@ -512,6 +514,141 @@ def get_subsequences(sequence: unreal.LevelSequence): return [] +def set_sequence_hierarchy( + seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths +): + # Get existing sequencer tracks or create them if they don't exist + tracks = seq_i.get_master_tracks() + subscene_track = None + visibility_track = None + for t in tracks: + if t.get_class() == unreal.MovieSceneSubTrack.static_class(): + subscene_track = t + if (t.get_class() == + unreal.MovieSceneLevelVisibilityTrack.static_class()): + visibility_track = t + if not subscene_track: + subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack) + if not visibility_track: + visibility_track = seq_i.add_master_track( + unreal.MovieSceneLevelVisibilityTrack) + + # Create the sub-scene section + subscenes = subscene_track.get_sections() + subscene = None + 
for s in subscenes: + if s.get_editor_property('sub_sequence') == seq_j: + subscene = s + break + if not subscene: + subscene = subscene_track.add_section() + subscene.set_row_index(len(subscene_track.get_sections())) + subscene.set_editor_property('sub_sequence', seq_j) + subscene.set_range( + min_frame_j, + max_frame_j + 1) + + # Create the visibility section + ar = unreal.AssetRegistryHelpers.get_asset_registry() + maps = [] + for m in map_paths: + # Unreal requires to load the level to get the map name + unreal.EditorLevelLibrary.save_all_dirty_levels() + unreal.EditorLevelLibrary.load_level(m) + maps.append(str(ar.get_asset_by_object_path(m).asset_name)) + + vis_section = visibility_track.add_section() + index = len(visibility_track.get_sections()) + + vis_section.set_range( + min_frame_j, + max_frame_j + 1) + vis_section.set_visibility(unreal.LevelVisibility.VISIBLE) + vis_section.set_row_index(index) + vis_section.set_level_names(maps) + + if min_frame_j > 1: + hid_section = visibility_track.add_section() + hid_section.set_range( + 1, + min_frame_j) + hid_section.set_visibility(unreal.LevelVisibility.HIDDEN) + hid_section.set_row_index(index) + hid_section.set_level_names(maps) + if max_frame_j < max_frame_i: + hid_section = visibility_track.add_section() + hid_section.set_range( + max_frame_j + 1, + max_frame_i + 1) + hid_section.set_visibility(unreal.LevelVisibility.HIDDEN) + hid_section.set_row_index(index) + hid_section.set_level_names(maps) + + +def generate_sequence(h, h_dir): + tools = unreal.AssetToolsHelpers().get_asset_tools() + + sequence = tools.create_asset( + asset_name=h, + package_path=h_dir, + asset_class=unreal.LevelSequence, + factory=unreal.LevelSequenceFactoryNew() + ) + + project_name = legacy_io.active_project() + asset_data = get_asset_by_name( + project_name, + h_dir.split('/')[-1], + fields=["_id", "data.fps"] + ) + + start_frames = [] + end_frames = [] + + elements = list(get_assets( + project_name, + 
parent_ids=[asset_data["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) + for e in elements: + start_frames.append(e.get('data').get('clipIn')) + end_frames.append(e.get('data').get('clipOut')) + + elements.extend(get_assets( + project_name, + parent_ids=[e["_id"]], + fields=["_id", "data.clipIn", "data.clipOut"] + )) + + min_frame = min(start_frames) + max_frame = max(end_frames) + + fps = asset_data.get('data').get("fps") + + sequence.set_display_rate( + unreal.FrameRate(fps, 1.0)) + sequence.set_playback_start(min_frame) + sequence.set_playback_end(max_frame) + + sequence.set_work_range_start(min_frame / fps) + sequence.set_work_range_end(max_frame / fps) + sequence.set_view_range_start(min_frame / fps) + sequence.set_view_range_end(max_frame / fps) + + tracks = sequence.get_master_tracks() + track = None + for t in tracks: + if (t.get_class() == + unreal.MovieSceneCameraCutTrack.static_class()): + track = t + break + if not track: + track = sequence.add_master_track( + unreal.MovieSceneCameraCutTrack) + + return sequence, (min_frame, max_frame) + + @contextmanager def maintained_selection(): """Stub to be either implemented or replaced. 
diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py index 072b3b1467..59ea14697d 100644 --- a/openpype/hosts/unreal/plugins/load/load_camera.py +++ b/openpype/hosts/unreal/plugins/load/load_camera.py @@ -3,16 +3,24 @@ from pathlib import Path import unreal -from unreal import EditorAssetLibrary -from unreal import EditorLevelLibrary -from unreal import EditorLevelUtils -from openpype.client import get_assets, get_asset_by_name +from unreal import ( + EditorAssetLibrary, + EditorLevelLibrary, + EditorLevelUtils, + LevelSequenceEditorBlueprintLibrary as LevelSequenceLib, +) +from openpype.client import get_asset_by_name from openpype.pipeline import ( AYON_CONTAINER_ID, legacy_io, ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + generate_sequence, + set_sequence_hierarchy, + create_container, + imprint, +) class CameraLoader(plugin.Loader): @@ -24,32 +32,6 @@ class CameraLoader(plugin.Loader): icon = "cube" color = "orange" - def _set_sequence_hierarchy( - self, seq_i, seq_j, min_frame_j, max_frame_j - ): - tracks = seq_i.get_master_tracks() - track = None - for t in tracks: - if t.get_class() == unreal.MovieSceneSubTrack.static_class(): - track = t - break - if not track: - track = seq_i.add_master_track(unreal.MovieSceneSubTrack) - - subscenes = track.get_sections() - subscene = None - for s in subscenes: - if s.get_editor_property('sub_sequence') == seq_j: - subscene = s - break - if not subscene: - subscene = track.add_section() - subscene.set_row_index(len(track.get_sections())) - subscene.set_editor_property('sub_sequence', seq_j) - subscene.set_range( - min_frame_j, - max_frame_j + 1) - def _import_camera( self, world, sequence, bindings, import_fbx_settings, import_filename ): @@ -110,10 +92,7 @@ class CameraLoader(plugin.Loader): hierarchy_dir_list.append(hierarchy_dir) asset = 
context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) - else: - asset_name = "{}".format(name) + asset_name = f"{asset}_{name}" if asset else f"{name}" tools = unreal.AssetToolsHelpers().get_asset_tools() @@ -127,23 +106,15 @@ class CameraLoader(plugin.Loader): # Get highest number to make a unique name folders = [a for a in asset_content if a[-1] == "/" and f"{name}_" in a] - f_numbers = [] - for f in folders: - # Get number from folder name. Splits the string by "_" and - # removes the last element (which is a "/"). - f_numbers.append(int(f.split("_")[-1][:-1])) + # Get number from folder name. Splits the string by "_" and + # removes the last element (which is a "/"). + f_numbers = [int(f.split("_")[-1][:-1]) for f in folders] f_numbers.sort() - if not f_numbers: - unique_number = 1 - else: - unique_number = f_numbers[-1] + 1 + unique_number = f_numbers[-1] + 1 if f_numbers else 1 asset_dir, container_name = tools.create_unique_asset_name( f"{hierarchy_dir}/{asset}/{name}_{unique_number:02d}", suffix="") - asset_path = Path(asset_dir) - asset_path_parent = str(asset_path.parent.as_posix()) - container_name += suffix EditorAssetLibrary.make_directory(asset_dir) @@ -156,9 +127,9 @@ class CameraLoader(plugin.Loader): if not EditorAssetLibrary.does_asset_exist(master_level): EditorLevelLibrary.new_level(f"{h_dir}/{h_asset}_map") - level = f"{asset_path_parent}/{asset}_map.{asset}_map" + level = f"{asset_dir}/{asset}_map_camera.{asset}_map_camera" if not EditorAssetLibrary.does_asset_exist(level): - EditorLevelLibrary.new_level(f"{asset_path_parent}/{asset}_map") + EditorLevelLibrary.new_level(f"{asset_dir}/{asset}_map_camera") EditorLevelLibrary.load_level(master_level) EditorLevelUtils.add_level_to_world( @@ -169,27 +140,13 @@ class CameraLoader(plugin.Loader): EditorLevelLibrary.save_all_dirty_levels() EditorLevelLibrary.load_level(level) - project_name = legacy_io.active_project() - # TODO refactor - # - Creating 
of hierarchy should be a function in unreal integration - # - it's used in multiple loaders but must not be loader's logic - # - hard to say what is purpose of the loop - # - variables does not match their meaning - # - why scene is stored to sequences? - # - asset documents vs. elements - # - cleanup variable names in whole function - # - e.g. 'asset', 'asset_name', 'asset_data', 'asset_doc' - # - really inefficient queries of asset documents - # - existing asset in scene is considered as "with correct values" - # - variable 'elements' is modified during it's loop # Get all the sequences in the hierarchy. It will create them, if # they don't exist. - sequences = [] frame_ranges = [] - i = 0 - for h in hierarchy_dir_list: + sequences = [] + for (h_dir, h) in zip(hierarchy_dir_list, hierarchy): root_content = EditorAssetLibrary.list_assets( - h, recursive=False, include_folder=False) + h_dir, recursive=False, include_folder=False) existing_sequences = [ EditorAssetLibrary.find_asset_data(asset) @@ -198,57 +155,17 @@ class CameraLoader(plugin.Loader): asset).get_class().get_name() == 'LevelSequence' ] - if not existing_sequences: - scene = tools.create_asset( - asset_name=hierarchy[i], - package_path=h, - asset_class=unreal.LevelSequence, - factory=unreal.LevelSequenceFactoryNew() - ) - - asset_data = get_asset_by_name( - project_name, - h.split('/')[-1], - fields=["_id", "data.fps"] - ) - - start_frames = [] - end_frames = [] - - elements = list(get_assets( - project_name, - parent_ids=[asset_data["_id"]], - fields=["_id", "data.clipIn", "data.clipOut"] - )) - - for e in elements: - start_frames.append(e.get('data').get('clipIn')) - end_frames.append(e.get('data').get('clipOut')) - - elements.extend(get_assets( - project_name, - parent_ids=[e["_id"]], - fields=["_id", "data.clipIn", "data.clipOut"] - )) - - min_frame = min(start_frames) - max_frame = max(end_frames) - - scene.set_display_rate( - unreal.FrameRate(asset_data.get('data').get("fps"), 1.0)) - 
scene.set_playback_start(min_frame) - scene.set_playback_end(max_frame) - - sequences.append(scene) - frame_ranges.append((min_frame, max_frame)) - else: - for e in existing_sequences: - sequences.append(e.get_asset()) + if existing_sequences: + for seq in existing_sequences: + sequences.append(seq.get_asset()) frame_ranges.append(( - e.get_asset().get_playback_start(), - e.get_asset().get_playback_end())) + seq.get_asset().get_playback_start(), + seq.get_asset().get_playback_end())) + else: + sequence, frame_range = generate_sequence(h, h_dir) - i += 1 + sequences.append(sequence) + frame_ranges.append(frame_range) EditorAssetLibrary.make_directory(asset_dir) @@ -260,19 +177,24 @@ class CameraLoader(plugin.Loader): ) # Add sequences data to hierarchy - for i in range(0, len(sequences) - 1): - self._set_sequence_hierarchy( + for i in range(len(sequences) - 1): + set_sequence_hierarchy( sequences[i], sequences[i + 1], - frame_ranges[i + 1][0], frame_ranges[i + 1][1]) + frame_ranges[i][1], + frame_ranges[i + 1][0], frame_ranges[i + 1][1], + [level]) + project_name = legacy_io.active_project() data = get_asset_by_name(project_name, asset)["data"] cam_seq.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) cam_seq.set_playback_start(data.get('clipIn')) cam_seq.set_playback_end(data.get('clipOut') + 1) - self._set_sequence_hierarchy( + set_sequence_hierarchy( sequences[-1], cam_seq, - data.get('clipIn'), data.get('clipOut')) + frame_ranges[-1][1], + data.get('clipIn'), data.get('clipOut'), + [level]) settings = unreal.MovieSceneUserImportFBXSettings() settings.set_editor_property('reduce_keys', False) @@ -307,7 +229,7 @@ class CameraLoader(plugin.Loader): key.set_time(unreal.FrameNumber(value=new_time)) # Create Asset Container - unreal_pipeline.create_container( + create_container( container=container_name, path=asset_dir) data = { @@ -322,14 +244,14 @@ class CameraLoader(plugin.Loader): "parent": context["representation"]["parent"], "family": 
context["representation"]["context"]["family"] } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + imprint(f"{asset_dir}/{container_name}", data) EditorLevelLibrary.save_all_dirty_levels() EditorLevelLibrary.load_level(master_level) + # Save all assets in the hierarchy asset_content = EditorAssetLibrary.list_assets( - asset_dir, recursive=True, include_folder=True + hierarchy_dir_list[0], recursive=True, include_folder=False ) for a in asset_content: @@ -340,29 +262,27 @@ class CameraLoader(plugin.Loader): def update(self, container, representation): ar = unreal.AssetRegistryHelpers.get_asset_registry() - root = "/Game/ayon" + curr_level_sequence = LevelSequenceLib.get_current_level_sequence() + curr_time = LevelSequenceLib.get_current_time() + is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport() + + editor_subsystem = unreal.UnrealEditorSubsystem() + vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info() asset_dir = container.get('namespace') - context = representation.get("context") - - hierarchy = context.get('hierarchy').split("/") - h_dir = f"{root}/{hierarchy[0]}" - h_asset = hierarchy[0] - master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map" - EditorLevelLibrary.save_current_level() - filter = unreal.ARFilter( + _filter = unreal.ARFilter( class_names=["LevelSequence"], package_paths=[asset_dir], recursive_paths=False) - sequences = ar.get_assets(filter) - filter = unreal.ARFilter( + sequences = ar.get_assets(_filter) + _filter = unreal.ARFilter( class_names=["World"], - package_paths=[str(Path(asset_dir).parent.as_posix())], + package_paths=[asset_dir], recursive_paths=True) - maps = ar.get_assets(filter) + maps = ar.get_assets(_filter) # There should be only one map in the list EditorLevelLibrary.load_level(maps[0].get_asset().get_path_name()) @@ -401,12 +321,18 @@ class CameraLoader(plugin.Loader): root = "/Game/Ayon" namespace = container.get('namespace').replace(f"{root}/", "") ms_asset = 
namespace.split('/')[0] - filter = unreal.ARFilter( + _filter = unreal.ARFilter( class_names=["LevelSequence"], package_paths=[f"{root}/{ms_asset}"], recursive_paths=False) - sequences = ar.get_assets(filter) + sequences = ar.get_assets(_filter) master_sequence = sequences[0].get_asset() + _filter = unreal.ARFilter( + class_names=["World"], + package_paths=[f"{root}/{ms_asset}"], + recursive_paths=False) + levels = ar.get_assets(_filter) + master_level = levels[0].get_asset().get_path_name() sequences = [master_sequence] @@ -418,26 +344,20 @@ class CameraLoader(plugin.Loader): for t in tracks: if t.get_class() == unreal.MovieSceneSubTrack.static_class(): subscene_track = t - break if subscene_track: sections = subscene_track.get_sections() for ss in sections: if ss.get_sequence().get_name() == sequence_name: parent = s sub_scene = ss - # subscene_track.remove_section(ss) break sequences.append(ss.get_sequence()) - # Update subscenes indexes. - i = 0 - for ss in sections: + for i, ss in enumerate(sections): ss.set_row_index(i) - i += 1 - if parent: break - assert parent, "Could not find the parent sequence" + assert parent, "Could not find the parent sequence" EditorAssetLibrary.delete_asset(level_sequence.get_path_name()) @@ -466,33 +386,63 @@ class CameraLoader(plugin.Loader): str(representation["data"]["path"]) ) + # Set range of all sections + # Changing the range of the section is not enough. We need to change + # the frame of all the keys in the section. 
+ project_name = legacy_io.active_project() + asset = container.get('asset') + data = get_asset_by_name(project_name, asset)["data"] + + for possessable in new_sequence.get_possessables(): + for tracks in possessable.get_tracks(): + for section in tracks.get_sections(): + section.set_range( + data.get('clipIn'), + data.get('clipOut') + 1) + for channel in section.get_all_channels(): + for key in channel.get_keys(): + old_time = key.get_time().get_editor_property( + 'frame_number') + old_time_value = old_time.get_editor_property( + 'value') + new_time = old_time_value + ( + data.get('clipIn') - data.get('frameStart') + ) + key.set_time(unreal.FrameNumber(value=new_time)) + data = { "representation": str(representation["_id"]), "parent": str(representation["parent"]) } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container.get('container_name')), data) + imprint(f"{asset_dir}/{container.get('container_name')}", data) EditorLevelLibrary.save_current_level() asset_content = EditorAssetLibrary.list_assets( - asset_dir, recursive=True, include_folder=False) + f"{root}/{ms_asset}", recursive=True, include_folder=False) for a in asset_content: EditorAssetLibrary.save_asset(a) EditorLevelLibrary.load_level(master_level) + if curr_level_sequence: + LevelSequenceLib.open_level_sequence(curr_level_sequence) + LevelSequenceLib.set_current_time(curr_time) + LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock) + + editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot) + def remove(self, container): - path = Path(container.get("namespace")) - parent_path = str(path.parent.as_posix()) + asset_dir = container.get('namespace') + path = Path(asset_dir) ar = unreal.AssetRegistryHelpers.get_asset_registry() - filter = unreal.ARFilter( + _filter = unreal.ARFilter( class_names=["LevelSequence"], - package_paths=[f"{str(path.as_posix())}"], + package_paths=[asset_dir], recursive_paths=False) - sequences = ar.get_assets(filter) + sequences = 
ar.get_assets(_filter) if not sequences: raise Exception("Could not find sequence.") @@ -500,11 +450,11 @@ class CameraLoader(plugin.Loader): world = ar.get_asset_by_object_path( EditorLevelLibrary.get_editor_world().get_path_name()) - filter = unreal.ARFilter( + _filter = unreal.ARFilter( class_names=["World"], - package_paths=[f"{parent_path}"], + package_paths=[asset_dir], recursive_paths=True) - maps = ar.get_assets(filter) + maps = ar.get_assets(_filter) # There should be only one map in the list if not maps: @@ -534,12 +484,18 @@ class CameraLoader(plugin.Loader): root = "/Game/Ayon" namespace = container.get('namespace').replace(f"{root}/", "") ms_asset = namespace.split('/')[0] - filter = unreal.ARFilter( + _filter = unreal.ARFilter( class_names=["LevelSequence"], package_paths=[f"{root}/{ms_asset}"], recursive_paths=False) - sequences = ar.get_assets(filter) + sequences = ar.get_assets(_filter) master_sequence = sequences[0].get_asset() + _filter = unreal.ARFilter( + class_names=["World"], + package_paths=[f"{root}/{ms_asset}"], + recursive_paths=False) + levels = ar.get_assets(_filter) + master_level = levels[0].get_full_name() sequences = [master_sequence] @@ -547,10 +503,13 @@ class CameraLoader(plugin.Loader): for s in sequences: tracks = s.get_master_tracks() subscene_track = None + visibility_track = None for t in tracks: if t.get_class() == unreal.MovieSceneSubTrack.static_class(): subscene_track = t - break + if (t.get_class() == + unreal.MovieSceneLevelVisibilityTrack.static_class()): + visibility_track = t if subscene_track: sections = subscene_track.get_sections() for ss in sections: @@ -560,23 +519,48 @@ class CameraLoader(plugin.Loader): break sequences.append(ss.get_sequence()) # Update subscenes indexes. 
- i = 0 - for ss in sections: + for i, ss in enumerate(sections): ss.set_row_index(i) - i += 1 + if visibility_track: + sections = visibility_track.get_sections() + for ss in sections: + if (unreal.Name(f"{container.get('asset')}_map_camera") + in ss.get_level_names()): + visibility_track.remove_section(ss) + # Update visibility sections indexes. + i = -1 + prev_name = [] + for ss in sections: + if prev_name != ss.get_level_names(): + i += 1 + ss.set_row_index(i) + prev_name = ss.get_level_names() if parent: break assert parent, "Could not find the parent sequence" - EditorAssetLibrary.delete_directory(str(path.as_posix())) + # Create a temporary level to delete the layout level. + EditorLevelLibrary.save_all_dirty_levels() + EditorAssetLibrary.make_directory(f"{root}/tmp") + tmp_level = f"{root}/tmp/temp_map" + if not EditorAssetLibrary.does_asset_exist(f"{tmp_level}.temp_map"): + EditorLevelLibrary.new_level(tmp_level) + else: + EditorLevelLibrary.load_level(tmp_level) + + # Delete the layout directory. + EditorAssetLibrary.delete_directory(asset_dir) + + EditorLevelLibrary.load_level(master_level) + EditorAssetLibrary.delete_directory(f"{root}/tmp") # Check if there isn't any more assets in the parent folder, and # delete it if not. 
asset_content = EditorAssetLibrary.list_assets( - parent_path, recursive=False, include_folder=True + path.parent.as_posix(), recursive=False, include_folder=True ) if len(asset_content) == 0: - EditorAssetLibrary.delete_directory(parent_path) + EditorAssetLibrary.delete_directory(path.parent.as_posix()) diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index d94e6e5837..86b2e1456c 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -5,15 +5,18 @@ import collections from pathlib import Path import unreal -from unreal import EditorAssetLibrary -from unreal import EditorLevelLibrary -from unreal import EditorLevelUtils -from unreal import AssetToolsHelpers -from unreal import FBXImportType -from unreal import MovieSceneLevelVisibilityTrack -from unreal import MovieSceneSubTrack +from unreal import ( + EditorAssetLibrary, + EditorLevelLibrary, + EditorLevelUtils, + AssetToolsHelpers, + FBXImportType, + MovieSceneLevelVisibilityTrack, + MovieSceneSubTrack, + LevelSequenceEditorBlueprintLibrary as LevelSequenceLib, +) -from openpype.client import get_asset_by_name, get_assets, get_representations +from openpype.client import get_asset_by_name, get_representations from openpype.pipeline import ( discover_loader_plugins, loaders_from_representation, @@ -25,7 +28,13 @@ from openpype.pipeline import ( from openpype.pipeline.context_tools import get_current_project_asset from openpype.settings import get_current_project_settings from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + generate_sequence, + set_sequence_hierarchy, + create_container, + imprint, + ls, +) class LayoutLoader(plugin.Loader): @@ -91,77 +100,6 @@ class LayoutLoader(plugin.Loader): return None - @staticmethod - def _set_sequence_hierarchy( - seq_i, seq_j, 
max_frame_i, min_frame_j, max_frame_j, map_paths - ): - # Get existing sequencer tracks or create them if they don't exist - tracks = seq_i.get_master_tracks() - subscene_track = None - visibility_track = None - for t in tracks: - if t.get_class() == unreal.MovieSceneSubTrack.static_class(): - subscene_track = t - if (t.get_class() == - unreal.MovieSceneLevelVisibilityTrack.static_class()): - visibility_track = t - if not subscene_track: - subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack) - if not visibility_track: - visibility_track = seq_i.add_master_track( - unreal.MovieSceneLevelVisibilityTrack) - - # Create the sub-scene section - subscenes = subscene_track.get_sections() - subscene = None - for s in subscenes: - if s.get_editor_property('sub_sequence') == seq_j: - subscene = s - break - if not subscene: - subscene = subscene_track.add_section() - subscene.set_row_index(len(subscene_track.get_sections())) - subscene.set_editor_property('sub_sequence', seq_j) - subscene.set_range( - min_frame_j, - max_frame_j + 1) - - # Create the visibility section - ar = unreal.AssetRegistryHelpers.get_asset_registry() - maps = [] - for m in map_paths: - # Unreal requires to load the level to get the map name - EditorLevelLibrary.save_all_dirty_levels() - EditorLevelLibrary.load_level(m) - maps.append(str(ar.get_asset_by_object_path(m).asset_name)) - - vis_section = visibility_track.add_section() - index = len(visibility_track.get_sections()) - - vis_section.set_range( - min_frame_j, - max_frame_j + 1) - vis_section.set_visibility(unreal.LevelVisibility.VISIBLE) - vis_section.set_row_index(index) - vis_section.set_level_names(maps) - - if min_frame_j > 1: - hid_section = visibility_track.add_section() - hid_section.set_range( - 1, - min_frame_j) - hid_section.set_visibility(unreal.LevelVisibility.HIDDEN) - hid_section.set_row_index(index) - hid_section.set_level_names(maps) - if max_frame_j < max_frame_i: - hid_section = visibility_track.add_section() - 
hid_section.set_range( - max_frame_j + 1, - max_frame_i + 1) - hid_section.set_visibility(unreal.LevelVisibility.HIDDEN) - hid_section.set_row_index(index) - hid_section.set_level_names(maps) - def _transform_from_basis(self, transform, basis): """Transform a transform from a basis to a new basis.""" # Get the basis matrix @@ -352,63 +290,6 @@ class LayoutLoader(plugin.Loader): sec_params = section.get_editor_property('params') sec_params.set_editor_property('animation', animation) - @staticmethod - def _generate_sequence(h, h_dir): - tools = unreal.AssetToolsHelpers().get_asset_tools() - - sequence = tools.create_asset( - asset_name=h, - package_path=h_dir, - asset_class=unreal.LevelSequence, - factory=unreal.LevelSequenceFactoryNew() - ) - - project_name = legacy_io.active_project() - asset_data = get_asset_by_name( - project_name, - h_dir.split('/')[-1], - fields=["_id", "data.fps"] - ) - - start_frames = [] - end_frames = [] - - elements = list(get_assets( - project_name, - parent_ids=[asset_data["_id"]], - fields=["_id", "data.clipIn", "data.clipOut"] - )) - for e in elements: - start_frames.append(e.get('data').get('clipIn')) - end_frames.append(e.get('data').get('clipOut')) - - elements.extend(get_assets( - project_name, - parent_ids=[e["_id"]], - fields=["_id", "data.clipIn", "data.clipOut"] - )) - - min_frame = min(start_frames) - max_frame = max(end_frames) - - sequence.set_display_rate( - unreal.FrameRate(asset_data.get('data').get("fps"), 1.0)) - sequence.set_playback_start(min_frame) - sequence.set_playback_end(max_frame) - - tracks = sequence.get_master_tracks() - track = None - for t in tracks: - if (t.get_class() == - unreal.MovieSceneCameraCutTrack.static_class()): - track = t - break - if not track: - track = sequence.add_master_track( - unreal.MovieSceneCameraCutTrack) - - return sequence, (min_frame, max_frame) - def _get_repre_docs_by_version_id(self, data): version_ids = { element.get("version") @@ -696,7 +577,7 @@ class 
LayoutLoader(plugin.Loader): ] if not existing_sequences: - sequence, frame_range = self._generate_sequence(h, h_dir) + sequence, frame_range = generate_sequence(h, h_dir) sequences.append(sequence) frame_ranges.append(frame_range) @@ -716,7 +597,7 @@ class LayoutLoader(plugin.Loader): # sequences and frame_ranges have the same length for i in range(0, len(sequences) - 1): - self._set_sequence_hierarchy( + set_sequence_hierarchy( sequences[i], sequences[i + 1], frame_ranges[i][1], frame_ranges[i + 1][0], frame_ranges[i + 1][1], @@ -729,7 +610,7 @@ class LayoutLoader(plugin.Loader): shot.set_playback_start(0) shot.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1) if sequences: - self._set_sequence_hierarchy( + set_sequence_hierarchy( sequences[-1], shot, frame_ranges[-1][1], data.get('clipIn'), data.get('clipOut'), @@ -745,7 +626,7 @@ class LayoutLoader(plugin.Loader): EditorLevelLibrary.save_current_level() # Create Asset Container - unreal_pipeline.create_container( + create_container( container=container_name, path=asset_dir) data = { @@ -761,11 +642,13 @@ class LayoutLoader(plugin.Loader): "family": context["representation"]["context"]["family"], "loaded_assets": loaded_assets } - unreal_pipeline.imprint( + imprint( "{}/{}".format(asset_dir, container_name), data) + save_dir = hierarchy_dir_list[0] if create_sequences else asset_dir + asset_content = EditorAssetLibrary.list_assets( - asset_dir, recursive=True, include_folder=False) + save_dir, recursive=True, include_folder=False) for a in asset_content: EditorAssetLibrary.save_asset(a) @@ -781,16 +664,24 @@ class LayoutLoader(plugin.Loader): ar = unreal.AssetRegistryHelpers.get_asset_registry() + curr_level_sequence = LevelSequenceLib.get_current_level_sequence() + curr_time = LevelSequenceLib.get_current_time() + is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport() + + editor_subsystem = unreal.UnrealEditorSubsystem() + vp_loc, vp_rot = 
editor_subsystem.get_level_viewport_camera_info() + root = "/Game/Ayon" asset_dir = container.get('namespace') context = representation.get("context") + hierarchy = context.get('hierarchy').split("/") + sequence = None master_level = None if create_sequences: - hierarchy = context.get('hierarchy').split("/") h_dir = f"{root}/{hierarchy[0]}" h_asset = hierarchy[0] master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map" @@ -843,13 +734,15 @@ class LayoutLoader(plugin.Loader): "parent": str(representation["parent"]), "loaded_assets": loaded_assets } - unreal_pipeline.imprint( + imprint( "{}/{}".format(asset_dir, container.get('container_name')), data) EditorLevelLibrary.save_current_level() + save_dir = f"{root}/{hierarchy[0]}" if create_sequences else asset_dir + asset_content = EditorAssetLibrary.list_assets( - asset_dir, recursive=True, include_folder=False) + save_dir, recursive=True, include_folder=False) for a in asset_content: EditorAssetLibrary.save_asset(a) @@ -859,6 +752,13 @@ class LayoutLoader(plugin.Loader): elif prev_level: EditorLevelLibrary.load_level(prev_level) + if curr_level_sequence: + LevelSequenceLib.open_level_sequence(curr_level_sequence) + LevelSequenceLib.set_current_time(curr_time) + LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock) + + editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot) + def remove(self, container): """ Delete the layout. 
First, check if the assets loaded with the layout @@ -870,7 +770,7 @@ class LayoutLoader(plugin.Loader): root = "/Game/Ayon" path = Path(container.get("namespace")) - containers = unreal_pipeline.ls() + containers = ls() layout_containers = [ c for c in containers if (c.get('asset_name') != container.get('asset_name') and diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index e7a690ac9c..2b7e1375e6 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -6,6 +6,8 @@ import subprocess from distutils import dir_util from pathlib import Path from typing import List, Union +import tempfile +from distutils.dir_util import copy_tree import openpype.hosts.unreal.lib as ue_lib @@ -90,9 +92,20 @@ class UEProjectGenerationWorker(QtCore.QObject): ("Generating a new UE project ... 1 out of " f"{stage_count}")) + # Need to copy the commandlet project to a temporary folder where + # users don't need admin rights to write to. + cmdlet_tmp = tempfile.TemporaryDirectory() + cmdlet_filename = cmdlet_project.name + cmdlet_dir = cmdlet_project.parent.as_posix() + cmdlet_tmp_name = Path(cmdlet_tmp.name) + cmdlet_tmp_file = cmdlet_tmp_name.joinpath(cmdlet_filename) + copy_tree( + cmdlet_dir, + cmdlet_tmp_name.as_posix()) + commandlet_cmd = [ f"{ue_editor_exe.as_posix()}", - f"{cmdlet_project.as_posix()}", + f"{cmdlet_tmp_file.as_posix()}", "-run=AyonGenerateProject", f"{project_file.resolve().as_posix()}", ] @@ -111,6 +124,8 @@ class UEProjectGenerationWorker(QtCore.QObject): gen_process.stdout.close() return_code = gen_process.wait() + cmdlet_tmp.cleanup() + if return_code and return_code != 0: msg = ( f"Failed to generate {self.project_name} " diff --git a/openpype/lib/project_backpack.py b/openpype/lib/project_backpack.py index 07107ec011..674eaa3b91 100644 --- a/openpype/lib/project_backpack.py +++ b/openpype/lib/project_backpack.py @@ -113,26 +113,29 @@ def pack_project( project_name )) - roots = 
project_doc["config"]["roots"] - # Determine root directory of project - source_root = None - source_root_name = None - for root_name, root_value in roots.items(): - if source_root is not None: - raise ValueError( - "Packaging is supported only for single root projects" - ) - source_root = root_value - source_root_name = root_name + root_path = None + source_root = {} + project_source_path = None + if not only_documents: + roots = project_doc["config"]["roots"] + # Determine root directory of project + source_root_name = None + for root_name, root_value in roots.items(): + if source_root: + raise ValueError( + "Packaging is supported only for single root projects" + ) + source_root = root_value + source_root_name = root_name - root_path = source_root[platform.system().lower()] - print("Using root \"{}\" with path \"{}\"".format( - source_root_name, root_path - )) + root_path = source_root[platform.system().lower()] + print("Using root \"{}\" with path \"{}\"".format( + source_root_name, root_path + )) - project_source_path = os.path.join(root_path, project_name) - if not os.path.exists(project_source_path): - raise ValueError("Didn't find source of project files") + project_source_path = os.path.join(root_path, project_name) + if not os.path.exists(project_source_path): + raise ValueError("Didn't find source of project files") # Determine zip filepath where data will be stored if not destination_dir: @@ -273,8 +276,7 @@ def unpack_project( low_platform = platform.system().lower() project_name = metadata["project_name"] - source_root = metadata["root"] - root_path = source_root[low_platform] + root_path = metadata["root"].get(low_platform) # Drop existing collection replace_project_documents(project_name, docs, database_name) diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py index 9981bead3e..2de6073e29 100644 ---
a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py +++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py @@ -5,23 +5,26 @@ This is resolving index of server lists stored in `deadlineServers` instance attribute or using default server if that attribute doesn't exist. """ +from maya import cmds + import pyblish.api class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): """Collect Deadline Webservice URL from instance.""" - order = pyblish.api.CollectorOrder + 0.415 + # Run before collect_render. + order = pyblish.api.CollectorOrder + 0.005 label = "Deadline Webservice from the Instance" families = ["rendering", "renderlayer"] + hosts = ["maya"] def process(self, instance): instance.data["deadlineUrl"] = self._collect_deadline_url(instance) self.log.info( "Using {} for submission.".format(instance.data["deadlineUrl"])) - @staticmethod - def _collect_deadline_url(render_instance): + def _collect_deadline_url(self, render_instance): # type: (pyblish.api.Instance) -> str """Get Deadline Webservice URL from render instance. @@ -49,8 +52,16 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): default_server = render_instance.context.data["defaultDeadline"] instance_server = render_instance.data.get("deadlineServers") if not instance_server: + self.log.debug("Using default server.") return default_server + # Get instance server as string. + if isinstance(instance_server, int): + instance_server = cmds.getAttr( + "{}.deadlineServers".format(render_instance.data["objset"]), + asString=True + ) + default_servers = deadline_settings["deadline_urls"] project_servers = ( render_instance.context.data @@ -58,15 +69,23 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): ["deadline"] ["deadline_servers"] ) - deadline_servers = { + if not project_servers: + self.log.debug("No project servers found. 
Using default servers.") + return default_servers[instance_server] + + project_enabled_servers = { k: default_servers[k] for k in project_servers if k in default_servers } - # This is Maya specific and may not reflect real selection of deadline - # url as dictionary keys in Python 2 are not ordered - return deadline_servers[ - list(deadline_servers.keys())[ - int(render_instance.data.get("deadlineServers")) - ] - ] + + msg = ( + "\"{}\" server on instance is not enabled in project settings." + " Enabled project servers:\n{}".format( + instance_server, project_enabled_servers + ) + ) + assert instance_server in project_enabled_servers, msg + + self.log.debug("Using project approved server.") + return project_enabled_servers[instance_server] diff --git a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py index cb2b0cf156..1a0d615dc3 100644 --- a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py +++ b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py @@ -17,7 +17,8 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin): `CollectDeadlineServerFromInstance`. """ - order = pyblish.api.CollectorOrder + 0.410 + # Run before collect_deadline_server_instance. + order = pyblish.api.CollectorOrder + 0.0025 label = "Default Deadline Webservice" pass_mongo_url = False diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index 73ab689c9a..254914a850 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -1,19 +1,27 @@ +import hou + import os -import json +import attr import getpass from datetime import datetime - -import requests import pyblish.api -# import hou ??? 
- from openpype.pipeline import legacy_io from openpype.tests.lib import is_in_tests +from openpype_modules.deadline import abstract_submit_deadline +from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo from openpype.lib import is_running_from_build -class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin): +@attr.s +class DeadlinePluginInfo(): + SceneFile = attr.ib(default=None) + OutputDriver = attr.ib(default=None) + Version = attr.ib(default=None) + IgnoreInputs = attr.ib(default=True) + + +class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): """Submit Solaris USD Render ROPs to Deadline. Renders are submitted to a Deadline Web Service as @@ -30,83 +38,57 @@ class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder hosts = ["houdini"] families = ["usdrender", - "redshift_rop"] + "redshift_rop", + "arnold_rop", + "mantra_rop", + "karma_rop", + "vray_rop"] targets = ["local"] + use_published = True - def process(self, instance): + def get_job_info(self): + job_info = DeadlineJobInfo(Plugin="Houdini") + instance = self._instance context = instance.context - code = context.data["code"] + filepath = context.data["currentFile"] filename = os.path.basename(filepath) - comment = context.data.get("comment", "") - deadline_user = context.data.get("deadlineUser", getpass.getuser()) - jobname = "%s - %s" % (filename, instance.name) - # Support code prefix label for batch name - batch_name = filename - if code: - batch_name = "{0} - {1}".format(code, batch_name) + job_info.Name = "{} - {}".format(filename, instance.name) + job_info.BatchName = filename + job_info.Plugin = "Houdini" + job_info.UserName = context.data.get( + "deadlineUser", getpass.getuser()) if is_in_tests(): - batch_name += datetime.now().strftime("%d%m%Y%H%M%S") + job_info.BatchName += datetime.now().strftime("%d%m%Y%H%M%S") - # Output driver to render - driver = instance[0] - - # StartFrame to EndFrame by 
byFrameStep + # Deadline requires integers in frame range frames = "{start}-{end}x{step}".format( start=int(instance.data["frameStart"]), end=int(instance.data["frameEnd"]), step=int(instance.data["byFrameStep"]), ) + job_info.Frames = frames - # Documentation for keys available at: - # https://docs.thinkboxsoftware.com - # /products/deadline/8.0/1_User%20Manual/manual - # /manual-submission.html#job-info-file-options - payload = { - "JobInfo": { - # Top-level group name - "BatchName": batch_name, + job_info.Pool = instance.data.get("primaryPool") + job_info.SecondaryPool = instance.data.get("secondaryPool") + job_info.ChunkSize = instance.data.get("chunkSize", 10) + job_info.Comment = context.data.get("comment") - # Job name, as seen in Monitor - "Name": jobname, - - # Arbitrary username, for visualisation in Monitor - "UserName": deadline_user, - - "Plugin": "Houdini", - "Pool": instance.data.get("primaryPool"), - "secondaryPool": instance.data.get("secondaryPool"), - "Frames": frames, - - "ChunkSize": instance.data.get("chunkSize", 10), - - "Comment": comment - }, - "PluginInfo": { - # Input - "SceneFile": filepath, - "OutputDriver": driver.path(), - - # Mandatory for Deadline - # Houdini version without patch number - "Version": hou.applicationVersionString().rsplit(".", 1)[0], - - "IgnoreInputs": True - }, - - # Mandatory for Deadline, may be empty - "AuxFiles": [] - } - - # Include critical environment variables with submission + api.Session keys = [ - # Submit along the current Avalon tool setup that we launched - # this application with so the Render Slave can build its own - # similar environment using it, e.g. "maya2018;vray4.x;yeti3.1.9" - "AVALON_TOOLS" + "FTRACK_API_KEY", + "FTRACK_API_USER", + "FTRACK_SERVER", + "OPENPYPE_SG_USER", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_TASK", + "AVALON_APP_NAME", + "OPENPYPE_DEV", + "OPENPYPE_LOG_NO_COLORS", + "OPENPYPE_VERSION" ] # Add OpenPype version if we are running from build. 
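As an aside to the hunk above: the refactored Houdini submitter keeps Deadline's `{start}-{end}x{step}` frame-range syntax for the `Frames` job-info field. A minimal, standalone sketch of that formatting (the helper name is illustrative and not part of this PR):

```python
def deadline_frames(frame_start, frame_end, by_frame_step=1):
    """Format a frame range for Deadline's ``Frames`` job-info field.

    Deadline expects integers; ``1001-1050x2`` means every second
    frame of the inclusive range 1001..1050.
    """
    return "{start}-{end}x{step}".format(
        start=int(frame_start),
        end=int(frame_end),
        step=int(by_frame_step),
    )
```

Instance data frequently stores frame values as floats, which is why the submitter casts them to `int` before formatting.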
@@ -114,61 +96,50 @@ class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin): keys.append("OPENPYPE_VERSION") # Add mongo url if it's enabled - if context.data.get("deadlinePassMongoUrl"): + if self._instance.context.data.get("deadlinePassMongoUrl"): keys.append("OPENPYPE_MONGO") environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) + for key in keys: + value = environment.get(key) + if value: + job_info.EnvironmentKeyValue[key] = value - payload["JobInfo"].update({ - "EnvironmentKeyValue%d" % index: "{key}={value}".format( - key=key, - value=environment[key] - ) for index, key in enumerate(environment) - }) + # to recognize job from PYPE for turning Event On/Off + job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" - # Include OutputFilename entries - # The first entry also enables double-click to preview rendered - # frames from Deadline Monitor - output_data = {} for i, filepath in enumerate(instance.data["files"]): dirname = os.path.dirname(filepath) fname = os.path.basename(filepath) - output_data["OutputDirectory%d" % i] = dirname.replace("\\", "/") - output_data["OutputFilename%d" % i] = fname + job_info.OutputDirectory += dirname.replace("\\", "/") + job_info.OutputFilename += fname - # For now ensure destination folder exists otherwise HUSK - # will fail to render the output image. 
This is supposedly fixed - # in new production builds of Houdini - # TODO Remove this workaround with Houdini 18.0.391+ - if not os.path.exists(dirname): - self.log.info("Ensuring output directory exists: %s" % - dirname) - os.makedirs(dirname) + return job_info - payload["JobInfo"].update(output_data) + def get_plugin_info(self): - self.submit(instance, payload) + instance = self._instance + context = instance.context - def submit(self, instance, payload): + # Output driver to render + driver = hou.node(instance.data["instance_node"]) + hou_major_minor = hou.applicationVersionString().rsplit(".", 1)[0] - AVALON_DEADLINE = legacy_io.Session.get("AVALON_DEADLINE", - "http://localhost:8082") - assert AVALON_DEADLINE, "Requires AVALON_DEADLINE" + plugin_info = DeadlinePluginInfo( + SceneFile=context.data["currentFile"], + OutputDriver=driver.path(), + Version=hou_major_minor, + IgnoreInputs=True + ) - plugin = payload["JobInfo"]["Plugin"] - self.log.info("Using Render Plugin : {}".format(plugin)) + return attr.asdict(plugin_info) - self.log.info("Submitting..") - self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) - - # E.g. 
http://192.168.0.1:8082/api/jobs - url = "{}/api/jobs".format(AVALON_DEADLINE) - response = requests.post(url, json=payload) - if not response.ok: - raise Exception(response.text) + def process(self, instance): + super(HoudiniSubmitDeadline, self).process(instance) + # TODO: Avoid the need for this logic here, needed for submit publish # Store output dir for unified publisher (filesequence) output_dir = os.path.dirname(instance.data["files"][0]) instance.data["outputDir"] = output_dir - instance.data["deadlineSubmissionJob"] = response.json() + instance.data["toBeRenderedOn"] = "deadline" diff --git a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py index c728b6b9c7..b6a30e36b7 100644 --- a/openpype/modules/deadline/plugins/publish/submit_max_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_max_deadline.py @@ -78,7 +78,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, job_info.BatchName = src_filename job_info.Plugin = instance.data["plugin"] job_info.UserName = context.data.get("deadlineUser", getpass.getuser()) - + job_info.EnableAutoTimeout = True # Deadline requires integers in frame range frames = "{start}-{end}".format( start=int(instance.data["frameStart"]), @@ -133,7 +133,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, # Add list of expected files to job # --------------------------------- exp = instance.data.get("expectedFiles") - for filepath in exp: + + for filepath in self._iter_expected_files(exp): job_info.OutputDirectory += os.path.dirname(filepath) job_info.OutputFilename += os.path.basename(filepath) @@ -162,10 +163,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, instance = self._instance filepath = self.scene_path - expected_files = instance.data["expectedFiles"] - if not expected_files: + files = instance.data["expectedFiles"] + if not files: raise 
RuntimeError("No Render Elements found!") - output_dir = os.path.dirname(expected_files[0]) + first_file = next(self._iter_expected_files(files)) + output_dir = os.path.dirname(first_file) instance.data["outputDir"] = output_dir instance.data["toBeRenderedOn"] = "deadline" @@ -196,25 +198,22 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, else: plugin_data["DisableMultipass"] = 1 - expected_files = instance.data.get("expectedFiles") - if not expected_files: + files = instance.data.get("expectedFiles") + if not files: raise RuntimeError("No render elements found") - old_output_dir = os.path.dirname(expected_files[0]) + first_file = next(self._iter_expected_files(files)) + old_output_dir = os.path.dirname(first_file) output_beauty = RenderSettings().get_render_output(instance.name, old_output_dir) - filepath = self.from_published_scene() - - def _clean_name(path): - return os.path.splitext(os.path.basename(path))[0] - - new_scene = _clean_name(filepath) - orig_scene = _clean_name(instance.context.data["currentFile"]) - - output_beauty = output_beauty.replace(orig_scene, new_scene) - output_beauty = output_beauty.replace("\\", "/") - plugin_data["RenderOutput"] = output_beauty - + rgb_bname = os.path.basename(output_beauty) + dir = os.path.dirname(first_file) + beauty_name = f"{dir}/{rgb_bname}" + beauty_name = beauty_name.replace("\\", "/") + plugin_data["RenderOutput"] = beauty_name + # as 3dsmax has versions with different languages + plugin_data["Language"] = "ENU" renderer_class = get_current_renderer() + renderer = str(renderer_class).split(":")[0] if renderer in [ "ART_Renderer", @@ -226,14 +225,37 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, ]: render_elem_list = RenderSettings().get_render_element() for i, element in enumerate(render_elem_list): - element = element.replace(orig_scene, new_scene) - plugin_data["RenderElementOutputFilename%d" % i] = element # noqa + elem_bname = os.path.basename(element) 
+ new_elem = f"{dir}/{elem_bname}" + new_elem = new_elem.replace("/", "\\") + plugin_data["RenderElementOutputFilename%d" % i] = new_elem # noqa + + if renderer == "Redshift_Renderer": + plugin_data["redshift_SeparateAovFiles"] = instance.data.get( + "separateAovFiles") self.log.debug("plugin data:{}".format(plugin_data)) plugin_info.update(plugin_data) return job_info, plugin_info + def from_published_scene(self, replace_in_path=True): + instance = self._instance + if instance.data["renderer"] == "Redshift_Renderer": + self.log.debug("Using Redshift... published scene won't be used.") + replace_in_path = False + return replace_in_path + + @staticmethod + def _iter_expected_files(exp): + if isinstance(exp[0], dict): + for _aov, files in exp[0].items(): + for file in files: + yield file + else: + for file in exp: + yield file + @classmethod def get_attribute_defs(cls): defs = super(MaxSubmitDeadline, cls).get_attribute_defs() diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 4c90dca583..96649db961 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -92,11 +92,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): deadline_plugin = "OpenPype" targets = ["local"] - hosts = ["fusion", "max", "maya", "nuke", + hosts = ["fusion", "max", "maya", "nuke", "houdini", "celaction", "aftereffects", "harmony"] families = ["render.farm", "prerender.farm", - "renderlayer", "imagesequence", "maxrender", "vrayscene"] + "renderlayer", "imagesequence", + "vrayscene", "maxrender", + "arnold_rop", "mantra_rop", + "karma_rop", "vray_rop", + "redshift_rop"] aov_filter = {"maya": [r".*([Bb]eauty).*"], "aftereffects": [r".*"], # for everything from AE @@ -114,7 +118,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "FTRACK_SERVER", "AVALON_APP_NAME", "OPENPYPE_USERNAME", 
- "OPENPYPE_SG_USER", + "OPENPYPE_VERSION", + "OPENPYPE_SG_USER" ] # Add OpenPype version if we are running from build. @@ -462,6 +467,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): deadline_publish_job_id = \ self._submit_deadline_post_job(instance, render_job, instances) + # Inject deadline url to instances. + for inst in instances: + inst["deadlineUrl"] = self.deadline_url + # publish job file publish_job = { "asset": instance_skeleton_data["asset"], diff --git a/openpype/modules/ftrack/lib/ftrack_action_handler.py b/openpype/modules/ftrack/lib/ftrack_action_handler.py index 07b3a780a2..1be4353b26 100644 --- a/openpype/modules/ftrack/lib/ftrack_action_handler.py +++ b/openpype/modules/ftrack/lib/ftrack_action_handler.py @@ -234,6 +234,10 @@ class BaseAction(BaseHandler): if not settings_roles: return default + user_roles = { + role_name.lower() + for role_name in user_roles + } for role_name in settings_roles: if role_name.lower() in user_roles: return True @@ -264,8 +268,15 @@ class BaseAction(BaseHandler): return user_entity @classmethod - def get_user_roles_from_event(cls, session, event): - """Query user entity from event.""" + def get_user_roles_from_event(cls, session, event, lower=True): + """Get user roles based on data in event. + + Args: + session (ftrack_api.Session): Prepared ftrack session. + event (ftrack_api.event.Event): Event which is processed. + lower (Optional[bool]): Lower the role names. Default 'True'. 
+ """ + not_set = object() user_roles = event["data"].get("user_roles", not_set) @@ -273,7 +284,10 @@ class BaseAction(BaseHandler): user_roles = [] user_entity = cls.get_user_entity_from_event(session, event) for role in user_entity["user_security_roles"]: - user_roles.append(role["security_role"]["name"].lower()) + role_name = role["security_role"]["name"] + if lower: + role_name = role_name.lower() + user_roles.append(role_name) event["data"]["user_roles"] = user_roles return user_roles @@ -322,7 +336,8 @@ class BaseAction(BaseHandler): if not settings.get(self.settings_enabled_key, True): return False - user_role_list = self.get_user_roles_from_event(session, event) + user_role_list = self.get_user_roles_from_event( + session, event, lower=False) if not self.roles_check(settings.get("role_list"), user_role_list): return False return True diff --git a/openpype/modules/ftrack/scripts/sub_event_status.py b/openpype/modules/ftrack/scripts/sub_event_status.py index dc5836e7f2..c6c2e9e1f6 100644 --- a/openpype/modules/ftrack/scripts/sub_event_status.py +++ b/openpype/modules/ftrack/scripts/sub_event_status.py @@ -296,9 +296,9 @@ def server_activity_validate_user(event): if not user_ent: return False - role_list = ["Pypeclub", "Administrator"] + role_list = {"pypeclub", "administrator"} for role in user_ent["user_security_roles"]: - if role["security_role"]["name"] in role_list: + if role["security_role"]["name"].lower() in role_list: return True return False diff --git a/openpype/modules/kitsu/kitsu_module.py b/openpype/modules/kitsu/kitsu_module.py index b91373af20..8d2d5ccd60 100644 --- a/openpype/modules/kitsu/kitsu_module.py +++ b/openpype/modules/kitsu/kitsu_module.py @@ -94,7 +94,7 @@ class KitsuModule(OpenPypeModule, IPluginPaths, ITrayAction): return { "publish": [os.path.join(current_dir, "plugins", "publish")], - "actions": [os.path.join(current_dir, "actions")] + "actions": [os.path.join(current_dir, "actions")], } def cli(self, click_group): @@ -128,15 
+128,35 @@ def push_to_zou(login, password): @click.option( "-p", "--password", envvar="KITSU_PWD", help="Password for kitsu username" ) -def sync_service(login, password): +@click.option( + "-prj", + "--project", + "projects", + multiple=True, + default=[], + help="Sync specific kitsu projects", +) +@click.option( + "-lo", + "--listen-only", + "listen_only", + is_flag=True, + default=False, + help="Listen to events only without any syncing", +) +def sync_service(login, password, projects, listen_only): """Synchronize openpype database from Zou server database. Args: login (str): Kitsu user login password (str): Kitsu user password + projects (tuple): specific kitsu projects + listen_only (bool): run listen only without any syncing """ from .utils.update_op_with_zou import sync_all_projects from .utils.sync_service import start_listeners - sync_all_projects(login, password) + if not listen_only: + sync_all_projects(login, password, filter_projects=projects) + start_listeners(login, password) diff --git a/openpype/modules/kitsu/utils/update_op_with_zou.py b/openpype/modules/kitsu/utils/update_op_with_zou.py index 4f4f0810bc..b495cd1bea 100644 --- a/openpype/modules/kitsu/utils/update_op_with_zou.py +++ b/openpype/modules/kitsu/utils/update_op_with_zou.py @@ -94,9 +94,7 @@ def update_op_assets( if not item_doc: # Create asset op_asset = create_op_asset(item) insert_result = dbcon.insert_one(op_asset) - item_doc = get_asset_by_id( - project_name, insert_result.inserted_id - ) + item_doc = get_asset_by_id(project_name, insert_result.inserted_id) # Update asset item_data = deepcopy(item_doc["data"]) @@ -329,7 +327,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne: "code": project_code, "fps": float(project["fps"]), "zou_id": project["id"], - "active": project['project_status_name'] != "Closed", + "active": project["project_status_name"] != "Closed", } ) @@ -359,7 +357,10 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> 
UpdateOne: def sync_all_projects( - login: str, password: str, ignore_projects: list = None + login: str, + password: str, + ignore_projects: list = None, + filter_projects: tuple = None, ): """Update all OP projects in DB with Zou data. @@ -367,6 +368,7 @@ def sync_all_projects( login (str): Kitsu user login password (str): Kitsu user password ignore_projects (list): List of unsynced project names + filter_projects (tuple): Tuple of filter project names to sync with Raises: gazu.exception.AuthFailedException: Wrong user login and/or password """ @@ -381,7 +383,24 @@ def sync_all_projects( dbcon = AvalonMongoDB() dbcon.install() all_projects = gazu.project.all_projects() - for project in all_projects: + + project_to_sync = [] + + if filter_projects: + all_kitsu_projects = {p["name"]: p for p in all_projects} + for proj_name in filter_projects: + if proj_name in all_kitsu_projects: + project_to_sync.append(all_kitsu_projects[proj_name]) + else: + log.info( + f"`{proj_name}` project does not exist in Kitsu." + f" Please make sure the project is spelled correctly." 
+ ) + else: + # all projects + project_to_sync = all_projects + + for project in project_to_sync: if ignore_projects and project["name"] in ignore_projects: continue sync_project_from_kitsu(dbcon, project) @@ -408,14 +427,13 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict): # Get all statuses for projects from Kitsu all_status = gazu.project.all_project_status() for status in all_status: - if project['project_status_id'] == status['id']: - project['project_status_name'] = status['name'] + if project["project_status_id"] == status["id"]: + project["project_status_name"] = status["name"] break # Do not sync closed kitsu project that is not found in openpype - if ( - project['project_status_name'] == "Closed" - and not get_project(project['name']) + if project["project_status_name"] == "Closed" and not get_project( + project["name"] ): return @@ -444,7 +462,7 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict): log.info("Project created: {}".format(project_name)) bulk_writes.append(write_project_to_op(project, dbcon)) - if project['project_status_name'] == "Closed": + if project["project_status_name"] == "Closed": return # Try to find project document diff --git a/openpype/plugins/publish/collect_frames_fix.py b/openpype/plugins/publish/collect_frames_fix.py index bdd49585a5..86e727b053 100644 --- a/openpype/plugins/publish/collect_frames_fix.py +++ b/openpype/plugins/publish/collect_frames_fix.py @@ -26,55 +26,72 @@ class CollectFramesFixDef( targets = ["local"] hosts = ["nuke"] families = ["render", "prerender"] - enabled = True + + rewrite_version_enable = False def process(self, instance): attribute_values = self.get_attr_values_from_data(instance.data) frames_to_fix = attribute_values.get("frames_to_fix") + rewrite_version = attribute_values.get("rewrite_version") - if frames_to_fix: - instance.data["frames_to_fix"] = frames_to_fix + if not frames_to_fix: + return - subset_name = instance.data["subset"] - asset_name = 
instance.data["asset"] + instance.data["frames_to_fix"] = frames_to_fix - project_entity = instance.data["projectEntity"] - project_name = project_entity["name"] + subset_name = instance.data["subset"] + asset_name = instance.data["asset"] - version = get_last_version_by_subset_name(project_name, - subset_name, - asset_name=asset_name) - if not version: - self.log.warning("No last version found, " - "re-render not possible") - return + project_entity = instance.data["projectEntity"] + project_name = project_entity["name"] - representations = get_representations(project_name, - version_ids=[version["_id"]]) - published_files = [] - for repre in representations: - if repre["context"]["family"] not in self.families: - continue + version = get_last_version_by_subset_name( + project_name, + subset_name, + asset_name=asset_name + ) + if not version: + self.log.warning( + "No last version found, re-render not possible" + ) + return - for file_info in repre.get("files"): - published_files.append(file_info["path"]) + representations = get_representations( + project_name, version_ids=[version["_id"]] + ) + published_files = [] + for repre in representations: + if repre["context"]["family"] not in self.families: + continue - instance.data["last_version_published_files"] = published_files - self.log.debug("last_version_published_files::{}".format( - instance.data["last_version_published_files"])) + for file_info in repre.get("files"): + published_files.append(file_info["path"]) - if rewrite_version: - instance.data["version"] = version["name"] - # limits triggering version validator - instance.data.pop("latestVersion") + instance.data["last_version_published_files"] = published_files + self.log.debug("last_version_published_files::{}".format( + instance.data["last_version_published_files"])) + + if self.rewrite_version_enable and rewrite_version: + instance.data["version"] = version["name"] + # limits triggering version validator + instance.data.pop("latestVersion") 
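The `frames_to_fix` attribute collected above is a comma-separated string of frames and ranges (the `5,10-15` placeholder, constrained by the `[0-9,-]+` regex). A hedged sketch of how such a string could be expanded into individual frame numbers — the parser below is illustrative and is not part of this plugin:

```python
def expand_frames(frames_to_fix):
    """Expand a "5,10-15" style token list into sorted frame numbers."""
    frames = set()
    for token in frames_to_fix.split(","):
        token = token.strip()
        if not token:
            continue
        if "-" in token:
            # Range token, inclusive on both ends.
            start, end = token.split("-", 1)
            frames.update(range(int(start), int(end) + 1))
        else:
            frames.add(int(token))
    return sorted(frames)
```

Using a set deduplicates overlapping tokens such as `"10,10-12"` before sorting.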
@classmethod def get_attribute_defs(cls): - return [ + attributes = [ TextDef("frames_to_fix", label="Frames to fix", placeholder="5,10-15", - regex="[0-9,-]+"), - BoolDef("rewrite_version", label="Rewrite latest version", - default=False), + regex="[0-9,-]+") ] + + if cls.rewrite_version_enable: + attributes.append( + BoolDef( + "rewrite_version", + label="Rewrite latest version", + default=False + ) + ) + + return attributes diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 75f335f1de..002e547feb 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -46,6 +46,10 @@ "enabled": false, "families": [] }, + "CollectFramesFixDef": { + "enabled": true, + "rewrite_version_enable": true + }, "ValidateEditorialAssetName": { "enabled": true, "optional": false @@ -252,7 +256,9 @@ } }, { - "families": ["review"], + "families": [ + "review" + ], "hosts": [ "maya", "houdini" diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index f01bdf7d50..3f8be4c872 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -222,6 +222,13 @@ "title": "OpenPype Docs", "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')", "tooltip": "Open the OpenPype Nuke user doc page" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set Frame Start (Read Node)", + "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();", + "tooltip": "Set frame start for read node(s)" } ] }, diff --git a/openpype/settings/defaults/project_settings/resolve.json b/openpype/settings/defaults/project_settings/resolve.json index 264f3bd902..56efa78e89 100644 --- a/openpype/settings/defaults/project_settings/resolve.json +++ 
b/openpype/settings/defaults/project_settings/resolve.json @@ -1,4 +1,5 @@ { + "launch_openpype_menu_on_start": false, "imageio": { "ocio_config": { "enabled": false, diff --git a/openpype/settings/defaults/project_settings/unreal.json b/openpype/settings/defaults/project_settings/unreal.json index 737a17d289..92bdb468ba 100644 --- a/openpype/settings/defaults/project_settings/unreal.json +++ b/openpype/settings/defaults/project_settings/unreal.json @@ -15,6 +15,6 @@ "preroll_frames": 0, "render_format": "png", "project_setup": { - "dev_mode": true + "dev_mode": false } } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json b/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json index b326f22394..6f98bdd3bd 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_resolve.json @@ -5,6 +5,11 @@ "label": "DaVinci Resolve", "is_file": true, "children": [ + { + "type": "boolean", + "key": "launch_openpype_menu_on_start", + "label": "Launch OpenPype menu on start of Resolve" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index a7617918a3..3164cfb62d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -81,6 +81,26 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "CollectFramesFixDef", + "label": "Collect Frames to Fix", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "rewrite_version_enable", + "label": "Show 'Rewrite latest version' toggle" + } + ] + 
}, { "type": "dict", "collapsible": true, diff --git a/openpype/tools/publisher/widgets/border_label_widget.py b/openpype/tools/publisher/widgets/border_label_widget.py index 5617e159cd..e5693368b1 100644 --- a/openpype/tools/publisher/widgets/border_label_widget.py +++ b/openpype/tools/publisher/widgets/border_label_widget.py @@ -14,32 +14,44 @@ class _VLineWidget(QtWidgets.QWidget): It is expected that parent widget will set width. """ - def __init__(self, color, left, parent): + def __init__(self, color, line_size, left, parent): super(_VLineWidget, self).__init__(parent) self._color = color self._left = left + self._line_size = line_size + + def set_line_size(self, line_size): + self._line_size = line_size def paintEvent(self, event): if not self.isVisible(): return - if self._left: - pos_x = 0 - else: - pos_x = self.width() + pos_x = self._line_size * 0.5 + if not self._left: + pos_x = self.width() - pos_x + painter = QtGui.QPainter(self) painter.setRenderHints( QtGui.QPainter.Antialiasing | QtGui.QPainter.SmoothPixmapTransform ) + if self._color: pen = QtGui.QPen(self._color) else: pen = painter.pen() - pen.setWidth(1) + pen.setWidth(self._line_size) painter.setPen(pen) painter.setBrush(QtCore.Qt.transparent) - painter.drawLine(pos_x, 0, pos_x, self.height()) + painter.drawRect( + QtCore.QRectF( + pos_x, + -self._line_size, + pos_x + (self.width() * 2), + self.height() + (self._line_size * 2) + ) + ) painter.end() @@ -56,34 +68,46 @@ class _HBottomLineWidget(QtWidgets.QWidget): It is expected that parent widget will set height and radius. 
""" - def __init__(self, color, parent): + def __init__(self, color, line_size, parent): super(_HBottomLineWidget, self).__init__(parent) self._color = color self._radius = 0 + self._line_size = line_size def set_radius(self, radius): self._radius = radius + def set_line_size(self, line_size): + self._line_size = line_size + def paintEvent(self, event): if not self.isVisible(): return - rect = QtCore.QRect( - 0, -self._radius, self.width(), self.height() + self._radius + x_offset = self._line_size * 0.5 + rect = QtCore.QRectF( + x_offset, + -self._radius, + self.width() - (2 * x_offset), + (self.height() + self._radius) - x_offset ) painter = QtGui.QPainter(self) painter.setRenderHints( QtGui.QPainter.Antialiasing | QtGui.QPainter.SmoothPixmapTransform ) + if self._color: pen = QtGui.QPen(self._color) else: pen = painter.pen() - pen.setWidth(1) + pen.setWidth(self._line_size) painter.setPen(pen) painter.setBrush(QtCore.Qt.transparent) - painter.drawRoundedRect(rect, self._radius, self._radius) + if self._radius: + painter.drawRoundedRect(rect, self._radius, self._radius) + else: + painter.drawRect(rect) painter.end() @@ -102,30 +126,38 @@ class _HTopCornerLineWidget(QtWidgets.QWidget): It is expected that parent widget will set height and radius. 
""" - def __init__(self, color, left_side, parent): + + def __init__(self, color, line_size, left_side, parent): super(_HTopCornerLineWidget, self).__init__(parent) self._left_side = left_side + self._line_size = line_size self._color = color self._radius = 0 def set_radius(self, radius): self._radius = radius + def set_line_size(self, line_size): + self._line_size = line_size + def paintEvent(self, event): if not self.isVisible(): return - pos_y = self.height() / 2 - + pos_y = self.height() * 0.5 + x_offset = self._line_size * 0.5 if self._left_side: - rect = QtCore.QRect( - 0, pos_y, self.width() + self._radius, self.height() + rect = QtCore.QRectF( + x_offset, + pos_y, + self.width() + self._radius + x_offset, + self.height() ) else: - rect = QtCore.QRect( - -self._radius, + rect = QtCore.QRectF( + (-self._radius), pos_y, - self.width() + self._radius, + (self.width() + self._radius) - x_offset, self.height() ) @@ -138,10 +170,13 @@ class _HTopCornerLineWidget(QtWidgets.QWidget): pen = QtGui.QPen(self._color) else: pen = painter.pen() - pen.setWidth(1) + pen.setWidth(self._line_size) painter.setPen(pen) painter.setBrush(QtCore.Qt.transparent) - painter.drawRoundedRect(rect, self._radius, self._radius) + if self._radius: + painter.drawRoundedRect(rect, self._radius, self._radius) + else: + painter.drawRect(rect) painter.end() @@ -163,8 +198,10 @@ class BorderedLabelWidget(QtWidgets.QFrame): if color_value: color = color_value.get_qcolor() - top_left_w = _HTopCornerLineWidget(color, True, self) - top_right_w = _HTopCornerLineWidget(color, False, self) + line_size = 1 + + top_left_w = _HTopCornerLineWidget(color, line_size, True, self) + top_right_w = _HTopCornerLineWidget(color, line_size, False, self) label_widget = QtWidgets.QLabel(label, self) @@ -175,10 +212,10 @@ class BorderedLabelWidget(QtWidgets.QFrame): top_layout.addWidget(label_widget, 0) top_layout.addWidget(top_right_w, 1) - left_w = _VLineWidget(color, True, self) - right_w = _VLineWidget(color, 
False, self) + left_w = _VLineWidget(color, line_size, True, self) + right_w = _VLineWidget(color, line_size, False, self) - bottom_w = _HBottomLineWidget(color, self) + bottom_w = _HBottomLineWidget(color, line_size, self) center_layout = QtWidgets.QHBoxLayout() center_layout.setContentsMargins(5, 5, 5, 5) @@ -201,6 +238,7 @@ class BorderedLabelWidget(QtWidgets.QFrame): self._widget = None self._radius = 0 + self._line_size = line_size self._top_left_w = top_left_w self._top_right_w = top_right_w @@ -216,14 +254,38 @@ class BorderedLabelWidget(QtWidgets.QFrame): value, value, value, value ) + def set_line_size(self, line_size): + if self._line_size == line_size: + return + self._line_size = line_size + for widget in ( + self._top_left_w, + self._top_right_w, + self._left_w, + self._right_w, + self._bottom_w + ): + widget.set_line_size(line_size) + self._recalculate_sizes() + def showEvent(self, event): super(BorderedLabelWidget, self).showEvent(event) + self._recalculate_sizes() + def _recalculate_sizes(self): height = self._label_widget.height() - radius = (height + (height % 2)) / 2 + radius = int((height + (height % 2)) / 2) self._radius = radius - side_width = 1 + radius + radius_size = self._line_size + 1 + if radius_size < radius: + radius_size = radius + + if radius: + side_width = self._line_size + radius + else: + side_width = self._line_size + 1 + # Don't use fixed width/height as that would set also set # the other size (When fixed width is set then is also set # fixed height). 
@@ -231,8 +293,8 @@ class BorderedLabelWidget(QtWidgets.QFrame): self._left_w.setMaximumWidth(side_width) self._right_w.setMinimumWidth(side_width) self._right_w.setMaximumWidth(side_width) - self._bottom_w.setMinimumHeight(radius) - self._bottom_w.setMaximumHeight(radius) + self._bottom_w.setMinimumHeight(radius_size) + self._bottom_w.setMaximumHeight(radius_size) self._bottom_w.set_radius(radius) self._top_right_w.set_radius(radius) self._top_left_w.set_radius(radius) diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 6ab444109e..006098cb37 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -66,8 +66,7 @@ class PublisherWindow(QtWidgets.QDialog): on_top_flag = QtCore.Qt.Dialog self.setWindowFlags( - self.windowFlags() - | QtCore.Qt.WindowTitleHint + QtCore.Qt.WindowTitleHint | QtCore.Qt.WindowMaximizeButtonHint | QtCore.Qt.WindowMinimizeButtonHint | QtCore.Qt.WindowCloseButtonHint diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py index 2f3b5251f9..fdc0a8094d 100644 --- a/openpype/tools/tray/pype_tray.py +++ b/openpype/tools/tray/pype_tray.py @@ -633,10 +633,10 @@ class TrayManager: # Create a copy of sys.argv additional_args = list(sys.argv) - # Check last argument from `get_openpype_execute_args` - # - when running from code it is the same as first from sys.argv - if args[-1] == additional_args[0]: - additional_args.pop(0) + # Remove first argument from 'sys.argv' + # - when running from code the first argument is 'start.py' + # - when running from build the first argument is executable + additional_args.pop(0) cleanup_additional_args = False if use_expected_version: @@ -663,7 +663,6 @@ class TrayManager: additional_args = _additional_args args.extend(additional_args) - run_detached_process(args, env=envs) self.exit() diff --git a/openpype/version.py b/openpype/version.py index c24388b2ff..b55ca42244 100644 --- a/openpype/version.py +++ 
b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.9-nightly.1" +__version__ = "3.15.10-nightly.1" diff --git a/pyproject.toml b/pyproject.toml index 50b39e2a30..91b1827b3c 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.15.8" # OpenPype +version = "3.15.9" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/website/docs/artist_hosts_3dsmax.md b/website/docs/artist_hosts_3dsmax.md index 12c1f40181..fffab8ca5d 100644 --- a/website/docs/artist_hosts_3dsmax.md +++ b/website/docs/artist_hosts_3dsmax.md @@ -30,7 +30,7 @@ By clicking the icon ```OpenPype Menu``` rolls out. Choose ```OpenPype Menu > Launcher``` to open the ```Launcher``` window. -When opened you can **choose** the **project** to work in from the list. Then choose the particular **asset** you want to work on then choose **task** +When opened you can **choose** the **project** to work in from the list. Then choose the particular **asset** you want to work on then choose **task** and finally **run 3dsmax by its icon** in the tools. ![Menu OpenPype](assets/3dsmax_tray_OP.png) @@ -65,13 +65,13 @@ If not any workfile present simply hit ```Save As``` and keep ```Subversion``` e ![Save As Dialog](assets/3dsmax_SavingFirstFile_OP.png) -OpenPype correctly names it and add version to the workfile. This basically happens whenever user trigger ```Save As``` action. Resulting into incremental version numbers like +OpenPype correctly names it and add version to the workfile. This basically happens whenever user trigger ```Save As``` action. Resulting into incremental version numbers like ```workfileName_v001``` ```workfileName_v002``` - etc. + etc. Basically meaning user is free of guessing what is the correct naming and other necessities to keep everything in order and managed. 
@@ -105,13 +105,13 @@ Before proceeding further please check [Glossary](artist_concepts.md) and [What ### Intro -Current OpenPype integration (ver 3.15.0) supports only ```PointCache``` and ```Camera``` families now. +Current OpenPype integration (ver 3.15.0) supports only ```PointCache```, ```Camera```, ```Geometry``` and ```Redshift Proxy``` families now. **Pointcache** family being basically any geometry outputted as Alembic cache (.abc) format **Camera** family being 3dsmax Camera object with/without animation outputted as native .max, FBX, Alembic format - +**Redshift Proxy** family being a Redshift Proxy object with/without animation outputted as rs format (Redshift Proxy's own format) --- :::note Work in progress This part of documentation is still work in progress. ::: ## ...to be added - - - - diff --git a/website/docs/artist_hosts_houdini.md b/website/docs/artist_hosts_houdini.md index 8874a0b5cf..0471765365 100644 --- a/website/docs/artist_hosts_houdini.md +++ b/website/docs/artist_hosts_houdini.md @@ -14,7 +14,7 @@ sidebar_label: Houdini - [Library Loader](artist_tools_library-loader) ## Publishing Alembic Cameras -You can publish baked camera in Alembic format. +You can publish baked camera in Alembic format. Select your camera and go **OpenPype -> Create** and select **Camera (abc)**. This will create Alembic ROP in **out** with path and frame range already set. This node will have a name you've
When you hit **Publish** it will render image sequence from selected node. @@ -56,14 +56,14 @@ Now select the `output0` node and go **OpenPype -> Create** and select **Point C Alembic ROP `/out/pointcacheStrange` ## Publishing Reviews (OpenGL) -To generate a review output from Houdini you need to create a **review** instance. +To generate a review output from Houdini you need to create a **review** instance. Go to **OpenPype -> Create** and select **Review**. ![Houdini Create Review](assets/houdini_review_create_attrs.png) -On create, with the **Use Selection** checkbox enabled it will set up the first -camera found in your selection as the camera for the OpenGL ROP node and other -non-cameras are set in **Force Objects**. It will then render those even if +On create, with the **Use Selection** checkbox enabled it will set up the first +camera found in your selection as the camera for the OpenGL ROP node and other -non-cameras are set in **Force Objects**. It will then render those even if their display flag is disabled in your scene. ## Redshift @@ -71,6 +71,18 @@ their display flag is disabled in your scene. This part of documentation is still work in progress. ::: +## Publishing Render to Deadline +Five renderers (Arnold, Redshift, Mantra, Karma, VRay) are supported for render publishing. +Their ROP nodes are named with the suffix `_ROP`. +To submit a render to Deadline, you need to create a **Render** instance. +Go to **OpenPype -> Create** and select **Publish**. Before clicking the **Create** button, +you need to select your preferred image rendering format. You can also enable **Use selection** to +set your render camera. +![Houdini Create Render](assets/houdini_render_publish_creator.png) + +All the render outputs are stored in the `pyblish/render` directory within your project path.\ +For Karma renders, a USD render is also output by default. + ## USD (experimental support) ### Publishing USD You can publish your Solaris Stage as USD file.
diff --git a/website/docs/assets/houdini_render_publish_creator.png b/website/docs/assets/houdini_render_publish_creator.png new file mode 100644 index 0000000000..5dd73d296a Binary files /dev/null and b/website/docs/assets/houdini_render_publish_creator.png differ diff --git a/website/docs/dev_blender.md b/website/docs/dev_blender.md new file mode 100644 index 0000000000..bed0e4a09d --- /dev/null +++ b/website/docs/dev_blender.md @@ -0,0 +1,61 @@ +--- +id: dev_blender +title: Blender integration +sidebar_label: Blender integration +toc_max_heading_level: 4 +--- + +## Run python script at launch +In case you need to execute a python script when Blender is started (aka [`-P`](https://docs.blender.org/manual/en/latest/advanced/command_line/arguments.html#python-options)), for example to programmatically modify a blender file for conformation, you can create an OpenPype hook as follows: + +```python +from openpype.hosts.blender.hooks import pre_add_run_python_script_arg +from openpype.lib import PreLaunchHook + + +class MyHook(PreLaunchHook): +    """Add python script to be executed before Blender launch.""" + +    order = pre_add_run_python_script_arg.AddPythonScriptToLaunchArgs.order - 1 +    app_groups = [ +        "blender", +    ] + +    def execute(self): +        self.launch_context.data.setdefault("python_scripts", []).append( +            "/path/to/my_script.py" +        ) +```
+ +You can write a bare python script, as you would run it in the [Text Editor](https://docs.blender.org/manual/en/latest/editors/text_editor.html). + +### Python script with arguments +#### Adding arguments +In case you need to pass arguments to your script, you can add them to `self.launch_context.data["script_args"]` (note `list.extend`, since a list's `append` takes a single item): + +```python +self.launch_context.data.setdefault("script_args", []).extend([ +    "--my-arg", +    "value", +]) +``` + +#### Parsing arguments +You can parse arguments in your script using [argparse](https://docs.python.org/3/library/argparse.html) as follows: + +```python +import argparse +import sys + +parser = argparse.ArgumentParser( +    description="Parsing arguments for my_script.py" +) +parser.add_argument( +    "--my-arg", +    nargs="?", +    help="My argument", +) +args, unknown = parser.parse_known_args( +    sys.argv[sys.argv.index("--") + 1 :] +) +print(args.my_arg) +``` diff --git a/website/docs/module_kitsu.md b/website/docs/module_kitsu.md index d79c78fecf..9695542723 100644 --- a/website/docs/module_kitsu.md +++ b/website/docs/module_kitsu.md @@ -18,9 +18,20 @@ This setting is available for all the users of the OpenPype instance. ## Synchronize Updating OP with Kitsu data is executed running the `sync-service`, which requires to provide your Kitsu credentials with `-l, --login` and `-p, --password` or by setting the environment variables `KITSU_LOGIN` and `KITSU_PWD`. This process will request data from Kitsu and create/delete/update OP assets. Once this sync is done, the thread will automatically start a loop to listen to Kitsu events. +- `-prj, --project` Accepts one or more project names to sync only those projects; by default all projects are synced. +- `-lo, --listen-only` Only listens to Kitsu events, without running any sync. + +Note: Use either `-prj` or `-lo`, not both, because the listen-only flag overrides syncing.
```bash +# sync all projects then run listen openpype_console module kitsu sync-service -l me@domain.ext -p my_password + +# sync specific projects then run listen +openpype_console module kitsu sync-service -l me@domain.ext -p my_password -prj project_name01 -prj project_name02 + +# start listen only for all projects +openpype_console module kitsu sync-service -l me@domain.ext -p my_password -lo ``` ### Events listening diff --git a/website/docs/project_settings/assets/global_extract_review_letter_box_settings.png b/website/docs/project_settings/assets/global_extract_review_letter_box_settings.png index 80e00702e6..76dd9b372a 100644 Binary files a/website/docs/project_settings/assets/global_extract_review_letter_box_settings.png and b/website/docs/project_settings/assets/global_extract_review_letter_box_settings.png differ diff --git a/website/docs/project_settings/settings_project_global.md b/website/docs/project_settings/settings_project_global.md index c17f707830..5ddf247d98 100644 --- a/website/docs/project_settings/settings_project_global.md +++ b/website/docs/project_settings/settings_project_global.md @@ -63,7 +63,7 @@ Example here describes use case for creation of new color coded review of png im ![global_oiio_transcode](assets/global_oiio_transcode.png) Another use case is to transcode in Maya only `beauty` render layers and use collected `Display` and `View` colorspaces from DCC. -![global_oiio_transcode_in_Maya](assets/global_oiio_transcode.png)n +![global_oiio_transcode_in_Maya](assets/global_oiio_transcode2.png) ## Profile filters @@ -170,12 +170,10 @@ A profile may generate multiple outputs from a single input. Each output must de - **`Letter Box`** - **Enabled** - Enable letter boxes - - **Ratio** - Ratio of letter boxes - - **Type** - **Letterbox** (horizontal bars) or **Pillarbox** (vertical bars) + - **Ratio** - Ratio of letter boxes. Ratio type is calculated from output image dimensions.
If letterbox ratio > image ratio, _letterbox_ is applied. Otherwise _pillarbox_ will be rendered. - **Fill color** - Fill color of boxes (RGBA: 0-255) - **Line Thickness** - Line thickness on the edge of box (set to `0` to turn off) - - **Fill color** - Line color on the edge of box (RGBA: 0-255) - - **Example** + - **Line color** - Line color on the edge of box (RGBA: 0-255) ![global_extract_review_letter_box_settings](assets/global_extract_review_letter_box_settings.png) ![global_extract_review_letter_box](assets/global_extract_review_letter_box.png) diff --git a/website/sidebars.js b/website/sidebars.js index c846b04ca7..b885181fb6 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -181,6 +181,7 @@ module.exports = { ] }, "dev_deadline", + "dev_blender", "dev_colorspace" ] };