diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index be0a6e1299..7d6c5650d1 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,14 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.18.3-nightly.2 + - 3.18.3-nightly.1 + - 3.18.2 + - 3.18.2-nightly.6 + - 3.18.2-nightly.5 + - 3.18.2-nightly.4 + - 3.18.2-nightly.3 + - 3.18.2-nightly.2 - 3.18.2-nightly.1 - 3.18.1 - 3.18.1-nightly.1 @@ -127,14 +135,6 @@ body: - 3.15.6 - 3.15.6-nightly.3 - 3.15.6-nightly.2 - - 3.15.6-nightly.1 - - 3.15.5 - - 3.15.5-nightly.2 - - 3.15.5-nightly.1 - - 3.15.4 - - 3.15.4-nightly.3 - - 3.15.4-nightly.2 - - 3.15.4-nightly.1 validations: required: true - type: dropdown diff --git a/CHANGELOG.md b/CHANGELOG.md index f309d904eb..4a21882008 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,323 @@ # Changelog +## [3.18.2](https://github.com/ynput/OpenPype/tree/3.18.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.18.1...3.18.2) + +### **🚀 Enhancements** + + +
+Testing: Release Maya/Deadline job from pending when testing. #5988 + +When testing, we won't put the Deadline jobs into pending with dependencies, so the worker can start as soon as possible. + + +___ + +
+ + +
+Max: Tweaks on Extractions for the exporters #5814 + +This PR: +- Introduces Suspend Refresh in the abc & obj extractors for optimization. +- Allows users to choose which custom attributes are included in abc exports. + + +___ + +
+ + +
+Maya: Optional preserve references. #5994 + +Optionally preserve references when publishing Maya scenes. + + +___ + +
+ + +
+AYON ftrack: Expect 'ayon' group in custom attributes #6066 + +Expect the `ayon` group as one of the options when getting custom attributes. + + +___ + +
+ + +
+AYON Chore: Remove dependencies related to separated addons #6074 + +Removed dependencies from openpype client pyproject.toml that are already defined by addons which require them. + + +___ + +
+ + +
+Editorial & chore: Stop using pathlib2 #6075 + +Do not use `pathlib2`, which is a Python 2 backport of the Python 3 `pathlib` module. + + +___ + +
+ + +
+Traypublisher: Correct validator label #6084 + +Use correct label for Validate filepaths. + + +___ + +
+ + +
+Nuke: Extract Review Intermediate disabled when both Extract Review Mov and Extract Review Intermediate are disabled in settings #6089 + +Reported in Discord: https://discord.com/channels/517362899170230292/563751989075378201/1187874498234556477 + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Bug fix the file from texture node not being collected correctly in Yeti Rig #5990 + +Fix a bug where Collect Yeti Rig could not get the file parameter(s) from the texture node(s), which caused publishing the textures to the resource directory to fail. + + +___ + +
+ + +
+Bug: fix AYON settings for Maya workspace #6069 + +This fixes a bug in the default AYON settings for the Maya workspace, where a missing semicolon caused the workspace not to be set. It also syncs the default workspace settings to OpenPype. + + +___ + +
+ + +
+Refactor colorspace handling in CollectColorspace plugin #6033 + +Traypublisher can now set available colorspaces or roles when publishing image sequences or videos. This fixes the new implementation that allowed roles to be used in the enumerator selector. + + +___ + +
+ + +
+Bugfix: Houdini render split bugs #6037 + +This PR is a follow-up to https://github.com/ynput/OpenPype/pull/5420. This PR: +- refactors `get_output_parameter` back to what it used to be. +- fixes a bug with split renders. +- renames the `exportJob` flag to `split_render`. + + +___ + +
+ + +
+Fusion: fix for single frame rendering #6056 + +Fixes publishes of single frame of `render` product type. + + +___ + +
+ + +
+Photoshop: fix layer publish thumbnail missing in loader #6061 + +Thumbnails from any products (either `review` or separate layer instances) weren't stored in AYON. This resulted in them not showing in the Loader and the Server UI. After this PR, thumbnails should be shown in the Loader and on the Server (`http://YOUR_AYON_HOSTNAME:5000/projects/YOUR_PROJECT/browser`). + + +___ + +
+ + +
+AYON Chore: Do not use thumbnailSource for thumbnail integration #6063 + +Do not use `thumbnailSource` for thumbnail integration. + + +___ + +
+ + +
+Photoshop: fix creation of .mov #6064 + +Generation of .mov file with 1 frame per published layer was failing. + + +___ + +
+ + +
+Photoshop: fix Collect Color Coded settings #6065 + +Fix a wrong default value in the `Collect Color Coded Instances` settings. + + +___ + +
+ + +
+Bug: Fix Publisher parent window in Nuke #6067 + +Fixes an issue where the publisher's parent window wasn't set because of incorrect use of a version constant. + + +___ + +
+ + +
+Python console widget: Save registry fix #6076 + +Do not save registry until there is something to save. + + +___ + +
+ + +
+Ftrack: update asset names for multiple reviewable items #6077 + +Multiple reviewable assetVersion components now get better grouping under the asset version name. + + +___ + +
+ + +
+Ftrack: DJV action fixes #6098 + +Fix bugs in DJV ftrack action. + + +___ + +
+ + +
+AYON Workfiles tool: Fix arrow to timezone typo #6099 + +Fix parenthesis typo with arrow local timezone function. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Chore: Update folder-favorite icon to ayon icon #5718 + +Updates the old "Pype 2.0-era" icon (from ancient Greece times) to its AYON logo equivalent. I believe it's only used in Nuke. + + +___ + +
+ +### **Merged pull requests** + + +
+Chore: Maya / Nuke remove publish gui filters from settings #5570 + +- Remove Publish GUI Filters from Nuke settings +- Remove Publish GUI Filters from Maya settings + + +___ + +
+ + +
+Fusion: Project/User option for output format (create_saver) #6045 + +Adds an "Output Image Format" option which can be set via project settings and overridden by users in the "Create" menu. This replaces the current behaviour of being hardcoded to "exr", removing the need for people to manually edit the saver path if they require a different extension. + + +___ + +
+ + +
+Fusion: Output Image Format Updating Instances (create_saver) #6060 + +Adds the ability to update the Saver image output format if changed in the Publish UI. ~~Adds an optional validator that compares "Output Image Format" in the Publish menu against the one currently found on the saver. It then offers a repair action to update the output extension on the saver.~~ + + +___ + +
+ + +
+Tests: Fix representation count for AE legacy test #6072 + + +___ + +
+ + + + ## [3.18.1](https://github.com/ynput/OpenPype/tree/3.18.1) diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index 6e71b41541..5870828b41 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -120,8 +120,15 @@ class CreateSaver(NewCreator): return original_subset = tool.GetData("openpype.subset") + original_format = tool.GetData( + "openpype.creator_attributes.image_format" + ) + subset = data["subset"] - if original_subset != subset: + if ( + original_subset != subset + or original_format != data["creator_attributes"]["image_format"] + ): self._configure_saver_tool(data, tool, subset) def _configure_saver_tool(self, data, tool, subset): diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index 614052431f..edd50f10c1 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -121,62 +121,6 @@ def get_id_required_nodes(): return list(nodes) -def get_export_parameter(node): - """Return the export output parameter of the given node - - Example: - root = hou.node("/obj") - my_alembic_node = root.createNode("alembic") - get_output_parameter(my_alembic_node) - # Result: "output" - - Args: - node(hou.Node): node instance - - Returns: - hou.Parm - - """ - node_type = node.type().description() - - # Ensures the proper Take is selected for each ROP to retrieve the correct - # ifd - try: - rop_take = hou.takes.findTake(node.parm("take").eval()) - if rop_take is not None: - hou.takes.setCurrentTake(rop_take) - except AttributeError: - # hou object doesn't always have the 'takes' attribute - pass - - if node_type == "Mantra" and node.parm("soho_outputmode").eval(): - return node.parm("soho_diskfile") - elif node_type == "Alfred": - return node.parm("alf_diskfile") - elif (node_type == "RenderMan" or node_type == "RenderMan RIS"): - pre_ris22 = 
node.parm("rib_outputmode") and \ - node.parm("rib_outputmode").eval() - ris22 = node.parm("diskfile") and node.parm("diskfile").eval() - if pre_ris22 or ris22: - return node.parm("soho_diskfile") - elif node_type == "Redshift" and node.parm("RS_archive_enable").eval(): - return node.parm("RS_archive_file") - elif node_type == "Wedge" and node.parm("driver").eval(): - return get_export_parameter(node.node(node.parm("driver").eval())) - elif node_type == "Arnold": - return node.parm("ar_ass_file") - elif node_type == "Alembic" and node.parm("use_sop_path").eval(): - return node.parm("sop_path") - elif node_type == "Shotgun Mantra" and node.parm("soho_outputmode").eval(): - return node.parm("sgtk_soho_diskfile") - elif node_type == "Shotgun Alembic" and node.parm("use_sop_path").eval(): - return node.parm("sop_path") - elif node.type().nameWithCategory() == "Driver/vray_renderer": - return node.parm("render_export_filepath") - - raise TypeError("Node type '%s' not supported" % node_type) - - def get_output_parameter(node): """Return the render output parameter of the given node @@ -184,41 +128,59 @@ def get_output_parameter(node): root = hou.node("/obj") my_alembic_node = root.createNode("alembic") get_output_parameter(my_alembic_node) - # Result: "output" + >>> "filename" + + Notes: + I'm using node.type().name() to get on par with the creators, + Because the return value of `node.type().name()` is the + same string value used in creators + e.g. instance_data.update({"node_type": "alembic"}) + + Rop nodes in different network categories have + the same output parameter. + So, I took that into consideration as a hint for + future development. 
Args: node(hou.Node): node instance Returns: hou.Parm - """ - node_type = node.type().description() - category = node.type().category().name() + + node_type = node.type().name() # Figure out which type of node is being rendered - if node_type == "Geometry" or node_type == "Filmbox FBX" or \ - (node_type == "ROP Output Driver" and category == "Sop"): - return node.parm("sopoutput") - elif node_type == "Composite": - return node.parm("copoutput") - elif node_type == "opengl": - return node.parm("picture") + if node_type in {"alembic", "rop_alembic"}: + return node.parm("filename") elif node_type == "arnold": - if node.evalParm("ar_ass_export_enable"): + if node_type.evalParm("ar_ass_export_enable"): return node.parm("ar_ass_file") - elif node_type == "Redshift_Proxy_Output": - return node.parm("RS_archive_file") - elif node_type == "ifd": + return node.parm("ar_picture") + elif node_type in { + "geometry", + "rop_geometry", + "filmboxfbx", + "rop_fbx" + }: + return node.parm("sopoutput") + elif node_type == "comp": + return node.parm("copoutput") + elif node_type in {"karma", "opengl"}: + return node.parm("picture") + elif node_type == "ifd": # Mantra if node.evalParm("soho_outputmode"): return node.parm("soho_diskfile") - elif node_type == "Octane": - return node.parm("HO_img_fileName") - elif node_type == "Fetch": - inner_node = node.node(node.parm("source").eval()) - if inner_node: - return get_output_parameter(inner_node) - elif node.type().nameWithCategory() == "Driver/vray_renderer": + return node.parm("vm_picture") + elif node_type == "Redshift_Proxy_Output": + return node.parm("RS_archive_file") + elif node_type == "Redshift_ROP": + return node.parm("RS_outputFileNamePrefix") + elif node_type in {"usd", "usd_rop", "usdexport"}: + return node.parm("lopoutput") + elif node_type in {"usdrender", "usdrender_rop"}: + return node.parm("outputimage") + elif node_type == "vray_renderer": return node.parm("SettingsOutput_img_file_path") raise TypeError("Node type '%s' 
not supported" % node_type) diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py index 1b8826a932..9d1c7bc90d 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py @@ -15,6 +15,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator): icon = "magic" ext = "exr" + # Default to split export and render jobs + split_render = True + def create(self, subset_name, instance_data, pre_create_data): instance_data.pop("active", None) @@ -36,12 +39,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator): # Also create the linked Redshift IPR Rop try: ipr_rop = instance_node.parent().createNode( - "Redshift_IPR", node_name=basename + "_IPR" + "Redshift_IPR", node_name=f"{basename}_IPR" ) - except hou.OperationFailed: + except hou.OperationFailed as e: raise plugin.OpenPypeCreatorError( - ("Cannot create Redshift node. Is Redshift " - "installed and enabled?")) + ( + "Cannot create Redshift node. Is Redshift " + "installed and enabled?" 
+ ) + ) from e # Move it to directly under the Redshift ROP ipr_rop.setPosition(instance_node.position() + hou.Vector2(0, -1)) @@ -74,8 +80,15 @@ class CreateRedshiftROP(plugin.HoudiniCreator): for node in self.selected_nodes: if node.type().name() == "cam": camera = node.path() - parms.update({ - "RS_renderCamera": camera or ""}) + parms["RS_renderCamera"] = camera or "" + + export_dir = hou.text.expandString("$HIP/pyblish/rs/") + rs_filepath = f"{export_dir}{subset_name}/{subset_name}.$F4.rs" + parms["RS_archive_file"] = rs_filepath + + if pre_create_data.get("split_render", self.split_render): + parms["RS_archive_enable"] = 1 + instance_node.setParms(parms) # Lock some Avalon attributes @@ -102,6 +115,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator): BoolDef("farm", label="Submitting to Farm", default=True), + BoolDef("split_render", + label="Split export and render jobs", + default=self.split_render), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/load/load_redshift_proxy.py b/openpype/hosts/houdini/plugins/load/load_redshift_proxy.py new file mode 100644 index 0000000000..efd7c6d0ca --- /dev/null +++ b/openpype/hosts/houdini/plugins/load/load_redshift_proxy.py @@ -0,0 +1,112 @@ +import os +import re +from openpype.pipeline import ( + load, + get_representation_path, +) +from openpype.hosts.houdini.api import pipeline +from openpype.pipeline.load import LoadError + +import hou + + +class RedshiftProxyLoader(load.LoaderPlugin): + """Load Redshift Proxy""" + + families = ["redshiftproxy"] + label = "Load Redshift Proxy" + representations = ["rs"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + + # Get the root node + obj = hou.node("/obj") + + # Define node name + namespace = namespace if namespace else context["asset"]["name"] + node_name = "{}_{}".format(namespace, name) if namespace else name + + # Create a new geo node + 
container = obj.createNode("geo", node_name=node_name) + + # Check whether the Redshift parameters exist - if not, then likely + # redshift is not set up or initialized correctly + if not container.parm("RS_objprop_proxy_enable"): + container.destroy() + raise LoadError("Unable to initialize geo node with Redshift " + "attributes. Make sure you have the Redshift " + "plug-in set up correctly for Houdini.") + + # Enable by default + container.setParms({ + "RS_objprop_proxy_enable": True, + "RS_objprop_proxy_file": self.format_path( + self.filepath_from_context(context), + context["representation"]) + }) + + # Remove the file node, it only loads static meshes + # Houdini 17 has removed the file node from the geo node + file_node = container.node("file1") + if file_node: + file_node.destroy() + + # Add this stub node inside so it previews ok + proxy_sop = container.createNode("redshift_proxySOP", + node_name=node_name) + proxy_sop.setDisplayFlag(True) + + nodes = [container, proxy_sop] + + self[:] = nodes + + return pipeline.containerise( + node_name, + namespace, + nodes, + context, + self.__class__.__name__, + suffix="", + ) + + def update(self, container, representation): + + # Update the file path + file_path = get_representation_path(representation) + + node = container["node"] + node.setParms({ + "RS_objprop_proxy_file": self.format_path( + file_path, representation) + }) + + # Update attribute + node.setParms({"representation": str(representation["_id"])}) + + def remove(self, container): + + node = container["node"] + node.destroy() + + @staticmethod + def format_path(path, representation): + """Format file path correctly for single redshift proxy + or redshift proxy sequence.""" + if not os.path.exists(path): + raise RuntimeError("Path does not exist: %s" % path) + + is_sequence = bool(representation["context"].get("frame")) + # The path is either a single file or sequence in a folder. 
+ if is_sequence: + filename = re.sub(r"(.*)\.(\d+)\.(rs.*)", "\\1.$F4.\\3", path) + filename = os.path.join(path, filename) + else: + filename = path + + filename = os.path.normpath(filename) + filename = filename.replace("\\", "/") + + return filename diff --git a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py index c7da8397dc..ffc2a526a3 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py @@ -41,11 +41,11 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): render_products = [] # Store whether we are splitting the render job (export + render) - export_job = bool(rop.parm("ar_ass_export_enable").eval()) - instance.data["exportJob"] = export_job + split_render = bool(rop.parm("ar_ass_export_enable").eval()) + instance.data["splitRender"] = split_render export_prefix = None export_products = [] - if export_job: + if split_render: export_prefix = evalParmNoFrame( rop, "ar_ass_file", pad_character="0" ) diff --git a/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py index bc71576174..64ef20f4e7 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py @@ -45,11 +45,11 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin): render_products = [] # Store whether we are splitting the render job (export + render) - export_job = bool(rop.parm("soho_outputmode").eval()) - instance.data["exportJob"] = export_job + split_render = bool(rop.parm("soho_outputmode").eval()) + instance.data["splitRender"] = split_render export_prefix = None export_products = [] - if export_job: + if split_render: export_prefix = evalParmNoFrame( rop, "soho_diskfile", pad_character="0" ) diff --git 
a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py index 0acddab011..aec7e07fbc 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py @@ -31,7 +31,6 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): families = ["redshift_rop"] def process(self, instance): - rop = hou.node(instance.data.get("instance_node")) # Collect chunkSize @@ -43,13 +42,29 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): default_prefix = evalParmNoFrame(rop, "RS_outputFileNamePrefix") beauty_suffix = rop.evalParm("RS_outputBeautyAOVSuffix") - render_products = [] + # Store whether we are splitting the render job (export + render) + split_render = bool(rop.parm("RS_archive_enable").eval()) + instance.data["splitRender"] = split_render + export_products = [] + if split_render: + export_prefix = evalParmNoFrame( + rop, "RS_archive_file", pad_character="0" + ) + beauty_export_product = self.get_render_product_name( + prefix=export_prefix, + suffix=None) + export_products.append(beauty_export_product) + self.log.debug( + "Found export product: {}".format(beauty_export_product) + ) + instance.data["ifdFile"] = beauty_export_product + instance.data["exportFiles"] = list(export_products) # Default beauty AOV beauty_product = self.get_render_product_name( prefix=default_prefix, suffix=beauty_suffix ) - render_products.append(beauty_product) + render_products = [beauty_product] files_by_aov = { "_": self.generate_expected_files(instance, beauty_product)} @@ -59,11 +74,11 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): i = index + 1 # Skip disabled AOVs - if not rop.evalParm("RS_aovEnable_%s" % i): + if not rop.evalParm(f"RS_aovEnable_{i}"): continue - aov_suffix = rop.evalParm("RS_aovSuffix_%s" % i) - aov_prefix = evalParmNoFrame(rop, "RS_aovCustomPrefix_%s" % i) + 
aov_suffix = rop.evalParm(f"RS_aovSuffix_{i}") + aov_prefix = evalParmNoFrame(rop, f"RS_aovCustomPrefix_{i}") if not aov_prefix: aov_prefix = default_prefix @@ -85,7 +100,7 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): instance.data["attachTo"] = [] # stub required data if "expectedFiles" not in instance.data: - instance.data["expectedFiles"] = list() + instance.data["expectedFiles"] = [] instance.data["expectedFiles"].append(files_by_aov) # update the colorspace data diff --git a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py index a1f4554726..ad4fdb0da5 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py @@ -46,11 +46,11 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin): # TODO: add render elements if render element # Store whether we are splitting the render job in an export + render - export_job = rop.parm("render_export_mode").eval() == "2" - instance.data["exportJob"] = export_job + split_render = rop.parm("render_export_mode").eval() == "2" + instance.data["splitRender"] = split_render export_prefix = None export_products = [] - if export_job: + if split_render: export_prefix = evalParmNoFrame( rop, "render_export_filepath", pad_character="0" ) diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py index 8531233bb2..e2d8d9c55f 100644 --- a/openpype/hosts/max/api/lib.py +++ b/openpype/hosts/max/api/lib.py @@ -294,6 +294,37 @@ def reset_frame_range(fps: bool = True): frame_range["frameStartHandle"], frame_range["frameEndHandle"]) +def reset_unit_scale(): + """Apply the unit scale setting to 3dsMax + """ + project_name = get_current_project_name() + settings = get_project_settings(project_name).get("max") + scene_scale = settings.get("unit_scale_settings", + {}).get("scene_unit_scale") + if scene_scale: + rt.units.DisplayType = rt.Name("Metric") 
+ rt.units.MetricType = rt.Name(scene_scale) + else: + rt.units.DisplayType = rt.Name("Generic") + + +def convert_unit_scale(): + """Convert system unit scale in 3dsMax + for fbx export + + Returns: + str: unit scale + """ + unit_scale_dict = { + "millimeters": "mm", + "centimeters": "cm", + "meters": "m", + "kilometers": "km" + } + current_unit_scale = rt.Execute("units.MetricType as string") + return unit_scale_dict[current_unit_scale] + + def set_context_setting(): """Apply the project settings from the project definition @@ -310,6 +341,7 @@ def set_context_setting(): reset_scene_resolution() reset_frame_range() reset_colorspace() + reset_unit_scale() def get_max_version(): diff --git a/openpype/hosts/max/api/menu.py b/openpype/hosts/max/api/menu.py index caaa3e3730..9bdb6bd7ce 100644 --- a/openpype/hosts/max/api/menu.py +++ b/openpype/hosts/max/api/menu.py @@ -124,6 +124,10 @@ class OpenPypeMenu(object): colorspace_action.triggered.connect(self.colorspace_callback) openpype_menu.addAction(colorspace_action) + unit_scale_action = QtWidgets.QAction("Set Unit Scale", openpype_menu) + unit_scale_action.triggered.connect(self.unit_scale_callback) + openpype_menu.addAction(unit_scale_action) + return openpype_menu def load_callback(self): @@ -157,3 +161,7 @@ class OpenPypeMenu(object): def colorspace_callback(self): """Callback to reset colorspace""" return lib.reset_colorspace() + + def unit_scale_callback(self): + """Callback to reset unit scale""" + return lib.reset_unit_scale() diff --git a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py deleted file mode 100644 index 4b5631b05f..0000000000 --- a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py +++ /dev/null @@ -1,55 +0,0 @@ -import os - -import pyblish.api -from pymxs import runtime as rt - -from openpype.hosts.max.api import maintained_selection -from openpype.pipeline import OptionalPyblishPluginMixin, publish - - -class 
ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin): - """Extract Camera with FbxExporter.""" - - order = pyblish.api.ExtractorOrder - 0.2 - label = "Extract Fbx Camera" - hosts = ["max"] - families = ["camera"] - optional = True - - def process(self, instance): - if not self.is_active(instance.data): - return - - stagingdir = self.staging_dir(instance) - filename = "{name}.fbx".format(**instance.data) - - filepath = os.path.join(stagingdir, filename) - rt.FBXExporterSetParam("Animation", True) - rt.FBXExporterSetParam("Cameras", True) - rt.FBXExporterSetParam("AxisConversionMethod", "Animation") - rt.FBXExporterSetParam("UpAxis", "Y") - rt.FBXExporterSetParam("Preserveinstances", True) - - with maintained_selection(): - # select and export - node_list = instance.data["members"] - rt.Select(node_list) - rt.ExportFile( - filepath, - rt.Name("noPrompt"), - selectedOnly=True, - using=rt.FBXEXP, - ) - - self.log.info("Performing Extraction ...") - if "representations" not in instance.data: - instance.data["representations"] = [] - - representation = { - "name": "fbx", - "ext": "fbx", - "files": filename, - "stagingDir": stagingdir, - } - instance.data["representations"].append(representation) - self.log.info(f"Extracted instance '{instance.name}' to: {filepath}") diff --git a/openpype/hosts/max/plugins/publish/extract_model_fbx.py b/openpype/hosts/max/plugins/publish/extract_fbx.py similarity index 67% rename from openpype/hosts/max/plugins/publish/extract_model_fbx.py rename to openpype/hosts/max/plugins/publish/extract_fbx.py index 6c42fd5364..7454cd08d1 100644 --- a/openpype/hosts/max/plugins/publish/extract_model_fbx.py +++ b/openpype/hosts/max/plugins/publish/extract_fbx.py @@ -3,6 +3,7 @@ import pyblish.api from openpype.pipeline import publish, OptionalPyblishPluginMixin from pymxs import runtime as rt from openpype.hosts.max.api import maintained_selection +from openpype.hosts.max.api.lib import convert_unit_scale class 
ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin): @@ -23,14 +24,7 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin): stagingdir = self.staging_dir(instance) filename = "{name}.fbx".format(**instance.data) filepath = os.path.join(stagingdir, filename) - - rt.FBXExporterSetParam("Animation", False) - rt.FBXExporterSetParam("Cameras", False) - rt.FBXExporterSetParam("Lights", False) - rt.FBXExporterSetParam("PointCache", False) - rt.FBXExporterSetParam("AxisConversionMethod", "Animation") - rt.FBXExporterSetParam("UpAxis", "Y") - rt.FBXExporterSetParam("Preserveinstances", True) + self._set_fbx_attributes() with maintained_selection(): # select and export @@ -56,3 +50,34 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin): self.log.info( "Extracted instance '%s' to: %s" % (instance.name, filepath) ) + + def _set_fbx_attributes(self): + unit_scale = convert_unit_scale() + rt.FBXExporterSetParam("Animation", False) + rt.FBXExporterSetParam("Cameras", False) + rt.FBXExporterSetParam("Lights", False) + rt.FBXExporterSetParam("PointCache", False) + rt.FBXExporterSetParam("AxisConversionMethod", "Animation") + rt.FBXExporterSetParam("UpAxis", "Y") + rt.FBXExporterSetParam("Preserveinstances", True) + if unit_scale: + rt.FBXExporterSetParam("ConvertUnit", unit_scale) + + +class ExtractCameraFbx(ExtractModelFbx): + """Extract Camera with FbxExporter.""" + + order = pyblish.api.ExtractorOrder - 0.2 + label = "Extract Fbx Camera" + families = ["camera"] + optional = True + + def _set_fbx_attributes(self): + unit_scale = convert_unit_scale() + rt.FBXExporterSetParam("Animation", True) + rt.FBXExporterSetParam("Cameras", True) + rt.FBXExporterSetParam("AxisConversionMethod", "Animation") + rt.FBXExporterSetParam("UpAxis", "Y") + rt.FBXExporterSetParam("Preserveinstances", True) + if unit_scale: + rt.FBXExporterSetParam("ConvertUnit", unit_scale) diff --git a/openpype/hosts/maya/api/exitstack.py 
b/openpype/hosts/maya/api/exitstack.py new file mode 100644 index 0000000000..d151ee16d7 --- /dev/null +++ b/openpype/hosts/maya/api/exitstack.py @@ -0,0 +1,139 @@ +"""Backwards compatible implementation of ExitStack for Python 2. + +ExitStack contextmanager was implemented with Python 3.3. +As long as we supportPython 2 hosts we can use this backwards +compatible implementation to support bothPython 2 and Python 3. + +Instead of using ExitStack from contextlib, use it from this module: + +>>> from openpype.hosts.maya.api.exitstack import ExitStack + +It will provide the appropriate ExitStack implementation for the current +running Python version. + +""" +# TODO: Remove the entire script once dropping Python 2 support. +import contextlib +if getattr(contextlib, "nested", None): + from contextlib import ExitStack # noqa +else: + import sys + from collections import deque + + class ExitStack(object): + + """Context manager for dynamic management of a stack of exit callbacks + + For example: + + with ExitStack() as stack: + files = [stack.enter_context(open(fname)) + for fname in filenames] + # All opened files will automatically be closed at the end of + # the with statement, even if attempts to open files later + # in the list raise an exception + + """ + def __init__(self): + self._exit_callbacks = deque() + + def pop_all(self): + """Preserve the context stack by transferring + it to a new instance""" + new_stack = type(self)() + new_stack._exit_callbacks = self._exit_callbacks + self._exit_callbacks = deque() + return new_stack + + def _push_cm_exit(self, cm, cm_exit): + """Helper to correctly register callbacks + to __exit__ methods""" + def _exit_wrapper(*exc_details): + return cm_exit(cm, *exc_details) + _exit_wrapper.__self__ = cm + self.push(_exit_wrapper) + + def push(self, exit): + """Registers a callback with the standard __exit__ method signature + + Can suppress exceptions the same way __exit__ methods can. 
+ + Also accepts any object with an __exit__ method (registering a call + to the method instead of the object itself) + """ + # We use an unbound method rather than a bound method to follow + # the standard lookup behaviour for special methods + _cb_type = type(exit) + try: + exit_method = _cb_type.__exit__ + except AttributeError: + # Not a context manager, so assume its a callable + self._exit_callbacks.append(exit) + else: + self._push_cm_exit(exit, exit_method) + return exit # Allow use as a decorator + + def callback(self, callback, *args, **kwds): + """Registers an arbitrary callback and arguments. + + Cannot suppress exceptions. + """ + def _exit_wrapper(exc_type, exc, tb): + callback(*args, **kwds) + # We changed the signature, so using @wraps is not appropriate, but + # setting __wrapped__ may still help with introspection + _exit_wrapper.__wrapped__ = callback + self.push(_exit_wrapper) + return callback # Allow use as a decorator + + def enter_context(self, cm): + """Enters the supplied context manager + + If successful, also pushes its __exit__ method as a callback and + returns the result of the __enter__ method. 
+ """ + # We look up the special methods on the type to + # match the with statement + _cm_type = type(cm) + _exit = _cm_type.__exit__ + result = _cm_type.__enter__(cm) + self._push_cm_exit(cm, _exit) + return result + + def close(self): + """Immediately unwind the context stack""" + self.__exit__(None, None, None) + + def __enter__(self): + return self + + def __exit__(self, *exc_details): + # We manipulate the exception state so it behaves as though + # we were actually nesting multiple with statements + frame_exc = sys.exc_info()[1] + + def _fix_exception_context(new_exc, old_exc): + while 1: + exc_context = new_exc.__context__ + if exc_context in (None, frame_exc): + break + new_exc = exc_context + new_exc.__context__ = old_exc + + # Callbacks are invoked in LIFO order to match the behaviour of + # nested context managers + suppressed_exc = False + while self._exit_callbacks: + cb = self._exit_callbacks.pop() + try: + if cb(*exc_details): + suppressed_exc = True + exc_details = (None, None, None) + except Exception: + new_exc_details = sys.exc_info() + # simulate the stack of exceptions by setting the context + _fix_exception_context(new_exc_details[1], exc_details[1]) + if not self._exit_callbacks: + raise + exc_details = new_exc_details + return suppressed_exc diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index af726409d4..394f92ed42 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -1,6 +1,7 @@ """Standalone helper functions""" import os +import copy from pprint import pformat import sys import uuid @@ -9,6 +10,8 @@ import re import json import logging import contextlib +import capture +from .exitstack import ExitStack from collections import OrderedDict, defaultdict from math import ceil from six import string_types @@ -172,6 +175,216 @@ def maintained_selection(): cmds.select(clear=True) +def reload_all_udim_tile_previews(): + """Regenerate all UDIM tile preview in texture file""" + for 
texture_file in cmds.ls(type="file"): + if cmds.getAttr("{}.uvTilingMode".format(texture_file)) > 0: + cmds.ogs(regenerateUVTilePreview=texture_file) + + +@contextlib.contextmanager +def panel_camera(panel, camera): + """Set modelPanel's camera during the context. + + Arguments: + panel (str): modelPanel name. + camera (str): camera name. + + """ + original_camera = cmds.modelPanel(panel, query=True, camera=True) + try: + cmds.modelPanel(panel, edit=True, camera=camera) + yield + finally: + cmds.modelPanel(panel, edit=True, camera=original_camera) + + +def render_capture_preset(preset): + """Capture playblast with a preset. + + To generate the preset use `generate_capture_preset`. + + Args: + preset (dict): preset options + + Returns: + str: Output path of `capture.capture` + """ + + # Force a refresh at the start of the timeline + # TODO (Question): Why do we need to do this? What bug does it solve? + # Is this for simulations? + cmds.refresh(force=True) + refresh_frame_int = int(cmds.playbackOptions(query=True, minTime=True)) + cmds.currentTime(refresh_frame_int - 1, edit=True) + cmds.currentTime(refresh_frame_int, edit=True) + log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) + ) + ) + preset = copy.deepcopy(preset) + # not supported by `capture` so we pop it off of the preset + reload_textures = preset["viewport_options"].pop("loadTextures", False) + panel = preset.pop("panel") + with ExitStack() as stack: + stack.enter_context(maintained_time()) + stack.enter_context(panel_camera(panel, preset["camera"])) + stack.enter_context(viewport_default_options(panel, preset)) + if reload_textures: + # Force immediate texture loading to ensure + # all textures have loaded before the playblast starts + stack.enter_context(material_loading_mode(mode="immediate")) + # Regenerate all UDIM tile previews + reload_all_udim_tile_previews() + path = capture.capture(log=log, **preset) + + return path + + +def 
generate_capture_preset(instance, camera, path, + start=None, end=None, capture_preset=None): + """Function for getting all the data of preset options for + playblast capturing + + Args: + instance (pyblish.api.Instance): instance + camera (str): review camera + path (str): filepath + start (int): frameStart + end (int): frameEnd + capture_preset (dict): capture preset + + Returns: + dict: Resulting preset + """ + preset = load_capture_preset(data=capture_preset) + + preset["camera"] = camera + preset["start_frame"] = start + preset["end_frame"] = end + preset["filename"] = path + preset["overwrite"] = True + preset["panel"] = instance.data["panel"] + + # Disable viewer since we use the rendering logic for publishing + # We don't want to open the generated playblast in a viewer directly. + preset["viewer"] = False + + # "isolate_view" will already have been applied at creation, so we'll + # ignore it here. + preset.pop("isolate_view") + + # Set resolution variables from capture presets + width_preset = capture_preset["Resolution"]["width"] + height_preset = capture_preset["Resolution"]["height"] + + # Set resolution variables from asset values + asset_data = instance.data["assetEntity"]["data"] + asset_width = asset_data.get("resolutionWidth") + asset_height = asset_data.get("resolutionHeight") + review_instance_width = instance.data.get("review_width") + review_instance_height = instance.data.get("review_height") + + # Use resolution from instance if review width/height is set + # Otherwise use the resolution from preset if it has non-zero values + # Otherwise fall back to asset width x height + # Else define no width, then `capture.capture` will use render resolution + if review_instance_width and review_instance_height: + preset["width"] = review_instance_width + preset["height"] = review_instance_height + elif width_preset and height_preset: + preset["width"] = width_preset + preset["height"] = height_preset + elif asset_width and asset_height: + 
preset["width"] = asset_width + preset["height"] = asset_height + + # Isolate view is requested by having objects in the set besides a + # camera. If there is only 1 member it'll be the camera because we + # validate to have 1 camera only. + if instance.data["isolate"] and len(instance.data["setMembers"]) > 1: + preset["isolate"] = instance.data["setMembers"] + + # Override camera options + # Enforce persisting camera depth of field + camera_options = preset.setdefault("camera_options", {}) + camera_options["depthOfField"] = cmds.getAttr( + "{0}.depthOfField".format(camera) + ) + + # Use Pan/Zoom from instance data instead of from preset + preset.pop("pan_zoom", None) + camera_options["panZoomEnabled"] = instance.data["panZoom"] + + # Override viewport options by instance data + viewport_options = preset.setdefault("viewport_options", {}) + viewport_options["displayLights"] = instance.data["displayLights"] + viewport_options["imagePlane"] = instance.data.get("imagePlane", True) + + # Override transparency if requested. + transparency = instance.data.get("transparency", 0) + if transparency != 0: + preset["viewport2_options"]["transparencyAlgorithm"] = transparency + + # Update preset with current panel setting + # if override_viewport_options is turned off + if not capture_preset["Viewport Options"]["override_viewport_options"]: + panel_preset = capture.parse_view(preset["panel"]) + panel_preset.pop("camera") + preset.update(panel_preset) + + return preset + + +@contextlib.contextmanager +def viewport_default_options(panel, preset): + """Context manager used by `render_capture_preset`. + + We need to explicitly enable some viewport changes so the viewport is + refreshed ahead of playblasting. + + """ + # TODO: Clarify in the docstring WHY we need to set it ahead of + # playblasting. What issues does it solve? 
+ viewport_defaults = {} + try: + keys = [ + "useDefaultMaterial", + "wireframeOnShaded", + "xray", + "jointXray", + "backfaceCulling", + "textures" + ] + for key in keys: + viewport_defaults[key] = cmds.modelEditor( + panel, query=True, **{key: True} + ) + if preset["viewport_options"].get(key): + cmds.modelEditor( + panel, edit=True, **{key: True} + ) + yield + finally: + # Restoring viewport options. + if viewport_defaults: + cmds.modelEditor( + panel, edit=True, **viewport_defaults + ) + + +@contextlib.contextmanager +def material_loading_mode(mode="immediate"): + """Set material loading mode during context""" + original = cmds.displayPref(query=True, materialLoadingMode=True) + cmds.displayPref(materialLoadingMode=mode) + try: + yield + finally: + cmds.displayPref(materialLoadingMode=original) + + def get_namespace(node): """Return namespace of given node""" node_name = node.rsplit("|", 1)[-1] @@ -2677,7 +2890,7 @@ def bake_to_world_space(nodes, return world_space_nodes -def load_capture_preset(data=None): +def load_capture_preset(data): """Convert OpenPype Extract Playblast settings to `capture` arguments Input data is the settings from: @@ -2691,8 +2904,6 @@ def load_capture_preset(data=None): """ - import capture - options = dict() viewport_options = dict() viewport2_options = dict() diff --git a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py index b3fbfb2ed9..40385f34d6 100644 --- a/openpype/hosts/maya/plugins/load/load_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/load/load_redshift_proxy.py @@ -137,6 +137,11 @@ class RedshiftProxyLoader(load.LoaderPlugin): cmds.connectAttr("{}.outMesh".format(rs_mesh), "{}.inMesh".format(mesh_shape)) + # TODO: use the assigned shading group as shaders if existed + # assign default shader to redshift proxy + if cmds.ls("initialShadingGroup", type="shadingEngine"): + cmds.sets(mesh_shape, forceElement="initialShadingGroup") + group_node = 
cmds.group(empty=True, name="{}_GRP".format(name)) mesh_transform = cmds.listRelatives(mesh_shape, parent=True, fullPath=True) diff --git a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py index df761cde13..f82f7b69cd 100644 --- a/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py +++ b/openpype/hosts/maya/plugins/publish/collect_yeti_rig.py @@ -6,6 +6,7 @@ from maya import cmds import pyblish.api from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import KnownPublishError SETTINGS = {"renderDensity", @@ -116,7 +117,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin): resources = [] image_search_paths = cmds.getAttr("{}.imageSearchPath".format(node)) - texture_filenames = [] if image_search_paths: # TODO: Somehow this uses OS environment path separator, `:` vs `;` @@ -127,9 +127,16 @@ class CollectYetiRig(pyblish.api.InstancePlugin): # find all ${TOKEN} tokens and replace them with $TOKEN env. variable image_search_paths = self._replace_tokens(image_search_paths) - # List all related textures - texture_filenames = cmds.pgYetiCommand(node, listTextures=True) - self.log.debug("Found %i texture(s)" % len(texture_filenames)) + # List all related textures + texture_nodes = cmds.pgYetiGraph( + node, listNodes=True, type="texture") + texture_filenames = [ + cmds.pgYetiGraph( + node, node=texture_node, + param="file_name", getParamValue=True) + for texture_node in texture_nodes + ] + self.log.debug("Found %i texture(s)" % len(texture_filenames)) # Get all reference nodes reference_nodes = cmds.pgYetiGraph(node, @@ -137,11 +144,6 @@ class CollectYetiRig(pyblish.api.InstancePlugin): type="reference") self.log.debug("Found %i reference node(s)" % len(reference_nodes)) - if texture_filenames and not image_search_paths: - raise ValueError("pgYetiMaya node '%s' is missing the path to the " - "files in the 'imageSearchPath " - "atttribute'" % node) - # Collect all texture files # find all 
${TOKEN} tokens and replace them with $TOKEN env. variable texture_filenames = self._replace_tokens(texture_filenames) @@ -161,7 +163,7 @@ class CollectYetiRig(pyblish.api.InstancePlugin): break if not files: - self.log.warning( + raise KnownPublishError( "No texture found for: %s " "(searched: %s)" % (texture, image_search_paths)) diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py index ab170fe48c..a4f313bdf9 100644 --- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py +++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py @@ -6,9 +6,11 @@ from maya import cmds from openpype.hosts.maya.api.lib import maintained_selection from openpype.pipeline import AVALON_CONTAINER_ID, publish +from openpype.pipeline.publish import OpenPypePyblishPluginMixin +from openpype.lib import BoolDef -class ExtractMayaSceneRaw(publish.Extractor): +class ExtractMayaSceneRaw(publish.Extractor, OpenPypePyblishPluginMixin): """Extract as Maya Scene (raw). This will preserve all references, construction history, etc. @@ -23,6 +25,22 @@ class ExtractMayaSceneRaw(publish.Extractor): "camerarig"] scene_type = "ma" + @classmethod + def get_attribute_defs(cls): + return [ + BoolDef( + "preserve_references", + label="Preserve References", + tooltip=( + "When enabled references will still be references " + "in the published file.\nWhen disabled the references " + "are imported into the published file generating a " + "file without references." 
+ ), + default=True + ) + ] + def process(self, instance): """Plugin entry point.""" ext_mapping = ( @@ -64,13 +82,18 @@ class ExtractMayaSceneRaw(publish.Extractor): # Perform extraction self.log.debug("Performing extraction ...") + attribute_values = self.get_attr_values_from_data( + instance.data + ) with maintained_selection(): cmds.select(selection, noExpand=True) cmds.file(path, force=True, typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 exportSelected=True, - preserveReferences=True, + preserveReferences=attribute_values[ + "preserve_references" + ], constructionHistory=True, shader=True, constraints=True, diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index cfab239da3..507229a7b3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -1,9 +1,6 @@ import os -import json -import contextlib import clique -import capture from openpype.pipeline import publish from openpype.hosts.maya.api import lib @@ -11,16 +8,6 @@ from openpype.hosts.maya.api import lib from maya import cmds -@contextlib.contextmanager -def panel_camera(panel, camera): - original_camera = cmds.modelPanel(panel, query=True, camera=True) - try: - cmds.modelPanel(panel, edit=True, camera=camera) - yield - finally: - cmds.modelPanel(panel, edit=True, camera=original_camera) - - class ExtractPlayblast(publish.Extractor): """Extract viewport playblast. 
@@ -36,19 +23,8 @@ class ExtractPlayblast(publish.Extractor): capture_preset = {} profiles = None - def _capture(self, preset): - if os.environ.get("OPENPYPE_DEBUG") == "1": - self.log.debug( - "Using preset: {}".format( - json.dumps(preset, indent=4, sort_keys=True) - ) - ) - - path = capture.capture(log=self.log, **preset) - self.log.debug("playblast path {}".format(path)) - def process(self, instance): - self.log.debug("Extracting capture..") + self.log.debug("Extracting playblast..") # get scene fps fps = instance.data.get("fps") or instance.context.data.get("fps") @@ -63,10 +39,6 @@ class ExtractPlayblast(publish.Extractor): end = cmds.playbackOptions(query=True, animationEndTime=True) self.log.debug("start: {}, end: {}".format(start, end)) - - # get cameras - camera = instance.data["review_camera"] - task_data = instance.data["anatomyData"].get("task", {}) capture_preset = lib.get_capture_preset( task_data.get("name"), @@ -75,174 +47,35 @@ class ExtractPlayblast(publish.Extractor): instance.context.data["project_settings"], self.log ) - - preset = lib.load_capture_preset(data=capture_preset) - - # "isolate_view" will already have been applied at creation, so we'll - # ignore it here. 
- preset.pop("isolate_view") - - # Set resolution variables from capture presets - width_preset = capture_preset["Resolution"]["width"] - height_preset = capture_preset["Resolution"]["height"] - - # Set resolution variables from asset values - asset_data = instance.data["assetEntity"]["data"] - asset_width = asset_data.get("resolutionWidth") - asset_height = asset_data.get("resolutionHeight") - review_instance_width = instance.data.get("review_width") - review_instance_height = instance.data.get("review_height") - preset["camera"] = camera - - # Tests if project resolution is set, - # if it is a value other than zero, that value is - # used, if not then the asset resolution is - # used - if review_instance_width and review_instance_height: - preset["width"] = review_instance_width - preset["height"] = review_instance_height - elif width_preset and height_preset: - preset["width"] = width_preset - preset["height"] = height_preset - elif asset_width and asset_height: - preset["width"] = asset_width - preset["height"] = asset_height - preset["start_frame"] = start - preset["end_frame"] = end - - # Enforce persisting camera depth of field - camera_options = preset.setdefault("camera_options", {}) - camera_options["depthOfField"] = cmds.getAttr( - "{0}.depthOfField".format(camera)) - stagingdir = self.staging_dir(instance) - filename = "{0}".format(instance.name) + filename = instance.name path = os.path.join(stagingdir, filename) - self.log.debug("Outputting images to %s" % path) + # get cameras + camera = instance.data["review_camera"] + preset = lib.generate_capture_preset( + instance, camera, path, + start=start, end=end, + capture_preset=capture_preset) + lib.render_capture_preset(preset) - preset["filename"] = path - preset["overwrite"] = True - - cmds.refresh(force=True) - - refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True)) - cmds.currentTime(refreshFrameInt - 1, edit=True) - cmds.currentTime(refreshFrameInt, edit=True) - - # Use displayLights 
setting from instance - key = "displayLights" - preset["viewport_options"][key] = instance.data[key] - - # Override transparency if requested. - transparency = instance.data.get("transparency", 0) - if transparency != 0: - preset["viewport2_options"]["transparencyAlgorithm"] = transparency - - # Isolate view is requested by having objects in the set besides a - # camera. If there is only 1 member it'll be the camera because we - # validate to have 1 camera only. - if instance.data["isolate"] and len(instance.data["setMembers"]) > 1: - preset["isolate"] = instance.data["setMembers"] - - # Show/Hide image planes on request. - image_plane = instance.data.get("imagePlane", True) - if "viewport_options" in preset: - preset["viewport_options"]["imagePlane"] = image_plane - else: - preset["viewport_options"] = {"imagePlane": image_plane} - - # Disable Pan/Zoom. - pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"])) - preset.pop("pan_zoom", None) - preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"] - - # Need to explicitly enable some viewport changes so the viewport is - # refreshed ahead of playblasting. 
- keys = [ - "useDefaultMaterial", - "wireframeOnShaded", - "xray", - "jointXray", - "backfaceCulling" - ] - viewport_defaults = {} - for key in keys: - viewport_defaults[key] = cmds.modelEditor( - instance.data["panel"], query=True, **{key: True} - ) - if preset["viewport_options"][key]: - cmds.modelEditor( - instance.data["panel"], edit=True, **{key: True} - ) - - override_viewport_options = ( - capture_preset["Viewport Options"]["override_viewport_options"] - ) - - # Force viewer to False in call to capture because we have our own - # viewer opening call to allow a signal to trigger between - # playblast and viewer - preset["viewer"] = False - - # Update preset with current panel setting - # if override_viewport_options is turned off - if not override_viewport_options: - panel_preset = capture.parse_view(instance.data["panel"]) - panel_preset.pop("camera") - preset.update(panel_preset) - - # Need to ensure Python 2 compatibility. - # TODO: Remove once dropping Python 2. - if getattr(contextlib, "nested", None): - # Python 3 compatibility. - with contextlib.nested( - lib.maintained_time(), - panel_camera(instance.data["panel"], preset["camera"]) - ): - self._capture(preset) - else: - # Python 2 compatibility. - with contextlib.ExitStack() as stack: - stack.enter_context(lib.maintained_time()) - stack.enter_context( - panel_camera(instance.data["panel"], preset["camera"]) - ) - - self._capture(preset) - - # Restoring viewport options. 
- if viewport_defaults: - cmds.modelEditor( - instance.data["panel"], edit=True, **viewport_defaults - ) - - try: - cmds.setAttr( - "{}.panZoomEnabled".format(preset["camera"]), pan_zoom) - except RuntimeError: - self.log.warning("Cannot restore Pan/Zoom settings.") - + # Find playblast sequence collected_files = os.listdir(stagingdir) patterns = [clique.PATTERNS["frames"]] collections, remainder = clique.assemble(collected_files, minimum_items=1, patterns=patterns) - filename = preset.get("filename", "%TEMP%") - self.log.debug("filename {}".format(filename)) + self.log.debug("Searching playblast collection for: %s", path) frame_collection = None for collection in collections: filebase = collection.format("{head}").rstrip(".") - self.log.debug("collection head {}".format(filebase)) - if filebase in filename: + self.log.debug("Checking collection head: %s", filebase) + if filebase in path: frame_collection = collection self.log.debug( - "we found collection of interest {}".format( - str(frame_collection))) - - if "representations" not in instance.data: - instance.data["representations"] = [] + "Found playblast collection: %s", frame_collection + ) tags = ["review"] if not instance.data.get("keepImages"): @@ -256,6 +89,9 @@ class ExtractPlayblast(publish.Extractor): if len(collected_files) == 1: collected_files = collected_files[0] + if "representations" not in instance.data: + instance.data["representations"] = [] + representation = { "name": capture_preset["Codec"]["compression"], "ext": capture_preset["Codec"]["compression"], diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index c0be3d77db..28362b355c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -1,15 +1,10 @@ import os import glob import tempfile -import json - -import capture from openpype.pipeline import publish from openpype.hosts.maya.api 
import lib -from maya import cmds - class ExtractThumbnail(publish.Extractor): """Extract viewport thumbnail. @@ -24,7 +19,7 @@ class ExtractThumbnail(publish.Extractor): families = ["review"] def process(self, instance): - self.log.debug("Extracting capture..") + self.log.debug("Extracting thumbnail..") camera = instance.data["review_camera"] @@ -37,20 +32,24 @@ class ExtractThumbnail(publish.Extractor): self.log ) - preset = lib.load_capture_preset(data=capture_preset) - - # "isolate_view" will already have been applied at creation, so we'll - # ignore it here. - preset.pop("isolate_view") - - override_viewport_options = ( - capture_preset["Viewport Options"]["override_viewport_options"] + # Create temp directory for thumbnail + # - this is to avoid "override" of source file + dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_thumbnail") + self.log.debug( + "Create temp directory {} for thumbnail".format(dst_staging) ) + # Store new staging to cleanup paths + filename = instance.name + path = os.path.join(dst_staging, filename) - preset["camera"] = camera - preset["start_frame"] = instance.data["frameStart"] - preset["end_frame"] = instance.data["frameStart"] - preset["camera_options"] = { + self.log.debug("Outputting images to %s" % path) + + preset = lib.generate_capture_preset( + instance, camera, path, + start=1, end=1, + capture_preset=capture_preset) + + preset["camera_options"].update({ "displayGateMask": False, "displayResolution": False, "displayFilmGate": False, @@ -60,101 +59,10 @@ class ExtractThumbnail(publish.Extractor): "displayFilmPivot": False, "displayFilmOrigin": False, "overscan": 1.0, - "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)), - } - # Set resolution variables from capture presets - width_preset = capture_preset["Resolution"]["width"] - height_preset = capture_preset["Resolution"]["height"] - # Set resolution variables from asset values - asset_data = instance.data["assetEntity"]["data"] - asset_width = 
asset_data.get("resolutionWidth") - asset_height = asset_data.get("resolutionHeight") - review_instance_width = instance.data.get("review_width") - review_instance_height = instance.data.get("review_height") - # Tests if project resolution is set, - # if it is a value other than zero, that value is - # used, if not then the asset resolution is - # used - if review_instance_width and review_instance_height: - preset["width"] = review_instance_width - preset["height"] = review_instance_height - elif width_preset and height_preset: - preset["width"] = width_preset - preset["height"] = height_preset - elif asset_width and asset_height: - preset["width"] = asset_width - preset["height"] = asset_height + }) + path = lib.render_capture_preset(preset) - # Create temp directory for thumbnail - # - this is to avoid "override" of source file - dst_staging = tempfile.mkdtemp(prefix="pyblish_tmp_") - self.log.debug( - "Create temp directory {} for thumbnail".format(dst_staging) - ) - # Store new staging to cleanup paths - filename = "{0}".format(instance.name) - path = os.path.join(dst_staging, filename) - - self.log.debug("Outputting images to %s" % path) - - preset["filename"] = path - preset["overwrite"] = True - - cmds.refresh(force=True) - - refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True)) - cmds.currentTime(refreshFrameInt - 1, edit=True) - cmds.currentTime(refreshFrameInt, edit=True) - - # Use displayLights setting from instance - key = "displayLights" - preset["viewport_options"][key] = instance.data[key] - - # Override transparency if requested. - transparency = instance.data.get("transparency", 0) - if transparency != 0: - preset["viewport2_options"]["transparencyAlgorithm"] = transparency - - # Isolate view is requested by having objects in the set besides a - # camera. If there is only 1 member it'll be the camera because we - # validate to have 1 camera only. 
- if instance.data["isolate"] and len(instance.data["setMembers"]) > 1: - preset["isolate"] = instance.data["setMembers"] - - # Show or Hide Image Plane - image_plane = instance.data.get("imagePlane", True) - if "viewport_options" in preset: - preset["viewport_options"]["imagePlane"] = image_plane - else: - preset["viewport_options"] = {"imagePlane": image_plane} - - # Disable Pan/Zoom. - preset.pop("pan_zoom", None) - preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"] - - with lib.maintained_time(): - # Force viewer to False in call to capture because we have our own - # viewer opening call to allow a signal to trigger between - # playblast and viewer - preset["viewer"] = False - - # Update preset with current panel setting - # if override_viewport_options is turned off - panel = cmds.getPanel(withFocus=True) or "" - if not override_viewport_options and "modelPanel" in panel: - panel_preset = capture.parse_active_view() - preset.update(panel_preset) - cmds.setFocus(panel) - - if os.environ.get("OPENPYPE_DEBUG") == "1": - self.log.debug( - "Using preset: {}".format( - json.dumps(preset, indent=4, sort_keys=True) - ) - ) - - path = capture.capture(**preset) - playblast = self._fix_playblast_output_path(path) + playblast = self._fix_playblast_output_path(path) _, thumbnail = os.path.split(playblast) diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index 7bc17ff504..12562a6b6f 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -260,7 +260,7 @@ def _install_menu(): "Create...", lambda: host_tools.show_publisher( parent=( - main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None + main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None ), tab="create" ) @@ -271,7 +271,7 @@ def _install_menu(): "Publish...", lambda: host_tools.show_publisher( parent=( - main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None + main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None ), 
tab="publish" ) diff --git a/openpype/hosts/nuke/api/utils.py b/openpype/hosts/nuke/api/utils.py index 7b02585892..a7df1dee71 100644 --- a/openpype/hosts/nuke/api/utils.py +++ b/openpype/hosts/nuke/api/utils.py @@ -12,7 +12,7 @@ def set_context_favorites(favorites=None): favorites (dict): couples of {name:path} """ favorites = favorites or {} - icon_path = resources.get_resource("icons", "folder-favorite3.png") + icon_path = resources.get_resource("icons", "folder-favorite.png") for name, path in favorites.items(): nuke.addFavoriteDir( name, diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py index 3ee166eb56..a02a807206 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py @@ -34,6 +34,11 @@ class ExtractReviewIntermediates(publish.Extractor): nuke_publish = project_settings["nuke"]["publish"] deprecated_setting = nuke_publish["ExtractReviewDataMov"] current_setting = nuke_publish.get("ExtractReviewIntermediates") + if not deprecated_setting["enabled"] and ( + not current_setting["enabled"] + ): + cls.enabled = False + if deprecated_setting["enabled"]: # Use deprecated settings if they are still enabled cls.viewer_lut_raw = deprecated_setting["viewer_lut_raw"] diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index c2773b2a20..09c5d63aa5 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -170,8 +170,7 @@ class ExtractReview(publish.Extractor): # Generate mov. 
mov_path = os.path.join(staging_dir, "review.mov") self.log.info(f"Generate mov review: {mov_path}") - args = [ - ffmpeg_path, + args = ffmpeg_path + [ "-y", "-i", source_files_pattern, "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2", diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py index 5db2b0cbad..75c26ac958 100644 --- a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py +++ b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py @@ -5,6 +5,7 @@ from openpype.pipeline import ( ) from openpype.lib import EnumDef from openpype.pipeline import colorspace +from openpype.pipeline.publish import KnownPublishError class CollectColorspace(pyblish.api.InstancePlugin, @@ -26,18 +27,44 @@ class CollectColorspace(pyblish.api.InstancePlugin, def process(self, instance): values = self.get_attr_values_from_data(instance.data) - colorspace = values.get("colorspace", None) - if colorspace is None: + colorspace_value = values.get("colorspace", None) + if colorspace_value is None: return - self.log.debug("Explicit colorspace set to: {}".format(colorspace)) + color_data = colorspace.convert_colorspace_enumerator_item( + colorspace_value, self.config_items) + + colorspace_name = self._colorspace_name_by_type(color_data) + self.log.debug("Explicit colorspace name: {}".format(colorspace_name)) context = instance.context for repre in instance.data.get("representations", {}): self.set_representation_colorspace( representation=repre, context=context, - colorspace=colorspace + colorspace=colorspace_name + ) + + def _colorspace_name_by_type(self, colorspace_data): + """ + Returns colorspace name by type + + Arguments: + colorspace_data (dict): colorspace data + + Returns: + str: colorspace name + """ + if colorspace_data["type"] == "colorspaces": + return colorspace_data["name"] + elif colorspace_data["type"] == "roles": + return 
colorspace_data["colorspace"] + else: + raise KnownPublishError( + ( + "Collecting of colorspace failed. Used config is missing " + "colorspace type: '{}'. Please contact your pipeline TD." + ).format(colorspace_data['type']) ) @classmethod diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py index 03f9f299b2..58c40938d2 100644 --- a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py @@ -33,7 +33,19 @@ class ValidateColorspace(pyblish.api.InstancePlugin, config_path = colorspace_data["config"]["path"] if config_path not in config_colorspaces: colorspaces = get_ocio_config_colorspaces(config_path) - config_colorspaces[config_path] = set(colorspaces) + if not colorspaces.get("colorspaces"): + message = ( + f"OCIO config '{config_path}' does not contain any " + "colorspaces. This is an error in the OCIO config. " + "Contact your pipeline TD." + ) + raise PublishValidationError( + title="Colorspace validation", + message=message, + description=message + ) + config_colorspaces[config_path] = set( + colorspaces["colorspaces"]) colorspace = colorspace_data["colorspace"] self.log.debug( diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_filepaths.py b/openpype/hosts/traypublisher/plugins/publish/validate_filepaths.py index 749199fbd3..b67e47d213 100644 --- a/openpype/hosts/traypublisher/plugins/publish/validate_filepaths.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_filepaths.py @@ -15,7 +15,7 @@ class ValidateFilePath(pyblish.api.InstancePlugin): This is primarily created for Simple Creator instances. 
""" - label = "Validate Workfile" + label = "Validate Filepaths" order = pyblish.api.ValidatorOrder - 0.49 hosts = ["traypublisher"] diff --git a/openpype/lib/events.py b/openpype/lib/events.py index 496b765a05..774790b80a 100644 --- a/openpype/lib/events.py +++ b/openpype/lib/events.py @@ -16,6 +16,113 @@ class MissingEventSystem(Exception): pass +def _get_func_ref(func): + if inspect.ismethod(func): + return WeakMethod(func) + return weakref.ref(func) + + +def _get_func_info(func): + path = "" + if func is None: + return "", path + + if hasattr(func, "__name__"): + name = func.__name__ + else: + name = str(func) + + # Get path to file and fallback to '' if fails + # NOTE This was added because of 'partial' functions which is handled, + # but who knows what else can cause this to fail? + try: + path = os.path.abspath(inspect.getfile(func)) + except TypeError: + pass + + return name, path + + +class weakref_partial: + """Partial function with weak reference to the wrapped function. + + Can be used as 'functools.partial' but it will store weak reference to + function. That means that the function must be reference counted + to avoid garbage collecting the function itself. + + When the referenced functions is garbage collected then calling the + weakref partial (no matter the args/kwargs passed) will do nothing. + It will fail silently, returning `None`. The `is_valid()` method can + be used to detect whether the reference is still valid. + + Is useful for object methods. In that case the callback is + deregistered when object is destroyed. + + Warnings: + Values passed as *args and **kwargs are stored strongly in memory. + That may "keep alive" objects that should be already destroyed. + It is recommended to pass only immutable objects like 'str', + 'bool', 'int' etc. + + Args: + func (Callable): Function to wrap. + *args: Arguments passed to the wrapped function. + **kwargs: Keyword arguments passed to the wrapped function. 
+ """ + + def __init__(self, func, *args, **kwargs): + self._func_ref = _get_func_ref(func) + self._args = args + self._kwargs = kwargs + + def __call__(self, *args, **kwargs): + func = self._func_ref() + if func is None: + return + + new_args = tuple(list(self._args) + list(args)) + new_kwargs = dict(self._kwargs) + new_kwargs.update(kwargs) + return func(*new_args, **new_kwargs) + + def get_func(self): + """Get wrapped function. + + Returns: + Union[Callable, None]: Wrapped function or None if it was + destroyed. + """ + + return self._func_ref() + + def is_valid(self): + """Check if wrapped function is still valid. + + Returns: + bool: Is wrapped function still valid. + """ + + return self._func_ref() is not None + + def validate_signature(self, *args, **kwargs): + """Validate if passed arguments are supported by wrapped function. + + Returns: + bool: Are passed arguments supported by wrapped function. + """ + + func = self._func_ref() + if func is None: + return False + + new_args = tuple(list(self._args) + list(args)) + new_kwargs = dict(self._kwargs) + new_kwargs.update(kwargs) + return is_func_signature_supported( + func, *new_args, **new_kwargs + ) + + class EventCallback(object): """Callback registered to a topic. @@ -34,20 +141,37 @@ class EventCallback(object): or none arguments. When 1 argument is expected then the processed 'Event' object is passed in. - The registered callbacks don't keep function in memory so it is not - possible to store lambda function as callback. + The callbacks are validated against their reference counter, that is + achieved using 'weakref' module. That means that the callback must + be stored in memory somewhere. e.g. lambda functions are not + supported as valid callback. + + You can use 'weakref_partial' functions. In that case is partial object + stored in the callback object and reference counter is checked for + the wrapped function. Args: - topic(str): Topic which will be listened. - func(func): Callback to a topic. 
+ topic (str): Topic which will be listened. + func (Callable): Callback to a topic. + order (Union[int, None]): Order of callback. Lower number means higher + priority. Raises: TypeError: When passed function is not a callable object. """ - def __init__(self, topic, func): + def __init__(self, topic, func, order): + if not callable(func): + raise TypeError(( + "Registered callback is not callable. \"{}\"" + ).format(str(func))) + + self._validate_order(order) + self._log = None self._topic = topic + self._order = order + self._enabled = True # Replace '*' with any character regex and escape rest of text # - when callback is registered for '*' topic it will receive all # events @@ -63,37 +187,38 @@ class EventCallback(object): topic_regex = re.compile(topic_regex_str) self._topic_regex = topic_regex - # Convert callback into references - # - deleted functions won't cause crashes - if inspect.ismethod(func): - func_ref = WeakMethod(func) - elif callable(func): - func_ref = weakref.ref(func) + # Callback function prep + if isinstance(func, weakref_partial): + partial_func = func + (name, path) = _get_func_info(func.get_func()) + func_ref = None + expect_args = partial_func.validate_signature("fake") + expect_kwargs = partial_func.validate_signature(event="fake") + else: - raise TypeError(( - "Registered callback is not callable. 
\"{}\"" - ).format(str(func)) + partial_func = None + (name, path) = _get_func_info(func) + # Convert callback into references + # - deleted functions won't cause crashes + func_ref = _get_func_ref(func) - # Collect function name and path to file for logging - func_name = func.__name__ - func_path = os.path.abspath(inspect.getfile(func)) - - # Get expected arguments from function spec - # - positional arguments are always preferred - expect_args = is_func_signature_supported(func, "fake") - expect_kwargs = is_func_signature_supported(func, event="fake") + # Get expected arguments from function spec + # - positional arguments are always preferred + expect_args = is_func_signature_supported(func, "fake") + expect_kwargs = is_func_signature_supported(func, event="fake") self._func_ref = func_ref - self._func_name = func_name - self._func_path = func_path + self._partial_func = partial_func + self._ref_is_valid = True self._expect_args = expect_args self._expect_kwargs = expect_kwargs - self._ref_valid = func_ref is not None - self._enabled = True + + self._name = name + self._path = path def __repr__(self): return "< {} - {} > {}".format( - self.__class__.__name__, self._func_name, self._func_path + self.__class__.__name__, self._name, self._path ) @property @@ -104,32 +229,83 @@ @property def is_ref_valid(self): - return self._ref_valid + """ + + Returns: + bool: Is reference to callback valid. + """ + + self._validate_ref() + return self._ref_is_valid def validate_ref(self): - if not self._ref_valid: - return + """Validate if reference to callback is valid. - callback = self._func_ref() - if not callback: - self._ref_valid = False + Deprecated: + Reference is now always checked live via 'is_ref_valid'. + """ + + # Trigger validation by getting 'is_ref_valid' + _ = self.is_ref_valid @property def enabled(self): - """Is callback enabled.""" + """Is callback enabled. + + Returns: + bool: Is callback enabled. 
+ """ + return self._enabled def set_enabled(self, enabled): - """Change if callback is enabled.""" + """Change if callback is enabled. + + Args: + enabled (bool): Change enabled state of the callback. + """ + self._enabled = enabled def deregister(self): """Calling this function will cause that callback will be removed.""" - # Fake reference - self._ref_valid = False + + self._ref_is_valid = False + self._partial_func = None + self._func_ref = None + + def get_order(self): + """Get callback order. + + Returns: + Union[int, None]: Callback order. + """ + + return self._order + + def set_order(self, order): + """Change callback order. + + Args: + order (Union[int, None]): Order of callback. Lower number means + higher priority. + """ + + self._validate_order(order) + self._order = order + + order = property(get_order, set_order) def topic_matches(self, topic): - """Check if event topic matches callback's topic.""" + """Check if event topic matches callback's topic. + + Args: + topic (str): Topic name. + + Returns: + bool: Topic matches callback's topic. + """ + return self._topic_regex.match(topic) def process_event(self, event): @@ -139,36 +315,69 @@ class EventCallback(object): event(Event): Event that was triggered. 
""" - # Skip if callback is not enabled or has invalid reference - if not self._ref_valid or not self._enabled: + # Skip if callback is not enabled + if not self._enabled: return - # Get reference - callback = self._func_ref() - # Check if reference is valid or callback's topic matches the event - if not callback: - # Change state if is invalid so the callback is removed - self._ref_valid = False + # Get reference and skip if is not available + callback = self._get_callback() + if callback is None: + return - elif self.topic_matches(event.topic): - # Try execute callback - try: - if self._expect_args: - callback(event) + if not self.topic_matches(event.topic): + return - elif self._expect_kwargs: - callback(event=event) + # Try to execute callback + try: + if self._expect_args: + callback(event) - else: - callback() + elif self._expect_kwargs: + callback(event=event) - except Exception: - self.log.warning( - "Failed to execute event callback {}".format( - str(repr(self)) - ), - exc_info=True - ) + else: + callback() + + except Exception: + self.log.warning( + "Failed to execute event callback {}".format( + str(repr(self)) + ), + exc_info=True + ) + + def _validate_order(self, order): + if isinstance(order, int): + return + + raise TypeError( + "Expected type 'int' got '{}'.".format(str(type(order))) + ) + + def _get_callback(self): + if self._partial_func is not None: + return self._partial_func + + if self._func_ref is not None: + return self._func_ref() + return None + + def _validate_ref(self): + if self._ref_is_valid is False: + return + + if self._func_ref is not None: + self._ref_is_valid = self._func_ref() is not None + + elif self._partial_func is not None: + self._ref_is_valid = self._partial_func.is_valid() + + else: + self._ref_is_valid = False + + if not self._ref_is_valid: + self._func_ref = None + self._partial_func = None # Inherit from 'object' for Python 2 hosts @@ -282,30 +491,39 @@ class Event(object): class EventSystem(object): """Encapsulate 
event handling into an object. - System wraps registered callbacks and triggered events into single object - so it is possible to create mutltiple independent systems that have their + System wraps registered callbacks and triggered events into single object, + so it is possible to create multiple independent systems that have their topics and callbacks. - + Callbacks are stored by order of their registration, but it is possible to + manually define order of callbacks using 'order' argument within + 'add_callback'. """ + default_order = 100 + def __init__(self): self._registered_callbacks = [] - def add_callback(self, topic, callback): + def add_callback(self, topic, callback, order=None): """Register callback in event system. Args: topic (str): Topic for EventCallback. - callback (Callable): Function or method that will be called - when topic is triggered. + callback (Union[Callable, weakref_partial]): Function or method + that will be called when topic is triggered. + order (Optional[int]): Order of callback. Lower number means + higher priority. Returns: EventCallback: Created callback object which can be used to stop listening. """ - callback = EventCallback(topic, callback) + if order is None: + order = self.default_order + + callback = EventCallback(topic, callback, order) self._registered_callbacks.append(callback) return callback @@ -341,22 +559,6 @@ class EventSystem(object): event.emit() return event - def _process_event(self, event): - """Process event topic and trigger callbacks. - - Args: - event (Event): Prepared event with topic and data. - """ - - invalid_callbacks = [] - for callback in self._registered_callbacks: - callback.process_event(event) - if not callback.is_ref_valid: - invalid_callbacks.append(callback) - - for callback in invalid_callbacks: - self._registered_callbacks.remove(callback) - def emit_event(self, event): """Emit event object. 
@@ -366,6 +568,21 @@ class EventSystem(object): self._process_event(event) + def _process_event(self, event): + """Process event topic and trigger callbacks. + + Args: + event (Event): Prepared event with topic and data. + """ + + callbacks = tuple(sorted( + self._registered_callbacks, key=lambda x: x.order + )) + for callback in callbacks: + callback.process_event(event) + if not callback.is_ref_valid: + self._registered_callbacks.remove(callback) + class QueuedEventSystem(EventSystem): """Events are automatically processed in queue. diff --git a/openpype/lib/python_module_tools.py b/openpype/lib/python_module_tools.py index bedf19562d..4f9eb7f667 100644 --- a/openpype/lib/python_module_tools.py +++ b/openpype/lib/python_module_tools.py @@ -269,7 +269,7 @@ def is_func_signature_supported(func, *args, **kwargs): True Args: - func (function): A function where the signature should be tested. + func (Callable): A function where the signature should be tested. *args (Any): Positional arguments for function signature. **kwargs (Any): Keyword arguments for function signature. 
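The additions to `openpype/lib/events.py` above combine two mechanisms: weakly referenced callbacks (`weakref_partial`, which fails silently once the wrapped function is garbage collected) and order-aware dispatch (`order`, with `default_order = 100` and lower numbers running first). The sketch below demonstrates both behaviours with simplified stand-ins; `TinyEventSystem` and `Listener` are illustrative toys, not the real `EventSystem`/`EventCallback` classes from the diff:

```python
import gc
import inspect
import weakref


class weakref_partial:
    """Minimal stand-in for the 'weakref_partial' added in events.py."""

    def __init__(self, func, *args, **kwargs):
        # Bound methods need 'WeakMethod'; plain functions a plain ref
        if inspect.ismethod(func):
            self._func_ref = weakref.WeakMethod(func)
        else:
            self._func_ref = weakref.ref(func)
        self._args = args
        self._kwargs = kwargs

    def is_valid(self):
        return self._func_ref() is not None

    def __call__(self, *args, **kwargs):
        func = self._func_ref()
        if func is None:
            # Fail silently once the wrapped function was collected
            return None
        new_kwargs = dict(self._kwargs)
        new_kwargs.update(kwargs)
        return func(*(self._args + args), **new_kwargs)


class TinyEventSystem:
    """Toy model of the order-aware dispatch; not the real EventSystem."""

    default_order = 100

    def __init__(self):
        self._callbacks = []

    def add_callback(self, callback, order=None):
        if order is None:
            order = self.default_order
        self._callbacks.append((order, callback))

    def emit(self, topic):
        # Lower 'order' runs first; Python's sort is stable, so equal
        # orders keep their registration order.
        for _order, callback in sorted(
            self._callbacks, key=lambda item: item[0]
        ):
            callback(topic)


class Listener:
    def __init__(self, log):
        self._log = log

    def on_event(self, prefix, topic):
        self._log.append("{}:{}".format(prefix, topic))


log = []
listener = Listener(log)
system = TinyEventSystem()
system.add_callback(weakref_partial(listener.on_event, "late"), order=200)
system.add_callback(weakref_partial(listener.on_event, "early"), order=0)
system.emit("workfile.saved")
print(log)  # ['early:workfile.saved', 'late:workfile.saved']

del listener
gc.collect()
system.emit("workfile.saved")  # dead callbacks now fail silently
print(log)  # unchanged
```

Note the warning in the diff still applies here: `*args`/`**kwargs` are held strongly by the partial, so passing a heavyweight object as an argument would keep it alive even though the function itself is only weakly referenced.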
diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py index 187feb9b1a..002dfa5992 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -464,7 +464,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin, self.log.info("Submitted job to Deadline: {}.".format(job_id)) # TODO: Find a way that's more generic and not render type specific - if "exportJob" in instance.data: + if instance.data.get("splitRender"): self.log.info("Splitting export and render in two jobs") self.log.info("Export job id: %s", job_id) render_job_info = self.get_job_info(dependency_job_ids=[job_id]) diff --git a/openpype/modules/deadline/plugins/publish/collect_pools.py b/openpype/modules/deadline/plugins/publish/collect_pools.py index a25b149f11..9ee079b892 100644 --- a/openpype/modules/deadline/plugins/publish/collect_pools.py +++ b/openpype/modules/deadline/plugins/publish/collect_pools.py @@ -1,7 +1,4 @@ # -*- coding: utf-8 -*- -"""Collect Deadline pools. Choose default one from Settings - -""" import pyblish.api from openpype.lib import TextDef from openpype.pipeline.publish import OpenPypePyblishPluginMixin @@ -9,11 +6,35 @@ from openpype.pipeline.publish import OpenPypePyblishPluginMixin class CollectDeadlinePools(pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin): - """Collect pools from instance if present, from Setting otherwise.""" + """Collect pools from instance or Publisher attributes, from Settings + otherwise. + + Pools are used to control which Deadline workers can render the job. + + Pools might be set: + - directly on the instance (set directly in DCC) + - from Publisher attributes + - from defaults from Settings. + + Publisher attributes could be shown even for instances that should be + rendered locally, as visibility is driven by the product type of the + instance (which will be `render` most likely). 
+ (Might be resolved in the future and class attribute 'families' should + be cleaned up.) + + """ order = pyblish.api.CollectorOrder + 0.420 label = "Collect Deadline Pools" - families = ["rendering", + hosts = ["aftereffects", + "fusion", + "harmony", + "nuke", + "maya", + "max"] + + families = ["render", + "rendering", "render.farm", "renderFarm", "renderlayer", @@ -30,7 +51,6 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin, cls.secondary_pool = settings.get("secondary_pool", None) def process(self, instance): - attr_values = self.get_attr_values_from_data(instance.data) if not instance.data.get("primaryPool"): instance.data["primaryPool"] = ( @@ -60,8 +80,12 @@ class CollectDeadlinePools(pyblish.api.InstancePlugin, return [ TextDef("primaryPool", label="Primary Pool", - default=cls.primary_pool), + default=cls.primary_pool, + tooltip="Deadline primary pool, " + "applicable for farm rendering"), TextDef("secondaryPool", label="Secondary Pool", - default=cls.secondary_pool) + default=cls.secondary_pool, + tooltip="Deadline secondary pool, " + "applicable for farm rendering") ] diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index 0c75f632cb..bf7fb45a8b 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -15,6 +15,7 @@ from openpype.lib import ( NumberDef ) + @attr.s class DeadlinePluginInfo(): SceneFile = attr.ib(default=None) @@ -41,6 +42,12 @@ class VrayRenderPluginInfo(): SeparateFilesPerFrame = attr.ib(default=True) +@attr.s +class RedshiftRenderPluginInfo(): + SceneFile = attr.ib(default=None) + Version = attr.ib(default=None) + + class HoudiniSubmitDeadline( abstract_submit_deadline.AbstractSubmitDeadline, OpenPypePyblishPluginMixin @@ -124,7 +131,7 @@ class HoudiniSubmitDeadline( # Whether Deadline render 
submission is being split in two # (extract + render) - split_render_job = instance.data["exportJob"] + split_render_job = instance.data.get("splitRender") # If there's some dependency job ids we can assume this is a render job # and not an export job @@ -132,18 +139,21 @@ class HoudiniSubmitDeadline( if dependency_job_ids: is_export_job = False + job_type = "[RENDER]" if split_render_job and not is_export_job: # Convert from family to Deadline plugin name # i.e., arnold_rop -> Arnold plugin = instance.data["family"].replace("_rop", "").capitalize() else: plugin = "Houdini" + if split_render_job: + job_type = "[EXPORT IFD]" job_info = DeadlineJobInfo(Plugin=plugin) filepath = context.data["currentFile"] filename = os.path.basename(filepath) - job_info.Name = "{} - {}".format(filename, instance.name) + job_info.Name = "{} - {} {}".format(filename, instance.name, job_type) job_info.BatchName = filename job_info.UserName = context.data.get( @@ -259,6 +269,25 @@ class HoudiniSubmitDeadline( plugin_info = VrayRenderPluginInfo( InputFilename=instance.data["ifdFile"], ) + elif family == "redshift_rop": + plugin_info = RedshiftRenderPluginInfo( + SceneFile=instance.data["ifdFile"] + ) + # Note: To use different versions of Redshift on Deadline + # set the `REDSHIFT_VERSION` env variable in the Tools + # settings in the AYON Application plugin. You will also + # need to set that version in `Redshift.param` file + # of the Redshift Deadline plugin: + # [Redshift_Executable_*] + # where * is the version number. 
+ if os.getenv("REDSHIFT_VERSION"): + plugin_info.Version = os.getenv("REDSHIFT_VERSION") + else: + self.log.warning(( + "REDSHIFT_VERSION env variable is not set" + " - using version configured in Deadline" + )) + else: self.log.error( "Family '%s' not supported yet to split render job", diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index 26a605a744..5591db151a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -231,7 +231,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" # Adding file dependencies. - if self.asset_dependencies: + if not bool(os.environ.get("IS_TEST")) and self.asset_dependencies: dependencies = instance.context.data["fileDependencies"] for dependency in dependencies: job_info.AssetDependency += dependency @@ -570,7 +570,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline, job_info = copy.deepcopy(self.job_info) - if self.asset_dependencies: + if not bool(os.environ.get("IS_TEST")) and self.asset_dependencies: # Asset dependency to wait for at least the scene file to sync. 
job_info.AssetDependency += self.scene_path diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index c9019b496b..04ce2b3433 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -89,7 +89,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, """ - label = "Submit image sequence jobs to Deadline or Muster" + label = "Submit Image Publishing job to Deadline" order = pyblish.api.IntegratorOrder + 0.2 icon = "tractor" @@ -297,7 +297,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, job_index)] = assembly_id # noqa: E501 job_index += 1 elif instance.data.get("bakingSubmissionJobs"): - self.log.info("Adding baking submission jobs as dependencies...") + self.log.info( + "Adding baking submission jobs as dependencies..." + ) job_index = 0 for assembly_id in instance.data["bakingSubmissionJobs"]: payload["JobInfo"]["JobDependency{}".format( @@ -582,16 +584,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, ''' - render_job = None - submission_type = "" - if instance.data.get("toBeRenderedOn") == "deadline": - render_job = instance.data.pop("deadlineSubmissionJob", None) - submission_type = "deadline" - - if instance.data.get("toBeRenderedOn") == "muster": - render_job = instance.data.pop("musterSubmissionJob", None) - submission_type = "muster" - + render_job = instance.data.pop("deadlineSubmissionJob", None) if not render_job and instance.data.get("tileRendering") is False: raise AssertionError(("Cannot continue without valid Deadline " "or Muster submission.")) @@ -624,21 +617,19 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, "FTRACK_SERVER": os.environ.get("FTRACK_SERVER"), } - deadline_publish_job_id = None - if submission_type == "deadline": - # get default deadline webservice url from deadline module - self.deadline_url = 
instance.context.data["defaultDeadline"] - # if custom one is set in instance, use that - if instance.data.get("deadlineUrl"): - self.deadline_url = instance.data.get("deadlineUrl") - assert self.deadline_url, "Requires Deadline Webservice URL" + # get default deadline webservice url from deadline module + self.deadline_url = instance.context.data["defaultDeadline"] + # if custom one is set in instance, use that + if instance.data.get("deadlineUrl"): + self.deadline_url = instance.data.get("deadlineUrl") + assert self.deadline_url, "Requires Deadline Webservice URL" - deadline_publish_job_id = \ - self._submit_deadline_post_job(instance, render_job, instances) + deadline_publish_job_id = \ + self._submit_deadline_post_job(instance, render_job, instances) - # Inject deadline url to instances. - for inst in instances: - inst["deadlineUrl"] = self.deadline_url + # Inject deadline url to instances. + for inst in instances: + inst["deadlineUrl"] = self.deadline_url # publish job file publish_job = { @@ -664,15 +655,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin, if audio_file and os.path.isfile(audio_file): publish_job.update({"audio": audio_file}) - # pass Ftrack credentials in case of Muster - if submission_type == "muster": - ftrack = { - "FTRACK_API_USER": os.environ.get("FTRACK_API_USER"), - "FTRACK_API_KEY": os.environ.get("FTRACK_API_KEY"), - "FTRACK_SERVER": os.environ.get("FTRACK_SERVER"), - } - publish_job.update({"ftrack": ftrack}) - metadata_path, rootless_metadata_path = \ create_metadata_path(instance, anatomy) diff --git a/openpype/modules/ftrack/event_handlers_user/action_djvview.py b/openpype/modules/ftrack/event_handlers_user/action_djvview.py index 334519b4bb..cc37faacf2 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_djvview.py +++ b/openpype/modules/ftrack/event_handlers_user/action_djvview.py @@ -13,7 +13,7 @@ class DJVViewAction(BaseAction): description = "DJV View Launcher" icon = statics_icon("app_icons", 
"djvView.png") - type = 'Application' + type = "Application" allowed_types = [ "cin", "dpx", "avi", "dv", "gif", "flv", "mkv", "mov", "mpg", "mpeg", @@ -60,7 +60,7 @@ class DJVViewAction(BaseAction): return False def interface(self, session, entities, event): - if event['data'].get('values', {}): + if event["data"].get("values", {}): return entity = entities[0] @@ -70,32 +70,32 @@ class DJVViewAction(BaseAction): if entity_type == "assetversion": if ( entity[ - 'components' - ][0]['file_type'][1:] in self.allowed_types + "components" + ][0]["file_type"][1:] in self.allowed_types ): versions.append(entity) else: master_entity = entity if entity_type == "task": - master_entity = entity['parent'] + master_entity = entity["parent"] - for asset in master_entity['assets']: - for version in asset['versions']: + for asset in master_entity["assets"]: + for version in asset["versions"]: # Get only AssetVersion of selected task if ( entity_type == "task" and - version['task']['id'] != entity['id'] + version["task"]["id"] != entity["id"] ): continue # Get only components with allowed type - filetype = version['components'][0]['file_type'] + filetype = version["components"][0]["file_type"] if filetype[1:] in self.allowed_types: versions.append(version) if len(versions) < 1: return { - 'success': False, - 'message': 'There are no Asset Versions to open.' + "success": False, + "message": "There are no Asset Versions to open." } # TODO sort them (somehow?) 
@@ -134,68 +134,68 @@ class DJVViewAction(BaseAction): last_available = None select_value = None for version in versions: - for component in version['components']: + for component in version["components"]: label = base_label.format( - str(version['version']).zfill(3), - version['asset']['type']['name'], - component['name'] + str(version["version"]).zfill(3), + version["asset"]["type"]["name"], + component["name"] ) try: location = component[ - 'component_locations' - ][0]['location'] + "component_locations" + ][0]["location"] file_path = location.get_filesystem_path(component) except Exception: file_path = component[ - 'component_locations' - ][0]['resource_identifier'] + "component_locations" + ][0]["resource_identifier"] if os.path.isdir(os.path.dirname(file_path)): last_available = file_path - if component['name'] == default_component: + if component["name"] == default_component: select_value = file_path version_items.append( - {'label': label, 'value': file_path} + {"label": label, "value": file_path} ) if len(version_items) == 0: return { - 'success': False, - 'message': ( - 'There are no Asset Versions with accessible path.' + "success": False, + "message": ( + "There are no Asset Versions with accessible path." 
) } item = { - 'label': 'Items to view', - 'type': 'enumerator', - 'name': 'path', - 'data': sorted( + "label": "Items to view", + "type": "enumerator", + "name": "path", + "data": sorted( version_items, - key=itemgetter('label'), + key=itemgetter("label"), reverse=True ) } if select_value is not None: - item['value'] = select_value + item["value"] = select_value else: - item['value'] = last_available + item["value"] = last_available items.append(item) - return {'items': items} + return {"items": items} def launch(self, session, entities, event): """Callback method for DJVView action.""" # Launching application - event_data = event["data"] - if "values" not in event_data: + event_values = event["data"].get("values") + if not event_values: return - djv_app_name = event_data["djv_app_name"] - app = self.applicaion_manager.applications.get(djv_app_name) + djv_app_name = event_values["djv_app_name"] + app = self.application_manager.applications.get(djv_app_name) executable = None if app is not None: executable = app.find_executable() @@ -206,18 +206,21 @@ class DJVViewAction(BaseAction): "message": "Couldn't find DJV executable." 
} - filpath = os.path.normpath(event_data["values"]["path"]) + filpath = os.path.normpath(event_values["path"]) cmd = [ # DJV path - executable, + str(executable), # PATH TO COMPONENT filpath ] try: # Run DJV with these commands - subprocess.Popen(cmd) + _process = subprocess.Popen(cmd) + # Keep process in memory for some time + time.sleep(0.1) + except FileNotFoundError: return { "success": False, diff --git a/openpype/modules/ftrack/lib/custom_attributes.py b/openpype/modules/ftrack/lib/custom_attributes.py index 3e40bb02f2..76c7bcd403 100644 --- a/openpype/modules/ftrack/lib/custom_attributes.py +++ b/openpype/modules/ftrack/lib/custom_attributes.py @@ -66,7 +66,7 @@ def get_openpype_attr(session, split_hierarchical=True, query_keys=None): "select {}" " from CustomAttributeConfiguration" # Kept `pype` for Backwards Compatibility - " where group.name in (\"pype\", \"{}\")" + " where group.name in (\"pype\", \"ayon\", \"{}\")" ).format(", ".join(query_keys), CUST_ATTR_GROUP) all_avalon_attr = session.query(cust_attrs_query).all() for cust_attr in all_avalon_attr: diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py index a3e6bc25c5..4b1307f9f0 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py @@ -352,7 +352,8 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin): # add extended name if any if ( - not self.keep_first_subset_name_for_review + multiple_reviewable + and not self.keep_first_subset_name_for_review and extended_asset_name ): other_item["asset_data"]["name"] = extended_asset_name diff --git a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py index a1aa7c0daa..68a31035f6 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py 
+++ b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py @@ -21,7 +21,7 @@ def get_pype_attr(session, split_hierarchical=True): "select id, entity_type, object_type_id, is_hierarchical, default" " from CustomAttributeConfiguration" # Kept `pype` for Backwards Compatibility - " where group.name in (\"pype\", \"{}\")" + " where group.name in (\"pype\", \"ayon\", \"{}\")" ).format(CUST_ATTR_GROUP) all_avalon_attr = session.query(cust_attrs_query).all() for cust_attr in all_avalon_attr: diff --git a/openpype/modules/python_console_interpreter/window/widgets.py b/openpype/modules/python_console_interpreter/window/widgets.py index 28950f8369..d046c0de50 100644 --- a/openpype/modules/python_console_interpreter/window/widgets.py +++ b/openpype/modules/python_console_interpreter/window/widgets.py @@ -354,7 +354,7 @@ class PythonInterpreterWidget(QtWidgets.QWidget): default_width = 1000 default_height = 600 - def __init__(self, parent=None): + def __init__(self, allow_save_registry=True, parent=None): super(PythonInterpreterWidget, self).__init__(parent) self.setWindowTitle("{} Console".format( @@ -414,6 +414,8 @@ class PythonInterpreterWidget(QtWidgets.QWidget): self._first_show = True self._splitter_size_ratio = None + self._allow_save_registry = allow_save_registry + self._registry_saved = True self._init_from_registry() @@ -457,6 +459,11 @@ class PythonInterpreterWidget(QtWidgets.QWidget): pass def save_registry(self): + # Skip if the window was never shown or saving is disabled + if not self._allow_save_registry or self._registry_saved: + return + + self._registry_saved = True setting_registry = PythonInterpreterRegistry() setting_registry.set_item("width", self.width()) @@ -650,6 +657,7 @@ class PythonInterpreterWidget(QtWidgets.QWidget): def showEvent(self, event): self._line_check_timer.start() + self._registry_saved = False super(PythonInterpreterWidget, self).showEvent(event) # First show setup if self._first_show: diff --git a/openpype/pipeline/publish/lib.py 
b/openpype/pipeline/publish/lib.py index 4ea2f932f1..40cb94e2bf 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -58,41 +58,13 @@ def get_template_name_profiles( if not project_settings: project_settings = get_project_settings(project_name) - profiles = ( + return copy.deepcopy( project_settings ["global"] ["tools"] ["publish"] ["template_name_profiles"] ) - if profiles: - return copy.deepcopy(profiles) - - # Use legacy approach for cases new settings are not filled yet for the - # project - legacy_profiles = ( - project_settings - ["global"] - ["publish"] - ["IntegrateAssetNew"] - ["template_name_profiles"] - ) - if legacy_profiles: - if not logger: - logger = Logger.get_logger("get_template_name_profiles") - - logger.warning(( - "Project \"{}\" is using legacy access to publish template." - " It is recommended to move settings to new location" - " 'project_settings/global/tools/publish/template_name_profiles'." - ).format(project_name)) - - # Replace "tasks" key with "task_names" - profiles = [] - for profile in copy.deepcopy(legacy_profiles): - profile["task_names"] = profile.pop("tasks", []) - profiles.append(profile) - return profiles def get_hero_template_name_profiles( @@ -121,36 +93,13 @@ def get_hero_template_name_profiles( if not project_settings: project_settings = get_project_settings(project_name) - profiles = ( + return copy.deepcopy( project_settings ["global"] ["tools"] ["publish"] ["hero_template_name_profiles"] ) - if profiles: - return copy.deepcopy(profiles) - - # Use legacy approach for cases new settings are not filled yet for the - # project - legacy_profiles = copy.deepcopy( - project_settings - ["global"] - ["publish"] - ["IntegrateHeroVersion"] - ["template_name_profiles"] - ) - if legacy_profiles: - if not logger: - logger = Logger.get_logger("get_hero_template_name_profiles") - - logger.warning(( - "Project \"{}\" is using legacy access to hero publish template." 
- " It is recommended to move settings to new location" - " 'project_settings/global/tools/publish/" - "hero_template_name_profiles'." - ).format(project_name)) - return legacy_profiles def get_publish_template_name( diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index 9f0f7fe7f3..59dc6b5c64 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -54,7 +54,6 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): # permissions error on files (files were used or user didn't have perms) # *but all other plugins must be sucessfully completed - template_name_profiles = [] _default_template_name = "hero" def process(self, instance): diff --git a/openpype/plugins/publish/integrate_thumbnail_ayon.py b/openpype/plugins/publish/integrate_thumbnail_ayon.py index fc77a803fc..e56c567667 100644 --- a/openpype/plugins/publish/integrate_thumbnail_ayon.py +++ b/openpype/plugins/publish/integrate_thumbnail_ayon.py @@ -106,11 +106,8 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin): continue # Find thumbnail path on instance - thumbnail_source = instance.data.get("thumbnailSource") - thumbnail_path = instance.data.get("thumbnailPath") thumbnail_path = ( - thumbnail_source - or thumbnail_path + instance.data.get("thumbnailPath") or self._get_instance_thumbnail_path(published_repres) ) if thumbnail_path: diff --git a/openpype/resources/icons/folder-favorite.png b/openpype/resources/icons/folder-favorite.png index 198b289e9e..65f04d8c86 100644 Binary files a/openpype/resources/icons/folder-favorite.png and b/openpype/resources/icons/folder-favorite.png differ diff --git a/openpype/resources/icons/folder-favorite2.png b/openpype/resources/icons/folder-favorite2.png deleted file mode 100644 index 91bc3f0fbe..0000000000 Binary files a/openpype/resources/icons/folder-favorite2.png and /dev/null differ diff --git 
a/openpype/resources/icons/folder-favorite3.png b/openpype/resources/icons/folder-favorite3.png deleted file mode 100644 index ce1e6d7171..0000000000 Binary files a/openpype/resources/icons/folder-favorite3.png and /dev/null differ diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py index fa231cd047..0a78e33c1f 100644 --- a/openpype/scripts/ocio_wrapper.py +++ b/openpype/scripts/ocio_wrapper.py @@ -21,7 +21,7 @@ Providing functionality: import click import json -from pathlib2 import Path +from pathlib import Path import PyOpenColorIO as ocio diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py index 222ce68c0f..a6d90d1cf0 100644 --- a/openpype/settings/ayon_settings.py +++ b/openpype/settings/ayon_settings.py @@ -478,15 +478,6 @@ def _convert_maya_project_settings(ayon_settings, output): for item in ayon_maya["ext_mapping"] } - # Publish UI filters - new_filters = {} - for item in ayon_maya["filters"]: - new_filters[item["name"]] = { - subitem["name"]: subitem["value"] - for subitem in item["value"] - } - ayon_maya["filters"] = new_filters - # Maya dirmap ayon_maya_dirmap = ayon_maya.pop("maya_dirmap") ayon_maya_dirmap_path = ayon_maya_dirmap["paths"] @@ -743,16 +734,6 @@ def _convert_nuke_project_settings(ayon_settings, output): dirmap["paths"][dst_key] = dirmap["paths"].pop(src_key) ayon_nuke["nuke-dirmap"] = dirmap - # --- Filters --- - new_gui_filters = {} - for item in ayon_nuke.pop("filters"): - subvalue = {} - key = item["name"] - for subitem in item["value"]: - subvalue[subitem["name"]] = subitem["value"] - new_gui_filters[key] = subvalue - ayon_nuke["filters"] = new_gui_filters - # --- Load --- ayon_load = ayon_nuke["load"] ayon_load["LoadClip"]["_representations"] = ( @@ -896,7 +877,7 @@ def _convert_hiero_project_settings(ayon_settings, output): _convert_host_imageio(ayon_hiero) new_gui_filters = {} - for item in ayon_hiero.pop("filters"): + for item in ayon_hiero.pop("filters", []): subvalue = 
{} key = item["name"] for subitem in item["value"]: @@ -963,17 +944,6 @@ def _convert_tvpaint_project_settings(ayon_settings, output): _convert_host_imageio(ayon_tvpaint) - filters = {} - for item in ayon_tvpaint["filters"]: - value = item["value"] - try: - value = json.loads(value) - - except ValueError: - value = {} - filters[item["name"]] = value - ayon_tvpaint["filters"] = filters - ayon_publish_settings = ayon_tvpaint["publish"] for plugin_name in ( "ValidateProjectSettings", diff --git a/openpype/settings/defaults/project_settings/max.json b/openpype/settings/defaults/project_settings/max.json index 19c9d10496..d1610610dc 100644 --- a/openpype/settings/defaults/project_settings/max.json +++ b/openpype/settings/defaults/project_settings/max.json @@ -1,4 +1,8 @@ { + "unit_scale_settings": { + "enabled": true, + "scene_unit_scale": "Meters" + }, "imageio": { "activate_host_color_management": true, "ocio_config": { diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 7719a5e255..615000183d 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -436,7 +436,7 @@ "viewTransform": "sRGB gamma" } }, - "mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders/maya\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr \"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\n", + "mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders/maya\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr 
\"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\nworkspace -fr \"autoSave\" \"autosave\";", "ext_mapping": { "model": "ma", "mayaAscii": "ma", @@ -1289,6 +1289,7 @@ "twoSidedLighting": true, "lineAAEnable": true, "multiSample": 8, + "loadTextures": false, "useDefaultMaterial": false, "wireframeOnShaded": false, "xray": false, @@ -1608,14 +1609,5 @@ }, "templated_workfile_build": { "profiles": [] - }, - "filters": { - "preset 1": { - "ValidateNoAnimation": false, - "ValidateShapeDefaultNames": false - }, - "preset 2": { - "ValidateNoAnimation": false - } } } diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 17932c793d..15c2d262e0 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -540,6 +540,5 @@ }, "templated_workfile_build": { "profiles": [] - }, - "filters": {} + } } diff --git a/openpype/settings/defaults/project_settings/tvpaint.json b/openpype/settings/defaults/project_settings/tvpaint.json index fdbd6d5d0f..d03b8b7227 100644 --- a/openpype/settings/defaults/project_settings/tvpaint.json +++ b/openpype/settings/defaults/project_settings/tvpaint.json @@ -107,6 +107,5 @@ "workfile_builder": { "create_first_version": false, "custom_templates": [] - }, - "filters": {} + } } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json index 78cca357a3..e4d4d40ce7 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json @@ -5,6 +5,34 @@ "label": "Max", "is_file": true, "children": [ + { + "key": "unit_scale_settings", + "type": "dict", + "label": "Set Unit Scale", + 
"collapsible": true, + "is_group": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "key": "scene_unit_scale", + "label": "Scene Unit Scale", + "type": "enum", + "multiselection": false, + "defaults": "exr", + "enum_items": [ + {"Millimeters": "mm"}, + {"Centimeters": "cm"}, + {"Meters": "m"}, + {"Kilometers": "km"} + ] + } + ] + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json b/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json index dca955dab4..a6fd742b40 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json @@ -258,10 +258,6 @@ { "type": "schema", "name": "schema_templated_workfile_build" - }, - { - "type": "schema", - "name": "schema_publish_gui_filter" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json index 6b516ddf4a..0b24c8231c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json @@ -291,10 +291,6 @@ { "type": "schema", "name": "schema_templated_workfile_build" - }, - { - "type": "schema", - "name": "schema_publish_gui_filter" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json index e9255f426e..5b2647bc6d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_tvpaint.json @@ -436,10 +436,6 @@ "workfile_builder/builder_on_start", "workfile_builder/profiles" ] - }, - { - "type": "schema", - 
"name": "schema_publish_gui_filter" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index ac2d9e190d..64f292a140 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -1023,49 +1023,6 @@ { "type": "label", "label": "NOTE: Hero publish template profiles settings were moved to Tools/Publish/Hero template name profiles. Please move values there." - }, - { - "type": "list", - "key": "template_name_profiles", - "label": "Template name profiles (DEPRECATED)", - "use_label_wrap": true, - "object_type": { - "type": "dict", - "children": [ - { - "key": "families", - "label": "Families", - "type": "list", - "object_type": "text" - }, - { - "type": "hosts-enum", - "key": "hosts", - "label": "Hosts", - "multiselection": true - }, - { - "key": "task_types", - "label": "Task types", - "type": "task-types-enum" - }, - { - "key": "task_names", - "label": "Task names", - "type": "list", - "object_type": "text" - }, - { - "type": "separator" - }, - { - "type": "text", - "key": "template_name", - "label": "Template name", - "tooltip": "Name of template from Anatomy templates" - } - ] - } } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json index d90527ac8c..76ad9a3ba2 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json @@ -236,6 +236,11 @@ { "type": "splitter" }, + { + "type": "boolean", + "key": "loadTextures", + "label": "Load Textures" + }, { "type": "boolean", "key": "useDefaultMaterial", @@ -908,6 +913,12 @@ { "type": "splitter" }, 
+ { + "type": "boolean", + "key": "loadTextures", + "label": "Load Textures", + "default": false + }, { "type": "boolean", "key": "useDefaultMaterial", diff --git a/openpype/tools/ayon_workfiles/models/workfiles.py b/openpype/tools/ayon_workfiles/models/workfiles.py index d74a8e164d..f9f910ac8a 100644 --- a/openpype/tools/ayon_workfiles/models/workfiles.py +++ b/openpype/tools/ayon_workfiles/models/workfiles.py @@ -606,7 +606,7 @@ class PublishWorkfilesModel: print("Failed to format workfile path: {}".format(exc)) dirpath, filename = os.path.split(workfile_path) - created_at = arrow.get(repre_entity["createdAt"].to("local")) + created_at = arrow.get(repre_entity["createdAt"]).to("local") return FileItem( dirpath, filename, diff --git a/openpype/tools/sceneinventory/switch_dialog.py b/openpype/tools/sceneinventory/switch_dialog.py index ce2272df57..150e369678 100644 --- a/openpype/tools/sceneinventory/switch_dialog.py +++ b/openpype/tools/sceneinventory/switch_dialog.py @@ -1230,12 +1230,12 @@ class SwitchAssetDialog(QtWidgets.QDialog): version_ids = list() - version_docs_by_parent_id = {} + version_docs_by_parent_id_and_name = collections.defaultdict(dict) for version_doc in version_docs: parent_id = version_doc["parent"] - if parent_id not in version_docs_by_parent_id: - version_ids.append(version_doc["_id"]) - version_docs_by_parent_id[parent_id] = version_doc + version_ids.append(version_doc["_id"]) + name = version_doc["name"] + version_docs_by_parent_id_and_name[parent_id][name] = version_doc hero_version_docs_by_parent_id = {} for hero_version_doc in hero_version_docs: @@ -1293,13 +1293,32 @@ class SwitchAssetDialog(QtWidgets.QDialog): repre_doc = _repres.get(container_repre_name) if not repre_doc: - version_doc = version_docs_by_parent_id[subset_id] - version_id = version_doc["_id"] - repres_by_name = repre_docs_by_parent_id_by_name[version_id] - if selected_representation: - repre_doc = repres_by_name[selected_representation] + version_docs_by_name = 
version_docs_by_parent_id_and_name[ + subset_id + ] + + # If asset or subset are selected for switching, we use latest + # version else we try to keep the current container version. + if ( + selected_asset not in (None, container_asset_name) + or selected_subset not in (None, container_subset_name) + ): + version_name = max(version_docs_by_name) else: - repre_doc = repres_by_name[container_repre_name] + version_name = container_version["name"] + + version_doc = version_docs_by_name[version_name] + version_id = version_doc["_id"] + repres_docs_by_name = repre_docs_by_parent_id_by_name[ + version_id + ] + + if selected_representation: + repres_name = selected_representation + else: + repres_name = container_repre_name + + repre_doc = repres_docs_by_name[repres_name] error = None try: diff --git a/openpype/version.py b/openpype/version.py index e053a8364e..279575d110 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.18.2-nightly.1" +__version__ = "3.18.3-nightly.2" diff --git a/pyproject.toml b/pyproject.toml index e64018498f..ee8e8017e3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.18.1" # OpenPype +version = "3.18.2" # OpenPype description = "Open VFX and Animation pipeline with support." 
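The switch-dialog change above replaces a flat `version_docs_by_parent_id` mapping (one document per subset) with a nested `defaultdict(dict)` keyed first by parent id and then by version name, so the dialog can pick either the latest version (`max` over the names) or keep the container's current one. A small self-contained sketch of that grouping, using simplified stand-in documents rather than real MongoDB version docs:

```python
import collections

# Hypothetical minimal version documents; real ones carry many more keys.
version_docs = [
    {"_id": "v1", "parent": "subsetA", "name": 1},
    {"_id": "v2", "parent": "subsetA", "name": 2},
    {"_id": "v3", "parent": "subsetB", "name": 1},
]

# Group by parent id, then by version name -- mirrors
# `version_docs_by_parent_id_and_name` in the hunk above.
by_parent_and_name = collections.defaultdict(dict)
for doc in version_docs:
    by_parent_and_name[doc["parent"]][doc["name"]] = doc

# Latest version of a subset is simply the max key of its inner dict.
latest_name = max(by_parent_and_name["subsetA"])
print(latest_name)  # 2
print(by_parent_and_name["subsetA"][latest_name]["_id"])  # v2

# Keeping a container's current version is a plain nested lookup.
current = by_parent_and_name["subsetA"][1]
print(current["_id"])  # v1
```

The `defaultdict(dict)` avoids the `if parent_id not in ...` guard the old code needed, at the cost of silently creating empty inner dicts on lookup of unknown parents.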
 authors = ["OpenPype Team "]
 license = "MIT License"
@@ -181,3 +181,8 @@ reportMissingTypeStubs = false
 
 [tool.poetry.extras]
 docs = ["Sphinx", "furo", "sphinxcontrib-napoleon"]
+
+[tool.pydocstyle]
+inherit = false
+convention = "google"
+match = "(?!test_).*\\.py"
diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py
index ef52416369..0c9b9c96ef 100644
--- a/server_addon/core/server/settings/publish_plugins.py
+++ b/server_addon/core/server/settings/publish_plugins.py
@@ -697,13 +697,6 @@ class IntegrateHeroVersionModel(BaseSettingsModel):
     optional: bool = Field(False, title="Optional")
     active: bool = Field(True, title="Active")
     families: list[str] = Field(default_factory=list, title="Families")
-    # TODO remove when removed from client code
-    template_name_profiles: list[IntegrateHeroTemplateNameProfileModel] = (
-        Field(
-            default_factory=list,
-            title="Template name profiles"
-        )
-    )
 
 
 class CleanUpModel(BaseSettingsModel):
@@ -1049,19 +1042,6 @@ DEFAULT_PUBLISH_VALUES = {
             "layout",
             "mayaScene",
             "simpleUnrealTexture"
-        ],
-        "template_name_profiles": [
-            {
-                "product_types": [
-                    "simpleUnrealTexture"
-                ],
-                "hosts": [
-                    "standalonepublisher"
-                ],
-                "task_types": [],
-                "task_names": [],
-                "template_name": "simpleUnrealTextureHero"
-            }
         ]
     },
     "CleanUp": {
diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py
index 1276d0254f..0a8da88258 100644
--- a/server_addon/deadline/server/version.py
+++ b/server_addon/deadline/server/version.py
@@ -1 +1 @@
-__version__ = "0.1.5"
+__version__ = "0.1.6"
diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py
index 6232f7ab18..5635676f6b 100644
--- a/server_addon/houdini/server/version.py
+++ b/server_addon/houdini/server/version.py
@@ -1 +1 @@
-__version__ = "0.2.10"
+__version__ = "0.2.11"
diff --git a/server_addon/max/server/settings/main.py b/server_addon/max/server/settings/main.py
index ea6a11915a..cad6024cf7 100644 --- a/server_addon/max/server/settings/main.py +++ b/server_addon/max/server/settings/main.py @@ -12,6 +12,25 @@ from .publishers import ( ) +def unit_scale_enum(): + """Return enumerator for scene unit scale.""" + return [ + {"label": "mm", "value": "Millimeters"}, + {"label": "cm", "value": "Centimeters"}, + {"label": "m", "value": "Meters"}, + {"label": "km", "value": "Kilometers"} + ] + + +class UnitScaleSettings(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + scene_unit_scale: str = Field( + "Centimeters", + title="Scene Unit Scale", + enum_resolver=unit_scale_enum + ) + + class PRTAttributesModel(BaseSettingsModel): _layout = "compact" name: str = Field(title="Name") @@ -24,6 +43,10 @@ class PointCloudSettings(BaseSettingsModel): class MaxSettings(BaseSettingsModel): + unit_scale_settings: UnitScaleSettings = Field( + default_factory=UnitScaleSettings, + title="Set Unit Scale" + ) imageio: ImageIOSettings = Field( default_factory=ImageIOSettings, title="Color Management (ImageIO)" @@ -46,6 +69,10 @@ class MaxSettings(BaseSettingsModel): DEFAULT_VALUES = { + "unit_scale_settings": { + "enabled": True, + "scene_unit_scale": "Centimeters" + }, "RenderSettings": DEFAULT_RENDER_SETTINGS, "CreateReview": DEFAULT_CREATE_REVIEW_SETTINGS, "PointCloud": { diff --git a/server_addon/max/server/version.py b/server_addon/max/server/version.py index ae7362549b..bbab0242f6 100644 --- a/server_addon/max/server/version.py +++ b/server_addon/max/server/version.py @@ -1 +1 @@ -__version__ = "0.1.3" +__version__ = "0.1.4" diff --git a/server_addon/maya/server/settings/main.py b/server_addon/maya/server/settings/main.py index 62fd12ec8a..3d084312e9 100644 --- a/server_addon/maya/server/settings/main.py +++ b/server_addon/maya/server/settings/main.py @@ -23,23 +23,6 @@ class ExtMappingItemModel(BaseSettingsModel): value: str = Field(title="Extension") -class PublishGUIFilterItemModel(BaseSettingsModel): - _layout = "compact" 
- name: str = Field(title="Name") - value: bool = Field(True, title="Active") - - -class PublishGUIFiltersModel(BaseSettingsModel): - _layout = "compact" - name: str = Field(title="Name") - value: list[PublishGUIFilterItemModel] = Field(default_factory=list) - - @validator("value") - def validate_unique_outputs(cls, value): - ensure_unique_names(value) - return value - - class MayaSettings(BaseSettingsModel): """Maya Project Settings.""" @@ -76,11 +59,8 @@ class MayaSettings(BaseSettingsModel): templated_workfile_build: TemplatedProfilesModel = Field( default_factory=TemplatedProfilesModel, title="Templated Workfile Build Settings") - filters: list[PublishGUIFiltersModel] = Field( - default_factory=list, - title="Publish GUI Filters") - @validator("filters", "ext_mapping") + @validator("ext_mapping") def validate_unique_outputs(cls, value): ensure_unique_names(value) return value @@ -97,7 +77,7 @@ DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join(( 'workspace -fr "renderData" "renderData";', 'workspace -fr "sourceImages" "sourceimages";', 'workspace -fr "fileCache" "cache/nCache";', - 'workspace -fr "autoSave" "autosave"', + 'workspace -fr "autoSave" "autosave";', '', )) @@ -123,20 +103,5 @@ DEFAULT_MAYA_SETTING = { "publish": DEFAULT_PUBLISH_SETTINGS, "load": DEFAULT_LOADERS_SETTING, "workfile_build": DEFAULT_WORKFILE_SETTING, - "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS, - "filters": [ - { - "name": "preset 1", - "value": [ - {"name": "ValidateNoAnimation", "value": False}, - {"name": "ValidateShapeDefaultNames", "value": False}, - ] - }, - { - "name": "preset 2", - "value": [ - {"name": "ValidateNoAnimation", "value": False}, - ] - }, - ] + "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS } diff --git a/server_addon/maya/server/settings/publish_playblast.py b/server_addon/maya/server/settings/publish_playblast.py index acfcaf5988..0abc9f7110 100644 --- a/server_addon/maya/server/settings/publish_playblast.py +++ 
b/server_addon/maya/server/settings/publish_playblast.py @@ -108,6 +108,7 @@ class ViewportOptionsSetting(BaseSettingsModel): True, title="Enable Anti-Aliasing", section="Anti-Aliasing" ) multiSample: int = Field(8, title="Anti Aliasing Samples") + loadTextures: bool = Field(False, title="Load Textures") useDefaultMaterial: bool = Field(False, title="Use Default Material") wireframeOnShaded: bool = Field(False, title="Wireframe On Shaded") xray: bool = Field(False, title="X-Ray") @@ -302,6 +303,7 @@ DEFAULT_PLAYBLAST_SETTING = { "twoSidedLighting": True, "lineAAEnable": True, "multiSample": 8, + "loadTextures": False, "useDefaultMaterial": False, "wireframeOnShaded": False, "xray": False, diff --git a/server_addon/nuke/server/settings/filters.py b/server_addon/nuke/server/settings/filters.py deleted file mode 100644 index 7e2702b3b7..0000000000 --- a/server_addon/nuke/server/settings/filters.py +++ /dev/null @@ -1,19 +0,0 @@ -from pydantic import Field, validator -from ayon_server.settings import BaseSettingsModel, ensure_unique_names - - -class PublishGUIFilterItemModel(BaseSettingsModel): - _layout = "compact" - name: str = Field(title="Name") - value: bool = Field(True, title="Active") - - -class PublishGUIFiltersModel(BaseSettingsModel): - _layout = "compact" - name: str = Field(title="Name") - value: list[PublishGUIFilterItemModel] = Field(default_factory=list) - - @validator("value") - def validate_unique_outputs(cls, value): - ensure_unique_names(value) - return value diff --git a/server_addon/nuke/server/settings/main.py b/server_addon/nuke/server/settings/main.py index cdaaa3a9e2..b6729e7c2f 100644 --- a/server_addon/nuke/server/settings/main.py +++ b/server_addon/nuke/server/settings/main.py @@ -44,7 +44,6 @@ from .workfile_builder import ( from .templated_workfile_build import ( TemplatedWorkfileBuildModel ) -from .filters import PublishGUIFilterItemModel class NukeSettings(BaseSettingsModel): @@ -98,16 +97,6 @@ class NukeSettings(BaseSettingsModel): 
default_factory=TemplatedWorkfileBuildModel ) - filters: list[PublishGUIFilterItemModel] = Field( - default_factory=list - ) - - @validator("filters") - def ensure_unique_names(cls, value): - """Ensure name fields within the lists have unique names.""" - ensure_unique_names(value) - return value - DEFAULT_VALUES = { "general": DEFAULT_GENERAL_SETTINGS, @@ -121,6 +110,5 @@ DEFAULT_VALUES = { "workfile_builder": DEFAULT_WORKFILE_BUILDER_SETTINGS, "templated_workfile_build": { "profiles": [] - }, - "filters": [] + } } diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py index f1380eede2..9cb17e7976 100644 --- a/server_addon/nuke/server/version.py +++ b/server_addon/nuke/server/version.py @@ -1 +1 @@ -__version__ = "0.1.7" +__version__ = "0.1.8" diff --git a/server_addon/openpype/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml index 40da8f6716..d8de9d4d96 100644 --- a/server_addon/openpype/client/pyproject.toml +++ b/server_addon/openpype/client/pyproject.toml @@ -8,17 +8,16 @@ aiohttp_json_rpc = "*" # TVPaint server aiohttp-middlewares = "^2.0.0" wsrpc_aiohttp = "^3.1.1" # websocket server clique = "1.6.*" -gazu = "^0.9.3" -google-api-python-client = "^1.12.8" # sync server google support (should be separate?) jsonschema = "^2.6.0" pymongo = "^3.11.2" log4mongo = "^1.7" -pathlib2= "^2.3.5" # deadline submit publish job only (single place, maybe not needed?) 
pyblish-base = "^1.8.11" -pynput = "^1.7.2" # Timers manager - TODO replace +pynput = "^1.7.2" # Timers manager - TODO remove "Qt.py" = "^1.3.3" qtawesome = "0.7.3" speedcopy = "^2.1" -slack-sdk = "^3.6.0" -pysftp = "^0.2.9" -dropbox = "^11.20.0" + +[ayon.runtimeDependencies] +OpenTimelineIO = "0.14.1" +opencolorio = "2.2.1" +Pillow = "9.5.0" diff --git a/server_addon/photoshop/server/settings/publish_plugins.py b/server_addon/photoshop/server/settings/publish_plugins.py index 2863979ca9..21e7d670f0 100644 --- a/server_addon/photoshop/server/settings/publish_plugins.py +++ b/server_addon/photoshop/server/settings/publish_plugins.py @@ -29,7 +29,7 @@ class ColorCodeMappings(BaseSettingsModel): ) layer_name_regex: list[str] = Field( - "", + default_factory=list, title="Layer name regex" ) diff --git a/server_addon/photoshop/server/version.py b/server_addon/photoshop/server/version.py index d4b9e2d7f3..a242f0e757 100644 --- a/server_addon/photoshop/server/version.py +++ b/server_addon/photoshop/server/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring addon version.""" -__version__ = "0.1.0" +__version__ = "0.1.1" diff --git a/server_addon/tvpaint/server/settings/main.py b/server_addon/tvpaint/server/settings/main.py index 4cd6ac4b1a..102acfaf3d 100644 --- a/server_addon/tvpaint/server/settings/main.py +++ b/server_addon/tvpaint/server/settings/main.py @@ -1,4 +1,4 @@ -from pydantic import Field, validator +from pydantic import Field from ayon_server.settings import ( BaseSettingsModel, ensure_unique_names, @@ -14,23 +14,6 @@ from .publish_plugins import ( ) -class PublishGUIFilterItemModel(BaseSettingsModel): - _layout = "compact" - name: str = Field(title="Name") - value: bool = Field(True, title="Active") - - -class PublishGUIFiltersModel(BaseSettingsModel): - _layout = "compact" - name: str = Field(title="Name") - value: list[PublishGUIFilterItemModel] = Field(default_factory=list) - - @validator("value") - def validate_unique_outputs(cls, 
value): - ensure_unique_names(value) - return value - - class TvpaintSettings(BaseSettingsModel): imageio: TVPaintImageIOModel = Field( default_factory=TVPaintImageIOModel, @@ -52,14 +35,6 @@ class TvpaintSettings(BaseSettingsModel): default_factory=WorkfileBuilderPlugin, title="Workfile Builder" ) - filters: list[PublishGUIFiltersModel] = Field( - default_factory=list, - title="Publish GUI Filters") - - @validator("filters") - def validate_unique_outputs(cls, value): - ensure_unique_names(value) - return value DEFAULT_VALUES = { diff --git a/server_addon/tvpaint/server/version.py b/server_addon/tvpaint/server/version.py index 3dc1f76bc6..485f44ac21 100644 --- a/server_addon/tvpaint/server/version.py +++ b/server_addon/tvpaint/server/version.py @@ -1 +1 @@ -__version__ = "0.1.0" +__version__ = "0.1.1" diff --git a/setup.cfg b/setup.cfg index ead9b25164..f0f754fb24 100644 --- a/setup.cfg +++ b/setup.cfg @@ -16,10 +16,6 @@ max-complexity = 30 [pylint.'MESSAGES CONTROL'] disable = no-member -[pydocstyle] -convention = google -ignore = D107 - [coverage:run] branch = True omit = /tests diff --git a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py index b99db24e75..0d97da6b8b 100644 --- a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py +++ b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects_legacy.py @@ -60,7 +60,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass): name="renderTest_taskMain")) failures.append( - DBAssert.count_of_types(dbcon, "representation", 2)) + DBAssert.count_of_types(dbcon, "representation", 3)) additional_args = {"context.subset": "workfileTest_task", "context.ext": "aep"} diff --git a/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v001.ma 
b/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v001.ma index 2cc87c2f48..8b90e987de 100644 --- a/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v001.ma +++ b/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v001.ma @@ -185,7 +185,7 @@ createNode objectSet -n "modelMain"; addAttr -ci true -sn "attrPrefix" -ln "attrPrefix" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".cbId" -type "string" "60df31e2be2b48bd3695c056:7364ea6776c9"; @@ -296,7 +296,7 @@ createNode objectSet -n "workfileMain"; addAttr -ci true -sn "task" -ln "task" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".hio" yes; diff --git a/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v002.ma b/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v002.ma index 6bd334466a..f2906058cf 100644 --- a/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v002.ma +++ 
b/tests/integration/hosts/maya/test_publish_in_maya/expected/test_project/test_asset/work/test_task/test_project_test_asset_test_task_v002.ma @@ -185,7 +185,7 @@ createNode objectSet -n "modelMain"; addAttr -ci true -sn "attrPrefix" -ln "attrPrefix" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".cbId" -type "string" "60df31e2be2b48bd3695c056:7364ea6776c9"; @@ -296,7 +296,7 @@ createNode objectSet -n "workfileMain"; addAttr -ci true -sn "task" -ln "task" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".hio" yes; diff --git a/tests/integration/hosts/maya/test_publish_in_maya/input/workfile/test_project_test_asset_test_task_v001.ma b/tests/integration/hosts/maya/test_publish_in_maya/input/workfile/test_project_test_asset_test_task_v001.ma index 2cc87c2f48..8b90e987de 100644 --- a/tests/integration/hosts/maya/test_publish_in_maya/input/workfile/test_project_test_asset_test_task_v001.ma +++ b/tests/integration/hosts/maya/test_publish_in_maya/input/workfile/test_project_test_asset_test_task_v001.ma @@ -185,7 +185,7 @@ createNode objectSet -n "modelMain"; addAttr -ci true -sn "attrPrefix" -ln "attrPrefix" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn 
"__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".cbId" -type "string" "60df31e2be2b48bd3695c056:7364ea6776c9"; @@ -296,7 +296,7 @@ createNode objectSet -n "workfileMain"; addAttr -ci true -sn "task" -ln "task" -dt "string"; addAttr -ci true -sn "publish_attributes" -ln "publish_attributes" -dt "string"; addAttr -ci true -sn "creator_attributes" -ln "creator_attributes" -dt "string"; - addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" + addAttr -ci true -sn "__creator_attributes_keys" -ln "__creator_attributes_keys" -dt "string"; setAttr ".ihi" 0; setAttr ".hio" yes; diff --git a/tests/unit/openpype/lib/test_event_system.py b/tests/unit/openpype/lib/test_event_system.py index aa3f929065..b0a011d83e 100644 --- a/tests/unit/openpype/lib/test_event_system.py +++ b/tests/unit/openpype/lib/test_event_system.py @@ -1,4 +1,9 @@ -from openpype.lib.events import EventSystem, QueuedEventSystem +from functools import partial +from openpype.lib.events import ( + EventSystem, + QueuedEventSystem, + weakref_partial, +) def test_default_event_system(): @@ -81,3 +86,93 @@ def test_manual_event_system_queue(): assert output == expected_output, ( "Callbacks were not called in correct order") + + +def test_unordered_events(): + """ + Validate if callbacks are triggered in order of their register. 
+ """ + + result = [] + + def function_a(): + result.append("A") + + def function_b(): + result.append("B") + + def function_c(): + result.append("C") + + # Without order + event_system = QueuedEventSystem() + event_system.add_callback("test", function_a) + event_system.add_callback("test", function_b) + event_system.add_callback("test", function_c) + event_system.emit("test", {}, "test") + + assert result == ["A", "B", "C"] + + +def test_ordered_events(): + """ + Validate if callbacks are triggered by their order and order + of their register. + """ + result = [] + + def function_a(): + result.append("A") + + def function_b(): + result.append("B") + + def function_c(): + result.append("C") + + def function_d(): + result.append("D") + + def function_e(): + result.append("E") + + def function_f(): + result.append("F") + + # Without order + event_system = QueuedEventSystem() + event_system.add_callback("test", function_a) + event_system.add_callback("test", function_b, order=-10) + event_system.add_callback("test", function_c, order=200) + event_system.add_callback("test", function_d, order=150) + event_system.add_callback("test", function_e) + event_system.add_callback("test", function_f, order=200) + event_system.emit("test", {}, "test") + + assert result == ["B", "A", "E", "D", "C", "F"] + + +def test_events_partial_callbacks(): + """ + Validate if partial callbacks are triggered. + """ + + result = [] + + def function(name): + result.append(name) + + def function_regular(): + result.append("regular") + + event_system = QueuedEventSystem() + event_system.add_callback("test", function_regular) + event_system.add_callback("test", partial(function, "foo")) + event_system.add_callback("test", weakref_partial(function, "bar")) + event_system.emit("test", {}, "test") + + # Delete function should also make partial callbacks invalid + del function + event_system.emit("test", {}, "test") + + assert result == ["regular", "bar", "regular"]