diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index 6ed6ae428c..96e768e420 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -6,6 +6,8 @@ labels: bug assignees: '' --- +**Running version** +[ex. 3.14.1-nightly.2] **Describe the bug** A clear and concise description of what the bug is. diff --git a/CHANGELOG.md b/CHANGELOG.md index 46bf56f5bd..af347cadfe 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,13 +1,47 @@ # Changelog +## [3.14.3-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...HEAD) + +**🚀 Enhancements** + +- Github issues adding `running version` section [\#3864](https://github.com/pypeclub/OpenPype/pull/3864) +- Publisher: Increase size of main window [\#3862](https://github.com/pypeclub/OpenPype/pull/3862) +- Houdini: Increment current file on workfile publish [\#3840](https://github.com/pypeclub/OpenPype/pull/3840) +- Publisher: Add new publisher to host tools [\#3833](https://github.com/pypeclub/OpenPype/pull/3833) +- General: lock task workfiles when they are working on [\#3810](https://github.com/pypeclub/OpenPype/pull/3810) +- Maya: Workspace mel loaded from settings [\#3790](https://github.com/pypeclub/OpenPype/pull/3790) + +**🐛 Bug fixes** + +- Settings: Add missing default settings [\#3870](https://github.com/pypeclub/OpenPype/pull/3870) +- General: Copy of workfile does not use 'copy' function but 'copyfile' [\#3869](https://github.com/pypeclub/OpenPype/pull/3869) +- Tray Publisher: skip plugin if otioTimeline is missing [\#3856](https://github.com/pypeclub/OpenPype/pull/3856) +- Maya: Extract Playblast fix textures + labelize viewport show settings [\#3852](https://github.com/pypeclub/OpenPype/pull/3852) +- Ftrack: Url validation does not require ftrackapp [\#3834](https://github.com/pypeclub/OpenPype/pull/3834) +- Maya+Ftrack: Change typo in family name `mayaascii` -\> `mayaAscii` [\#3820](https://github.com/pypeclub/OpenPype/pull/3820) +- Maya Deadline: Fix Tile Rendering by forcing integer pixel values [\#3758](https://github.com/pypeclub/OpenPype/pull/3758) + +**🔀 Refactored code** + +- Hiero: Use new Extractor location [\#3851](https://github.com/pypeclub/OpenPype/pull/3851) +- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819) +- Nuke: Use new Extractor location [\#3799](https://github.com/pypeclub/OpenPype/pull/3799) +- Maya: Use new Extractor location [\#3775](https://github.com/pypeclub/OpenPype/pull/3775) +- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755) + +**Merged pull requests:** + +- Remove lockfile during publish [\#3874](https://github.com/pypeclub/OpenPype/pull/3874) + ## [3.14.2](https://github.com/pypeclub/OpenPype/tree/3.14.2) (2022-09-12) -[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...3.14.2) +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.2-nightly.5...3.14.2) **🆕 New features** - Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763) -- Houdini: Publishing workfiles [\#3697](https://github.com/pypeclub/OpenPype/pull/3697) **🚀 Enhancements** @@ -43,13 +77,10 @@ - General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768) - General: Create project function moved to client code 
[\#3766](https://github.com/pypeclub/OpenPype/pull/3766) - Maya: Refactor submit deadline to use AbstractSubmitDeadline [\#3759](https://github.com/pypeclub/OpenPype/pull/3759) -- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755) - General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749) -- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736) +- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745) - Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735) - Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733) -- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732) -- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729) - Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727) **Merged pull requests:** @@ -61,18 +92,10 @@ [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1) -### 📖 Documentation - -- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698) - **🚀 Enhancements** - General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750) - Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720) -- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712) -- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701) -- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700) -- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686) **🐛 Bug fixes** @@ -82,50 +105,32 @@ - Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737) - Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721) - Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716) -- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709) -- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708) -- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704) -- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703) -- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684) **🔀 Refactored code** - General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751) -- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745) - General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744) - Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740) +- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736) - Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734) +- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732) - General: Module 
interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731) - AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730) +- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729) - AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728) - General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725) - Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724) - General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723) - General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714) -- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713) -- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710) -- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707) -- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706) -- TrayPublisher: Define TrayPublisher as module [\#3705](https://github.com/pypeclub/OpenPype/pull/3705) -- General: Move context specific functions to context tools [\#3702](https://github.com/pypeclub/OpenPype/pull/3702) **Merged pull requests:** - Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717) -- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694) ## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0) -**🚀 Enhancements** - -- Ftrack: Addiotional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685) - -**🐛 Bug fixes** - -- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691) - ## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0) diff --git a/openpype/hosts/fusion/plugins/publish/collect_inputs.py b/openpype/hosts/fusion/plugins/publish/collect_inputs.py new file mode 100644 index 0000000000..8f9857b02f --- /dev/null +++ b/openpype/hosts/fusion/plugins/publish/collect_inputs.py @@ -0,0 +1,114 @@ +from bson.objectid import ObjectId + +import pyblish.api + +from openpype.pipeline import registered_host + + +def collect_input_containers(tools): + """Collect containers that contain any of the node in `nodes`. + + This will return any loaded Avalon container that contains at least one of + the nodes. As such, the Avalon container is an input for it. Or in short, + there are member nodes of that container. + + Returns: + list: Input avalon containers + + """ + + # Lookup by node ids + lookup = frozenset([tool.Name for tool in tools]) + + containers = [] + host = registered_host() + for container in host.ls(): + + name = container["_tool"].Name + + # We currently assume no "groups" as containers but just single tools + # like a single "Loader" operator. As such we just check whether the + # Loader is part of the processing queue. + if name in lookup: + containers.append(container) + + return containers + + +def iter_upstream(tool): + """Yields all upstream inputs for the current tool. + + Yields: + tool: The input tools. 
+ + """ + + def get_connected_input_tools(tool): + """Helper function that returns connected input tools for a tool.""" + inputs = [] + + # Filter only to actual types that will have sensible upstream + # connections. So we ignore just "Number" inputs as they can be + # many to iterate, slowing things down quite a bit - and in practice + # they don't have upstream connections. + VALID_INPUT_TYPES = ['Image', 'Particles', 'Mask', 'DataType3D'] + for type_ in VALID_INPUT_TYPES: + for input_ in tool.GetInputList(type_).values(): + output = input_.GetConnectedOutput() + if output: + input_tool = output.GetTool() + inputs.append(input_tool) + + return inputs + + # Initialize process queue with the node's inputs itself + queue = get_connected_input_tools(tool) + + # We keep track of which node names we have processed so far, to ensure we + # don't process the same hierarchy again. We are not pushing the tool + # itself into the set as that doesn't correctly recognize the same tool. + # Since tool names are unique in a comp in Fusion we rely on that. + collected = set(tool.Name for tool in queue) + + # Traverse upstream references for all nodes and yield them as we + # process the queue. + while queue: + upstream_tool = queue.pop() + yield upstream_tool + + # Find upstream tools that are not collected yet. + upstream_inputs = get_connected_input_tools(upstream_tool) + upstream_inputs = [t for t in upstream_inputs if + t.Name not in collected] + + queue.extend(upstream_inputs) + collected.update(tool.Name for tool in upstream_inputs) + + +class CollectUpstreamInputs(pyblish.api.InstancePlugin): + """Collect source input containers used for this publish. + + This will include `inputs` data of which loaded publishes were used in the + generation of this publish. This leaves an upstream trace to what was used + as input. 
+ + """ + + label = "Collect Inputs" + order = pyblish.api.CollectorOrder + 0.2 + hosts = ["fusion"] + + def process(self, instance): + + # Get all upstream and include itself + tool = instance[0] + nodes = list(iter_upstream(tool)) + nodes.append(tool) + + # Collect containers for the given set of nodes + containers = collect_input_containers(nodes) + + inputs = [ObjectId(c["representation"]) for c in containers] + instance.data["inputRepresentations"] = inputs + + self.log.info("Collected inputs: %s" % inputs) diff --git a/openpype/hosts/hiero/plugins/publish/extract_clip_effects.py b/openpype/hosts/hiero/plugins/publish/extract_clip_effects.py index 5b0aa270a7..7fb381ff7e 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_clip_effects.py +++ b/openpype/hosts/hiero/plugins/publish/extract_clip_effects.py @@ -2,10 +2,11 @@ import os import json import pyblish.api -import openpype + +from openpype.pipeline import publish -class ExtractClipEffects(openpype.api.Extractor): +class ExtractClipEffects(publish.Extractor): """Extract clip effects instances.""" order = pyblish.api.ExtractorOrder diff --git a/openpype/hosts/hiero/plugins/publish/extract_frames.py b/openpype/hosts/hiero/plugins/publish/extract_frames.py index aa3eda2e9f..f865d2fb39 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_frames.py +++ b/openpype/hosts/hiero/plugins/publish/extract_frames.py @@ -1,9 +1,14 @@ import os import pyblish.api -import openpype + +from openpype.lib import ( + get_oiio_tools_path, + run_subprocess, +) +from openpype.pipeline import publish -class ExtractFrames(openpype.api.Extractor): +class ExtractFrames(publish.Extractor): """Extracts frames""" order = pyblish.api.ExtractorOrder @@ -13,7 +18,7 @@ class ExtractFrames(openpype.api.Extractor): movie_extensions = ["mov", "mp4"] def process(self, instance): - oiio_tool_path = openpype.lib.get_oiio_tools_path() + oiio_tool_path = get_oiio_tools_path() staging_dir = self.staging_dir(instance) output_template = os.path.join(staging_dir, instance.data["name"]) sequence = instance.context.data["activeTimeline"] @@ -43,7 +48,7 @@ class ExtractFrames(openpype.api.Extractor): args.extend(["--powc", "0.45,0.45,0.45,1.0"]) args.extend([input_path, "-o", output_path]) - output = openpype.api.run_subprocess(args) + output = run_subprocess(args) failed_output = "oiiotool produced no output." 
if failed_output in output: diff --git a/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py b/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py index d12e7665bf..e64aa89b26 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/hiero/plugins/publish/extract_thumbnail.py @@ -1,9 +1,10 @@ import os import pyblish.api -import openpype.api + +from openpype.pipeline import publish -class ExtractThumnail(openpype.api.Extractor): +class ExtractThumnail(publish.Extractor): """ Extractor for track item's tumnails """ diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 2ae8a4dbf7..e4af1913ef 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -14,7 +14,7 @@ from openpype.pipeline import ( ) from openpype.pipeline.load import any_outdated_containers from openpype.hosts.houdini import HOUDINI_HOST_DIR -from openpype.hosts.houdini.api import lib +from openpype.hosts.houdini.api import lib, shelves from openpype.lib import ( register_event_callback, @@ -73,6 +73,7 @@ def install(): # so it initializes into the correct scene FPS, Frame Range, etc. # todo: make sure this doesn't trigger when opening with last workfile _set_context_settings() + shelves.generate_shelves() def uninstall(): diff --git a/openpype/hosts/houdini/api/shelves.py b/openpype/hosts/houdini/api/shelves.py new file mode 100644 index 0000000000..248d99105c --- /dev/null +++ b/openpype/hosts/houdini/api/shelves.py @@ -0,0 +1,204 @@ +import os +import logging +import platform +import six + +from openpype.settings import get_project_settings + +import hou + +log = logging.getLogger("openpype.hosts.houdini.shelves") + +if six.PY2: + FileNotFoundError = IOError + + +def generate_shelves(): + """This function generates complete shelves from shelf set to tools + in Houdini from openpype project settings houdini shelf definition. + + Raises: + FileNotFoundError: Raised when the shelf set filepath does not exist + """ + current_os = platform.system().lower() + + # load configuration of houdini shelves + project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + shelves_set_config = project_settings["houdini"]["shelves"] + + if not shelves_set_config: + log.debug( + "No custom shelves found in project settings." + ) + return + + for shelf_set_config in shelves_set_config: + shelf_set_filepath = shelf_set_config.get('shelf_set_source_path') + + if shelf_set_filepath[current_os]: + if not os.path.isfile(shelf_set_filepath[current_os]): + raise FileNotFoundError( + "This path doesn't exist - {}".format( + shelf_set_filepath[current_os] + ) + ) + + hou.shelves.newShelfSet(file_path=shelf_set_filepath[current_os]) + continue + + shelf_set_name = shelf_set_config.get('shelf_set_name') + if not shelf_set_name: + log.warning( + "No name found in shelf set definition." + ) + return + + shelf_set = get_or_create_shelf_set(shelf_set_name) + + shelves_definition = shelf_set_config.get('shelf_definition') + + if not shelves_definition: + log.debug( + "No shelf definition found for shelf set named '{}'".format( + shelf_set_name + ) + ) + return + + for shelf_definition in shelves_definition: + shelf_name = shelf_definition.get('shelf_name') + if not shelf_name: + log.warning( + "No name found in shelf definition." 
+ ) + return + + shelf = get_or_create_shelf(shelf_name) + + if not shelf_definition.get('tools_list'): + log.debug( + "No tool definition found for shelf named {}".format( + shelf_name + ) + ) + return + + mandatory_attributes = {'name', 'script'} + for tool_definition in shelf_definition.get('tools_list'): + # We verify that the name and script attibutes of the tool + # are set + if not all( + tool_definition[key] for key in mandatory_attributes + ): + log.warning( + "You need to specify at least the name and \ +the script path of the tool.") + continue + + tool = get_or_create_tool(tool_definition, shelf) + + if not tool: + return + + # Add the tool to the shelf if not already in it + if tool not in shelf.tools(): + shelf.setTools(list(shelf.tools()) + [tool]) + + # Add the shelf in the shelf set if not already in it + if shelf not in shelf_set.shelves(): + shelf_set.setShelves(shelf_set.shelves() + (shelf,)) + + +def get_or_create_shelf_set(shelf_set_label): + """This function verifies if the shelf set label exists. If not, + creates a new shelf set. + + Arguments: + shelf_set_label (str): The label of the shelf set + + Returns: + hou.ShelfSet: The shelf set existing or the new one + """ + all_shelves_sets = hou.shelves.shelfSets().values() + + shelf_sets = [ + shelf for shelf in all_shelves_sets if shelf.label() == shelf_set_label + ] + + if shelf_sets: + return shelf_sets[0] + + shelf_set_name = shelf_set_label.replace(' ', '_').lower() + new_shelf_set = hou.shelves.newShelfSet( + name=shelf_set_name, + label=shelf_set_label + ) + return new_shelf_set + + +def get_or_create_shelf(shelf_label): + """This function verifies if the shelf label exists. If not, creates + a new shelf. + + Arguments: + shelf_label (str): The label of the shelf + + Returns: + hou.Shelf: The shelf existing or the new one + """ + all_shelves = hou.shelves.shelves().values() + + shelf = [s for s in all_shelves if s.label() == shelf_label] + + if shelf: + return shelf[0] + + shelf_name = shelf_label.replace(' ', '_').lower() + new_shelf = hou.shelves.newShelf( + name=shelf_name, + label=shelf_label + ) + return new_shelf + + +def get_or_create_tool(tool_definition, shelf): + """This function verifies if the tool exists and updates it. If not, creates + a new one. 
+ + Arguments: + tool_definition (dict): Dict with label, script, icon and help + shelf (hou.Shelf): The parent shelf of the tool + + Returns: + hou.Tool: The tool updated or the new one + """ + existing_tools = shelf.tools() + tool_label = tool_definition.get('label') + + existing_tool = [ + tool for tool in existing_tools if tool.label() == tool_label + ] + + if existing_tool: + tool_definition.pop('name', None) + tool_definition.pop('label', None) + existing_tool[0].setData(**tool_definition) + return existing_tool[0] + + tool_name = tool_label.replace(' ', '_').lower() + + if not os.path.exists(tool_definition['script']): + log.warning( + "This path doesn't exist - {}".format( + tool_definition['script'] + ) + ) + return + + with open(tool_definition['script']) as f: + script = f.read() + tool_definition.update({'script': script}) + + new_tool = hou.shelves.newTool(name=tool_name, **tool_definition) + + return new_tool diff --git a/openpype/hosts/houdini/plugins/publish/collect_inputs.py b/openpype/hosts/houdini/plugins/publish/collect_inputs.py index 8c7098c710..9ee0248bd9 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_inputs.py +++ b/openpype/hosts/houdini/plugins/publish/collect_inputs.py @@ -1,3 +1,5 @@ +from bson.objectid import ObjectId + import pyblish.api from openpype.pipeline import registered_host @@ -115,7 +117,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin): # Collect containers for the given set of nodes containers = collect_input_containers(nodes) - inputs = [c["representation"] for c in containers] - instance.data["inputs"] = inputs + inputs = [ObjectId(c["representation"]) for c in containers] + instance.data["inputRepresentations"] = inputs self.log.info("Collected inputs: %s" % inputs) diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py index 5cb14d732a..c990f481d3 100644 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py +++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py @@ -2,10 +2,9 @@ import pyblish.api from openpype.lib import version_up from openpype.pipeline import registered_host -from openpype.pipeline.publish import get_errored_plugins_from_context -class IncrementCurrentFile(pyblish.api.InstancePlugin): +class IncrementCurrentFile(pyblish.api.ContextPlugin): """Increment the current file. Saves the current scene with an increased version number. @@ -15,30 +14,10 @@ class IncrementCurrentFile(pyblish.api.InstancePlugin): label = "Increment current file" order = pyblish.api.IntegratorOrder + 9.0 hosts = ["houdini"] - families = ["colorbleed.usdrender", "redshift_rop"] - targets = ["local"] + families = ["workfile"] + optional = True - def process(self, instance): - - # This should be a ContextPlugin, but this is a workaround - # for a bug in pyblish to run once for a family: issue #250 - context = instance.context - key = "__hasRun{}".format(self.__class__.__name__) - if context.data.get(key, False): - return - else: - context.data[key] = True - - context = instance.context - errored_plugins = get_errored_plugins_from_context(context) - if any( - plugin.__name__ == "HoudiniSubmitPublishDeadline" - for plugin in errored_plugins - ): - raise RuntimeError( - "Skipping incrementing current file because " - "submission to deadline failed." 
- ) + def process(self, context): # Filename must not have changed since collecting host = registered_host() diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py b/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py deleted file mode 100644 index cb0d7e3680..0000000000 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py +++ /dev/null @@ -1,35 +0,0 @@ -import pyblish.api - -import hou -from openpype.lib import version_up -from openpype.pipeline.publish import get_errored_plugins_from_context - - -class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin): - """Increment the current file. - - Saves the current scene with an increased version number. - - """ - - label = "Increment current file" - order = pyblish.api.IntegratorOrder + 9.0 - hosts = ["houdini"] - targets = ["deadline"] - - def process(self, context): - - errored_plugins = get_errored_plugins_from_context(context) - if any( - plugin.__name__ == "HoudiniSubmitPublishDeadline" - for plugin in errored_plugins - ): - raise RuntimeError( - "Skipping incrementing current file because " - "submission to deadline failed." - ) - - current_filepath = context.data["currentFile"] - new_filepath = version_up(current_filepath) - - hou.hipFile.save(file_name=new_filepath, save_to_recent_files=True) diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 58e160cb2f..6a8447d6ad 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -2483,7 +2483,7 @@ def load_capture_preset(data=None): # DISPLAY OPTIONS id = 'Display Options' disp_options = {} - for key in preset['Display Options']: + for key in preset[id]: if key.startswith('background'): disp_options[key] = preset['Display Options'][key] if len(disp_options[key]) == 4: diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py index 7cd2193086..777a6ffbc9 100644 --- a/openpype/hosts/maya/api/lib_rendersettings.py +++ b/openpype/hosts/maya/api/lib_rendersettings.py @@ -5,6 +5,7 @@ import maya.mel as mel import six import sys +from openpype.lib import Logger from openpype.api import ( get_project_settings, get_current_project_settings @@ -38,6 +39,8 @@ class RenderSettings(object): "underscore": "_" } + log = Logger.get_logger("RenderSettings") + @classmethod def get_image_prefix_attr(cls, renderer): return cls._image_prefix_nodes[renderer] @@ -133,20 +136,7 @@ class RenderSettings(object): cmds.setAttr( "defaultArnoldDriver.mergeAOVs", multi_exr) - # Passes additional options in from the schema as a list - # but converts it to a dictionary because ftrack doesn't - # allow fullstops in custom attributes. Then checks for - # type of MtoA attribute passed to adjust the `setAttr` - # command accordingly. 
self._additional_attribs_setter(additional_options) - for item in additional_options: - attribute, value = item - if (cmds.getAttr(str(attribute), type=True)) == "long": - cmds.setAttr(str(attribute), int(value)) - elif (cmds.getAttr(str(attribute), type=True)) == "bool": - cmds.setAttr(str(attribute), int(value), type = "Boolean") # noqa - elif (cmds.getAttr(str(attribute), type=True)) == "string": - cmds.setAttr(str(attribute), str(value), type = "string") # noqa reset_frame_range() def _set_redshift_settings(self, width, height): @@ -230,12 +220,20 @@ class RenderSettings(object): cmds.setAttr("defaultRenderGlobals.extensionPadding", 4) def _additional_attribs_setter(self, additional_attribs): - print(additional_attribs) for item in additional_attribs: attribute, value = item - if (cmds.getAttr(str(attribute), type=True)) == "long": - cmds.setAttr(str(attribute), int(value)) - elif (cmds.getAttr(str(attribute), type=True)) == "bool": - cmds.setAttr(str(attribute), int(value)) # noqa - elif (cmds.getAttr(str(attribute), type=True)) == "string": - cmds.setAttr(str(attribute), str(value), type = "string") # noqa + attribute = str(attribute) # ensure str conversion from settings + attribute_type = cmds.getAttr(attribute, type=True) + if attribute_type in {"long", "bool"}: + cmds.setAttr(attribute, int(value)) + elif attribute_type == "string": + cmds.setAttr(attribute, str(value), type="string") + elif attribute_type in {"double", "doubleAngle", "doubleLinear"}: + cmds.setAttr(attribute, float(value)) + else: + self.log.error( + "Attribute {attribute} can not be set due to unsupported " + "type: {attribute_type}".format( + attribute=attribute, + attribute_type=attribute_type) + ) diff --git a/openpype/hosts/maya/api/lib_rendersetup.py b/openpype/hosts/maya/api/lib_rendersetup.py index 0fdc54a068..e616f26e1b 100644 --- a/openpype/hosts/maya/api/lib_rendersetup.py +++ b/openpype/hosts/maya/api/lib_rendersetup.py @@ -348,3 +348,71 @@ def get_attr_overrides(node_attr, layer, break return reversed(plug_overrides) + + +def get_shader_in_layer(node, layer): + """Return the assigned shader in a renderlayer without switching layers. + + This has been developed and tested for Legacy Renderlayers and *not* for + Render Setup. + + Note: This will also return the shader for any face assignments, however + it will *not* return the components they are assigned to. This could + be implemented, but since Maya's renderlayers are famous for breaking + with face assignments there has been no need for this function to + support that. + + Returns: + list: The list of assigned shaders in the given layer. + + """ + + def _get_connected_shader(plug): + """Return current shader""" + return cmds.listConnections(plug, + source=False, + destination=True, + plugs=False, + connections=False, + type="shadingEngine") or [] + + # We check the instObjGroups (shader connection) for layer overrides. 
+ plug = node + ".instObjGroups" + + # Ignore complex query if we're in the layer anyway (optimization) + current_layer = cmds.editRenderLayerGlobals(query=True, + currentRenderLayer=True) + if layer == current_layer: + return _get_connected_shader(plug) + + connections = cmds.listConnections(plug, + plugs=True, + source=False, + destination=True, + type="renderLayer") or [] + connections = filter(lambda x: x.endswith(".outPlug"), connections) + if not connections: + # If no overrides anywhere on the shader, just get the current shader + return _get_connected_shader(plug) + + def _get_override(connections, layer): + """Return the overridden connection for that layer in connections""" + # If there's an override on that layer, return that. + for connection in connections: + if (connection.startswith(layer + ".outAdjustments") and + connection.endswith(".outPlug")): + + # This is a shader override on that layer so get the shader + # connected to .outValue of the .outAdjustment[i] + out_adjustment = connection.rsplit(".", 1)[0] + connection_attr = out_adjustment + ".outValue" + override = cmds.listConnections(connection_attr) or [] + + return override + + override_shader = _get_override(connections, layer) + if override_shader is not None: + return override_shader + else: + # Get the override for "defaultRenderLayer" (=masterLayer) + return _get_override(connections, layer="defaultRenderLayer") diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index acd8a55aa4..c13b47ef4a 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -16,6 +16,7 @@ from openpype.host import ( HostDirmap, ) from openpype.tools.utils import host_tools +from openpype.tools.workfiles.lock_dialog import WorkfileLockDialog from openpype.lib import ( register_event_callback, emit_event @@ -31,6 +32,12 @@ from openpype.pipeline import ( AVALON_CONTAINER_ID, ) from openpype.pipeline.load import any_outdated_containers +from openpype.pipeline.workfile.lock_workfile import ( + create_workfile_lock, + remove_workfile_lock, + is_workfile_locked, + is_workfile_lock_enabled +) from openpype.hosts.maya import MAYA_ROOT_DIR from openpype.hosts.maya.lib import create_workspace_mel @@ -66,8 +73,6 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): project_name = legacy_io.active_project() project_settings = get_project_settings(project_name) # process path mapping - project_name = legacy_io.active_project() - project_settings = get_project_settings(project_name) dirmap_processor = MayaDirmap("maya", project_name, project_settings) dirmap_processor.process_dirmap() @@ -101,8 +106,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): register_event_callback("open", on_open) register_event_callback("new", on_new) register_event_callback("before.save", on_before_save) + register_event_callback("after.save", on_after_save) + register_event_callback("before.close", on_before_close) + register_event_callback("before.file.open", before_file_open) register_event_callback("taskChanged", on_task_changed) + register_event_callback("workfile.open.before", before_workfile_open) register_event_callback("workfile.save.before", before_workfile_save) + register_event_callback("workfile.save.before", after_workfile_save) def open_workfile(self, filepath): return open_file(filepath) @@ -145,6 +155,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save ) + self._op_events[_after_scene_save] = ( + 
OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kAfterSave, + _after_scene_save + ) + ) + self._op_events[_before_scene_save] = ( OpenMaya.MSceneMessage.addCheckCallback( OpenMaya.MSceneMessage.kBeforeSaveCheck, @@ -163,15 +180,35 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): ) ) - self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback( - OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open + self._op_events[_on_scene_open] = ( + OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kAfterOpen, + _on_scene_open + ) + ) + + self._op_events[_before_scene_open] = ( + OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kBeforeOpen, + _before_scene_open + ) + ) + + self._op_events[_before_close_maya] = ( + OpenMaya.MSceneMessage.addCallback( + OpenMaya.MSceneMessage.kMayaExiting, + _before_close_maya + ) ) self.log.info("Installed event handler _on_scene_save..") self.log.info("Installed event handler _before_scene_save..") + self.log.info("Installed event handler _on_after_save..") self.log.info("Installed event handler _on_scene_new..") self.log.info("Installed event handler _on_maya_initialized..") self.log.info("Installed event handler _on_scene_open..") + self.log.info("Installed event handler _check_lock_file..") + self.log.info("Installed event handler _before_close_maya..") def _set_project(): @@ -210,6 +247,10 @@ def _on_scene_new(*args): emit_event("new") +def _after_scene_save(*arg): + emit_event("after.save") + + def _on_scene_save(*args): emit_event("save") @@ -218,6 +259,14 @@ def _on_scene_open(*args): emit_event("open") +def _before_close_maya(*args): + emit_event("before.close") + + +def _before_scene_open(*args): + emit_event("before.file.open") + + def _before_scene_save(return_code, client_data): # Default to allowing the action. 
Registered @@ -231,6 +280,23 @@ def _before_scene_save(return_code, client_data): ) +def _remove_workfile_lock(): + """Remove workfile lock on current file""" + if not handle_workfile_locks(): + return + filepath = current_file() + log.info("Removing lock on current file {}...".format(filepath)) + if filepath: + remove_workfile_lock(filepath) + + +def handle_workfile_locks(): + if lib.IS_HEADLESS: + return False + project_name = legacy_io.active_project() + return is_workfile_lock_enabled(MayaHost.name, project_name) + + def uninstall(): pyblish.api.deregister_plugin_path(PUBLISH_PATH) pyblish.api.deregister_host("mayabatch") @@ -428,6 +494,46 @@ def on_before_save(): return lib.validate_fps() +def on_after_save(): + """Check if there is a lockfile after save""" + check_lock_on_current_file() + + +def check_lock_on_current_file(): + + """Check if there is a user opening the file""" + if not handle_workfile_locks(): + return + log.info("Running callback on checking the lock file...") + + # add the lock file when opening the file + filepath = current_file() + + if is_workfile_locked(filepath): + # add lockfile dialog + workfile_dialog = WorkfileLockDialog(filepath) + if not workfile_dialog.exec_(): + cmds.file(new=True) + return + + create_workfile_lock(filepath) + + +def on_before_close(): + """Delete the lock file after user quitting the Maya Scene""" + log.info("Closing Maya...") + # delete the lock file + filepath = current_file() + if handle_workfile_locks(): + remove_workfile_lock(filepath) + + +def before_file_open(): + """check lock file when the file changed""" + # delete the lock file + _remove_workfile_lock() + + def on_save(): """Automatically add IDs to new nodes @@ -436,6 +542,8 @@ def on_save(): """ log.info("Running callback on save..") + # remove lockfile if users jumps over from one scene to another + _remove_workfile_lock() # # Update current task for the current scene # update_task_from_path(cmds.file(query=True, sceneName=True)) @@ -493,6 +601,9 @@ def on_open(): dialog.on_clicked.connect(_on_show_inventory) dialog.show() + # create lock file for the maya scene + check_lock_on_current_file() + def on_new(): """Set project resolution and fps when create a new file""" @@ -508,6 +619,7 @@ def on_new(): "from openpype.hosts.maya.api import lib;" "lib.add_render_layer_change_observer()") lib.set_context_settings() + _remove_workfile_lock() def on_task_changed(): @@ -546,13 +658,28 @@ def on_task_changed(): ) +def before_workfile_open(): + if handle_workfile_locks(): + _remove_workfile_lock() + + def before_workfile_save(event): project_name = legacy_io.active_project() + if handle_workfile_locks(): + _remove_workfile_lock() workdir_path = event["workdir_path"] if workdir_path: create_workspace_mel(workdir_path, project_name) +def after_workfile_save(event): + workfile_name = event["filename"] + if handle_workfile_locks(): + if workfile_name: + if not is_workfile_locked(workfile_name): + create_workfile_lock(workfile_name) + + class MayaDirmap(HostDirmap): def on_enable_dirmap(self): cmds.dirmap(en=True) diff --git a/openpype/hosts/maya/plugins/publish/collect_inputs.py b/openpype/hosts/maya/plugins/publish/collect_inputs.py new file mode 100644 index 0000000000..470fceffc9 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_inputs.py @@ -0,0 +1,215 @@ +import copy +from bson.objectid import ObjectId + +from maya import cmds +import maya.api.OpenMaya as om +import pyblish.api + +from openpype.pipeline import registered_host +from openpype.hosts.maya.api.lib import 
get_container_members +from openpype.hosts.maya.api.lib_rendersetup import get_shader_in_layer + + +def iter_history(nodes, + filter=om.MFn.kInvalid, + direction=om.MItDependencyGraph.kUpstream): + """Iterate unique upstream history for list of nodes. + + This acts as a replacement to maya.cmds.listHistory. + It's faster by about 2x-3x. It returns less than + maya.cmds.listHistory as it excludes the input nodes + from the output (unless an input node was history + for another input node). It also excludes duplicates. + + Args: + nodes (list): Maya node names to start search from. + filter (om.MFn.Type): Filter to only specific types. + e.g. to dag nodes using om.MFn.kDagNode + direction (om.MItDependencyGraph.Direction): Direction to traverse in. + Defaults to upstream. + + Yields: + str: Node names in upstream history. + + """ + if not nodes: + return + + sel = om.MSelectionList() + for node in nodes: + sel.add(node) + + it = om.MItDependencyGraph(sel.getDependNode(0)) # init iterator + handle = om.MObjectHandle + + traversed = set() + fn_dep = om.MFnDependencyNode() + fn_dag = om.MFnDagNode() + for i in range(sel.length()): + + start_node = sel.getDependNode(i) + start_node_hash = handle(start_node).hashCode() + if start_node_hash in traversed: + continue + + it.resetTo(start_node, + filter=filter, + direction=direction) + while not it.isDone(): + + node = it.currentNode() + node_hash = handle(node).hashCode() + + if node_hash in traversed: + it.prune() + it.next() # noqa: B305 + continue + + traversed.add(node_hash) + + if node.hasFn(om.MFn.kDagNode): + fn_dag.setObject(node) + yield fn_dag.fullPathName() + else: + fn_dep.setObject(node) + yield fn_dep.name() + + it.next() # noqa: B305 + + +def collect_input_containers(containers, nodes): + """Collect containers that contain any of the node in `nodes`. + + This will return any loaded Avalon container that contains at least one of + the nodes. As such, the Avalon container is an input for it. Or in short, + there are member nodes of that container. + + Returns: + list: Input avalon containers + + """ + # Assume the containers have collected their cached '_members' data + # in the collector. + return [container for container in containers + if any(node in container["_members"] for node in nodes)] + + +class CollectUpstreamInputs(pyblish.api.InstancePlugin): + """Collect input source inputs for this publish. + + This will include `inputs` data of which loaded publishes were used in the + generation of this publish. This leaves an upstream trace to what was used + as input. + + """ + + label = "Collect Inputs" + order = pyblish.api.CollectorOrder + 0.34 + hosts = ["maya"] + + def process(self, instance): + + # For large scenes the querying of "host.ls()" can be relatively slow + # e.g. up to a second. Many instances calling it easily slows this + # down. As such, we cache it so we trigger it only once. 
+ # todo: Instead of hidden cache make "CollectContainers" plug-in + cache_key = "__cache_containers" + scene_containers = instance.context.data.get(cache_key, None) + if scene_containers is None: + # Query the scenes' containers if there's no cache yet + host = registered_host() + scene_containers = list(host.ls()) + for container in scene_containers: + # Embed the members into the container dictionary + container_members = set(get_container_members(container)) + container["_members"] = container_members + instance.context.data["__cache_containers"] = scene_containers + + # Collect the relevant input containers for this instance + if "renderlayer" in set(instance.data.get("families", [])): + # Special behavior for renderlayers + self.log.debug("Collecting renderlayer inputs....") + containers = self._collect_renderlayer_inputs(scene_containers, + instance) + + else: + # Basic behavior + nodes = instance[:] + + # Include any input connections of history with long names + # For optimization purposes only trace upstream from shape nodes + # looking for used dag nodes. This way having just a constraint + # on a transform is also ignored which tended to give irrelevant + # inputs for the majority of our use cases. We tend to care more + # about geometry inputs. + shapes = cmds.ls(nodes, + type=("mesh", "nurbsSurface", "nurbsCurve"), + noIntermediate=True) + if shapes: + history = list(iter_history(shapes, filter=om.MFn.kShape)) + history = cmds.ls(history, long=True) + + # Include the transforms in the collected history as shapes + # are excluded from containers + transforms = cmds.listRelatives(cmds.ls(history, shapes=True), + parent=True, + fullPath=True, + type="transform") + if transforms: + history.extend(transforms) + + if history: + nodes = list(set(nodes + history)) + + # Collect containers for the given set of nodes + containers = collect_input_containers(scene_containers, + nodes) + + inputs = [ObjectId(c["representation"]) for c in containers] + instance.data["inputRepresentations"] = inputs + + self.log.info("Collected inputs: %s" % inputs) + + def _collect_renderlayer_inputs(self, scene_containers, instance): + """Collects inputs from nodes in renderlayer, incl. 
shaders + camera""" + + # Get the renderlayer + renderlayer = instance.data.get("setMembers") + + if renderlayer == "defaultRenderLayer": + # Assume all loaded containers in the scene are inputs + # for the masterlayer + return copy.deepcopy(scene_containers) + else: + # Get the members of the layer + members = cmds.editRenderLayerMembers(renderlayer, + query=True, + fullNames=True) or [] + + # In some cases invalid objects are returned from + # `editRenderLayerMembers` so we filter them out + members = cmds.ls(members, long=True) + + # Include all children + children = cmds.listRelatives(members, + allDescendents=True, + fullPath=True) or [] + members.extend(children) + + # Include assigned shaders in renderlayer + shapes = cmds.ls(members, shapes=True, long=True) + shaders = set() + for shape in shapes: + shape_shaders = get_shader_in_layer(shape, layer=renderlayer) + if not shape_shaders: + continue + shaders.update(shape_shaders) + members.extend(shaders) + + # Explicitly include the camera being rendered in renderlayer + cameras = instance.data.get("cameras") + members.extend(cameras) + + containers = collect_input_containers(scene_containers, members) + + return containers diff --git a/openpype/hosts/maya/plugins/publish/collect_maya_scene.py b/openpype/hosts/maya/plugins/publish/collect_maya_scene.py deleted file mode 100644 index eb21b17989..0000000000 --- a/openpype/hosts/maya/plugins/publish/collect_maya_scene.py +++ /dev/null @@ -1,25 +0,0 @@ -from maya import cmds - -import pyblish.api - - -class CollectMayaScene(pyblish.api.InstancePlugin): - """Collect Maya Scene Data - - """ - - order = pyblish.api.CollectorOrder + 0.2 - label = 'Collect Model Data' - families = ["mayaScene"] - - def process(self, instance): - # Extract only current frame (override) - frame = cmds.currentTime(query=True) - instance.data["frameStart"] = frame - instance.data["frameEnd"] = frame - - # make ftrack publishable - if instance.data.get('families'): - instance.data['families'].append('ftrack') - else: - instance.data['families'] = ['ftrack'] diff --git a/openpype/hosts/maya/plugins/publish/collect_maya_scene_time.py b/openpype/hosts/maya/plugins/publish/collect_maya_scene_time.py new file mode 100644 index 0000000000..7e198df14d --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/collect_maya_scene_time.py @@ -0,0 +1,26 @@ +from maya import cmds + +import pyblish.api + + +class CollectMayaSceneTime(pyblish.api.InstancePlugin): + """Collect Maya Scene playback range + + This allows to reproduce the playback range for the content to be loaded. + It does *not* limit the extracted data to only data inside that time range. + + """ + + order = pyblish.api.CollectorOrder + 0.2 + label = 'Collect Maya Scene Time' + families = ["mayaScene"] + + def process(self, instance): + instance.data.update({ + "frameStart": cmds.playbackOptions(query=True, minTime=True), + "frameEnd": cmds.playbackOptions(query=True, maxTime=True), + "frameStartHandle": cmds.playbackOptions(query=True, + animationStartTime=True), + "frameEndHandle": cmds.playbackOptions(query=True, + animationEndTime=True) + }) diff --git a/openpype/hosts/maya/plugins/publish/collect_rig.py b/openpype/hosts/maya/plugins/publish/collect_rig.py deleted file mode 100644 index 98ae1e8009..0000000000 --- a/openpype/hosts/maya/plugins/publish/collect_rig.py +++ /dev/null @@ -1,22 +0,0 @@ -from maya import cmds - -import pyblish.api - - -class CollectRigData(pyblish.api.InstancePlugin): - """Collect rig data - - Ensures rigs are published to Ftrack. 
- - """ - - order = pyblish.api.CollectorOrder + 0.2 - label = 'Collect Rig Data' - families = ["rig"] - - def process(self, instance): - # make ftrack publishable - if instance.data.get('families'): - instance.data['families'].append('ftrack') - else: - instance.data['families'] = ['ftrack'] diff --git a/openpype/hosts/maya/plugins/publish/extract_ass.py b/openpype/hosts/maya/plugins/publish/extract_ass.py index 760f410f91..5c21a4ff08 100644 --- a/openpype/hosts/maya/plugins/publish/extract_ass.py +++ b/openpype/hosts/maya/plugins/publish/extract_ass.py @@ -1,12 +1,12 @@ import os -import openpype.api - from maya import cmds + +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractAssStandin(openpype.api.Extractor): +class ExtractAssStandin(publish.Extractor): """Extract the content of the instance to a ass file Things to pay attention to: diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py index 120805894e..35932003ee 100644 --- a/openpype/hosts/maya/plugins/publish/extract_assembly.py +++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py @@ -1,14 +1,13 @@ +import os import json -import os - -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import extract_alembic from maya import cmds -class ExtractAssembly(openpype.api.Extractor): +class ExtractAssembly(publish.Extractor): """Produce an alembic of just point positions and normals. Positions and normals are preserved, but nothing more, diff --git a/openpype/hosts/maya/plugins/publish/extract_assproxy.py b/openpype/hosts/maya/plugins/publish/extract_assproxy.py index 93720dbb82..4937a28a9e 100644 --- a/openpype/hosts/maya/plugins/publish/extract_assproxy.py +++ b/openpype/hosts/maya/plugins/publish/extract_assproxy.py @@ -3,17 +3,17 @@ import contextlib from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractAssProxy(openpype.api.Extractor): +class ExtractAssProxy(publish.Extractor): """Extract proxy model as Maya Ascii to use as arnold standin """ - order = openpype.api.Extractor.order + 0.2 + order = publish.Extractor.order + 0.2 label = "Ass Proxy (Maya ASCII)" hosts = ["maya"] families = ["ass"] diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py index b744bfd0fe..aa445a0387 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py @@ -2,11 +2,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api import lib -class ExtractCameraAlembic(openpype.api.Extractor): +class ExtractCameraAlembic(publish.Extractor): """Extract a Camera as Alembic. The cameras gets baked to world space by default. 
Only when the instance's diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py index 8d6c4b5f3c..7467fa027d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py @@ -5,7 +5,7 @@ import itertools from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api import lib @@ -78,7 +78,7 @@ def unlock(plug): cmds.disconnectAttr(source, destination) -class ExtractCameraMayaScene(openpype.api.Extractor): +class ExtractCameraMayaScene(publish.Extractor): """Extract a Camera as Maya Scene. This will create a duplicate of the camera that will be baked *with* diff --git a/openpype/hosts/maya/plugins/publish/extract_fbx.py b/openpype/hosts/maya/plugins/publish/extract_fbx.py index fbbe8e06b0..9af3acef65 100644 --- a/openpype/hosts/maya/plugins/publish/extract_fbx.py +++ b/openpype/hosts/maya/plugins/publish/extract_fbx.py @@ -4,13 +4,13 @@ import os from maya import cmds # noqa import maya.mel as mel # noqa import pyblish.api -import openpype.api -from openpype.hosts.maya.api.lib import maintained_selection +from openpype.pipeline import publish +from openpype.hosts.maya.api.lib import maintained_selection from openpype.hosts.maya.api import fbx -class ExtractFBX(openpype.api.Extractor): +class ExtractFBX(publish.Extractor): """Extract FBX from Maya. This extracts reproducible FBX exports ignoring any of the diff --git a/openpype/hosts/maya/plugins/publish/extract_layout.py b/openpype/hosts/maya/plugins/publish/extract_layout.py index 991217684a..0f499b09b1 100644 --- a/openpype/hosts/maya/plugins/publish/extract_layout.py +++ b/openpype/hosts/maya/plugins/publish/extract_layout.py @@ -5,13 +5,11 @@ import json from maya import cmds from maya.api import OpenMaya as om -from bson.objectid import ObjectId - -from openpype.pipeline import legacy_io -import openpype.api +from openpype.client import get_representation_by_id +from openpype.pipeline import legacy_io, publish -class ExtractLayout(openpype.api.Extractor): +class ExtractLayout(publish.Extractor): """Extract a layout.""" label = "Extract Layout" @@ -30,6 +28,8 @@ class ExtractLayout(openpype.api.Extractor): instance.data["representations"] = [] json_data = [] + # TODO representation queries can be refactored to be faster + project_name = legacy_io.active_project() for asset in cmds.sets(str(instance), query=True): # Find the container @@ -43,11 +43,11 @@ class ExtractLayout(openpype.api.Extractor): representation_id = cmds.getAttr(f"{container}.representation") - representation = legacy_io.find_one( - { - "type": "representation", - "_id": ObjectId(representation_id) - }, projection={"parent": True, "context.family": True}) + representation = get_representation_by_id( + project_name, + representation_id, + fields=["parent", "context.family"] + ) self.log.info(representation) @@ -102,9 +102,10 @@ class ExtractLayout(openpype.api.Extractor): for i in range(0, len(t_matrix_list), row_length): t_matrix.append(t_matrix_list[i:i + row_length]) - json_element["transform_matrix"] = [] - for row in t_matrix: - json_element["transform_matrix"].append(list(row)) + json_element["transform_matrix"] = [ + list(row) + for row in t_matrix + ] basis_list = [ 1, 0, 0, 0, diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index ce3b265566..403b4ee6bc 100644 --- 
a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -13,8 +13,8 @@ from maya import cmds # noqa import pyblish.api -import openpype.api -from openpype.pipeline import legacy_io +from openpype.lib import source_hash, run_subprocess +from openpype.pipeline import legacy_io, publish from openpype.hosts.maya.api import lib # Modes for transfer @@ -68,7 +68,7 @@ def find_paths_by_hash(texture_hash): return legacy_io.distinct(key, {"type": "version"}) -def maketx(source, destination, *args): +def maketx(source, destination, args, logger): """Make `.tx` using `maketx` with some default settings. The settings are based on default as used in Arnold's @@ -79,7 +79,8 @@ def maketx(source, destination, *args): Args: source (str): Path to source file. destination (str): Writing destination path. - *args: Additional arguments for `maketx`. + args (list): Additional arguments for `maketx`. + logger (logging.Logger): Logger to log messages to. Returns: str: Output of `maketx` command. @@ -94,7 +95,7 @@ def maketx(source, destination, *args): "OIIO tool not found in {}".format(maketx_path)) raise AssertionError("OIIO tool not found") - cmd = [ + subprocess_args = [ maketx_path, "-v", # verbose "-u", # update mode @@ -103,27 +104,20 @@ def maketx(source, destination, *args): "--checknan", # use oiio-optimized settings for tile-size, planarconfig, metadata "--oiio", - "--filter lanczos3", - escape_space(source) + "--filter", "lanczos3", + source ] - cmd.extend(args) - cmd.extend(["-o", escape_space(destination)]) + subprocess_args.extend(args) + subprocess_args.extend(["-o", destination]) - cmd = " ".join(cmd) + cmd = " ".join(subprocess_args) + logger.debug(cmd) - CREATE_NO_WINDOW = 0x08000000 # noqa - kwargs = dict(args=cmd, stderr=subprocess.STDOUT) - - if sys.platform == "win32": - kwargs["creationflags"] = CREATE_NO_WINDOW try: - out = subprocess.check_output(**kwargs) - except subprocess.CalledProcessError as exc: - print(exc) - import traceback - - traceback.print_exc() + out = run_subprocess(subprocess_args) + except Exception: + logger.error("Maketx converion failed", exc_info=True) raise return out @@ -161,7 +155,7 @@ def no_workspace_dir(): os.rmdir(fake_workspace_dir) -class ExtractLook(openpype.api.Extractor): +class ExtractLook(publish.Extractor): """Extract Look (Maya Scene + JSON) Only extracts the sets (shadingEngines and alike) alongside a .json file @@ -505,7 +499,7 @@ class ExtractLook(openpype.api.Extractor): args = [] if do_maketx: args.append("maketx") - texture_hash = openpype.api.source_hash(filepath, *args) + texture_hash = source_hash(filepath, *args) # If source has been published before with the same settings, # then don't reprocess but hardlink from the original @@ -524,15 +518,17 @@ class ExtractLook(openpype.api.Extractor): if do_maketx and ext != ".tx": # Produce .tx file in staging if source file is not .tx converted = os.path.join(staging, "resources", fname + ".tx") - + additional_args = [ + "--sattrib", + "sourceHash", + texture_hash + ] if linearize: self.log.info("tx: converting sRGB -> linear") - colorconvert = "--colorconvert sRGB linear" - else: - colorconvert = "" + additional_args.extend(["--colorconvert", "sRGB", "linear"]) config_path = get_ocio_config_path("nuke-default") - color_config = "--colorconfig {0}".format(config_path) + additional_args.extend(["--colorconfig", config_path]) # Ensure folder exists if not os.path.exists(os.path.dirname(converted)): os.makedirs(os.path.dirname(converted)) @@ 
-541,12 +537,8 @@ class ExtractLook(openpype.api.Extractor): maketx( filepath, converted, - # Include `source-hash` as string metadata - "--sattrib", - "sourceHash", - escape_space(texture_hash), - colorconvert, - color_config + additional_args, + self.log ) return converted, COPY, texture_hash diff --git a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py index 3a47cdadb5..3769ec3605 100644 --- a/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py +++ b/openpype/hosts/maya/plugins/publish/extract_maya_scene_raw.py @@ -4,12 +4,11 @@ import os from maya import cmds -import openpype.api from openpype.hosts.maya.api.lib import maintained_selection -from openpype.pipeline import AVALON_CONTAINER_ID +from openpype.pipeline import AVALON_CONTAINER_ID, publish -class ExtractMayaSceneRaw(openpype.api.Extractor): +class ExtractMayaSceneRaw(publish.Extractor): """Extract as Maya Scene (raw). This will preserve all references, construction history, etc. diff --git a/openpype/hosts/maya/plugins/publish/extract_model.py b/openpype/hosts/maya/plugins/publish/extract_model.py index 0282d1e9c8..7c8c3a2981 100644 --- a/openpype/hosts/maya/plugins/publish/extract_model.py +++ b/openpype/hosts/maya/plugins/publish/extract_model.py @@ -4,11 +4,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api import lib -class ExtractModel(openpype.api.Extractor): +class ExtractModel(publish.Extractor): """Extract as Model (Maya Scene). Only extracts contents based on the original "setMembers" data to ensure diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py index 82e2b41929..92137acb95 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_look.py @@ -2,11 +2,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractMultiverseLook(openpype.api.Extractor): +class ExtractMultiverseLook(publish.Extractor): """Extractor for Multiverse USD look data. This will extract: diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py index 3654be7b34..6c352bebe6 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd.py @@ -3,11 +3,11 @@ import six from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractMultiverseUsd(openpype.api.Extractor): +class ExtractMultiverseUsd(publish.Extractor): """Extractor for Multiverse USD Asset data. 
This will extract settings for a Multiverse Write Asset operation: diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py index ad9303657f..a62729c198 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_comp.py @@ -2,11 +2,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractMultiverseUsdComposition(openpype.api.Extractor): +class ExtractMultiverseUsdComposition(publish.Extractor): """Extractor of Multiverse USD Composition data. This will extract settings for a Multiverse Write Composition operation: diff --git a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py index d44e3878b8..0628623e88 100644 --- a/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py +++ b/openpype/hosts/maya/plugins/publish/extract_multiverse_usd_over.py @@ -1,12 +1,12 @@ import os -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection from maya import cmds -class ExtractMultiverseUsdOverride(openpype.api.Extractor): +class ExtractMultiverseUsdOverride(publish.Extractor): """Extractor for Multiverse USD Override data. This will extract settings for a Multiverse Write Override operation: diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 871adda0c3..81fdba2f98 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -1,18 +1,16 @@ import os -import glob -import contextlib import clique import capture +from openpype.pipeline import publish from openpype.hosts.maya.api import lib -import openpype.api from maya import cmds import pymel.core as pm -class ExtractPlayblast(openpype.api.Extractor): +class ExtractPlayblast(publish.Extractor): """Extract viewport playblast. Takes review camera and creates review Quicktime video based on viewport diff --git a/openpype/hosts/maya/plugins/publish/extract_pointcache.py b/openpype/hosts/maya/plugins/publish/extract_pointcache.py index bf6feecef3..7c1c6d5c12 100644 --- a/openpype/hosts/maya/plugins/publish/extract_pointcache.py +++ b/openpype/hosts/maya/plugins/publish/extract_pointcache.py @@ -2,7 +2,7 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import ( extract_alembic, suspended_refresh, @@ -11,7 +11,7 @@ from openpype.hosts.maya.api.lib import ( ) -class ExtractAlembic(openpype.api.Extractor): +class ExtractAlembic(publish.Extractor): """Produce an alembic of just point positions and normals. 
Positions and normals, uvs, creases are preserved, but nothing more, diff --git a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py index 23cac9190d..4377275635 100644 --- a/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py +++ b/openpype/hosts/maya/plugins/publish/extract_redshift_proxy.py @@ -4,11 +4,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractRedshiftProxy(openpype.api.Extractor): +class ExtractRedshiftProxy(publish.Extractor): """Extract the content of the instance to a redshift proxy file.""" label = "Redshift Proxy (.rs)" diff --git a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py index 6bdd5f590e..5970c038a4 100644 --- a/openpype/hosts/maya/plugins/publish/extract_rendersetup.py +++ b/openpype/hosts/maya/plugins/publish/extract_rendersetup.py @@ -1,10 +1,11 @@ -import json import os -import openpype.api +import json + import maya.app.renderSetup.model.renderSetup as renderSetup +from openpype.pipeline import publish -class ExtractRenderSetup(openpype.api.Extractor): +class ExtractRenderSetup(publish.Extractor): """ Produce renderSetup template file diff --git a/openpype/hosts/maya/plugins/publish/extract_rig.py b/openpype/hosts/maya/plugins/publish/extract_rig.py index 53c1eeb671..c71a2f710d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_rig.py +++ b/openpype/hosts/maya/plugins/publish/extract_rig.py @@ -4,11 +4,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractRig(openpype.api.Extractor): +class ExtractRig(publish.Extractor): """Extract rig as Maya Scene.""" label = "Extract Rig (Maya Scene)" diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index 9380da5128..854301ea48 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -3,14 +3,14 @@ import glob import capture +from openpype.pipeline import publish from openpype.hosts.maya.api import lib -import openpype.api from maya import cmds import pymel.core as pm -class ExtractThumbnail(openpype.api.Extractor): +class ExtractThumbnail(publish.Extractor): """Extract viewport thumbnail. Takes review camera and creates a thumbnail based on viewport diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py index 7ef7f2f181..258120db2f 100644 --- a/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_skeletalmesh.py @@ -6,7 +6,8 @@ from contextlib import contextmanager from maya import cmds # noqa import pyblish.api -import openpype.api + +from openpype.pipeline import publish from openpype.hosts.maya.api import fbx @@ -20,7 +21,7 @@ def renamed(original_name, renamed_name): cmds.rename(renamed_name, original_name) -class ExtractUnrealSkeletalMesh(openpype.api.Extractor): +class ExtractUnrealSkeletalMesh(publish.Extractor): """Extract Unreal Skeletal Mesh as FBX from Maya. 
""" order = pyblish.api.ExtractorOrder - 0.1 diff --git a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py index 69d51f9ff1..44f0615a27 100644 --- a/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py +++ b/openpype/hosts/maya/plugins/publish/extract_unreal_staticmesh.py @@ -5,7 +5,8 @@ import os from maya import cmds # noqa import pyblish.api -import openpype.api + +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import ( parent_nodes, maintained_selection @@ -13,7 +14,7 @@ from openpype.hosts.maya.api.lib import ( from openpype.hosts.maya.api import fbx -class ExtractUnrealStaticMesh(openpype.api.Extractor): +class ExtractUnrealStaticMesh(publish.Extractor): """Extract Unreal Static Mesh as FBX from Maya. """ order = pyblish.api.ExtractorOrder - 0.1 diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py index 562ca078e1..38bf02245a 100644 --- a/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py +++ b/openpype/hosts/maya/plugins/publish/extract_vrayproxy.py @@ -2,11 +2,11 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import maintained_selection -class ExtractVRayProxy(openpype.api.Extractor): +class ExtractVRayProxy(publish.Extractor): """Extract the content of the instance to a vrmesh file Things to pay attention to: diff --git a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py index 5d41697e5f..8442df1611 100644 --- a/openpype/hosts/maya/plugins/publish/extract_vrayscene.py +++ b/openpype/hosts/maya/plugins/publish/extract_vrayscene.py @@ -3,14 +3,14 @@ import os import re -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.render_setup_tools import export_in_rs_layer from openpype.hosts.maya.api.lib import maintained_selection from maya import cmds -class ExtractVrayscene(openpype.api.Extractor): +class ExtractVrayscene(publish.Extractor): """Extractor for vrscene.""" label = "VRay Scene (.vrscene)" diff --git a/openpype/hosts/maya/plugins/publish/extract_xgen_cache.py b/openpype/hosts/maya/plugins/publish/extract_xgen_cache.py index 5728682abe..77350f343e 100644 --- a/openpype/hosts/maya/plugins/publish/extract_xgen_cache.py +++ b/openpype/hosts/maya/plugins/publish/extract_xgen_cache.py @@ -2,14 +2,14 @@ import os from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api.lib import ( suspended_refresh, maintained_selection ) -class ExtractXgenCache(openpype.api.Extractor): +class ExtractXgenCache(publish.Extractor): """Produce an alembic of just xgen interactive groom """ diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py index cf6db00e9a..b61f599cab 100644 --- a/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py +++ b/openpype/hosts/maya/plugins/publish/extract_yeti_cache.py @@ -3,10 +3,10 @@ import json from maya import cmds -import openpype.api +from openpype.pipeline import publish -class ExtractYetiCache(openpype.api.Extractor): +class ExtractYetiCache(publish.Extractor): """Producing Yeti cache files using scene time range. This will extract Yeti cache file sequence and fur settings. 
diff --git a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py index 6e21bffa4e..1d0c5e88c3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py +++ b/openpype/hosts/maya/plugins/publish/extract_yeti_rig.py @@ -7,7 +7,7 @@ import contextlib from maya import cmds -import openpype.api +from openpype.pipeline import publish from openpype.hosts.maya.api import lib @@ -90,7 +90,7 @@ def yetigraph_attribute_values(assumed_destination, resources): pass -class ExtractYetiRig(openpype.api.Extractor): +class ExtractYetiRig(publish.Extractor): """Extract the Yeti rig to a Maya Scene and write the Yeti rig data.""" label = "Extract Yeti Rig" diff --git a/openpype/hosts/maya/plugins/publish/save_scene.py b/openpype/hosts/maya/plugins/publish/save_scene.py index 50a2f2112a..45e62e7b44 100644 --- a/openpype/hosts/maya/plugins/publish/save_scene.py +++ b/openpype/hosts/maya/plugins/publish/save_scene.py @@ -1,4 +1,8 @@ import pyblish.api +from openpype.pipeline.workfile.lock_workfile import ( + is_workfile_lock_enabled, + remove_workfile_lock +) class SaveCurrentScene(pyblish.api.ContextPlugin): @@ -22,6 +26,10 @@ class SaveCurrentScene(pyblish.api.ContextPlugin): self.log.debug("Skipping file save as there " "are no modifications..") return - + project_name = context.data["projectName"] + project_settings = context.data["project_settings"] + # remove lockfile before saving + if is_workfile_lock_enabled("maya", project_name, project_settings): + remove_workfile_lock(current) self.log.info("Saving current file..") cmds.file(save=True, force=True) diff --git a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py index 0a2df0898e..d1e5c4cc5a 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py @@ -4,7 +4,7 @@ import nuke import pyblish.api -import openpype +from openpype.pipeline import publish from openpype.hosts.nuke.api.lib import ( maintained_selection, reset_selection, @@ -12,7 +12,7 @@ from openpype.hosts.nuke.api.lib import ( ) -class ExtractBackdropNode(openpype.api.Extractor): +class ExtractBackdropNode(publish.Extractor): """Extracting content of backdrop nodes Will create nuke script only with containing nodes. 
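The save_scene.py change above leans on the new openpype.pipeline.workfile.lock_workfile helpers added later in this diff: before saving, the plugin drops the lock that was created when the artist opened the workfile. A hedged sketch of how those helpers fit together on the host side; the file path, project name and workflow around them are illustrative, only the imported functions come from lock_workfile.py:

from openpype.pipeline.workfile.lock_workfile import (
    create_workfile_lock,
    get_workfile_lock_data,
    is_workfile_lock_enabled,
    is_workfile_locked,
    remove_workfile_lock,
)

# Illustrative values; a publish plugin would read these from
# context.data["projectName"] and context.data["project_settings"].
filepath = "C:/projects/demo/work/shot010_lighting_v003.ma"
host_name = "maya"
project_name = "demo"
project_settings = None  # let is_workfile_lock_enabled query settings itself

if is_workfile_lock_enabled(host_name, project_name, project_settings):
    if is_workfile_locked(filepath):
        # The ".oplock" sidecar stores workstation info plus a process id.
        info = get_workfile_lock_data(filepath)
        print("Workfile is locked by process:", info.get("process_id"))
    else:
        create_workfile_lock(filepath)   # written when the file is opened
        # ... artist works in the file ...
        remove_workfile_lock(filepath)   # removed only if owned by this process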
diff --git a/openpype/hosts/nuke/plugins/publish/extract_camera.py b/openpype/hosts/nuke/plugins/publish/extract_camera.py index 54f65a0be3..b751bfab03 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_camera.py +++ b/openpype/hosts/nuke/plugins/publish/extract_camera.py @@ -5,11 +5,12 @@ from pprint import pformat import nuke import pyblish.api -import openpype.api + +from openpype.pipeline import publish from openpype.hosts.nuke.api.lib import maintained_selection -class ExtractCamera(openpype.api.Extractor): +class ExtractCamera(publish.Extractor): """ 3D camera exctractor """ label = 'Exctract Camera' diff --git a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py index 2d5bfdeb5e..3047ad6724 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py +++ b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py @@ -3,7 +3,7 @@ import nuke import pyblish.api -import openpype +from openpype.pipeline import publish from openpype.hosts.nuke.api import utils as pnutils from openpype.hosts.nuke.api.lib import ( maintained_selection, @@ -12,7 +12,7 @@ from openpype.hosts.nuke.api.lib import ( ) -class ExtractGizmo(openpype.api.Extractor): +class ExtractGizmo(publish.Extractor): """Extracting Gizmo (Group) node Will create nuke script only with the Gizmo node. diff --git a/openpype/hosts/nuke/plugins/publish/extract_model.py b/openpype/hosts/nuke/plugins/publish/extract_model.py index 0375263338..d82cb3110b 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_model.py +++ b/openpype/hosts/nuke/plugins/publish/extract_model.py @@ -2,14 +2,15 @@ import os from pprint import pformat import nuke import pyblish.api -import openpype.api + +from openpype.pipeline import publish from openpype.hosts.nuke.api.lib import ( maintained_selection, select_nodes ) -class ExtractModel(openpype.api.Extractor): +class ExtractModel(publish.Extractor): """ 3D model exctractor """ label = 'Exctract Model' diff --git a/openpype/hosts/nuke/plugins/publish/extract_render_local.py b/openpype/hosts/nuke/plugins/publish/extract_render_local.py index 8879f0c999..843d588786 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_render_local.py +++ b/openpype/hosts/nuke/plugins/publish/extract_render_local.py @@ -1,11 +1,13 @@ -import pyblish.api -import nuke import os -import openpype + +import pyblish.api import clique +import nuke + +from openpype.pipeline import publish -class NukeRenderLocal(openpype.api.Extractor): +class NukeRenderLocal(publish.Extractor): # TODO: rewrite docstring to nuke """Render the current Nuke composition locally. 
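Every extractor touched in this diff, in both the Maya and Nuke plugin sets, gets the same mechanical treatment: import publish from openpype.pipeline and subclass publish.Extractor instead of the removed openpype.api.Extractor. A minimal, host-agnostic sketch of the resulting plugin shape, assuming the relocated base class keeps the familiar staging_dir() helper (the class name, family and JSON payload are illustrative only):

import os
import json

from openpype.pipeline import publish


class ExtractExampleJson(publish.Extractor):
    """Illustrative extractor; the base class has simply moved modules."""

    label = "Extract Example (JSON)"
    families = ["example"]

    def process(self, instance):
        # staging_dir() is inherited from publish.Extractor, as it was from
        # openpype.api.Extractor before this refactor.
        staging_dir = self.staging_dir(instance)
        filename = "{}.json".format(instance.name)
        path = os.path.join(staging_dir, filename)

        # Write whatever the collectors stored on the instance.
        with open(path, "w") as stream:
            json.dump(instance.data.get("exampleData", {}), stream)

        instance.data.setdefault("representations", []).append({
            "name": "json",
            "ext": "json",
            "files": filename,
            "stagingDir": staging_dir,
        })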
diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data.py b/openpype/hosts/nuke/plugins/publish/extract_review_data.py index 38a8140cff..3c85b21b08 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data.py @@ -1,10 +1,11 @@ import os -import pyblish.api -import openpype from pprint import pformat +import pyblish.api + +from openpype.pipeline import publish -class ExtractReviewData(openpype.api.Extractor): +class ExtractReviewData(publish.Extractor): """Extracts review tag into available representation """ diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py index 4cf2fd7d9f..67779e9599 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py @@ -1,11 +1,12 @@ import os import pyblish.api -import openpype + +from openpype.pipeline import publish from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api.lib import maintained_selection -class ExtractReviewDataLut(openpype.api.Extractor): +class ExtractReviewDataLut(publish.Extractor): """Extracts movie and thumbnail with baked in luts must be run after extract_render_local.py diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index fc16e189fb..3fcfc2a4b5 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -1,13 +1,14 @@ import os -from pprint import pformat import re +from pprint import pformat import pyblish.api -import openpype + +from openpype.pipeline import publish from openpype.hosts.nuke.api import plugin from openpype.hosts.nuke.api.lib import maintained_selection -class ExtractReviewDataMov(openpype.api.Extractor): +class ExtractReviewDataMov(publish.Extractor): """Extracts movie and thumbnail with baked in luts must be run after extract_render_local.py diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index b5cad143db..e7197b4fa8 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -6,7 +6,7 @@ import copy import pyblish.api import six -import openpype +from openpype.pipeline import publish from openpype.hosts.nuke.api import ( maintained_selection, duplicate_node, @@ -14,7 +14,7 @@ from openpype.hosts.nuke.api import ( ) -class ExtractSlateFrame(openpype.api.Extractor): +class ExtractSlateFrame(publish.Extractor): """Extracts movie and thumbnail with baked in luts must be run after extract_render_local.py diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index 2a919051d2..19eae9638b 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -2,7 +2,8 @@ import sys import os import nuke import pyblish.api -import openpype + +from openpype.pipeline import publish from openpype.hosts.nuke.api import ( maintained_selection, get_view_process_node @@ -13,7 +14,7 @@ if sys.version_info[0] >= 3: unicode = str -class ExtractThumbnail(openpype.api.Extractor): +class ExtractThumbnail(publish.Extractor): """Extracts movie and thumbnail with baked in luts must be 
run after extract_render_local.py diff --git a/openpype/hosts/photoshop/plugins/publish/collect_published_version.py b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py new file mode 100644 index 0000000000..2502689e4b --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_published_version.py @@ -0,0 +1,55 @@ +"""Collects published version of workfile and increments it. + +For synchronization of published image and workfile version it is required +to store workfile version from workfile file name in context.data["version"]. +In remote publishing this name is unreliable (artist might not follow naming +convention etc.), last published workfile version for particular workfile +subset is used instead. + +This plugin runs only in remote publishing (eg. Webpublisher). + +Requires: + context.data["assetEntity"] + +Provides: + context["version"] - incremented latest published workfile version +""" + +import pyblish.api + +from openpype.client import get_last_version_by_subset_name + + +class CollectPublishedVersion(pyblish.api.ContextPlugin): + """Collects published version of workfile and increments it.""" + + order = pyblish.api.CollectorOrder + 0.190 + label = "Collect published version" + hosts = ["photoshop"] + targets = ["remotepublish"] + + def process(self, context): + workfile_subset_name = None + for instance in context: + if instance.data["family"] == "workfile": + workfile_subset_name = instance.data["subset"] + break + + if not workfile_subset_name: + self.log.warning("No workfile instance found, " + "synchronization of version will not work.") + return + + project_name = context.data["projectName"] + asset_doc = context.data["assetEntity"] + asset_id = asset_doc["_id"] + + version_doc = get_last_version_by_subset_name(project_name, + workfile_subset_name, + asset_id) + version_int = 1 + if version_doc: + version_int += int(version_doc["name"]) + + self.log.debug(f"Setting {version_int} to context.") + context.data["version"] = version_int diff --git a/openpype/hosts/photoshop/plugins/publish/collect_version.py b/openpype/hosts/photoshop/plugins/publish/collect_version.py new file mode 100644 index 0000000000..aff9f13bfb --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_version.py @@ -0,0 +1,24 @@ +import pyblish.api + + +class CollectVersion(pyblish.api.InstancePlugin): + """Collect version for publishable instances. + + Used to synchronize version from workfile to all publishable instances: + - image (manually created or color coded) + - review + + Dev comment: + Explicit collector created to control this from single place and not from + 3 different. 
+ """ + order = pyblish.api.CollectorOrder + 0.200 + label = 'Collect Version' + + hosts = ["photoshop"] + families = ["image", "review"] + + def process(self, instance): + workfile_version = instance.context.data["version"] + self.log.debug(f"Applying version {workfile_version}") + instance.data["version"] = workfile_version diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index 60d5d3ed4a..51e34312f2 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -154,7 +154,7 @@ def convert_value_by_type_name(value_type, value, logger=None): elif parts_len == 4: divisor = 2 elif parts_len == 9: - divisor == 3 + divisor = 3 elif parts_len == 16: divisor = 4 else: diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index 7c486b7c34..44f2b5b2b4 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -734,7 +734,7 @@ def _format_tiles( Result for tile 0 for 4x4 will be: `maya///_tile_1x1_4x4__` - Calculating coordinates is tricky as in Job they are defined as top, + Calculating coordinates is tricky as in Job they are defined as top, left, bottom, right with zero being in top-left corner. But Assembler configuration file takes tile coordinates as X, Y, Width and Height and zero is bottom left corner. @@ -743,13 +743,13 @@ def _format_tiles( filename (str): Filename to process as tiles. index (int): Index of that file if it is sequence. tiles_x (int): Number of tiles in X. - tiles_y (int): Number if tikes in Y. + tiles_y (int): Number of tiles in Y. width (int): Width resolution of final image. height (int): Height resolution of final image. prefix (str): Image prefix. Returns: - (dict, dict): Tuple of two dictionaires - first can be used to + (dict, dict): Tuple of two dictionaries - first can be used to extend JobInfo, second has tiles x, y, width and height used for assembler configuration. 
@@ -776,21 +776,24 @@ def _format_tiles( tiles_x, tiles_y ) - top = height - (tile_y * h_space) - bottom = height - ((tile_y - 1) * h_space) - 1 - left = (tile_x - 1) * w_space - right = (tile_x * w_space) - 1 - # Job Info new_filename = "{}/{}{}".format( os.path.dirname(filename), tile_prefix, os.path.basename(filename) ) - out["JobInfo"]["OutputFilename{}Tile{}".format(index, tile)] = new_filename # noqa + + top = height - (tile_y * h_space) + bottom = height - ((tile_y - 1) * h_space) - 1 + left = (tile_x - 1) * w_space + right = (tile_x * w_space) - 1 + + # Job info + out["JobInfo"]["OutputFilename{}Tile{}".format(index, tile)] = new_filename # noqa: E501 # Plugin Info - out["PluginInfo"]["RegionPrefix{}".format(tile)] = "/{}".format(tile_prefix).join(prefix.rsplit("/", 1)) # noqa: E501 + out["PluginInfo"]["RegionPrefix{}".format(str(tile))] = \ + "/{}".format(tile_prefix).join(prefix.rsplit("/", 1)) out["PluginInfo"]["RegionTop{}".format(tile)] = top out["PluginInfo"]["RegionBottom{}".format(tile)] = bottom out["PluginInfo"]["RegionLeft{}".format(tile)] = left diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index c9d1daffd1..aba505b3c6 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -778,7 +778,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "resolutionHeight": data.get("resolutionHeight", 1080), "multipartExr": data.get("multipartExr", False), "jobBatchName": data.get("jobBatchName", ""), - "useSequenceForReview": data.get("useSequenceForReview", True) + "useSequenceForReview": data.get("useSequenceForReview", True), + # map inputVersions `ObjectId` -> `str` so json supports it + "inputVersions": list(map(str, data.get("inputVersions", []))) } # skip locking version if we are creating v01 diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py index 9fca1b5391..0d356785f9 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py @@ -71,7 +71,7 @@ def convert_value_by_type_name(value_type, value): elif parts_len == 4: divisor = 2 elif parts_len == 9: - divisor == 3 + divisor = 3 elif parts_len == 16: divisor = 4 else: @@ -453,7 +453,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): # Swap to have input as foreground args.append("--swap") # Paste foreground to background - args.append("--paste +{}+{}".format(pos_x, pos_y)) + args.append("--paste {x:+d}{y:+d}".format(x=pos_x, y=pos_y)) args.append("-o") args.append(output_path) diff --git a/openpype/modules/ftrack/__init__.py b/openpype/modules/ftrack/__init__.py index 7261254c6f..e520f08337 100644 --- a/openpype/modules/ftrack/__init__.py +++ b/openpype/modules/ftrack/__init__.py @@ -1,9 +1,13 @@ from .ftrack_module import ( FtrackModule, - FTRACK_MODULE_DIR + FTRACK_MODULE_DIR, + + resolve_ftrack_url, ) __all__ = ( "FtrackModule", - "FTRACK_MODULE_DIR" + "FTRACK_MODULE_DIR", + + "resolve_ftrack_url", ) diff --git a/openpype/modules/ftrack/ftrack_module.py b/openpype/modules/ftrack/ftrack_module.py index cb4f204523..75ffd7f864 100644 --- a/openpype/modules/ftrack/ftrack_module.py +++ 
b/openpype/modules/ftrack/ftrack_module.py @@ -6,14 +6,16 @@ import platform import click from openpype.modules import OpenPypeModule -from openpype_interfaces import ( +from openpype.modules.interfaces import ( ITrayModule, IPluginPaths, ISettingsChangeListener ) from openpype.settings import SaveWarningExc +from openpype.lib import Logger FTRACK_MODULE_DIR = os.path.dirname(os.path.abspath(__file__)) +_URL_NOT_SET = object() class FtrackModule( @@ -28,17 +30,8 @@ class FtrackModule( ftrack_settings = settings[self.name] self.enabled = ftrack_settings["enabled"] - # Add http schema - ftrack_url = ftrack_settings["ftrack_server"].strip("/ ") - if ftrack_url: - if "http" not in ftrack_url: - ftrack_url = "https://" + ftrack_url - - # Check if "ftrack.app" is part os url - if "ftrackapp.com" not in ftrack_url: - ftrack_url = ftrack_url + ".ftrackapp.com" - - self.ftrack_url = ftrack_url + self._settings_ftrack_url = ftrack_settings["ftrack_server"] + self._ftrack_url = _URL_NOT_SET current_dir = os.path.dirname(os.path.abspath(__file__)) low_platform = platform.system().lower() @@ -70,6 +63,16 @@ class FtrackModule( self.timers_manager_connector = None self._timers_manager_module = None + def get_ftrack_url(self): + if self._ftrack_url is _URL_NOT_SET: + self._ftrack_url = resolve_ftrack_url( + self._settings_ftrack_url, + logger=self.log + ) + return self._ftrack_url + + ftrack_url = property(get_ftrack_url) + def get_global_environments(self): """Ftrack's global environments.""" return { @@ -479,6 +482,51 @@ class FtrackModule( click_group.add_command(cli_main) +def _check_ftrack_url(url): + import requests + + try: + result = requests.get(url, allow_redirects=False) + except requests.exceptions.RequestException: + return False + + if (result.status_code != 200 or "FTRACK_VERSION" not in result.headers): + return False + return True + + +def resolve_ftrack_url(url, logger=None): + """Checks if Ftrack server is responding.""" + + if logger is None: + logger = Logger.get_logger(__name__) + + url = url.strip("/ ") + if not url: + logger.error("Ftrack URL is not set!") + return None + + if not url.startswith("http"): + url = "https://" + url + + ftrack_url = None + if not url.endswith("ftrackapp.com"): + ftrackapp_url = url + ".ftrackapp.com" + if _check_ftrack_url(ftrackapp_url): + ftrack_url = ftrackapp_url + + if not ftrack_url and _check_ftrack_url(url): + ftrack_url = url + + if ftrack_url: + logger.debug("Ftrack server \"{}\" is accessible.".format(ftrack_url)) + + else: + logger.error("Ftrack server \"{}\" is not accessible!".format(url)) + + return ftrack_url + + @click.group(FtrackModule.name, help="Ftrack module related commands.") def cli_main(): pass diff --git a/openpype/modules/ftrack/ftrack_server/__init__.py b/openpype/modules/ftrack/ftrack_server/__init__.py index 9e3920b500..8e5f7c4c51 100644 --- a/openpype/modules/ftrack/ftrack_server/__init__.py +++ b/openpype/modules/ftrack/ftrack_server/__init__.py @@ -1,8 +1,6 @@ from .ftrack_server import FtrackServer -from .lib import check_ftrack_url __all__ = ( "FtrackServer", - "check_ftrack_url" ) diff --git a/openpype/modules/ftrack/ftrack_server/event_server_cli.py b/openpype/modules/ftrack/ftrack_server/event_server_cli.py index 3ef7c8270a..20c5ab24a8 100644 --- a/openpype/modules/ftrack/ftrack_server/event_server_cli.py +++ b/openpype/modules/ftrack/ftrack_server/event_server_cli.py @@ -20,9 +20,11 @@ from openpype.lib import ( get_openpype_version, get_build_version, ) -from openpype_modules.ftrack import FTRACK_MODULE_DIR 
+from openpype_modules.ftrack import ( + FTRACK_MODULE_DIR, + resolve_ftrack_url, +) from openpype_modules.ftrack.lib import credentials -from openpype_modules.ftrack.ftrack_server.lib import check_ftrack_url from openpype_modules.ftrack.ftrack_server import socket_thread @@ -114,7 +116,7 @@ def legacy_server(ftrack_url): while True: if not ftrack_accessible: - ftrack_accessible = check_ftrack_url(ftrack_url) + ftrack_accessible = resolve_ftrack_url(ftrack_url) # Run threads only if Ftrack is accessible if not ftrack_accessible and not printed_ftrack_error: @@ -257,7 +259,7 @@ def main_loop(ftrack_url): while True: # Check if accessible Ftrack and Mongo url if not ftrack_accessible: - ftrack_accessible = check_ftrack_url(ftrack_url) + ftrack_accessible = resolve_ftrack_url(ftrack_url) if not mongo_accessible: mongo_accessible = check_mongo_url(mongo_uri) @@ -441,7 +443,7 @@ def run_event_server( os.environ["CLOCKIFY_API_KEY"] = clockify_api_key # Check url regex and accessibility - ftrack_url = check_ftrack_url(ftrack_url) + ftrack_url = resolve_ftrack_url(ftrack_url) if not ftrack_url: print('Exiting! < Please enter Ftrack server url >') return 1 diff --git a/openpype/modules/ftrack/ftrack_server/lib.py b/openpype/modules/ftrack/ftrack_server/lib.py index 947dacf917..c8143f739c 100644 --- a/openpype/modules/ftrack/ftrack_server/lib.py +++ b/openpype/modules/ftrack/ftrack_server/lib.py @@ -26,45 +26,12 @@ except ImportError: from openpype_modules.ftrack.lib import get_ftrack_event_mongo_info from openpype.client import OpenPypeMongoConnection -from openpype.api import Logger +from openpype.lib import Logger TOPIC_STATUS_SERVER = "openpype.event.server.status" TOPIC_STATUS_SERVER_RESULT = "openpype.event.server.status.result" -def check_ftrack_url(url, log_errors=True, logger=None): - """Checks if Ftrack server is responding""" - if logger is None: - logger = Logger.get_logger(__name__) - - if not url: - logger.error("Ftrack URL is not set!") - return None - - url = url.strip('/ ') - - if 'http' not in url: - if url.endswith('ftrackapp.com'): - url = 'https://' + url - else: - url = 'https://{0}.ftrackapp.com'.format(url) - try: - result = requests.get(url, allow_redirects=False) - except requests.exceptions.RequestException: - if log_errors: - logger.error("Entered Ftrack URL is not accesible!") - return False - - if (result.status_code != 200 or 'FTRACK_VERSION' not in result.headers): - if log_errors: - logger.error("Entered Ftrack URL is not accesible!") - return False - - logger.debug("Ftrack server {} is accessible.".format(url)) - - return url - - class SocketBaseEventHub(ftrack_api.event.hub.EventHub): hearbeat_msg = b"hearbeat" diff --git a/openpype/modules/ftrack/lib/avalon_sync.py b/openpype/modules/ftrack/lib/avalon_sync.py index 72be6a8e9a..935d1e85c9 100644 --- a/openpype/modules/ftrack/lib/avalon_sync.py +++ b/openpype/modules/ftrack/lib/avalon_sync.py @@ -19,11 +19,8 @@ from openpype.client.operations import ( CURRENT_PROJECT_SCHEMA, CURRENT_PROJECT_CONFIG_SCHEMA, ) -from openpype.api import ( - Logger, - get_anatomy_settings -) -from openpype.lib import ApplicationManager +from openpype.settings import get_anatomy_settings +from openpype.lib import ApplicationManager, Logger from openpype.pipeline import AvalonMongoDB, schema from .constants import CUST_ATTR_ID_KEY, FPS_KEYS diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py index 6024781d87..5ff75e7060 100644 --- 
a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_instances.py @@ -35,7 +35,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin): family_mapping = { "camera": "cam", "look": "look", - "mayaascii": "scene", + "mayaAscii": "scene", "model": "geo", "rig": "rig", "setdress": "setdress", @@ -74,11 +74,15 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin): version_number = int(instance_version) family = instance.data["family"] - family_low = family.lower() + # Perform case-insensitive family mapping + family_low = family.lower() asset_type = instance.data.get("ftrackFamily") - if not asset_type and family_low in self.family_mapping: - asset_type = self.family_mapping[family_low] + if not asset_type: + for map_family, map_value in self.family_mapping.items(): + if map_family.lower() == family_low: + asset_type = map_value + break if not asset_type: asset_type = "upload" @@ -86,15 +90,6 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin): self.log.debug( "Family: {}\nMapping: {}".format(family_low, self.family_mapping) ) - - # Ignore this instance if neither "ftrackFamily" or a family mapping is - # found. - if not asset_type: - self.log.info(( - "Family \"{}\" does not match any asset type mapping" - ).format(family)) - return - status_name = self._get_asset_version_status_name(instance) # Base of component item data diff --git a/openpype/modules/ftrack/tray/ftrack_tray.py b/openpype/modules/ftrack/tray/ftrack_tray.py index 501d837a4c..e3c6e30ead 100644 --- a/openpype/modules/ftrack/tray/ftrack_tray.py +++ b/openpype/modules/ftrack/tray/ftrack_tray.py @@ -6,22 +6,18 @@ import threading from Qt import QtCore, QtWidgets, QtGui import ftrack_api -from ..ftrack_server.lib import check_ftrack_url -from ..ftrack_server import socket_thread -from ..lib import credentials -from ..ftrack_module import FTRACK_MODULE_DIR -from . import login_dialog - from openpype import resources from openpype.lib import Logger - - -log = Logger.get_logger("FtrackModule") +from openpype_modules.ftrack import resolve_ftrack_url, FTRACK_MODULE_DIR +from openpype_modules.ftrack.ftrack_server import socket_thread +from openpype_modules.ftrack.lib import credentials +from . import login_dialog class FtrackTrayWrapper: def __init__(self, module): self.module = module + self.log = Logger.get_logger(self.__class__.__name__) self.thread_action_server = None self.thread_socket_server = None @@ -62,19 +58,19 @@ class FtrackTrayWrapper: if validation: self.widget_login.set_credentials(ft_user, ft_api_key) self.module.set_credentials_to_env(ft_user, ft_api_key) - log.info("Connected to Ftrack successfully") + self.log.info("Connected to Ftrack successfully") self.on_login_change() return validation if not validation and ft_user and ft_api_key: - log.warning( + self.log.warning( "Current Ftrack credentials are not valid. 
{}: {} - {}".format( str(os.environ.get("FTRACK_SERVER")), ft_user, ft_api_key ) ) - log.info("Please sign in to Ftrack") + self.log.info("Please sign in to Ftrack") self.bool_logged = False self.show_login_widget() self.set_menu_visibility() @@ -104,7 +100,7 @@ class FtrackTrayWrapper: self.action_credentials.setIcon(self.icon_not_logged) self.action_credentials.setToolTip("Logged out") - log.info("Logged out of Ftrack") + self.log.info("Logged out of Ftrack") self.bool_logged = False self.set_menu_visibility() @@ -126,10 +122,6 @@ class FtrackTrayWrapper: ftrack_url = self.module.ftrack_url os.environ["FTRACK_SERVER"] = ftrack_url - parent_file_path = os.path.dirname( - os.path.dirname(os.path.realpath(__file__)) - ) - min_fail_seconds = 5 max_fail_count = 3 wait_time_after_max_fail = 10 @@ -154,17 +146,19 @@ class FtrackTrayWrapper: # Main loop while True: if not self.bool_action_server_running: - log.debug("Action server was pushed to stop.") + self.log.debug("Action server was pushed to stop.") break # Check if accessible Ftrack and Mongo url if not ftrack_accessible: - ftrack_accessible = check_ftrack_url(ftrack_url) + ftrack_accessible = resolve_ftrack_url(ftrack_url) # Run threads only if Ftrack is accessible if not ftrack_accessible: if not printed_ftrack_error: - log.warning("Can't access Ftrack {}".format(ftrack_url)) + self.log.warning( + "Can't access Ftrack {}".format(ftrack_url) + ) if self.thread_socket_server is not None: self.thread_socket_server.stop() @@ -191,7 +185,7 @@ class FtrackTrayWrapper: self.set_menu_visibility() elif failed_count == max_fail_count: - log.warning(( + self.log.warning(( "Action server failed {} times." " I'll try to run again {}s later" ).format( @@ -243,10 +237,10 @@ class FtrackTrayWrapper: self.thread_action_server.join() self.thread_action_server = None - log.info("Ftrack action server was forced to stop") + self.log.info("Ftrack action server was forced to stop") except Exception: - log.warning( + self.log.warning( "Error has happened during Killing action server", exc_info=True ) @@ -343,7 +337,7 @@ class FtrackTrayWrapper: self.thread_timer = None except Exception as e: - log.error("During Killing Timer event server: {0}".format(e)) + self.log.error("During Killing Timer event server: {0}".format(e)) def changed_user(self): self.stop_action_server() diff --git a/openpype/pipeline/workfile/lock_workfile.py b/openpype/pipeline/workfile/lock_workfile.py new file mode 100644 index 0000000000..fbec44247a --- /dev/null +++ b/openpype/pipeline/workfile/lock_workfile.py @@ -0,0 +1,82 @@ +import os +import json +from uuid import uuid4 +from openpype.lib import Logger, filter_profiles +from openpype.lib.pype_info import get_workstation_info +from openpype.settings import get_project_settings + + +def _read_lock_file(lock_filepath): + if not os.path.exists(lock_filepath): + log = Logger.get_logger("_read_lock_file") + log.debug("lock file is not created or readable as expected!") + with open(lock_filepath, "r") as stream: + data = json.load(stream) + return data + + +def _get_lock_file(filepath): + return filepath + ".oplock" + + +def is_workfile_locked(filepath): + lock_filepath = _get_lock_file(filepath) + if not os.path.exists(lock_filepath): + return False + return True + + +def get_workfile_lock_data(filepath): + lock_filepath = _get_lock_file(filepath) + return _read_lock_file(lock_filepath) + + +def is_workfile_locked_for_current_process(filepath): + if not is_workfile_locked(filepath): + return False + + lock_filepath = 
_get_lock_file(filepath) + data = _read_lock_file(lock_filepath) + return data["process_id"] == _get_process_id() + + +def delete_workfile_lock(filepath): + lock_filepath = _get_lock_file(filepath) + if os.path.exists(lock_filepath): + os.remove(lock_filepath) + + +def create_workfile_lock(filepath): + lock_filepath = _get_lock_file(filepath) + info = get_workstation_info() + info["process_id"] = _get_process_id() + with open(lock_filepath, "w") as stream: + json.dump(info, stream) + + +def remove_workfile_lock(filepath): + if is_workfile_locked_for_current_process(filepath): + delete_workfile_lock(filepath) + + +def _get_process_id(): + process_id = os.environ.get("OPENPYPE_PROCESS_ID") + if not process_id: + process_id = str(uuid4()) + os.environ["OPENPYPE_PROCESS_ID"] = process_id + return process_id + + +def is_workfile_lock_enabled(host_name, project_name, project_setting=None): + if project_setting is None: + project_setting = get_project_settings(project_name) + workfile_lock_profiles = ( + project_setting + ["global"] + ["tools"] + ["Workfiles"] + ["workfile_lock_profiles"]) + profile = filter_profiles(workfile_lock_profiles, {"host_name": host_name}) + if not profile: + return False + return profile["enabled"] diff --git a/openpype/plugins/publish/collect_input_representations_to_versions.py b/openpype/plugins/publish/collect_input_representations_to_versions.py new file mode 100644 index 0000000000..18a19bce80 --- /dev/null +++ b/openpype/plugins/publish/collect_input_representations_to_versions.py @@ -0,0 +1,47 @@ +import pyblish.api + +from bson.objectid import ObjectId + +from openpype.client import get_representations + + +class CollectInputRepresentationsToVersions(pyblish.api.ContextPlugin): + """Converts collected input representations to input versions. + + Any data in `instance.data["inputRepresentations"]` gets converted into + `instance.data["inputVersions"]` as supported in OpenPype v3. 
+ + """ + # This is a ContextPlugin because then we can query the database only once + # for the conversion of representation ids to version ids (optimization) + label = "Input Representations to Versions" + order = pyblish.api.CollectorOrder + 0.499 + hosts = ["*"] + + def process(self, context): + # Query all version ids for representation ids from the database once + representations = set() + for instance in context: + inst_repre = instance.data.get("inputRepresentations", []) + representations.update(inst_repre) + + representations_docs = get_representations( + project_name=context.data["projectEntity"]["name"], + representation_ids=representations, + fields=["_id", "parent"]) + + representation_id_to_version_id = { + repre["_id"]: repre["parent"] for repre in representations_docs + } + + for instance in context: + inst_repre = instance.data.get("inputRepresentations", []) + if not inst_repre: + continue + + input_versions = instance.data.get("inputVersions", []) + for repre_id in inst_repre: + repre_id = ObjectId(repre_id) + version_id = representation_id_to_version_id[repre_id] + input_versions.append(version_id) + instance.data["inputVersions"] = input_versions diff --git a/openpype/plugins/publish/extract_otio_file.py b/openpype/plugins/publish/extract_otio_file.py index c692205d81..1a6a82117d 100644 --- a/openpype/plugins/publish/extract_otio_file.py +++ b/openpype/plugins/publish/extract_otio_file.py @@ -16,6 +16,8 @@ class ExtractOTIOFile(publish.Extractor): hosts = ["resolve", "hiero", "traypublisher"] def process(self, instance): + if not instance.context.data.get("otioTimeline"): + return # create representation data if "representations" not in instance.data: instance.data["representations"] = [] diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index 96d768e1c1..c0760a5471 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -577,8 +577,11 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): return except OSError as exc: - # re-raise exception if different than cross drive path - if exc.errno != errno.EXDEV: + # re-raise exception if different than + # EXDEV - cross drive path + # EINVAL - wrong format, must be NTFS + self.log.debug("Hardlink failed with errno:'{}'".format(exc.errno)) + if exc.errno not in [errno.EXDEV, errno.EINVAL]: raise shutil.copy(src_path, dst_path) diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index 85561495fd..f65d969c53 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -187,7 +187,7 @@ class PypeCommands: (to choose validator for example) """ - from openpype.hosts.webpublisher.cli_functions import ( + from openpype.hosts.webpublisher.publish_functions import ( cli_publish_from_app ) diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index 2720e0286d..7acecfaae0 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -36,35 +36,35 @@ "layout" ] }, - "ExtractBlendAnimation": { - "enabled": true, - "optional": true, - "active": true - }, - "ExtractCamera": { - "enabled": true, - "optional": true, - "active": true - }, "ExtractFBX": { "enabled": true, "optional": true, "active": false }, - "ExtractAnimationFBX": { - "enabled": true, - "optional": true, - "active": false - }, "ExtractABC": { "enabled": true, 
"optional": true, "active": false }, + "ExtractBlendAnimation": { + "enabled": true, + "optional": true, + "active": true + }, + "ExtractAnimationFBX": { + "enabled": true, + "optional": true, + "active": false + }, + "ExtractCamera": { + "enabled": true, + "optional": true, + "active": true + }, "ExtractLayout": { "enabled": true, "optional": true, "active": false } } -} +} \ No newline at end of file diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json index 09b194e21c..cdf861df4a 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -455,7 +455,7 @@ "family_mapping": { "camera": "cam", "look": "look", - "mayaascii": "scene", + "mayaAscii": "scene", "model": "geo", "rig": "rig", "setdress": "setdress", diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 14957d2b48..115a719995 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -407,7 +407,8 @@ "enabled": false } ], - "extra_folders": [] + "extra_folders": [], + "workfile_lock_profiles": [] }, "loader": { "family_filter_profiles": [ diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index b7d2104ba1..cdf829db57 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -1,4 +1,15 @@ { + "shelves": [ + { + "shelf_set_name": "OpenPype Shelves", + "shelf_set_source_path": { + "windows": "", + "darwin": "", + "linux": "" + }, + "shelf_definition": [] + } + ], "create": { "CreateArnoldAss": { "enabled": true, diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 99ba4cdd5c..76ef0a7338 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -49,7 +49,7 @@ "vray_renderer": { "image_prefix": "maya///", "engine": "1", - "image_format": "png", + "image_format": "exr", "aov_list": [], "additional_options": [] }, @@ -57,7 +57,7 @@ "image_prefix": "maya///", "primary_gi_engine": "0", "secondary_gi_engine": "0", - "image_format": "iff", + "image_format": "exr", "multilayer_exr": true, "force_combine": true, "aov_list": [], @@ -678,25 +678,20 @@ "isolate_view": true, "off_screen": true }, - "PanZoom": { - "pan_zoom": true - }, "Renderer": { "rendererName": "vp2Renderer" }, "Resolution": { "width": 1920, - "height": 1080, - "percent": 1.0, - "mode": "Custom" + "height": 1080 }, "Viewport Options": { "override_viewport_options": true, "displayLights": "default", + "displayTextures": true, "textureMaxResolution": 1024, "renderDepthOfField": true, "shadows": true, - "textures": true, "twoSidedLighting": true, "lineAAEnable": true, "multiSample": 8, @@ -719,7 +714,6 @@ "motionBlurShutterOpenFraction": 0.2, "cameras": false, "clipGhosts": false, - "controlVertices": false, "deformers": false, "dimensions": false, "dynamicConstraints": false, @@ -731,8 +725,7 @@ "grid": false, "hairSystems": true, "handles": false, - "hud": false, - "hulls": false, + "headsUpDisplay": false, "ikHandles": false, "imagePlane": true, "joints": false, @@ -743,7 +736,9 @@ "nCloths": false, "nParticles": false, "nRigids": false, + "controlVertices": false, "nurbsCurves": 
false, + "hulls": false, "nurbsSurfaces": false, "particleInstancers": false, "pivots": false, @@ -751,7 +746,8 @@ "pluginShapes": false, "polymeshes": true, "strokes": false, - "subdivSurfaces": false + "subdivSurfaces": false, + "textures": false }, "Camera Options": { "displayGateMask": false, diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json index 552c2c9cad..9d74df7cd5 100644 --- a/openpype/settings/defaults/project_settings/photoshop.json +++ b/openpype/settings/defaults/project_settings/photoshop.json @@ -15,6 +15,9 @@ "CollectInstances": { "flatten_subset_template": "" }, + "CollectVersion": { + "enabled": false + }, "ValidateContainers": { "enabled": true, "optional": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json index d8728c0f4b..808f154226 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json @@ -5,6 +5,10 @@ "label": "Houdini", "is_file": true, "children": [ + { + "type": "schema", + "name": "schema_houdini_scriptshelf" + }, { "type": "schema", "name": "schema_houdini_create" @@ -14,4 +18,4 @@ "name": "schema_houdini_publish" } ] -} +} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json index 7aa49c99a4..e63e25d2c2 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json @@ -131,6 +131,23 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "CollectVersion", + "label": "Collect Version", + "children": [ + { + "type": "label", + "label": "Synchronize version for image and review instances by workfile version." 
+ }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + } + ] + }, { "type": "schema_template", "name": "template_publish_plugin", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json index c919cd73c5..ba446135e2 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json @@ -238,6 +238,31 @@ } ] } + }, + { + "type": "list", + "key": "workfile_lock_profiles", + "label": "Workfile lock profiles", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "hosts-enum", + "key": "host_name", + "label": "Hosts", + "multiselection": true + }, + { + "type": "splitter" + }, + { + "key": "enabled", + "label": "Enabled", + "type": "boolean" + } + ] + } } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_scriptshelf.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_scriptshelf.json new file mode 100644 index 0000000000..bab9b604b4 --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_scriptshelf.json @@ -0,0 +1,71 @@ +{ + "type": "list", + "key": "shelves", + "label": "Shelves Manager", + "is_group": true, + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "text", + "key": "shelf_set_name", + "label": "Shelf Set Name" + }, + { + "type": "path", + "key": "shelf_set_source_path", + "label": "Shelf Set Path (optional)", + "multipath": false, + "multiplatform": true + }, + { + "type": "list", + "key": "shelf_definition", + "label": "Shelves", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "text", + "key": "shelf_name", + "label": "Shelf Name" + }, + { + "type": "list", + "key": "tools_list", + "label": "Tools", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "text", + "key": "label", + "label": "Name" + }, + { + "type": "path", + "key": "script", + "label": "Script" + }, + { + "type": "path", + "key": "icon", + "label": "Icon" + }, + { + "type": "text", + "key": "help", + "label": "Help" + } + ] + } + } + ] + } + } + ] + } +} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json index 7a40f349cc..62c33f55fc 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json @@ -94,18 +94,6 @@ } ] }, - - { - "type": "dict", - "key": "PanZoom", - "children": [ - { - "type": "boolean", - "key": "pan_zoom", - "label": " Pan Zoom" - } - ] - }, { "type": "splitter" }, @@ -153,19 +141,6 @@ "decimal": 0, "minimum": 0, "maximum": 99999 - }, - { - "type": "number", - "key": "percent", - "label": "percent", - "decimal": 1, - "minimum": 0, - "maximum": 200 - }, - { - "type": "text", - "key": "mode", - "label": "Mode" } ] }, @@ -195,6 +170,11 @@ { "nolights": "No Lights"} ] }, + { + "type": "boolean", + "key": "displayTextures", + "label": "Display Textures" + }, { "type": "number", "key": "textureMaxResolution", @@ -217,11 +197,6 @@ "key": "shadows", "label": "Display Shadows" }, - { - "type": 
"boolean", - "key": "textures", - "label": "Display Textures" - }, { "type": "boolean", "key": "twoSidedLighting", @@ -369,120 +344,114 @@ { "type": "splitter" }, + { + "type": "label", + "label": "Show" + }, { "type": "boolean", "key": "cameras", - "label": "cameras" + "label": "Cameras" }, { "type": "boolean", "key": "clipGhosts", - "label": "clipGhosts" - }, - { - "type": "boolean", - "key": "controlVertices", - "label": "controlVertices" + "label": "Clip Ghosts" }, { "type": "boolean", "key": "deformers", - "label": "deformers" + "label": "Deformers" }, { "type": "boolean", "key": "dimensions", - "label": "dimensions" + "label": "Dimensions" }, { "type": "boolean", "key": "dynamicConstraints", - "label": "dynamicConstraints" + "label": "Dynamic Constraints" }, { "type": "boolean", "key": "dynamics", - "label": "dynamics" + "label": "Dynamics" }, { "type": "boolean", "key": "fluids", - "label": "fluids" + "label": "Fluids" }, { "type": "boolean", "key": "follicles", - "label": "follicles" + "label": "Follicles" }, { "type": "boolean", "key": "gpuCacheDisplayFilter", - "label": "gpuCacheDisplayFilter" + "label": "GPU Cache" }, { "type": "boolean", "key": "greasePencils", - "label": "greasePencils" + "label": "Grease Pencil" }, { "type": "boolean", "key": "grid", - "label": "grid" + "label": "Grid" }, { "type": "boolean", "key": "hairSystems", - "label": "hairSystems" + "label": "Hair Systems" }, { "type": "boolean", "key": "handles", - "label": "handles" + "label": "Handles" }, { "type": "boolean", - "key": "hud", - "label": "hud" - }, - { - "type": "boolean", - "key": "hulls", - "label": "hulls" + "key": "headsUpDisplay", + "label": "HUD" }, { "type": "boolean", "key": "ikHandles", - "label": "ikHandles" + "label": "IK Handles" }, { "type": "boolean", "key": "imagePlane", - "label": "imagePlane" + "label": "Image Planes" }, { "type": "boolean", "key": "joints", - "label": "joints" + "label": "Joints" }, { "type": "boolean", "key": "lights", - "label": "lights" + "label": "Lights" }, { "type": "boolean", "key": "locators", - "label": "locators" + "label": "Locators" }, { "type": "boolean", "key": "manipulators", - "label": "manipulators" + "label": "Manipulators" }, { "type": "boolean", "key": "motionTrails", - "label": "motionTrails" + "label": "Motion Trails" }, { "type": "boolean", @@ -499,50 +468,65 @@ "key": "nRigids", "label": "nRigids" }, + { + "type": "boolean", + "key": "controlVertices", + "label": "NURBS CVs" + }, { "type": "boolean", "key": "nurbsCurves", - "label": "nurbsCurves" + "label": "NURBS Curves" + }, + { + "type": "boolean", + "key": "hulls", + "label": "NURBS Hulls" }, { "type": "boolean", "key": "nurbsSurfaces", - "label": "nurbsSurfaces" + "label": "NURBS Surfaces" }, { "type": "boolean", "key": "particleInstancers", - "label": "particleInstancers" + "label": "Particle Instancers" }, { "type": "boolean", "key": "pivots", - "label": "pivots" + "label": "Pivots" }, { "type": "boolean", "key": "planes", - "label": "planes" + "label": "Planes" }, { "type": "boolean", "key": "pluginShapes", - "label": "pluginShapes" + "label": "Plugin Shapes" }, { "type": "boolean", "key": "polymeshes", - "label": "polymeshes" + "label": "Polygons" }, { "type": "boolean", "key": "strokes", - "label": "strokes" + "label": "Strokes" }, { "type": "boolean", "key": "subdivSurfaces", - "label": "subdivSurfaces" + "label": "Subdiv Surfaces" + }, + { + "type": "boolean", + "key": "textures", + "label": "Texture Placements" } ] }, @@ -555,47 +539,47 @@ { "type": "boolean", "key": 
"displayGateMask", - "label": "displayGateMask" + "label": "Display Gate Mask" }, { "type": "boolean", "key": "displayResolution", - "label": "displayResolution" + "label": "Display Resolution" }, { "type": "boolean", "key": "displayFilmGate", - "label": "displayFilmGate" + "label": "Display Film Gate" }, { "type": "boolean", "key": "displayFieldChart", - "label": "displayFieldChart" + "label": "Display Field Chart" }, { "type": "boolean", "key": "displaySafeAction", - "label": "displaySafeAction" + "label": "Display Safe Action" }, { "type": "boolean", "key": "displaySafeTitle", - "label": "displaySafeTitle" + "label": "Display Safe Title" }, { "type": "boolean", "key": "displayFilmPivot", - "label": "displayFilmPivot" + "label": "Display Film Pivot" }, { "type": "boolean", "key": "displayFilmOrigin", - "label": "displayFilmOrigin" + "label": "Display Film Origin" }, { "type": "number", "key": "overscan", - "label": "overscan", + "label": "Overscan", "decimal": 1, "minimum": 0, "maximum": 10 diff --git a/openpype/tools/loader/delegates.py b/openpype/tools/loader/delegates.py new file mode 100644 index 0000000000..e6663d48f1 --- /dev/null +++ b/openpype/tools/loader/delegates.py @@ -0,0 +1,28 @@ +from Qt import QtWidgets, QtGui, QtCore + + +class LoadedInSceneDelegate(QtWidgets.QStyledItemDelegate): + """Delegate for Loaded in Scene state columns. + + Shows "yes" or "no" for True or False values + Colorizes green or dark grey based on True or False values + + """ + + def __init__(self, *args, **kwargs): + super(LoadedInSceneDelegate, self).__init__(*args, **kwargs) + self._colors = { + True: QtGui.QColor(80, 170, 80), + False: QtGui.QColor(90, 90, 90) + } + + def displayText(self, value, locale): + return "yes" if value else "no" + + def initStyleOption(self, option, index): + super(LoadedInSceneDelegate, self).initStyleOption(option, index) + + # Colorize based on value + value = index.data(QtCore.Qt.DisplayRole) + color = self._colors[bool(value)] + option.palette.setBrush(QtGui.QPalette.Text, color) diff --git a/openpype/tools/loader/model.py b/openpype/tools/loader/model.py index 929e497890..77a8669c46 100644 --- a/openpype/tools/loader/model.py +++ b/openpype/tools/loader/model.py @@ -17,6 +17,7 @@ from openpype.client import ( get_representations ) from openpype.pipeline import ( + registered_host, HeroVersionType, schema, ) @@ -24,6 +25,7 @@ from openpype.pipeline import ( from openpype.style import get_default_entity_icon_color from openpype.tools.utils.models import TreeModel, Item from openpype.tools.utils import lib +from openpype.host import ILoadHost from openpype.modules import ModulesManager from openpype.tools.utils.constants import ( @@ -136,6 +138,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "duration", "handles", "step", + "loaded_in_scene", "repre_info" ] @@ -150,6 +153,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "duration": "Duration", "handles": "Handles", "step": "Step", + "loaded_in_scene": "In scene", "repre_info": "Availability" } @@ -231,8 +235,14 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): self._doc_fetching_stop = False self._doc_payload = {} - self.doc_fetched.connect(self._on_doc_fetched) + self._host = registered_host() + self._loaded_representation_ids = set() + # Refresh loaded scene containers only every 3 seconds at most + self._host_loaded_refresh_timeout = 3 + self._host_loaded_refresh_time = 0 + + self.doc_fetched.connect(self._on_doc_fetched) self.refresh() def get_item_by_id(self, item_id): @@ -474,6 
+484,27 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): last_versions_by_subset_id[subset_id] = hero_version + # Check loaded subsets + loaded_subset_ids = set() + ids = self._loaded_representation_ids + if ids: + if self._doc_fetching_stop: + return + + # Get subset ids from loaded representations in workfile + # todo: optimize with aggregation query to distinct subset id + representations = get_representations(project_name, + representation_ids=ids, + fields=["parent"]) + version_ids = set(repre["parent"] for repre in representations) + versions = get_versions(project_name, + version_ids=version_ids, + fields=["parent"]) + loaded_subset_ids = set(version["parent"] for version in versions) + + if self._doc_fetching_stop: + return + repre_info_by_version_id = {} if self.sync_server.enabled: versions_by_id = {} @@ -501,7 +532,8 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "subset_docs_by_id": subset_docs_by_id, "subset_families": subset_families, "last_versions_by_subset_id": last_versions_by_subset_id, - "repre_info_by_version_id": repre_info_by_version_id + "repre_info_by_version_id": repre_info_by_version_id, + "subsets_loaded_by_id": loaded_subset_ids } self.doc_fetched.emit() @@ -533,6 +565,20 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): self.doc_fetched.emit() return + # Collect scene container representations to compare loaded state + # This runs in the main thread because it involves the host DCC + if self._host: + time_since_refresh = time.time() - self._host_loaded_refresh_time + if time_since_refresh > self._host_loaded_refresh_timeout: + if isinstance(self._host, ILoadHost): + containers = self._host.get_containers() + else: + containers = self._host.ls() + + repre_ids = {con.get("representation") for con in containers} + self._loaded_representation_ids = repre_ids + self._host_loaded_refresh_time = time.time() + self.fetch_subset_and_version() def _on_doc_fetched(self): @@ -554,6 +600,10 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "repre_info_by_version_id" ) + subsets_loaded_by_id = self._doc_payload.get( + "subsets_loaded_by_id" + ) + if ( asset_docs_by_id is None or subset_docs_by_id is None @@ -568,7 +618,8 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): asset_docs_by_id, subset_docs_by_id, last_versions_by_subset_id, - repre_info_by_version_id + repre_info_by_version_id, + subsets_loaded_by_id ) self.endResetModel() self.refreshed.emit(True) @@ -596,8 +647,12 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): return merge_group def _fill_subset_items( - self, asset_docs_by_id, subset_docs_by_id, last_versions_by_subset_id, - repre_info_by_version_id + self, + asset_docs_by_id, + subset_docs_by_id, + last_versions_by_subset_id, + repre_info_by_version_id, + subsets_loaded_by_id ): _groups_tuple = self.groups_config.split_subsets_for_groups( subset_docs_by_id.values(), self._grouping @@ -621,6 +676,35 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "index": self.index(group_item.row(), 0) } + def _add_subset_item(subset_doc, parent_item, parent_index): + last_version = last_versions_by_subset_id.get( + subset_doc["_id"] + ) + # do not show subset without version + if not last_version: + return + + data = copy.deepcopy(subset_doc) + data["subset"] = subset_doc["name"] + + asset_id = subset_doc["parent"] + data["asset"] = asset_docs_by_id[asset_id]["name"] + + data["last_version"] = last_version + data["loaded_in_scene"] = subset_doc["_id"] in subsets_loaded_by_id + + # Sync server data + 
data.update( + self._get_last_repre_info(repre_info_by_version_id, + last_version["_id"])) + + item = Item() + item.update(data) + self.add_child(item, parent_item) + + index = self.index(item.row(), 0, parent_index) + self.set_version(index, last_version) + subset_counter = 0 for group_name, subset_docs_by_name in subset_docs_by_group.items(): parent_item = group_item_by_name[group_name]["item"] @@ -643,31 +727,9 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): _parent_index = parent_index for subset_doc in subset_docs: - asset_id = subset_doc["parent"] - - data = copy.deepcopy(subset_doc) - data["subset"] = subset_name - data["asset"] = asset_docs_by_id[asset_id]["name"] - - last_version = last_versions_by_subset_id.get( - subset_doc["_id"] - ) - data["last_version"] = last_version - - # do not show subset without version - if not last_version: - continue - - data.update( - self._get_last_repre_info(repre_info_by_version_id, - last_version["_id"])) - - item = Item() - item.update(data) - self.add_child(item, _parent_item) - - index = self.index(item.row(), 0, _parent_index) - self.set_version(index, last_version) + _add_subset_item(subset_doc, + parent_item=_parent_item, + parent_index=_parent_index) for subset_name in sorted(subset_docs_without_group.keys()): subset_docs = subset_docs_without_group[subset_name] @@ -682,31 +744,9 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): subset_counter += 1 for subset_doc in subset_docs: - asset_id = subset_doc["parent"] - - data = copy.deepcopy(subset_doc) - data["subset"] = subset_name - data["asset"] = asset_docs_by_id[asset_id]["name"] - - last_version = last_versions_by_subset_id.get( - subset_doc["_id"] - ) - data["last_version"] = last_version - - # do not show subset without version - if not last_version: - continue - - data.update( - self._get_last_repre_info(repre_info_by_version_id, - last_version["_id"])) - - item = Item() - item.update(data) - self.add_child(item, parent_item) - - index = self.index(item.row(), 0, parent_index) - self.set_version(index, last_version) + _add_subset_item(subset_doc, + parent_item=parent_item, + parent_index=parent_index) def data(self, index, role): if not index.isValid(): diff --git a/openpype/tools/loader/widgets.py b/openpype/tools/loader/widgets.py index cbf5720803..c028aa4174 100644 --- a/openpype/tools/loader/widgets.py +++ b/openpype/tools/loader/widgets.py @@ -58,6 +58,7 @@ from .model import ( ITEM_ID_ROLE ) from . 
import lib +from .delegates import LoadedInSceneDelegate from openpype.tools.utils.constants import ( LOCAL_PROVIDER_ROLE, @@ -169,6 +170,7 @@ class SubsetWidget(QtWidgets.QWidget): ("duration", 60), ("handles", 55), ("step", 10), + ("loaded_in_scene", 25), ("repre_info", 65) ) @@ -234,6 +236,10 @@ class SubsetWidget(QtWidgets.QWidget): column = model.Columns.index("repre_info") view.setItemDelegateForColumn(column, avail_delegate) + loaded_in_scene_delegate = LoadedInSceneDelegate(view) + column = model.Columns.index("loaded_in_scene") + view.setItemDelegateForColumn(column, loaded_in_scene_delegate) + layout = QtWidgets.QVBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addLayout(top_bar_layout) diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 90a36b4f01..2a0e6e940a 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -30,8 +30,8 @@ from .widgets import ( class PublisherWindow(QtWidgets.QDialog): """Main window of publisher.""" - default_width = 1000 - default_height = 600 + default_width = 1200 + default_height = 700 def __init__(self, parent=None, reset_on_show=None): super(PublisherWindow, self).__init__(parent) diff --git a/openpype/tools/traypublisher/window.py b/openpype/tools/traypublisher/window.py index cc33287091..930c27ca9c 100644 --- a/openpype/tools/traypublisher/window.py +++ b/openpype/tools/traypublisher/window.py @@ -7,6 +7,7 @@ publishing plugins. """ from Qt import QtWidgets, QtCore +import qtawesome from openpype.pipeline import ( install_host, @@ -20,6 +21,27 @@ from openpype.tools.utils.models import ( ProjectSortFilterProxy ) +from openpype.tools.utils import PlaceholderLineEdit +import appdirs +from openpype.lib import JSONSettingRegistry + + +class TrayPublisherRegistry(JSONSettingRegistry): + """Class handling OpenPype general settings registry. + + Attributes: + vendor (str): Name used for path construction. + product (str): Additional name used for path construction. 
+ + """ + + def __init__(self): + self.vendor = "pypeclub" + self.product = "openpype" + name = "tray_publisher" + path = appdirs.user_data_dir(self.product, self.vendor) + super(TrayPublisherRegistry, self).__init__(name, path) + class StandaloneOverlayWidget(QtWidgets.QFrame): project_selected = QtCore.Signal(str) @@ -43,6 +65,7 @@ class StandaloneOverlayWidget(QtWidgets.QFrame): projects_model = ProjectModel(dbcon) projects_proxy = ProjectSortFilterProxy() projects_proxy.setSourceModel(projects_model) + projects_proxy.setFilterKeyColumn(0) projects_view = QtWidgets.QListView(content_widget) projects_view.setObjectName("ChooseProjectView") @@ -59,10 +82,17 @@ class StandaloneOverlayWidget(QtWidgets.QFrame): btns_layout.addWidget(cancel_btn, 0) btns_layout.addWidget(confirm_btn, 0) + txt_filter = PlaceholderLineEdit(content_widget) + txt_filter.setPlaceholderText("Quick filter projects..") + txt_filter.setClearButtonEnabled(True) + txt_filter.addAction(qtawesome.icon("fa.filter", color="gray"), + QtWidgets.QLineEdit.LeadingPosition) + content_layout = QtWidgets.QVBoxLayout(content_widget) content_layout.setContentsMargins(0, 0, 0, 0) content_layout.setSpacing(20) content_layout.addWidget(header_label, 0) + content_layout.addWidget(txt_filter, 0) content_layout.addWidget(projects_view, 1) content_layout.addLayout(btns_layout, 0) @@ -79,17 +109,40 @@ class StandaloneOverlayWidget(QtWidgets.QFrame): projects_view.doubleClicked.connect(self._on_double_click) confirm_btn.clicked.connect(self._on_confirm_click) cancel_btn.clicked.connect(self._on_cancel_click) + txt_filter.textChanged.connect(self._on_text_changed) self._projects_view = projects_view self._projects_model = projects_model + self._projects_proxy = projects_proxy self._cancel_btn = cancel_btn self._confirm_btn = confirm_btn + self._txt_filter = txt_filter self._publisher_window = publisher_window self._project_name = None def showEvent(self, event): self._projects_model.refresh() + # Sort projects after refresh + self._projects_proxy.sort(0) + + setting_registry = TrayPublisherRegistry() + try: + project_name = setting_registry.get_item("project_name") + except ValueError: + project_name = None + + if project_name: + index = None + src_index = self._projects_model.find_project(project_name) + if src_index is not None: + index = self._projects_proxy.mapFromSource(src_index) + if index: + mode = ( + QtCore.QItemSelectionModel.Select + | QtCore.QItemSelectionModel.Rows) + self._projects_view.selectionModel().select(index, mode) + self._cancel_btn.setVisible(self._project_name is not None) super(StandaloneOverlayWidget, self).showEvent(event) @@ -102,6 +155,10 @@ class StandaloneOverlayWidget(QtWidgets.QFrame): def _on_cancel_click(self): self._set_project(self._project_name) + def _on_text_changed(self): + self._projects_proxy.setFilterRegularExpression( + self._txt_filter.text()) + def set_selected_project(self): index = self._projects_view.currentIndex() @@ -119,6 +176,9 @@ class StandaloneOverlayWidget(QtWidgets.QFrame): self.setVisible(False) self.project_selected.emit(project_name) + setting_registry = TrayPublisherRegistry() + setting_registry.set_item("project_name", project_name) + class TrayPublishWindow(PublisherWindow): def __init__(self, *args, **kwargs): diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py index 52d15a59f7..d2f05d3302 100644 --- a/openpype/tools/utils/host_tools.py +++ b/openpype/tools/utils/host_tools.py @@ -32,6 +32,7 @@ class HostToolsHelper: self._workfiles_tool = None 
self._loader_tool = None self._creator_tool = None + self._publisher_tool = None self._subset_manager_tool = None self._scene_inventory_tool = None self._library_loader_tool = None @@ -193,7 +194,6 @@ class HostToolsHelper: library_loader_tool.showNormal() library_loader_tool.refresh() - def show_publish(self, parent=None): """Try showing the most desirable publish GUI @@ -269,6 +269,31 @@ class HostToolsHelper: dialog.activateWindow() dialog.showNormal() + def get_publisher_tool(self, parent): + """Create, cache and return publisher window.""" + + if self._publisher_tool is None: + from openpype.tools.publisher import PublisherWindow + + host = registered_host() + ILoadHost.validate_load_methods(host) + + publisher_window = PublisherWindow( + parent=parent or self._parent + ) + self._publisher_tool = publisher_window + + return self._publisher_tool + + def show_publisher_tool(self, parent=None): + with qt_app_context(): + dialog = self.get_publisher_tool(parent) + + dialog.show() + dialog.raise_() + dialog.activateWindow() + dialog.showNormal() + def get_tool_by_name(self, tool_name, parent=None, *args, **kwargs): """Show tool by it's name. @@ -298,6 +323,10 @@ class HostToolsHelper: elif tool_name == "publish": self.log.info("Can't return publish tool window.") + # "new" publisher + elif tool_name == "publisher": + return self.get_publisher_tool(parent, *args, **kwargs) + elif tool_name == "experimental_tools": return self.get_experimental_tools_dialog(parent, *args, **kwargs) @@ -335,6 +364,9 @@ class HostToolsHelper: elif tool_name == "publish": self.show_publish(parent, *args, **kwargs) + elif tool_name == "publisher": + self.show_publisher_tool(parent, *args, **kwargs) + elif tool_name == "experimental_tools": self.show_experimental_tools_dialog(parent, *args, **kwargs) @@ -414,6 +446,10 @@ def show_publish(parent=None): _SingletonPoint.show_tool_by_name("publish", parent) +def show_publisher(parent=None): + _SingletonPoint.show_tool_by_name("publisher", parent) + + def show_experimental_tools_dialog(parent=None): _SingletonPoint.show_tool_by_name("experimental_tools", parent) diff --git a/openpype/tools/utils/models.py b/openpype/tools/utils/models.py index 1faccef4dd..2e93d94d7e 100644 --- a/openpype/tools/utils/models.py +++ b/openpype/tools/utils/models.py @@ -330,11 +330,26 @@ class ProjectModel(QtGui.QStandardItemModel): if new_items: root_item.appendRows(new_items) + def find_project(self, project_name): + """ + Get index of 'project_name' value. 
+ + Args: + project_name (str): + Returns: + (QModelIndex) + """ + val = self._items_by_name.get(project_name) + if val: + return self.indexFromItem(val) + class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): def __init__(self, *args, **kwargs): super(ProjectSortFilterProxy, self).__init__(*args, **kwargs) self._filter_enabled = True + # Disable case sensitivity + self.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) def lessThan(self, left_index, right_index): if left_index.data(PROJECT_NAME_ROLE) is None: @@ -356,10 +371,14 @@ class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): def filterAcceptsRow(self, source_row, source_parent): index = self.sourceModel().index(source_row, 0, source_parent) + string_pattern = self.filterRegularExpression().pattern() if self._filter_enabled: result = self._custom_index_filter(index) if result is not None: - return result + project_name = index.data(PROJECT_NAME_ROLE) + if project_name is None: + return result + return string_pattern.lower() in project_name.lower() return super(ProjectSortFilterProxy, self).filterAcceptsRow( source_row, source_parent diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index b4f5e422bc..b7d31e4af4 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -8,9 +8,15 @@ from Qt import QtWidgets, QtCore from openpype.host import IWorkfileHost from openpype.client import get_asset_by_id +from openpype.pipeline.workfile.lock_workfile import ( + is_workfile_locked, + is_workfile_lock_enabled, + is_workfile_locked_for_current_process +) from openpype.tools.utils import PlaceholderLineEdit from openpype.tools.utils.delegates import PrettyTimeDelegate from openpype.lib import emit_event +from openpype.tools.workfiles.lock_dialog import WorkfileLockDialog from openpype.pipeline import ( registered_host, legacy_io, @@ -452,8 +458,19 @@ class FilesWidget(QtWidgets.QWidget): "host_name": self.host_name } + def _is_workfile_locked(self, filepath): + if not is_workfile_lock_enabled(self.host_name, self.project_name): + return False + if not is_workfile_locked(filepath): + return False + return not is_workfile_locked_for_current_process(filepath) + def open_file(self, filepath): host = self.host + if self._is_workfile_locked(filepath): + # add lockfile dialog + WorkfileLockDialog(filepath) + if isinstance(host, IWorkfileHost): has_unsaved_changes = host.workfile_has_unsaved_changes() else: @@ -561,7 +578,7 @@ class FilesWidget(QtWidgets.QWidget): src = self._get_selected_filepath() dst = os.path.join(self._workfiles_root, work_file) - shutil.copy(src, dst) + shutil.copyfile(src, dst) self.workfile_created.emit(dst) @@ -658,7 +675,7 @@ class FilesWidget(QtWidgets.QWidget): else: self.host.save_file(filepath) else: - shutil.copy(src_path, filepath) + shutil.copyfile(src_path, filepath) if isinstance(self.host, IWorkfileHost): self.host.open_workfile(filepath) else: diff --git a/openpype/tools/workfiles/lock_dialog.py b/openpype/tools/workfiles/lock_dialog.py new file mode 100644 index 0000000000..c574a74e32 --- /dev/null +++ b/openpype/tools/workfiles/lock_dialog.py @@ -0,0 +1,47 @@ +from Qt import QtWidgets, QtCore, QtGui +from openpype.style import load_stylesheet, get_app_icon_path + +from openpype.pipeline.workfile.lock_workfile import get_workfile_lock_data + + +class WorkfileLockDialog(QtWidgets.QDialog): + def __init__(self, workfile_path, parent=None): + super(WorkfileLockDialog, self).__init__(parent) + 
self.setWindowTitle("Warning") + icon = QtGui.QIcon(get_app_icon_path()) + self.setWindowIcon(icon) + + data = get_workfile_lock_data(workfile_path) + + message = "{} on {} machine is working on the same workfile.".format( + data["username"], + data["hostname"] + ) + + msg_label = QtWidgets.QLabel(message, self) + + btns_widget = QtWidgets.QWidget(self) + + cancel_btn = QtWidgets.QPushButton("Cancel", btns_widget) + ignore_btn = QtWidgets.QPushButton("Ignore lock", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.setContentsMargins(0, 0, 0, 0) + btns_layout.setSpacing(10) + btns_layout.addStretch(1) + btns_layout.addWidget(cancel_btn, 0) + btns_layout.addWidget(ignore_btn, 0) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(15, 15, 15, 15) + main_layout.addWidget(msg_label, 1, QtCore.Qt.AlignCenter), + main_layout.addSpacing(10) + main_layout.addWidget(btns_widget, 0) + + cancel_btn.clicked.connect(self.reject) + ignore_btn.clicked.connect(self.accept) + + def showEvent(self, event): + super(WorkfileLockDialog, self).showEvent(event) + + self.setStyleSheet(load_stylesheet()) diff --git a/openpype/version.py b/openpype/version.py index 8469b1712a..a2335b696b 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.14.2" +__version__ = "3.14.3-nightly.2" diff --git a/tools/get_python_packages_info.py b/tools/get_python_packages_info.py new file mode 100644 index 0000000000..b4952840e6 --- /dev/null +++ b/tools/get_python_packages_info.py @@ -0,0 +1,83 @@ +# -*- coding: utf-8 -*- +"""Get version and license information on used Python packages. + +This is getting over all packages installed with Poetry and printing out +their name, version and available license information from PyPi in Markdown +table format. + +Usage: + ./.poetry/bin/poetry run python ./tools/get_python_packages_info.py + +""" + +import toml +import requests + + +packages = [] + +# define column headers +package_header = "Package" +version_header = "Version" +license_header = "License" + +name_col_width = len(package_header) +version_col_width = len(version_header) +license_col_width = len(license_header) + +# read lock file to get packages +with open("poetry.lock", "r") as fb: + lock_content = toml.load(fb) + + for package in lock_content["package"]: + # query pypi for license information + url = f"https://pypi.org/pypi/{package['name']}/json" + response = requests.get( + f"https://pypi.org/pypi/{package['name']}/json") + package_data = response.json() + version = package.get("version") or "N/A" + try: + package_license = package_data["info"].get("license") or "N/A" + except KeyError: + package_license = "N/A" + + if len(package_license) > 64: + package_license = f"{package_license[:32]}..." 
+ packages.append( + ( + package["name"], + version, + package_license + ) + ) + + # update column width based on max string length + if len(package["name"]) > name_col_width: + name_col_width = len(package["name"]) + if len(version) > version_col_width: + version_col_width = len(version) + if len(package_license) > license_col_width: + license_col_width = len(package_license) + +# pad columns +name_col_width += 2 +version_col_width += 2 +license_col_width += 2 + +# print table header +print((f"|{package_header.center(name_col_width)}" + f"|{version_header.center(version_col_width)}" + f"|{license_header.center(license_col_width)}|")) + +print( + "|" + ("-" * len(package_header.center(name_col_width))) + + "|" + ("-" * len(version_header.center(version_col_width))) + + "|" + ("-" * len(license_header.center(license_col_width))) + "|") + +# print rest of the table +for package in packages: + print(( + f"|{package[0].center(name_col_width)}" + f"|{package[1].center(version_col_width)}" + f"|{package[2].center(license_col_width)}|" + )) diff --git a/website/docs/admin_hosts_houdini.md b/website/docs/admin_hosts_houdini.md new file mode 100644 index 0000000000..64c54db591 --- /dev/null +++ b/website/docs/admin_hosts_houdini.md @@ -0,0 +1,11 @@ +--- +id: admin_hosts_houdini +title: Houdini +sidebar_label: Houdini +--- + +## Shelves Manager +You can add your custom shelf set into Houdini by setting your shelf sets, shelves and tools in **Houdini -> Shelves Manager**. +![Custom menu definition](assets/houdini-admin_shelvesmanager.png) + +The Shelf Set Path is used to load a .shelf file to generate your shelf set. If the path is specified, you don't have to set the shelves and tools. \ No newline at end of file diff --git a/website/docs/assets/houdini-admin_shelvesmanager.png b/website/docs/assets/houdini-admin_shelvesmanager.png new file mode 100644 index 0000000000..ba2f15a6a5 Binary files /dev/null and b/website/docs/assets/houdini-admin_shelvesmanager.png differ diff --git a/website/sidebars.js b/website/sidebars.js index 920a3134f6..af64282d61 100644 --- a/website/sidebars.js +++ b/website/sidebars.js @@ -101,6 +101,7 @@ module.exports = { items: [ "admin_hosts_blender", "admin_hosts_hiero", + "admin_hosts_houdini", "admin_hosts_maya", "admin_hosts_nuke", "admin_hosts_resolve", @@ -142,7 +143,7 @@ module.exports = { ], }, ], - Dev: [ + Dev: [ "dev_introduction", "dev_requirements", "dev_build", @@ -157,5 +158,5 @@ module.exports = { "dev_publishing" ] } - ] + ] };
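For reference, below is a minimal sketch of how the new Houdini "Shelves Manager" project setting could be filled in, following the structure defined by the `schema_houdini_scriptshelf.json` schema and the `admin_hosts_houdini.md` page added in this diff. The shelf set, shelf, and tool names as well as the script and icon paths are illustrative placeholders, and the platform-keyed layout of `shelf_set_source_path` is assumed from its `"multiplatform": true` flag rather than taken from actual settings output.

```json
{
    "shelves": [
        {
            "shelf_set_name": "studio_tools",
            "shelf_set_source_path": {
                "windows": "",
                "darwin": "",
                "linux": ""
            },
            "shelf_definition": [
                {
                    "shelf_name": "Publishing",
                    "tools_list": [
                        {
                            "label": "Open Publisher",
                            "script": "/studio/houdini/scripts/open_publisher.py",
                            "icon": "/studio/houdini/icons/publisher.png",
                            "help": "Opens the publisher window from a shelf button."
                        }
                    ]
                }
            ]
        }
    ]
}
```

As the Houdini admin page above notes, when `shelf_set_source_path` points to an existing `.shelf` file, the `shelf_definition` list can presumably be left empty and the shelf set is generated from that file instead.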