diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index e3ca8262e5..b92061f39a 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,15 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.17.4 + - 3.17.4-nightly.2 + - 3.17.4-nightly.1 + - 3.17.3 + - 3.17.3-nightly.2 + - 3.17.3-nightly.1 + - 3.17.2 + - 3.17.2-nightly.4 + - 3.17.2-nightly.3 - 3.17.2-nightly.2 - 3.17.2-nightly.1 - 3.17.1 @@ -126,15 +135,6 @@ body: - 3.15.1-nightly.2 - 3.15.1-nightly.1 - 3.15.0 - - 3.15.0-nightly.1 - - 3.14.11-nightly.4 - - 3.14.11-nightly.3 - - 3.14.11-nightly.2 - - 3.14.11-nightly.1 - - 3.14.10 - - 3.14.10-nightly.9 - - 3.14.10-nightly.8 - - 3.14.10-nightly.7 validations: required: true - type: dropdown diff --git a/CHANGELOG.md b/CHANGELOG.md index 8f14340348..7432b33e24 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,1118 @@ # Changelog +## [3.17.4](https://github.com/ynput/OpenPype/tree/3.17.4) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.3...3.17.4) + +### **🆕 New features** + + +
+Add Support for Husk-AYON Integration #5816 + +This draft pull request introduces support for integrating Husk with AYON within the OpenPype repository. + + +___ + +
+ + +
+Push to project tool: Prepare push to project tool for AYON #5770 + +Cloned Push to project tool for AYON and modified it. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Max: tycache family support #5624 + +Tycache family support for the tyFlow plugin in Max. + + +___ + +
+ + +
+Unreal: Changed behaviour for updating assets #5670 + +Changed how assets are updated in Unreal. + + +___ + +
+ + +
+Unreal: Improved error reporting for Sequence Frame Validator #5730 + +Improved error reporting for Sequence Frame Validator. + + +___ + +
+ + +
+Max: Setting tweaks on Review Family #5744 + +- Fixes a bug where the preferred visual style could not be published when creating a preview animation +- Exposes the parameters after creating the instance +- Adds the quality settings and viewport texture settings for preview animation +- Adds use selection for the review creator + + +___ + +
+ + +
+Max: Add families with frame range extractions back to the frame range validator #5757 + +In 3dsMax, some instances export files over a frame range but were not covered by the optional frame range validator. In this PR, these instances get the optional frame range validator so users can check whether the frame range aligns with the context data from the DB. The following families now have the optional frame range validator: +- maxrender +- review +- camera +- redshift proxy +- pointcache +- point cloud (tyFlow PRT) + + +___ + +
+ + +
+TimersManager: Use available data to get context info #5804 + +Get context information from pyblish context data instead of using `legacy_io`. + + +___ + +
+ + +
+Chore: Removed unused variable from `AbstractCollectRender` #5805 + +Removed unused `_asset` variable from `RenderInstance`. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Bugfix/houdini: wrong frame calculation with handles #5698 + +This PR makes collect plugins consider `handleStart` and `handleEnd` when collecting the frame range. It affects three parts: +- getting the frame range in collect plugins +- expected files in render plugins +- the submit Houdini job Deadline plugin + + +___ + +
+ + +
+Nuke: ayon server settings improvements #5746 + +Nuke settings were not aligned with OpenPype settings. Also labels needed to be improved. + + +___ + +
+ + +
+Blender: Fix pointcache family and fix alembic extractor #5747 + +Fixed `pointcache` family and fixed behaviour of the alembic extractor. + + +___ + +
+ + +
+AYON: Remove 'shotgun_api3' from dependencies #5803 + +Removed `shotgun_api3` dependency from openpype dependencies for AYON launcher. The dependency is already defined in shotgrid addon and change of version causes clashes. + + +___ + +
+ + +
+Chore: Fix typo in filename #5807 + +Move content of `contants.py` into `constants.py`. + + +___ + +
+ + +
+Chore: Create context respects instance changes #5809 + +Fix issue with unrespected change propagation in `CreateContext`. All successfully saved instances are marked as saved so they have no changes. Origin data of an instance are explicitly not handled directly by the object but by the attribute wrappers. + + +___ + +
+ + +
+Blender: Fix tools handling in AYON mode #5811 + +Skip logic in `before_window_show` in Blender when in AYON mode. Most of what is called there happens automatically on show. + + +___ + +
+ + +
+Blender: Include Grease Pencil in review and thumbnails #5812 + +Include Grease Pencil in review and thumbnails. + + +___ + +
+ + +
+Workfiles tool AYON: Fix double click of workfile #5813 + +Fix double click on workfiles in workfiles tool to open the file. + + +___ + +
+ + +
+Webpublisher: removal of usage of no_of_frames in error message #5819 + +If an exception is thrown, the `no_of_frames` value won't be available, so it doesn't make sense to log it. + + +___ + +
+ + +
+Attribute Defs: Hide multivalue widget in Number by default #5821 + +Fixed default look of `NumberAttrWidget` by hiding its multiselection widget. + + +___ + +
+ +### **Merged pull requests** + + +
+Corrected a typo in Readme.md (Top -> To) #5800 + + +___ + +
+ + +
+Photoshop: Removed redundant copy of extension.zxp #5802 + +`extension.zxp` shouldn't be inside the extension folder. + + +___ + +
+ + + + +## [3.17.3](https://github.com/ynput/OpenPype/tree/3.17.3) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.2...3.17.3) + +### **🆕 New features** + + +
+Maya: Multi-shot Layout Creator #5710 + +New Multi-shot Layout creator is a way of automating creation of the new Layout instances in Maya, associated with correct shots, frame ranges and Camera Sequencer in Maya. + + +___ + +
+ + +
+Colorspace: ociolook file product type workflow #5541 + +Traypublisher support for publishing colorspace look files (ociolook), which are JSON files holding any LUT files. This new product is currently available for loading in the Nuke host. Added a colorspace selector to the publisher attributes with better labeling. Roles and aliases are also supported (v2 configs only). + + +___ + +
+ + +
+Scene Inventory tool: Refactor Scene Inventory tool (for AYON) #5758 + +Modified scene inventory tool for AYON. The main difference is in how project name is defined and replacement of assets combobox with folders dialog. + + +___ + +
+ + +
+AYON: Support dev bundles #5783 + +Modules can be loaded in AYON dev mode from different location. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Testing: Ingest Maya userSetup #5734 + +Suggesting to ingest `userSetup.py` startup script for easier collaboration and transparency of testing. + + +___ + +
+ + +
+Fusion: Work with pathmaps #5329 + +Path maps are a big part of our Fusion workflow. We map the project folder to a path map within Fusion so all loaders and savers point to the path map variable. This way any computer on any OS can open any comp no matter where the project folder is located. + + +___ + +
+ + +
+Maya: Add Maya 2024 and remove pre 2022. #5674 + +Adding Maya 2024 as a default application variant. Removing Maya 2020 and older, as these are not supported anymore. + + +___ + +
+ + +
+Enhancement: Houdini: Allow using template keys in Houdini shelves manager #5727 + +Allow using Template keys in Houdini shelves manager. + + +___ + +
+ + +
+Houdini: Fix Show in usdview loader action #5737 + +Fix the "Show in USD View" loader to show up in Houdini + + +___ + +
+ + +
+Nuke: validator of asset context with repair actions #5749 + +Instance nodes whose asset and task differ from the current context can now be validated and repaired via a repair action. + + +___ + +
+ + +
+AYON: Tools enhancements #5753 + +Few enhancements and tweaks of AYON related tools. + + +___ + +
+ + +
+Max: Tweaks on ValidateMaxContents #5759 + +This PR provides enhancements to ValidateMaxContents as follows: +- Rename `ValidateMaxContents` to `ValidateContainers` +- Add related families which are required to pass the validation (all families except `Render`, as the render instance is the only one that allows an empty container) + + +___ + +
+ + +
+Enhancement: Nuke refactor `SelectInvalidAction` #5762 + +Refactor `SelectInvalidAction` to behave like similar actions in other hosts, and create `SelectInstanceNodeAction` as a dedicated action to select the instance node for a failed plugin. +- Note: Select Instance Node will still select the instance node even if the user has since 'fixed' the problem. + + +___ + +
+ + +
+Enhancement: Tweak logging for Nuke for artist facing reports #5763 + +Tweak logs that are not artist-facing to debug level + in some cases clarify what the logged value is. + + +___ + +
+ + +
+AYON Settings: Disk mapping #5786 + +Added disk mapping settings to core addon settings. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: add colorspace argument to redshiftTextureProcessor #5645 + +In color managed Maya, texture processing during Look Extraction wasn't passing the texture colorspaces set on textures to the `redshiftTextureProcessor` tool. This in effect caused the tool to produce a non-zero exit code (even though the texture was converted, albeit into the wrong colorspace) and therefore crash the extractor. This PR passes the colorspace to that tool if color management is enabled. + + +___ + +
+ + +
+Maya: don't call `cmds.ogs()` in headless mode #5769 + +`cmds.ogs()` is a call that will crash if Maya is running in headless mode (mayabatch, mayapy). This is handling that case. + + +___ + +
+ + +
+Resolve: inventory management fix #5673 + +Loaded timeline item containers are now updating correctly and version management is working as it is supposed to. +- [x] Updating loaded timeline items +- [x] Removing loaded timeline items + + +___ + +
+ + +
+Blender: Remove 'update_hierarchy' #5756 + +Remove `update_hierarchy` function which is causing crashes in scene inventory tool. + + +___ + +
+ + +
+Max: bug fix on the settings in pointcloud family #5768 + +Bug fix for the settings erroring out in the point cloud validator (see https://github.com/ynput/OpenPype/pull/5759#pullrequestreview-1676681705) and possibly in the point cloud extractor. + + +___ + +
+ + +
+AYON settings: Fix default factory of tools #5773 + +Fix default factory of application tools. + + +___ + +
+ + +
+Fusion: added missing OPENPYPE_VERSION #5776 + +Fusion submission to Deadline was missing the OPENPYPE_VERSION env var when submitting from a build (not source code directly). This missing env var might break rendering on DL if the path to the OP executable (openpype_console.exe) is not set explicitly, and might cause an issue when different versions of OP are deployed. This PR adds this environment variable. + + +___ + +
+ + +
+Ftrack: Skip tasks when looking for asset equivalent entity #5777 + +Skip tasks when looking for asset equivalent entity. + + +___ + +
+ + +
+Nuke: loading gizmos fixes #5779 + +The gizmo product is now offered in the Loader as a plugin. It is also updating as expected. + + +___ + +
+ + +
+General: thumbnail extractor as last extractor #5780 + +Fixing issue with the order of the `ExtractOIIOTranscode` and `ExtractThumbnail` plugins. The problem was that the `ExtractThumbnail` plugin was processed before the `ExtractOIIOTranscode` plugin. As a result, the `ExtractThumbnail` plugin did not inherit the `review` tag into the representation data. This caused the `ExtractThumbnail` plugin to fail in processing and creating thumbnails. + + +___ + +
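A hedged sketch of how this kind of ordering is expressed in pyblish (class names and offsets below are illustrative, not the plugins' actual values): plugins with a lower `order` run first, so giving the thumbnail extractor a higher offset than the transcode plugin ensures it runs after the `review` tag has been written.

```python
import pyblish.api


class ExtractOIIOTranscodeSketch(pyblish.api.InstancePlugin):
    """Illustrative stand-in: transcodes representations and tags them."""

    order = pyblish.api.ExtractorOrder + 0.01  # lower value runs first

    def process(self, instance):
        for repre in instance.data.get("representations", []):
            repre.setdefault("tags", []).append("review")


class ExtractThumbnailSketch(pyblish.api.InstancePlugin):
    """Runs after the transcode, so the 'review' tag is already present."""

    order = pyblish.api.ExtractorOrder + 0.02  # higher value runs later

    def process(self, instance):
        reviewable = [
            repre for repre in instance.data.get("representations", [])
            if "review" in repre.get("tags", [])
        ]
        self.log.debug("Would create thumbnails for: %s", reviewable)
```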
+ + +
+Bug: fix key in application json #5787 + +In PR #5705 `maya` was wrongly used instead of `mayapy`, breaking AYON defaults in AYON Application Addon. + + +___ + +
+ + +
+'NumberAttrWidget' shows 'Multiselection' label on multiselection #5792 + +Attribute definition widget 'NumberAttrWidget' shows `< Multiselection >` label on multiselection. + + +___ + +
+ + +
+Publisher: Selection change by enabled checkbox on instance update attributes #5793 + +Changing an instance by clicking its enabled checkbox now updates the attributes on the right side to match the selection. + + +___ + +
+ + +
+Houdini: Remove `setParms` call since it's responsibility of `self.imprint` to set the values #5796 + +Reverts a recent change made in #5621 due to this comment. However, the change is faulty, as mentioned here. + + +___ + +
+ + +
+AYON loader: Fix SubsetLoader functionality #5799 + +Fix SubsetLoader plugin processing in AYON loader tool. + + +___ + +
+ +### **Merged pull requests** + + +
+Houdini: Add self publish button #5621 + +This PR allows single publishing by adding a publish button to created ROP nodes in Houdini. Admins are very welcome to enable it from the Houdini general settings. The publish button also includes all input publish instances; in the screenshot, the alembic instance is ignored because the switch is turned off. + + +___ + +
+ + +
+Nuke: fixing UNC support for OCIO path #5771 + +UNC paths were broken on Windows for a custom OCIO path; this solves the issue with the double slash being removed at the start of the path. + + +___ + +
+ + + + +## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.1...3.17.2) + +### **🆕 New features** + + +
+Maya: Add MayaPy application. #5705 + +This adds MayaPy as an application that can be launched from a task. + + +___ + +
+ + +
+Feature: Copy resources when downloading last workfile #4944 + +When the last published workfile is downloaded as a prelaunch hook, all resource files referenced in the workfile representation are copied to the `resources` folder, which is inside the local workfile folder. + + +___ + +
+ + +
+Blender: Deadline support #5438 + +Add Deadline support for Blender. + + +___ + +
+ + +
+Fusion: implement toggle to use Deadline plugin FusionCmd #5678 + +Fusion 17 doesn't work in DL 10.3, but FusionCmd does. It might be the better option as a headless variant. The Fusion plugin seems to close and reopen the application when the worker is running on an artist machine; not so with FusionCmd. Added a configuration to Project Settings so an admin can select the appropriate Deadline plugin. + + +___ + +
+ + +
+Loader tool: Refactor loader tool (for AYON) #5729 + +Refactored loader tool to new tool. Separated backend and frontend logic. Refactored logic is AYON-centric and is used only in AYON mode, so it does not affect OpenPype. The tool is also replacing library loader. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: implement matchmove publishing #5445 + +Add the possibility to export multiple cameras in a single `matchmove` family instance, both in `abc` and `ma`. Exposed a 'Keep image planes' flag to control the export of image planes. + + +___ + +
+ + +
+Maya: Add optional Fbx extractors in Rig and Animation family #5589 + +This PR allows users to optionally export control rigs (optionally with mesh) and animated rigs in FBX by attaching the rig objects to the two newly introduced sets. + + +___ + +
+ + +
+Maya: Optional Resolution Validator for Render #5693 + +Adding an optional resolution validator for the Maya render family, similar to the one in Max. It checks whether the resolution in the render settings aligns with the one set in the DB. + + +___ + +
+ + +
+Use host's node uniqueness for instance id in new publisher #5490 + +Instead of writing `instance_id` as a parm or attribute on the publish instances we can, for some hosts, just rely on a unique name or path within the scene to refer to that particular instance. By doing so we fix #4820, because when duplicating such a publish instance using the host's (DCC) functionality, the uniqueness of the duplicate is already ensured, instead of the attributes keeping the exact same value as the instance they were duplicated from, which made `instance_id` a non-unique value. + + +___ + +
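A minimal sketch of the idea with a hypothetical wrapper class (not the actual OpenPype API): the id is resolved from the node's unique scene path on demand rather than stored as an attribute, so a duplicated node automatically gets a distinct id.

```python
class SceneInstance:
    """Hypothetical publish instance backed by a scene node."""

    def __init__(self, node_path, data):
        # Node paths are guaranteed unique by the DCC, e.g. "/obj/render1".
        self.node_path = node_path
        self.data = data

    @property
    def instance_id(self):
        # A duplicated node receives a new unique path, so the duplicate
        # gets a distinct id without any attribute having to be rewritten.
        return self.node_path
```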
+ + +
+Max: Implementation of OCIO configuration #5499 + +Resolves #5473. Implementation of OCIO configuration for Max 2024, following the Max 2024 update. + + +___ + +
+ + +
+Nuke: Multiple format supports for ExtractReviewDataMov #5623 + +This PR fixes the bug of the plugin `ExtractReviewDataMov` not being able to support extensions other than `mov`. The plugin is also renamed to `ExtractReviewDataBakingStreams` as it provides multiple format support. + + +___ + +
+ + +
+Bugfix: houdini switching context doesnt update variables #5651 + +Allows admins to have a list of vars (e.g. JOB) with (dynamic) values that will be updated on context changes, e.g. when switching to another asset or task. Using template keys is supported, but capitalization variants of formatting keys are not, e.g. {Asset} and {ASSET} won't work. Disabling the 'Update Houdini vars on context change' feature will leave all Houdini vars unmanaged, and thus no context update changes will occur. Also, this PR adds a new menu button to update vars on demand. + + +___ + +
+ + +
+Publisher: Fix report maker memory leak + optimize lookups using set #5667 + +Fixes a memory leak where resetting the publisher does not clear the stored plugins for the Publish Report Maker. Also changes the stored plugins to a `set` to optimize lookup speeds. + + +___ + +
+ + +
+Add openpype_mongo command flag for testing. #5676 + +Instead of changing the environment, this command flag allows for changing the database. + + +___ + +
+ + +
+Nuke: minor docstring and code tweaks for ExtractReviewMov #5695 + +Code and docstring tweaks on https://github.com/ynput/OpenPype/pull/5623 + + +___ + +
+ + +
+AYON: Small settings fixes #5699 + +Small changes/fixes related to AYON settings. All Foundry app variants `13-0` now have the label `13.0`. The key `"ExtractReviewIntermediates"` is not mandatory in settings. + + +___ + +
+ + +
+Blender: Alembic Animation loader #5711 + +Implemented loading Alembic Animations in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Missing "data" field and enabling of audio #5618 + +When updating audio containers, the field "data" was missing and the audio node was not enabled on the timeline. + + +___ + +
+ + +
+Maya: Bug in validate Plug-in Path Attribute #5687 + +Overwriting a list with a string was causing `TypeError: string indices must be integers` in subsequent iterations, crashing the validator plugin. + + +___ + +
+ + +
+General: Avoid fallback if value is 0 for handle start/end #5652 + +There's a bug in `pyblish_functions.get_time_data_from_instance_or_context` where, if `handleStart` or `handleEnd` on the instance is set to 0, it falls back to grabbing the handles from the instance context. Instead, the logic should only fall back to `instance.context` if the key doesn't exist. This change only affected `handleStart`/`handleEnd`, and it's unlikely it could cause issues on `frameStart`, `frameEnd` or `fps`, but regardless, the `get` logic was wrong. + + +___ + +
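A minimal sketch of the corrected fallback, assuming a generic pyblish-style instance with `data` and `context.data` dictionaries (the function name is illustrative): 0 is a valid handle value, so the fallback must be based on key presence, not truthiness.

```python
def get_time_value(instance, key, default=0):
    # Wrong: `or` treats 0 as "missing" and falls back to the context.
    #   return instance.data.get(key) or instance.context.data.get(key, default)

    # Correct: only fall back when the key does not exist on the instance.
    if key in instance.data:
        return instance.data[key]
    return instance.context.data.get(key, default)
```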
+ + +
+Fusion: added missing env vars to Deadline submission #5659 + +Environment variables discerning the type of job were missing. Without them, the injection of environment variables won't start. + + +___ + +
+ + +
+Nuke: workfile version synchronization settings fixed #5662 + +Settings for synchronizing workfile version to published products is fixed. + + +___ + +
+ + +
+AYON Workfiles Tool: Open workfile changes context #5671 + +Change context when workfile is opened. + + +___ + +
+ + +
+Blender: Fix remove/update in new layout instance #5679 + +Fixes an error that occurs when removing or updating an asset in a new layout instance. + + +___ + +
+ + +
+AYON Launcher tool: Fix refresh btn #5685 + +The refresh button now propagates refreshed content properly. Folders and tasks are cached for 60 seconds instead of 10 seconds. Auto-refresh in the launcher will refresh only actions and related data, which is the project and project settings. + + +___ + +
+ + +
+Deadline: handle all valid paths in RenderExecutable #5694 + +This commit enhances the path resolution mechanism in the RenderExecutable function of the Ayon plugin. Previously, the function only considered paths starting with a tilde (~), ignoring other valid paths listed in exe_list. This limitation led to an empty expanded_paths list when none of the paths in exe_list started with a tilde, causing the function to fail to find the Ayon executable. With this fix, the RenderExecutable function now correctly processes and includes all valid paths from exe_list, improving its reliability and preventing unnecessary errors related to Ayon executable location. + + +___ + +
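An illustrative sketch of the described behaviour (the helper name is an assumption, not the plugin's actual code): every candidate path is expanded and kept, instead of only the entries starting with a tilde.

```python
import os


def expand_executable_paths(exe_list):
    expanded_paths = []
    for path in exe_list:
        # expanduser() leaves paths without a leading "~" untouched,
        # so tilde-prefixed and absolute paths are both handled.
        expanded_paths.append(os.path.expanduser(path))
    return expanded_paths
```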
+ + +
+AYON Launcher tool: Fix skip last workfile boolean #5700 + +Skip last workfile boolean works as expected. + + +___ + +
+ + +
+Chore: Explore here action can work without task #5703 + +The Explore here action no longer crashes when no task is selected, and the error message was changed a little. + + +___ + +
+ + +
+Testing: Inject mongo_url argument earlier #5706 + +Fix for https://github.com/ynput/OpenPype/pull/5676. The Mongo URL is used earlier in the execution. + + +___ + +
+ + +
+Blender: Add support to auto-install PySide2 in blender 4 #5723 + +Change version regex to support blender 4 subfolder. + + +___ + +
+ + +
+Fix: Hardcoded main site and wrongly copied workfile #5733 + +Fixing these two issues: +- Hardcoded main site -> Replaced by `anatomy.fill_root`. +- Workfiles can sometimes be copied while they shouldn't. + + +___ + +
+ + +
+Bugfix: ServerDeleteOperation asset -> folder conversion typo #5735 + +Fix ServerDeleteOperation asset -> folder conversion typo + + +___ + +
+ + +
+Nuke: loaders are filtering correctly #5739 + +The variable name for filtering by extensions was not correct - it is supposed to be plural. It is fixed now and filtering works as it is supposed to. + + +___ + +
+ + +
+Nuke: failing multiple thumbnails integration #5741 + +This handles the situation when `ExtractReviewIntermediates` (previously `ExtractReviewDataMov`) has multiple outputs, including thumbnails that need to be integrated. Previously, integrating the thumbnail representation was causing an issue in the integration process. We have now resolved this issue by no longer integrating thumbnails as loadable representations. The default is now that thumbnail representations are NOT integrated (e.g. they will not show up in the DB and cannot be loaded in the Loader) and no `_thumb.jpg` will be left in the (most likely) `render` publish folder. If there is a need to override this behavior, please use `project_settings/global/publish/PreIntegrateThumbnails`. + + +___ + +
+ + +
+AYON Settings: Fix global overrides #5745 + +The `output` dictionary that gets passed into `ayon_settings._convert_global_project_settings` gets replaced when converting the settings for `ExtractOIIOTranscode`. This results in `global` not being in the output dictionary and thus the defaults being used and not the project overrides. + + +___ + +
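A generic sketch of the bug pattern described above (simplified names, not the actual converter): rebinding the parameter drops the caller's dictionary, so converted values never reach the shared output.

```python
def convert_wrong(output):
    # Rebinds the local name; the caller's dict never sees the new key
    # and keeps only its defaults.
    output = {"ExtractOIIOTranscode": {"enabled": True}}


def convert_fixed(output):
    # Mutates the caller's dict in place, preserving its other keys
    # (such as the "global" section mentioned above).
    output["ExtractOIIOTranscode"] = {"enabled": True}
```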
+ + +
+Chore: AYON query functions arguments #5752 + +Fixed how `archived` argument is handled in get subsets/assets function. + + +___ + +
+ +### **🔀 Refactored code** + + +
+Publisher: Refactor Report Maker plugin data storage to be a dict by plugin.id #5668 + +Refactor Report Maker plugin data storage to be a dict keyed by `plugin.id`. Also fixes the `_current_plugin_data` type in `__init__`. + + +___ + +
+ + +
+Chore: Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost #5701 + +Refactor Resolve into new style HostBase, IWorkfileHost, ILoadHost + + +___ + +
+ +### **Merged pull requests** + + +
+Chore: Maya reduce get project settings calls #5669 + +Re-use system settings / project settings where we can instead of requerying. + + +___ + +
+ + +
+Extended error message when getting subset name #5649 + +Each Creator uses the `get_subset_name` function, which collects context data and fills the configured template with placeholders. If any key is missing in the template, a non-descriptive error is thrown. This should provide a more verbose message. + + +___ + +
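A generic sketch of the idea (the wording and helper name are assumptions, not the actual OpenPype message): wrap the template formatting so a missing key raises a descriptive error instead of a bare `KeyError`.

```python
def format_subset_name(template, data):
    try:
        return template.format(**data)
    except KeyError as exc:
        raise KeyError(
            "Subset name template {!r} is missing key {} "
            "(available keys: {})".format(template, exc, sorted(data))
        )
```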
+ + +
+Tests: Remove checks for env var #5696 + +The env var will be filled in the `env_var` fixture; here it is too early to check it. + + +___ + +
+ + + + ## [3.17.1](https://github.com/ynput/OpenPype/tree/3.17.1) diff --git a/README.md b/README.md index ce98f845e6..ed3e058002 100644 --- a/README.md +++ b/README.md @@ -279,7 +279,7 @@ arguments and it will create zip file that OpenPype can use. Building documentation ---------------------- -Top build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation +To build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation from current sources in `.\docs\build`. **Note that it needs existing virtual environment.** diff --git a/openpype/client/server/entities.py b/openpype/client/server/entities.py index 3ee62a3172..16223d3d91 100644 --- a/openpype/client/server/entities.py +++ b/openpype/client/server/entities.py @@ -75,9 +75,9 @@ def _get_subsets( ): fields.add(key) - active = None + active = True if archived: - active = False + active = None for subset in con.get_products( project_name, @@ -196,7 +196,7 @@ def get_assets( active = True if archived: - active = False + active = None con = get_server_api_connection() fields = folder_fields_v3_to_v4(fields, con) diff --git a/openpype/client/server/operations.py b/openpype/client/server/operations.py index eeb55784e1..5b38405c34 100644 --- a/openpype/client/server/operations.py +++ b/openpype/client/server/operations.py @@ -422,7 +422,7 @@ def failed_json_default(value): class ServerCreateOperation(CreateOperation): - """Opeartion to create an entity. + """Operation to create an entity. Args: project_name (str): On which project operation will happen. @@ -634,7 +634,7 @@ class ServerUpdateOperation(UpdateOperation): class ServerDeleteOperation(DeleteOperation): - """Opeartion to delete an entity. + """Operation to delete an entity. Args: project_name (str): On which project operation will happen. @@ -647,7 +647,7 @@ class ServerDeleteOperation(DeleteOperation): self._session = session if entity_type == "asset": - entity_type == "folder" + entity_type = "folder" elif entity_type == "hero_version": entity_type = "version" diff --git a/openpype/hooks/pre_foundry_apps.py b/openpype/hooks/pre_new_console_apps.py similarity index 82% rename from openpype/hooks/pre_foundry_apps.py rename to openpype/hooks/pre_new_console_apps.py index 7536df4c16..9727b4fb78 100644 --- a/openpype/hooks/pre_foundry_apps.py +++ b/openpype/hooks/pre_new_console_apps.py @@ -2,7 +2,7 @@ import subprocess from openpype.lib.applications import PreLaunchHook, LaunchTypes -class LaunchFoundryAppsWindows(PreLaunchHook): +class LaunchNewConsoleApps(PreLaunchHook): """Foundry applications have specific way how to launch them. 
Nuke is executed "like" python process so it is required to pass @@ -13,13 +13,15 @@ class LaunchFoundryAppsWindows(PreLaunchHook): # Should be as last hook because must change launch arguments to string order = 1000 - app_groups = {"nuke", "nukeassist", "nukex", "hiero", "nukestudio"} + app_groups = { + "nuke", "nukeassist", "nukex", "hiero", "nukestudio", "mayapy" + } platforms = {"windows"} launch_types = {LaunchTypes.local} def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE - # - on Windows nuke will create new window using its console + # - on Windows some apps will create new window using its console # Set `stdout` and `stderr` to None so new created console does not # have redirected output to DEVNULL in build self.launch_context.kwargs.update({ diff --git a/openpype/hosts/blender/api/capture.py b/openpype/hosts/blender/api/capture.py index 849f8ee629..bad6831143 100644 --- a/openpype/hosts/blender/api/capture.py +++ b/openpype/hosts/blender/api/capture.py @@ -148,13 +148,14 @@ def applied_view(window, camera, isolate=None, options=None): area.ui_type = "VIEW_3D" - meshes = [obj for obj in window.scene.objects if obj.type == "MESH"] + types = {"MESH", "GPENCIL"} + objects = [obj for obj in window.scene.objects if obj.type in types] if camera == "AUTO": space.region_3d.view_perspective = "ORTHO" - isolate_objects(window, isolate or meshes) + isolate_objects(window, isolate or objects) else: - isolate_objects(window, isolate or meshes) + isolate_objects(window, isolate or objects) space.camera = window.scene.objects.get(camera) space.region_3d.view_perspective = "CAMERA" diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 0eb90eeff9..208c11cfe8 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -284,6 +284,8 @@ class LaunchLoader(LaunchQtApp): _tool_name = "loader" def before_window_show(self): + if AYON_SERVER_ENABLED: + return self._window.set_context( {"asset": get_current_asset_name()}, refresh=True @@ -309,6 +311,8 @@ class LaunchManager(LaunchQtApp): _tool_name = "sceneinventory" def before_window_show(self): + if AYON_SERVER_ENABLED: + return self._window.refresh() @@ -320,6 +324,8 @@ class LaunchLibrary(LaunchQtApp): _tool_name = "libraryloader" def before_window_show(self): + if AYON_SERVER_ENABLED: + return self._window.refresh() @@ -340,6 +346,8 @@ class LaunchWorkFiles(LaunchQtApp): return result def before_window_show(self): + if AYON_SERVER_ENABLED: + return self._window.root = str(Path( os.environ.get("AVALON_WORKDIR", ""), os.environ.get("AVALON_SCENEDIR", ""), diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index 29339a512c..84af0904f0 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -460,36 +460,6 @@ def ls() -> Iterator: yield parse_container(container) -def update_hierarchy(containers): - """Hierarchical container support - - This is the function to support Scene Inventory to draw hierarchical - view for containers. - - We need both parent and children to visualize the graph. - - """ - - all_containers = set(ls()) # lookup set - - for container in containers: - # Find parent - # FIXME (jasperge): re-evaluate this. How would it be possible - # to 'nest' assets? 
Collections can have several parents, for - # now assume it has only 1 parent - parent = [ - coll for coll in bpy.data.collections if container in coll.children - ] - for node in parent: - if node in all_containers: - container["parent"] = node - break - - log.debug("Container: %s", container) - - yield container - - def publish(): """Shorthand to publish from within host.""" diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index 777e383215..2aa3a5e49a 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -31,7 +31,7 @@ class InstallPySideToBlender(PreLaunchHook): def inner_execute(self): # Get blender's python directory - version_regex = re.compile(r"^[2-3]\.[0-9]+$") + version_regex = re.compile(r"^[2-4]\.[0-9]+$") platform = system().lower() executable = self.launch_context.executable.executable_path diff --git a/openpype/hosts/blender/plugins/create/create_pointcache.py b/openpype/hosts/blender/plugins/create/create_pointcache.py index 6220f68dc5..65cf18472d 100644 --- a/openpype/hosts/blender/plugins/create/create_pointcache.py +++ b/openpype/hosts/blender/plugins/create/create_pointcache.py @@ -3,11 +3,11 @@ import bpy from openpype.pipeline import get_current_task_name -import openpype.hosts.blender.api.plugin -from openpype.hosts.blender.api import lib +from openpype.hosts.blender.api import plugin, lib, ops +from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES -class CreatePointcache(openpype.hosts.blender.api.plugin.Creator): +class CreatePointcache(plugin.Creator): """Polygonal static geometry""" name = "pointcacheMain" @@ -16,20 +16,36 @@ class CreatePointcache(openpype.hosts.blender.api.plugin.Creator): icon = "gears" def process(self): + """ Run the creator on Blender main thread""" + mti = ops.MainThreadItem(self._process) + ops.execute_in_main_thread(mti) + def _process(self): + # Get Instance Container or create it if it does not exist + instances = bpy.data.collections.get(AVALON_INSTANCES) + if not instances: + instances = bpy.data.collections.new(name=AVALON_INSTANCES) + bpy.context.scene.collection.children.link(instances) + + # Create instance object asset = self.data["asset"] subset = self.data["subset"] - name = openpype.hosts.blender.api.plugin.asset_name(asset, subset) - collection = bpy.data.collections.new(name=name) - bpy.context.scene.collection.children.link(collection) + name = plugin.asset_name(asset, subset) + asset_group = bpy.data.objects.new(name=name, object_data=None) + asset_group.empty_display_type = 'SINGLE_ARROW' + instances.objects.link(asset_group) self.data['task'] = get_current_task_name() - lib.imprint(collection, self.data) + lib.imprint(asset_group, self.data) + # Add selected objects to instance if (self.options or {}).get("useSelection"): - objects = lib.get_selection() - for obj in objects: - collection.objects.link(obj) - if obj.type == 'EMPTY': - objects.extend(obj.children) + bpy.context.view_layer.objects.active = asset_group + selected = lib.get_selection() + for obj in selected: + if obj.parent in selected: + obj.select_set(False) + continue + selected.append(asset_group) + bpy.ops.object.parent_set(keep_transform=True) - return collection + return asset_group diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py index 292925c833..8d1863d4d5 100644 --- a/openpype/hosts/blender/plugins/load/load_abc.py +++ 
b/openpype/hosts/blender/plugins/load/load_abc.py @@ -26,8 +26,7 @@ class CacheModelLoader(plugin.AssetLoader): Note: At least for now it only supports Alembic files. """ - - families = ["model", "pointcache"] + families = ["model", "pointcache", "animation"] representations = ["abc"] label = "Load Alembic" @@ -53,32 +52,43 @@ class CacheModelLoader(plugin.AssetLoader): def _process(self, libpath, asset_group, group_name): plugin.deselect_all() - collection = bpy.context.view_layer.active_layer_collection.collection - relative = bpy.context.preferences.filepaths.use_relative_paths bpy.ops.wm.alembic_import( filepath=libpath, relative_path=relative ) - parent = bpy.context.scene.collection - imported = lib.get_selection() - # Children must be linked before parents, - # otherwise the hierarchy will break + # Use first EMPTY without parent as container + container = next( + (obj for obj in imported + if obj.type == "EMPTY" and not obj.parent), + None + ) + objects = [] + if container: + nodes = list(container.children) - for obj in imported: - obj.parent = asset_group + for obj in nodes: + obj.parent = asset_group - for obj in imported: - objects.append(obj) - imported.extend(list(obj.children)) + bpy.data.objects.remove(container) - objects.reverse() + objects.extend(nodes) + for obj in nodes: + objects.extend(obj.children_recursive) + else: + for obj in imported: + obj.parent = asset_group + objects = imported for obj in objects: + # Unlink the object from all collections + collections = obj.users_collection + for collection in collections: + collection.objects.unlink(obj) name = obj.name obj.name = f"{group_name}:{name}" if obj.type != 'EMPTY': @@ -90,7 +100,7 @@ class CacheModelLoader(plugin.AssetLoader): material_slot.material.name = f"{group_name}:{name_mat}" if not obj.get(AVALON_PROPERTY): - obj[AVALON_PROPERTY] = dict() + obj[AVALON_PROPERTY] = {} avalon_info = obj[AVALON_PROPERTY] avalon_info.update({"container_name": group_name}) @@ -99,6 +109,18 @@ class CacheModelLoader(plugin.AssetLoader): return objects + def _link_objects(self, objects, collection, containers, asset_group): + # Link the imported objects to any collection where the asset group is + # linked to, except the AVALON_CONTAINERS collection + group_collections = [ + collection + for collection in asset_group.users_collection + if collection != containers] + + for obj in objects: + for collection in group_collections: + collection.objects.link(obj) + def process_asset( self, context: dict, name: str, namespace: Optional[str] = None, options: Optional[Dict] = None @@ -120,18 +142,22 @@ class CacheModelLoader(plugin.AssetLoader): group_name = plugin.asset_name(asset, subset, unique_number) namespace = namespace or f"{asset}_{unique_number}" - avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_containers: - avalon_containers = bpy.data.collections.new( - name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_containers) + containers = bpy.data.collections.get(AVALON_CONTAINERS) + if not containers: + containers = bpy.data.collections.new(name=AVALON_CONTAINERS) + bpy.context.scene.collection.children.link(containers) asset_group = bpy.data.objects.new(group_name, object_data=None) - avalon_containers.objects.link(asset_group) + asset_group.empty_display_type = 'SINGLE_ARROW' + containers.objects.link(asset_group) objects = self._process(libpath, asset_group, group_name) - bpy.context.scene.collection.objects.link(asset_group) + # Link the asset group to the active 
collection + collection = bpy.context.view_layer.active_layer_collection.collection + collection.objects.link(asset_group) + + self._link_objects(objects, asset_group, containers, asset_group) asset_group[AVALON_PROPERTY] = { "schema": "openpype:container-2.0", @@ -207,7 +233,11 @@ class CacheModelLoader(plugin.AssetLoader): mat = asset_group.matrix_basis.copy() self._remove(asset_group) - self._process(str(libpath), asset_group, object_name) + objects = self._process(str(libpath), asset_group, object_name) + + containers = bpy.data.collections.get(AVALON_CONTAINERS) + self._link_objects(objects, asset_group, containers, asset_group) + asset_group.matrix_basis = mat metadata["libpath"] = str(libpath) diff --git a/openpype/hosts/blender/plugins/publish/collect_instances.py b/openpype/hosts/blender/plugins/publish/collect_instances.py index bc4b5ab092..ad2ce54147 100644 --- a/openpype/hosts/blender/plugins/publish/collect_instances.py +++ b/openpype/hosts/blender/plugins/publish/collect_instances.py @@ -19,85 +19,51 @@ class CollectInstances(pyblish.api.ContextPlugin): @staticmethod def get_asset_groups() -> Generator: - """Return all 'model' collections. - - Check if the family is 'model' and if it doesn't have the - representation set. If the representation is set, it is a loaded model - and we don't want to publish it. + """Return all instances that are empty objects asset groups. """ instances = bpy.data.collections.get(AVALON_INSTANCES) - for obj in instances.objects: - avalon_prop = obj.get(AVALON_PROPERTY) or dict() + for obj in list(instances.objects) + list(instances.children): + avalon_prop = obj.get(AVALON_PROPERTY) or {} if avalon_prop.get('id') == 'pyblish.avalon.instance': yield obj @staticmethod - def get_collections() -> Generator: - """Return all 'model' collections. - - Check if the family is 'model' and if it doesn't have the - representation set. If the representation is set, it is a loaded model - and we don't want to publish it. 
- """ - for collection in bpy.data.collections: - avalon_prop = collection.get(AVALON_PROPERTY) or dict() - if avalon_prop.get('id') == 'pyblish.avalon.instance': - yield collection + def create_instance(context, group): + avalon_prop = group[AVALON_PROPERTY] + asset = avalon_prop['asset'] + family = avalon_prop['family'] + subset = avalon_prop['subset'] + task = avalon_prop['task'] + name = f"{asset}_{subset}" + return context.create_instance( + name=name, + family=family, + families=[family], + subset=subset, + asset=asset, + task=task, + ) def process(self, context): """Collect the models from the current Blender scene.""" asset_groups = self.get_asset_groups() - collections = self.get_collections() for group in asset_groups: - avalon_prop = group[AVALON_PROPERTY] - asset = avalon_prop['asset'] - family = avalon_prop['family'] - subset = avalon_prop['subset'] - task = avalon_prop['task'] - name = f"{asset}_{subset}" - instance = context.create_instance( - name=name, - family=family, - families=[family], - subset=subset, - asset=asset, - task=task, - ) - objects = list(group.children) - members = set() - for obj in objects: - objects.extend(list(obj.children)) - members.add(obj) - members.add(group) - instance[:] = list(members) - self.log.debug(json.dumps(instance.data, indent=4)) - for obj in instance: - self.log.debug(obj) + instance = self.create_instance(context, group) + members = [] + if isinstance(group, bpy.types.Collection): + members = list(group.objects) + family = instance.data["family"] + if family == "animation": + for obj in group.objects: + if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY): + members.extend( + child for child in obj.children + if child.type == 'ARMATURE') + else: + members = group.children_recursive - for collection in collections: - avalon_prop = collection[AVALON_PROPERTY] - asset = avalon_prop['asset'] - family = avalon_prop['family'] - subset = avalon_prop['subset'] - task = avalon_prop['task'] - name = f"{asset}_{subset}" - instance = context.create_instance( - name=name, - family=family, - families=[family], - subset=subset, - asset=asset, - task=task, - ) - members = list(collection.objects) - if family == "animation": - for obj in collection.objects: - if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY): - for child in obj.children: - if child.type == 'ARMATURE': - members.append(child) - members.append(collection) + members.append(group) instance[:] = members self.log.debug(json.dumps(instance.data, indent=4)) for obj in instance: diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index 87159e53f0..7b6c4d7ae7 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -12,8 +12,7 @@ class ExtractABC(publish.Extractor): label = "Extract ABC" hosts = ["blender"] - families = ["model", "pointcache"] - optional = True + families = ["pointcache"] def process(self, instance): # Define extract output file path @@ -62,3 +61,12 @@ class ExtractABC(publish.Extractor): self.log.info("Extracted instance '%s' to: %s", instance.name, representation) + + +class ExtractModelABC(ExtractABC): + """Extract model as ABC.""" + + label = "Extract Model ABC" + hosts = ["blender"] + families = ["model"] + optional = True diff --git a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py index 3d176f9c30..6ace14d77c 100644 --- 
a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py +++ b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py @@ -10,7 +10,7 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin): optional = True hosts = ["blender"] families = ["animation", "model", "rig", "action", "layout", "blendScene", - "render"] + "pointcache", "render"] def process(self, context): diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index fccd8b2965..4564880b50 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -165,7 +165,8 @@ class CreateSaver(NewCreator): filepath = self.temp_rendering_path_template.format( **formatting_data) - tool["Clip"] = os.path.normpath(filepath) + comp = get_current_comp() + tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath)) # Rename tool if tool.Name != subset: diff --git a/openpype/hosts/fusion/plugins/load/load_sequence.py b/openpype/hosts/fusion/plugins/load/load_sequence.py index 20be5faaba..4401af97eb 100644 --- a/openpype/hosts/fusion/plugins/load/load_sequence.py +++ b/openpype/hosts/fusion/plugins/load/load_sequence.py @@ -161,7 +161,7 @@ class FusionLoadSequence(load.LoaderPlugin): with comp_lock_and_undo_chunk(comp, "Create Loader"): args = (-32768, -32768) tool = comp.AddTool("Loader", *args) - tool["Clip"] = path + tool["Clip"] = comp.ReverseMapPath(path) # Set global in point to start frame (if in version.data) start = self._get_start(context["version"], tool) @@ -244,7 +244,7 @@ class FusionLoadSequence(load.LoaderPlugin): "TimeCodeOffset", ), ): - tool["Clip"] = path + tool["Clip"] = comp.ReverseMapPath(path) # Set the global in to the start frame of the sequence global_in_changed = loader_shift(tool, start, relative=False) diff --git a/openpype/hosts/fusion/plugins/publish/collect_render.py b/openpype/hosts/fusion/plugins/publish/collect_render.py index 341f3f191a..a7daa0b64c 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_render.py +++ b/openpype/hosts/fusion/plugins/publish/collect_render.py @@ -145,9 +145,11 @@ class CollectFusionRender( start = render_instance.frameStart - render_instance.handleStart end = render_instance.frameEnd + render_instance.handleEnd - path = ( - render_instance.tool["Clip"] - [render_instance.workfileComp.TIME_UNDEFINED] + comp = render_instance.workfileComp + path = comp.MapPath( + render_instance.tool["Clip"][ + render_instance.workfileComp.TIME_UNDEFINED + ] ) output_dir = os.path.dirname(path) render_instance.outputDir = output_dir diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index a3f691e1fc..cadeaa8ed4 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -1,6 +1,7 @@ # -*- coding: utf-8 -*- import sys import os +import errno import re import uuid import logging @@ -9,9 +10,21 @@ import json import six +from openpype.lib import StringTemplate from openpype.client import get_asset_by_name -from openpype.pipeline import get_current_project_name, get_current_asset_name -from openpype.pipeline.context_tools import get_current_project_asset +from openpype.settings import get_current_project_settings +from openpype.pipeline import ( + get_current_project_name, + get_current_asset_name, + registered_host +) +from openpype.pipeline.context_tools import ( + get_current_context_template_data, + get_current_project_asset +) +from openpype.widgets import popup +from 
openpype.tools.utils.host_tools import get_tool_by_name +from openpype.pipeline.create import CreateContext import hou @@ -160,8 +173,6 @@ def validate_fps(): if current_fps != fps: - from openpype.widgets import popup - # Find main window parent = hou.ui.mainQtWindow() if parent is None: @@ -321,52 +332,61 @@ def imprint(node, data, update=False): return current_parms = {p.name(): p for p in node.spareParms()} - update_parms = [] - templates = [] + update_parm_templates = [] + new_parm_templates = [] for key, value in data.items(): if value is None: continue - parm = get_template_from_value(key, value) + parm_template = get_template_from_value(key, value) if key in current_parms: - if node.evalParm(key) == data[key]: + if node.evalParm(key) == value: continue if not update: log.debug(f"{key} already exists on {node}") else: log.debug(f"replacing {key}") - update_parms.append(parm) + update_parm_templates.append(parm_template) continue - templates.append(parm) + new_parm_templates.append(parm_template) - parm_group = node.parmTemplateGroup() - parm_folder = parm_group.findFolder("Extra") - - # if folder doesn't exist yet, create one and append to it, - # else append to existing one - if not parm_folder: - parm_folder = hou.FolderParmTemplate("folder", "Extra") - parm_folder.setParmTemplates(templates) - parm_group.append(parm_folder) - else: - for template in templates: - parm_group.appendToFolder(parm_folder, template) - # this is needed because the pointer to folder - # is for some reason lost every call to `appendToFolder()` - parm_folder = parm_group.findFolder("Extra") - - node.setParmTemplateGroup(parm_group) - - # TODO: Updating is done here, by calling probably deprecated functions. - # This needs to be addressed in the future. - if not update_parms: + if not new_parm_templates and not update_parm_templates: return - for parm in update_parms: - node.replaceSpareParmTuple(parm.name(), parm) + parm_group = node.parmTemplateGroup() + + # Add new parm templates + if new_parm_templates: + parm_folder = parm_group.findFolder("Extra") + + # if folder doesn't exist yet, create one and append to it, + # else append to existing one + if not parm_folder: + parm_folder = hou.FolderParmTemplate("folder", "Extra") + parm_folder.setParmTemplates(new_parm_templates) + parm_group.append(parm_folder) + else: + # Add to parm template folder instance then replace with updated + # one in parm template group + for template in new_parm_templates: + parm_folder.addParmTemplate(template) + parm_group.replace(parm_folder.name(), parm_folder) + + # Update existing parm templates + for parm_template in update_parm_templates: + parm_group.replace(parm_template.name(), parm_template) + + # When replacing a parm with a parm of the same name it preserves its + # value if before the replacement the parm was not at the default, + # because it has a value override set. Since we're trying to update the + # parm by using the new value as `default` we enforce the parm is at + # default state + node.parm(parm_template.name()).revertToDefaults() + + node.setParmTemplateGroup(parm_group) def lsattr(attr, value=None, root="/"): @@ -548,29 +568,64 @@ def get_template_from_value(key, value): return parm -def get_frame_data(node): - """Get the frame data: start frame, end frame and steps. +def get_frame_data(node, handle_start=0, handle_end=0, log=None): + """Get the frame data: start frame, end frame, steps, + start frame with start handle and end frame with end handle. 
+ + This function uses Houdini node's `trange`, `t1, `t2` and `t3` + parameters as the source of truth for the full inclusive frame + range to render, as such these are considered as the frame + range including the handles. + + The non-inclusive frame start and frame end without handles + are computed by subtracting the handles from the inclusive + frame range. Args: - node(hou.Node) + node (hou.Node): ROP node to retrieve frame range from, + the frame range is assumed to be the frame range + *including* the start and end handles. + handle_start (int): Start handles. + handle_end (int): End handles. + log (logging.Logger): Logger to log to. Returns: - dict: frame data for star, end and steps. + dict: frame data for start, end, steps, + start with handle and end with handle """ + + if log is None: + log = self.log + data = {} if node.parm("trange") is None: - + log.debug( + "Node has no 'trange' parameter: {}".format(node.path()) + ) return data if node.evalParm("trange") == 0: - self.log.debug("trange is 0") - return data + data["frameStartHandle"] = hou.intFrame() + data["frameEndHandle"] = hou.intFrame() + data["byFrameStep"] = 1.0 - data["frameStart"] = node.evalParm("f1") - data["frameEnd"] = node.evalParm("f2") - data["steps"] = node.evalParm("f3") + log.info( + "Node '{}' has 'Render current frame' set.\n" + "Asset Handles are ignored.\n" + "frameStart and frameEnd are set to the " + "current frame.".format(node.path()) + ) + else: + data["frameStartHandle"] = int(node.evalParm("f1")) + data["frameEndHandle"] = int(node.evalParm("f2")) + data["byFrameStep"] = node.evalParm("f3") + + data["handleStart"] = handle_start + data["handleEnd"] = handle_end + data["frameStart"] = data["frameStartHandle"] + data["handleStart"] + data["frameEnd"] = data["frameEndHandle"] - data["handleEnd"] return data @@ -747,3 +802,193 @@ def get_camera_from_container(container): assert len(cameras) == 1, "Camera instance must have only one camera" return cameras[0] + + +def get_context_var_changes(): + """get context var changes.""" + + houdini_vars_to_update = {} + + project_settings = get_current_project_settings() + houdini_vars_settings = \ + project_settings["houdini"]["general"]["update_houdini_var_context"] + + if not houdini_vars_settings["enabled"]: + return houdini_vars_to_update + + houdini_vars = houdini_vars_settings["houdini_vars"] + + # No vars specified - nothing to do + if not houdini_vars: + return houdini_vars_to_update + + # Get Template data + template_data = get_current_context_template_data() + + # Set Houdini Vars + for item in houdini_vars: + # For consistency reasons we always force all vars to be uppercase + # Also remove any leading, and trailing whitespaces. 
+ var = item["var"].strip().upper() + + # get and resolve template in value + item_value = StringTemplate.format_template( + item["value"], + template_data + ) + + if var == "JOB" and item_value == "": + # sync $JOB to $HIP if $JOB is empty + item_value = os.environ["HIP"] + + if item["is_directory"]: + item_value = item_value.replace("\\", "/") + + current_value = hou.hscript("echo -n `${}`".format(var))[0] + + if current_value != item_value: + houdini_vars_to_update[var] = ( + current_value, item_value, item["is_directory"] + ) + + return houdini_vars_to_update + + +def update_houdini_vars_context(): + """Update asset context variables""" + + for var, (_old, new, is_directory) in get_context_var_changes().items(): + if is_directory: + try: + os.makedirs(new) + except OSError as e: + if e.errno != errno.EEXIST: + print( + "Failed to create ${} dir. Maybe due to " + "insufficient permissions.".format(var) + ) + + hou.hscript("set {}={}".format(var, new)) + os.environ[var] = new + print("Updated ${} to {}".format(var, new)) + + +def update_houdini_vars_context_dialog(): + """Show pop-up to update asset context variables""" + update_vars = get_context_var_changes() + if not update_vars: + # Nothing to change + print("Nothing to change, Houdini vars are already up to date.") + return + + message = "\n".join( + "${}: {} -> {}".format(var, old or "None", new or "None") + for var, (old, new, _is_directory) in update_vars.items() + ) + + # TODO: Use better UI! + parent = hou.ui.mainQtWindow() + dialog = popup.Popup(parent=parent) + dialog.setModal(True) + dialog.setWindowTitle("Houdini scene has outdated asset variables") + dialog.setMessage(message) + dialog.setButtonText("Fix") + + # on_show is the Fix button clicked callback + dialog.on_clicked.connect(update_houdini_vars_context) + + dialog.show() + + +def publisher_show_and_publish(comment=None): + """Open publisher window and trigger publishing action. + + Args: + comment (Optional[str]): Comment to set in publisher window. + """ + + main_window = get_main_window() + publisher_window = get_tool_by_name( + tool_name="publisher", + parent=main_window, + ) + publisher_window.show_and_publish(comment) + + +def find_rop_input_dependencies(input_tuple): + """Self publish from ROP nodes. + + Arguments: + tuple (hou.RopNode.inputDependencies) which can be a nested tuples + represents the input dependencies of the ROP node, consisting of ROPs, + and the frames that need to be be rendered prior to rendering the ROP. + + Returns: + list of the RopNode.path() that can be found inside + the input tuple. + """ + + out_list = [] + if isinstance(input_tuple[0], hou.RopNode): + return input_tuple[0].path() + + if isinstance(input_tuple[0], tuple): + for item in input_tuple: + out_list.append(find_rop_input_dependencies(item)) + + return out_list + + +def self_publish(): + """Self publish from ROP nodes. + + Firstly, it gets the node and its dependencies. + Then, it deactivates all other ROPs + And finaly, it triggers the publishing action. 
+ """ + + result, comment = hou.ui.readInput( + "Add Publish Comment", + buttons=("Publish", "Cancel"), + title="Publish comment", + close_choice=1 + ) + + if result: + return + + current_node = hou.node(".") + inputs_paths = find_rop_input_dependencies( + current_node.inputDependencies() + ) + inputs_paths.append(current_node.path()) + + host = registered_host() + context = CreateContext(host, reset=True) + + for instance in context.instances: + node_path = instance.data.get("instance_node") + instance["active"] = node_path and node_path in inputs_paths + + context.save_changes() + + publisher_show_and_publish(comment) + + +def add_self_publish_button(node): + """Adds a self publish button to the rop node.""" + + label = os.environ.get("AVALON_LABEL") or "OpenPype" + + button_parm = hou.ButtonParmTemplate( + "ayon_self_publish", + "{} Publish".format(label), + script_callback="from openpype.hosts.houdini.api.lib import " + "self_publish; self_publish()", + script_callback_language=hou.scriptLanguage.Python, + join_with_next=True + ) + + template = node.parmTemplateGroup() + template.insertBefore((0,), button_parm) + node.setParmTemplateGroup(template) diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 6aa65deb89..f8db45c56b 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -300,6 +300,9 @@ def on_save(): log.info("Running callback on save..") + # update houdini vars + lib.update_houdini_vars_context_dialog() + nodes = lib.get_id_required_nodes() for node, new_id in lib.generate_ids(nodes): lib.set_id(node, new_id, overwrite=False) @@ -335,6 +338,9 @@ def on_open(): log.info("Running callback on open..") + # update houdini vars + lib.update_houdini_vars_context_dialog() + # Validate FPS after update_task_from_path to # ensure it is using correct FPS for the asset lib.validate_fps() @@ -399,6 +405,7 @@ def _set_context_settings(): """ lib.reset_framerange() + lib.update_houdini_vars_context() def on_pyblish_instance_toggled(instance, new_value, old_value): diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index a0a7dcc2e4..72565f7211 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -13,7 +13,7 @@ from openpype.pipeline import ( CreatedInstance ) from openpype.lib import BoolDef -from .lib import imprint, read, lsattr +from .lib import imprint, read, lsattr, add_self_publish_button class OpenPypeCreatorError(CreatorError): @@ -168,6 +168,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): """Base class for most of the Houdini creator plugins.""" selected_nodes = [] settings_name = None + add_publish_button = False def create(self, subset_name, instance_data, pre_create_data): try: @@ -195,6 +196,10 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): self) self._add_instance_to_context(instance) self.imprint(instance_node, instance.data_to_store()) + + if self.add_publish_button: + add_self_publish_button(instance_node) + return instance except hou.Error as er: @@ -245,6 +250,7 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): key: changes[key].new_value for key in changes.changed_keys } + # Update parm templates and values self.imprint( instance_node, new_values, @@ -316,6 +322,12 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): def apply_settings(self, project_settings): """Method called on initialization of plugin to apply settings.""" + # Apply General Settings + 
houdini_general_settings = project_settings["houdini"]["general"] + self.add_publish_button = houdini_general_settings.get( + "add_self_publish_button", False) + + # Apply Creator Settings settings_name = self.settings_name if settings_name is None: settings_name = self.__class__.__name__ diff --git a/openpype/hosts/houdini/api/shelves.py b/openpype/hosts/houdini/api/shelves.py index 21e44e494a..4b5ebd4202 100644 --- a/openpype/hosts/houdini/api/shelves.py +++ b/openpype/hosts/houdini/api/shelves.py @@ -6,6 +6,9 @@ import platform from openpype.settings import get_project_settings from openpype.pipeline import get_current_project_name +from openpype.lib import StringTemplate +from openpype.pipeline.context_tools import get_current_context_template_data + import hou log = logging.getLogger("openpype.hosts.houdini.shelves") @@ -26,10 +29,16 @@ def generate_shelves(): log.debug("No custom shelves found in project settings.") return + # Get Template data + template_data = get_current_context_template_data() + for shelf_set_config in shelves_set_config: shelf_set_filepath = shelf_set_config.get('shelf_set_source_path') shelf_set_os_filepath = shelf_set_filepath[current_os] if shelf_set_os_filepath: + shelf_set_os_filepath = get_path_using_template_data( + shelf_set_os_filepath, template_data + ) if not os.path.isfile(shelf_set_os_filepath): log.error("Shelf path doesn't exist - " "{}".format(shelf_set_os_filepath)) @@ -81,7 +90,9 @@ def generate_shelves(): "script path of the tool.") continue - tool = get_or_create_tool(tool_definition, shelf) + tool = get_or_create_tool( + tool_definition, shelf, template_data + ) if not tool: continue @@ -144,7 +155,7 @@ def get_or_create_shelf(shelf_label): return new_shelf -def get_or_create_tool(tool_definition, shelf): +def get_or_create_tool(tool_definition, shelf, template_data): """This function verifies if the tool exists and updates it. If not, creates a new one. 
@@ -162,10 +173,16 @@ def get_or_create_tool(tool_definition, shelf): return script_path = tool_definition["script"] + script_path = get_path_using_template_data(script_path, template_data) if not script_path or not os.path.exists(script_path): log.warning("This path doesn't exist - {}".format(script_path)) return + icon_path = tool_definition["icon"] + if icon_path: + icon_path = get_path_using_template_data(icon_path, template_data) + tool_definition["icon"] = icon_path + existing_tools = shelf.tools() existing_tool = next( (tool for tool in existing_tools if tool.label() == tool_label), @@ -184,3 +201,10 @@ def get_or_create_tool(tool_definition, shelf): tool_name = re.sub(r"[^\w\d]+", "_", tool_label).lower() return hou.shelves.newTool(name=tool_name, **tool_definition) + + +def get_path_using_template_data(path, template_data): + path = StringTemplate.format_template(path, template_data) + path = path.replace("\\", "/") + + return path diff --git a/openpype/hosts/houdini/plugins/load/show_usdview.py b/openpype/hosts/houdini/plugins/load/show_usdview.py index 7b03a0738a..d56c4acc4f 100644 --- a/openpype/hosts/houdini/plugins/load/show_usdview.py +++ b/openpype/hosts/houdini/plugins/load/show_usdview.py @@ -1,4 +1,5 @@ import os +import platform import subprocess from openpype.lib.vendor_bin_utils import find_executable @@ -8,17 +9,31 @@ from openpype.pipeline import load class ShowInUsdview(load.LoaderPlugin): """Open USD file in usdview""" - families = ["colorbleed.usd"] label = "Show in usdview" - representations = ["usd", "usda", "usdlc", "usdnc"] - order = 10 + representations = ["*"] + families = ["*"] + extensions = {"usd", "usda", "usdlc", "usdnc", "abc"} + order = 15 icon = "code-fork" color = "white" def load(self, context, name=None, namespace=None, data=None): + from pathlib import Path - usdview = find_executable("usdview") + if platform.system() == "Windows": + executable = "usdview.bat" + else: + executable = "usdview" + + usdview = find_executable(executable) + if not usdview: + raise RuntimeError("Unable to find usdview") + + # For some reason Windows can return the path like: + # C:/PROGRA~1/SIDEEF~1/HOUDIN~1.435/bin/usdview + # convert to resolved path so `subprocess` can take it + usdview = str(Path(usdview).resolve().as_posix()) filepath = self.filepath_from_context(context) filepath = os.path.normpath(filepath) @@ -30,14 +45,4 @@ class ShowInUsdview(load.LoaderPlugin): self.log.info("Start houdini variant of usdview...") - # For now avoid some pipeline environment variables that initialize - # Avalon in Houdini as it is redundant for usdview and slows boot time - env = os.environ.copy() - env.pop("PYTHONPATH", None) - env.pop("HOUDINI_SCRIPT_PATH", None) - env.pop("HOUDINI_MENU_PATH", None) - - # Force string to avoid unicode issues - env = {str(key): str(value) for key, value in env.items()} - - subprocess.Popen([usdview, filepath, "--renderer", "GL"], env=env) + subprocess.Popen([usdview, filepath, "--renderer", "GL"]) diff --git a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py index 43b8428c60..b489f83b29 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_arnold_rop.py @@ -20,7 +20,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): """ label = "Arnold ROP Render Products" - order = pyblish.api.CollectorOrder + 0.4 + # This specific order value is used so that + # this plugin runs after 
CollectRopFrameRange + order = pyblish.api.CollectorOrder + 0.4999 hosts = ["houdini"] families = ["arnold_rop"] @@ -126,8 +128,9 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin): return path expected_files = [] - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + for i in range(int(start), (int(end) + 1)): expected_files.append( os.path.join(dir, (file % i)).replace("\\", "/")) diff --git a/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py b/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py deleted file mode 100644 index 584343cd64..0000000000 --- a/openpype/hosts/houdini/plugins/publish/collect_instance_frame_data.py +++ /dev/null @@ -1,56 +0,0 @@ -import hou - -import pyblish.api - - -class CollectInstanceNodeFrameRange(pyblish.api.InstancePlugin): - """Collect time range frame data for the instance node.""" - - order = pyblish.api.CollectorOrder + 0.001 - label = "Instance Node Frame Range" - hosts = ["houdini"] - - def process(self, instance): - - node_path = instance.data.get("instance_node") - node = hou.node(node_path) if node_path else None - if not node_path or not node: - self.log.debug("No instance node found for instance: " - "{}".format(instance)) - return - - frame_data = self.get_frame_data(node) - if not frame_data: - return - - self.log.info("Collected time data: {}".format(frame_data)) - instance.data.update(frame_data) - - def get_frame_data(self, node): - """Get the frame data: start frame, end frame and steps - Args: - node(hou.Node) - - Returns: - dict - - """ - - data = {} - - if node.parm("trange") is None: - self.log.debug("Node has no 'trange' parameter: " - "{}".format(node.path())) - return data - - if node.evalParm("trange") == 0: - # Ignore 'render current frame' - self.log.debug("Node '{}' has 'Render current frame' set. 
" - "Time range data ignored.".format(node.path())) - return data - - data["frameStart"] = node.evalParm("f1") - data["frameEnd"] = node.evalParm("f2") - data["byFrameStep"] = node.evalParm("f3") - - return data diff --git a/openpype/hosts/houdini/plugins/publish/collect_instances.py b/openpype/hosts/houdini/plugins/publish/collect_instances.py index 3772c9e705..52966fb3c2 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_instances.py +++ b/openpype/hosts/houdini/plugins/publish/collect_instances.py @@ -91,27 +91,3 @@ class CollectInstances(pyblish.api.ContextPlugin): context[:] = sorted(context, key=sort_by_family) return context - - def get_frame_data(self, node): - """Get the frame data: start frame, end frame and steps - Args: - node(hou.Node) - - Returns: - dict - - """ - - data = {} - - if node.parm("trange") is None: - return data - - if node.evalParm("trange") == 0: - return data - - data["frameStart"] = node.evalParm("f1") - data["frameEnd"] = node.evalParm("f2") - data["byFrameStep"] = node.evalParm("f3") - - return data diff --git a/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py b/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py index eabb1128d8..fe0b8711fc 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_karma_rop.py @@ -24,7 +24,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin): """ label = "Karma ROP Render Products" - order = pyblish.api.CollectorOrder + 0.4 + # This specific order value is used so that + # this plugin runs after CollectRopFrameRange + order = pyblish.api.CollectorOrder + 0.4999 hosts = ["houdini"] families = ["karma_rop"] @@ -95,8 +97,9 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin): return path expected_files = [] - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + for i in range(int(start), (int(end) + 1)): expected_files.append( os.path.join(dir, (file % i)).replace("\\", "/")) diff --git a/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py index c4460f5350..cc412f30a1 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_mantra_rop.py @@ -24,7 +24,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin): """ label = "Mantra ROP Render Products" - order = pyblish.api.CollectorOrder + 0.4 + # This specific order value is used so that + # this plugin runs after CollectRopFrameRange + order = pyblish.api.CollectorOrder + 0.4999 hosts = ["houdini"] families = ["mantra_rop"] @@ -118,8 +120,9 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin): return path expected_files = [] - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + for i in range(int(start), (int(end) + 1)): expected_files.append( os.path.join(dir, (file % i)).replace("\\", "/")) diff --git a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py index dbb15ab88f..deb9eac971 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_redshift_rop.py @@ -24,7 +24,9 @@ class 
CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): """ label = "Redshift ROP Render Products" - order = pyblish.api.CollectorOrder + 0.4 + # This specific order value is used so that + # this plugin runs after CollectRopFrameRange + order = pyblish.api.CollectorOrder + 0.4999 hosts = ["houdini"] families = ["redshift_rop"] @@ -132,8 +134,9 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin): return path expected_files = [] - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + for i in range(int(start), (int(end) + 1)): expected_files.append( os.path.join(dir, (file % i)).replace("\\", "/")) diff --git a/openpype/hosts/houdini/plugins/publish/collect_rop_frame_range.py b/openpype/hosts/houdini/plugins/publish/collect_rop_frame_range.py index 2a6be6b9f1..186244fedd 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_rop_frame_range.py +++ b/openpype/hosts/houdini/plugins/publish/collect_rop_frame_range.py @@ -2,40 +2,106 @@ """Collector plugin for frames data on ROP instances.""" import hou # noqa import pyblish.api +from openpype.lib import BoolDef from openpype.hosts.houdini.api import lib +from openpype.pipeline import OpenPypePyblishPluginMixin -class CollectRopFrameRange(pyblish.api.InstancePlugin): +class CollectRopFrameRange(pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin): + """Collect all frames which would be saved from the ROP nodes""" - order = pyblish.api.CollectorOrder + hosts = ["houdini"] + # This specific order value is used so that + # this plugin runs after CollectAnatomyInstanceData + order = pyblish.api.CollectorOrder + 0.499 label = "Collect RopNode Frame Range" + use_asset_handles = True def process(self, instance): node_path = instance.data.get("instance_node") if node_path is None: # Instance without instance node like a workfile instance + self.log.debug( + "No instance node found for instance: {}".format(instance) + ) return ropnode = hou.node(node_path) - frame_data = lib.get_frame_data(ropnode) - if "frameStart" in frame_data and "frameEnd" in frame_data: + attr_values = self.get_attr_values_from_data(instance.data) - # Log artist friendly message about the collected frame range - message = ( - "Frame range {0[frameStart]} - {0[frameEnd]}" - ).format(frame_data) - if frame_data.get("step", 1.0) != 1.0: - message += " with step {0[step]}".format(frame_data) - self.log.info(message) + if attr_values.get("use_handles", self.use_asset_handles): + asset_data = instance.data["assetEntity"]["data"] + handle_start = asset_data.get("handleStart", 0) + handle_end = asset_data.get("handleEnd", 0) + else: + handle_start = 0 + handle_end = 0 - instance.data.update(frame_data) + frame_data = lib.get_frame_data( + ropnode, handle_start, handle_end, self.log + ) - # Add frame range to label if the instance has a frame range. 
- label = instance.data.get("label", instance.data["name"]) - instance.data["label"] = ( - "{0} [{1[frameStart]} - {1[frameEnd]}]".format(label, - frame_data) + if not frame_data: + return + + # Log debug message about the collected frame range + frame_start = frame_data["frameStart"] + frame_end = frame_data["frameEnd"] + + if attr_values.get("use_handles", self.use_asset_handles): + self.log.debug( + "Full Frame range with Handles " + "[{frame_start_handle} - {frame_end_handle}]" + .format( + frame_start_handle=frame_data["frameStartHandle"], + frame_end_handle=frame_data["frameEndHandle"] + ) ) + else: + self.log.debug( + "Use handles is deactivated for this instance, " + "start and end handles are set to 0." + ) + + # Log collected frame range to the user + message = "Frame range [{frame_start} - {frame_end}]".format( + frame_start=frame_start, + frame_end=frame_end + ) + if handle_start or handle_end: + message += " with handles [{handle_start}]-[{handle_end}]".format( + handle_start=handle_start, + handle_end=handle_end + ) + self.log.info(message) + + if frame_data.get("byFrameStep", 1.0) != 1.0: + self.log.info("Frame steps {}".format(frame_data["byFrameStep"])) + + instance.data.update(frame_data) + + # Add frame range to label if the instance has a frame range. + label = instance.data.get("label", instance.data["name"]) + instance.data["label"] = ( + "{label} [{frame_start} - {frame_end}]" + .format( + label=label, + frame_start=frame_start, + frame_end=frame_end + ) + ) + + @classmethod + def get_attribute_defs(cls): + return [ + BoolDef("use_handles", + tooltip="Disable this if you want the publisher to" + " ignore start and end handles specified in the" + " asset data for this publish instance", + default=cls.use_asset_handles, + label="Use asset handles") + ] diff --git a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py index 277f922ba4..53072aebc6 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py +++ b/openpype/hosts/houdini/plugins/publish/collect_vray_rop.py @@ -24,7 +24,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin): """ label = "VRay ROP Render Products" - order = pyblish.api.CollectorOrder + 0.4 + # This specific order value is used so that + # this plugin runs after CollectRopFrameRange + order = pyblish.api.CollectorOrder + 0.4999 hosts = ["houdini"] families = ["vray_rop"] @@ -115,8 +117,9 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin): return path expected_files = [] - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + for i in range(int(start), (int(end) + 1)): expected_files.append( os.path.join(dir, (file % i)).replace("\\", "/")) diff --git a/openpype/hosts/houdini/plugins/publish/validate_frame_range.py b/openpype/hosts/houdini/plugins/publish/validate_frame_range.py new file mode 100644 index 0000000000..6a66f3de9f --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_frame_range.py @@ -0,0 +1,96 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from openpype.pipeline.publish import RepairAction +from openpype.hosts.houdini.api.action import SelectInvalidAction + +import hou + + +class DisableUseAssetHandlesAction(RepairAction): + label = "Disable use asset handles" + icon = "mdi.toggle-switch-off" + + +class ValidateFrameRange(pyblish.api.InstancePlugin): + 
"""Validate Frame Range. + + Due to the usage of start and end handles, + then Frame Range must be >= (start handle + end handle) + which results that frameEnd be smaller than frameStart + """ + + order = pyblish.api.ValidatorOrder - 0.1 + hosts = ["houdini"] + label = "Validate Frame Range" + actions = [DisableUseAssetHandlesAction, SelectInvalidAction] + + def process(self, instance): + + invalid = self.get_invalid(instance) + if invalid: + raise PublishValidationError( + title="Invalid Frame Range", + message=( + "Invalid frame range because the instance " + "start frame ({0[frameStart]}) is higher than " + "the end frame ({0[frameEnd]})" + .format(instance.data) + ), + description=( + "## Invalid Frame Range\n" + "The frame range for the instance is invalid because " + "the start frame is higher than the end frame.\n\nThis " + "is likely due to asset handles being applied to your " + "instance or the ROP node's start frame " + "is set higher than the end frame.\n\nIf your ROP frame " + "range is correct and you do not want to apply asset " + "handles make sure to disable Use asset handles on the " + "publish instance." + ) + ) + + @classmethod + def get_invalid(cls, instance): + + if not instance.data.get("instance_node"): + return + + rop_node = hou.node(instance.data["instance_node"]) + if instance.data["frameStart"] > instance.data["frameEnd"]: + cls.log.info( + "The ROP node render range is set to " + "{0[frameStartHandle]} - {0[frameEndHandle]} " + "The asset handles applied to the instance are start handle " + "{0[handleStart]} and end handle {0[handleEnd]}" + .format(instance.data) + ) + return [rop_node] + + @classmethod + def repair(cls, instance): + + if not cls.get_invalid(instance): + # Already fixed + return + + # Disable use asset handles + context = instance.context + create_context = context.data["create_context"] + instance_id = instance.data.get("instance_id") + if not instance_id: + cls.log.debug("'{}' must have instance id" + .format(instance)) + return + + created_instance = create_context.get_instance_by_id(instance_id) + if not instance_id: + cls.log.debug("Unable to find instance '{}' by id" + .format(instance)) + return + + created_instance.publish_attributes["CollectRopFrameRange"]["use_handles"] = False # noqa + + create_context.save_changes() + cls.log.debug("use asset handles is turned off for '{}'" + .format(instance)) diff --git a/openpype/hosts/houdini/startup/MainMenuCommon.xml b/openpype/hosts/houdini/startup/MainMenuCommon.xml index 5818a117eb..b2e32a70f9 100644 --- a/openpype/hosts/houdini/startup/MainMenuCommon.xml +++ b/openpype/hosts/houdini/startup/MainMenuCommon.xml @@ -86,6 +86,14 @@ openpype.hosts.houdini.api.lib.reset_framerange() ]]> + + + + + diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py index 8b70b3ced7..cbaf8a0c33 100644 --- a/openpype/hosts/max/api/lib.py +++ b/openpype/hosts/max/api/lib.py @@ -234,27 +234,40 @@ def reset_scene_resolution(): set_scene_resolution(width, height) -def get_frame_range() -> Union[Dict[str, Any], None]: +def get_frame_range(asset_doc=None) -> Union[Dict[str, Any], None]: """Get the current assets frame range and handles. + Args: + asset_doc (dict): Asset Entity Data + Returns: dict: with frame start, frame end, handle start, handle end. 
""" # Set frame start/end - asset = get_current_project_asset() - frame_start = asset["data"].get("frameStart") - frame_end = asset["data"].get("frameEnd") + if asset_doc is None: + asset_doc = get_current_project_asset() + + data = asset_doc["data"] + frame_start = data.get("frameStart") + frame_end = data.get("frameEnd") if frame_start is None or frame_end is None: - return + return {} + + frame_start = int(frame_start) + frame_end = int(frame_end) + handle_start = int(data.get("handleStart", 0)) + handle_end = int(data.get("handleEnd", 0)) + frame_start_handle = frame_start - handle_start + frame_end_handle = frame_end + handle_end - handle_start = asset["data"].get("handleStart", 0) - handle_end = asset["data"].get("handleEnd", 0) return { "frameStart": frame_start, "frameEnd": frame_end, "handleStart": handle_start, - "handleEnd": handle_end + "handleEnd": handle_end, + "frameStartHandle": frame_start_handle, + "frameEndHandle": frame_end_handle, } @@ -274,12 +287,11 @@ def reset_frame_range(fps: bool = True): fps_number = float(data_fps["data"]["fps"]) rt.frameRate = fps_number frame_range = get_frame_range() - frame_start_handle = frame_range["frameStart"] - int( - frame_range["handleStart"] - ) - frame_end_handle = frame_range["frameEnd"] + int(frame_range["handleEnd"]) - set_timeline(frame_start_handle, frame_end_handle) - set_render_frame_range(frame_start_handle, frame_end_handle) + + set_timeline( + frame_range["frameStartHandle"], frame_range["frameEndHandle"]) + set_render_frame_range( + frame_range["frameStartHandle"], frame_range["frameEndHandle"]) def set_context_setting(): @@ -321,21 +333,6 @@ def is_headless(): return rt.maxops.isInNonInteractiveMode() -@contextlib.contextmanager -def viewport_camera(camera): - original = rt.viewport.getCamera() - if not original: - # if there is no original camera - # use the current camera as original - original = rt.getNodeByName(camera) - review_camera = rt.getNodeByName(camera) - try: - rt.viewport.setCamera(review_camera) - yield - finally: - rt.viewport.setCamera(original) - - def set_timeline(frameStart, frameEnd): """Set frame range for timeline editor in Max """ @@ -497,3 +494,22 @@ def get_plugins() -> list: plugin_info_list.append(plugin_info) return plugin_info_list + + +@contextlib.contextmanager +def render_resolution(width, height): + """Set render resolution option during context + + Args: + width (int): render width + height (int): render height + """ + current_renderWidth = rt.renderWidth + current_renderHeight = rt.renderHeight + try: + rt.renderWidth = width + rt.renderHeight = height + yield + finally: + rt.renderWidth = current_renderWidth + rt.renderHeight = current_renderHeight diff --git a/openpype/hosts/max/api/preview_animation.py b/openpype/hosts/max/api/preview_animation.py new file mode 100644 index 0000000000..1bf99b86d0 --- /dev/null +++ b/openpype/hosts/max/api/preview_animation.py @@ -0,0 +1,309 @@ +import logging +import contextlib +from pymxs import runtime as rt +from .lib import get_max_version, render_resolution + +log = logging.getLogger("openpype.hosts.max") + + +@contextlib.contextmanager +def play_preview_when_done(has_autoplay): + """Set preview playback option during context + + Args: + has_autoplay (bool): autoplay during creating + preview animation + """ + current_playback = rt.preferences.playPreviewWhenDone + try: + rt.preferences.playPreviewWhenDone = has_autoplay + yield + finally: + rt.preferences.playPreviewWhenDone = current_playback + + +@contextlib.contextmanager +def 
viewport_camera(camera): + """Set viewport camera during context + ***For 3dsMax 2024+ + Args: + camera (str): viewport camera + """ + original = rt.viewport.getCamera() + if not original: + # if there is no original camera + # use the current camera as original + original = rt.getNodeByName(camera) + review_camera = rt.getNodeByName(camera) + try: + rt.viewport.setCamera(review_camera) + yield + finally: + rt.viewport.setCamera(original) + + +@contextlib.contextmanager +def viewport_preference_setting(general_viewport, + nitrous_viewport, + vp_button_mgr): + """Function to set viewport setting during context + ***For Max Version < 2024 + Args: + camera (str): Viewport camera for review render + general_viewport (dict): General viewport setting + nitrous_viewport (dict): Nitrous setting for + preview animation + vp_button_mgr (dict): Viewport button manager Setting + preview_preferences (dict): Preview Preferences Setting + """ + orig_vp_grid = rt.viewport.getGridVisibility(1) + orig_vp_bkg = rt.viewport.IsSolidBackgroundColorMode() + + nitrousGraphicMgr = rt.NitrousGraphicsManager + viewport_setting = nitrousGraphicMgr.GetActiveViewportSetting() + vp_button_mgr_original = { + key: getattr(rt.ViewportButtonMgr, key) for key in vp_button_mgr + } + nitrous_viewport_original = { + key: getattr(viewport_setting, key) for key in nitrous_viewport + } + + try: + rt.viewport.setGridVisibility(1, general_viewport["dspGrid"]) + rt.viewport.EnableSolidBackgroundColorMode(general_viewport["dspBkg"]) + for key, value in vp_button_mgr.items(): + setattr(rt.ViewportButtonMgr, key, value) + for key, value in nitrous_viewport.items(): + if nitrous_viewport[key] != nitrous_viewport_original[key]: + setattr(viewport_setting, key, value) + yield + + finally: + rt.viewport.setGridVisibility(1, orig_vp_grid) + rt.viewport.EnableSolidBackgroundColorMode(orig_vp_bkg) + for key, value in vp_button_mgr_original.items(): + setattr(rt.ViewportButtonMgr, key, value) + for key, value in nitrous_viewport_original.items(): + setattr(viewport_setting, key, value) + + +def _render_preview_animation_max_2024( + filepath, start, end, percentSize, ext, viewport_options): + """Render viewport preview with MaxScript using `CreateAnimation`. + ****For 3dsMax 2024+ + Args: + filepath (str): filepath for render output without frame number and + extension, for example: /path/to/file + start (int): startFrame + end (int): endFrame + percentSize (float): render resolution multiplier by 100 + e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x + viewport_options (dict): viewport setting options, e.g. 
+ {"vpStyle": "defaultshading", "vpPreset": "highquality"} + Returns: + list: Created files + """ + # the percentSize argument must be integer + percent = int(percentSize) + filepath = filepath.replace("\\", "/") + preview_output = f"{filepath}..{ext}" + frame_template = f"{filepath}.{{:04d}}.{ext}" + job_args = [] + for key, value in viewport_options.items(): + if isinstance(value, bool): + if value: + job_args.append(f"{key}:{value}") + elif isinstance(value, str): + if key == "vpStyle": + if value == "Realistic": + value = "defaultshading" + elif value == "Shaded": + log.warning( + "'Shaded' Mode not supported in " + "preview animation in Max 2024.\n" + "Using 'defaultshading' instead.") + value = "defaultshading" + elif value == "ConsistentColors": + value = "flatcolor" + else: + value = value.lower() + elif key == "vpPreset": + if value == "Quality": + value = "highquality" + elif value == "Customize": + value = "userdefined" + else: + value = value.lower() + job_args.append(f"{key}: #{value}") + + job_str = ( + f'CreatePreview filename:"{preview_output}" outputAVI:false ' + f"percentSize:{percent} start:{start} end:{end} " + f"{' '.join(job_args)} " + "autoPlay:false" + ) + rt.completeRedraw() + rt.execute(job_str) + # Return the created files + return [frame_template.format(frame) for frame in range(start, end + 1)] + + +def _render_preview_animation_max_pre_2024( + filepath, startFrame, endFrame, percentSize, ext): + """Render viewport animation by creating bitmaps + ***For 3dsMax Version <2024 + Args: + filepath (str): filepath without frame numbers and extension + startFrame (int): start frame + endFrame (int): end frame + percentSize (float): render resolution multiplier by 100 + e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x + ext (str): image extension + Returns: + list: Created filepaths + """ + # get the screenshot + percent = percentSize / 100.0 + res_width = int(round(rt.renderWidth * percent)) + res_height = int(round(rt.renderHeight * percent)) + viewportRatio = float(res_width / res_height) + frame_template = "{}.{{:04}}.{}".format(filepath, ext) + frame_template.replace("\\", "/") + files = [] + user_cancelled = False + for frame in range(startFrame, endFrame + 1): + rt.sliderTime = frame + filepath = frame_template.format(frame) + preview_res = rt.bitmap( + res_width, res_height, filename=filepath + ) + dib = rt.gw.getViewportDib() + dib_width = float(dib.width) + dib_height = float(dib.height) + renderRatio = float(dib_width / dib_height) + if viewportRatio <= renderRatio: + heightCrop = (dib_width / renderRatio) + topEdge = int((dib_height - heightCrop) / 2.0) + tempImage_bmp = rt.bitmap(dib_width, heightCrop) + src_box_value = rt.Box2(0, topEdge, dib_width, heightCrop) + else: + widthCrop = dib_height * renderRatio + leftEdge = int((dib_width - widthCrop) / 2.0) + tempImage_bmp = rt.bitmap(widthCrop, dib_height) + src_box_value = rt.Box2(0, leftEdge, dib_width, dib_height) + rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0)) + # copy the bitmap and close it + rt.copy(tempImage_bmp, preview_res) + rt.close(tempImage_bmp) + rt.save(preview_res) + rt.close(preview_res) + rt.close(dib) + files.append(filepath) + if rt.keyboard.escPressed: + user_cancelled = True + break + # clean up the cache + rt.gc(delayed=True) + if user_cancelled: + raise RuntimeError("User cancelled rendering of viewport animation.") + return files + + +def render_preview_animation( + filepath, + ext, + camera, + start_frame=None, + end_frame=None, + percentSize=100.0, + width=1920, 
+ height=1080, + viewport_options=None): + """Render camera review animation + Args: + filepath (str): filepath to render to, without frame number and + extension + ext (str): output file extension + camera (str): viewport camera for preview render + start_frame (int): start frame + end_frame (int): end frame + percentSize (float): render resolution multiplier by 100 + e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x + width (int): render resolution width + height (int): render resolution height + viewport_options (dict): viewport setting options + Returns: + list: Rendered output files + """ + if start_frame is None: + start_frame = int(rt.animationRange.start) + if end_frame is None: + end_frame = int(rt.animationRange.end) + + if viewport_options is None: + viewport_options = viewport_options_for_preview_animation() + with play_preview_when_done(False): + with viewport_camera(camera): + with render_resolution(width, height): + if int(get_max_version()) < 2024: + with viewport_preference_setting( + viewport_options["general_viewport"], + viewport_options["nitrous_viewport"], + viewport_options["vp_btn_mgr"] + ): + return _render_preview_animation_max_pre_2024( + filepath, + start_frame, + end_frame, + percentSize, + ext + ) + else: + return _render_preview_animation_max_2024( + filepath, + start_frame, + end_frame, + percentSize, + ext, + viewport_options + ) + + +def viewport_options_for_preview_animation(): + """Get default viewport options for `render_preview_animation`. + + Returns: + dict: viewport setting options + """ + # viewport_options should be the dictionary + if int(get_max_version()) < 2024: + return { + "visualStyleMode": "defaultshading", + "viewportPreset": "highquality", + "vpTexture": False, + "dspGeometry": True, + "dspShapes": False, + "dspLights": False, + "dspCameras": False, + "dspHelpers": False, + "dspParticles": True, + "dspBones": False, + "dspBkg": True, + "dspGrid": False, + "dspSafeFrame": False, + "dspFrameNums": False + } + else: + viewport_options = {} + viewport_options["general_viewport"] = { + "dspBkg": True, + "dspGrid": False + } + viewport_options["nitrous_viewport"] = { + "VisualStyleMode": "defaultshading", + "ViewportPreset": "highquality", + "UseTextureEnabled": False + } + viewport_options["vp_btn_mgr"] = { + "EnableButtons": False} + return viewport_options diff --git a/openpype/hosts/max/plugins/create/create_review.py b/openpype/hosts/max/plugins/create/create_review.py index 5737114fcc..8052b74f06 100644 --- a/openpype/hosts/max/plugins/create/create_review.py +++ b/openpype/hosts/max/plugins/create/create_review.py @@ -13,31 +13,50 @@ class CreateReview(plugin.MaxCreator): icon = "video-camera" def create(self, subset_name, instance_data, pre_create_data): - - instance_data["imageFormat"] = pre_create_data.get("imageFormat") - instance_data["keepImages"] = pre_create_data.get("keepImages") - instance_data["percentSize"] = pre_create_data.get("percentSize") - instance_data["rndLevel"] = pre_create_data.get("rndLevel") + # Transfer settings from pre create to instance + creator_attributes = instance_data.setdefault( + "creator_attributes", dict()) + for key in ["imageFormat", + "keepImages", + "review_width", + "review_height", + "percentSize", + "visualStyleMode", + "viewportPreset", + "vpTexture"]: + if key in pre_create_data: + creator_attributes[key] = pre_create_data[key] super(CreateReview, self).create( subset_name, instance_data, pre_create_data) - def get_pre_create_attr_defs(self): - attrs = super(CreateReview, 
self).get_pre_create_attr_defs() + def get_instance_attr_defs(self): + image_format_enum = ["exr", "jpg", "png"] - image_format_enum = [ - "bmp", "cin", "exr", "jpg", "hdr", "rgb", "png", - "rla", "rpf", "dds", "sgi", "tga", "tif", "vrimg" + visual_style_preset_enum = [ + "Realistic", "Shaded", "Facets", + "ConsistentColors", "HiddenLine", + "Wireframe", "BoundingBox", "Ink", + "ColorInk", "Acrylic", "Tech", "Graphite", + "ColorPencil", "Pastel", "Clay", "ModelAssist" ] + preview_preset_enum = [ + "Quality", "Standard", "Performance", + "DXMode", "Customize"] - rndLevel_enum = [ - "smoothhighlights", "smooth", "facethighlights", - "facet", "flat", "litwireframe", "wireframe", "box" - ] - - return attrs + [ + return [ + NumberDef("review_width", + label="Review width", + decimals=0, + minimum=0, + default=1920), + NumberDef("review_height", + label="Review height", + decimals=0, + minimum=0, + default=1080), BoolDef("keepImages", label="Keep Image Sequences", default=False), @@ -50,8 +69,20 @@ class CreateReview(plugin.MaxCreator): default=100, minimum=1, decimals=0), - EnumDef("rndLevel", - rndLevel_enum, - default="smoothhighlights", - label="Preference") + EnumDef("visualStyleMode", + visual_style_preset_enum, + default="Realistic", + label="Preference"), + EnumDef("viewportPreset", + preview_preset_enum, + default="Quality", + label="Pre-View Preset"), + BoolDef("vpTexture", + label="Viewport Texture", + default=False) ] + + def get_pre_create_attr_defs(self): + # Use same attributes as for instance attributes + attrs = super().get_pre_create_attr_defs() + return attrs + self.get_instance_attr_defs() diff --git a/openpype/hosts/max/plugins/create/create_tycache.py b/openpype/hosts/max/plugins/create/create_tycache.py new file mode 100644 index 0000000000..92d12e012f --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_tycache.py @@ -0,0 +1,11 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating TyCache.""" +from openpype.hosts.max.api import plugin + + +class CreateTyCache(plugin.MaxCreator): + """Creator plugin for TyCache.""" + identifier = "io.openpype.creators.max.tycache" + label = "TyCache" + family = "tycache" + icon = "gear" diff --git a/openpype/hosts/max/plugins/load/load_tycache.py b/openpype/hosts/max/plugins/load/load_tycache.py new file mode 100644 index 0000000000..41ea267c3d --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_tycache.py @@ -0,0 +1,64 @@ +import os +from openpype.hosts.max.api import lib, maintained_selection +from openpype.hosts.max.api.lib import ( + unique_namespace, + +) +from openpype.hosts.max.api.pipeline import ( + containerise, + get_previous_loaded_object, + update_custom_attribute_data +) +from openpype.pipeline import get_representation_path, load + + +class TyCacheLoader(load.LoaderPlugin): + """TyCache Loader.""" + + families = ["tycache"] + representations = ["tyc"] + order = -8 + icon = "code-fork" + color = "green" + + def load(self, context, name=None, namespace=None, data=None): + """Load tyCache""" + from pymxs import runtime as rt + filepath = os.path.normpath(self.filepath_from_context(context)) + obj = rt.tyCache() + obj.filename = filepath + + namespace = unique_namespace( + name + "_", + suffix="_", + ) + obj.name = f"{namespace}:{obj.name}" + + return containerise( + name, [obj], context, + namespace, loader=self.__class__.__name__) + + def update(self, container, representation): + """update the container""" + from pymxs import runtime as rt + + path = get_representation_path(representation) + node = 
rt.GetNodeByName(container["instance_node"]) + node_list = get_previous_loaded_object(node) + update_custom_attribute_data(node, node_list) + with maintained_selection(): + for tyc in node_list: + tyc.filename = path + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + """remove the container""" + from pymxs import runtime as rt + + node = rt.GetNodeByName(container["instance_node"]) + rt.Delete(node) diff --git a/openpype/hosts/max/plugins/publish/collect_frame_range.py b/openpype/hosts/max/plugins/publish/collect_frame_range.py new file mode 100644 index 0000000000..86fb6e856c --- /dev/null +++ b/openpype/hosts/max/plugins/publish/collect_frame_range.py @@ -0,0 +1,22 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from pymxs import runtime as rt + + +class CollectFrameRange(pyblish.api.InstancePlugin): + """Collect Frame Range.""" + + order = pyblish.api.CollectorOrder + 0.01 + label = "Collect Frame Range" + hosts = ['max'] + families = ["camera", "maxrender", + "pointcache", "pointcloud", + "review", "redshiftproxy"] + + def process(self, instance): + if instance.data["family"] == "maxrender": + instance.data["frameStartHandle"] = int(rt.rendStart) + instance.data["frameEndHandle"] = int(rt.rendEnd) + else: + instance.data["frameStartHandle"] = int(rt.animationRange.start) + instance.data["frameEndHandle"] = int(rt.animationRange.end) diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py index a359e61921..7765b3b924 100644 --- a/openpype/hosts/max/plugins/publish/collect_render.py +++ b/openpype/hosts/max/plugins/publish/collect_render.py @@ -14,7 +14,7 @@ from openpype.client import get_last_version_by_subset_name class CollectRender(pyblish.api.InstancePlugin): """Collect Render for Deadline""" - order = pyblish.api.CollectorOrder + 0.01 + order = pyblish.api.CollectorOrder + 0.02 label = "Collect 3dsmax Render Layers" hosts = ['max'] families = ["maxrender"] @@ -97,8 +97,8 @@ class CollectRender(pyblish.api.InstancePlugin): "renderer": renderer, "source": filepath, "plugin": "3dsmax", - "frameStart": int(rt.rendStart), - "frameEnd": int(rt.rendEnd), + "frameStart": instance.data["frameStartHandle"], + "frameEnd": instance.data["frameEndHandle"], "version": version_int, "farm": True } diff --git a/openpype/hosts/max/plugins/publish/collect_review.py b/openpype/hosts/max/plugins/publish/collect_review.py index 8e27a857d7..b1d9c2d25e 100644 --- a/openpype/hosts/max/plugins/publish/collect_review.py +++ b/openpype/hosts/max/plugins/publish/collect_review.py @@ -5,7 +5,10 @@ import pyblish.api from pymxs import runtime as rt from openpype.lib import BoolDef from openpype.hosts.max.api.lib import get_max_version -from openpype.pipeline.publish import OpenPypePyblishPluginMixin +from openpype.pipeline.publish import ( + OpenPypePyblishPluginMixin, + KnownPublishError +) class CollectReview(pyblish.api.InstancePlugin, @@ -19,30 +22,41 @@ class CollectReview(pyblish.api.InstancePlugin, def process(self, instance): nodes = instance.data["members"] - focal_length = None - camera_name = None - for node in nodes: - if rt.classOf(node) in rt.Camera.classes: - camera_name = node.name - focal_length = node.fov + def is_camera(node): + is_camera_class = rt.classOf(node) in rt.Camera.classes + return is_camera_class and rt.isProperty(node, "fov") + + # Use first camera 
in instance + cameras = [node for node in nodes if is_camera(node)] + if cameras: + if len(cameras) > 1: + self.log.warning( + "Found more than one camera in instance, using first " + f"one found: {cameras[0]}" + ) + camera = cameras[0] + camera_name = camera.name + focal_length = camera.fov + else: + raise KnownPublishError( + "Unable to find a valid camera in 'Review' container." + " Only native max Camera supported. " + f"Found objects: {nodes}" + ) + creator_attrs = instance.data["creator_attributes"] attr_values = self.get_attr_values_from_data(instance.data) - data = { + + general_preview_data = { "review_camera": camera_name, - "frameStart": instance.context.data["frameStart"], - "frameEnd": instance.context.data["frameEnd"], + "frameStart": instance.data["frameStartHandle"], + "frameEnd": instance.data["frameEndHandle"], + "percentSize": creator_attrs["percentSize"], + "imageFormat": creator_attrs["imageFormat"], + "keepImages": creator_attrs["keepImages"], "fps": instance.context.data["fps"], - "dspGeometry": attr_values.get("dspGeometry"), - "dspShapes": attr_values.get("dspShapes"), - "dspLights": attr_values.get("dspLights"), - "dspCameras": attr_values.get("dspCameras"), - "dspHelpers": attr_values.get("dspHelpers"), - "dspParticles": attr_values.get("dspParticles"), - "dspBones": attr_values.get("dspBones"), - "dspBkg": attr_values.get("dspBkg"), - "dspGrid": attr_values.get("dspGrid"), - "dspSafeFrame": attr_values.get("dspSafeFrame"), - "dspFrameNums": attr_values.get("dspFrameNums") + "review_width": creator_attrs["review_width"], + "review_height": creator_attrs["review_height"], } if int(get_max_version()) >= 2024: @@ -55,14 +69,46 @@ class CollectReview(pyblish.api.InstancePlugin, instance.data["colorspaceDisplay"] = display instance.data["colorspaceView"] = view_transform + preview_data = { + "vpStyle": creator_attrs["visualStyleMode"], + "vpPreset": creator_attrs["viewportPreset"], + "vpTextures": creator_attrs["vpTexture"], + "dspGeometry": attr_values.get("dspGeometry"), + "dspShapes": attr_values.get("dspShapes"), + "dspLights": attr_values.get("dspLights"), + "dspCameras": attr_values.get("dspCameras"), + "dspHelpers": attr_values.get("dspHelpers"), + "dspParticles": attr_values.get("dspParticles"), + "dspBones": attr_values.get("dspBones"), + "dspBkg": attr_values.get("dspBkg"), + "dspGrid": attr_values.get("dspGrid"), + "dspSafeFrame": attr_values.get("dspSafeFrame"), + "dspFrameNums": attr_values.get("dspFrameNums") + } + else: + general_viewport = { + "dspBkg": attr_values.get("dspBkg"), + "dspGrid": attr_values.get("dspGrid") + } + nitrous_viewport = { + "VisualStyleMode": creator_attrs["visualStyleMode"], + "ViewportPreset": creator_attrs["viewportPreset"], + "UseTextureEnabled": creator_attrs["vpTexture"] + } + preview_data = { + "general_viewport": general_viewport, + "nitrous_viewport": nitrous_viewport, + "vp_btn_mgr": {"EnableButtons": False} + } + # Enable ftrack functionality instance.data.setdefault("families", []).append('ftrack') burnin_members = instance.data.setdefault("burninDataMembers", {}) burnin_members["focalLength"] = focal_length - self.log.debug(f"data:{data}") - instance.data.update(data) + instance.data.update(general_preview_data) + instance.data["viewport_options"] = preview_data @classmethod def get_attribute_defs(cls): diff --git a/openpype/hosts/max/plugins/publish/collect_tycache_attributes.py b/openpype/hosts/max/plugins/publish/collect_tycache_attributes.py new file mode 100644 index 0000000000..0351ca45c5 --- /dev/null +++ 
b/openpype/hosts/max/plugins/publish/collect_tycache_attributes.py
@@ -0,0 +1,76 @@
+import pyblish.api
+
+from openpype.lib import EnumDef, TextDef
+from openpype.pipeline.publish import OpenPypePyblishPluginMixin
+
+
+class CollectTyCacheData(pyblish.api.InstancePlugin,
+                         OpenPypePyblishPluginMixin):
+    """Collect Channel Attributes for TyCache Export"""
+
+    order = pyblish.api.CollectorOrder + 0.02
+    label = "Collect tyCache attribute Data"
+    hosts = ['max']
+    families = ["tycache"]
+
+    def process(self, instance):
+        attr_values = self.get_attr_values_from_data(instance.data)
+        attributes = {}
+        for attr_key in attr_values.get("tycacheAttributes", []):
+            attributes[attr_key] = True
+
+        for key in ["tycacheLayer", "tycacheObjectName"]:
+            attributes[key] = attr_values.get(key, "")
+
+        # Collect the selected channel data before exporting
+        instance.data["tyc_attrs"] = attributes
+        self.log.debug(
+            f"Found tycache attributes: {attributes}"
+        )
+
+    @classmethod
+    def get_attribute_defs(cls):
+        # TODO: Support the attributes with maxObject array
+        tyc_attr_enum = ["tycacheChanAge", "tycacheChanGroups",
+                         "tycacheChanPos", "tycacheChanRot",
+                         "tycacheChanScale", "tycacheChanVel",
+                         "tycacheChanSpin", "tycacheChanShape",
+                         "tycacheChanMatID", "tycacheChanMapping",
+                         "tycacheChanMaterials", "tycacheChanCustomFloat",
+                         "tycacheChanCustomVector", "tycacheChanCustomTM",
+                         "tycacheChanPhysX", "tycacheMeshBackup",
+                         "tycacheCreateObject",
+                         "tycacheCreateObjectIfNotCreated",
+                         "tycacheAdditionalCloth",
+                         "tycacheAdditionalSkin",
+                         "tycacheAdditionalSkinID",
+                         "tycacheAdditionalSkinIDValue",
+                         "tycacheAdditionalTerrain",
+                         "tycacheAdditionalVDB",
+                         "tycacheAdditionalSplinePaths",
+                         "tycacheAdditionalGeo",
+                         "tycacheAdditionalGeoActivateModifiers",
+                         "tycacheSplines",
+                         "tycacheSplinesAdditionalSplines"
+                         ]
+        tyc_default_attrs = ["tycacheChanGroups", "tycacheChanPos",
+                             "tycacheChanRot", "tycacheChanScale",
+                             "tycacheChanVel", "tycacheChanShape",
+                             "tycacheChanMatID", "tycacheChanMapping",
+                             "tycacheChanMaterials",
+                             "tycacheCreateObjectIfNotCreated"]
+        return [
+            EnumDef("tycacheAttributes",
+                    tyc_attr_enum,
+                    default=tyc_default_attrs,
+                    multiselection=True,
+                    label="TyCache Attributes"),
+            TextDef("tycacheLayer",
+                    label="TyCache Layer",
+                    tooltip="Name of tycache layer",
+                    default="$(tyFlowLayer)"),
+            TextDef("tycacheObjectName",
+                    label="TyCache Object Name",
+                    tooltip="TyCache Object Name",
+                    default="$(tyFlowName)_tyCache")
+        ]
diff --git a/openpype/hosts/max/plugins/publish/extract_camera_abc.py b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
index b1918c53e0..a42f27be6e 100644
--- a/openpype/hosts/max/plugins/publish/extract_camera_abc.py
+++ b/openpype/hosts/max/plugins/publish/extract_camera_abc.py
@@ -19,8 +19,8 @@ class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
     def process(self, instance):
         if not self.is_active(instance.data):
             return
-        start = float(instance.data.get("frameStartHandle", 1))
-        end = float(instance.data.get("frameEndHandle", 1))
+        start = instance.data["frameStartHandle"]
+        end = instance.data["frameEndHandle"]
 
         self.log.info("Extracting Camera ...")
diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py
index c3de623bc0..f6a8500c08 100644
--- a/openpype/hosts/max/plugins/publish/extract_pointcache.py
+++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py
@@ -51,8 +51,8 @@ class ExtractAlembic(publish.Extractor):
     families = ["pointcache"]
 
     def process(self,
instance): - start = float(instance.data.get("frameStartHandle", 1)) - end = float(instance.data.get("frameEndHandle", 1)) + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] self.log.debug("Extracting pointcache ...") diff --git a/openpype/hosts/max/plugins/publish/extract_pointcloud.py b/openpype/hosts/max/plugins/publish/extract_pointcloud.py index 583bbb6dbd..d9fbe5e9dd 100644 --- a/openpype/hosts/max/plugins/publish/extract_pointcloud.py +++ b/openpype/hosts/max/plugins/publish/extract_pointcloud.py @@ -36,11 +36,12 @@ class ExtractPointCloud(publish.Extractor): label = "Extract Point Cloud" hosts = ["max"] families = ["pointcloud"] + settings = [] def process(self, instance): self.settings = self.get_setting(instance) - start = int(instance.context.data.get("frameStart")) - end = int(instance.context.data.get("frameEnd")) + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] self.log.info("Extracting PRT...") stagingdir = self.staging_dir(instance) diff --git a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py index f67ed30c6b..47ed85977b 100644 --- a/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py +++ b/openpype/hosts/max/plugins/publish/extract_redshift_proxy.py @@ -16,8 +16,8 @@ class ExtractRedshiftProxy(publish.Extractor): families = ["redshiftproxy"] def process(self, instance): - start = int(instance.context.data.get("frameStart")) - end = int(instance.context.data.get("frameEnd")) + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] self.log.debug("Extracting Redshift Proxy...") stagingdir = self.staging_dir(instance) diff --git a/openpype/hosts/max/plugins/publish/extract_review_animation.py b/openpype/hosts/max/plugins/publish/extract_review_animation.py index 8e06e52b5c..99dc5c5cdc 100644 --- a/openpype/hosts/max/plugins/publish/extract_review_animation.py +++ b/openpype/hosts/max/plugins/publish/extract_review_animation.py @@ -1,8 +1,9 @@ import os import pyblish.api -from pymxs import runtime as rt from openpype.pipeline import publish -from openpype.hosts.max.api.lib import viewport_camera, get_max_version +from openpype.hosts.max.api.preview_animation import ( + render_preview_animation +) class ExtractReviewAnimation(publish.Extractor): @@ -18,24 +19,26 @@ class ExtractReviewAnimation(publish.Extractor): def process(self, instance): staging_dir = self.staging_dir(instance) ext = instance.data.get("imageFormat") - filename = "{0}..{1}".format(instance.name, ext) start = int(instance.data["frameStart"]) end = int(instance.data["frameEnd"]) - fps = int(instance.data["fps"]) - filepath = os.path.join(staging_dir, filename) - filepath = filepath.replace("\\", "/") - filenames = self.get_files( - instance.name, start, end, ext) - + filepath = os.path.join(staging_dir, instance.name) self.log.debug( - "Writing Review Animation to" - " '%s' to '%s'" % (filename, staging_dir)) + "Writing Review Animation to '{}'".format(filepath)) review_camera = instance.data["review_camera"] - with viewport_camera(review_camera): - preview_arg = self.set_preview_arg( - instance, filepath, start, end, fps) - rt.execute(preview_arg) + viewport_options = instance.data.get("viewport_options", {}) + files = render_preview_animation( + filepath, + ext, + review_camera, + start, + end, + percentSize=instance.data["percentSize"], + width=instance.data["review_width"], + height=instance.data["review_height"], + 
viewport_options=viewport_options) + + filenames = [os.path.basename(path) for path in files] tags = ["review"] if not instance.data.get("keepImages"): @@ -48,8 +51,8 @@ class ExtractReviewAnimation(publish.Extractor): "ext": instance.data["imageFormat"], "files": filenames, "stagingDir": staging_dir, - "frameStart": instance.data["frameStart"], - "frameEnd": instance.data["frameEnd"], + "frameStart": instance.data["frameStartHandle"], + "frameEnd": instance.data["frameEndHandle"], "tags": tags, "preview": True, "camera_name": review_camera @@ -59,44 +62,3 @@ class ExtractReviewAnimation(publish.Extractor): if "representations" not in instance.data: instance.data["representations"] = [] instance.data["representations"].append(representation) - - def get_files(self, filename, start, end, ext): - file_list = [] - for frame in range(int(start), int(end) + 1): - actual_name = "{}.{:04}.{}".format( - filename, frame, ext) - file_list.append(actual_name) - - return file_list - - def set_preview_arg(self, instance, filepath, - start, end, fps): - job_args = list() - default_option = f'CreatePreview filename:"{filepath}"' - job_args.append(default_option) - frame_option = f"outputAVI:false start:{start} end:{end} fps:{fps}" # noqa - job_args.append(frame_option) - rndLevel = instance.data.get("rndLevel") - if rndLevel: - option = f"rndLevel:#{rndLevel}" - job_args.append(option) - options = [ - "percentSize", "dspGeometry", "dspShapes", - "dspLights", "dspCameras", "dspHelpers", "dspParticles", - "dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums" - ] - - for key in options: - enabled = instance.data.get(key) - if enabled: - job_args.append(f"{key}:{enabled}") - - if get_max_version() == 2024: - # hardcoded for current stage - auto_play_option = "autoPlay:false" - job_args.append(auto_play_option) - - job_str = " ".join(job_args) - self.log.debug(job_str) - - return job_str diff --git a/openpype/hosts/max/plugins/publish/extract_thumbnail.py b/openpype/hosts/max/plugins/publish/extract_thumbnail.py index 82f4fc7a8b..02fa75e032 100644 --- a/openpype/hosts/max/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/max/plugins/publish/extract_thumbnail.py @@ -1,14 +1,11 @@ import os -import tempfile import pyblish.api -from pymxs import runtime as rt from openpype.pipeline import publish -from openpype.hosts.max.api.lib import viewport_camera, get_max_version +from openpype.hosts.max.api.preview_animation import render_preview_animation class ExtractThumbnail(publish.Extractor): - """ - Extract Thumbnail for Review + """Extract Thumbnail for Review """ order = pyblish.api.ExtractorOrder @@ -17,34 +14,33 @@ class ExtractThumbnail(publish.Extractor): families = ["review"] def process(self, instance): - # TODO: Create temp directory for thumbnail - # - this is to avoid "override" of source file - tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_") - self.log.debug( - f"Create temp directory {tmp_staging} for thumbnail" - ) - fps = int(instance.data["fps"]) + ext = instance.data.get("imageFormat") frame = int(instance.data["frameStart"]) - instance.context.data["cleanupFullPaths"].append(tmp_staging) - filename = "{name}_thumbnail..png".format(**instance.data) - filepath = os.path.join(tmp_staging, filename) - filepath = filepath.replace("\\", "/") - thumbnail = self.get_filename(instance.name, frame) + staging_dir = self.staging_dir(instance) + filepath = os.path.join( + staging_dir, f"{instance.name}_thumbnail") + self.log.debug("Writing Thumbnail to '{}'".format(filepath)) - 
self.log.debug( - "Writing Thumbnail to" - " '%s' to '%s'" % (filename, tmp_staging)) review_camera = instance.data["review_camera"] - with viewport_camera(review_camera): - preview_arg = self.set_preview_arg( - instance, filepath, fps, frame) - rt.execute(preview_arg) + viewport_options = instance.data.get("viewport_options", {}) + files = render_preview_animation( + filepath, + ext, + review_camera, + start_frame=frame, + end_frame=frame, + percentSize=instance.data["percentSize"], + width=instance.data["review_width"], + height=instance.data["review_height"], + viewport_options=viewport_options) + + thumbnail = next(os.path.basename(path) for path in files) representation = { "name": "thumbnail", - "ext": "png", + "ext": ext, "files": thumbnail, - "stagingDir": tmp_staging, + "stagingDir": staging_dir, "thumbnail": True } @@ -53,39 +49,3 @@ class ExtractThumbnail(publish.Extractor): if "representations" not in instance.data: instance.data["representations"] = [] instance.data["representations"].append(representation) - - def get_filename(self, filename, target_frame): - thumbnail_name = "{}_thumbnail.{:04}.png".format( - filename, target_frame - ) - return thumbnail_name - - def set_preview_arg(self, instance, filepath, fps, frame): - job_args = list() - default_option = f'CreatePreview filename:"{filepath}"' - job_args.append(default_option) - frame_option = f"outputAVI:false start:{frame} end:{frame} fps:{fps}" # noqa - job_args.append(frame_option) - rndLevel = instance.data.get("rndLevel") - if rndLevel: - option = f"rndLevel:#{rndLevel}" - job_args.append(option) - options = [ - "percentSize", "dspGeometry", "dspShapes", - "dspLights", "dspCameras", "dspHelpers", "dspParticles", - "dspBones", "dspBkg", "dspGrid", "dspSafeFrame", "dspFrameNums" - ] - - for key in options: - enabled = instance.data.get(key) - if enabled: - job_args.append(f"{key}:{enabled}") - if get_max_version() == 2024: - # hardcoded for current stage - auto_play_option = "autoPlay:false" - job_args.append(auto_play_option) - - job_str = " ".join(job_args) - self.log.debug(job_str) - - return job_str diff --git a/openpype/hosts/max/plugins/publish/extract_tycache.py b/openpype/hosts/max/plugins/publish/extract_tycache.py new file mode 100644 index 0000000000..9bfe74f679 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_tycache.py @@ -0,0 +1,157 @@ +import os + +import pyblish.api +from pymxs import runtime as rt + +from openpype.hosts.max.api import maintained_selection +from openpype.pipeline import publish + + +class ExtractTyCache(publish.Extractor): + """Extract tycache format with tyFlow operators. + Notes: + - TyCache only works for TyFlow Pro Plugin. 
+ + Methods: + self.get_export_particles_job_args(): sets up all job arguments + for attributes to be exported in MAXscript + + self.get_operators(): get the export_particle operator + + self.get_files(): get the files with tyFlow naming convention + before publishing + """ + + order = pyblish.api.ExtractorOrder - 0.2 + label = "Extract TyCache" + hosts = ["max"] + families = ["tycache"] + + def process(self, instance): + # TODO: let user decide the param + start = int(instance.context.data["frameStart"]) + end = int(instance.context.data.get("frameEnd")) + self.log.debug("Extracting Tycache...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.tyc".format(**instance.data) + path = os.path.join(stagingdir, filename) + filenames = self.get_files(instance, start, end) + additional_attributes = instance.data.get("tyc_attrs", {}) + + with maintained_selection(): + job_args = self.get_export_particles_job_args( + instance.data["members"], + start, end, path, + additional_attributes) + for job in job_args: + rt.Execute(job) + representations = instance.data.setdefault("representations", []) + representation = { + 'name': 'tyc', + 'ext': 'tyc', + 'files': filenames if len(filenames) > 1 else filenames[0], + "stagingDir": stagingdir, + } + representations.append(representation) + + # Get the tyMesh filename for extraction + mesh_filename = f"{instance.name}__tyMesh.tyc" + mesh_repres = { + 'name': 'tyMesh', + 'ext': 'tyc', + 'files': mesh_filename, + "stagingDir": stagingdir, + "outputName": '__tyMesh' + } + representations.append(mesh_repres) + self.log.debug(f"Extracted instance '{instance.name}' to: {filenames}") + + def get_files(self, instance, start_frame, end_frame): + """Get file names for tyFlow in tyCache format. + + Set the filenames accordingly to the tyCache file + naming extension(.tyc) for the publishing purpose + + Actual File Output from tyFlow in tyCache format: + __tyPart_.tyc + + e.g. tycacheMain__tyPart_00000.tyc + + Args: + instance (pyblish.api.Instance): instance. + start_frame (int): Start frame. + end_frame (int): End frame. + + Returns: + filenames(list): list of filenames + + """ + filenames = [] + for frame in range(int(start_frame), int(end_frame) + 1): + filename = f"{instance.name}__tyPart_{frame:05}.tyc" + filenames.append(filename) + return filenames + + def get_export_particles_job_args(self, members, start, end, + filepath, additional_attributes): + """Sets up all job arguments for attributes. + + Those attributes are to be exported in MAX Script. + + Args: + members (list): Member nodes of the instance. + start (int): Start frame. + end (int): End frame. + filepath (str): Output path of the TyCache file. + additional_attributes (dict): channel attributes data + which needed to be exported + + Returns: + list of arguments for MAX Script. + + """ + settings = { + "exportMode": 2, + "frameStart": start, + "frameEnd": end, + "tyCacheFilename": filepath.replace("\\", "/") + } + settings.update(additional_attributes) + + job_args = [] + for operator in self.get_operators(members): + for key, value in settings.items(): + if isinstance(value, str): + # embed in quotes + value = f'"{value}"' + + job_args.append(f"{operator}.{key}={value}") + job_args.append(f"{operator}.exportTyCache()") + return job_args + + @staticmethod + def get_operators(members): + """Get Export Particles Operator. + + Args: + members (list): Instance members. 
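For reference, the job-argument assembly in `get_export_particles_job_args` above boils down to plain string formatting that is later passed to `rt.Execute`. A minimal, standalone sketch of that assembly follows; the operator path, frame range and file path are hypothetical sample values, not taken from a real scene:

```python
# Illustrative only: mirrors how ExtractTyCache turns its settings dict into
# MAXScript statements. The operator path and file path are hypothetical.

def build_tycache_job_args(operator, start, end, filepath, extra_attrs=None):
    settings = {
        "exportMode": 2,                      # tyCache export mode
        "frameStart": start,
        "frameEnd": end,
        "tyCacheFilename": filepath.replace("\\", "/"),
    }
    settings.update(extra_attrs or {})

    job_args = []
    for key, value in settings.items():
        if isinstance(value, str):
            value = f'"{value}"'              # embed string values in quotes
        job_args.append(f"{operator}.{key}={value}")
    job_args.append(f"{operator}.exportTyCache()")
    return job_args


# Example output for one hypothetical "Export Particles" operator:
#   $tyFlow001.Event001.export_particles.exportMode=2
#   $tyFlow001.Event001.export_particles.frameStart=1001
#   $tyFlow001.Event001.export_particles.frameEnd=1050
#   $tyFlow001.Event001.export_particles.tyCacheFilename="C:/temp/tycacheMain.tyc"
#   $tyFlow001.Event001.export_particles.exportTyCache()
print("\n".join(build_tycache_job_args(
    "$tyFlow001.Event001.export_particles", 1001, 1050,
    "C:\\temp\\tycacheMain.tyc")))
```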
+ + Returns: + list of particle operators + + """ + opt_list = [] + for member in members: + obj = member.baseobject + # TODO: see if it can use maxscript instead + anim_names = rt.GetSubAnimNames(obj) + for anim_name in anim_names: + sub_anim = rt.GetSubAnim(obj, anim_name) + boolean = rt.IsProperty(sub_anim, "Export_Particles") + if boolean: + event_name = sub_anim.Name + opt = f"${member.Name}.{event_name}.export_particles" + opt_list.append(opt) + + return opt_list diff --git a/openpype/hosts/max/plugins/publish/validate_animation_timeline.py b/openpype/hosts/max/plugins/publish/validate_animation_timeline.py deleted file mode 100644 index 2a9483c763..0000000000 --- a/openpype/hosts/max/plugins/publish/validate_animation_timeline.py +++ /dev/null @@ -1,48 +0,0 @@ -import pyblish.api - -from pymxs import runtime as rt -from openpype.pipeline.publish import ( - RepairAction, - ValidateContentsOrder, - PublishValidationError -) -from openpype.hosts.max.api.lib import get_frame_range, set_timeline - - -class ValidateAnimationTimeline(pyblish.api.InstancePlugin): - """ - Validates Animation Timeline for Preview Animation in Max - """ - - label = "Animation Timeline for Review" - order = ValidateContentsOrder - families = ["review"] - hosts = ["max"] - actions = [RepairAction] - - def process(self, instance): - frame_range = get_frame_range() - frame_start_handle = frame_range["frameStart"] - int( - frame_range["handleStart"] - ) - frame_end_handle = frame_range["frameEnd"] + int( - frame_range["handleEnd"] - ) - if rt.animationRange.start != frame_start_handle or ( - rt.animationRange.end != frame_end_handle - ): - raise PublishValidationError("Incorrect animation timeline " - "set for preview animation.. " - "\nYou can use repair action to " - "the correct animation timeline") - - @classmethod - def repair(cls, instance): - frame_range = get_frame_range() - frame_start_handle = frame_range["frameStart"] - int( - frame_range["handleStart"] - ) - frame_end_handle = frame_range["frameEnd"] + int( - frame_range["handleEnd"] - ) - set_timeline(frame_start_handle, frame_end_handle) diff --git a/openpype/hosts/max/plugins/publish/validate_frame_range.py b/openpype/hosts/max/plugins/publish/validate_frame_range.py index 21e847405e..0e8316e844 100644 --- a/openpype/hosts/max/plugins/publish/validate_frame_range.py +++ b/openpype/hosts/max/plugins/publish/validate_frame_range.py @@ -7,8 +7,10 @@ from openpype.pipeline import ( from openpype.pipeline.publish import ( RepairAction, ValidateContentsOrder, - PublishValidationError + PublishValidationError, + KnownPublishError ) +from openpype.hosts.max.api.lib import get_frame_range, set_timeline class ValidateFrameRange(pyblish.api.InstancePlugin, @@ -27,38 +29,60 @@ class ValidateFrameRange(pyblish.api.InstancePlugin, label = "Validate Frame Range" order = ValidateContentsOrder - families = ["maxrender"] + families = ["camera", "maxrender", + "pointcache", "pointcloud", + "review", "redshiftproxy"] hosts = ["max"] optional = True actions = [RepairAction] def process(self, instance): if not self.is_active(instance.data): - self.log.info("Skipping validation...") + self.log.debug("Skipping Validate Frame Range...") return - context = instance.context - frame_start = int(context.data.get("frameStart")) - frame_end = int(context.data.get("frameEnd")) - - inst_frame_start = int(instance.data.get("frameStart")) - inst_frame_end = int(instance.data.get("frameEnd")) + frame_range = get_frame_range( + asset_doc=instance.data["assetEntity"]) + inst_frame_start 
= instance.data.get("frameStartHandle") + inst_frame_end = instance.data.get("frameEndHandle") + if inst_frame_start is None or inst_frame_end is None: + raise KnownPublishError( + "Missing frame start and frame end on " + "instance to to validate." + ) + frame_start_handle = frame_range["frameStartHandle"] + frame_end_handle = frame_range["frameEndHandle"] errors = [] - if frame_start != inst_frame_start: + if frame_start_handle != inst_frame_start: errors.append( f"Start frame ({inst_frame_start}) on instance does not match " # noqa - f"with the start frame ({frame_start}) set on the asset data. ") # noqa - if frame_end != inst_frame_end: + f"with the start frame ({frame_start_handle}) set on the asset data. ") # noqa + if frame_end_handle != inst_frame_end: errors.append( f"End frame ({inst_frame_end}) on instance does not match " - f"with the end frame ({frame_start}) from the asset data. ") + f"with the end frame ({frame_end_handle}) " + "from the asset data. ") if errors: - errors.append("You can use repair action to fix it.") - raise PublishValidationError("\n".join(errors)) + bullet_point_errors = "\n".join( + "- {}".format(error) for error in errors + ) + report = ( + "Frame range settings are incorrect.\n\n" + f"{bullet_point_errors}\n\n" + "You can use repair action to fix it." + ) + raise PublishValidationError(report, title="Frame Range incorrect") @classmethod def repair(cls, instance): - rt.rendStart = instance.context.data.get("frameStart") - rt.rendEnd = instance.context.data.get("frameEnd") + frame_range = get_frame_range() + frame_start_handle = frame_range["frameStartHandle"] + frame_end_handle = frame_range["frameEndHandle"] + + if instance.data["family"] == "maxrender": + rt.rendStart = frame_start_handle + rt.rendEnd = frame_end_handle + else: + set_timeline(frame_start_handle, frame_end_handle) diff --git a/openpype/hosts/max/plugins/publish/validate_no_max_content.py b/openpype/hosts/max/plugins/publish/validate_instance_has_members.py similarity index 52% rename from openpype/hosts/max/plugins/publish/validate_no_max_content.py rename to openpype/hosts/max/plugins/publish/validate_instance_has_members.py index 73e12e75c9..3c0039d5e0 100644 --- a/openpype/hosts/max/plugins/publish/validate_no_max_content.py +++ b/openpype/hosts/max/plugins/publish/validate_instance_has_members.py @@ -1,21 +1,24 @@ # -*- coding: utf-8 -*- import pyblish.api from openpype.pipeline import PublishValidationError -from pymxs import runtime as rt -class ValidateMaxContents(pyblish.api.InstancePlugin): - """Validates Max contents. +class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): + """Validates Instance has members. - Check if MaxScene container includes any contents underneath. + Check if MaxScene containers includes any contents underneath. """ order = pyblish.api.ValidatorOrder families = ["camera", + "model", "maxScene", - "review"] + "review", + "pointcache", + "pointcloud", + "redshiftproxy"] hosts = ["max"] - label = "Max Scene Contents" + label = "Container Contents" def process(self, instance): if not instance.data["members"]: diff --git a/openpype/hosts/max/plugins/publish/validate_pointcloud.py b/openpype/hosts/max/plugins/publish/validate_pointcloud.py index 295a23f1f6..54d6d0f11a 100644 --- a/openpype/hosts/max/plugins/publish/validate_pointcloud.py +++ b/openpype/hosts/max/plugins/publish/validate_pointcloud.py @@ -14,29 +14,16 @@ class ValidatePointCloud(pyblish.api.InstancePlugin): def process(self, instance): """ Notes: - - 1. 
Validate the container only include tyFlow objects - 2. Validate if tyFlow operator Export Particle exists - 3. Validate if the export mode of Export Particle is at PRT format - 4. Validate the partition count and range set as default value + 1. Validate if the export mode of Export Particle is at PRT format + 2. Validate the partition count and range set as default value Partition Count : 100 Partition Range : 1 to 1 - 5. Validate if the custom attribute(s) exist as parameter(s) + 3. Validate if the custom attribute(s) exist as parameter(s) of export_particle operator """ report = [] - invalid_object = self.get_tyflow_object(instance) - if invalid_object: - report.append(f"Non tyFlow object found: {invalid_object}") - - invalid_operator = self.get_tyflow_operator(instance) - if invalid_operator: - report.append(( - "tyFlow ExportParticle operator not " - f"found: {invalid_operator}")) - if self.validate_export_mode(instance): report.append("The export mode is not at PRT") @@ -52,46 +39,6 @@ class ValidatePointCloud(pyblish.api.InstancePlugin): if report: raise PublishValidationError(f"{report}") - def get_tyflow_object(self, instance): - invalid = [] - container = instance.data["instance_node"] - self.log.info(f"Validating tyFlow container for {container}") - - selection_list = instance.data["members"] - for sel in selection_list: - sel_tmp = str(sel) - if rt.ClassOf(sel) in [rt.tyFlow, - rt.Editable_Mesh]: - if "tyFlow" not in sel_tmp: - invalid.append(sel) - else: - invalid.append(sel) - - return invalid - - def get_tyflow_operator(self, instance): - invalid = [] - container = instance.data["instance_node"] - self.log.info(f"Validating tyFlow object for {container}") - selection_list = instance.data["members"] - bool_list = [] - for sel in selection_list: - obj = sel.baseobject - anim_names = rt.GetSubAnimNames(obj) - for anim_name in anim_names: - # get all the names of the related tyFlow nodes - sub_anim = rt.GetSubAnim(obj, anim_name) - # check if there is export particle operator - boolean = rt.IsProperty(sub_anim, "Export_Particles") - bool_list.append(str(boolean)) - # if the export_particles property is not there - # it means there is not a "Export Particle" operator - if "True" not in bool_list: - self.log.error("Operator 'Export Particles' not found!") - invalid.append(sel) - - return invalid - def validate_custom_attribute(self, instance): invalid = [] container = instance.data["instance_node"] @@ -100,8 +47,8 @@ class ValidatePointCloud(pyblish.api.InstancePlugin): selection_list = instance.data["members"] - project_setting = instance.data["project_setting"] - attr_settings = project_setting["max"]["PointCloud"]["attribute"] + project_settings = instance.context.data["project_settings"] + attr_settings = project_settings["max"]["PointCloud"]["attribute"] for sel in selection_list: obj = sel.baseobject anim_names = rt.GetSubAnimNames(obj) diff --git a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py index 5ac41b10a0..7d91a7b991 100644 --- a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py +++ b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py @@ -21,7 +21,7 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin, if not self.is_active(instance.data): return width, height = self.get_db_resolution(instance) - current_width = rt.renderwidth + current_width = rt.renderWidth current_height = rt.renderHeight if current_width != width and current_height != 
height: raise PublishValidationError("Resolution Setting " diff --git a/openpype/hosts/max/plugins/publish/validate_tyflow_data.py b/openpype/hosts/max/plugins/publish/validate_tyflow_data.py new file mode 100644 index 0000000000..c0f29422ec --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_tyflow_data.py @@ -0,0 +1,88 @@ +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateTyFlowData(pyblish.api.InstancePlugin): + """Validate TyFlow plugins or relevant operators are set correctly.""" + + order = pyblish.api.ValidatorOrder + families = ["pointcloud", "tycache"] + hosts = ["max"] + label = "TyFlow Data" + + def process(self, instance): + """ + Notes: + 1. Validate the container only include tyFlow objects + 2. Validate if tyFlow operator Export Particle exists + + """ + + invalid_object = self.get_tyflow_object(instance) + if invalid_object: + self.log.error(f"Non tyFlow object found: {invalid_object}") + + invalid_operator = self.get_tyflow_operator(instance) + if invalid_operator: + self.log.error( + "Operator 'Export Particles' not found in tyFlow editor.") + if invalid_object or invalid_operator: + raise PublishValidationError( + "issues occurred", + description="Container should only include tyFlow object " + "and tyflow operator 'Export Particle' should be in " + "the tyFlow editor.") + + def get_tyflow_object(self, instance): + """Get the nodes which are not tyFlow object(s) + and editable mesh(es) + + Args: + instance (pyblish.api.Instance): instance + + Returns: + list: invalid nodes which are not tyFlow + object(s) and editable mesh(es). + """ + container = instance.data["instance_node"] + self.log.debug(f"Validating tyFlow container for {container}") + + allowed_classes = [rt.tyFlow, rt.Editable_Mesh] + return [ + member for member in instance.data["members"] + if rt.ClassOf(member) not in allowed_classes + ] + + def get_tyflow_operator(self, instance): + """Check if the Export Particle Operators in the node + connections. + + Args: + instance (str): instance node + + Returns: + invalid(list): list of invalid nodes which do + not consist of Export Particle Operators as parts + of the node connections + """ + invalid = [] + members = instance.data["members"] + for member in members: + obj = member.baseobject + + # There must be at least one animation with export + # particles enabled + has_export_particles = False + anim_names = rt.GetSubAnimNames(obj) + for anim_name in anim_names: + # get name of the related tyFlow node + sub_anim = rt.GetSubAnim(obj, anim_name) + # check if there is export particle operator + if rt.IsProperty(sub_anim, "Export_Particles"): + has_export_particles = True + break + + if not has_export_particles: + invalid.append(member) + return invalid diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 510d4ecc85..7c49c837e9 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -146,6 +146,10 @@ def suspended_refresh(suspend=True): cmds.ogs(pause=True) is a toggle so we cant pass False. 
""" + if IS_HEADLESS: + yield + return + original_state = cmds.ogs(query=True, pause=True) try: if suspend and not original_state: diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index 38d7ae08c1..6b791c9665 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -95,6 +95,8 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): self.log.info("Installing callbacks ... ") register_event_callback("init", on_init) + _set_project() + if lib.IS_HEADLESS: self.log.info(( "Running in headless mode, skipping Maya save/open/new" @@ -103,7 +105,6 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): return - _set_project() self._register_callbacks() menu.install(project_settings) diff --git a/openpype/hosts/maya/hooks/pre_copy_mel.py b/openpype/hosts/maya/hooks/pre_copy_mel.py index 0fb5af149a..6cd2c69e20 100644 --- a/openpype/hosts/maya/hooks/pre_copy_mel.py +++ b/openpype/hosts/maya/hooks/pre_copy_mel.py @@ -7,7 +7,7 @@ class PreCopyMel(PreLaunchHook): Hook `GlobalHostDataHook` must be executed before this hook. """ - app_groups = {"maya"} + app_groups = {"maya", "mayapy"} launch_types = {LaunchTypes.local} def execute(self): diff --git a/openpype/hosts/maya/plugins/create/create_multishot_layout.py b/openpype/hosts/maya/plugins/create/create_multishot_layout.py new file mode 100644 index 0000000000..0b027c02ea --- /dev/null +++ b/openpype/hosts/maya/plugins/create/create_multishot_layout.py @@ -0,0 +1,211 @@ +from ayon_api import ( + get_folder_by_name, + get_folder_by_path, + get_folders, +) +from maya import cmds # noqa: F401 + +from openpype import AYON_SERVER_ENABLED +from openpype.client import get_assets +from openpype.hosts.maya.api import plugin +from openpype.lib import BoolDef, EnumDef, TextDef +from openpype.pipeline import ( + Creator, + get_current_asset_name, + get_current_project_name, +) +from openpype.pipeline.create import CreatorError + + +class CreateMultishotLayout(plugin.MayaCreator): + """Create a multi-shot layout in the Maya scene. + + This creator will create a Camera Sequencer in the Maya scene based on + the shots found under the specified folder. The shots will be added to + the sequencer in the order of their clipIn and clipOut values. For each + shot a Layout will be created. + + """ + identifier = "io.openpype.creators.maya.multishotlayout" + label = "Multi-shot Layout" + family = "layout" + icon = "project-diagram" + + def get_pre_create_attr_defs(self): + # Present artist with a list of parents of the current context + # to choose from. This will be used to get the shots under the + # selected folder to create the Camera Sequencer. + + """ + Todo: `get_folder_by_name` should be switched to `get_folder_by_path` + once the fork to pure AYON is done. + + Warning: this will not work for projects where the asset name + is not unique across the project until the switch mentioned + above is done. 
+ """ + + current_folder = get_folder_by_name( + project_name=get_current_project_name(), + folder_name=get_current_asset_name(), + ) + + current_path_parts = current_folder["path"].split("/") + + # populate the list with parents of the current folder + # this will create menu items like: + # [ + # { + # "value": "", + # "label": "project (shots directly under the project)" + # }, { + # "value": "shots/shot_01", "label": "shot_01 (current)" + # }, { + # "value": "shots", "label": "shots" + # } + # ] + + # add the project as the first item + items_with_label = [ + { + "label": f"{self.project_name} " + "(shots directly under the project)", + "value": "" + } + ] + + # go through the current folder path and add each part to the list, + # but mark the current folder. + for part_idx, part in enumerate(current_path_parts): + label = part + if label == current_folder["name"]: + label = f"{label} (current)" + + value = "/".join(current_path_parts[:part_idx + 1]) + + items_with_label.append({"label": label, "value": value}) + + return [ + EnumDef("shotParent", + default=current_folder["name"], + label="Shot Parent Folder", + items=items_with_label, + ), + BoolDef("groupLoadedAssets", + label="Group Loaded Assets", + tooltip="Enable this when you want to publish group of " + "loaded asset", + default=False), + TextDef("taskName", + label="Associated Task Name", + tooltip=("Task name to be associated " + "with the created Layout"), + default="layout"), + ] + + def create(self, subset_name, instance_data, pre_create_data): + shots = list( + self.get_related_shots(folder_path=pre_create_data["shotParent"]) + ) + if not shots: + # There are no shot folders under the specified folder. + # We are raising an error here but in the future we might + # want to create a new shot folders by publishing the layouts + # and shot defined in the sequencer. Sort of editorial publish + # in side of Maya. + raise CreatorError(( + "No shots found under the specified " + f"folder: {pre_create_data['shotParent']}.")) + + # Get layout creator + layout_creator_id = "io.openpype.creators.maya.layout" + layout_creator: Creator = self.create_context.creators.get( + layout_creator_id) + if not layout_creator: + raise CreatorError( + f"Creator {layout_creator_id} not found.") + + # Get OpenPype style asset documents for the shots + op_asset_docs = get_assets( + self.project_name, [s["id"] for s in shots]) + asset_docs_by_id = {doc["_id"]: doc for doc in op_asset_docs} + for shot in shots: + # we are setting shot name to be displayed in the sequencer to + # `shot name (shot label)` if the label is set, otherwise just + # `shot name`. So far, labels are used only when the name is set + # with characters that are not allowed in the shot name. 
+ if not shot["active"]: + continue + + # get task for shot + asset_doc = asset_docs_by_id[shot["id"]] + + tasks = asset_doc.get("data").get("tasks").keys() + layout_task = None + if pre_create_data["taskName"] in tasks: + layout_task = pre_create_data["taskName"] + + shot_name = f"{shot['name']}%s" % ( + f" ({shot['label']})" if shot["label"] else "") + cmds.shot(sequenceStartTime=shot["attrib"]["clipIn"], + sequenceEndTime=shot["attrib"]["clipOut"], + shotName=shot_name) + + # Create layout instance by the layout creator + + instance_data = { + "asset": shot["name"], + "variant": layout_creator.get_default_variant() + } + if layout_task: + instance_data["task"] = layout_task + + layout_creator.create( + subset_name=layout_creator.get_subset_name( + layout_creator.get_default_variant(), + self.create_context.get_current_task_name(), + asset_doc, + self.project_name), + instance_data=instance_data, + pre_create_data={ + "groupLoadedAssets": pre_create_data["groupLoadedAssets"] + } + ) + + def get_related_shots(self, folder_path: str): + """Get all shots related to the current asset. + + Get all folders of type Shot under specified folder. + + Args: + folder_path (str): Path of the folder. + + Returns: + list: List of dicts with folder data. + + """ + # if folder_path is None, project is selected as a root + # and its name is used as a parent id + parent_id = self.project_name + if folder_path: + current_folder = get_folder_by_path( + project_name=self.project_name, + folder_path=folder_path, + ) + parent_id = current_folder["id"] + + # get all child folders of the current one + return get_folders( + project_name=self.project_name, + parent_ids=[parent_id], + fields=[ + "attrib.clipIn", "attrib.clipOut", + "attrib.frameStart", "attrib.frameEnd", + "name", "label", "path", "folderType", "id" + ] + ) + + +# blast this creator if Ayon server is not enabled +if not AYON_SERVER_ENABLED: + del CreateMultishotLayout diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index 74fcb58d29..635c2c425c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """Maya look extractor.""" +import sys from abc import ABCMeta, abstractmethod from collections import OrderedDict import contextlib @@ -176,6 +177,24 @@ class MakeRSTexBin(TextureProcessor): source ] + # if color management is enabled we pass color space information + if color_management["enabled"]: + config_path = color_management["config"] + if not os.path.exists(config_path): + raise RuntimeError("OCIO config not found at: " + "{}".format(config_path)) + + if not os.getenv("OCIO"): + self.log.debug( + "OCIO environment variable not set." + "Setting it with OCIO config from Maya." 
+ ) + os.environ["OCIO"] = config_path + + self.log.debug("converting colorspace {0} to redshift render " + "colorspace".format(colorspace)) + subprocess_args.extend(["-cs", colorspace]) + hash_args = ["rstex"] texture_hash = source_hash(source, *hash_args) @@ -186,11 +205,11 @@ class MakeRSTexBin(TextureProcessor): self.log.debug(" ".join(subprocess_args)) try: - run_subprocess(subprocess_args) + run_subprocess(subprocess_args, logger=self.log) except Exception: self.log.error("Texture .rstexbin conversion failed", exc_info=True) - raise + six.reraise(*sys.exc_info()) return TextureResult( path=destination, @@ -472,7 +491,7 @@ class ExtractLook(publish.Extractor): "rstex": MakeRSTexBin }.items(): if instance.data.get(key, False): - processor = Processor() + processor = Processor(log=self.log) processor.apply_settings(context.data["system_settings"], context.data["project_settings"]) processors.append(processor) diff --git a/openpype/hosts/maya/plugins/publish/validate_resolution.py b/openpype/hosts/maya/plugins/publish/validate_resolution.py new file mode 100644 index 0000000000..91b473b250 --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_resolution.py @@ -0,0 +1,117 @@ +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from maya import cmds +from openpype.pipeline.publish import RepairAction +from openpype.hosts.maya.api import lib +from openpype.hosts.maya.api.lib import reset_scene_resolution + + +class ValidateResolution(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate the render resolution setting aligned with DB""" + + order = pyblish.api.ValidatorOrder + families = ["renderlayer"] + hosts = ["maya"] + label = "Validate Resolution" + actions = [RepairAction] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + invalid = self.get_invalid_resolution(instance) + if invalid: + raise PublishValidationError( + "Render resolution is invalid. See log for details.", + description=( + "Wrong render resolution setting. " + "Please use repair button to fix it.\n\n" + "If current renderer is V-Ray, " + "make sure vraySettings node has been created." 
+ ) + ) + + @classmethod + def get_invalid_resolution(cls, instance): + width, height, pixelAspect = cls.get_db_resolution(instance) + current_renderer = instance.data["renderer"] + layer = instance.data["renderlayer"] + invalid = False + if current_renderer == "vray": + vray_node = "vraySettings" + if cmds.objExists(vray_node): + current_width = lib.get_attr_in_layer( + "{}.width".format(vray_node), layer=layer) + current_height = lib.get_attr_in_layer( + "{}.height".format(vray_node), layer=layer) + current_pixelAspect = lib.get_attr_in_layer( + "{}.pixelAspect".format(vray_node), layer=layer + ) + else: + cls.log.error( + "Can't detect VRay resolution because there is no node " + "named: `{}`".format(vray_node) + ) + return True + else: + current_width = lib.get_attr_in_layer( + "defaultResolution.width", layer=layer) + current_height = lib.get_attr_in_layer( + "defaultResolution.height", layer=layer) + current_pixelAspect = lib.get_attr_in_layer( + "defaultResolution.pixelAspect", layer=layer + ) + if current_width != width or current_height != height: + cls.log.error( + "Render resolution {}x{} does not match " + "asset resolution {}x{}".format( + current_width, current_height, + width, height + )) + invalid = True + if current_pixelAspect != pixelAspect: + cls.log.error( + "Render pixel aspect {} does not match " + "asset pixel aspect {}".format( + current_pixelAspect, pixelAspect + )) + invalid = True + return invalid + + @classmethod + def get_db_resolution(cls, instance): + asset_doc = instance.data["assetEntity"] + project_doc = instance.context.data["projectEntity"] + for data in [asset_doc["data"], project_doc["data"]]: + if ( + "resolutionWidth" in data and + "resolutionHeight" in data and + "pixelAspect" in data + ): + width = data["resolutionWidth"] + height = data["resolutionHeight"] + pixelAspect = data["pixelAspect"] + return int(width), int(height), float(pixelAspect) + + # Defaults if not found in asset document or project document + return 1920, 1080, 1.0 + + @classmethod + def repair(cls, instance): + # Usually without renderlayer overrides the renderlayers + # all share the same resolution value - so fixing the first + # will have fixed all the others too. 
It's much faster to + # check whether it's invalid first instead of switching + # into all layers individually + if not cls.get_invalid_resolution(instance): + cls.log.debug( + "Nothing to repair on instance: {}".format(instance) + ) + return + layer_node = instance.data['setMembers'] + with lib.renderlayer(layer_node): + reset_scene_resolution() diff --git a/openpype/hosts/nuke/api/__init__.py b/openpype/hosts/nuke/api/__init__.py index 1af5ff365d..c6ccd0baf1 100644 --- a/openpype/hosts/nuke/api/__init__.py +++ b/openpype/hosts/nuke/api/__init__.py @@ -50,6 +50,11 @@ from .utils import ( get_colorspace_list ) +from .actions import ( + SelectInvalidAction, + SelectInstanceNodeAction +) + __all__ = ( "file_extensions", "has_unsaved_changes", @@ -92,5 +97,8 @@ __all__ = ( "create_write_node", "colorspace_exists_on_node", - "get_colorspace_list" + "get_colorspace_list", + + "SelectInvalidAction", + "SelectInstanceNodeAction" ) diff --git a/openpype/hosts/nuke/api/actions.py b/openpype/hosts/nuke/api/actions.py index c955a85acc..995e6427af 100644 --- a/openpype/hosts/nuke/api/actions.py +++ b/openpype/hosts/nuke/api/actions.py @@ -20,33 +20,58 @@ class SelectInvalidAction(pyblish.api.Action): def process(self, context, plugin): - try: - import nuke - except ImportError: - raise ImportError("Current host is not Nuke") - errored_instances = get_errored_instances_from_context(context, plugin=plugin) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") - invalid = list() + invalid = set() for instance in errored_instances: invalid_nodes = plugin.get_invalid(instance) if invalid_nodes: if isinstance(invalid_nodes, (list, tuple)): - invalid.append(invalid_nodes[0]) + invalid.update(invalid_nodes) else: self.log.warning("Plug-in returned to be invalid, " "but has no selectable nodes.") - # Ensure unique (process each node only once) - invalid = list(set(invalid)) - if invalid: self.log.info("Selecting invalid nodes: {}".format(invalid)) reset_selection() select_nodes(invalid) else: self.log.info("No invalid nodes found.") + + +class SelectInstanceNodeAction(pyblish.api.Action): + """Select instance node for failed plugin.""" + label = "Select instance node" + on = "failed" # This action is only available on a failed plug-in + icon = "mdi.cursor-default-click" + + def process(self, context, plugin): + + # Get the errored instances for the plug-in + errored_instances = get_errored_instances_from_context( + context, plugin) + + # Get the invalid nodes for the plug-ins + self.log.info("Finding instance nodes..") + nodes = set() + for instance in errored_instances: + instance_node = instance.data.get("transientData", {}).get("node") + if not instance_node: + raise RuntimeError( + "No transientData['node'] found on instance: {}".format( + instance + ) + ) + nodes.add(instance_node) + + if nodes: + self.log.info("Selecting instance nodes: {}".format(nodes)) + reset_selection() + select_nodes(nodes) + else: + self.log.info("No instance nodes found.") diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 390545b806..734a565541 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -48,20 +48,15 @@ from openpype.pipeline import ( get_current_asset_name, ) from openpype.pipeline.context_tools import ( - get_current_project_asset, get_custom_workfile_template_from_session ) -from openpype.pipeline.colorspace import ( - get_imageio_config -) +from openpype.pipeline.colorspace import get_imageio_config from 
openpype.pipeline.workfile import BuildWorkfile from . import gizmo_menu from .constants import ASSIST -from .workio import ( - save_file, - open_file -) +from .workio import save_file +from .utils import get_node_outputs log = Logger.get_logger(__name__) @@ -2222,7 +2217,6 @@ Reopening Nuke should synchronize these paths and resolve any discrepancies. """ # replace path with env var if possible ocio_path = self._replace_ocio_path_with_env_var(config_data) - ocio_path = ocio_path.replace("\\", "/") log.info("Setting OCIO config path to: `{}`".format( ocio_path)) @@ -2802,16 +2796,28 @@ def find_free_space_to_paste_nodes( @contextlib.contextmanager -def maintained_selection(): +def maintained_selection(exclude_nodes=None): """Maintain selection during context + Maintain selection during context and unselect + all nodes after context is done. + + Arguments: + exclude_nodes (list[nuke.Node]): list of nodes to be unselected + before context is done + Example: >>> with maintained_selection(): ... node["selected"].setValue(True) >>> print(node["selected"].value()) False """ + if exclude_nodes: + for node in exclude_nodes: + node["selected"].setValue(False) + previous_selection = nuke.selectedNodes() + try: yield finally: @@ -2823,6 +2829,51 @@ def maintained_selection(): select_nodes(previous_selection) +@contextlib.contextmanager +def swap_node_with_dependency(old_node, new_node): + """ Swap node with dependency + + Swap node with dependency and reconnect all inputs and outputs. + It removes old node. + + Arguments: + old_node (nuke.Node): node to be replaced + new_node (nuke.Node): node to replace with + + Example: + >>> old_node_name = old_node["name"].value() + >>> print(old_node_name) + old_node_name_01 + >>> with swap_node_with_dependency(old_node, new_node) as node_name: + ... 
new_node["name"].setValue(node_name) + >>> print(new_node["name"].value()) + old_node_name_01 + """ + # preserve position + xpos, ypos = old_node.xpos(), old_node.ypos() + # preserve selection after all is done + outputs = get_node_outputs(old_node) + inputs = old_node.dependencies() + node_name = old_node["name"].value() + + try: + nuke.delete(old_node) + + yield node_name + finally: + + # Reconnect inputs + for i, node in enumerate(inputs): + new_node.setInput(i, node) + # Reconnect outputs + if outputs: + for n, pipes in outputs.items(): + for i in pipes: + n.setInput(i, new_node) + # return to original position + new_node.setXYpos(xpos, ypos) + + def reset_selection(): """Deselect all selected nodes""" for node in nuke.selectedNodes(): @@ -2833,9 +2884,10 @@ def select_nodes(nodes): """Selects all inputted nodes Arguments: - nodes (list): nuke nodes to be selected + nodes (Union[list, tuple, set]): nuke nodes to be selected """ - assert isinstance(nodes, (list, tuple)), "nodes has to be list or tuple" + assert isinstance(nodes, (list, tuple, set)), \ + "nodes has to be list, tuple or set" for node in nodes: node["selected"].setValue(True) @@ -2919,13 +2971,13 @@ def process_workfile_builder(): "workfile_builder", {}) # get settings - createfv_on = workfile_builder.get("create_first_version") or None + create_fv_on = workfile_builder.get("create_first_version") or None builder_on = workfile_builder.get("builder_on_start") or None last_workfile_path = os.environ.get("AVALON_LAST_WORKFILE") # generate first version in file not existing and feature is enabled - if createfv_on and not os.path.exists(last_workfile_path): + if create_fv_on and not os.path.exists(last_workfile_path): # get custom template path if any custom_template_path = get_custom_workfile_template_from_session( project_settings=project_settings diff --git a/openpype/hosts/nuke/plugins/load/actions.py b/openpype/hosts/nuke/plugins/load/actions.py index 3227a7ed98..635318f53d 100644 --- a/openpype/hosts/nuke/plugins/load/actions.py +++ b/openpype/hosts/nuke/plugins/load/actions.py @@ -17,7 +17,7 @@ class SetFrameRangeLoader(load.LoaderPlugin): "yeticache", "pointcache"] representations = ["*"] - extension = {"*"} + extensions = {"*"} label = "Set frame range" order = 11 diff --git a/openpype/hosts/nuke/plugins/load/load_backdrop.py b/openpype/hosts/nuke/plugins/load/load_backdrop.py index fe82d70b5e..0cbd380697 100644 --- a/openpype/hosts/nuke/plugins/load/load_backdrop.py +++ b/openpype/hosts/nuke/plugins/load/load_backdrop.py @@ -27,7 +27,7 @@ class LoadBackdropNodes(load.LoaderPlugin): families = ["workfile", "nukenodes"] representations = ["*"] - extension = {"nk"} + extensions = {"nk"} label = "Import Nuke Nodes" order = 0 diff --git a/openpype/hosts/nuke/plugins/load/load_camera_abc.py b/openpype/hosts/nuke/plugins/load/load_camera_abc.py index 2939ceebae..e245b0cb5e 100644 --- a/openpype/hosts/nuke/plugins/load/load_camera_abc.py +++ b/openpype/hosts/nuke/plugins/load/load_camera_abc.py @@ -26,7 +26,7 @@ class AlembicCameraLoader(load.LoaderPlugin): families = ["camera"] representations = ["*"] - extension = {"abc"} + extensions = {"abc"} label = "Load Alembic Camera" icon = "camera" diff --git a/openpype/hosts/nuke/plugins/load/load_effects.py b/openpype/hosts/nuke/plugins/load/load_effects.py index 89597e76cc..cacc00854e 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects.py +++ b/openpype/hosts/nuke/plugins/load/load_effects.py @@ -24,7 +24,7 @@ class LoadEffects(load.LoaderPlugin): families = ["effect"] 
representations = ["*"] - extension = {"json"} + extensions = {"json"} label = "Load Effects - nodes" order = 0 diff --git a/openpype/hosts/nuke/plugins/load/load_effects_ip.py b/openpype/hosts/nuke/plugins/load/load_effects_ip.py index efe67be4aa..bdf3cd6965 100644 --- a/openpype/hosts/nuke/plugins/load/load_effects_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_effects_ip.py @@ -25,7 +25,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin): families = ["effect"] representations = ["*"] - extension = {"json"} + extensions = {"json"} label = "Load Effects - Input Process" order = 0 diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo.py b/openpype/hosts/nuke/plugins/load/load_gizmo.py index 6b848ee276..23cf4d7741 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo.py @@ -12,7 +12,8 @@ from openpype.pipeline import ( from openpype.hosts.nuke.api.lib import ( maintained_selection, get_avalon_knob_data, - set_avalon_knob_data + set_avalon_knob_data, + swap_node_with_dependency, ) from openpype.hosts.nuke.api import ( containerise, @@ -26,7 +27,7 @@ class LoadGizmo(load.LoaderPlugin): families = ["gizmo"] representations = ["*"] - extension = {"gizmo"} + extensions = {"nk"} label = "Load Gizmo" order = 0 @@ -45,7 +46,7 @@ class LoadGizmo(load.LoaderPlugin): data (dict): compulsory attribute > not used Returns: - nuke node: containerised nuke node object + nuke node: containerized nuke node object """ # get main variables @@ -83,12 +84,12 @@ class LoadGizmo(load.LoaderPlugin): # add group from nk nuke.nodePaste(file) - GN = nuke.selectedNode() + group_node = nuke.selectedNode() - GN["name"].setValue(object_name) + group_node["name"].setValue(object_name) return containerise( - node=GN, + node=group_node, name=name, namespace=namespace, context=context, @@ -110,7 +111,7 @@ class LoadGizmo(load.LoaderPlugin): version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node - GN = nuke.toNode(container['objectName']) + group_node = nuke.toNode(container['objectName']) file = get_representation_path(representation).replace("\\", "/") name = container['name'] @@ -135,22 +136,24 @@ class LoadGizmo(load.LoaderPlugin): for k in add_keys: data_imprint.update({k: version_data[k]}) + # capture pipeline metadata + avalon_data = get_avalon_knob_data(group_node) + # adding nodes to node graph # just in case we are in group lets jump out of it nuke.endGroup() - with maintained_selection(): - xpos = GN.xpos() - ypos = GN.ypos() - avalon_data = get_avalon_knob_data(GN) - nuke.delete(GN) - # add group from nk + with maintained_selection([group_node]): + # insert nuke script to the script nuke.nodePaste(file) - - GN = nuke.selectedNode() - set_avalon_knob_data(GN, avalon_data) - GN.setXYpos(xpos, ypos) - GN["name"].setValue(object_name) + # convert imported to selected node + new_group_node = nuke.selectedNode() + # swap nodes with maintained connections + with swap_node_with_dependency( + group_node, new_group_node) as node_name: + new_group_node["name"].setValue(node_name) + # set updated pipeline metadata + set_avalon_knob_data(new_group_node, avalon_data) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] @@ -161,11 +164,12 @@ class LoadGizmo(load.LoaderPlugin): color_value = self.node_color else: color_value = "0xd88467ff" - GN["tile_color"].setValue(int(color_value, 16)) + + new_group_node["tile_color"].setValue(int(color_value, 16)) self.log.info("updated to 
version: {}".format(version_doc.get("name"))) - return update_container(GN, data_imprint) + return update_container(new_group_node, data_imprint) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py index a8e1218cbe..ce0a1615f1 100644 --- a/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py +++ b/openpype/hosts/nuke/plugins/load/load_gizmo_ip.py @@ -14,7 +14,8 @@ from openpype.hosts.nuke.api.lib import ( maintained_selection, create_backdrop, get_avalon_knob_data, - set_avalon_knob_data + set_avalon_knob_data, + swap_node_with_dependency, ) from openpype.hosts.nuke.api import ( containerise, @@ -28,7 +29,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): families = ["gizmo"] representations = ["*"] - extension = {"gizmo"} + extensions = {"nk"} label = "Load Gizmo - Input Process" order = 0 @@ -47,7 +48,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): data (dict): compulsory attribute > not used Returns: - nuke node: containerised nuke node object + nuke node: containerized nuke node object """ # get main variables @@ -85,17 +86,17 @@ class LoadGizmoInputProcess(load.LoaderPlugin): # add group from nk nuke.nodePaste(file) - GN = nuke.selectedNode() + group_node = nuke.selectedNode() - GN["name"].setValue(object_name) + group_node["name"].setValue(object_name) # try to place it under Viewer1 - if not self.connect_active_viewer(GN): - nuke.delete(GN) + if not self.connect_active_viewer(group_node): + nuke.delete(group_node) return return containerise( - node=GN, + node=group_node, name=name, namespace=namespace, context=context, @@ -117,7 +118,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin): version_doc = get_version_by_id(project_name, representation["parent"]) # get corresponding node - GN = nuke.toNode(container['objectName']) + group_node = nuke.toNode(container['objectName']) file = get_representation_path(representation).replace("\\", "/") name = container['name'] @@ -142,22 +143,24 @@ class LoadGizmoInputProcess(load.LoaderPlugin): for k in add_keys: data_imprint.update({k: version_data[k]}) + # capture pipeline metadata + avalon_data = get_avalon_knob_data(group_node) + # adding nodes to node graph # just in case we are in group lets jump out of it nuke.endGroup() - with maintained_selection(): - xpos = GN.xpos() - ypos = GN.ypos() - avalon_data = get_avalon_knob_data(GN) - nuke.delete(GN) - # add group from nk + with maintained_selection([group_node]): + # insert nuke script to the script nuke.nodePaste(file) - - GN = nuke.selectedNode() - set_avalon_knob_data(GN, avalon_data) - GN.setXYpos(xpos, ypos) - GN["name"].setValue(object_name) + # convert imported to selected node + new_group_node = nuke.selectedNode() + # swap nodes with maintained connections + with swap_node_with_dependency( + group_node, new_group_node) as node_name: + new_group_node["name"].setValue(node_name) + # set updated pipeline metadata + set_avalon_knob_data(new_group_node, avalon_data) last_version_doc = get_last_version_by_subset_id( project_name, version_doc["parent"], fields=["_id"] @@ -168,11 +171,11 @@ class LoadGizmoInputProcess(load.LoaderPlugin): color_value = self.node_color else: color_value = "0xd88467ff" - GN["tile_color"].setValue(int(color_value, 16)) + new_group_node["tile_color"].setValue(int(color_value, 16)) self.log.info("updated to version: {}".format(version_doc.get("name"))) - return update_container(GN, data_imprint) + return 
update_container(new_group_node, data_imprint) def connect_active_viewer(self, group_node): """ diff --git a/openpype/hosts/nuke/plugins/load/load_image.py b/openpype/hosts/nuke/plugins/load/load_image.py index 0dd3a940db..6bffb97e6f 100644 --- a/openpype/hosts/nuke/plugins/load/load_image.py +++ b/openpype/hosts/nuke/plugins/load/load_image.py @@ -204,8 +204,6 @@ class LoadImage(load.LoaderPlugin): last = first = int(frame_number) # Set the global in to the start frame of the sequence - read_name = self._get_node_name(representation) - node["name"].setValue(read_name) node["file"].setValue(file) node["origfirst"].setValue(first) node["first"].setValue(first) diff --git a/openpype/hosts/nuke/plugins/load/load_matchmove.py b/openpype/hosts/nuke/plugins/load/load_matchmove.py index f942422c00..14ddf20dc3 100644 --- a/openpype/hosts/nuke/plugins/load/load_matchmove.py +++ b/openpype/hosts/nuke/plugins/load/load_matchmove.py @@ -9,7 +9,7 @@ class MatchmoveLoader(load.LoaderPlugin): families = ["matchmove"] representations = ["*"] - extension = {"py"} + extensions = {"py"} defaults = ["Camera", "Object"] diff --git a/openpype/hosts/nuke/plugins/load/load_model.py b/openpype/hosts/nuke/plugins/load/load_model.py index 0bdcd93dff..b9b8a0f4c0 100644 --- a/openpype/hosts/nuke/plugins/load/load_model.py +++ b/openpype/hosts/nuke/plugins/load/load_model.py @@ -24,7 +24,7 @@ class AlembicModelLoader(load.LoaderPlugin): families = ["model", "pointcache", "animation"] representations = ["*"] - extension = {"abc"} + extensions = {"abc"} label = "Load Alembic" icon = "cube" diff --git a/openpype/hosts/nuke/plugins/load/load_ociolook.py b/openpype/hosts/nuke/plugins/load/load_ociolook.py new file mode 100644 index 0000000000..18c8cdba35 --- /dev/null +++ b/openpype/hosts/nuke/plugins/load/load_ociolook.py @@ -0,0 +1,350 @@ +import os +import json +import secrets +import nuke +import six + +from openpype.client import ( + get_version_by_id, + get_last_version_by_subset_id +) +from openpype.pipeline import ( + load, + get_current_project_name, + get_representation_path, +) +from openpype.hosts.nuke.api import ( + containerise, + viewer_update_and_undo_stop, + update_container, +) + + +class LoadOcioLookNodes(load.LoaderPlugin): + """Loading Ocio look to the nuke.Node graph""" + + families = ["ociolook"] + representations = ["*"] + extensions = {"json"} + + label = "Load OcioLook [nodes]" + order = 0 + icon = "cc" + color = "white" + ignore_attr = ["useLifetime"] + + # plugin attributes + current_node_color = "0x4ecd91ff" + old_node_color = "0xd88467ff" + + # json file variables + schema_version = 1 + + def load(self, context, name, namespace, data): + """ + Loading function to get the soft effects to particular read node + + Arguments: + context (dict): context of version + name (str): name of the version + namespace (str): asset name + data (dict): compulsory attribute > not used + + Returns: + nuke.Node: containerized nuke.Node object + """ + namespace = namespace or context['asset']['name'] + suffix = secrets.token_hex(nbytes=4) + object_name = "{}_{}_{}".format( + name, namespace, suffix) + + # getting file path + filepath = self.filepath_from_context(context) + + json_f = self._load_json_data(filepath) + + group_node = self._create_group_node( + object_name, filepath, json_f["data"]) + + self._node_version_color(context["version"], group_node) + + self.log.info( + "Loaded lut setup: `{}`".format(group_node["name"].value())) + + return containerise( + node=group_node, + name=name, + namespace=namespace, 
+ context=context, + loader=self.__class__.__name__, + data={ + "objectName": object_name, + } + ) + + def _create_group_node( + self, + object_name, + filepath, + data + ): + """Creates group node with all the nodes inside. + + Creating mainly `OCIOFileTransform` nodes with `OCIOColorSpace` nodes + in between - in case those are needed. + + Arguments: + object_name (str): name of the group node + filepath (str): path to json file + data (dict): data from json file + + Returns: + nuke.Node: group node with all the nodes inside + """ + # get corresponding node + + root_working_colorspace = nuke.root()["workingSpaceLUT"].value() + + dir_path = os.path.dirname(filepath) + all_files = os.listdir(dir_path) + + ocio_working_colorspace = _colorspace_name_by_type( + data["ocioLookWorkingSpace"]) + + # adding nodes to node graph + # just in case we are in group lets jump out of it + nuke.endGroup() + + input_node = None + output_node = None + group_node = nuke.toNode(object_name) + if group_node: + # remove all nodes between Input and Output nodes + for node in group_node.nodes(): + if node.Class() not in ["Input", "Output"]: + nuke.delete(node) + elif node.Class() == "Input": + input_node = node + elif node.Class() == "Output": + output_node = node + else: + group_node = nuke.createNode( + "Group", + "name {}_1".format(object_name), + inpanel=False + ) + + # adding content to the group node + with group_node: + pre_colorspace = root_working_colorspace + + # reusing input node if it exists during update + if input_node: + pre_node = input_node + else: + pre_node = nuke.createNode("Input") + pre_node["name"].setValue("rgb") + + # Compare script working colorspace with ocio working colorspace + # found in json file and convert to json's if needed + if pre_colorspace != ocio_working_colorspace: + pre_node = _add_ocio_colorspace_node( + pre_node, + pre_colorspace, + ocio_working_colorspace + ) + pre_colorspace = ocio_working_colorspace + + for ocio_item in data["ocioLookItems"]: + input_space = _colorspace_name_by_type( + ocio_item["input_colorspace"]) + output_space = _colorspace_name_by_type( + ocio_item["output_colorspace"]) + + # making sure we are set to correct colorspace for otio item + if pre_colorspace != input_space: + pre_node = _add_ocio_colorspace_node( + pre_node, + pre_colorspace, + input_space + ) + + node = nuke.createNode("OCIOFileTransform") + + # file path from lut representation + extension = ocio_item["ext"] + item_name = ocio_item["name"] + + item_lut_file = next( + ( + file for file in all_files + if file.endswith(extension) + ), + None + ) + if not item_lut_file: + raise ValueError( + "File with extension '{}' not " + "found in directory".format(extension) + ) + + item_lut_path = os.path.join( + dir_path, item_lut_file).replace("\\", "/") + node["file"].setValue(item_lut_path) + node["name"].setValue(item_name) + node["direction"].setValue(ocio_item["direction"]) + node["interpolation"].setValue(ocio_item["interpolation"]) + node["working_space"].setValue(input_space) + + pre_node.autoplace() + node.setInput(0, pre_node) + node.autoplace() + # pass output space into pre_colorspace for next iteration + # or for output node comparison + pre_colorspace = output_space + pre_node = node + + # making sure we are back in script working colorspace + if pre_colorspace != root_working_colorspace: + pre_node = _add_ocio_colorspace_node( + pre_node, + pre_colorspace, + root_working_colorspace + ) + + # reusing output node if it exists during update + if not output_node: + output = 
nuke.createNode("Output") + else: + output = output_node + + output.setInput(0, pre_node) + + return group_node + + def update(self, container, representation): + + project_name = get_current_project_name() + version_doc = get_version_by_id(project_name, representation["parent"]) + + object_name = container['objectName'] + + filepath = get_representation_path(representation) + + json_f = self._load_json_data(filepath) + + group_node = self._create_group_node( + object_name, + filepath, + json_f["data"] + ) + + self._node_version_color(version_doc, group_node) + + self.log.info("Updated lut setup: `{}`".format( + group_node["name"].value())) + + return update_container( + group_node, {"representation": str(representation["_id"])}) + + def _load_json_data(self, filepath): + # getting data from json file with unicode conversion + with open(filepath, "r") as _file: + json_f = {self._bytify(key): self._bytify(value) + for key, value in json.load(_file).items()} + + # check if the version in json_f is the same as plugin version + if json_f["version"] != self.schema_version: + raise KeyError( + "Version of json file is not the same as plugin version") + + return json_f + + def _bytify(self, input): + """ + Converts unicode strings to strings + It goes through all dictionary + + Arguments: + input (dict/str): input + + Returns: + dict: with fixed values and keys + + """ + + if isinstance(input, dict): + return {self._bytify(key): self._bytify(value) + for key, value in input.items()} + elif isinstance(input, list): + return [self._bytify(element) for element in input] + elif isinstance(input, six.text_type): + return str(input) + else: + return input + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + node = nuke.toNode(container['objectName']) + with viewer_update_and_undo_stop(): + nuke.delete(node) + + def _node_version_color(self, version, node): + """ Coloring a node by correct color by actual version""" + + project_name = get_current_project_name() + last_version_doc = get_last_version_by_subset_id( + project_name, version["parent"], fields=["_id"] + ) + + # change color of node + if version["_id"] == last_version_doc["_id"]: + color_value = self.current_node_color + else: + color_value = self.old_node_color + node["tile_color"].setValue(int(color_value, 16)) + + +def _colorspace_name_by_type(colorspace_data): + """ + Returns colorspace name by type + + Arguments: + colorspace_data (dict): colorspace data + + Returns: + str: colorspace name + """ + if colorspace_data["type"] == "colorspaces": + return colorspace_data["name"] + elif colorspace_data["type"] == "roles": + return colorspace_data["colorspace"] + else: + raise KeyError("Unknown colorspace type: {}".format( + colorspace_data["type"])) + + +def _add_ocio_colorspace_node(pre_node, input_space, output_space): + """ + Adds OCIOColorSpace node to the node graph + + Arguments: + pre_node (nuke.Node): node to connect to + input_space (str): input colorspace + output_space (str): output colorspace + + Returns: + nuke.Node: node with OCIOColorSpace node + """ + node = nuke.createNode("OCIOColorSpace") + node.setInput(0, pre_node) + node["in_colorspace"].setValue(input_space) + node["out_colorspace"].setValue(output_space) + + pre_node.autoplace() + node.setInput(0, pre_node) + node.autoplace() + + return node diff --git a/openpype/hosts/nuke/plugins/load/load_script_precomp.py b/openpype/hosts/nuke/plugins/load/load_script_precomp.py index 48d4a0900a..d5f9d24765 100644 
--- a/openpype/hosts/nuke/plugins/load/load_script_precomp.py +++ b/openpype/hosts/nuke/plugins/load/load_script_precomp.py @@ -22,7 +22,7 @@ class LinkAsGroup(load.LoaderPlugin): families = ["workfile", "nukenodes"] representations = ["*"] - extension = {"nk"} + extensions = {"nk"} label = "Load Precomp" order = 0 diff --git a/openpype/hosts/nuke/plugins/publish/collect_backdrop.py b/openpype/hosts/nuke/plugins/publish/collect_backdrop.py index 7d51af7e9e..d04c1204e3 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/collect_backdrop.py @@ -57,4 +57,4 @@ class CollectBackdrops(pyblish.api.InstancePlugin): if version: instance.data['version'] = version - self.log.info("Backdrop instance collected: `{}`".format(instance)) + self.log.debug("Backdrop instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_context_data.py b/openpype/hosts/nuke/plugins/publish/collect_context_data.py index f1b4965205..b85e924f55 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_context_data.py +++ b/openpype/hosts/nuke/plugins/publish/collect_context_data.py @@ -64,4 +64,4 @@ class CollectContextData(pyblish.api.ContextPlugin): context.data["scriptData"] = script_data context.data.update(script_data) - self.log.info('Context from Nuke script collected') + self.log.debug('Context from Nuke script collected') diff --git a/openpype/hosts/nuke/plugins/publish/collect_gizmo.py b/openpype/hosts/nuke/plugins/publish/collect_gizmo.py index e3c40a7a90..c410de7c32 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_gizmo.py +++ b/openpype/hosts/nuke/plugins/publish/collect_gizmo.py @@ -43,4 +43,4 @@ class CollectGizmo(pyblish.api.InstancePlugin): "frameStart": first_frame, "frameEnd": last_frame }) - self.log.info("Gizmo instance collected: `{}`".format(instance)) + self.log.debug("Gizmo instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_model.py b/openpype/hosts/nuke/plugins/publish/collect_model.py index 3fdf376d0c..a099f06be0 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_model.py +++ b/openpype/hosts/nuke/plugins/publish/collect_model.py @@ -43,4 +43,4 @@ class CollectModel(pyblish.api.InstancePlugin): "frameStart": first_frame, "frameEnd": last_frame }) - self.log.info("Model instance collected: `{}`".format(instance)) + self.log.debug("Model instance collected: `{}`".format(instance)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py index c7d65ffd24..3baa0cd9b5 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_slate_node.py +++ b/openpype/hosts/nuke/plugins/publish/collect_slate_node.py @@ -39,7 +39,7 @@ class CollectSlate(pyblish.api.InstancePlugin): instance.data["slateNode"] = slate_node instance.data["slate"] = True instance.data["families"].append("slate") - self.log.info( + self.log.debug( "Slate node is in node graph: `{}`".format(slate.name())) self.log.debug( "__ instance.data: `{}`".format(instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/collect_workfile.py b/openpype/hosts/nuke/plugins/publish/collect_workfile.py index 852042e6e9..0f03572f8b 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_workfile.py +++ b/openpype/hosts/nuke/plugins/publish/collect_workfile.py @@ -37,4 +37,6 @@ class CollectWorkfile(pyblish.api.InstancePlugin): # adding basic script data instance.data.update(script_data) - self.log.info("Collect 
script version") + self.log.debug( + "Collected current script version: {}".format(current_file) + ) diff --git a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py index 5166fa4b2c..2a6a5dee2a 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/extract_backdrop.py @@ -56,8 +56,6 @@ class ExtractBackdropNode(publish.Extractor): # connect output node for n, output in connections_out.items(): opn = nuke.createNode("Output") - self.log.info(n.name()) - self.log.info(output.name()) output.setInput( next((i for i, d in enumerate(output.dependencies()) if d.name() in n.name()), 0), opn) @@ -102,5 +100,5 @@ class ExtractBackdropNode(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '{}' to: {}".format( + self.log.debug("Extracted instance '{}' to: {}".format( instance.name, path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_camera.py b/openpype/hosts/nuke/plugins/publish/extract_camera.py index 33df6258ae..3ec85c1f11 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_camera.py +++ b/openpype/hosts/nuke/plugins/publish/extract_camera.py @@ -36,11 +36,8 @@ class ExtractCamera(publish.Extractor): step = 1 output_range = str(nuke.FrameRange(first_frame, last_frame, step)) - self.log.info("instance.data: `{}`".format( - pformat(instance.data))) - rm_nodes = [] - self.log.info("Crating additional nodes") + self.log.debug("Creating additional nodes for 3D Camera Extractor") subset = instance.data["subset"] staging_dir = self.staging_dir(instance) @@ -84,8 +81,6 @@ class ExtractCamera(publish.Extractor): for n in rm_nodes: nuke.delete(n) - self.log.info(file_path) - # create representation data if "representations" not in instance.data: instance.data["representations"] = [] @@ -112,7 +107,7 @@ class ExtractCamera(publish.Extractor): "frameEndHandle": last_frame, }) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, file_path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py index b0b1a9f7b7..ecec0d6f80 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_gizmo.py +++ b/openpype/hosts/nuke/plugins/publish/extract_gizmo.py @@ -85,8 +85,5 @@ class ExtractGizmo(publish.Extractor): } instance.data["representations"].append(representation) - self.log.info("Extracted instance '{}' to: {}".format( + self.log.debug("Extracted instance '{}' to: {}".format( instance.name, path)) - - self.log.info("Data {}".format( - instance.data)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_model.py b/openpype/hosts/nuke/plugins/publish/extract_model.py index 00462f8035..a8b37fb173 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_model.py +++ b/openpype/hosts/nuke/plugins/publish/extract_model.py @@ -33,13 +33,13 @@ class ExtractModel(publish.Extractor): first_frame = int(nuke.root()["first_frame"].getValue()) last_frame = int(nuke.root()["last_frame"].getValue()) - self.log.info("instance.data: `{}`".format( + self.log.debug("instance.data: `{}`".format( pformat(instance.data))) rm_nodes = [] model_node = instance.data["transientData"]["node"] - self.log.info("Crating additional nodes") + self.log.debug("Creating additional nodes for Extract Model") subset = instance.data["subset"] staging_dir = self.staging_dir(instance) @@ -76,7 +76,7 @@ class 
ExtractModel(publish.Extractor): for n in rm_nodes: nuke.delete(n) - self.log.info(file_path) + self.log.debug("Filepath: {}".format(file_path)) # create representation data if "representations" not in instance.data: @@ -104,5 +104,5 @@ class ExtractModel(publish.Extractor): "frameEndHandle": last_frame, }) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, file_path)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py b/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py index e66cfd9018..3fe1443bb3 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py +++ b/openpype/hosts/nuke/plugins/publish/extract_ouput_node.py @@ -27,7 +27,7 @@ class CreateOutputNode(pyblish.api.ContextPlugin): if active_node: active_node = active_node.pop() - self.log.info(active_node) + self.log.debug("Active node: {}".format(active_node)) active_node['selected'].setValue(True) # select only instance render node diff --git a/openpype/hosts/nuke/plugins/publish/extract_render_local.py b/openpype/hosts/nuke/plugins/publish/extract_render_local.py index e2cf2addc5..ff04367e20 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_render_local.py +++ b/openpype/hosts/nuke/plugins/publish/extract_render_local.py @@ -119,7 +119,7 @@ class NukeRenderLocal(publish.Extractor, instance.data["representations"].append(repre) - self.log.info("Extracted instance '{0}' to: {1}".format( + self.log.debug("Extracted instance '{0}' to: {1}".format( instance.name, out_dir )) @@ -143,7 +143,7 @@ class NukeRenderLocal(publish.Extractor, instance.data["families"] = families collections, remainder = clique.assemble(filenames) - self.log.info('collections: {}'.format(str(collections))) + self.log.debug('collections: {}'.format(str(collections))) if collections: collection = collections[0] diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py index 2a26ed82fb..b007f90f6c 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py @@ -20,7 +20,7 @@ class ExtractReviewDataLut(publish.Extractor): hosts = ["nuke"] def process(self, instance): - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" in instance.data: staging_dir = instance.data[ "representations"][0]["stagingDir"].replace("\\", "/") @@ -33,7 +33,7 @@ class ExtractReviewDataLut(publish.Extractor): staging_dir = os.path.normpath(os.path.dirname(render_path)) instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) # generate data diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py index 9730e3b61f..3ee166eb56 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_intermediates.py @@ -52,7 +52,7 @@ class ExtractReviewIntermediates(publish.Extractor): task_type = instance.context.data["taskType"] subset = instance.data["subset"] - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" not in instance.data: instance.data["representations"] = [] @@ -62,10 +62,10 @@ class ExtractReviewIntermediates(publish.Extractor): 
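The Nuke extractors above all finish by appending a representation and then logging the result at debug level; a minimal sketch (hypothetical values, assuming the usual OpenPype representation keys) of what gets appended:

```python
# Minimal sketch, hypothetical values; inside an Extractor's process()
# the `instance` object is supplied by pyblish.
representation = {
    "name": "abc",
    "ext": "abc",
    "files": "modelMain.abc",          # a single file or a list of files
    "stagingDir": "/tmp/publish_staging",
}
# instance.data.setdefault("representations", []).append(representation)
```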
instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) - self.log.info(self.outputs) + self.log.debug("Outputs: {}".format(self.outputs)) # generate data with maintained_selection(): @@ -104,9 +104,10 @@ class ExtractReviewIntermediates(publish.Extractor): re.search(s, subset) for s in f_subsets): continue - self.log.info( + self.log.debug( "Baking output `{}` with settings: {}".format( - o_name, o_data)) + o_name, o_data) + ) # check if settings have more then one preset # so we dont need to add outputName to representation @@ -155,10 +156,10 @@ class ExtractReviewIntermediates(publish.Extractor): instance.data["useSequenceForReview"] = False else: instance.data["families"].remove("review") - self.log.info(( + self.log.debug( "Removing `review` from families. " "Not available baking profile." - )) + ) self.log.debug(instance.data["families"]) self.log.debug( diff --git a/openpype/hosts/nuke/plugins/publish/extract_script_save.py b/openpype/hosts/nuke/plugins/publish/extract_script_save.py index 0c8e561fd7..e44e5686b6 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_script_save.py +++ b/openpype/hosts/nuke/plugins/publish/extract_script_save.py @@ -3,13 +3,12 @@ import pyblish.api class ExtractScriptSave(pyblish.api.Extractor): - """ - """ + """Save current Nuke workfile script""" label = 'Script Save' order = pyblish.api.Extractor.order - 0.1 hosts = ['nuke'] def process(self, instance): - self.log.info('saving script') + self.log.debug('Saving current script') nuke.scriptSave() diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index 25262a7418..7befb7b7f3 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -48,7 +48,7 @@ class ExtractSlateFrame(publish.Extractor): if instance.data.get("bakePresets"): for o_name, o_data in instance.data["bakePresets"].items(): - self.log.info("_ o_name: {}, o_data: {}".format( + self.log.debug("_ o_name: {}, o_data: {}".format( o_name, pformat(o_data))) self.render_slate( instance, @@ -65,14 +65,14 @@ class ExtractSlateFrame(publish.Extractor): def _create_staging_dir(self, instance): - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") staging_dir = os.path.normpath( os.path.dirname(instance.data["path"])) instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) def _check_frames_exists(self, instance): @@ -275,10 +275,10 @@ class ExtractSlateFrame(publish.Extractor): break if not matching_repre: - self.log.info(( - "Matching reresentaion was not found." + self.log.info( + "Matching reresentation was not found." " Representation files were not filled with slate." 
- )) + ) return # Add frame to matching representation files @@ -345,7 +345,7 @@ class ExtractSlateFrame(publish.Extractor): try: node[key].setValue(value) - self.log.info("Change key \"{}\" to value \"{}\"".format( + self.log.debug("Change key \"{}\" to value \"{}\"".format( key, value )) except NameError: diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index b20df4ffe2..de7567c1b1 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -8,6 +8,7 @@ from openpype.hosts.nuke import api as napi from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings +# Python 2/3 compatibility if sys.version_info[0] >= 3: unicode = str @@ -45,11 +46,12 @@ class ExtractThumbnail(publish.Extractor): for o_name, o_data in instance.data["bakePresets"].items(): self.render_thumbnail(instance, o_name, **o_data) else: - viewer_process_swithes = { + viewer_process_switches = { "bake_viewer_process": True, "bake_viewer_input_process": True } - self.render_thumbnail(instance, None, **viewer_process_swithes) + self.render_thumbnail( + instance, None, **viewer_process_switches) def render_thumbnail(self, instance, output_name=None, **kwargs): first_frame = instance.data["frameStartHandle"] @@ -61,15 +63,13 @@ class ExtractThumbnail(publish.Extractor): # solve output name if any is set output_name = output_name or "" - if output_name: - output_name = "_" + output_name bake_viewer_process = kwargs["bake_viewer_process"] bake_viewer_input_process_node = kwargs[ "bake_viewer_input_process"] node = instance.data["transientData"]["node"] # group node - self.log.info("Creating staging dir...") + self.log.debug("Creating staging dir...") if "representations" not in instance.data: instance.data["representations"] = [] @@ -79,7 +79,7 @@ class ExtractThumbnail(publish.Extractor): instance.data["stagingDir"] = staging_dir - self.log.info( + self.log.debug( "StagingDir `{0}`...".format(instance.data["stagingDir"])) temporary_nodes = [] @@ -166,26 +166,42 @@ class ExtractThumbnail(publish.Extractor): previous_node = dag_node temporary_nodes.append(dag_node) + thumb_name = "thumbnail" + # only add output name and + # if there are more than one bake preset + if ( + output_name + and len(instance.data.get("bakePresets", {}).keys()) > 1 + ): + thumb_name = "{}_{}".format(output_name, thumb_name) + # create write node write_node = nuke.createNode("Write") - file = fhead[:-1] + output_name + ".jpg" - name = "thumbnail" - path = os.path.join(staging_dir, file).replace("\\", "/") - instance.data["thumbnail"] = path - write_node["file"].setValue(path) + file = fhead[:-1] + thumb_name + ".jpg" + thumb_path = os.path.join(staging_dir, file).replace("\\", "/") + + # add thumbnail to cleanup + instance.context.data["cleanupFullPaths"].append(thumb_path) + + # make sure only one thumbnail path is set + # and it is existing file + instance_thumb_path = instance.data.get("thumbnailPath") + if not instance_thumb_path or not os.path.isfile(instance_thumb_path): + instance.data["thumbnailPath"] = thumb_path + + write_node["file"].setValue(thumb_path) write_node["file_type"].setValue("jpg") write_node["raw"].setValue(1) write_node.setInput(0, previous_node) temporary_nodes.append(write_node) - tags = ["thumbnail", "publish_on_farm"] repre = { - 'name': name, + 'name': thumb_name, 'ext': "jpg", - "outputName": "thumb", + "outputName": thumb_name, 'files': file, "stagingDir": 
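A short worked example (hypothetical preset names) of the new thumbnail naming logic in `ExtractThumbnail`:

```python
# Hypothetical bake presets to illustrate the naming rule.
bake_presets = {"baking": {}, "h264": {}}
output_name = "h264"

thumb_name = "thumbnail"
if output_name and len(bake_presets) > 1:
    thumb_name = "{}_{}".format(output_name, thumb_name)

# -> "h264_thumbnail"; with a single preset the name stays "thumbnail".
# instance.data["thumbnailPath"] is only set when it is empty or no longer
# points to an existing file, and the rendered thumbnail is queued for
# cleanup via context.data["cleanupFullPaths"].
```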
staging_dir, - "tags": tags + "tags": ["thumbnail", "publish_on_farm", "delete"] } instance.data["representations"].append(repre) diff --git a/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml b/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml new file mode 100644 index 0000000000..d9394ae510 --- /dev/null +++ b/openpype/hosts/nuke/plugins/publish/help/validate_asset_context.xml @@ -0,0 +1,31 @@ + + + + Shot/Asset name + +## Publishing to a different asset context + +There are publish instances present which are publishing into a different asset than your current context. + +Usually this is not what you want but there can be cases where you might want to publish into another asset/shot or task. + +If that's the case you can disable the validation on the instance to ignore it. + +The wrong node's name is: \`{node_name}\` + +### Correct context keys and values: + +\`{correct_values}\` + +### Wrong keys and values: + +\`{wrong_values}\`. + + +## How to repair? + +1. Use \"Repair\" button. +2. Hit Reload button on the publisher. + + + diff --git a/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml b/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml deleted file mode 100644 index 0422917e9c..0000000000 --- a/openpype/hosts/nuke/plugins/publish/help/validate_asset_name.xml +++ /dev/null @@ -1,18 +0,0 @@ - - - - Shot/Asset name - -## Invalid Shot/Asset name in subset - -Following Node with name `{node_name}`: -Is in context of `{correct_name}` but Node _asset_ knob is set as `{wrong_name}`. - -### How to repair? - -1. Either use Repair or Select button. -2. If you chose Select then rename asset knob to correct name. -3. Hit Reload button on the publisher. - - - \ No newline at end of file diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_context.py b/openpype/hosts/nuke/plugins/publish/validate_asset_context.py new file mode 100644 index 0000000000..731645a11c --- /dev/null +++ b/openpype/hosts/nuke/plugins/publish/validate_asset_context.py @@ -0,0 +1,112 @@ +# -*- coding: utf-8 -*- +"""Validate if instance asset is the same as context asset.""" +from __future__ import absolute_import + +import pyblish.api + +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, + PublishXmlValidationError, + OptionalPyblishPluginMixin +) +from openpype.hosts.nuke.api import SelectInstanceNodeAction + + +class ValidateCorrectAssetContext( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): + """Validator to check if instance asset context match context asset. + + When working in per-shot style you always publish data in context of + current asset (shot). This validator checks if this is so. It is optional + so it can be disabled when needed. + + Checking `asset` and `task` keys. + """ + order = ValidateContentsOrder + label = "Validate asset context" + hosts = ["nuke"] + actions = [ + RepairAction, + SelectInstanceNodeAction + ] + optional = True + + @classmethod + def apply_settings(cls, project_settings): + """Apply deprecated settings from project settings. 
+ """ + nuke_publish = project_settings["nuke"]["publish"] + if "ValidateCorrectAssetName" in nuke_publish: + settings = nuke_publish["ValidateCorrectAssetName"] + else: + settings = nuke_publish["ValidateCorrectAssetContext"] + + cls.enabled = settings["enabled"] + cls.optional = settings["optional"] + cls.active = settings["active"] + + def process(self, instance): + if not self.is_active(instance.data): + return + + invalid_keys = self.get_invalid(instance) + + if not invalid_keys: + return + + message_values = { + "node_name": instance.data["transientData"]["node"].name(), + "correct_values": ", ".join([ + "{} > {}".format(_key, instance.context.data[_key]) + for _key in invalid_keys + ]), + "wrong_values": ", ".join([ + "{} > {}".format(_key, instance.data.get(_key)) + for _key in invalid_keys + ]) + } + + msg = ( + "Instance `{node_name}` has wrong context keys:\n" + "Correct: `{correct_values}` | Wrong: `{wrong_values}`").format( + **message_values) + + self.log.debug(msg) + + raise PublishXmlValidationError( + self, msg, formatting_data=message_values + ) + + @classmethod + def get_invalid(cls, instance): + """Get invalid keys from instance data and context data.""" + + invalid_keys = [] + testing_keys = ["asset", "task"] + for _key in testing_keys: + if _key not in instance.data: + invalid_keys.append(_key) + continue + if instance.data[_key] != instance.context.data[_key]: + invalid_keys.append(_key) + + return invalid_keys + + @classmethod + def repair(cls, instance): + """Repair instance data with context data.""" + invalid_keys = cls.get_invalid(instance) + + create_context = instance.context.data["create_context"] + + instance_id = instance.data.get("instance_id") + created_instance = create_context.get_instance_by_id( + instance_id + ) + for _key in invalid_keys: + created_instance[_key] = instance.context.data[_key] + + create_context.save_changes() diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py deleted file mode 100644 index df05f76a5b..0000000000 --- a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py +++ /dev/null @@ -1,138 +0,0 @@ -# -*- coding: utf-8 -*- -"""Validate if instance asset is the same as context asset.""" -from __future__ import absolute_import - -import pyblish.api - -import openpype.hosts.nuke.api.lib as nlib - -from openpype.pipeline.publish import ( - ValidateContentsOrder, - PublishXmlValidationError, - OptionalPyblishPluginMixin -) - -class SelectInvalidInstances(pyblish.api.Action): - """Select invalid instances in Outliner.""" - - label = "Select" - icon = "briefcase" - on = "failed" - - def process(self, context, plugin): - """Process invalid validators and select invalid instances.""" - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - - if instances: - self.deselect() - self.log.info( - "Selecting invalid nodes: %s" % ", ".join( - [str(x) for x in instances] - ) - ) - self.select(instances) - else: - self.log.info("No invalid nodes found.") - self.deselect() - - def select(self, instances): - for inst in instances: - if inst.data.get("transientData", {}).get("node"): - select_node = 
inst.data["transientData"]["node"] - select_node["selected"].setValue(True) - - def deselect(self): - nlib.reset_selection() - - -class RepairSelectInvalidInstances(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - # Get the errored instances - failed = [] - for result in context.data["results"]: - if ( - result["error"] is None - or result["instance"] is None - or result["instance"] in failed - or result["plugin"] != plugin - ): - continue - - failed.append(result["instance"]) - - # Apply pyblish.logic to get the instances for the plug-in - instances = pyblish.api.instances_by_plugin(failed, plugin) - self.log.debug(instances) - - context_asset = context.data["assetEntity"]["name"] - for instance in instances: - node = instance.data["transientData"]["node"] - node_data = nlib.get_node_data(node, nlib.INSTANCE_DATA_KNOB) - node_data["asset"] = context_asset - nlib.set_node_data(node, nlib.INSTANCE_DATA_KNOB, node_data) - - -class ValidateCorrectAssetName( - pyblish.api.InstancePlugin, - OptionalPyblishPluginMixin -): - """Validator to check if instance asset match context asset. - - When working in per-shot style you always publish data in context of - current asset (shot). This validator checks if this is so. It is optional - so it can be disabled when needed. - - Action on this validator will select invalid instances in Outliner. - """ - order = ValidateContentsOrder - label = "Validate correct asset name" - hosts = ["nuke"] - actions = [ - SelectInvalidInstances, - RepairSelectInvalidInstances - ] - optional = True - - def process(self, instance): - if not self.is_active(instance.data): - return - - asset = instance.data.get("asset") - context_asset = instance.context.data["assetEntity"]["name"] - node = instance.data["transientData"]["node"] - - msg = ( - "Instance `{}` has wrong shot/asset name:\n" - "Correct: `{}` | Wrong: `{}`").format( - instance.name, asset, context_asset) - - self.log.debug(msg) - - if asset != context_asset: - raise PublishXmlValidationError( - self, msg, formatting_data={ - "node_name": node.name(), - "wrong_name": asset, - "correct_name": context_asset - } - ) diff --git a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py index ad60089952..761b080caa 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py @@ -43,8 +43,8 @@ class SelectCenterInNodeGraph(pyblish.api.Action): all_xC.append(xC) all_yC.append(yC) - self.log.info("all_xC: `{}`".format(all_xC)) - self.log.info("all_yC: `{}`".format(all_yC)) + self.log.debug("all_xC: `{}`".format(all_xC)) + self.log.debug("all_yC: `{}`".format(all_yC)) # zoom to nodes in node graph nuke.zoom(2, [min(all_xC), min(all_yC)]) diff --git a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py index dbcd216a84..ff6d73c6ec 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py +++ b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py @@ -23,7 +23,7 @@ class ValidateOutputResolution( order = pyblish.api.ValidatorOrder optional = True families = ["render"] - label = "Write resolution" + label = "Validate Write resolution" hosts = ["nuke"] actions = [RepairAction] @@ -104,9 +104,9 @@ class ValidateOutputResolution( _rfn["resize"].setValue(0) 
_rfn["black_outside"].setValue(1) - cls.log.info("I am adding reformat node") + cls.log.info("Adding reformat node") if cls.resolution_msg == invalid: reformat = cls.get_reformat(instance) reformat["format"].setValue(nuke.root()["format"].value()) - cls.log.info("I am fixing reformat to root.format") + cls.log.info("Fixing reformat to root.format") diff --git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py index 9a35b61a0e..64bf69b69b 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py +++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py @@ -76,8 +76,8 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin): return collections, remainder = clique.assemble(repre["files"]) - self.log.info("collections: {}".format(str(collections))) - self.log.info("remainder: {}".format(str(remainder))) + self.log.debug("collections: {}".format(str(collections))) + self.log.debug("remainder: {}".format(str(remainder))) collection = collections[0] @@ -103,15 +103,15 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin): coll_start = min(collection.indexes) coll_end = max(collection.indexes) - self.log.info("frame_length: {}".format(frame_length)) - self.log.info("collected_frames_len: {}".format( + self.log.debug("frame_length: {}".format(frame_length)) + self.log.debug("collected_frames_len: {}".format( collected_frames_len)) - self.log.info("f_start_h-f_end_h: {}-{}".format( + self.log.debug("f_start_h-f_end_h: {}-{}".format( f_start_h, f_end_h)) - self.log.info( + self.log.debug( "coll_start-coll_end: {}-{}".format(coll_start, coll_end)) - self.log.info( + self.log.debug( "len(collection.indexes): {}".format(collected_frames_len) ) diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py index 2a925fbeff..9c8bfae388 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py @@ -39,7 +39,7 @@ class RepairNukeWriteNodeAction(pyblish.api.Action): set_node_knobs_from_settings(write_node, correct_data["knobs"]) - self.log.info("Node attributes were fixed") + self.log.debug("Node attributes were fixed") class ValidateNukeWriteNode( @@ -82,12 +82,6 @@ class ValidateNukeWriteNode( correct_data = get_write_node_template_attr(write_group_node) check = [] - self.log.debug("__ write_node: {}".format( - write_node - )) - self.log.debug("__ correct_data: {}".format( - correct_data - )) # Collect key values of same type in a list. 
values_by_name = defaultdict(list) @@ -96,9 +90,6 @@ class ValidateNukeWriteNode( for knob_data in correct_data["knobs"]: knob_type = knob_data["type"] - self.log.debug("__ knob_type: {}".format( - knob_type - )) if ( knob_type == "__legacy__" @@ -134,9 +125,6 @@ class ValidateNukeWriteNode( fixed_values.append(value) - self.log.debug("__ key: {} | values: {}".format( - key, fixed_values - )) if ( node_value not in fixed_values and key != "file" @@ -144,8 +132,6 @@ class ValidateNukeWriteNode( ): check.append([key, value, write_node[key].value()]) - self.log.info(check) - if check: self._make_error(check) diff --git a/openpype/hosts/photoshop/api/extension/extension.zxp b/openpype/hosts/photoshop/api/extension/extension.zxp deleted file mode 100644 index 39b766cd0d..0000000000 Binary files a/openpype/hosts/photoshop/api/extension/extension.zxp and /dev/null differ diff --git a/openpype/hosts/resolve/api/lib.py b/openpype/hosts/resolve/api/lib.py index eaee3bb9ba..37410c9727 100644 --- a/openpype/hosts/resolve/api/lib.py +++ b/openpype/hosts/resolve/api/lib.py @@ -125,15 +125,19 @@ def get_any_timeline(): return project.GetTimelineByIndex(1) -def get_new_timeline(): +def get_new_timeline(timeline_name: str = None): """Get new timeline object. + Arguments: + timeline_name (str): New timeline name. + Returns: object: resolve.Timeline """ project = get_current_project() media_pool = project.GetMediaPool() - new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name) + new_timeline = media_pool.CreateEmptyTimeline( + timeline_name or self.pype_timeline_name) project.SetCurrentTimeline(new_timeline) return new_timeline @@ -179,53 +183,52 @@ def create_bin(name: str, root: object = None) -> object: return media_pool.GetCurrentFolder() -def create_media_pool_item(fpath: str, - root: object = None) -> object: +def remove_media_pool_item(media_pool_item: object) -> bool: + media_pool = get_current_project().GetMediaPool() + return media_pool.DeleteClips([media_pool_item]) + + +def create_media_pool_item( + files: list, + root: object = None, +) -> object: """ Create media pool item. 
Args: - fpath (str): absolute path to a file + files (list[str]): list of absolute paths to files root (resolve.Folder)[optional]: root folder / bin object Returns: object: resolve.MediaPoolItem """ # get all variables - media_storage = get_media_storage() media_pool = get_current_project().GetMediaPool() root_bin = root or media_pool.GetRootFolder() + # make sure files list is not empty and first available file exists + filepath = next((f for f in files if os.path.isfile(f)), None) + if not filepath: + raise FileNotFoundError("No file found in input files list") + # try to search in bin if the clip does not exist - existing_mpi = get_media_pool_item(fpath, root_bin) + existing_mpi = get_media_pool_item(filepath, root_bin) if existing_mpi: return existing_mpi - dirname, file = os.path.split(fpath) - _name, ext = os.path.splitext(file) + # add all data in folder to media pool + media_pool_items = media_pool.ImportMedia(files) - # add all data in folder to mediapool - media_pool_items = media_storage.AddItemListToMediaPool( - os.path.normpath(dirname)) - - if not media_pool_items: - return False - - # if any are added then look into them for the right extension - media_pool_item = [mpi for mpi in media_pool_items - if ext in mpi.GetClipProperty("File Path")] - - # return only first found - return media_pool_item.pop() + return media_pool_items.pop() if media_pool_items else False -def get_media_pool_item(fpath, root: object = None) -> object: +def get_media_pool_item(filepath, root: object = None) -> object: """ Return clip if found in folder with use of input file path. Args: - fpath (str): absolute path to a file + filepath (str): absolute path to a file root (resolve.Folder)[optional]: root folder / bin object Returns: @@ -233,7 +236,7 @@ def get_media_pool_item(fpath, root: object = None) -> object: """ media_pool = get_current_project().GetMediaPool() root = root or media_pool.GetRootFolder() - fname = os.path.basename(fpath) + fname = os.path.basename(filepath) for _mpi in root.GetClipList(): _mpi_name = _mpi.GetClipProperty("File Name") @@ -277,7 +280,6 @@ def create_timeline_item(media_pool_item: object, if source_end is not None: clip_data.update({"endFrame": source_end}) - print(clip_data) # add to timeline media_pool.AppendToTimeline([clip_data]) @@ -394,14 +396,22 @@ def get_current_timeline_items( def get_pype_timeline_item_by_name(name: str) -> object: - track_itmes = get_current_timeline_items() - for _ti in track_itmes: - tag_data = get_timeline_item_pype_tag(_ti["clip"]["item"]) - tag_name = tag_data.get("name") + """Get timeline item by name. 
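A usage sketch of the updated `create_media_pool_item` signature (paths are hypothetical; this only runs inside a Resolve session with a project open):

```python
from openpype.hosts.resolve.api import lib

# Hypothetical file list; the first existing file is used to look up an
# already imported media pool item, otherwise all files are imported.
files = [
    "/proj/sh010/plate/v001/sh010_plate_v001.1001.exr",
    "/proj/sh010/plate/v001/sh010_plate_v001.1002.exr",
]
bin_folder = lib.create_bin("Loader/sh010")
media_pool_item = lib.create_media_pool_item(files, bin_folder)
```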
+ + Args: + name (str): name of timeline item + + Returns: + object: resolve.TimelineItem + """ + for _ti_data in get_current_timeline_items(): + _ti_clip = _ti_data["clip"]["item"] + tag_data = get_timeline_item_pype_tag(_ti_clip) + tag_name = tag_data.get("namespace") if not tag_name: continue - if tag_data.get("name") in name: - return _ti + if tag_name in name: + return _ti_clip return None @@ -544,12 +554,11 @@ def set_pype_marker(timeline_item, tag_data): def get_pype_marker(timeline_item): timeline_item_markers = timeline_item.GetMarkers() - for marker_frame in timeline_item_markers: - note = timeline_item_markers[marker_frame]["note"] - color = timeline_item_markers[marker_frame]["color"] - name = timeline_item_markers[marker_frame]["name"] - print(f"_ marker data: {marker_frame} | {name} | {color} | {note}") + for marker_frame, marker in timeline_item_markers.items(): + color = marker["color"] + name = marker["name"] if name == self.pype_marker_name and color == self.pype_marker_color: + note = marker["note"] self.temp_marker_frame = marker_frame return json.loads(note) @@ -618,7 +627,7 @@ def create_compound_clip(clip_data, name, folder): if c.GetName() in name), None) if cct: - print(f"_ cct exists: {cct}") + print(f"Compound clip exists: {cct}") else: # Create empty timeline in current folder and give name: cct = mp.CreateEmptyTimeline(name) @@ -627,7 +636,7 @@ def create_compound_clip(clip_data, name, folder): clips = folder.GetClipList() cct = next((c for c in clips if c.GetName() in name), None) - print(f"_ cct created: {cct}") + print(f"Compound clip created: {cct}") with maintain_current_timeline(cct, tl_origin): # Add input clip to the current timeline: diff --git a/openpype/hosts/resolve/api/pipeline.py b/openpype/hosts/resolve/api/pipeline.py index 05f556fa5b..93dec300fb 100644 --- a/openpype/hosts/resolve/api/pipeline.py +++ b/openpype/hosts/resolve/api/pipeline.py @@ -127,10 +127,8 @@ def containerise(timeline_item, }) if data: - for k, v in data.items(): - data_imprint.update({k: v}) + data_imprint.update(data) - print("_ data_imprint: {}".format(data_imprint)) lib.set_timeline_item_pype_tag(timeline_item, data_imprint) return timeline_item diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py index e2bd76ffa2..8381f81acb 100644 --- a/openpype/hosts/resolve/api/plugin.py +++ b/openpype/hosts/resolve/api/plugin.py @@ -1,6 +1,5 @@ import re import uuid - import qargparse from qtpy import QtWidgets, QtCore @@ -9,6 +8,7 @@ from openpype.pipeline.context_tools import get_current_project_asset from openpype.pipeline import ( LegacyCreator, LoaderPlugin, + Anatomy ) from . 
import lib @@ -291,20 +291,19 @@ class ClipLoader: active_bin = None data = dict() - def __init__(self, cls, context, path, **options): + def __init__(self, loader_obj, context, **options): """ Initialize object Arguments: - cls (openpype.pipeline.load.LoaderPlugin): plugin object + loader_obj (openpype.pipeline.load.LoaderPlugin): plugin object context (dict): loader plugin context options (dict)[optional]: possible keys: projectBinPath: "path/to/binItem" """ - self.__dict__.update(cls.__dict__) + self.__dict__.update(loader_obj.__dict__) self.context = context self.active_project = lib.get_current_project() - self.fname = path # try to get value from options or evaluate key value for `handles` self.with_handles = options.get("handles") or bool( @@ -319,54 +318,54 @@ class ClipLoader: # inject asset data to representation dict self._get_asset_data() - print("__init__ self.data: `{}`".format(self.data)) # add active components to class if self.new_timeline: - if options.get("timeline"): + loader_cls = loader_obj.__class__ + if loader_cls.timeline: # if multiselection is set then use options sequence - self.active_timeline = options["timeline"] + self.active_timeline = loader_cls.timeline else: # create new sequence - self.active_timeline = ( - lib.get_current_timeline() or - lib.get_new_timeline() + self.active_timeline = lib.get_new_timeline( + "{}_{}".format( + self.data["timeline_basename"], + str(uuid.uuid4())[:8] + ) ) + loader_cls.timeline = self.active_timeline + else: self.active_timeline = lib.get_current_timeline() - cls.timeline = self.active_timeline - def _populate_data(self): """ Gets context and convert it to self.data data structure: { "name": "assetName_subsetName_representationName" - "path": "path/to/file/created/by/get_repr..", "binPath": "projectBinPath", } """ # create name - repr = self.context["representation"] - repr_cntx = repr["context"] - asset = str(repr_cntx["asset"]) - subset = str(repr_cntx["subset"]) - representation = str(repr_cntx["representation"]) - self.data["clip_name"] = "_".join([asset, subset, representation]) + representation = self.context["representation"] + representation_context = representation["context"] + asset = str(representation_context["asset"]) + subset = str(representation_context["subset"]) + representation_name = str(representation_context["representation"]) + self.data["clip_name"] = "_".join([ + asset, + subset, + representation_name + ]) self.data["versionData"] = self.context["version"]["data"] - # gets file path - file = self.fname - if not file: - repr_id = repr["_id"] - print( - "Representation id `{}` is failing to load".format(repr_id)) - return None - self.data["path"] = file.replace("\\", "/") + + self.data["timeline_basename"] = "timeline_{}_{}".format( + subset, representation_name) # solve project bin structure path hierarchy = str("/".join(( "Loader", - repr_cntx["hierarchy"].replace("\\", "/"), + representation_context["hierarchy"].replace("\\", "/"), asset ))) @@ -383,25 +382,24 @@ class ClipLoader: asset_name = self.context["representation"]["context"]["asset"] self.data["assetData"] = get_current_project_asset(asset_name)["data"] - def load(self): + def load(self, files): + """Load clip into timeline + + Arguments: + files (list[str]): list of files to load into timeline + """ # create project bin for the media to be imported into self.active_bin = lib.create_bin(self.data["binPath"]) - # create mediaItem in active project bin - # create clip media + handle_start = self.data["versionData"].get("handleStart") or 0 
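A worked example (hypothetical context values) of the names `ClipLoader._populate_data` now builds; `get_new_timeline` then appends a short `uuid4` suffix to keep new timeline names unique:

```python
import uuid

# Hypothetical representation context values.
asset, subset, representation_name = "sh010", "plateMain", "exr"

clip_name = "_".join([asset, subset, representation_name])
timeline_basename = "timeline_{}_{}".format(subset, representation_name)
timeline_name = "{}_{}".format(timeline_basename, str(uuid.uuid4())[:8])

# clip_name          -> "sh010_plateMain_exr"
# timeline_basename  -> "timeline_plateMain_exr"
# timeline_name      -> e.g. "timeline_plateMain_exr_1a2b3c4d"
```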
+ handle_end = self.data["versionData"].get("handleEnd") or 0 media_pool_item = lib.create_media_pool_item( - self.data["path"], self.active_bin) + files, + self.active_bin + ) _clip_property = media_pool_item.GetClipProperty - # get handles - handle_start = self.data["versionData"].get("handleStart") - handle_end = self.data["versionData"].get("handleEnd") - if handle_start is None: - handle_start = int(self.data["assetData"]["handleStart"]) - if handle_end is None: - handle_end = int(self.data["assetData"]["handleEnd"]) - source_in = int(_clip_property("Start")) source_out = int(_clip_property("End")) @@ -421,14 +419,16 @@ class ClipLoader: print("Loading clips: `{}`".format(self.data["clip_name"])) return timeline_item - def update(self, timeline_item): + def update(self, timeline_item, files): # create project bin for the media to be imported into self.active_bin = lib.create_bin(self.data["binPath"]) # create mediaItem in active project bin # create clip media media_pool_item = lib.create_media_pool_item( - self.data["path"], self.active_bin) + files, + self.active_bin + ) _clip_property = media_pool_item.GetClipProperty source_in = int(_clip_property("Start")) @@ -649,8 +649,6 @@ class PublishClip: # define ui inputs if non gui mode was used self.shot_num = self.ti_index - print( - "____ self.shot_num: {}".format(self.shot_num)) # ui_inputs data or default values if gui was not used self.rename = self.ui_inputs.get( @@ -829,3 +827,12 @@ class PublishClip: for key in par_split: parent = self._convert_to_entity(key) self.parents.append(parent) + + +def get_representation_files(representation): + anatomy = Anatomy() + files = [] + for file_data in representation["files"]: + path = anatomy.fill_root(file_data["path"]) + files.append(path) + return files diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py index 3a59ecea80..d3f83c7f24 100644 --- a/openpype/hosts/resolve/plugins/load/load_clip.py +++ b/openpype/hosts/resolve/plugins/load/load_clip.py @@ -1,13 +1,7 @@ -from copy import deepcopy - -from openpype.client import ( - get_version_by_id, - get_last_version_by_subset_id, -) -# from openpype.hosts import resolve +from openpype.client import get_last_version_by_subset_id from openpype.pipeline import ( - get_representation_path, - get_current_project_name, + get_representation_context, + get_current_project_name ) from openpype.hosts.resolve.api import lib, plugin from openpype.hosts.resolve.api.pipeline import ( @@ -48,48 +42,17 @@ class LoadClip(plugin.TimelineItemLoader): def load(self, context, name, namespace, options): - # in case loader uses multiselection - if self.timeline: - options.update({ - "timeline": self.timeline, - }) - # load clip to timeline and get main variables - path = self.filepath_from_context(context) + files = plugin.get_representation_files(context["representation"]) + timeline_item = plugin.ClipLoader( - self, context, path, **options).load() + self, context, **options).load(files) namespace = namespace or timeline_item.GetName() - version = context['version'] - version_data = version.get("data", {}) - version_name = version.get("name", None) - colorspace = version_data.get("colorspace", None) - object_name = "{}_{}".format(name, namespace) - - # add additional metadata from the version to imprint Avalon knob - add_keys = [ - "frameStart", "frameEnd", "source", "author", - "fps", "handleStart", "handleEnd" - ] - - # move all version data keys to tag data - data_imprint = {} - for key in add_keys: 
- data_imprint.update({ - key: version_data.get(key, str(None)) - }) - - # add variables related to version context - data_imprint.update({ - "version": version_name, - "colorspace": colorspace, - "objectName": object_name - }) # update color of clip regarding the version order - self.set_item_color(timeline_item, version) - - self.log.info("Loader done: `{}`".format(name)) + self.set_item_color(timeline_item, version=context["version"]) + data_imprint = self.get_tag_data(context, name, namespace) return containerise( timeline_item, name, namespace, context, @@ -103,53 +66,61 @@ class LoadClip(plugin.TimelineItemLoader): """ Updating previously loaded clips """ - # load clip to timeline and get main variables - context = deepcopy(representation["context"]) - context.update({"representation": representation}) + context = get_representation_context(representation) name = container['name'] namespace = container['namespace'] - timeline_item_data = lib.get_pype_timeline_item_by_name(namespace) - timeline_item = timeline_item_data["clip"]["item"] - project_name = get_current_project_name() - version = get_version_by_id(project_name, representation["parent"]) + timeline_item = container["_timeline_item"] + + media_pool_item = timeline_item.GetMediaPoolItem() + + files = plugin.get_representation_files(representation) + + loader = plugin.ClipLoader(self, context) + timeline_item = loader.update(timeline_item, files) + + # update color of clip regarding the version order + self.set_item_color(timeline_item, version=context["version"]) + + # if original media pool item has no remaining usages left + # remove it from the media pool + if int(media_pool_item.GetClipProperty("Usage")) == 0: + lib.remove_media_pool_item(media_pool_item) + + data_imprint = self.get_tag_data(context, name, namespace) + return update_container(timeline_item, data_imprint) + + def get_tag_data(self, context, name, namespace): + """Return data to be imprinted on the timeline item marker""" + + representation = context["representation"] + version = context['version'] version_data = version.get("data", {}) version_name = version.get("name", None) colorspace = version_data.get("colorspace", None) object_name = "{}_{}".format(name, namespace) - path = get_representation_path(representation) - context["version"] = {"data": version_data} - - loader = plugin.ClipLoader(self, context, path) - timeline_item = loader.update(timeline_item) # add additional metadata from the version to imprint Avalon knob - add_keys = [ + # move all version data keys to tag data + add_version_data_keys = [ "frameStart", "frameEnd", "source", "author", "fps", "handleStart", "handleEnd" ] - - # move all version data keys to tag data - data_imprint = {} - for key in add_keys: - data_imprint.update({ - key: version_data.get(key, str(None)) - }) + data = { + key: version_data.get(key, "None") for key in add_version_data_keys + } # add variables related to version context - data_imprint.update({ + data.update({ "representation": str(representation["_id"]), "version": version_name, "colorspace": colorspace, "objectName": object_name }) - - # update color of clip regarding the version order - self.set_item_color(timeline_item, version) - - return update_container(timeline_item, data_imprint) + return data @classmethod def set_item_color(cls, timeline_item, version): + """Color timeline item based on whether it is outdated or latest""" # define version name version_name = version.get("name", None) # get all versions in list @@ -169,3 +140,28 @@ class 
LoadClip(plugin.TimelineItemLoader): timeline_item.SetClipColor(cls.clip_color_last) else: timeline_item.SetClipColor(cls.clip_color) + + def remove(self, container): + timeline_item = container["_timeline_item"] + media_pool_item = timeline_item.GetMediaPoolItem() + timeline = lib.get_current_timeline() + + # DeleteClips function was added in Resolve 18.5+ + # by checking None we can detect whether the + # function exists in Resolve + if timeline.DeleteClips is not None: + timeline.DeleteClips([timeline_item]) + else: + # Resolve versions older than 18.5 can't delete clips via API + # so all we can do is just remove the pype marker to 'untag' it + if lib.get_pype_marker(timeline_item): + # Note: We must call `get_pype_marker` because + # `delete_pype_marker` uses a global variable set by + # `get_pype_marker` to delete the right marker + # TODO: Improve code to avoid the global `temp_marker_frame` + lib.delete_pype_marker(timeline_item) + + # if media pool item has no remaining usages left + # remove it from the media pool + if int(media_pool_item.GetClipProperty("Usage")) == 0: + lib.remove_media_pool_item(media_pool_item) diff --git a/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py b/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py deleted file mode 100644 index 92f2e43a72..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/test_otio_as_edl.py +++ /dev/null @@ -1,49 +0,0 @@ -#! python3 -import os -import sys - -import opentimelineio as otio - -from openpype.pipeline import install_host - -import openpype.hosts.resolve.api as bmdvr -from openpype.hosts.resolve.api.testing_utils import TestGUI -from openpype.hosts.resolve.otio import davinci_export as otio_export - - -class ThisTestGUI(TestGUI): - extensions = [".exr", ".jpg", ".mov", ".png", ".mp4", ".ari", ".arx"] - - def __init__(self): - super(ThisTestGUI, self).__init__() - # activate resolve from openpype - install_host(bmdvr) - - def _open_dir_button_pressed(self, event): - # selected_path = self.fu.RequestFile(os.path.expanduser("~")) - selected_path = self.fu.RequestDir(os.path.expanduser("~")) - self._widgets["inputTestSourcesFolder"].Text = selected_path - - # main function - def process(self, event): - self.input_dir_path = self._widgets["inputTestSourcesFolder"].Text - project = bmdvr.get_current_project() - otio_timeline = otio_export.create_otio_timeline(project) - print(f"_ otio_timeline: `{otio_timeline}`") - edl_path = os.path.join(self.input_dir_path, "this_file_name.edl") - print(f"_ edl_path: `{edl_path}`") - # xml_string = otio_adapters.fcpx_xml.write_to_string(otio_timeline) - # print(f"_ xml_string: `{xml_string}`") - otio.adapters.write_to_file( - otio_timeline, edl_path, adapter_name="cmx_3600") - project = bmdvr.get_current_project() - media_pool = project.GetMediaPool() - timeline = media_pool.ImportTimelineFromFile(edl_path) - # at the end close the window - self._close_window(None) - - -if __name__ == "__main__": - test_gui = ThisTestGUI() - test_gui.show_gui() - sys.exit(not bool(True)) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py b/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py deleted file mode 100644 index 91a361ec08..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_create_timeline_item_from_path.py +++ /dev/null @@ -1,73 +0,0 @@ -#! 
python3 -import os -import sys - -import clique - -from openpype.pipeline import install_host -from openpype.hosts.resolve.api.testing_utils import TestGUI -import openpype.hosts.resolve.api as bmdvr -from openpype.hosts.resolve.api.lib import ( - create_media_pool_item, - create_timeline_item, -) - - -class ThisTestGUI(TestGUI): - extensions = [".exr", ".jpg", ".mov", ".png", ".mp4", ".ari", ".arx"] - - def __init__(self): - super(ThisTestGUI, self).__init__() - # activate resolve from openpype - install_host(bmdvr) - - def _open_dir_button_pressed(self, event): - # selected_path = self.fu.RequestFile(os.path.expanduser("~")) - selected_path = self.fu.RequestDir(os.path.expanduser("~")) - self._widgets["inputTestSourcesFolder"].Text = selected_path - - # main function - def process(self, event): - self.input_dir_path = self._widgets["inputTestSourcesFolder"].Text - - self.dir_processing(self.input_dir_path) - - # at the end close the window - self._close_window(None) - - def dir_processing(self, dir_path): - collections, reminders = clique.assemble(os.listdir(dir_path)) - - # process reminders - for _rem in reminders: - _rem_path = os.path.join(dir_path, _rem) - - # go deeper if directory - if os.path.isdir(_rem_path): - print(_rem_path) - self.dir_processing(_rem_path) - else: - self.file_processing(_rem_path) - - # process collections - for _coll in collections: - _coll_path = os.path.join(dir_path, list(_coll).pop()) - self.file_processing(_coll_path) - - def file_processing(self, fpath): - print(f"_ fpath: `{fpath}`") - _base, ext = os.path.splitext(fpath) - # skip if unwanted extension - if ext not in self.extensions: - return - media_pool_item = create_media_pool_item(fpath) - print(media_pool_item) - - track_item = create_timeline_item(media_pool_item) - print(track_item) - - -if __name__ == "__main__": - test_gui = ThisTestGUI() - test_gui.show_gui() - sys.exit(not bool(True)) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py b/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py deleted file mode 100644 index 2e83188bde..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_load_media_pool_item.py +++ /dev/null @@ -1,24 +0,0 @@ -#! python3 -from openpype.pipeline import install_host -from openpype.hosts.resolve import api as bmdvr -from openpype.hosts.resolve.api.lib import ( - create_media_pool_item, - create_timeline_item, -) - - -def file_processing(fpath): - media_pool_item = create_media_pool_item(fpath) - print(media_pool_item) - - track_item = create_timeline_item(media_pool_item) - print(track_item) - - -if __name__ == "__main__": - path = "C:/CODE/__openpype_projects/jtest03dev/shots/sq01/mainsq01sh030/publish/plate/plateMain/v006/jt3d_mainsq01sh030_plateMain_v006.0996.exr" - - # activate resolve from openpype - install_host(bmdvr) - - file_processing(path) diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py b/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py deleted file mode 100644 index b64714ab16..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_startup_script.py +++ /dev/null @@ -1,5 +0,0 @@ -#! 
python3 -from openpype.hosts.resolve.startup import main - -if __name__ == "__main__": - main() diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py deleted file mode 100644 index 8270496f64..0000000000 --- a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py +++ /dev/null @@ -1,13 +0,0 @@ -#! python3 -from openpype.pipeline import install_host -from openpype.hosts.resolve import api as bmdvr -from openpype.hosts.resolve.api.lib import get_current_project - -if __name__ == "__main__": - install_host(bmdvr) - project = get_current_project() - timeline_count = project.GetTimelineCount() - print(f"Timeline count: {timeline_count}") - timeline = project.GetTimelineByIndex(timeline_count) - print(f"Timeline name: {timeline.GetName()}") - print(timeline.GetTrackCount("video")) diff --git a/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py b/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py new file mode 100644 index 0000000000..5628d0973f --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py @@ -0,0 +1,173 @@ +# -*- coding: utf-8 -*- +"""Creator of colorspace look files. + +This creator is used to publish colorspace look files thanks to +production type `ociolook`. All files are published as representation. +""" +from pathlib import Path + +from openpype.client import get_asset_by_name +from openpype.lib.attribute_definitions import ( + FileDef, EnumDef, TextDef, UISeparatorDef +) +from openpype.pipeline import ( + CreatedInstance, + CreatorError +) +from openpype.pipeline import colorspace +from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator + + +class CreateColorspaceLook(TrayPublishCreator): + """Creates colorspace look files.""" + + identifier = "io.openpype.creators.traypublisher.colorspace_look" + label = "Colorspace Look" + family = "ociolook" + description = "Publishes color space look file." + extensions = [".cc", ".cube", ".3dl", ".spi1d", ".spi3d", ".csp", ".lut"] + enabled = False + + colorspace_items = [ + (None, "Not set") + ] + colorspace_attr_show = False + config_items = None + config_data = None + + def get_detail_description(self): + return """# Colorspace Look + +This creator publishes color space look file (LUT). 
+ """ + + def get_icon(self): + return "mdi.format-color-fill" + + def create(self, subset_name, instance_data, pre_create_data): + repr_file = pre_create_data.get("luts_file") + if not repr_file: + raise CreatorError("No files specified") + + files = repr_file.get("filenames") + if not files: + # this should never happen + raise CreatorError("Missing files from representation") + + asset_doc = get_asset_by_name( + self.project_name, instance_data["asset"]) + + subset_name = self.get_subset_name( + variant=instance_data["variant"], + task_name=instance_data["task"] or "Not set", + project_name=self.project_name, + asset_doc=asset_doc, + ) + + instance_data["creator_attributes"] = { + "abs_lut_path": ( + Path(repr_file["directory"]) / files[0]).as_posix() + } + + # Create new instance + new_instance = CreatedInstance(self.family, subset_name, + instance_data, self) + new_instance.transient_data["config_items"] = self.config_items + new_instance.transient_data["config_data"] = self.config_data + + self._store_new_instance(new_instance) + + def collect_instances(self): + super().collect_instances() + for instance in self.create_context.instances: + if instance.creator_identifier == self.identifier: + instance.transient_data["config_items"] = self.config_items + instance.transient_data["config_data"] = self.config_data + + def get_instance_attr_defs(self): + return [ + EnumDef( + "working_colorspace", + self.colorspace_items, + default="Not set", + label="Working Colorspace", + ), + UISeparatorDef( + label="Advanced1" + ), + TextDef( + "abs_lut_path", + label="LUT Path", + ), + EnumDef( + "input_colorspace", + self.colorspace_items, + default="Not set", + label="Input Colorspace", + ), + EnumDef( + "direction", + [ + (None, "Not set"), + ("forward", "Forward"), + ("inverse", "Inverse") + ], + default="Not set", + label="Direction" + ), + EnumDef( + "interpolation", + [ + (None, "Not set"), + ("linear", "Linear"), + ("tetrahedral", "Tetrahedral"), + ("best", "Best"), + ("nearest", "Nearest") + ], + default="Not set", + label="Interpolation" + ), + EnumDef( + "output_colorspace", + self.colorspace_items, + default="Not set", + label="Output Colorspace", + ), + ] + + def get_pre_create_attr_defs(self): + return [ + FileDef( + "luts_file", + folders=False, + extensions=self.extensions, + allow_sequences=False, + single_item=True, + label="Look Files", + ) + ] + + def apply_settings(self, project_settings, system_settings): + host = self.create_context.host + host_name = host.name + project_name = host.get_current_project_name() + config_data = colorspace.get_imageio_config( + project_name, host_name, + project_settings=project_settings + ) + + if not config_data: + self.enabled = False + return + + filepath = config_data["path"] + config_items = colorspace.get_ocio_config_colorspaces(filepath) + labeled_colorspaces = colorspace.get_colorspaces_enumerator_items( + config_items, + include_aliases=True, + include_roles=True + ) + self.config_items = config_items + self.config_data = config_data + self.colorspace_items.extend(labeled_colorspaces) + self.enabled = True diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py new file mode 100644 index 0000000000..6aede099bf --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py @@ -0,0 +1,86 @@ +import os +from pprint import pformat +import pyblish.api +from openpype.pipeline import publish +from openpype.pipeline 
import colorspace + + +class CollectColorspaceLook(pyblish.api.InstancePlugin, + publish.OpenPypePyblishPluginMixin): + """Collect OCIO colorspace look from LUT file + """ + + label = "Collect Colorspace Look" + order = pyblish.api.CollectorOrder + hosts = ["traypublisher"] + families = ["ociolook"] + + def process(self, instance): + creator_attrs = instance.data["creator_attributes"] + + lut_repre_name = "LUTfile" + file_url = creator_attrs["abs_lut_path"] + file_name = os.path.basename(file_url) + base_name, ext = os.path.splitext(file_name) + + # set output name with base_name which was cleared + # of all symbols and all parts were capitalized + output_name = (base_name.replace("_", " ") + .replace(".", " ") + .replace("-", " ") + .title() + .replace(" ", "")) + + # get config items + config_items = instance.data["transientData"]["config_items"] + config_data = instance.data["transientData"]["config_data"] + + # get colorspace items + converted_color_data = {} + for colorspace_key in [ + "working_colorspace", + "input_colorspace", + "output_colorspace" + ]: + if creator_attrs[colorspace_key]: + color_data = colorspace.convert_colorspace_enumerator_item( + creator_attrs[colorspace_key], config_items) + converted_color_data[colorspace_key] = color_data + else: + converted_color_data[colorspace_key] = None + + # add colorspace to config data + if converted_color_data["working_colorspace"]: + config_data["colorspace"] = ( + converted_color_data["working_colorspace"]["name"] + ) + + # create lut representation data + lut_repre = { + "name": lut_repre_name, + "output": output_name, + "ext": ext.lstrip("."), + "files": file_name, + "stagingDir": os.path.dirname(file_url), + "tags": [] + } + instance.data.update({ + "representations": [lut_repre], + "source": file_url, + "ocioLookWorkingSpace": converted_color_data["working_colorspace"], + "ocioLookItems": [ + { + "name": lut_repre_name, + "ext": ext.lstrip("."), + "input_colorspace": converted_color_data[ + "input_colorspace"], + "output_colorspace": converted_color_data[ + "output_colorspace"], + "direction": creator_attrs["direction"], + "interpolation": creator_attrs["interpolation"], + "config_data": config_data + } + ], + }) + + self.log.debug(pformat(instance.data)) diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py index eb7fbd87a0..5db2b0cbad 100644 --- a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py +++ b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py @@ -1,6 +1,8 @@ import pyblish.api -from openpype.pipeline import registered_host -from openpype.pipeline import publish +from openpype.pipeline import ( + publish, + registered_host +) from openpype.lib import EnumDef from openpype.pipeline import colorspace @@ -13,11 +15,14 @@ class CollectColorspace(pyblish.api.InstancePlugin, label = "Choose representation colorspace" order = pyblish.api.CollectorOrder + 0.49 hosts = ["traypublisher"] + families = ["render", "plate", "reference", "image", "online"] + enabled = False colorspace_items = [ (None, "Don't override") ] colorspace_attr_show = False + config_items = None def process(self, instance): values = self.get_attr_values_from_data(instance.data) @@ -48,10 +53,14 @@ class CollectColorspace(pyblish.api.InstancePlugin, if config_data: filepath = config_data["path"] config_items = colorspace.get_ocio_config_colorspaces(filepath) - cls.colorspace_items.extend(( - 
(name, name) for name in config_items.keys() - )) - cls.colorspace_attr_show = True + labeled_colorspaces = colorspace.get_colorspaces_enumerator_items( + config_items, + include_aliases=True, + include_roles=True + ) + cls.config_items = config_items + cls.colorspace_items.extend(labeled_colorspaces) + cls.enabled = True @classmethod def get_attribute_defs(cls): @@ -60,7 +69,6 @@ class CollectColorspace(pyblish.api.InstancePlugin, "colorspace", cls.colorspace_items, default="Don't override", - label="Override Colorspace", - hidden=not cls.colorspace_attr_show + label="Override Colorspace" ) ] diff --git a/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py new file mode 100644 index 0000000000..f94bbc7a49 --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py @@ -0,0 +1,45 @@ +import os +import json +import pyblish.api +from openpype.pipeline import publish + + +class ExtractColorspaceLook(publish.Extractor, + publish.OpenPypePyblishPluginMixin): + """Extract OCIO colorspace look from LUT file + """ + + label = "Extract Colorspace Look" + order = pyblish.api.ExtractorOrder + hosts = ["traypublisher"] + families = ["ociolook"] + + def process(self, instance): + ociolook_items = instance.data["ocioLookItems"] + ociolook_working_color = instance.data["ocioLookWorkingSpace"] + staging_dir = self.staging_dir(instance) + + # create ociolook file attributes + ociolook_file_name = "ocioLookFile.json" + ociolook_file_content = { + "version": 1, + "data": { + "ocioLookItems": ociolook_items, + "ocioLookWorkingSpace": ociolook_working_color + } + } + + # write ociolook content into json file saved in staging dir + file_url = os.path.join(staging_dir, ociolook_file_name) + with open(file_url, "w") as f_: + json.dump(ociolook_file_content, f_, indent=4) + + # create lut representation data + ociolook_repre = { + "name": "ocioLookFile", + "ext": "json", + "files": ociolook_file_name, + "stagingDir": staging_dir, + "tags": [] + } + instance.data["representations"].append(ociolook_repre) diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py index 75b41cf606..03f9f299b2 100644 --- a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py @@ -18,6 +18,7 @@ class ValidateColorspace(pyblish.api.InstancePlugin, label = "Validate representation colorspace" order = pyblish.api.ValidatorOrder hosts = ["traypublisher"] + families = ["render", "plate", "reference", "image", "online"] def process(self, instance): diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py new file mode 100644 index 0000000000..548ce9d15a --- /dev/null +++ b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py @@ -0,0 +1,89 @@ +import pyblish.api + +from openpype.pipeline import ( + publish, + PublishValidationError +) + + +class ValidateColorspaceLook(pyblish.api.InstancePlugin, + publish.OpenPypePyblishPluginMixin): + """Validate colorspace look attributes""" + + label = "Validate colorspace look attributes" + order = pyblish.api.ValidatorOrder + hosts = ["traypublisher"] + families = ["ociolook"] + + def process(self, instance): + create_context = instance.context.data["create_context"] + 
created_instance = create_context.get_instance_by_id( + instance.data["instance_id"]) + creator_defs = created_instance.creator_attribute_defs + + ociolook_working_color = instance.data.get("ocioLookWorkingSpace") + ociolook_items = instance.data.get("ocioLookItems", []) + + creator_defs_by_key = {_def.key: _def.label for _def in creator_defs} + + not_set_keys = {} + if not ociolook_working_color: + not_set_keys["working_colorspace"] = creator_defs_by_key[ + "working_colorspace"] + + for ociolook_item in ociolook_items: + item_not_set_keys = self.validate_colorspace_set_attrs( + ociolook_item, creator_defs_by_key) + if item_not_set_keys: + not_set_keys[ociolook_item["name"]] = item_not_set_keys + + if not_set_keys: + message = ( + "Colorspace look attributes are not set: \n" + ) + for key, value in not_set_keys.items(): + if isinstance(value, list): + values_string = "\n\t- ".join(value) + message += f"\n\t{key}:\n\t- {values_string}" + else: + message += f"\n\t{value}" + + raise PublishValidationError( + title="Colorspace Look attributes", + message=message, + description=message + ) + + def validate_colorspace_set_attrs( + self, + ociolook_item, + creator_defs_by_key + ): + """Validate colorspace look attributes""" + + self.log.debug(f"Validate colorspace look attributes: {ociolook_item}") + + check_keys = [ + "input_colorspace", + "output_colorspace", + "direction", + "interpolation" + ] + + not_set_keys = [] + for key in check_keys: + if ociolook_item[key]: + # key is set and it is correct + continue + + def_label = creator_defs_by_key.get(key) + + if not def_label: + # raise since key is not recognized by creator defs + raise KeyError( + f"Colorspace look attribute '{key}' is not " + f"recognized by creator attributes: {creator_defs_by_key}" + ) + not_set_keys.append(def_label) + + return not_set_keys diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index 72816c9b81..f2d7b5f73e 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ b/openpype/hosts/unreal/api/pipeline.py @@ -13,8 +13,10 @@ from openpype.client import get_asset_by_name, get_assets from openpype.pipeline import ( register_loader_plugin_path, register_creator_plugin_path, + register_inventory_action_path, deregister_loader_plugin_path, deregister_creator_plugin_path, + deregister_inventory_action_path, AYON_CONTAINER_ID, legacy_io, ) @@ -28,6 +30,7 @@ import unreal # noqa logger = logging.getLogger("openpype.hosts.unreal") AYON_CONTAINERS = "AyonContainers" +AYON_ASSET_DIR = "/Game/Ayon/Assets" CONTEXT_CONTAINER = "Ayon/context.json" UNREAL_VERSION = semver.VersionInfo( *os.getenv("AYON_UNREAL_VERSION").split(".") @@ -127,6 +130,7 @@ def install(): pyblish.api.register_plugin_path(str(PUBLISH_PATH)) register_loader_plugin_path(str(LOAD_PATH)) register_creator_plugin_path(str(CREATE_PATH)) + register_inventory_action_path(str(INVENTORY_PATH)) _register_callbacks() _register_events() @@ -136,6 +140,7 @@ def uninstall(): pyblish.api.deregister_plugin_path(str(PUBLISH_PATH)) deregister_loader_plugin_path(str(LOAD_PATH)) deregister_creator_plugin_path(str(CREATE_PATH)) + deregister_inventory_action_path(str(INVENTORY_PATH)) def _register_callbacks(): @@ -649,6 +654,141 @@ def generate_sequence(h, h_dir): return sequence, (min_frame, max_frame) +def _get_comps_and_assets( + component_class, asset_class, old_assets, new_assets, selected +): + eas = unreal.get_editor_subsystem(unreal.EditorActorSubsystem) + + components = [] + if selected: + sel_actors = eas.get_selected_level_actors() + 
for actor in sel_actors: + comps = actor.get_components_by_class(component_class) + components.extend(comps) + else: + comps = eas.get_all_level_actors_components() + components = [ + c for c in comps if isinstance(c, component_class) + ] + + # Get all the static meshes among the old assets in a dictionary with + # the name as key + selected_old_assets = {} + for a in old_assets: + asset = unreal.EditorAssetLibrary.load_asset(a) + if isinstance(asset, asset_class): + selected_old_assets[asset.get_name()] = asset + + # Get all the static meshes among the new assets in a dictionary with + # the name as key + selected_new_assets = {} + for a in new_assets: + asset = unreal.EditorAssetLibrary.load_asset(a) + if isinstance(asset, asset_class): + selected_new_assets[asset.get_name()] = asset + + return components, selected_old_assets, selected_new_assets + + +def replace_static_mesh_actors(old_assets, new_assets, selected): + smes = unreal.get_editor_subsystem(unreal.StaticMeshEditorSubsystem) + + static_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets( + unreal.StaticMeshComponent, + unreal.StaticMesh, + old_assets, + new_assets, + selected + ) + + for old_name, old_mesh in old_meshes.items(): + new_mesh = new_meshes.get(old_name) + + if not new_mesh: + continue + + smes.replace_mesh_components_meshes( + static_mesh_comps, old_mesh, new_mesh) + + +def replace_skeletal_mesh_actors(old_assets, new_assets, selected): + skeletal_mesh_comps, old_meshes, new_meshes = _get_comps_and_assets( + unreal.SkeletalMeshComponent, + unreal.SkeletalMesh, + old_assets, + new_assets, + selected + ) + + for old_name, old_mesh in old_meshes.items(): + new_mesh = new_meshes.get(old_name) + + if not new_mesh: + continue + + for comp in skeletal_mesh_comps: + if comp.get_skeletal_mesh_asset() == old_mesh: + comp.set_skeletal_mesh_asset(new_mesh) + + +def replace_geometry_cache_actors(old_assets, new_assets, selected): + geometry_cache_comps, old_caches, new_caches = _get_comps_and_assets( + unreal.GeometryCacheComponent, + unreal.GeometryCache, + old_assets, + new_assets, + selected + ) + + for old_name, old_mesh in old_caches.items(): + new_mesh = new_caches.get(old_name) + + if not new_mesh: + continue + + for comp in geometry_cache_comps: + if comp.get_editor_property("geometry_cache") == old_mesh: + comp.set_geometry_cache(new_mesh) + + +def delete_asset_if_unused(container, asset_content): + ar = unreal.AssetRegistryHelpers.get_asset_registry() + + references = set() + + for asset_path in asset_content: + asset = ar.get_asset_by_object_path(asset_path) + refs = ar.get_referencers( + asset.package_name, + unreal.AssetRegistryDependencyOptions( + include_soft_package_references=False, + include_hard_package_references=True, + include_searchable_names=False, + include_soft_management_references=False, + include_hard_management_references=False + )) + if not refs: + continue + references = references.union(set(refs)) + + # Filter out references that are in the Temp folder + cleaned_references = { + ref for ref in references if not str(ref).startswith("/Temp/")} + + # Check which of the references are Levels + for ref in cleaned_references: + loaded_asset = unreal.EditorAssetLibrary.load_asset(ref) + if isinstance(loaded_asset, unreal.World): + # If there is at least a level, we can stop, we don't want to + # delete the container + return + + unreal.log("Previous version unused, deleting...") + + # No levels, delete the asset + unreal.EditorAssetLibrary.delete_directory(container["namespace"]) + + 
@contextmanager def maintained_selection(): """Stub to be either implemented or replaced. diff --git a/openpype/hosts/unreal/plugins/inventory/delete_unused_assets.py b/openpype/hosts/unreal/plugins/inventory/delete_unused_assets.py new file mode 100644 index 0000000000..8320e3c92d --- /dev/null +++ b/openpype/hosts/unreal/plugins/inventory/delete_unused_assets.py @@ -0,0 +1,66 @@ +import unreal + +from openpype.hosts.unreal.api.tools_ui import qt_app_context +from openpype.hosts.unreal.api.pipeline import delete_asset_if_unused +from openpype.pipeline import InventoryAction + + +class DeleteUnusedAssets(InventoryAction): + """Delete all the assets that are not used in any level. + """ + + label = "Delete Unused Assets" + icon = "trash" + color = "red" + order = 1 + + dialog = None + + def _delete_unused_assets(self, containers): + allowed_families = ["model", "rig"] + + for container in containers: + container_dir = container.get("namespace") + if container.get("family") not in allowed_families: + unreal.log_warning( + f"Container {container_dir} is not supported.") + continue + + asset_content = unreal.EditorAssetLibrary.list_assets( + container_dir, recursive=True, include_folder=False + ) + + delete_asset_if_unused(container, asset_content) + + def _show_confirmation_dialog(self, containers): + from qtpy import QtCore + from openpype.widgets import popup + from openpype.style import load_stylesheet + + dialog = popup.Popup() + dialog.setWindowFlags( + QtCore.Qt.Window + | QtCore.Qt.WindowStaysOnTopHint + ) + dialog.setFocusPolicy(QtCore.Qt.StrongFocus) + dialog.setWindowTitle("Delete all unused assets") + dialog.setMessage( + "You are about to delete all the assets in the project that \n" + "are not used in any level. Are you sure you want to continue?" + ) + dialog.setButtonText("Delete") + + dialog.on_clicked.connect( + lambda: self._delete_unused_assets(containers) + ) + + dialog.show() + dialog.raise_() + dialog.activateWindow() + dialog.setStyleSheet(load_stylesheet()) + + self.dialog = dialog + + def process(self, containers): + with qt_app_context(): + self._show_confirmation_dialog(containers) diff --git a/openpype/hosts/unreal/plugins/inventory/update_actors.py b/openpype/hosts/unreal/plugins/inventory/update_actors.py new file mode 100644 index 0000000000..b0d941ba80 --- /dev/null +++ b/openpype/hosts/unreal/plugins/inventory/update_actors.py @@ -0,0 +1,84 @@ +import unreal + +from openpype.hosts.unreal.api.pipeline import ( + ls, + replace_static_mesh_actors, + replace_skeletal_mesh_actors, + replace_geometry_cache_actors, +) +from openpype.pipeline import InventoryAction + + +def update_assets(containers, selected): + allowed_families = ["model", "rig"] + + # Get all the containers in the Unreal Project + all_containers = ls() + + for container in containers: + container_dir = container.get("namespace") + if container.get("family") not in allowed_families: + unreal.log_warning( + f"Container {container_dir} is not supported.") + continue + + # Get all containers with same asset_name but different objectName. + # These are the containers that need to be updated in the level. 
+ sa_containers = [ + i + for i in all_containers + if ( + i.get("asset_name") == container.get("asset_name") and + i.get("objectName") != container.get("objectName") + ) + ] + + asset_content = unreal.EditorAssetLibrary.list_assets( + container_dir, recursive=True, include_folder=False + ) + + # Update all actors in level + for sa_cont in sa_containers: + sa_dir = sa_cont.get("namespace") + old_content = unreal.EditorAssetLibrary.list_assets( + sa_dir, recursive=True, include_folder=False + ) + + if container.get("family") == "rig": + replace_skeletal_mesh_actors( + old_content, asset_content, selected) + replace_static_mesh_actors( + old_content, asset_content, selected) + elif container.get("family") == "model": + if container.get("loader") == "PointCacheAlembicLoader": + replace_geometry_cache_actors( + old_content, asset_content, selected) + else: + replace_static_mesh_actors( + old_content, asset_content, selected) + + unreal.EditorLevelLibrary.save_current_level() + + +class UpdateAllActors(InventoryAction): + """Update all the Actors in the current level to the version of the asset + selected in the scene manager. + """ + + label = "Replace all Actors in level to this version" + icon = "arrow-up" + + def process(self, containers): + update_assets(containers, False) + + +class UpdateSelectedActors(InventoryAction): + """Update only the selected Actors in the current level to the version + of the asset selected in the scene manager. + """ + + label = "Replace selected Actors in level to this version" + icon = "arrow-up" + + def process(self, containers): + update_assets(containers, True) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py index 1d60b63f9a..0328d2ae9f 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -69,7 +69,7 @@ class AnimationAlembicLoader(plugin.Loader): """ # Create directory for asset and ayon container - root = "/Game/Ayon/Assets" + root = unreal_pipeline.AYON_ASSET_DIR asset = context.get('asset').get('name') suffix = "_CON" if asset: diff --git a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py index 13ba236a7d..ec9c52b9fb 100644 --- a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py @@ -7,7 +7,11 @@ from openpype.pipeline import ( AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + AYON_ASSET_DIR, + create_container, + imprint, +) import unreal # noqa @@ -21,8 +25,11 @@ class PointCacheAlembicLoader(plugin.Loader): icon = "cube" color = "orange" + root = AYON_ASSET_DIR + + @staticmethod def get_task( - self, filename, asset_dir, asset_name, replace, + filename, asset_dir, asset_name, replace, frame_start=None, frame_end=None ): task = unreal.AssetImportTask() @@ -38,8 +45,6 @@ class PointCacheAlembicLoader(plugin.Loader): task.set_editor_property('automated', True) task.set_editor_property('save', True) - # set import options here - # Unreal 4.24 ignores the settings. 
It works with Unreal 4.26 options.set_editor_property( 'import_type', unreal.AlembicImportType.GEOMETRY_CACHE) @@ -64,13 +69,42 @@ class PointCacheAlembicLoader(plugin.Loader): return task - def load(self, context, name, namespace, data): - """Load and containerise representation into Content Browser. + def import_and_containerize( + self, filepath, asset_dir, asset_name, container_name, + frame_start, frame_end + ): + unreal.EditorAssetLibrary.make_directory(asset_dir) - This is two step process. First, import FBX to temporary path and - then call `containerise()` on it - this moves all content to new - directory and then it will create AssetContainer there and imprint it - with metadata. This will mark this path as container. + task = self.get_task( + filepath, asset_dir, asset_name, False, frame_start, frame_end) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + # Create Asset Container + create_container(container=container_name, path=asset_dir) + + def imprint( + self, asset, asset_dir, container_name, asset_name, representation, + frame_start, frame_end + ): + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": representation["_id"], + "parent": representation["parent"], + "family": representation["context"]["family"], + "frame_start": frame_start, + "frame_end": frame_end + } + imprint(f"{asset_dir}/{container_name}", data) + + def load(self, context, name, namespace, options): + """Load and containerise representation into Content Browser. Args: context (dict): application context @@ -79,30 +113,28 @@ class PointCacheAlembicLoader(plugin.Loader): This is not passed here, so namespace is set by `containerise()` because only then we know real path. - data (dict): Those would be data to be imprinted. This is not used - now, data are imprinted by `containerise()`. + data (dict): Those would be data to be imprinted. 
Returns: list(str): list of container content - """ # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" else: - asset_name = "{}".format(name) + name_version = f"{name}_v{version.get('name'):03d}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - "{}/{}/{}".format(root, asset, name), suffix="") + f"{self.root}/{asset}/{name_version}", suffix="") container_name += suffix - unreal.EditorAssetLibrary.make_directory(asset_dir) - frame_start = context.get('asset').get('data').get('frameStart') frame_end = context.get('asset').get('data').get('frameEnd') @@ -111,30 +143,16 @@ class PointCacheAlembicLoader(plugin.Loader): if frame_start == frame_end: frame_end += 1 - path = self.filepath_from_context(context) - task = self.get_task( - path, asset_dir, asset_name, False, frame_start, frame_end) + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = self.filepath_from_context(context) - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 + self.import_and_containerize( + path, asset_dir, asset_name, container_name, + frame_start, frame_end) - # Create Asset Container - unreal_pipeline.create_container( - container=container_name, path=asset_dir) - - data = { - "schema": "ayon:container-2.0", - "id": AYON_CONTAINER_ID, - "asset": asset, - "namespace": asset_dir, - "container_name": container_name, - "asset_name": asset_name, - "loader": str(self.__class__.__name__), - "representation": context["representation"]["_id"], - "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] - } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + self.imprint( + asset, asset_dir, container_name, asset_name, + context["representation"], frame_start, frame_end) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -146,27 +164,43 @@ class PointCacheAlembicLoader(plugin.Loader): return asset_content def update(self, container, representation): - name = container["asset_name"] - source_path = get_representation_path(representation) - destination_path = container["namespace"] - representation["context"] + context = representation.get("context", {}) - task = self.get_task(source_path, destination_path, name, False) - # do import fbx and replace existing data - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + unreal.log_warning(context) - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) - # update metadata - unreal_pipeline.imprint( - container_path, - { - "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + if not context: + raise RuntimeError("No context found in representation") + + # Create directory for asset and Ayon container + asset = context.get('asset') + name = context.get('subset') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + name_version = 
f"{name}_v{version:03d}" if version else f"{name}_hero" + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") + + container_name += suffix + + frame_start = int(container.get("frame_start")) + frame_end = int(container.get("frame_end")) + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = get_representation_path(representation) + + self.import_and_containerize( + path, asset_dir, asset_name, container_name, + frame_start, frame_end) + + self.imprint( + asset, asset_dir, container_name, asset_name, representation, + frame_start, frame_end) asset_content = unreal.EditorAssetLibrary.list_assets( - destination_path, recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index 9285602b64..8ebd9a82b6 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -7,7 +7,11 @@ from openpype.pipeline import ( AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + AYON_ASSET_DIR, + create_container, + imprint, +) import unreal # noqa @@ -20,10 +24,12 @@ class SkeletalMeshAlembicLoader(plugin.Loader): icon = "cube" color = "orange" - def get_task(self, filename, asset_dir, asset_name, replace): + root = AYON_ASSET_DIR + + @staticmethod + def get_task(filename, asset_dir, asset_name, replace, default_conversion): task = unreal.AssetImportTask() options = unreal.AbcImportSettings() - sm_settings = unreal.AbcStaticMeshSettings() conversion_settings = unreal.AbcConversionSettings( preset=unreal.AbcConversionPreset.CUSTOM, flip_u=False, flip_v=False, @@ -37,72 +43,38 @@ class SkeletalMeshAlembicLoader(plugin.Loader): task.set_editor_property('automated', True) task.set_editor_property('save', True) - # set import options here - # Unreal 4.24 ignores the settings. It works with Unreal 4.26 options.set_editor_property( 'import_type', unreal.AlembicImportType.SKELETAL) - options.static_mesh_settings = sm_settings - options.conversion_settings = conversion_settings + if not default_conversion: + conversion_settings = unreal.AbcConversionSettings( + preset=unreal.AbcConversionPreset.CUSTOM, + flip_u=False, flip_v=False, + rotation=[0.0, 0.0, 0.0], + scale=[1.0, 1.0, 1.0]) + options.conversion_settings = conversion_settings + task.options = options return task - def load(self, context, name, namespace, data): - """Load and containerise representation into Content Browser. + def import_and_containerize( + self, filepath, asset_dir, asset_name, container_name, + default_conversion=False + ): + unreal.EditorAssetLibrary.make_directory(asset_dir) - This is two step process. First, import FBX to temporary path and - then call `containerise()` on it - this moves all content to new - directory and then it will create AssetContainer there and imprint it - with metadata. This will mark this path as container. + task = self.get_task( + filepath, asset_dir, asset_name, False, default_conversion) - Args: - context (dict): application context - name (str): subset name - namespace (str): in Unreal this is basically path to container. 
- This is not passed here, so namespace is set - by `containerise()` because only then we know - real path. - data (dict): Those would be data to be imprinted. This is not used - now, data are imprinted by `containerise()`. + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) - Returns: - list(str): list of container content - """ - - # Create directory for asset and ayon container - root = "/Game/Ayon/Assets" - asset = context.get('asset').get('name') - suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) - else: - asset_name = "{}".format(name) - version = context.get('version') - # Check if version is hero version and use different name - if not version.get("name") and version.get('type') == "hero_version": - name_version = f"{name}_hero" - else: - name_version = f"{name}_v{version.get('name'):03d}" - - tools = unreal.AssetToolsHelpers().get_asset_tools() - asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name_version}", suffix="") - - container_name += suffix - - if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): - unreal.EditorAssetLibrary.make_directory(asset_dir) - - path = self.filepath_from_context(context) - task = self.get_task(path, asset_dir, asset_name, False) - - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 - - # Create Asset Container - unreal_pipeline.create_container( - container=container_name, path=asset_dir) + # Create Asset Container + create_container(container=container_name, path=asset_dir) + def imprint( + self, asset, asset_dir, container_name, asset_name, representation + ): data = { "schema": "ayon:container-2.0", "id": AYON_CONTAINER_ID, @@ -111,12 +83,57 @@ class SkeletalMeshAlembicLoader(plugin.Loader): "container_name": container_name, "asset_name": asset_name, "loader": str(self.__class__.__name__), - "representation": context["representation"]["_id"], - "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] + "representation": representation["_id"], + "parent": representation["parent"], + "family": representation["context"]["family"] } - unreal_pipeline.imprint( - f"{asset_dir}/{container_name}", data) + imprint(f"{asset_dir}/{container_name}", data) + + def load(self, context, name, namespace, options): + """Load and containerise representation into Content Browser. + + Args: + context (dict): application context + name (str): subset name + namespace (str): in Unreal this is basically path to container. + This is not passed here, so namespace is set + by `containerise()` because only then we know + real path. + data (dict): Those would be data to be imprinted. 
+ + Returns: + list(str): list of container content + """ + # Create directory for asset and ayon container + asset = context.get('asset').get('name') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + if not version.get("name") and version.get('type') == "hero_version": + name_version = f"{name}_hero" + else: + name_version = f"{name}_v{version.get('name'):03d}" + + default_conversion = False + if options.get("default_conversion"): + default_conversion = options.get("default_conversion") + + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") + + container_name += suffix + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = self.filepath_from_context(context) + + self.import_and_containerize(path, asset_dir, asset_name, + container_name, default_conversion) + + self.imprint( + asset, asset_dir, container_name, asset_name, + context["representation"]) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -128,26 +145,36 @@ class SkeletalMeshAlembicLoader(plugin.Loader): return asset_content def update(self, container, representation): - name = container["asset_name"] - source_path = get_representation_path(representation) - destination_path = container["namespace"] + context = representation.get("context", {}) - task = self.get_task(source_path, destination_path, name, True) + if not context: + raise RuntimeError("No context found in representation") - # do import fbx and replace existing data - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) - # update metadata - unreal_pipeline.imprint( - container_path, - { - "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + # Create directory for asset and Ayon container + asset = context.get('asset') + name = context.get('subset') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + name_version = f"{name}_v{version:03d}" if version else f"{name}_hero" + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") + + container_name += suffix + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = get_representation_path(representation) + + self.import_and_containerize(path, asset_dir, asset_name, + container_name) + + self.imprint( + asset, asset_dir, container_name, asset_name, representation) asset_content = unreal.EditorAssetLibrary.list_assets( - destination_path, recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py index 9aa0e4d1a8..a5a8730732 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py @@ -7,7 +7,11 @@ from openpype.pipeline import ( AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline 
+from openpype.hosts.unreal.api.pipeline import ( + AYON_ASSET_DIR, + create_container, + imprint, +) import unreal # noqa @@ -20,14 +24,79 @@ class SkeletalMeshFBXLoader(plugin.Loader): icon = "cube" color = "orange" + root = AYON_ASSET_DIR + + @staticmethod + def get_task(filename, asset_dir, asset_name, replace): + task = unreal.AssetImportTask() + options = unreal.FbxImportUI() + + task.set_editor_property('filename', filename) + task.set_editor_property('destination_path', asset_dir) + task.set_editor_property('destination_name', asset_name) + task.set_editor_property('replace_existing', replace) + task.set_editor_property('automated', True) + task.set_editor_property('save', True) + + options.set_editor_property( + 'automated_import_should_detect_type', False) + options.set_editor_property('import_as_skeletal', True) + options.set_editor_property('import_animations', False) + options.set_editor_property('import_mesh', True) + options.set_editor_property('import_materials', False) + options.set_editor_property('import_textures', False) + options.set_editor_property('skeleton', None) + options.set_editor_property('create_physics_asset', False) + + options.set_editor_property( + 'mesh_type_to_import', + unreal.FBXImportType.FBXIT_SKELETAL_MESH) + + options.skeletal_mesh_import_data.set_editor_property( + 'import_content_type', + unreal.FBXImportContentType.FBXICT_ALL) + + options.skeletal_mesh_import_data.set_editor_property( + 'normal_import_method', + unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS) + + task.options = options + + return task + + def import_and_containerize( + self, filepath, asset_dir, asset_name, container_name + ): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + task = self.get_task( + filepath, asset_dir, asset_name, False) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + # Create Asset Container + create_container(container=container_name, path=asset_dir) + + def imprint( + self, asset, asset_dir, container_name, asset_name, representation + ): + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": representation["_id"], + "parent": representation["parent"], + "family": representation["context"]["family"] + } + imprint(f"{asset_dir}/{container_name}", data) + def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. - This is a two step process. First, import FBX to temporary path and - then call `containerise()` on it - this moves all content to new - directory and then it will create AssetContainer there and imprint it - with metadata. This will mark this path as container. - Args: context (dict): application context name (str): subset name @@ -35,23 +104,15 @@ class SkeletalMeshFBXLoader(plugin.Loader): This is not passed here, so namespace is set by `containerise()` because only then we know real path. - options (dict): Those would be data to be imprinted. This is not - used now, data are imprinted by `containerise()`. + data (dict): Those would be data to be imprinted. 
Returns: list(str): list of container content - """ # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" - if options and options.get("asset_dir"): - root = options["asset_dir"] asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) - else: - asset_name = "{}".format(name) + asset_name = f"{asset}_{name}" if asset else f"{name}" version = context.get('version') # Check if version is hero version and use different name if not version.get("name") and version.get('type') == "hero_version": @@ -61,67 +122,20 @@ class SkeletalMeshFBXLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name_version}", suffix="") + f"{self.root}/{asset}/{name_version}", suffix="" + ) container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): - unreal.EditorAssetLibrary.make_directory(asset_dir) - - task = unreal.AssetImportTask() - path = self.filepath_from_context(context) - task.set_editor_property('filename', path) - task.set_editor_property('destination_path', asset_dir) - task.set_editor_property('destination_name', asset_name) - task.set_editor_property('replace_existing', False) - task.set_editor_property('automated', True) - task.set_editor_property('save', False) - # set import options here - options = unreal.FbxImportUI() - options.set_editor_property('import_as_skeletal', True) - options.set_editor_property('import_animations', False) - options.set_editor_property('import_mesh', True) - options.set_editor_property('import_materials', False) - options.set_editor_property('import_textures', False) - options.set_editor_property('skeleton', None) - options.set_editor_property('create_physics_asset', False) + self.import_and_containerize( + path, asset_dir, asset_name, container_name) - options.set_editor_property( - 'mesh_type_to_import', - unreal.FBXImportType.FBXIT_SKELETAL_MESH) - - options.skeletal_mesh_import_data.set_editor_property( - 'import_content_type', - unreal.FBXImportContentType.FBXICT_ALL) - # set to import normals, otherwise Unreal will compute them - # and it will take a long time, depending on the size of the mesh - options.skeletal_mesh_import_data.set_editor_property( - 'normal_import_method', - unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS) - - task.options = options - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 - - # Create Asset Container - unreal_pipeline.create_container( - container=container_name, path=asset_dir) - - data = { - "schema": "ayon:container-2.0", - "id": AYON_CONTAINER_ID, - "asset": asset, - "namespace": asset_dir, - "container_name": container_name, - "asset_name": asset_name, - "loader": str(self.__class__.__name__), - "representation": context["representation"]["_id"], - "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] - } - unreal_pipeline.imprint( - f"{asset_dir}/{container_name}", data) + self.imprint( + asset, asset_dir, container_name, asset_name, + context["representation"]) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -133,58 +147,36 @@ class SkeletalMeshFBXLoader(plugin.Loader): return asset_content def update(self, container, representation): - name = container["asset_name"] - source_path = get_representation_path(representation) - destination_path = container["namespace"] + context 
= representation.get("context", {}) - task = unreal.AssetImportTask() + if not context: + raise RuntimeError("No context found in representation") - task.set_editor_property('filename', source_path) - task.set_editor_property('destination_path', destination_path) - task.set_editor_property('destination_name', name) - task.set_editor_property('replace_existing', True) - task.set_editor_property('automated', True) - task.set_editor_property('save', True) + # Create directory for asset and Ayon container + asset = context.get('asset') + name = context.get('subset') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + name_version = f"{name}_v{version:03d}" if version else f"{name}_hero" + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") - # set import options here - options = unreal.FbxImportUI() - options.set_editor_property('import_as_skeletal', True) - options.set_editor_property('import_animations', False) - options.set_editor_property('import_mesh', True) - options.set_editor_property('import_materials', True) - options.set_editor_property('import_textures', True) - options.set_editor_property('skeleton', None) - options.set_editor_property('create_physics_asset', False) + container_name += suffix - options.set_editor_property('mesh_type_to_import', - unreal.FBXImportType.FBXIT_SKELETAL_MESH) + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = get_representation_path(representation) - options.skeletal_mesh_import_data.set_editor_property( - 'import_content_type', - unreal.FBXImportContentType.FBXICT_ALL - ) - # set to import normals, otherwise Unreal will compute them - # and it will take a long time, depending on the size of the mesh - options.skeletal_mesh_import_data.set_editor_property( - 'normal_import_method', - unreal.FBXNormalImportMethod.FBXNIM_IMPORT_NORMALS - ) + self.import_and_containerize( + path, asset_dir, asset_name, container_name) - task.options = options - # do import fbx and replace existing data - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) - # update metadata - unreal_pipeline.imprint( - container_path, - { - "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + self.imprint( + asset, asset_dir, container_name, asset_name, representation) asset_content = unreal.EditorAssetLibrary.list_assets( - destination_path, recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index bb13692f9e..019a95a9bf 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -7,7 +7,11 @@ from openpype.pipeline import ( AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + AYON_ASSET_DIR, + create_container, + imprint, +) import unreal # noqa @@ -20,6 +24,8 @@ class StaticMeshAlembicLoader(plugin.Loader): icon = "cube" color = "orange" + root = AYON_ASSET_DIR + @staticmethod def 
get_task(filename, asset_dir, asset_name, replace, default_conversion): task = unreal.AssetImportTask() @@ -53,14 +59,40 @@ class StaticMeshAlembicLoader(plugin.Loader): return task + def import_and_containerize( + self, filepath, asset_dir, asset_name, container_name, + default_conversion=False + ): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + task = self.get_task( + filepath, asset_dir, asset_name, False, default_conversion) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + # Create Asset Container + create_container(container=container_name, path=asset_dir) + + def imprint( + self, asset, asset_dir, container_name, asset_name, representation + ): + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": representation["_id"], + "parent": representation["parent"], + "family": representation["context"]["family"] + } + imprint(f"{asset_dir}/{container_name}", data) + def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. - This is two step process. First, import FBX to temporary path and - then call `containerise()` on it - this moves all content to new - directory and then it will create AssetContainer there and imprint it - with metadata. This will mark this path as container. - Args: context (dict): application context name (str): subset name @@ -68,15 +100,12 @@ class StaticMeshAlembicLoader(plugin.Loader): This is not passed here, so namespace is set by `containerise()` because only then we know real path. - data (dict): Those would be data to be imprinted. This is not used - now, data are imprinted by `containerise()`. + data (dict): Those would be data to be imprinted. 
Returns: list(str): list of container content - """ # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" asset_name = f"{asset}_{name}" if asset else f"{name}" @@ -93,39 +122,22 @@ class StaticMeshAlembicLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name_version}", suffix="") + f"{self.root}/{asset}/{name_version}", suffix="") container_name += suffix if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): - unreal.EditorAssetLibrary.make_directory(asset_dir) - path = self.filepath_from_context(context) - task = self.get_task( - path, asset_dir, asset_name, False, default_conversion) - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 + self.import_and_containerize(path, asset_dir, asset_name, + container_name, default_conversion) - # Create Asset Container - unreal_pipeline.create_container( - container=container_name, path=asset_dir) - - data = { - "schema": "ayon:container-2.0", - "id": AYON_CONTAINER_ID, - "asset": asset, - "namespace": asset_dir, - "container_name": container_name, - "asset_name": asset_name, - "loader": str(self.__class__.__name__), - "representation": context["representation"]["_id"], - "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] - } - unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) + self.imprint( + asset, asset_dir, container_name, asset_name, + context["representation"]) asset_content = unreal.EditorAssetLibrary.list_assets( - asset_dir, recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: @@ -134,27 +146,36 @@ class StaticMeshAlembicLoader(plugin.Loader): return asset_content def update(self, container, representation): - name = container["asset_name"] - source_path = get_representation_path(representation) - destination_path = container["namespace"] + context = representation.get("context", {}) - task = self.get_task(source_path, destination_path, name, True, False) + if not context: + raise RuntimeError("No context found in representation") - # do import fbx and replace existing data - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + # Create directory for asset and Ayon container + asset = context.get('asset') + name = context.get('subset') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + name_version = f"{name}_v{version:03d}" if version else f"{name}_hero" + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) - # update metadata - unreal_pipeline.imprint( - container_path, - { - "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + container_name += suffix + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = get_representation_path(representation) + + self.import_and_containerize(path, asset_dir, asset_name, + container_name) + + self.imprint( + asset, asset_dir, container_name, asset_name, representation) asset_content = unreal.EditorAssetLibrary.list_assets( - destination_path, 
recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py index ffc68d8375..66088d793c 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py @@ -7,7 +7,11 @@ from openpype.pipeline import ( AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin -from openpype.hosts.unreal.api import pipeline as unreal_pipeline +from openpype.hosts.unreal.api.pipeline import ( + AYON_ASSET_DIR, + create_container, + imprint, +) import unreal # noqa @@ -20,6 +24,8 @@ class StaticMeshFBXLoader(plugin.Loader): icon = "cube" color = "orange" + root = AYON_ASSET_DIR + @staticmethod def get_task(filename, asset_dir, asset_name, replace): task = unreal.AssetImportTask() @@ -46,14 +52,39 @@ class StaticMeshFBXLoader(plugin.Loader): return task + def import_and_containerize( + self, filepath, asset_dir, asset_name, container_name + ): + unreal.EditorAssetLibrary.make_directory(asset_dir) + + task = self.get_task( + filepath, asset_dir, asset_name, False) + + unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + + # Create Asset Container + create_container(container=container_name, path=asset_dir) + + def imprint( + self, asset, asset_dir, container_name, asset_name, representation + ): + data = { + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, + "asset": asset, + "namespace": asset_dir, + "container_name": container_name, + "asset_name": asset_name, + "loader": str(self.__class__.__name__), + "representation": representation["_id"], + "parent": representation["parent"], + "family": representation["context"]["family"] + } + imprint(f"{asset_dir}/{container_name}", data) + def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. - This is two step process. First, import FBX to temporary path and - then call `containerise()` on it - this moves all content to new - directory and then it will create AssetContainer there and imprint it - with metadata. This will mark this path as container. - Args: context (dict): application context name (str): subset name @@ -61,23 +92,15 @@ class StaticMeshFBXLoader(plugin.Loader): This is not passed here, so namespace is set by `containerise()` because only then we know real path. - options (dict): Those would be data to be imprinted. This is not - used now, data are imprinted by `containerise()`. + options (dict): Those would be data to be imprinted. 
Returns: list(str): list of container content """ - # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" - if options and options.get("asset_dir"): - root = options["asset_dir"] asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) - else: - asset_name = "{}".format(name) + asset_name = f"{asset}_{name}" if asset else f"{name}" version = context.get('version') # Check if version is hero version and use different name if not version.get("name") and version.get('type') == "hero_version": @@ -87,35 +110,20 @@ class StaticMeshFBXLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - f"{root}/{asset}/{name_version}", suffix="" + f"{self.root}/{asset}/{name_version}", suffix="" ) container_name += suffix - unreal.EditorAssetLibrary.make_directory(asset_dir) + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = self.filepath_from_context(context) - path = self.filepath_from_context(context) - task = self.get_task(path, asset_dir, asset_name, False) + self.import_and_containerize( + path, asset_dir, asset_name, container_name) - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501 - - # Create Asset Container - unreal_pipeline.create_container( - container=container_name, path=asset_dir) - - data = { - "schema": "ayon:container-2.0", - "id": AYON_CONTAINER_ID, - "asset": asset, - "namespace": asset_dir, - "container_name": container_name, - "asset_name": asset_name, - "loader": str(self.__class__.__name__), - "representation": context["representation"]["_id"], - "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] - } - unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) + self.imprint( + asset, asset_dir, container_name, asset_name, + context["representation"]) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -127,27 +135,36 @@ class StaticMeshFBXLoader(plugin.Loader): return asset_content def update(self, container, representation): - name = container["asset_name"] - source_path = get_representation_path(representation) - destination_path = container["namespace"] + context = representation.get("context", {}) - task = self.get_task(source_path, destination_path, name, True) + if not context: + raise RuntimeError("No context found in representation") - # do import fbx and replace existing data - unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) + # Create directory for asset and Ayon container + asset = context.get('asset') + name = context.get('subset') + suffix = "_CON" + asset_name = f"{asset}_{name}" if asset else f"{name}" + version = context.get('version') + # Check if version is hero version and use different name + name_version = f"{name}_v{version:03d}" if version else f"{name}_hero" + tools = unreal.AssetToolsHelpers().get_asset_tools() + asset_dir, container_name = tools.create_unique_asset_name( + f"{self.root}/{asset}/{name_version}", suffix="") - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) - # update metadata - unreal_pipeline.imprint( - container_path, - { - "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + container_name += suffix + + if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir): + path = get_representation_path(representation) 
+ + self.import_and_containerize( + path, asset_dir, asset_name, container_name) + + self.imprint( + asset, asset_dir, container_name, asset_name, representation) asset_content = unreal.EditorAssetLibrary.list_assets( - destination_path, recursive=True, include_folder=True + asset_dir, recursive=True, include_folder=False ) for a in asset_content: diff --git a/openpype/hosts/unreal/plugins/load/load_uasset.py b/openpype/hosts/unreal/plugins/load/load_uasset.py index 88aaac41e8..dfd92d2fe5 100644 --- a/openpype/hosts/unreal/plugins/load/load_uasset.py +++ b/openpype/hosts/unreal/plugins/load/load_uasset.py @@ -41,7 +41,7 @@ class UAssetLoader(plugin.Loader): """ # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" + root = unreal_pipeline.AYON_ASSET_DIR asset = context.get('asset').get('name') suffix = "_CON" asset_name = f"{asset}_{name}" if asset else f"{name}" diff --git a/openpype/hosts/unreal/plugins/load/load_yeticache.py b/openpype/hosts/unreal/plugins/load/load_yeticache.py index 22f5029bac..780ed7c484 100644 --- a/openpype/hosts/unreal/plugins/load/load_yeticache.py +++ b/openpype/hosts/unreal/plugins/load/load_yeticache.py @@ -86,7 +86,7 @@ class YetiLoader(plugin.Loader): raise RuntimeError("Groom plugin is not activated.") # Create directory for asset and Ayon container - root = "/Game/Ayon/Assets" + root = unreal_pipeline.AYON_ASSET_DIR asset = context.get('asset').get('name') suffix = "_CON" asset_name = f"{asset}_{name}" if asset else f"{name}" diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py index 96485d5a2d..06acbf0992 100644 --- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py +++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py @@ -3,6 +3,7 @@ import os import re import pyblish.api +from openpype.pipeline.publish import PublishValidationError class ValidateSequenceFrames(pyblish.api.InstancePlugin): @@ -39,8 +40,22 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): collections, remainder = clique.assemble( repr["files"], minimum_items=1, patterns=patterns) - assert not remainder, "Must not have remainder" - assert len(collections) == 1, "Must detect single collection" + if remainder: + raise PublishValidationError( + "Some files have been found outside a sequence. " + f"Invalid files: {remainder}") + if not collections: + raise PublishValidationError( + "We have been unable to find a sequence in the " + "files. Please ensure the files are named " + "appropriately. " + f"Files: {repr_files}") + if len(collections) > 1: + raise PublishValidationError( + "Multiple collections detected. There should be a single " + "collection per representation. " + f"Collections identified: {collections}") + collection = collections[0] frames = list(collection.indexes) @@ -53,8 +68,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): data["clipOut"]) if current_range != required_range: - raise ValueError(f"Invalid frame range: {current_range} - " - f"expected: {required_range}") + raise PublishValidationError( + f"Invalid frame range: {current_range} - " + f"expected: {required_range}") missing = collection.holes().indexes - assert not missing, "Missing frames: %s" % (missing,) + if missing: + raise PublishValidationError( + "Missing frames have been detected. 
" + f"Missing frames: {missing}") diff --git a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py index 1416255083..6bb67ef260 100644 --- a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py +++ b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py @@ -156,8 +156,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin): self.log.debug("frameEnd:: {}".format( instance.data["frameEnd"])) except Exception: - self.log.warning("Unable to count frames " - "duration {}".format(no_of_frames)) + self.log.warning("Unable to count frames duration.") instance.data["handleStart"] = asset_doc["data"]["handleStart"] instance.data["handleEnd"] = asset_doc["data"]["handleEnd"] diff --git a/openpype/modules/avalon_apps/avalon_app.py b/openpype/modules/avalon_apps/avalon_app.py index a0226ecc5c..57754793c4 100644 --- a/openpype/modules/avalon_apps/avalon_app.py +++ b/openpype/modules/avalon_apps/avalon_app.py @@ -1,5 +1,6 @@ import os +from openpype import AYON_SERVER_ENABLED from openpype.modules import OpenPypeModule, ITrayModule @@ -75,20 +76,11 @@ class AvalonModule(OpenPypeModule, ITrayModule): def show_library_loader(self): if self._library_loader_window is None: - from qtpy import QtCore - from openpype.tools.libraryloader import LibraryLoaderWindow from openpype.pipeline import install_openpype_plugins - - libraryloader = LibraryLoaderWindow( - show_projects=True, - show_libraries=True - ) - # Remove always on top flag for tray - window_flags = libraryloader.windowFlags() - if window_flags | QtCore.Qt.WindowStaysOnTopHint: - window_flags ^= QtCore.Qt.WindowStaysOnTopHint - libraryloader.setWindowFlags(window_flags) - self._library_loader_window = libraryloader + if AYON_SERVER_ENABLED: + self._init_ayon_loader() + else: + self._init_library_loader() install_openpype_plugins() @@ -106,3 +98,25 @@ class AvalonModule(OpenPypeModule, ITrayModule): if self.tray_initialized: from .rest_api import AvalonRestApiResource self.rest_api_obj = AvalonRestApiResource(self, server_manager) + + def _init_library_loader(self): + from qtpy import QtCore + from openpype.tools.libraryloader import LibraryLoaderWindow + + libraryloader = LibraryLoaderWindow( + show_projects=True, + show_libraries=True + ) + # Remove always on top flag for tray + window_flags = libraryloader.windowFlags() + if window_flags | QtCore.Qt.WindowStaysOnTopHint: + window_flags ^= QtCore.Qt.WindowStaysOnTopHint + libraryloader.setWindowFlags(window_flags) + self._library_loader_window = libraryloader + + def _init_ayon_loader(self): + from openpype.tools.ayon_loader.ui import LoaderWindow + + libraryloader = LoaderWindow() + + self._library_loader_window = libraryloader diff --git a/openpype/modules/base.py b/openpype/modules/base.py index a3c21718b9..080be251f3 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -31,13 +31,13 @@ from openpype.settings.lib import ( get_studio_system_settings_overrides, load_json_file ) +from openpype.settings.ayon_settings import is_dev_mode_enabled from openpype.lib import ( Logger, import_filepath, import_module_from_dirpath, ) -from openpype.lib.openpype_version import is_staging_enabled from .interfaces import ( OpenPypeInterface, @@ -317,21 +317,10 @@ def load_modules(force=False): time.sleep(0.1) -def _get_ayon_addons_information(): - """Receive information about addons to use from server. - - Todos: - Actually ask server for the information. 
- Allow project name as optional argument to be able to query information - about used addons for specific project. - Returns: - List[Dict[str, Any]]: List of addon information to use. - """ - - output = [] +def _get_ayon_bundle_data(): bundle_name = os.getenv("AYON_BUNDLE_NAME") bundles = ayon_api.get_bundles()["bundles"] - final_bundle = next( + return next( ( bundle for bundle in bundles @@ -339,10 +328,22 @@ def _get_ayon_addons_information(): ), None ) - if final_bundle is None: - return output - bundle_addons = final_bundle["addons"] + +def _get_ayon_addons_information(bundle_info): + """Receive information about addons to use from server. + + Todos: + Actually ask server for the information. + Allow project name as optional argument to be able to query information + about used addons for specific project. + + Returns: + List[Dict[str, Any]]: List of addon information to use. + """ + + output = [] + bundle_addons = bundle_info["addons"] addons = ayon_api.get_addons_info()["addons"] for addon in addons: name = addon["name"] @@ -378,38 +379,73 @@ def _load_ayon_addons(openpype_modules, modules_key, log): v3_addons_to_skip = [] - addons_info = _get_ayon_addons_information() + bundle_info = _get_ayon_bundle_data() + addons_info = _get_ayon_addons_information(bundle_info) if not addons_info: return v3_addons_to_skip + addons_dir = os.environ.get("AYON_ADDONS_DIR") if not addons_dir: addons_dir = os.path.join( appdirs.user_data_dir("AYON", "Ynput"), "addons" ) - if not os.path.exists(addons_dir): + + dev_mode_enabled = is_dev_mode_enabled() + dev_addons_info = {} + if dev_mode_enabled: + # Get dev addons info only when dev mode is enabled + dev_addons_info = bundle_info.get("addonDevelopment", dev_addons_info) + + addons_dir_exists = os.path.exists(addons_dir) + if not addons_dir_exists: log.warning("Addons directory does not exists. Path \"{}\"".format( addons_dir )) - return v3_addons_to_skip for addon_info in addons_info: addon_name = addon_info["name"] addon_version = addon_info["version"] - folder_name = "{}_{}".format(addon_name, addon_version) - addon_dir = os.path.join(addons_dir, folder_name) - if not os.path.exists(addon_dir): - log.debug(( - "No localized client code found for addon {} {}." - ).format(addon_name, addon_version)) + dev_addon_info = dev_addons_info.get(addon_name, {}) + use_dev_path = dev_addon_info.get("enabled", False) + + addon_dir = None + if use_dev_path: + addon_dir = dev_addon_info["path"] + if not addon_dir or not os.path.exists(addon_dir): + log.warning(( + "Dev addon {} {} path does not exists. Path \"{}\"" + ).format(addon_name, addon_version, addon_dir)) + continue + + elif addons_dir_exists: + folder_name = "{}_{}".format(addon_name, addon_version) + addon_dir = os.path.join(addons_dir, folder_name) + if not os.path.exists(addon_dir): + log.debug(( + "No localized client code found for addon {} {}." + ).format(addon_name, addon_version)) + continue + + if not addon_dir: continue sys.path.insert(0, addon_dir) imported_modules = [] for name in os.listdir(addon_dir): + # Ignore of files is implemented to be able to run code from code + # where usually is more files than just the addon + # Ignore start and setup scripts + if name in ("setup.py", "start.py"): + continue + path = os.path.join(addon_dir, name) basename, ext = os.path.splitext(name) + # Ignore folders/files with dot in name + # - dot names cannot be imported in Python + if "." 
in basename: + continue is_dir = os.path.isdir(path) is_py_file = ext.lower() == ".py" if not is_py_file and not is_dir: diff --git a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py index 0b97582d2a..9a718aa089 100644 --- a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py @@ -13,7 +13,8 @@ from openpype.pipeline.publish import ( ) from openpype.lib import ( BoolDef, - NumberDef + NumberDef, + is_running_from_build ) @@ -230,6 +231,11 @@ class FusionSubmitDeadline( "OPENPYPE_LOG_NO_COLORS", "IS_TEST" ] + + # Add OpenPype version if we are running from build. + if is_running_from_build(): + keys.append("OPENPYPE_VERSION") + environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) diff --git a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py index 8f21a920be..6f885c578a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_houdini_render_deadline.py @@ -65,9 +65,11 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): job_info.BatchName += datetime.now().strftime("%d%m%Y%H%M%S") # Deadline requires integers in frame range + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] frames = "{start}-{end}x{step}".format( - start=int(instance.data["frameStart"]), - end=int(instance.data["frameEnd"]), + start=int(start), + end=int(end), step=int(instance.data["byFrameStep"]), ) job_info.Frames = frames diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py index fe3275ce2c..bea76718ca 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py +++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py @@ -44,19 +44,25 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): self.log.debug("Project found: {0}".format(project_entity)) + task_object_type = session.query( + "ObjectType where name is 'Task'").one() + task_object_type_id = task_object_type["id"] asset_entity = None if asset_name: # Find asset entity entity_query = ( - 'TypedContext where project_id is "{0}"' - ' and name is "{1}"' - ).format(project_entity["id"], asset_name) + "TypedContext where project_id is '{}'" + " and name is '{}'" + " and object_type_id != '{}'" + ).format( + project_entity["id"], + asset_name, + task_object_type_id + ) self.log.debug("Asset entity query: < {0} >".format(entity_query)) asset_entities = [] for entity in session.query(entity_query).all(): - # Skip tasks - if entity.entity_type.lower() != "task": - asset_entities.append(entity) + asset_entities.append(entity) if len(asset_entities) == 0: raise AssertionError(( @@ -103,10 +109,19 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): context.data["ftrackEntity"] = asset_entity context.data["ftrackTask"] = task_entity - self.per_instance_process(context, asset_entity, task_entity) + self.per_instance_process( + context, + asset_entity, + task_entity, + task_object_type_id + ) def per_instance_process( - self, context, context_asset_entity, context_task_entity + self, + context, + context_asset_entity, + context_task_entity, + task_object_type_id ): context_task_name = None 
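# --- Editor's note (illustrative sketch, not part of the patch) --------------
# The openpype/modules/base.py changes above resolve an addon's client code
# directory differently when AYON dev mode is enabled: a per-addon
# "addonDevelopment" entry in the bundle may point at a local checkout,
# otherwise the packaged "<name>_<version>" folder under the addons directory
# is used. The function below is a standalone paraphrase of that logic; its
# name and the example values are invented for illustration only.
import os


def resolve_addon_dir(addon_name, addon_version, addons_dir, dev_addons_info):
    """Return the directory to load addon client code from, or None."""
    dev_info = dev_addons_info.get(addon_name, {})
    if dev_info.get("enabled"):
        # Dev mode: use the developer's local path when it exists.
        path = dev_info.get("path")
        if path and os.path.exists(path):
            return path
        return None

    # Production/staging: packaged addons live in "<addons dir>/<name>_<version>".
    packaged_dir = os.path.join(
        addons_dir, "{}_{}".format(addon_name, addon_version)
    )
    if os.path.exists(packaged_dir):
        return packaged_dir
    return None


if __name__ == "__main__":
    print(resolve_addon_dir(
        "example_addon",
        "1.0.0",
        "/tmp/ayon/addons",
        {"example_addon": {"enabled": True, "path": "/tmp/dev/example_addon"}},
    ))
# ------------------------------------------------------------------------------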
context_asset_name = None @@ -182,23 +197,27 @@ class CollectFtrackApi(pyblish.api.ContextPlugin): session = context.data["ftrackSession"] project_entity = context.data["ftrackProject"] - asset_names = set() - for asset_name in instance_by_asset_and_task.keys(): - asset_names.add(asset_name) + asset_names = set(instance_by_asset_and_task.keys()) joined_asset_names = ",".join([ "\"{}\"".format(name) for name in asset_names ]) - entities = session.query(( - "TypedContext where project_id is \"{}\" and name in ({})" - ).format(project_entity["id"], joined_asset_names)).all() + entities = session.query( + ( + "TypedContext where project_id is \"{}\" and name in ({})" + " and object_type_id != '{}'" + ).format( + project_entity["id"], + joined_asset_names, + task_object_type_id + ) + ).all() entities_by_name = { entity["name"]: entity for entity in entities } - for asset_name, by_task_data in instance_by_asset_and_task.items(): entity = entities_by_name.get(asset_name) task_entity_by_name = {} diff --git a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py index 047e35e3ac..bdb4b109a1 100644 --- a/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py +++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py @@ -1,5 +1,6 @@ import os import shutil +import filecmp from openpype.client.entities import get_representations from openpype.lib.applications import PreLaunchHook, LaunchTypes @@ -194,3 +195,69 @@ class CopyLastPublishedWorkfile(PreLaunchHook): self.data["last_workfile_path"] = local_workfile_path # Keep source filepath for further path conformation self.data["source_filepath"] = last_published_workfile_path + + # Get resources directory + resources_dir = os.path.join( + os.path.dirname(local_workfile_path), 'resources' + ) + # Make resource directory if it doesn't exist + if not os.path.exists(resources_dir): + os.mkdir(resources_dir) + + # Copy resources to the local resources directory + for file in workfile_representation['files']: + # Get resource main path + resource_main_path = anatomy.fill_root(file["path"]) + + # Get resource file basename + resource_basename = os.path.basename(resource_main_path) + + # Only copy if the resource file exists, and it's not the workfile + if ( + not os.path.exists(resource_main_path) + or resource_basename == os.path.basename( + last_published_workfile_path + ) + ): + continue + + # Get resource path in workfile folder + resource_work_path = os.path.join( + resources_dir, resource_basename + ) + + # Check if the resource file already exists in the resources folder + if os.path.exists(resource_work_path): + # Check if both files are the same + if filecmp.cmp(resource_main_path, resource_work_path): + self.log.warning( + 'Resource "{}" already exists.' + .format(resource_basename) + ) + continue + else: + # Add `.old` to existing resource path + resource_path_old = resource_work_path + '.old' + if os.path.exists(resource_work_path + '.old'): + for i in range(1, 100): + p = resource_path_old + '%02d' % i + if not os.path.exists(p): + # Rename existing resource file to + # `resource_name.old` + 2 digits + shutil.move(resource_work_path, p) + break + else: + self.log.warning( + 'There are a hundred old files for ' + 'resource "{}". 
' + 'Perhaps is it time to clean up your ' + 'resources folder' + .format(resource_basename) + ) + continue + else: + # Rename existing resource file to `resource_name.old` + shutil.move(resource_work_path, resource_path_old) + + # Copy resource file to workfile resources folder + shutil.copy(resource_main_path, resources_dir) diff --git a/openpype/modules/timers_manager/plugins/publish/start_timer.py b/openpype/modules/timers_manager/plugins/publish/start_timer.py index 6408327ca1..19a67292f5 100644 --- a/openpype/modules/timers_manager/plugins/publish/start_timer.py +++ b/openpype/modules/timers_manager/plugins/publish/start_timer.py @@ -6,8 +6,6 @@ Requires: import pyblish.api -from openpype.pipeline import legacy_io - class StartTimer(pyblish.api.ContextPlugin): label = "Start Timer" @@ -25,9 +23,9 @@ class StartTimer(pyblish.api.ContextPlugin): self.log.debug("Publish is not affecting running timers.") return - project_name = legacy_io.active_project() - asset_name = legacy_io.Session.get("AVALON_ASSET") - task_name = legacy_io.Session.get("AVALON_TASK") + project_name = context.data["projectName"] + asset_name = context.data.get("asset") + task_name = context.data.get("task") if not project_name or not asset_name or not task_name: self.log.info(( "Current context does not contain all" diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 2800050496..9f720f6ae9 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -1,4 +1,3 @@ -from copy import deepcopy import re import os import json @@ -7,6 +6,7 @@ import functools import platform import tempfile import warnings +from copy import deepcopy from openpype import PACKAGE_DIR from openpype.settings import get_project_settings @@ -356,7 +356,10 @@ def parse_colorspace_from_filepath( "Must provide `config_path` if `colorspaces` is not provided." 
) - colorspaces = colorspaces or get_ocio_config_colorspaces(config_path) + colorspaces = ( + colorspaces + or get_ocio_config_colorspaces(config_path)["colorspaces"] + ) underscored_colorspaces = { key.replace(" ", "_"): key for key in colorspaces if " " in key @@ -393,7 +396,7 @@ def validate_imageio_colorspace_in_config(config_path, colorspace_name): Returns: bool: True if exists """ - colorspaces = get_ocio_config_colorspaces(config_path) + colorspaces = get_ocio_config_colorspaces(config_path)["colorspaces"] if colorspace_name not in colorspaces: raise KeyError( "Missing colorspace '{}' in config file '{}'".format( @@ -530,6 +533,157 @@ def get_ocio_config_colorspaces(config_path): return CachedData.ocio_config_colorspaces[config_path] +def convert_colorspace_enumerator_item( + colorspace_enum_item, + config_items +): + """Convert colorspace enumerator item to dictionary + + Args: + colorspace_item (str): colorspace and family in couple + config_items (dict[str,dict]): colorspace data + + Returns: + dict: colorspace data + """ + if "::" not in colorspace_enum_item: + return None + + # split string with `::` separator and set first as key and second as value + item_type, item_name = colorspace_enum_item.split("::") + + item_data = None + if item_type == "aliases": + # loop through all colorspaces and find matching alias + for name, _data in config_items.get("colorspaces", {}).items(): + if item_name in _data.get("aliases", []): + item_data = deepcopy(_data) + item_data.update({ + "name": name, + "type": "colorspace" + }) + break + else: + # find matching colorspace item found in labeled_colorspaces + item_data = config_items.get(item_type, {}).get(item_name) + if item_data: + item_data = deepcopy(item_data) + item_data.update({ + "name": item_name, + "type": item_type + }) + + # raise exception if item is not found + if not item_data: + message_config_keys = ", ".join( + "'{}':{}".format( + key, + set(config_items.get(key, {}).keys()) + ) for key in config_items.keys() + ) + raise KeyError( + "Missing colorspace item '{}' in config data: [{}]".format( + colorspace_enum_item, message_config_keys + ) + ) + + return item_data + + +def get_colorspaces_enumerator_items( + config_items, + include_aliases=False, + include_looks=False, + include_roles=False, + include_display_views=False +): + """Get all colorspace data with labels + + Wrapper function for aggregating all names and its families. + Families can be used for building menu and submenus in gui. 
+ + Args: + config_items (dict[str,dict]): colorspace data coming from + `get_ocio_config_colorspaces` function + include_aliases (bool): include aliases in result + include_looks (bool): include looks in result + include_roles (bool): include roles in result + + Returns: + list[tuple[str,str]]: colorspace and family in couple + """ + labeled_colorspaces = [] + aliases = set() + colorspaces = set() + looks = set() + roles = set() + display_views = set() + for items_type, colorspace_items in config_items.items(): + if items_type == "colorspaces": + for color_name, color_data in colorspace_items.items(): + if color_data.get("aliases"): + aliases.update([ + ( + "aliases::{}".format(alias_name), + "[alias] {} ({})".format(alias_name, color_name) + ) + for alias_name in color_data["aliases"] + ]) + colorspaces.add(( + "{}::{}".format(items_type, color_name), + "[colorspace] {}".format(color_name) + )) + + elif items_type == "looks": + looks.update([ + ( + "{}::{}".format(items_type, name), + "[look] {} ({})".format(name, role_data["process_space"]) + ) + for name, role_data in colorspace_items.items() + ]) + + elif items_type == "displays_views": + display_views.update([ + ( + "{}::{}".format(items_type, name), + "[view (display)] {}".format(name) + ) + for name, _ in colorspace_items.items() + ]) + + elif items_type == "roles": + roles.update([ + ( + "{}::{}".format(items_type, name), + "[role] {} ({})".format(name, role_data["colorspace"]) + ) + for name, role_data in colorspace_items.items() + ]) + + if roles and include_roles: + roles = sorted(roles, key=lambda x: x[0]) + labeled_colorspaces.extend(roles) + + # add colorspaces as second so it is not first in menu + colorspaces = sorted(colorspaces, key=lambda x: x[0]) + labeled_colorspaces.extend(colorspaces) + + if aliases and include_aliases: + aliases = sorted(aliases, key=lambda x: x[0]) + labeled_colorspaces.extend(aliases) + + if looks and include_looks: + looks = sorted(looks, key=lambda x: x[0]) + labeled_colorspaces.extend(looks) + + if display_views and include_display_views: + display_views = sorted(display_views, key=lambda x: x[0]) + labeled_colorspaces.extend(display_views) + + return labeled_colorspaces + + # TODO: remove this in future - backward compatibility @deprecated("_get_wrapped_with_subprocess") def get_colorspace_data_subprocess(config_path): diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index f567118062..13630ae7ca 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -25,7 +25,10 @@ from openpype.tests.lib import is_in_tests from .publish.lib import filter_pyblish_plugins from .anatomy import Anatomy -from .template_data import get_template_data_with_names +from .template_data import ( + get_template_data_with_names, + get_template_data +) from .workfile import ( get_workfile_template_key, get_custom_workfile_template_by_string_context, @@ -658,3 +661,70 @@ def get_process_id(): if _process_id is None: _process_id = str(uuid.uuid4()) return _process_id + + +def get_current_context_template_data(): + """Template data for template fill from current context + + Returns: + Dict[str, Any] of the following tokens and their values + Supported Tokens: + - Regular Tokens + - app + - user + - asset + - parent + - hierarchy + - folder[name] + - root[work, ...] 
+ - studio[code, name] + - project[code, name] + - task[type, name, short] + + - Context Specific Tokens + - assetData[frameStart] + - assetData[frameEnd] + - assetData[handleStart] + - assetData[handleEnd] + - assetData[frameStartHandle] + - assetData[frameEndHandle] + - assetData[resolutionHeight] + - assetData[resolutionWidth] + + """ + + # pre-prepare get_template_data args + current_context = get_current_context() + project_name = current_context["project_name"] + asset_name = current_context["asset_name"] + anatomy = Anatomy(project_name) + + # prepare get_template_data args + project_doc = get_project(project_name) + asset_doc = get_asset_by_name(project_name, asset_name) + task_name = current_context["task_name"] + host_name = get_current_host_name() + + # get regular template data + template_data = get_template_data( + project_doc, asset_doc, task_name, host_name + ) + + template_data["root"] = anatomy.roots + + # get context specific vars + asset_data = asset_doc["data"].copy() + + # compute `frameStartHandle` and `frameEndHandle` + if "frameStart" in asset_data and "handleStart" in asset_data: + asset_data["frameStartHandle"] = \ + asset_data["frameStart"] - asset_data["handleStart"] + + if "frameEnd" in asset_data and "handleEnd" in asset_data: + asset_data["frameEndHandle"] = \ + asset_data["frameEnd"] + asset_data["handleEnd"] + + # add assetData + template_data["assetData"] = asset_data + + return template_data diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index f9e3f86652..25f03ddd3b 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -758,7 +758,7 @@ class PublishAttributes: yield name def mark_as_stored(self): - self._origin_data = copy.deepcopy(self._data) + self._origin_data = copy.deepcopy(self.data_to_store()) def data_to_store(self): """Convert attribute values to "data to store".""" @@ -912,6 +912,12 @@ class CreatedInstance: # Create a copy of passed data to avoid changing them on the fly data = copy.deepcopy(data or {}) + + # Pop dictionary values that will be converted to objects to be able + # catch changes + orig_creator_attributes = data.pop("creator_attributes", None) or {} + orig_publish_attributes = data.pop("publish_attributes", None) or {} + # Store original value of passed data self._orig_data = copy.deepcopy(data) @@ -919,10 +925,6 @@ class CreatedInstance: data.pop("family", None) data.pop("subset", None) - # Pop dictionary values that will be converted to objects to be able - # catch changes - orig_creator_attributes = data.pop("creator_attributes", None) or {} - orig_publish_attributes = data.pop("publish_attributes", None) or {} # QUESTION Does it make sense to have data stored as ordered dict? self._data = collections.OrderedDict() @@ -1039,7 +1041,10 @@ class CreatedInstance: @property def origin_data(self): - return copy.deepcopy(self._orig_data) + output = copy.deepcopy(self._orig_data) + output["creator_attributes"] = self.creator_attributes.origin_data + output["publish_attributes"] = self.publish_attributes.origin_data + return output @property def creator_identifier(self): @@ -1095,7 +1100,7 @@ class CreatedInstance: def changes(self): """Calculate and return changes.""" - return TrackChangesItem(self._orig_data, self.data_to_store()) + return TrackChangesItem(self.origin_data, self.data_to_store()) def mark_as_stored(self): """Should be called when instance data are stored. 
@@ -1211,7 +1216,7 @@ class CreatedInstance: publish_attributes = self.publish_attributes.serialize_attributes() return { "data": self.data_to_store(), - "orig_data": copy.deepcopy(self._orig_data), + "orig_data": self.origin_data, "creator_attr_defs": creator_attr_defs, "publish_attributes": publish_attributes, "creator_label": self._creator_label, @@ -1251,7 +1256,7 @@ class CreatedInstance: creator_identifier=creator_identifier, creator_label=creator_label, group_label=group_label, - creator_attributes=creator_attr_defs + creator_attr_defs=creator_attr_defs ) obj._orig_data = serialized_data["orig_data"] obj.publish_attributes.deserialize_attributes(publish_attributes) @@ -2331,6 +2336,10 @@ class CreateContext: identifier, label, exc_info, add_traceback ) ) + else: + for update_data in update_list: + instance = update_data.instance + instance.mark_as_stored() if failed_info: raise CreatorsSaveFailed(failed_info) diff --git a/openpype/pipeline/create/subset_name.py b/openpype/pipeline/create/subset_name.py index 3f0692b46a..00025b19b8 100644 --- a/openpype/pipeline/create/subset_name.py +++ b/openpype/pipeline/create/subset_name.py @@ -14,6 +14,13 @@ class TaskNotSetError(KeyError): super(TaskNotSetError, self).__init__(msg) +class TemplateFillError(Exception): + def __init__(self, msg=None): + if not msg: + msg = "Creator's subset name template is missing key value." + super(TemplateFillError, self).__init__(msg) + + def get_subset_name_template( project_name, family, @@ -112,6 +119,10 @@ def get_subset_name( for project. Settings are queried if not passed. family_filter (Optional[str]): Use different family for subset template filtering. Value of 'family' is used when not passed. + + Raises: + TemplateFillError: If filled template contains placeholder key which is not + collected. """ if not family: @@ -154,4 +165,10 @@ def get_subset_name( for key, value in dynamic_data.items(): fill_pairs[key] = value - return template.format(**prepare_template_data(fill_pairs)) + try: + return template.format(**prepare_template_data(fill_pairs)) + except KeyError as exp: + raise TemplateFillError( + "Value for {} key is missing in template '{}'." + " Available values are {}".format(str(exp), template, fill_pairs) + ) diff --git a/openpype/pipeline/farm/pyblish_functions.py b/openpype/pipeline/farm/pyblish_functions.py index fe3ab97de8..7ef3439dbd 100644 --- a/openpype/pipeline/farm/pyblish_functions.py +++ b/openpype/pipeline/farm/pyblish_functions.py @@ -107,17 +107,18 @@ def get_time_data_from_instance_or_context(instance): TimeData: dataclass holding time information. 
""" + context = instance.context return TimeData( - start=(instance.data.get("frameStart") or - instance.context.data.get("frameStart")), - end=(instance.data.get("frameEnd") or - instance.context.data.get("frameEnd")), - fps=(instance.data.get("fps") or - instance.context.data.get("fps")), - handle_start=(instance.data.get("handleStart") or - instance.context.data.get("handleStart")), # noqa: E501 - handle_end=(instance.data.get("handleEnd") or - instance.context.data.get("handleEnd")) + start=instance.data.get("frameStart", context.data.get("frameStart")), + end=instance.data.get("frameEnd", context.data.get("frameEnd")), + fps=instance.data.get("fps", context.data.get("fps")), + step=instance.data.get("byFrameStep", instance.data.get("step", 1)), + handle_start=instance.data.get( + "handleStart", context.data.get("handleStart") + ), + handle_end=instance.data.get( + "handleEnd", context.data.get("handleEnd") + ) ) diff --git a/openpype/pipeline/load/__init__.py b/openpype/pipeline/load/__init__.py index 7320a9f0e8..ca11b26211 100644 --- a/openpype/pipeline/load/__init__.py +++ b/openpype/pipeline/load/__init__.py @@ -32,6 +32,7 @@ from .utils import ( loaders_from_repre_context, loaders_from_representation, + filter_repre_contexts_by_loader, any_outdated_containers, get_outdated_containers, @@ -85,6 +86,7 @@ __all__ = ( "loaders_from_repre_context", "loaders_from_representation", + "filter_repre_contexts_by_loader", "any_outdated_containers", "get_outdated_containers", diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py index b10d6032b3..c81aeff6bd 100644 --- a/openpype/pipeline/load/utils.py +++ b/openpype/pipeline/load/utils.py @@ -790,6 +790,24 @@ def loaders_from_repre_context(loaders, repre_context): ] +def filter_repre_contexts_by_loader(repre_contexts, loader): + """Filter representation contexts for loader. + + Args: + repre_contexts (list[dict[str, Ant]]): Representation context. + loader (LoaderPlugin): Loader plugin to filter contexts for. + + Returns: + list[dict[str, Any]]: Filtered representation contexts. 
+ """ + + return [ + repre_context + for repre_context in repre_contexts + if is_compatible_loader(loader, repre_context) + ] + + def loaders_from_representation(loaders, representation): """Return all compatible loaders for a representation.""" diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py index 8a26402bd8..764532cadb 100644 --- a/openpype/pipeline/publish/abstract_collect_render.py +++ b/openpype/pipeline/publish/abstract_collect_render.py @@ -11,7 +11,6 @@ import six import pyblish.api -from openpype.pipeline import legacy_io from .publish_plugins import AbstractMetaContextPlugin @@ -31,7 +30,7 @@ class RenderInstance(object): label = attr.ib() # label to show in GUI subset = attr.ib() # subset name task = attr.ib() # task name - asset = attr.ib() # asset name (AVALON_ASSET) + asset = attr.ib() # asset name attachTo = attr.ib() # subset name to attach render to setMembers = attr.ib() # list of nodes/members producing render output publish = attr.ib() # bool, True to publish instance @@ -129,7 +128,6 @@ class AbstractCollectRender(pyblish.api.ContextPlugin): """Constructor.""" super(AbstractCollectRender, self).__init__(*args, **kwargs) self._file_path = None - self._asset = legacy_io.Session["AVALON_ASSET"] self._context = None def process(self, context): diff --git a/openpype/pipeline/publish/constants.py b/openpype/pipeline/publish/constants.py index dcd3445200..92e3fb089f 100644 --- a/openpype/pipeline/publish/constants.py +++ b/openpype/pipeline/publish/constants.py @@ -5,3 +5,7 @@ ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 + +DEFAULT_PUBLISH_TEMPLATE = "publish" +DEFAULT_HERO_PUBLISH_TEMPLATE = "hero" +TRANSIENT_DIR_TEMPLATE = "transient" diff --git a/openpype/pipeline/publish/contants.py b/openpype/pipeline/publish/contants.py deleted file mode 100644 index c5296afe9a..0000000000 --- a/openpype/pipeline/publish/contants.py +++ /dev/null @@ -1,3 +0,0 @@ -DEFAULT_PUBLISH_TEMPLATE = "publish" -DEFAULT_HERO_PUBLISH_TEMPLATE = "hero" -TRANSIENT_DIR_TEMPLATE = "transient" diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py index 4d9443f635..4ea2f932f1 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -25,7 +25,7 @@ from openpype.pipeline import ( ) from openpype.pipeline.plugin_discover import DiscoverResult -from .contants import ( +from .constants import ( DEFAULT_PUBLISH_TEMPLATE, DEFAULT_HERO_PUBLISH_TEMPLATE, TRANSIENT_DIR_TEMPLATE diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py index b2b3679450..63c55d0c19 100644 --- a/openpype/pipeline/thumbnail.py +++ b/openpype/pipeline/thumbnail.py @@ -166,8 +166,12 @@ class ServerThumbnailResolver(ThumbnailResolver): # This is new way how thumbnails can be received from server # - output is 'ThumbnailContent' object - if hasattr(ayon_api, "get_thumbnail_by_id"): - result = ayon_api.get_thumbnail_by_id(thumbnail_id) + # NOTE Use 'get_server_api_connection' because public function + # 'get_thumbnail_by_id' does not return output of 'ServerAPI' + # method. 
+ con = ayon_api.get_server_api_connection() + if hasattr(con, "get_thumbnail_by_id"): + result = con.get_thumbnail_by_id(thumbnail_id) if result.is_valid: filepath = cache.store_thumbnail( project_name, @@ -178,7 +182,7 @@ class ServerThumbnailResolver(ThumbnailResolver): else: # Backwards compatibility for ayon api where 'get_thumbnail_by_id' # is not implemented and output is filepath - filepath = ayon_api.get_thumbnail( + filepath = con.get_thumbnail( project_name, entity_type, entity_id, thumbnail_id ) diff --git a/openpype/plugins/load/push_to_library.py b/openpype/plugins/load/push_to_library.py index dd7291e686..5befc5eb9d 100644 --- a/openpype/plugins/load/push_to_library.py +++ b/openpype/plugins/load/push_to_library.py @@ -1,6 +1,6 @@ import os -from openpype import PACKAGE_DIR +from openpype import PACKAGE_DIR, AYON_SERVER_ENABLED from openpype.lib import get_openpype_execute_args, run_detached_process from openpype.pipeline import load from openpype.pipeline.load import LoadError @@ -32,12 +32,22 @@ class PushToLibraryProject(load.SubsetLoaderPlugin): raise LoadError("Please select only one item") context = tuple(filtered_contexts)[0] - push_tool_script_path = os.path.join( - PACKAGE_DIR, - "tools", - "push_to_project", - "app.py" - ) + + if AYON_SERVER_ENABLED: + push_tool_script_path = os.path.join( + PACKAGE_DIR, + "tools", + "ayon_push_to_project", + "main.py" + ) + else: + push_tool_script_path = os.path.join( + PACKAGE_DIR, + "tools", + "push_to_project", + "app.py" + ) + project_doc = context["project"] version_doc = context["version"] project_name = project_doc["name"] diff --git a/openpype/plugins/publish/collect_resources_path.py b/openpype/plugins/publish/collect_resources_path.py index 57a612c5ae..cfb4d63c1b 100644 --- a/openpype/plugins/publish/collect_resources_path.py +++ b/openpype/plugins/publish/collect_resources_path.py @@ -63,7 +63,8 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): "staticMesh", "skeletalMesh", "xgen", - "yeticacheUE" + "yeticacheUE", + "tycache" ] def process(self, instance): diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index de101ac7ac..0ddbb3f40b 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -17,7 +17,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): """Create jpg thumbnail from sequence using ffmpeg""" label = "Extract Thumbnail" - order = pyblish.api.ExtractorOrder + order = pyblish.api.ExtractorOrder + 0.49 families = [ "imagesequence", "render", "render2d", "prerender", "source", "clip", "take", "online", "image" diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index ce24dad1b5..5bb51a3049 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -107,6 +107,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "rig", "plate", "look", + "ociolook", "audio", "yetiRig", "yeticache", @@ -140,7 +141,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "online", "uasset", "blendScene", - "yeticacheUE" + "yeticacheUE", + "tycache" ] default_template_name = "publish" diff --git a/openpype/plugins/publish/preintegrate_thumbnail_representation.py b/openpype/plugins/publish/preintegrate_thumbnail_representation.py index 1c95b82c97..77bf2edba5 100644 --- a/openpype/plugins/publish/preintegrate_thumbnail_representation.py +++ b/openpype/plugins/publish/preintegrate_thumbnail_representation.py @@ -29,13 +29,12 @@ class 
PreIntegrateThumbnails(pyblish.api.InstancePlugin): if not repres: return - thumbnail_repre = None + thumbnail_repres = [] for repre in repres: - if repre["name"] == "thumbnail": - thumbnail_repre = repre - break + if "thumbnail" in repre.get("tags", []): + thumbnail_repres.append(repre) - if not thumbnail_repre: + if not thumbnail_repres: return family = instance.data["family"] @@ -60,14 +59,15 @@ class PreIntegrateThumbnails(pyblish.api.InstancePlugin): if not found_profile: return - thumbnail_repre.setdefault("tags", []) + for thumbnail_repre in thumbnail_repres: + thumbnail_repre.setdefault("tags", []) - if not found_profile["integrate_thumbnail"]: - if "delete" not in thumbnail_repre["tags"]: - thumbnail_repre["tags"].append("delete") - else: - if "delete" in thumbnail_repre["tags"]: - thumbnail_repre["tags"].remove("delete") + if not found_profile["integrate_thumbnail"]: + if "delete" not in thumbnail_repre["tags"]: + thumbnail_repre["tags"].append("delete") + else: + if "delete" in thumbnail_repre["tags"]: + thumbnail_repre["tags"].remove("delete") - self.log.debug( - "Thumbnail repre tags {}".format(thumbnail_repre["tags"])) + self.log.debug( + "Thumbnail repre tags {}".format(thumbnail_repre["tags"])) diff --git a/openpype/resources/__init__.py b/openpype/resources/__init__.py index b8671f517a..b33d1bf023 100644 --- a/openpype/resources/__init__.py +++ b/openpype/resources/__init__.py @@ -55,6 +55,9 @@ def get_openpype_staging_icon_filepath(): def get_openpype_icon_filepath(staging=None): + if AYON_SERVER_ENABLED and os.getenv("AYON_USE_DEV") == "1": + return get_resource("icons", "AYON_icon_dev.png") + if staging is None: staging = is_running_staging() @@ -68,7 +71,9 @@ def get_openpype_splash_filepath(staging=None): staging = is_running_staging() if AYON_SERVER_ENABLED: - if staging: + if os.getenv("AYON_USE_DEV") == "1": + splash_file_name = "AYON_splash_dev.png" + elif staging: splash_file_name = "AYON_splash_staging.png" else: splash_file_name = "AYON_splash.png" diff --git a/openpype/resources/icons/AYON_icon_dev.png b/openpype/resources/icons/AYON_icon_dev.png new file mode 100644 index 0000000000..3e867ef372 Binary files /dev/null and b/openpype/resources/icons/AYON_icon_dev.png differ diff --git a/openpype/resources/icons/AYON_splash_dev.png b/openpype/resources/icons/AYON_splash_dev.png new file mode 100644 index 0000000000..3b366e755d Binary files /dev/null and b/openpype/resources/icons/AYON_splash_dev.png differ diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py index c362670126..fa231cd047 100644 --- a/openpype/scripts/ocio_wrapper.py +++ b/openpype/scripts/ocio_wrapper.py @@ -106,11 +106,47 @@ def _get_colorspace_data(config_path): config = ocio.Config().CreateFromFile(str(config_path)) - return { - c_.getName(): c_.getFamily() - for c_ in config.getColorSpaces() + colorspace_data = { + "roles": {}, + "colorspaces": { + color.getName(): { + "family": color.getFamily(), + "categories": list(color.getCategories()), + "aliases": list(color.getAliases()), + "equalitygroup": color.getEqualityGroup(), + } + for color in config.getColorSpaces() + }, + "displays_views": { + f"{view} ({display})": { + "display": display, + "view": view + + } + for display in config.getDisplays() + for view in config.getViews(display) + }, + "looks": {} } + # add looks + looks = config.getLooks() + if looks: + colorspace_data["looks"] = { + look.getName(): {"process_space": look.getProcessSpace()} + for look in looks + } + + # add roles + roles = 
config.getRoles() + if roles: + colorspace_data["roles"] = { + role: {"colorspace": colorspace} + for (role, colorspace) in roles + } + + return colorspace_data + @config.command( name="get_views", diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py index d54d71e851..8d4683490b 100644 --- a/openpype/settings/ayon_settings.py +++ b/openpype/settings/ayon_settings.py @@ -290,6 +290,16 @@ def _convert_modules_system( modules_settings[key] = value +def is_dev_mode_enabled(): + """Dev mode is enabled in AYON. + + Returns: + bool: True if dev mode is enabled. + """ + + return os.getenv("AYON_USE_DEV") == "1" + + def convert_system_settings(ayon_settings, default_settings, addon_versions): default_settings = copy.deepcopy(default_settings) output = { @@ -819,9 +829,43 @@ def _convert_nuke_project_settings(ayon_settings, output): # NOTE 'monitorOutLut' is maybe not yet in v3 (ut should be) _convert_host_imageio(ayon_nuke) ayon_imageio = ayon_nuke["imageio"] - for item in ayon_imageio["nodes"]["requiredNodes"]: + + # workfile + imageio_workfile = ayon_imageio["workfile"] + workfile_keys_mapping = ( + ("color_management", "colorManagement"), + ("native_ocio_config", "OCIO_config"), + ("working_space", "workingSpaceLUT"), + ("thumbnail_space", "monitorLut"), + ) + for src, dst in workfile_keys_mapping: + if ( + src in imageio_workfile + and dst not in imageio_workfile + ): + imageio_workfile[dst] = imageio_workfile.pop(src) + + # regex inputs + if "regex_inputs" in ayon_imageio: + ayon_imageio["regexInputs"] = ayon_imageio.pop("regex_inputs") + + # nodes + ayon_imageio_nodes = ayon_imageio["nodes"] + if "required_nodes" in ayon_imageio_nodes: + ayon_imageio_nodes["requiredNodes"] = ( + ayon_imageio_nodes.pop("required_nodes")) + if "override_nodes" in ayon_imageio_nodes: + ayon_imageio_nodes["overrideNodes"] = ( + ayon_imageio_nodes.pop("override_nodes")) + + for item in ayon_imageio_nodes["requiredNodes"]: + if "nuke_node_class" in item: + item["nukeNodeClass"] = item.pop("nuke_node_class") item["knobs"] = _convert_nuke_knobs(item["knobs"]) - for item in ayon_imageio["nodes"]["overrideNodes"]: + + for item in ayon_imageio_nodes["overrideNodes"]: + if "nuke_node_class" in item: + item["nukeNodeClass"] = item.pop("nuke_node_class") item["knobs"] = _convert_nuke_knobs(item["knobs"]) output["nuke"] = ayon_nuke @@ -1164,19 +1208,19 @@ def _convert_global_project_settings(ayon_settings, output, default_settings): for profile in extract_oiio_transcode_profiles: new_outputs = {} name_counter = {} - for output in profile["outputs"]: - if "name" in output: - name = output.pop("name") + for profile_output in profile["outputs"]: + if "name" in profile_output: + name = profile_output.pop("name") else: # Backwards compatibility for setting without 'name' in model - name = output["extension"] + name = profile_output["extension"] if name in new_outputs: name_counter[name] += 1 name = "{}_{}".format(name, name_counter[name]) else: name_counter[name] = 0 - new_outputs[name] = output + new_outputs[name] = profile_output profile["outputs"] = new_outputs # Extract Burnin plugin @@ -1400,15 +1444,39 @@ class _AyonSettingsCache: if _AyonSettingsCache.variant is None: from openpype.lib.openpype_version import is_staging_enabled - _AyonSettingsCache.variant = ( - "staging" if is_staging_enabled() else "production" - ) + variant = "production" + if is_dev_mode_enabled(): + variant = cls._get_dev_mode_settings_variant() + elif is_staging_enabled(): + variant = "staging" + 
_AyonSettingsCache.variant = variant return _AyonSettingsCache.variant @classmethod def _get_bundle_name(cls): return os.environ["AYON_BUNDLE_NAME"] + @classmethod + def _get_dev_mode_settings_variant(cls): + """Develop mode settings variant. + + Returns: + str: Name of settings variant. + """ + + bundles = ayon_api.get_bundles() + user = ayon_api.get_user() + username = user["name"] + for bundle in bundles: + if ( + bundle.get("isDev") + and bundle.get("activeUser") == username + ): + return bundle["name"] + # Return fake variant - distribution logic will tell user that he + # does not have set any dev bundle + return "dev" + @classmethod def get_value_by_project(cls, project_name): cache_item = _AyonSettingsCache.cache_by_project_name[project_name] diff --git a/openpype/settings/defaults/project_anatomy/templates.json b/openpype/settings/defaults/project_anatomy/templates.json index e5e535bf19..5766a09100 100644 --- a/openpype/settings/defaults/project_anatomy/templates.json +++ b/openpype/settings/defaults/project_anatomy/templates.json @@ -53,6 +53,11 @@ "file": "{originalBasename}<.{@frame}><_{udim}>.{ext}", "path": "{@folder}/{@file}" }, + "tycache": { + "folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/publish/{family}/{subset}/{@version}", + "file": "{originalBasename}.{ext}", + "path": "{@folder}/{@file}" + }, "source": { "folder": "{root[work]}/{originalDirname}", "file": "{originalBasename}.{ext}", @@ -66,6 +71,7 @@ "simpleUnrealTextureHero": "Simple Unreal Texture - Hero", "simpleUnrealTexture": "Simple Unreal Texture", "online": "online", + "tycache": "tycache", "source": "source", "transient": "transient" } diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index f3eb31174f..7fb8c333a6 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -89,10 +89,10 @@ "optional": true, "active": false }, - "ExtractABC": { + "ExtractModelABC": { "enabled": true, "optional": true, - "active": false + "active": true }, "ExtractBlendAnimation": { "enabled": true, diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 06a595d1c5..9ccf5cae05 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -546,6 +546,17 @@ "task_types": [], "task_names": [], "template_name": "online" + }, + { + "families": [ + "tycache" + ], + "hosts": [ + "max" + ], + "task_types": [], + "task_names": [], + "template_name": "tycache" } ], "hero_template_name_profiles": [ diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index 5392fc34dd..0344dc7fd2 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -1,4 +1,17 @@ { + "general": { + "add_self_publish_button": false, + "update_houdini_var_context": { + "enabled": true, + "houdini_vars":[ + { + "var": "JOB", + "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}", + "is_directory": true + } + ] + } + }, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ -94,6 +107,9 @@ } }, "publish": { + "CollectRopFrameRange": { + "use_asset_handles": true + }, "ValidateWorkfilePaths": { "enabled": true, "optional": true, diff --git 
a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 300d63985b..7719a5e255 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -829,6 +829,11 @@ "redshift_render_attributes": [], "renderman_render_attributes": [] }, + "ValidateResolution": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateCurrentRenderLayerIsRenderable": { "enabled": true, "optional": false, diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index ad9f46c8ab..1cadedd797 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -341,7 +341,7 @@ "write" ] }, - "ValidateCorrectAssetName": { + "ValidateCorrectAssetContext": { "enabled": true, "optional": true, "active": true @@ -374,7 +374,7 @@ "optional": true, "active": true }, - "ValidateScript": { + "ValidateScriptAttributes": { "enabled": true, "optional": true, "active": true diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index f2fc7d933a..6a0ddb398e 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -12,6 +12,26 @@ "LC_ALL": "C" }, "variants": { + "2024": { + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": { + "MAYA_VERSION": "2024" + } + }, "2023": { "use_python_2": false, "executables": { @@ -51,66 +71,65 @@ "environment": { "MAYA_VERSION": "2022" } - }, - "2020": { - "use_python_2": true, + } + } + }, + "mayapy": { + "enabled": true, + "label": "MayaPy", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": { + "MAYA_DISABLE_CLIC_IPM": "Yes", + "MAYA_DISABLE_CIP": "Yes", + "MAYA_DISABLE_CER": "Yes", + "PYMEL_SKIP_MEL_INIT": "Yes", + "LC_ALL": "C" + }, + "variants": { + "2024": { + "use_python_2": false, "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\mayapy.exe" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2020/bin/maya" + "/usr/autodesk/maya2024/bin/mayapy" ] }, "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": { - "MAYA_VERSION": "2020" - } - }, - "2019": { - "use_python_2": true, - "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" + "-I" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2019/bin/maya" + "-I" ] }, - "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": { - "MAYA_VERSION": "2019" - } + "environment": {} }, - "2018": { - "use_python_2": true, + "2023": { + "use_python_2": false, "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\mayapy.exe" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2018/bin/maya" + "/usr/autodesk/maya2023/bin/mayapy" ] }, "arguments": { - "windows": [], + "windows": [ + "-I" + ], "darwin": [], - "linux": [] + "linux": [ + "-I" + ] }, - "environment": { - "MAYA_VERSION": "2018" - } + "environment": {} } } }, diff --git 
a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json index 7f782e3647..d4d0565ec9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json @@ -5,6 +5,10 @@ "label": "Houdini", "is_file": true, "children": [ + { + "type": "schema", + "name": "schema_houdini_general" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 7f1a8a915b..b84c663e6c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -181,12 +181,12 @@ "name": "template_publish_plugin", "template_data": [ { - "key": "ExtractFBX", - "label": "Extract FBX (model and rig)" + "key": "ExtractModelABC", + "label": "Extract ABC (model)" }, { - "key": "ExtractABC", - "label": "Extract ABC (model and pointcache)" + "key": "ExtractFBX", + "label": "Extract FBX (model and rig)" }, { "key": "ExtractBlendAnimation", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json new file mode 100644 index 0000000000..e118f83d21 --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_general.json @@ -0,0 +1,58 @@ +{ + "type": "dict", + "key": "general", + "label": "General", + "collapsible": true, + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "add_self_publish_button", + "label": "Add Self Publish Button" + }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "update_houdini_var_context", + "label": "Update Houdini Vars on context change", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "Sync vars with context changes.
If a value is treated as a directory on update it will be ensured the folder exists" + }, + { + "type": "list", + "key": "houdini_vars", + "label": "Houdini Vars", + "collapsible": false, + "object_type": { + "type": "dict", + "children": [ + { + "type": "text", + "key": "var", + "label": "Var" + }, + { + "type": "text", + "key": "value", + "label": "Value" + }, + { + "type": "boolean", + "key": "is_directory", + "label": "Treat as directory" + } + ] + } + } + ] + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json index d5f70b0312..7bdb643dbb 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json @@ -4,6 +4,27 @@ "key": "publish", "label": "Publish plugins", "children": [ + { + "type":"label", + "label":"Collectors" + }, + { + "type": "dict", + "collapsible": true, + "key": "CollectRopFrameRange", + "label": "Collect Rop Frame Range", + "children": [ + { + "type": "label", + "label": "Disable this if you want the publisher to ignore start and end handles specified in the asset data for publish instances" + }, + { + "type": "boolean", + "key": "use_asset_handles", + "label": "Use asset handles" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 8a0815c185..d2e7c51e24 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -431,6 +431,10 @@ "type": "schema_template", "name": "template_publish_plugin", "template_data": [ + { + "key": "ValidateResolution", + "label": "Validate Resolution Settings" + }, { "key": "ValidateCurrentRenderLayerIsRenderable", "label": "Validate Current Render Layer Has Renderable Camera" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index fa08e19c63..e0cd086119 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -61,7 +61,7 @@ "name": "template_publish_plugin", "template_data": [ { - "key": "ValidateCorrectAssetName", + "key": "ValidateCorrectAssetContext", "label": "Validate Correct Asset Name" } ] @@ -113,8 +113,8 @@ "label": "Validate Gizmo (Group)" }, { - "key": "ValidateScript", - "label": "Validate script settings" + "key": "ValidateScriptAttributes", + "label": "Validate workfile attributes" } ] }, diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json new file mode 100644 index 0000000000..bbdc7e13b0 --- /dev/null +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_mayapy.json @@ -0,0 +1,39 @@ +{ + "type": "dict", + "key": "mayapy", + "label": "Autodesk MayaPy", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "schema_template", + "name": 
"template_host_unchangables" + }, + { + "key": "environment", + "label": "Environment", + "type": "raw-json" + }, + { + "type": "dict-modifiable", + "key": "variants", + "collapsible_key": true, + "use_label_wrap": false, + "object_type": { + "type": "dict", + "collapsible": true, + "children": [ + { + "type": "schema_template", + "name": "template_host_variant_items" + } + ] + } + } + ] +} diff --git a/openpype/settings/entities/schemas/system_schema/schema_applications.json b/openpype/settings/entities/schemas/system_schema/schema_applications.json index abea37a9ab..7965c344ae 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_applications.json +++ b/openpype/settings/entities/schemas/system_schema/schema_applications.json @@ -9,6 +9,10 @@ "type": "schema", "name": "schema_maya" }, + { + "type": "schema", + "name": "schema_mayapy" + }, { "type": "schema", "name": "schema_3dsmax" diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py index d9c55f4a64..8957f2b19d 100644 --- a/openpype/tools/attribute_defs/widgets.py +++ b/openpype/tools/attribute_defs/widgets.py @@ -251,6 +251,30 @@ class LabelAttrWidget(_BaseAttrDefWidget): self.main_layout.addWidget(input_widget, 0) +class ClickableLineEdit(QtWidgets.QLineEdit): + clicked = QtCore.Signal() + + def __init__(self, text, parent): + super(ClickableLineEdit, self).__init__(parent) + self.setText(text) + self.setReadOnly(True) + + self._mouse_pressed = False + + def mousePressEvent(self, event): + if event.button() == QtCore.Qt.LeftButton: + self._mouse_pressed = True + super(ClickableLineEdit, self).mousePressEvent(event) + + def mouseReleaseEvent(self, event): + if self._mouse_pressed: + self._mouse_pressed = False + if self.rect().contains(event.pos()): + self.clicked.emit() + + super(ClickableLineEdit, self).mouseReleaseEvent(event) + + class NumberAttrWidget(_BaseAttrDefWidget): def _ui_init(self): decimals = self.attr_def.decimals @@ -270,20 +294,38 @@ class NumberAttrWidget(_BaseAttrDefWidget): input_widget.setButtonSymbols( QtWidgets.QAbstractSpinBox.ButtonSymbols.NoButtons ) + input_line_edit = input_widget.lineEdit() + input_widget.installEventFilter(self) + + multisel_widget = ClickableLineEdit("< Multiselection >", self) + multisel_widget.setVisible(False) input_widget.valueChanged.connect(self._on_value_change) + multisel_widget.clicked.connect(self._on_multi_click) self._input_widget = input_widget + self._input_line_edit = input_line_edit + self._multisel_widget = multisel_widget + self._last_multivalue = None + self._multivalue = False self.main_layout.addWidget(input_widget, 0) + self.main_layout.addWidget(multisel_widget, 0) - def _on_value_change(self, new_value): - self.value_changed.emit(new_value, self.attr_def.id) + def eventFilter(self, obj, event): + if ( + self._multivalue + and obj is self._input_widget + and event.type() == QtCore.QEvent.FocusOut + ): + self._set_multiselection_visible(True) + return False def current_value(self): return self._input_widget.value() def set_value(self, value, multivalue=False): + self._last_multivalue = None if multivalue: set_value = set(value) if None in set_value: @@ -291,13 +333,47 @@ class NumberAttrWidget(_BaseAttrDefWidget): set_value.add(self.attr_def.default) if len(set_value) > 1: - self._input_widget.setSpecialValueText("Multiselection") + self._last_multivalue = next(iter(set_value), None) + self._set_multiselection_visible(True) + self._multivalue = True return value = tuple(set_value)[0] + self._multivalue = 
False + self._set_multiselection_visible(False) + if self.current_value != value: self._input_widget.setValue(value) + def _on_value_change(self, new_value): + self._multivalue = False + self.value_changed.emit(new_value, self.attr_def.id) + + def _on_multi_click(self): + self._set_multiselection_visible(False, True) + + def _set_multiselection_visible(self, visible, change_focus=False): + self._input_widget.setVisible(not visible) + self._multisel_widget.setVisible(visible) + if visible: + return + + # Change value once user clicked on the input field + if self._last_multivalue is None: + value = self.attr_def.default + else: + value = self._last_multivalue + self._input_widget.blockSignals(True) + self._input_widget.setValue(value) + self._input_widget.blockSignals(False) + if not change_focus: + return + # Change focus to input field and move cursor to the end + self._input_widget.setFocus(QtCore.Qt.MouseFocusReason) + self._input_line_edit.setCursorPosition( + len(self._input_line_edit.text()) + ) + class TextAttrWidget(_BaseAttrDefWidget): def _ui_init(self): diff --git a/openpype/tools/ayon_launcher/ui/hierarchy_page.py b/openpype/tools/ayon_launcher/ui/hierarchy_page.py index 8c546b38ac..d56d43fdec 100644 --- a/openpype/tools/ayon_launcher/ui/hierarchy_page.py +++ b/openpype/tools/ayon_launcher/ui/hierarchy_page.py @@ -103,4 +103,4 @@ class HierarchyPage(QtWidgets.QWidget): self._controller.refresh() def _on_filter_text_changed(self, text): - self._folders_widget.set_name_filer(text) + self._folders_widget.set_name_filter(text) diff --git a/openpype/tools/ayon_loader/__init__.py b/openpype/tools/ayon_loader/__init__.py new file mode 100644 index 0000000000..09ecf65f3a --- /dev/null +++ b/openpype/tools/ayon_loader/__init__.py @@ -0,0 +1,6 @@ +from .control import LoaderController + + +__all__ = ( + "LoaderController", +) diff --git a/openpype/tools/ayon_loader/abstract.py b/openpype/tools/ayon_loader/abstract.py new file mode 100644 index 0000000000..45042395d9 --- /dev/null +++ b/openpype/tools/ayon_loader/abstract.py @@ -0,0 +1,851 @@ +from abc import ABCMeta, abstractmethod +import six + +from openpype.lib.attribute_definitions import ( + AbstractAttrDef, + serialize_attr_defs, + deserialize_attr_defs, +) + + +class ProductTypeItem: + """Item representing product type. + + Args: + name (str): Product type name. + icon (dict[str, Any]): Product type icon definition. + checked (bool): Is product type checked for filtering. + """ + + def __init__(self, name, icon, checked): + self.name = name + self.icon = icon + self.checked = checked + + def to_data(self): + return { + "name": self.name, + "icon": self.icon, + "checked": self.checked, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class ProductItem: + """Product item with it versions. + + Args: + product_id (str): Product id. + product_type (str): Product type. + product_name (str): Product name. + product_icon (dict[str, Any]): Product icon definition. + product_type_icon (dict[str, Any]): Product type icon definition. + product_in_scene (bool): Is product in scene (only when used in DCC). + group_name (str): Group name. + folder_id (str): Folder id. + folder_label (str): Folder label. + version_items (dict[str, VersionItem]): Version items by id. 
+ """ + + def __init__( + self, + product_id, + product_type, + product_name, + product_icon, + product_type_icon, + product_in_scene, + group_name, + folder_id, + folder_label, + version_items, + ): + self.product_id = product_id + self.product_type = product_type + self.product_name = product_name + self.product_icon = product_icon + self.product_type_icon = product_type_icon + self.product_in_scene = product_in_scene + self.group_name = group_name + self.folder_id = folder_id + self.folder_label = folder_label + self.version_items = version_items + + def to_data(self): + return { + "product_id": self.product_id, + "product_type": self.product_type, + "product_name": self.product_name, + "product_icon": self.product_icon, + "product_type_icon": self.product_type_icon, + "product_in_scene": self.product_in_scene, + "group_name": self.group_name, + "folder_id": self.folder_id, + "folder_label": self.folder_label, + "version_items": { + version_id: version_item.to_data() + for version_id, version_item in self.version_items.items() + }, + } + + @classmethod + def from_data(cls, data): + version_items = { + version_id: VersionItem.from_data(version) + for version_id, version in data["version_items"].items() + } + data["version_items"] = version_items + return cls(**data) + + +class VersionItem: + """Version item. + + Object have implemented comparison operators to be sortable. + + Args: + version_id (str): Version id. + version (int): Version. Can be negative when is hero version. + is_hero (bool): Is hero version. + product_id (str): Product id. + thumbnail_id (Union[str, None]): Thumbnail id. + published_time (Union[str, None]): Published time in format + '%Y%m%dT%H%M%SZ'. + author (Union[str, None]): Author. + frame_range (Union[str, None]): Frame range. + duration (Union[int, None]): Duration. + handles (Union[str, None]): Handles. + step (Union[int, None]): Step. + comment (Union[str, None]): Comment. + source (Union[str, None]): Source. 
+ """ + + def __init__( + self, + version_id, + version, + is_hero, + product_id, + thumbnail_id, + published_time, + author, + frame_range, + duration, + handles, + step, + comment, + source + ): + self.version_id = version_id + self.product_id = product_id + self.thumbnail_id = thumbnail_id + self.version = version + self.is_hero = is_hero + self.published_time = published_time + self.author = author + self.frame_range = frame_range + self.duration = duration + self.handles = handles + self.step = step + self.comment = comment + self.source = source + + def __eq__(self, other): + if not isinstance(other, VersionItem): + return False + return ( + self.is_hero == other.is_hero + and self.version == other.version + and self.version_id == other.version_id + and self.product_id == other.product_id + ) + + def __ne__(self, other): + return not self.__eq__(other) + + def __gt__(self, other): + if not isinstance(other, VersionItem): + return False + if ( + other.version == self.version + and self.is_hero + ): + return True + return other.version < self.version + + def to_data(self): + return { + "version_id": self.version_id, + "product_id": self.product_id, + "thumbnail_id": self.thumbnail_id, + "version": self.version, + "is_hero": self.is_hero, + "published_time": self.published_time, + "author": self.author, + "frame_range": self.frame_range, + "duration": self.duration, + "handles": self.handles, + "step": self.step, + "comment": self.comment, + "source": self.source, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class RepreItem: + """Representation item. + + Args: + representation_id (str): Representation id. + representation_name (str): Representation name. + representation_icon (dict[str, Any]): Representation icon definition. + product_name (str): Product name. + folder_label (str): Folder label. + """ + + def __init__( + self, + representation_id, + representation_name, + representation_icon, + product_name, + folder_label, + ): + self.representation_id = representation_id + self.representation_name = representation_name + self.representation_icon = representation_icon + self.product_name = product_name + self.folder_label = folder_label + + def to_data(self): + return { + "representation_id": self.representation_id, + "representation_name": self.representation_name, + "representation_icon": self.representation_icon, + "product_name": self.product_name, + "folder_label": self.folder_label, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class ActionItem: + """Action item that can be triggered. + + Action item is defined for a specific context. To trigger the action + use 'identifier' and context, it necessary also use 'options'. + + Args: + identifier (str): Action identifier. + label (str): Action label. + icon (dict[str, Any]): Action icon definition. + tooltip (str): Action tooltip. + options (Union[list[AbstractAttrDef], list[qargparse.QArgument]]): + Action options. Note: 'qargparse' is considered as deprecated. + order (int): Action order. + project_name (str): Project name. + folder_ids (list[str]): Folder ids. + product_ids (list[str]): Product ids. + version_ids (list[str]): Version ids. + representation_ids (list[str]): Representation ids. 
+ """ + + def __init__( + self, + identifier, + label, + icon, + tooltip, + options, + order, + project_name, + folder_ids, + product_ids, + version_ids, + representation_ids, + ): + self.identifier = identifier + self.label = label + self.icon = icon + self.tooltip = tooltip + self.options = options + self.order = order + self.project_name = project_name + self.folder_ids = folder_ids + self.product_ids = product_ids + self.version_ids = version_ids + self.representation_ids = representation_ids + + def _options_to_data(self): + options = self.options + if not options: + return options + if isinstance(options[0], AbstractAttrDef): + return serialize_attr_defs(options) + # NOTE: Data conversion is not used by default in loader tool. But for + # future development of detached UI tools it would be better to be + # prepared for it. + raise NotImplementedError( + "{}.to_data is not implemented. Use Attribute definitions" + " from 'openpype.lib' instead of 'qargparse'.".format( + self.__class__.__name__ + ) + ) + + def to_data(self): + options = self._options_to_data() + return { + "identifier": self.identifier, + "label": self.label, + "icon": self.icon, + "tooltip": self.tooltip, + "options": options, + "order": self.order, + "project_name": self.project_name, + "folder_ids": self.folder_ids, + "product_ids": self.product_ids, + "version_ids": self.version_ids, + "representation_ids": self.representation_ids, + } + + @classmethod + def from_data(cls, data): + options = data["options"] + if options: + options = deserialize_attr_defs(options) + data["options"] = options + return cls(**data) + + +@six.add_metaclass(ABCMeta) +class _BaseLoaderController(object): + """Base loader controller abstraction. + + Abstract base class that is required for both frontend and backed. + """ + + @abstractmethod + def get_current_context(self): + """Current context is a context of the current scene. + + Example output: + { + "project_name": "MyProject", + "folder_id": "0011223344-5566778-99", + "task_name": "Compositing", + } + + Returns: + dict[str, Union[str, None]]: Context data. + """ + + pass + + @abstractmethod + def reset(self): + """Reset all cached data to reload everything. + + Triggers events "controller.reset.started" and + "controller.reset.finished". + """ + + pass + + # Model wrappers + @abstractmethod + def get_folder_items(self, project_name, sender=None): + """Folder items for a project. + + Args: + project_name (str): Project name. + sender (Optional[str]): Sender who requested the name. + + Returns: + list[FolderItem]: Folder items for the project. + """ + + pass + + # Expected selection helpers + @abstractmethod + def get_expected_selection_data(self): + """Full expected selection information. + + Expected selection is a selection that may not be yet selected in UI + e.g. because of refreshing, this data tell the UI what should be + selected when they finish their refresh. + + Returns: + dict[str, Any]: Expected selection data. + """ + + pass + + @abstractmethod + def set_expected_selection(self, project_name, folder_id): + """Set expected selection. + + Args: + project_name (str): Name of project to be selected. + folder_id (str): Id of folder to be selected. + """ + + pass + + +class BackendLoaderController(_BaseLoaderController): + """Backend loader controller abstraction. + + What backend logic requires from a controller for proper logic. + """ + + @abstractmethod + def emit_event(self, topic, data=None, source=None): + """Emit event with a certain topic, data and source. 
+ + The event should be sent to both frontend and backend. + + Args: + topic (str): Event topic name. + data (Optional[dict[str, Any]]): Event data. + source (Optional[str]): Event source. + """ + + pass + + @abstractmethod + def get_loaded_product_ids(self): + """Return set of loaded product ids. + + Returns: + set[str]: Set of loaded product ids. + """ + + pass + + +class FrontendLoaderController(_BaseLoaderController): + @abstractmethod + def register_event_callback(self, topic, callback): + """Register callback for an event topic. + + Args: + topic (str): Event topic name. + callback (func): Callback triggered when the event is emitted. + """ + + pass + + # Expected selection helpers + @abstractmethod + def expected_project_selected(self, project_name): + """Expected project was selected in frontend. + + Args: + project_name (str): Project name. + """ + + pass + + @abstractmethod + def expected_folder_selected(self, folder_id): + """Expected folder was selected in frontend. + + Args: + folder_id (str): Folder id. + """ + + pass + + # Model wrapper calls + @abstractmethod + def get_project_items(self, sender=None): + """Items for all projects available on server. + + Triggers event topics "projects.refresh.started" and + "projects.refresh.finished" with data: + { + "sender": sender + } + + Notes: + Filtering of projects is done in UI. + + Args: + sender (Optional[str]): Sender who requested the items. + + Returns: + list[ProjectItem]: List of project items. + """ + + pass + + @abstractmethod + def get_product_items(self, project_name, folder_ids, sender=None): + """Product items for folder ids. + + Triggers event topics "products.refresh.started" and + "products.refresh.finished" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender + } + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + sender (Optional[str]): Sender who requested the items. + + Returns: + list[ProductItem]: List of product items. + """ + + pass + + @abstractmethod + def get_product_item(self, project_name, product_id): + """Receive single product item. + + Args: + project_name (str): Project name. + product_id (str): Product id. + + Returns: + Union[ProductItem, None]: Product info or None if not found. + """ + + pass + + @abstractmethod + def get_product_type_items(self, project_name): + """Product type items for a project. + + Product types have defined if are checked for filtering or not. + + Returns: + list[ProductTypeItem]: List of product type items for a project. + """ + + pass + + @abstractmethod + def get_representation_items( + self, project_name, version_ids, sender=None + ): + """Representation items for version ids. + + Triggers event topics "model.representations.refresh.started" and + "model.representations.refresh.finished" with data: + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender + } + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + sender (Optional[str]): Sender who requested the items. + + Returns: + list[RepreItem]: List of representation items. + """ + + pass + + @abstractmethod + def get_version_thumbnail_ids(self, project_name, version_ids): + """Get thumbnail ids for version ids. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + + Returns: + dict[str, Union[str, Any]]: Thumbnail id by version id. 
+ """ + + pass + + @abstractmethod + def get_folder_thumbnail_ids(self, project_name, folder_ids): + """Get thumbnail ids for folder ids. + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Union[str, Any]]: Thumbnail id by folder id. + """ + + pass + + @abstractmethod + def get_thumbnail_path(self, project_name, thumbnail_id): + """Get thumbnail path for thumbnail id. + + This method should get a path to a thumbnail based on thumbnail id. + Which probably means to download the thumbnail from server and store + it locally. + + Args: + project_name (str): Project name. + thumbnail_id (str): Thumbnail id. + + Returns: + Union[str, None]: Thumbnail path or None if not found. + """ + + pass + + # Selection model wrapper calls + @abstractmethod + def get_selected_project_name(self): + """Get selected project name. + + The information is based on last selection from UI. + + Returns: + Union[str, None]: Selected project name. + """ + + pass + + @abstractmethod + def get_selected_folder_ids(self): + """Get selected folder ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected folder ids. + """ + + pass + + @abstractmethod + def get_selected_version_ids(self): + """Get selected version ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected version ids. + """ + + pass + + @abstractmethod + def get_selected_representation_ids(self): + """Get selected representation ids. + + The information is based on last selection from UI. + + Returns: + list[str]: Selected representation ids. + """ + + pass + + @abstractmethod + def set_selected_project(self, project_name): + """Set selected project. + + Project selection changed in UI. Method triggers event with topic + "selection.project.changed" with data: + { + "project_name": self._project_name + } + + Args: + project_name (Union[str, None]): Selected project name. + """ + + pass + + @abstractmethod + def set_selected_folders(self, folder_ids): + """Set selected folders. + + Folder selection changed in UI. Method triggers event with topic + "selection.folders.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids + } + + Args: + folder_ids (Iterable[str]): Selected folder ids. + """ + + pass + + @abstractmethod + def set_selected_versions(self, version_ids): + """Set selected versions. + + Version selection changed in UI. Method triggers event with topic + "selection.versions.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "version_ids": version_ids + } + + Args: + version_ids (Iterable[str]): Selected version ids. + """ + + pass + + @abstractmethod + def set_selected_representations(self, repre_ids): + """Set selected representations. + + Representation selection changed in UI. Method triggers event with + topic "selection.representations.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "version_ids": version_ids, + "representation_ids": representation_ids + } + + Args: + repre_ids (Iterable[str]): Selected representation ids. + """ + + pass + + # Load action items + @abstractmethod + def get_versions_action_items(self, project_name, version_ids): + """Action items for versions selection. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + + Returns: + list[ActionItem]: List of action items. 
+ """ + + pass + + @abstractmethod + def get_representations_action_items( + self, project_name, representation_ids + ): + """Action items for representations selection. + + Args: + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + + Returns: + list[ActionItem]: List of action items. + """ + + pass + + @abstractmethod + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + """Trigger action item. + + Triggers event "load.started" with data: + { + "identifier": identifier, + "id": , + } + + And triggers "load.finished" with data: + { + "identifier": identifier, + "id": , + "error_info": [...], + } + + Args: + identifier (str): Action identifier. + options (dict[str, Any]): Action option values from UI. + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + representation_ids (Iterable[str]): Representation ids. + """ + + pass + + @abstractmethod + def change_products_group(self, project_name, product_ids, group_name): + """Change group of products. + + Triggers event "products.group.changed" with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name, + } + + Args: + project_name (str): Project name. + product_ids (Iterable[str]): Product ids. + group_name (str): New group name. + """ + + pass + + @abstractmethod + def fill_root_in_source(self, source): + """Fill root in source path. + + Args: + source (Union[str, None]): Source of a published version. Usually + rootless workfile path. + """ + + pass + + # NOTE: Methods 'is_loaded_products_supported' and + # 'is_standard_projects_filter_enabled' are both based on being in host + # or not. Maybe we could implement only single method 'is_in_host'? + @abstractmethod + def is_loaded_products_supported(self): + """Is capable to get information about loaded products. + + Returns: + bool: True if it is supported. + """ + + pass + + @abstractmethod + def is_standard_projects_filter_enabled(self): + """Is standard projects filter enabled. + + This is used for filtering out when loader tool is used in a host. In + that case only current project and library projects should be shown. + + Returns: + bool: Frontend should filter out non-library projects, except + current context project. 
+ """ + + pass diff --git a/openpype/tools/ayon_loader/control.py b/openpype/tools/ayon_loader/control.py new file mode 100644 index 0000000000..2b779f5c2e --- /dev/null +++ b/openpype/tools/ayon_loader/control.py @@ -0,0 +1,343 @@ +import logging + +import ayon_api + +from openpype.lib.events import QueuedEventSystem +from openpype.pipeline import Anatomy, get_current_context +from openpype.host import ILoadHost +from openpype.tools.ayon_utils.models import ( + ProjectsModel, + HierarchyModel, + NestedCacheItem, + CacheItem, + ThumbnailsModel, +) + +from .abstract import BackendLoaderController, FrontendLoaderController +from .models import SelectionModel, ProductsModel, LoaderActionsModel + + +class ExpectedSelection: + def __init__(self, controller): + self._project_name = None + self._folder_id = None + + self._project_selected = True + self._folder_selected = True + + self._controller = controller + + def _emit_change(self): + self._controller.emit_event( + "expected_selection_changed", + self.get_expected_selection_data(), + ) + + def set_expected_selection(self, project_name, folder_id): + self._project_name = project_name + self._folder_id = folder_id + + self._project_selected = False + self._folder_selected = False + self._emit_change() + + def get_expected_selection_data(self): + project_current = False + folder_current = False + if not self._project_selected: + project_current = True + elif not self._folder_selected: + folder_current = True + return { + "project": { + "name": self._project_name, + "current": project_current, + "selected": self._project_selected, + }, + "folder": { + "id": self._folder_id, + "current": folder_current, + "selected": self._folder_selected, + }, + } + + def is_expected_project_selected(self, project_name): + return project_name == self._project_name and self._project_selected + + def is_expected_folder_selected(self, folder_id): + return folder_id == self._folder_id and self._folder_selected + + def expected_project_selected(self, project_name): + if project_name != self._project_name: + return False + self._project_selected = True + self._emit_change() + return True + + def expected_folder_selected(self, folder_id): + if folder_id != self._folder_id: + return False + self._folder_selected = True + self._emit_change() + return True + + +class LoaderController(BackendLoaderController, FrontendLoaderController): + """ + + Args: + host (Optional[AbstractHost]): Host object. Defaults to None. 
+ """ + + def __init__(self, host=None): + self._log = None + self._host = host + + self._event_system = self._create_event_system() + + self._project_anatomy_cache = NestedCacheItem( + levels=1, lifetime=60) + self._loaded_products_cache = CacheItem( + default_factory=set, lifetime=60) + + self._selection_model = SelectionModel(self) + self._expected_selection = ExpectedSelection(self) + self._projects_model = ProjectsModel(self) + self._hierarchy_model = HierarchyModel(self) + self._products_model = ProductsModel(self) + self._loader_actions_model = LoaderActionsModel(self) + self._thumbnails_model = ThumbnailsModel() + + @property + def log(self): + if self._log is None: + self._log = logging.getLogger(self.__class__.__name__) + return self._log + + # --------------------------------- + # Implementation of abstract methods + # --------------------------------- + # Events system + def emit_event(self, topic, data=None, source=None): + """Use implemented event system to trigger event.""" + + if data is None: + data = {} + self._event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self._event_system.add_callback(topic, callback) + + def reset(self): + self._emit_event("controller.reset.started") + + project_name = self.get_selected_project_name() + folder_ids = self.get_selected_folder_ids() + + self._project_anatomy_cache.reset() + self._loaded_products_cache.reset() + + self._products_model.reset() + self._hierarchy_model.reset() + self._loader_actions_model.reset() + self._projects_model.reset() + self._thumbnails_model.reset() + + self._projects_model.refresh() + + if not project_name and not folder_ids: + context = self.get_current_context() + project_name = context["project_name"] + folder_id = context["folder_id"] + self.set_expected_selection(project_name, folder_id) + + self._emit_event("controller.reset.finished") + + # Expected selection helpers + def get_expected_selection_data(self): + return self._expected_selection.get_expected_selection_data() + + def set_expected_selection(self, project_name, folder_id): + self._expected_selection.set_expected_selection( + project_name, folder_id + ) + + def expected_project_selected(self, project_name): + self._expected_selection.expected_project_selected(project_name) + + def expected_folder_selected(self, folder_id): + self._expected_selection.expected_folder_selected(folder_id) + + # Entity model wrappers + def get_project_items(self, sender=None): + return self._projects_model.get_project_items(sender) + + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_product_items(self, project_name, folder_ids, sender=None): + return self._products_model.get_product_items( + project_name, folder_ids, sender) + + def get_product_item(self, project_name, product_id): + return self._products_model.get_product_item( + project_name, product_id + ) + + def get_product_type_items(self, project_name): + return self._products_model.get_product_type_items(project_name) + + def get_representation_items( + self, project_name, version_ids, sender=None + ): + return self._products_model.get_repre_items( + project_name, version_ids, sender + ) + + def get_folder_thumbnail_ids(self, project_name, folder_ids): + return self._thumbnails_model.get_folder_thumbnail_ids( + project_name, folder_ids) + + def get_version_thumbnail_ids(self, project_name, version_ids): + return self._thumbnails_model.get_version_thumbnail_ids( + 
project_name, version_ids) + + def get_thumbnail_path(self, project_name, thumbnail_id): + return self._thumbnails_model.get_thumbnail_path( + project_name, thumbnail_id + ) + + def change_products_group(self, project_name, product_ids, group_name): + self._products_model.change_products_group( + project_name, product_ids, group_name + ) + + def get_versions_action_items(self, project_name, version_ids): + return self._loader_actions_model.get_versions_action_items( + project_name, version_ids) + + def get_representations_action_items( + self, project_name, representation_ids): + return self._loader_actions_model.get_representations_action_items( + project_name, representation_ids) + + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + self._loader_actions_model.trigger_action_item( + identifier, + options, + project_name, + version_ids, + representation_ids + ) + + # Selection model wrappers + def get_selected_project_name(self): + return self._selection_model.get_selected_project_name() + + def set_selected_project(self, project_name): + self._selection_model.set_selected_project(project_name) + + # Selection model wrappers + def get_selected_folder_ids(self): + return self._selection_model.get_selected_folder_ids() + + def set_selected_folders(self, folder_ids): + self._selection_model.set_selected_folders(folder_ids) + + def get_selected_version_ids(self): + return self._selection_model.get_selected_version_ids() + + def set_selected_versions(self, version_ids): + self._selection_model.set_selected_versions(version_ids) + + def get_selected_representation_ids(self): + return self._selection_model.get_selected_representation_ids() + + def set_selected_representations(self, repre_ids): + self._selection_model.set_selected_representations(repre_ids) + + def fill_root_in_source(self, source): + project_name = self.get_selected_project_name() + anatomy = self._get_project_anatomy(project_name) + if anatomy is None: + return source + + try: + return anatomy.fill_root(source) + except Exception: + return source + + def get_current_context(self): + if self._host is None: + return { + "project_name": None, + "folder_id": None, + "task_name": None, + } + if hasattr(self._host, "get_current_context"): + context = self._host.get_current_context() + else: + context = get_current_context() + folder_id = None + project_name = context.get("project_name") + asset_name = context.get("asset_name") + if project_name and asset_name: + folder = ayon_api.get_folder_by_name( + project_name, asset_name, fields=["id"] + ) + if folder: + folder_id = folder["id"] + return { + "project_name": project_name, + "folder_id": folder_id, + "task_name": context.get("task_name"), + } + + def get_loaded_product_ids(self): + if self._host is None: + return set() + + context = self.get_current_context() + project_name = context["project_name"] + if not project_name: + return set() + + if not self._loaded_products_cache.is_valid: + if isinstance(self._host, ILoadHost): + containers = self._host.get_containers() + else: + containers = self._host.ls() + repre_ids = {c.get("representation") for c in containers} + repre_ids.discard(None) + product_ids = self._products_model.get_product_ids_by_repre_ids( + project_name, repre_ids + ) + self._loaded_products_cache.update_data(product_ids) + return self._loaded_products_cache.get_data() + + def is_loaded_products_supported(self): + return self._host is not None + + def is_standard_projects_filter_enabled(self): + 
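+ # The standard projects filter only applies when the tool runs inside a host.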
return self._host is not None + + def _get_project_anatomy(self, project_name): + if not project_name: + return None + cache = self._project_anatomy_cache[project_name] + if not cache.is_valid: + cache.update_data(Anatomy(project_name)) + return cache.get_data() + + def _create_event_system(self): + return QueuedEventSystem() + + def _emit_event(self, topic, data=None): + self._event_system.emit(topic, data or {}, "controller") diff --git a/openpype/tools/ayon_loader/models/__init__.py b/openpype/tools/ayon_loader/models/__init__.py new file mode 100644 index 0000000000..6adfe71d86 --- /dev/null +++ b/openpype/tools/ayon_loader/models/__init__.py @@ -0,0 +1,10 @@ +from .selection import SelectionModel +from .products import ProductsModel +from .actions import LoaderActionsModel + + +__all__ = ( + "SelectionModel", + "ProductsModel", + "LoaderActionsModel", +) diff --git a/openpype/tools/ayon_loader/models/actions.py b/openpype/tools/ayon_loader/models/actions.py new file mode 100644 index 0000000000..177335a933 --- /dev/null +++ b/openpype/tools/ayon_loader/models/actions.py @@ -0,0 +1,871 @@ +import sys +import traceback +import inspect +import copy +import collections +import uuid + +from openpype.client import ( + get_project, + get_assets, + get_subsets, + get_versions, + get_representations, +) +from openpype.pipeline.load import ( + discover_loader_plugins, + SubsetLoaderPlugin, + filter_repre_contexts_by_loader, + get_loader_identifier, + load_with_repre_context, + load_with_subset_context, + load_with_subset_contexts, + LoadError, + IncompatibleLoaderError, +) +from openpype.tools.ayon_utils.models import NestedCacheItem +from openpype.tools.ayon_loader.abstract import ActionItem + +ACTIONS_MODEL_SENDER = "actions.model" +NOT_SET = object() + + +class LoaderActionsModel: + """Model for loader actions. + + This is probably the only part of the models that requires the codebase + from 'openpype.client', because of backwards compatibility with loader + logic which expects mongo documents. + + TODOs: + Deprecate 'qargparse' usage in loaders and implement conversion + of 'ActionItem' to data (and 'from_data'). + Use controller to get entities (documents) -> possible only when + loaders are able to handle AYON vs. OpenPype logic. + Add missing site sync logic, and if possible remove it from loaders. + Implement loader actions to replace load plugins. + Ask loader actions to return action items instead of guessing them. + """ + + # Cache loader plugins for some time + # NOTE Set to '0' for development + loaders_cache_lifetime = 30 + + def __init__(self, controller): + self._controller = controller + self._current_context_project = NOT_SET + self._loaders_by_identifier = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + self._product_loaders = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + self._repre_loaders = NestedCacheItem( + levels=1, lifetime=self.loaders_cache_lifetime) + + def reset(self): + """Reset the model and all cached items.""" + + self._current_context_project = NOT_SET + self._loaders_by_identifier.reset() + self._product_loaders.reset() + self._repre_loaders.reset() + + def get_versions_action_items(self, project_name, version_ids): + """Get action items for given version ids. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + + Returns: + list[ActionItem]: List of action items.
+ """ + + ( + version_context_by_id, + repre_context_by_id + ) = self._contexts_for_versions( + project_name, + version_ids + ) + return self._get_action_items_for_contexts( + project_name, + version_context_by_id, + repre_context_by_id + ) + + def get_representations_action_items( + self, project_name, representation_ids + ): + """Get action items for given representation ids. + + Args: + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + + Returns: + list[ActionItem]: List of action items. + """ + + ( + product_context_by_id, + repre_context_by_id + ) = self._contexts_for_representations( + project_name, + representation_ids + ) + return self._get_action_items_for_contexts( + project_name, + product_context_by_id, + repre_context_by_id + ) + + def trigger_action_item( + self, + identifier, + options, + project_name, + version_ids, + representation_ids + ): + """Trigger action by identifier. + + Triggers the action by identifier for given contexts. + + Triggers events "load.started" and "load.finished". Finished event + also contains "error_info" key with error information if any + happened. + + Args: + identifier (str): Loader identifier. + options (dict[str, Any]): Loader option values. + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + representation_ids (Iterable[str]): Representation ids. + """ + + event_data = { + "identifier": identifier, + "id": uuid.uuid4().hex, + } + self._controller.emit_event( + "load.started", + event_data, + ACTIONS_MODEL_SENDER, + ) + loader = self._get_loader_by_identifier(project_name, identifier) + if representation_ids is not None: + error_info = self._trigger_representation_loader( + loader, + options, + project_name, + representation_ids, + ) + elif version_ids is not None: + error_info = self._trigger_version_loader( + loader, + options, + project_name, + version_ids, + ) + else: + raise NotImplementedError( + "Invalid arguments to trigger action item") + + event_data["error_info"] = error_info + self._controller.emit_event( + "load.finished", + event_data, + ACTIONS_MODEL_SENDER, + ) + + def _get_current_context_project(self): + """Get current context project name. + + The value is based on controller (host) and cached. + + Returns: + Union[str, None]: Current context project. + """ + + if self._current_context_project is NOT_SET: + context = self._controller.get_current_context() + self._current_context_project = context["project_name"] + return self._current_context_project + + def _get_action_label(self, loader, representation=None): + """Pull label info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + representation (Optional[dict[str, Any]]): Representation data. + + Returns: + str: Action label. + """ + + label = getattr(loader, "label", None) + if label is None: + label = loader.__name__ + if representation: + # Add the representation as suffix + label = "{} ({})".format(label, representation["name"]) + return label + + def _get_action_icon(self, loader): + """Pull icon info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + + Returns: + Union[dict[str, Any], None]: Icon definition based on + loader plugin. + """ + + # Support font-awesome icons using the `.icon` and `.color` + # attributes on plug-ins. 
+ icon = getattr(loader, "icon", None) + if icon is not None and not isinstance(icon, dict): + icon = { + "type": "awesome-font", + "name": icon, + "color": getattr(loader, "color", None) or "white" + } + return icon + + def _get_action_tooltip(self, loader): + """Pull tooltip info from loader class. + + Args: + loader (LoaderPlugin): Plugin class. + + Returns: + str: Action tooltip. + """ + + # Add tooltip and statustip from Loader docstring + return inspect.getdoc(loader) + + def _filter_loaders_by_tool_name(self, project_name, loaders): + """Filter loaders by tool name. + + Tool names are based on OpenPype tools loader tool and library + loader tool. The new tool merged both into one tool and the difference + is based only on current project name. + + Args: + project_name (str): Project name. + loaders (list[LoaderPlugin]): List of loader plugins. + + Returns: + list[LoaderPlugin]: Filtered list of loader plugins. + """ + + # Keep filtering by tool name + # - if current context project name is same as project name we do + # expect the tool is used as OpenPype loader tool, otherwise + # as library loader tool. + if project_name == self._get_current_context_project(): + tool_name = "loader" + else: + tool_name = "library_loader" + filtered_loaders = [] + for loader in loaders: + tool_names = getattr(loader, "tool_names", None) + if ( + tool_names is None + or "*" in tool_names + or tool_name in tool_names + ): + filtered_loaders.append(loader) + return filtered_loaders + + def _create_loader_action_item( + self, + loader, + contexts, + project_name, + folder_ids=None, + product_ids=None, + version_ids=None, + representation_ids=None, + repre_name=None, + ): + label = self._get_action_label(loader) + if repre_name: + label = "{} ({})".format(label, repre_name) + return ActionItem( + get_loader_identifier(loader), + label=label, + icon=self._get_action_icon(loader), + tooltip=self._get_action_tooltip(loader), + options=loader.get_options(contexts), + order=loader.order, + project_name=project_name, + folder_ids=folder_ids, + product_ids=product_ids, + version_ids=version_ids, + representation_ids=representation_ids, + ) + + def _get_loaders(self, project_name): + """Loaders with loaded settings for a project. + + Questions: + Project name is required because of settings. Should we actually + pass in current project name instead of project name where + we want to show loaders for? + + Returns: + tuple[list[SubsetLoaderPlugin], list[LoaderPlugin]]: Discovered + loader plugins. + """ + + loaders_by_identifier_c = self._loaders_by_identifier[project_name] + product_loaders_c = self._product_loaders[project_name] + repre_loaders_c = self._repre_loaders[project_name] + if loaders_by_identifier_c.is_valid: + return product_loaders_c.get_data(), repre_loaders_c.get_data() + + # Get all representation->loader combinations available for the + # index under the cursor, so we can list the user the options. 
+ available_loaders = self._filter_loaders_by_tool_name( + project_name, discover_loader_plugins(project_name) + ) + + repre_loaders = [] + product_loaders = [] + loaders_by_identifier = {} + for loader_cls in available_loaders: + if not loader_cls.enabled: + continue + + identifier = get_loader_identifier(loader_cls) + loaders_by_identifier[identifier] = loader_cls + if issubclass(loader_cls, SubsetLoaderPlugin): + product_loaders.append(loader_cls) + else: + repre_loaders.append(loader_cls) + + loaders_by_identifier_c.update_data(loaders_by_identifier) + product_loaders_c.update_data(product_loaders) + repre_loaders_c.update_data(repre_loaders) + return product_loaders, repre_loaders + + def _get_loader_by_identifier(self, project_name, identifier): + if not self._loaders_by_identifier[project_name].is_valid: + self._get_loaders(project_name) + loaders_by_identifier_c = self._loaders_by_identifier[project_name] + loaders_by_identifier = loaders_by_identifier_c.get_data() + return loaders_by_identifier.get(identifier) + + def _actions_sorter(self, action_item): + """Sort the Loaders by their order and then their name. + + Returns: + tuple[int, str]: Sort keys. + """ + + return action_item.order, action_item.label + + def _get_version_docs(self, project_name, version_ids): + """Get version documents for given version ids. + + This function also handles hero versions and copies data from the + source version onto them. + + Todos: + Remove this function when this is completely rewritten to + use AYON calls. + """ + + version_docs = list(get_versions( + project_name, version_ids=version_ids, hero=True + )) + hero_versions_by_src_id = collections.defaultdict(list) + src_hero_version = set() + for version_doc in version_docs: + if version_doc["type"] != "hero": + continue + version_id = version_doc["version_id"] + src_hero_version.add(version_id) + hero_versions_by_src_id[version_id].append(version_doc) + + src_versions = [] + if src_hero_version: + src_versions = get_versions(project_name, version_ids=src_hero_version) + for src_version in src_versions: + src_version_id = src_version["_id"] + for hero_version in hero_versions_by_src_id[src_version_id]: + hero_version["data"] = copy.deepcopy(src_version["data"]) + + return version_docs + + def _contexts_for_versions(self, project_name, version_ids): + """Get contexts for given version ids. + + Prepare version contexts for 'SubsetLoaderPlugin' and representation + contexts for 'LoaderPlugin' for all children representations of + given versions. + + This method is very similar to '_contexts_for_representations' but the + queries of documents are called in a different order. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + + Returns: + tuple[list[dict[str, Any]], list[dict[str, Any]]]: Version and + representation contexts.
+ """ + + # TODO fix hero version + version_context_by_id = {} + repre_context_by_id = {} + if not project_name and not version_ids: + return version_context_by_id, repre_context_by_id + + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = {} + version_docs_by_product_id = collections.defaultdict(list) + for version_doc in version_docs: + version_id = version_doc["_id"] + product_id = version_doc["parent"] + version_docs_by_id[version_id] = version_doc + version_docs_by_product_id[product_id].append(version_doc) + + _product_ids = set(version_docs_by_product_id.keys()) + _product_docs = get_subsets(project_name, subset_ids=_product_ids) + product_docs_by_id = {p["_id"]: p for p in _product_docs} + + _folder_ids = {p["parent"] for p in product_docs_by_id.values()} + _folder_docs = get_assets(project_name, asset_ids=_folder_ids) + folder_docs_by_id = {f["_id"]: f for f in _folder_docs} + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + for version_doc in version_docs: + version_id = version_doc["_id"] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + version_context_by_id[version_id] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + } + + repre_docs = get_representations( + project_name, version_ids=version_ids) + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + + repre_context_by_id[repre_doc["_id"]] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + } + + return version_context_by_id, repre_context_by_id + + def _contexts_for_representations(self, project_name, repre_ids): + """Get contexts for given representation ids. + + Prepare version contexts for 'SubsetLoaderPlugin' and representation + contexts for 'LoaderPlugin' for all children representations of + given versions. + + This method is very similar to '_contexts_for_versions' but the + queries of documents are called in a different order. + + Args: + project_name (str): Project name. + repre_ids (Iterable[str]): Representation ids. + + Returns: + tuple[list[dict[str, Any]], list[dict[str, Any]]]: Version and + representation contexts. 
+ """ + + product_context_by_id = {} + repre_context_by_id = {} + if not project_name and not repre_ids: + return product_context_by_id, repre_context_by_id + + repre_docs = list(get_representations( + project_name, representation_ids=repre_ids + )) + version_ids = {r["parent"] for r in repre_docs} + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = { + v["_id"]: v for v in version_docs + } + + product_ids = {v["parent"] for v in version_docs_by_id.values()} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = { + p["_id"]: p for p in product_docs + } + + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = { + f["_id"]: f for f in folder_docs + } + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + for product_id, product_doc in product_docs_by_id.items(): + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + product_context_by_id[product_id] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + } + + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + + repre_context_by_id[repre_doc["_id"]] = { + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + } + return product_context_by_id, repre_context_by_id + + def _get_action_items_for_contexts( + self, + project_name, + version_context_by_id, + repre_context_by_id + ): + """Prepare action items based on contexts. + + Actions are prepared based on discovered loader plugins and contexts. + The context must be valid for the loader plugin. + + Args: + project_name (str): Project name. + version_context_by_id (dict[str, dict[str, Any]]): Version + contexts by version id. 
+ repre_context_by_id (dict[str, dict[str, Any]]): Representation + """ + + action_items = [] + if not version_context_by_id and not repre_context_by_id: + return action_items + + product_loaders, repre_loaders = self._get_loaders(project_name) + + repre_contexts_by_name = collections.defaultdict(list) + for repre_context in repre_context_by_id.values(): + repre_name = repre_context["representation"]["name"] + repre_contexts_by_name[repre_name].append(repre_context) + + for loader in repre_loaders: + # # do not allow download whole repre, select specific repre + # if tools_lib.is_sync_loader(loader): + # continue + + for repre_name, repre_contexts in repre_contexts_by_name.items(): + filtered_repre_contexts = filter_repre_contexts_by_loader( + repre_contexts, loader) + if not filtered_repre_contexts: + continue + + repre_ids = set() + repre_version_ids = set() + repre_product_ids = set() + repre_folder_ids = set() + for repre_context in filtered_repre_contexts: + repre_ids.add(repre_context["representation"]["_id"]) + repre_product_ids.add(repre_context["subset"]["_id"]) + repre_version_ids.add(repre_context["version"]["_id"]) + repre_folder_ids.add(repre_context["asset"]["_id"]) + + item = self._create_loader_action_item( + loader, + repre_contexts, + project_name=project_name, + folder_ids=repre_folder_ids, + product_ids=repre_product_ids, + version_ids=repre_version_ids, + representation_ids=repre_ids, + repre_name=repre_name, + ) + action_items.append(item) + + # Subset Loaders. + version_ids = set(version_context_by_id.keys()) + product_folder_ids = set() + product_ids = set() + for product_context in version_context_by_id.values(): + product_ids.add(product_context["subset"]["_id"]) + product_folder_ids.add(product_context["asset"]["_id"]) + + version_contexts = list(version_context_by_id.values()) + for loader in product_loaders: + item = self._create_loader_action_item( + loader, + version_contexts, + project_name=project_name, + folder_ids=product_folder_ids, + product_ids=product_ids, + version_ids=version_ids, + ) + action_items.append(item) + + action_items.sort(key=self._actions_sorter) + return action_items + + def _trigger_version_loader( + self, + loader, + options, + project_name, + version_ids, + ): + """Trigger version loader. + + This triggers 'load' method of 'SubsetLoaderPlugin' for given version + ids. + + Note: + Even when the plugin is 'SubsetLoaderPlugin' it actually expects + versions and should be named 'VersionLoaderPlugin'. Because it + is planned to refactor load system and introduce + 'LoaderAction' plugins it is not relevant to change it + anymore. + + Args: + loader (SubsetLoaderPlugin): Loader plugin to use. + options (dict): Option values for loader. + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. 
+ """ + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + + version_docs = self._get_version_docs(project_name, version_ids) + product_ids = {v["parent"] for v in version_docs} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = {f["_id"]: f for f in product_docs} + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = {f["_id"]: f for f in folder_docs} + product_contexts = [] + for version_doc in version_docs: + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + product_contexts.append({ + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + }) + + return self._load_products_by_loader( + loader, product_contexts, options + ) + + def _trigger_representation_loader( + self, + loader, + options, + project_name, + representation_ids, + ): + """Trigger representation loader. + + This triggers 'load' method of 'LoaderPlugin' for given representation + ids. For that are prepared contexts for each representation, with + all parent documents. + + Args: + loader (LoaderPlugin): Loader plugin to use. + options (dict): Option values for loader. + project_name (str): Project name. + representation_ids (Iterable[str]): Representation ids. + """ + + project_doc = get_project(project_name) + project_doc["code"] = project_doc["data"]["code"] + repre_docs = list(get_representations( + project_name, representation_ids=representation_ids + )) + version_ids = {r["parent"] for r in repre_docs} + version_docs = self._get_version_docs(project_name, version_ids) + version_docs_by_id = {v["_id"]: v for v in version_docs} + product_ids = {v["parent"] for v in version_docs_by_id.values()} + product_docs = get_subsets(project_name, subset_ids=product_ids) + product_docs_by_id = {p["_id"]: p for p in product_docs} + folder_ids = {p["parent"] for p in product_docs_by_id.values()} + folder_docs = get_assets(project_name, asset_ids=folder_ids) + folder_docs_by_id = {f["_id"]: f for f in folder_docs} + repre_contexts = [] + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + version_doc = version_docs_by_id[version_id] + product_id = version_doc["parent"] + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + folder_doc = folder_docs_by_id[folder_id] + repre_contexts.append({ + "project": project_doc, + "asset": folder_doc, + "subset": product_doc, + "version": version_doc, + "representation": repre_doc, + }) + + return self._load_representations_by_loader( + loader, repre_contexts, options + ) + + def _load_representations_by_loader(self, loader, repre_contexts, options): + """Loops through list of repre_contexts and loads them with one loader + + Args: + loader (LoaderPlugin): Loader plugin to use. + repre_contexts (list[dict]): Full info about selected + representations, containing repre, version, subset, asset and + project documents. + options (dict): Data from options. 
+ """ + + error_info = [] + for repre_context in repre_contexts: + version_doc = repre_context["version"] + if version_doc["type"] == "hero_version": + version_name = "Hero" + else: + version_name = version_doc.get("name") + try: + load_with_repre_context( + loader, + repre_context, + options=options + ) + + except IncompatibleLoaderError as exc: + print(exc) + error_info.append(( + "Incompatible Loader", + None, + repre_context["representation"]["name"], + repre_context["subset"]["name"], + version_name + )) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join(traceback.format_exception( + exc_type, exc_value, exc_traceback + )) + + error_info.append(( + str(exc), + formatted_traceback, + repre_context["representation"]["name"], + repre_context["subset"]["name"], + version_name + )) + return error_info + + def _load_products_by_loader(self, loader, version_contexts, options): + """Triggers load with SubsetLoader type of loaders. + + Warning: + Plugin is named 'SubsetLoader' but version is passed to context + too. + + Args: + loader (SubsetLoder): Loader used to load. + version_contexts (list[dict[str, Any]]): For context for each + version. + options (dict[str, Any]): Options for loader that user could fill. + """ + + error_info = [] + if loader.is_multiple_contexts_compatible: + subset_names = [] + for context in version_contexts: + subset_name = context.get("subset", {}).get("name") or "N/A" + subset_names.append(subset_name) + try: + load_with_subset_contexts( + loader, + version_contexts, + options=options + ) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join(traceback.format_exception( + exc_type, exc_value, exc_traceback + )) + error_info.append(( + str(exc), + formatted_traceback, + None, + ", ".join(subset_names), + None + )) + else: + for version_context in version_contexts: + subset_name = ( + version_context.get("subset", {}).get("name") or "N/A" + ) + try: + load_with_subset_context( + loader, + version_context, + options=options + ) + + except Exception as exc: + formatted_traceback = None + if not isinstance(exc, LoadError): + exc_type, exc_value, exc_traceback = sys.exc_info() + formatted_traceback = "".join( + traceback.format_exception( + exc_type, exc_value, exc_traceback + ) + ) + + error_info.append(( + str(exc), + formatted_traceback, + None, + subset_name, + None + )) + + return error_info diff --git a/openpype/tools/ayon_loader/models/products.py b/openpype/tools/ayon_loader/models/products.py new file mode 100644 index 0000000000..33023cc164 --- /dev/null +++ b/openpype/tools/ayon_loader/models/products.py @@ -0,0 +1,682 @@ +import collections +import contextlib + +import arrow +import ayon_api +from ayon_api.operations import OperationsSession + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.models import NestedCacheItem +from openpype.tools.ayon_loader.abstract import ( + ProductTypeItem, + ProductItem, + VersionItem, + RepreItem, +) + +PRODUCTS_MODEL_SENDER = "products.model" + + +def version_item_from_entity(version): + version_attribs = version["attrib"] + frame_start = version_attribs.get("frameStart") + frame_end = version_attribs.get("frameEnd") + handle_start = version_attribs.get("handleStart") + handle_end = version_attribs.get("handleEnd") + step = 
version_attribs.get("step") + comment = version_attribs.get("comment") + source = version_attribs.get("source") + + frame_range = None + duration = None + handles = None + if frame_start is not None and frame_end is not None: + # Remove superfluous zeros from numbers (3.0 -> 3) to improve + # readability for most frame ranges + frame_start = int(frame_start) + frame_end = int(frame_end) + frame_range = "{}-{}".format(frame_start, frame_end) + duration = frame_end - frame_start + 1 + + if handle_start is not None and handle_end is not None: + handles = "{}-{}".format(int(handle_start), int(handle_end)) + + # NOTE There is also 'updatedAt', should be used that instead? + # TODO skip conversion - converting to '%Y%m%dT%H%M%SZ' is because + # 'PrettyTimeDelegate' expects it + created_at = arrow.get(version["createdAt"]) + published_time = created_at.strftime("%Y%m%dT%H%M%SZ") + author = version["author"] + version_num = version["version"] + is_hero = version_num < 0 + + return VersionItem( + version_id=version["id"], + version=version_num, + is_hero=is_hero, + product_id=version["productId"], + thumbnail_id=version["thumbnailId"], + published_time=published_time, + author=author, + frame_range=frame_range, + duration=duration, + handles=handles, + step=step, + comment=comment, + source=source, + ) + + +def product_item_from_entity( + product_entity, + version_entities, + product_type_items_by_name, + folder_label, + product_in_scene, +): + product_attribs = product_entity["attrib"] + group = product_attribs.get("productGroup") + product_type = product_entity["productType"] + product_type_item = product_type_items_by_name[product_type] + product_type_icon = product_type_item.icon + + product_icon = { + "type": "awesome-font", + "name": "fa.file-o", + "color": get_default_entity_icon_color(), + } + version_items = { + version_entity["id"]: version_item_from_entity(version_entity) + for version_entity in version_entities + } + + return ProductItem( + product_id=product_entity["id"], + product_type=product_type, + product_name=product_entity["name"], + product_icon=product_icon, + product_type_icon=product_type_icon, + product_in_scene=product_in_scene, + group_name=group, + folder_id=product_entity["folderId"], + folder_label=folder_label, + version_items=version_items, + ) + + +def product_type_item_from_data(product_type_data): + # TODO implement icon implementation + # icon = product_type_data["icon"] + # color = product_type_data["color"] + icon = { + "type": "awesome-font", + "name": "fa.folder", + "color": "#0091B2", + } + # TODO implement checked logic + return ProductTypeItem(product_type_data["name"], icon, True) + + +class ProductsModel: + """Model for products, version and representation. + + All of the entities are product based. This model prepares data for UI + and caches it for faster access. + + Note: + Data are not used for actions model because that would require to + break OpenPype compatibility of 'LoaderPlugin's. 
+ """ + + lifetime = 60 # In seconds (minute by default) + + def __init__(self, controller): + self._controller = controller + + # Mapping helpers + # NOTE - mapping must be cleaned up with cache cleanup + self._product_item_by_id = collections.defaultdict(dict) + self._version_item_by_id = collections.defaultdict(dict) + self._product_folder_ids_mapping = collections.defaultdict(dict) + + # Cache helpers + self._product_type_items_cache = NestedCacheItem( + levels=1, default_factory=list, lifetime=self.lifetime) + self._product_items_cache = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + self._repre_items_cache = NestedCacheItem( + levels=2, default_factory=dict, lifetime=self.lifetime) + + def reset(self): + """Reset model with all cached data.""" + + self._product_item_by_id.clear() + self._version_item_by_id.clear() + self._product_folder_ids_mapping.clear() + + self._product_type_items_cache.reset() + self._product_items_cache.reset() + self._repre_items_cache.reset() + + def get_product_type_items(self, project_name): + """Product type items for project. + + Args: + project_name (str): Project name. + + Returns: + list[ProductTypeItem]: Product type items. + """ + + cache = self._product_type_items_cache[project_name] + if not cache.is_valid: + product_types = ayon_api.get_project_product_types(project_name) + cache.update_data([ + product_type_item_from_data(product_type) + for product_type in product_types + ]) + return cache.get_data() + + def get_product_items(self, project_name, folder_ids, sender): + """Product items with versions for project and folder ids. + + Product items also contain version items. They're directly connected + to product items in the UI and the separation is not needed. + + Args: + project_name (Union[str, None]): Project name. + folder_ids (Iterable[str]): Folder ids. + sender (Union[str, None]): Who triggered the method. + + Returns: + list[ProductItem]: Product items. + """ + + if not project_name or not folder_ids: + return [] + + project_cache = self._product_items_cache[project_name] + output = [] + folder_ids_to_update = set() + for folder_id in folder_ids: + cache = project_cache[folder_id] + if cache.is_valid: + output.extend(cache.get_data().values()) + else: + folder_ids_to_update.add(folder_id) + + self._refresh_product_items( + project_name, folder_ids_to_update, sender) + + for folder_id in folder_ids_to_update: + cache = project_cache[folder_id] + output.extend(cache.get_data().values()) + return output + + def get_product_item(self, project_name, product_id): + """Get product item based on passed product id. + + This method is using cached items, but if cache is not valid it also + can query the item. + + Args: + project_name (Union[str, None]): Where to look for product. + product_id (Union[str, None]): Product id to receive. + + Returns: + Union[ProductItem, None]: Product item or 'None' if not found. + """ + + if not any((project_name, product_id)): + return None + + product_items_by_id = self._product_item_by_id[project_name] + product_item = product_items_by_id.get(product_id) + if product_item is not None: + return product_item + for product_item in self._query_product_items_by_ids( + project_name, product_ids=[product_id] + ).values(): + return product_item + + def get_product_ids_by_repre_ids(self, project_name, repre_ids): + """Get product ids based on passed representation ids. + + Args: + project_name (str): Where to look for representations. + repre_ids (Iterable[str]): Representation ids. 
+ + Returns: + set[str]: Product ids for passed representation ids. + """ + + # TODO look out how to use single server call + if not repre_ids: + return set() + repres = ayon_api.get_representations( + project_name, repre_ids, fields=["versionId"] + ) + version_ids = {repre["versionId"] for repre in repres} + if not version_ids: + return set() + versions = ayon_api.get_versions( + project_name, version_ids=version_ids, fields=["productId"] + ) + return {v["productId"] for v in versions} + + def get_repre_items(self, project_name, version_ids, sender): + """Get representation items for passed version ids. + + Args: + project_name (str): Project name. + version_ids (Iterable[str]): Version ids. + sender (Union[str, None]): Who triggered the method. + + Returns: + list[RepreItem]: Representation items. + """ + + output = [] + if not any((project_name, version_ids)): + return output + + invalid_version_ids = set() + project_cache = self._repre_items_cache[project_name] + for version_id in version_ids: + version_cache = project_cache[version_id] + if version_cache.is_valid: + output.extend(version_cache.get_data().values()) + else: + invalid_version_ids.add(version_id) + + if invalid_version_ids: + self.refresh_representation_items( + project_name, invalid_version_ids, sender + ) + + for version_id in invalid_version_ids: + version_cache = project_cache[version_id] + output.extend(version_cache.get_data().values()) + + return output + + def change_products_group(self, project_name, product_ids, group_name): + """Change group name for passed product ids. + + Group name is stored in 'attrib' of product entity and is used in UI + to group items. + + Method triggers "products.group.changed" event with data: + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name + } + + Args: + project_name (str): Project name. + product_ids (Iterable[str]): Product ids to change group name for. + group_name (str): Group name to set. 
+ """ + + if not product_ids: + return + + product_items = self._get_product_items_by_id( + project_name, product_ids + ) + if not product_items: + return + + session = OperationsSession() + folder_ids = set() + for product_item in product_items.values(): + session.update_entity( + project_name, + "product", + product_item.product_id, + {"attrib": {"productGroup": group_name}} + ) + folder_ids.add(product_item.folder_id) + product_item.group_name = group_name + + session.commit() + self._controller.emit_event( + "products.group.changed", + { + "project_name": project_name, + "folder_ids": folder_ids, + "product_ids": product_ids, + "group_name": group_name, + }, + PRODUCTS_MODEL_SENDER + ) + + def _get_product_items_by_id(self, project_name, product_ids): + product_item_by_id = self._product_item_by_id[project_name] + missing_product_ids = set() + output = {} + for product_id in product_ids: + product_item = product_item_by_id.get(product_id) + if product_item is not None: + output[product_id] = product_item + else: + missing_product_ids.add(product_id) + + output.update( + self._query_product_items_by_ids( + project_name, missing_product_ids + ) + ) + return output + + def _get_version_items_by_id(self, project_name, version_ids): + version_item_by_id = self._version_item_by_id[project_name] + missing_version_ids = set() + output = {} + for version_id in version_ids: + version_item = version_item_by_id.get(version_id) + if version_item is not None: + output[version_id] = version_item + else: + missing_version_ids.add(version_id) + + output.update( + self._query_version_items_by_ids( + project_name, missing_version_ids + ) + ) + return output + + def _create_product_items( + self, + project_name, + products, + versions, + folder_items=None, + product_type_items=None, + ): + if folder_items is None: + folder_items = self._controller.get_folder_items(project_name) + + if product_type_items is None: + product_type_items = self.get_product_type_items(project_name) + + loaded_product_ids = self._controller.get_loaded_product_ids() + + versions_by_product_id = collections.defaultdict(list) + for version in versions: + versions_by_product_id[version["productId"]].append(version) + product_type_items_by_name = { + product_type_item.name: product_type_item + for product_type_item in product_type_items + } + output = {} + for product in products: + product_id = product["id"] + folder_id = product["folderId"] + folder_item = folder_items.get(folder_id) + if not folder_item: + continue + versions = versions_by_product_id[product_id] + if not versions: + continue + product_item = product_item_from_entity( + product, + versions, + product_type_items_by_name, + folder_item.label, + product_id in loaded_product_ids, + ) + output[product_id] = product_item + return output + + def _query_product_items_by_ids( + self, + project_name, + folder_ids=None, + product_ids=None, + folder_items=None + ): + """Query product items. + + This method does get from, or store to, cache attributes. + + One of 'product_ids' or 'folder_ids' must be passed to the method. + + Args: + project_name (str): Project name. + folder_ids (Optional[Iterable[str]]): Folder ids under which are + products. + product_ids (Optional[Iterable[str]]): Product ids to use. + folder_items (Optional[Dict[str, FolderItem]]): Prepared folder + items from controller. + + Returns: + dict[str, ProductItem]: Product items by product id. 
+ """ + + if not folder_ids and not product_ids: + return {} + + kwargs = {} + if folder_ids is not None: + kwargs["folder_ids"] = folder_ids + + if product_ids is not None: + kwargs["product_ids"] = product_ids + + products = list(ayon_api.get_products(project_name, **kwargs)) + product_ids = {product["id"] for product in products} + + versions = ayon_api.get_versions( + project_name, product_ids=product_ids + ) + + return self._create_product_items( + project_name, products, versions, folder_items=folder_items + ) + + def _query_version_items_by_ids(self, project_name, version_ids): + versions = list(ayon_api.get_versions( + project_name, version_ids=version_ids + )) + product_ids = {version["productId"] for version in versions} + products = list(ayon_api.get_products( + project_name, product_ids=product_ids + )) + product_items = self._create_product_items( + project_name, products, versions + ) + version_items = {} + for product_item in product_items.values(): + version_items.update(product_item.version_items) + return version_items + + def _clear_product_version_items(self, project_name, folder_ids): + """Clear product and version items from memory. + + When products are re-queried for a folders, the old product and version + items in '_product_item_by_id' and '_version_item_by_id' should + be cleaned up from memory. And mapping in stored in + '_product_folder_ids_mapping' is not relevant either. + + Args: + project_name (str): Name of project. + folder_ids (Iterable[str]): Folder ids which are being refreshed. + """ + + project_mapping = self._product_folder_ids_mapping[project_name] + if not project_mapping: + return + + product_item_by_id = self._product_item_by_id[project_name] + version_item_by_id = self._version_item_by_id[project_name] + for folder_id in folder_ids: + product_ids = project_mapping.pop(folder_id, None) + if not product_ids: + continue + + for product_id in product_ids: + product_item = product_item_by_id.pop(product_id, None) + if product_item is None: + continue + for version_item in product_item.version_items.values(): + version_item_by_id.pop(version_item.version_id, None) + + def _refresh_product_items(self, project_name, folder_ids, sender): + """Refresh product items and store them in cache. + + Args: + project_name (str): Name of project. + folder_ids (Iterable[str]): Folder ids which are being refreshed. + sender (Union[str, None]): Who triggered the refresh. 
+ """ + + if not project_name or not folder_ids: + return + + self._clear_product_version_items(project_name, folder_ids) + + project_mapping = self._product_folder_ids_mapping[project_name] + product_item_by_id = self._product_item_by_id[project_name] + version_item_by_id = self._version_item_by_id[project_name] + + for folder_id in folder_ids: + project_mapping[folder_id] = set() + + with self._product_refresh_event_manager( + project_name, folder_ids, sender + ): + folder_items = self._controller.get_folder_items(project_name) + items_by_folder_id = { + folder_id: {} + for folder_id in folder_ids + } + product_items_by_id = self._query_product_items_by_ids( + project_name, + folder_ids=folder_ids, + folder_items=folder_items + ) + for product_id, product_item in product_items_by_id.items(): + folder_id = product_item.folder_id + items_by_folder_id[product_item.folder_id][product_id] = ( + product_item + ) + + project_mapping[folder_id].add(product_id) + product_item_by_id[product_id] = product_item + for version_id, version_item in ( + product_item.version_items.items() + ): + version_item_by_id[version_id] = version_item + + project_cache = self._product_items_cache[project_name] + for folder_id, product_items in items_by_folder_id.items(): + project_cache[folder_id].update_data(product_items) + + @contextlib.contextmanager + def _product_refresh_event_manager( + self, project_name, folder_ids, sender + ): + self._controller.emit_event( + "products.refresh.started", + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + try: + yield + + finally: + self._controller.emit_event( + "products.refresh.finished", + { + "project_name": project_name, + "folder_ids": folder_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + + def refresh_representation_items( + self, project_name, version_ids, sender + ): + if not any((project_name, version_ids)): + return + self._controller.emit_event( + "model.representations.refresh.started", + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender, + }, + PRODUCTS_MODEL_SENDER + ) + failed = False + try: + self._refresh_representation_items(project_name, version_ids) + except Exception: + # TODO add more information about failed refresh + failed = True + + self._controller.emit_event( + "model.representations.refresh.finished", + { + "project_name": project_name, + "version_ids": version_ids, + "sender": sender, + "failed": failed, + }, + PRODUCTS_MODEL_SENDER + ) + + def _refresh_representation_items(self, project_name, version_ids): + representations = list(ayon_api.get_representations( + project_name, + version_ids=version_ids, + fields=["id", "name", "versionId"] + )) + + version_items_by_id = self._get_version_items_by_id( + project_name, version_ids + ) + product_ids = { + version_item.product_id + for version_item in version_items_by_id.values() + } + product_items_by_id = self._get_product_items_by_id( + project_name, product_ids + ) + repre_icon = { + "type": "awesome-font", + "name": "fa.file-o", + "color": get_default_entity_icon_color(), + } + repre_items_by_version_id = collections.defaultdict(dict) + for representation in representations: + version_id = representation["versionId"] + version_item = version_items_by_id.get(version_id) + if version_item is None: + continue + product_item = product_items_by_id.get(version_item.product_id) + if product_item is None: + continue + repre_id = representation["id"] + repre_item = RepreItem( + repre_id, + 
representation["name"], + repre_icon, + product_item.product_name, + product_item.folder_label, + ) + repre_items_by_version_id[version_id][repre_id] = repre_item + + project_cache = self._repre_items_cache[project_name] + for version_id, repre_items in repre_items_by_version_id.items(): + version_cache = project_cache[version_id] + version_cache.update_data(repre_items) diff --git a/openpype/tools/ayon_loader/models/selection.py b/openpype/tools/ayon_loader/models/selection.py new file mode 100644 index 0000000000..326ff835f6 --- /dev/null +++ b/openpype/tools/ayon_loader/models/selection.py @@ -0,0 +1,85 @@ +class SelectionModel(object): + """Model handling selection changes. + + Triggering events: + - "selection.project.changed" + - "selection.folders.changed" + - "selection.versions.changed" + """ + + event_source = "selection.model" + + def __init__(self, controller): + self._controller = controller + + self._project_name = None + self._folder_ids = set() + self._version_ids = set() + self._representation_ids = set() + + def get_selected_project_name(self): + return self._project_name + + def set_selected_project(self, project_name): + if self._project_name == project_name: + return + + self._project_name = project_name + self._controller.emit_event( + "selection.project.changed", + {"project_name": self._project_name}, + self.event_source + ) + + def get_selected_folder_ids(self): + return self._folder_ids + + def set_selected_folders(self, folder_ids): + if folder_ids == self._folder_ids: + return + + self._folder_ids = folder_ids + self._controller.emit_event( + "selection.folders.changed", + { + "project_name": self._project_name, + "folder_ids": folder_ids, + }, + self.event_source + ) + + def get_selected_version_ids(self): + return self._version_ids + + def set_selected_versions(self, version_ids): + if version_ids == self._version_ids: + return + + self._version_ids = version_ids + self._controller.emit_event( + "selection.versions.changed", + { + "project_name": self._project_name, + "folder_ids": self._folder_ids, + "version_ids": self._version_ids, + }, + self.event_source + ) + + def get_selected_representation_ids(self): + return self._representation_ids + + def set_selected_representations(self, repre_ids): + if repre_ids == self._representation_ids: + return + + self._representation_ids = repre_ids + self._controller.emit_event( + "selection.representations.changed", + { + "project_name": self._project_name, + "folder_ids": self._folder_ids, + "version_ids": self._version_ids, + "representation_ids": self._representation_ids, + } + ) diff --git a/openpype/tools/ayon_loader/ui/__init__.py b/openpype/tools/ayon_loader/ui/__init__.py new file mode 100644 index 0000000000..41e4418641 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/__init__.py @@ -0,0 +1,6 @@ +from .window import LoaderWindow + + +__all__ = ( + "LoaderWindow", +) diff --git a/openpype/tools/ayon_loader/ui/actions_utils.py b/openpype/tools/ayon_loader/ui/actions_utils.py new file mode 100644 index 0000000000..a269b643dc --- /dev/null +++ b/openpype/tools/ayon_loader/ui/actions_utils.py @@ -0,0 +1,118 @@ +import uuid + +from qtpy import QtWidgets, QtGui +import qtawesome + +from openpype.lib.attribute_definitions import AbstractAttrDef +from openpype.tools.attribute_defs import AttributeDefinitionsDialog +from openpype.tools.utils.widgets import ( + OptionalMenu, + OptionalAction, + OptionDialog, +) +from openpype.tools.ayon_utils.widgets import get_qt_icon + + +def show_actions_menu(action_items, 
global_point, one_item_selected, parent): + selected_action_item = None + selected_options = None + + if not action_items: + menu = QtWidgets.QMenu(parent) + action = _get_no_loader_action(menu, one_item_selected) + menu.addAction(action) + menu.exec_(global_point) + return selected_action_item, selected_options + + menu = OptionalMenu(parent) + + action_items_by_id = {} + for action_item in action_items: + item_id = uuid.uuid4().hex + action_items_by_id[item_id] = action_item + item_options = action_item.options + icon = get_qt_icon(action_item.icon) + use_option = bool(item_options) + action = OptionalAction( + action_item.label, + icon, + use_option, + menu + ) + if use_option: + # Add option box tip + action.set_option_tip(item_options) + + tip = action_item.tooltip + if tip: + action.setToolTip(tip) + action.setStatusTip(tip) + + action.setData(item_id) + + menu.addAction(action) + + action = menu.exec_(global_point) + if action is not None: + item_id = action.data() + selected_action_item = action_items_by_id.get(item_id) + + if selected_action_item is not None: + selected_options = _get_options(action, selected_action_item, parent) + + return selected_action_item, selected_options + + +def _get_options(action, action_item, parent): + """Provides dialog to select value from loader provided options. + + Loader can provide static or dynamically created options based on + AttributeDefinitions, and for backwards compatibility qargparse. + + Args: + action (OptionalAction) - Action object in menu. + action_item (ActionItem) - Action item with context information. + parent (QtCore.QObject) - Parent object for dialog. + + Returns: + Union[dict[str, Any], None]: Selected value from attributes or + 'None' if dialog was cancelled. + """ + + # Pop option dialog + options = action_item.options + if not getattr(action, "optioned", False) or not options: + return {} + + if isinstance(options[0], AbstractAttrDef): + qargparse_options = False + dialog = AttributeDefinitionsDialog(options, parent) + else: + qargparse_options = True + dialog = OptionDialog(parent) + dialog.create(options) + + dialog.setWindowTitle(action.label + " Options") + + if not dialog.exec_(): + return None + + # Get option + if qargparse_options: + return dialog.parse() + return dialog.get_values() + + +def _get_no_loader_action(menu, one_item_selected): + """Creates dummy no loader option in 'menu'""" + + if one_item_selected: + submsg = "this version." + else: + submsg = "your selection." 
+ msg = "No compatible loaders for {}".format(submsg) + icon = qtawesome.icon( + "fa.exclamation", + color=QtGui.QColor(255, 51, 0) + ) + return QtWidgets.QAction(icon, ("*" + msg), menu) diff --git a/openpype/tools/ayon_loader/ui/folders_widget.py b/openpype/tools/ayon_loader/ui/folders_widget.py new file mode 100644 index 0000000000..53351f76d9 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/folders_widget.py @@ -0,0 +1,407 @@ +import qtpy +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + DeselectableTreeView, +) +from openpype.style import get_objected_colors + +from openpype.tools.ayon_utils.widgets import ( + FoldersModel, + FOLDERS_MODEL_SENDER_NAME, +) +from openpype.tools.ayon_utils.widgets.folders_widget import FOLDER_ID_ROLE + +if qtpy.API == "pyside": + from PySide.QtGui import QStyleOptionViewItemV4 +elif qtpy.API == "pyqt4": + from PyQt4.QtGui import QStyleOptionViewItemV4 + +UNDERLINE_COLORS_ROLE = QtCore.Qt.UserRole + 50 + + +class UnderlinesFolderDelegate(QtWidgets.QItemDelegate): + """Item delegate drawing bars under folder label. + + This is used in loader tool. Multiselection of folders + may group products by name under colored groups. Selected color groups are + then propagated back to selected folders as underlines. + """ + bar_height = 3 + + def __init__(self, *args, **kwargs): + super(UnderlinesFolderDelegate, self).__init__(*args, **kwargs) + colors = get_objected_colors("loader", "asset-view") + self._selected_color = colors["selected"].get_qcolor() + self._hover_color = colors["hover"].get_qcolor() + self._selected_hover_color = colors["selected-hover"].get_qcolor() + + def sizeHint(self, option, index): + """Add bar height to size hint.""" + result = super(UnderlinesFolderDelegate, self).sizeHint(option, index) + height = result.height() + result.setHeight(height + self.bar_height) + + return result + + def paint(self, painter, option, index): + """Replicate painting of an item and draw color bars if needed.""" + # Qt4 compat + if qtpy.API in ("pyside", "pyqt4"): + option = QStyleOptionViewItemV4(option) + + painter.save() + + item_rect = QtCore.QRect(option.rect) + item_rect.setHeight(option.rect.height() - self.bar_height) + + subset_colors = index.data(UNDERLINE_COLORS_ROLE) or [] + + subset_colors_width = 0 + if subset_colors: + subset_colors_width = option.rect.width() / len(subset_colors) + + subset_rects = [] + counter = 0 + for subset_c in subset_colors: + new_color = None + new_rect = None + if subset_c: + new_color = QtGui.QColor(subset_c) + + new_rect = QtCore.QRect( + option.rect.left() + (counter * subset_colors_width), + option.rect.top() + ( + option.rect.height() - self.bar_height + ), + subset_colors_width, + self.bar_height + ) + subset_rects.append((new_color, new_rect)) + counter += 1 + + # Background + if option.state & QtWidgets.QStyle.State_Selected: + if len(subset_colors) == 0: + item_rect.setTop(item_rect.top() + (self.bar_height / 2)) + + if option.state & QtWidgets.QStyle.State_MouseOver: + bg_color = self._selected_hover_color + else: + bg_color = self._selected_color + else: + item_rect.setTop(item_rect.top() + (self.bar_height / 2)) + if option.state & QtWidgets.QStyle.State_MouseOver: + bg_color = self._hover_color + else: + bg_color = QtGui.QColor() + bg_color.setAlpha(0) + + # When not needed to do a rounded corners (easier and without + # painter restore): + painter.fillRect( + option.rect, + QtGui.QBrush(bg_color) + ) + + if option.state & 
QtWidgets.QStyle.State_Selected: + for color, subset_rect in subset_rects: + if not color or not subset_rect: + continue + painter.fillRect(subset_rect, QtGui.QBrush(color)) + + # Icon + icon_index = index.model().index( + index.row(), index.column(), index.parent() + ) + # - Default icon_rect if not icon + icon_rect = QtCore.QRect( + item_rect.left(), + item_rect.top(), + # To make sure it's same size all the time + option.rect.height() - self.bar_height, + option.rect.height() - self.bar_height + ) + icon = index.model().data(icon_index, QtCore.Qt.DecorationRole) + + if icon: + mode = QtGui.QIcon.Normal + if not (option.state & QtWidgets.QStyle.State_Enabled): + mode = QtGui.QIcon.Disabled + elif option.state & QtWidgets.QStyle.State_Selected: + mode = QtGui.QIcon.Selected + + if isinstance(icon, QtGui.QPixmap): + icon = QtGui.QIcon(icon) + option.decorationSize = icon.size() / icon.devicePixelRatio() + + elif isinstance(icon, QtGui.QColor): + pixmap = QtGui.QPixmap(option.decorationSize) + pixmap.fill(icon) + icon = QtGui.QIcon(pixmap) + + elif isinstance(icon, QtGui.QImage): + icon = QtGui.QIcon(QtGui.QPixmap.fromImage(icon)) + option.decorationSize = icon.size() / icon.devicePixelRatio() + + elif isinstance(icon, QtGui.QIcon): + state = QtGui.QIcon.Off + if option.state & QtWidgets.QStyle.State_Open: + state = QtGui.QIcon.On + actual_size = option.icon.actualSize( + option.decorationSize, mode, state + ) + option.decorationSize = QtCore.QSize( + min(option.decorationSize.width(), actual_size.width()), + min(option.decorationSize.height(), actual_size.height()) + ) + + state = QtGui.QIcon.Off + if option.state & QtWidgets.QStyle.State_Open: + state = QtGui.QIcon.On + + icon.paint( + painter, icon_rect, + QtCore.Qt.AlignLeft, mode, state + ) + + # Text + text_rect = QtCore.QRect( + icon_rect.left() + icon_rect.width() + 2, + item_rect.top(), + item_rect.width(), + item_rect.height() + ) + + painter.drawText( + text_rect, QtCore.Qt.AlignVCenter, + index.data(QtCore.Qt.DisplayRole) + ) + + painter.restore() + + +class LoaderFoldersModel(FoldersModel): + def __init__(self, *args, **kwargs): + super(LoaderFoldersModel, self).__init__(*args, **kwargs) + + self._colored_items = set() + + def _fill_item_data(self, item, folder_item): + """ + + Args: + item (QtGui.QStandardItem): Item to fill data. + folder_item (FolderItem): Folder item. + """ + + super(LoaderFoldersModel, self)._fill_item_data(item, folder_item) + + def set_merged_products_selection(self, items): + changes = { + folder_id: None + for folder_id in self._colored_items + } + + all_folder_ids = set() + for item in items: + folder_ids = item["folder_ids"] + all_folder_ids.update(folder_ids) + + for folder_id in all_folder_ids: + changes[folder_id] = [] + + for item in items: + item_color = item["color"] + item_folder_ids = item["folder_ids"] + for folder_id in all_folder_ids: + folder_color = ( + item_color + if folder_id in item_folder_ids + else None + ) + changes[folder_id].append(folder_color) + + for folder_id, color_value in changes.items(): + item = self._items_by_id.get(folder_id) + if item is not None: + item.setData(color_value, UNDERLINE_COLORS_ROLE) + + self._colored_items = all_folder_ids + + +class LoaderFoldersWidget(QtWidgets.QWidget): + """Folders widget. + + Widget that handles folders view, model and selection. + + Expected selection handling is disabled by default. If enabled, the + widget will handle the expected in predefined way. 
Widget is listening + to event 'expected_selection_changed' with expected event data below, + the same data must be available when called method + 'get_expected_selection_data' on controller. + + { + "folder": { + "current": bool, # Folder is what should be set now + "folder_id": Union[str, None], # Folder id that should be selected + }, + ... + } + + Selection is confirmed by calling method 'expected_folder_selected' on + controller. + + + Args: + controller (AbstractWorkfilesFrontend): The control object. + parent (QtWidgets.QWidget): The parent widget. + """ + + refreshed = QtCore.Signal() + + def __init__(self, controller, parent): + super(LoaderFoldersWidget, self).__init__(parent) + + folders_view = DeselectableTreeView(self) + folders_view.setHeaderHidden(True) + folders_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection) + + folders_model = LoaderFoldersModel(controller) + folders_proxy_model = RecursiveSortFilterProxyModel() + folders_proxy_model.setSourceModel(folders_model) + folders_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + + folders_label_delegate = UnderlinesFolderDelegate(folders_view) + + folders_view.setModel(folders_proxy_model) + folders_view.setItemDelegate(folders_label_delegate) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(folders_view, 1) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_change, + ) + controller.register_event_callback( + "folders.refresh.finished", + self._on_folders_refresh_finished + ) + controller.register_event_callback( + "controller.refresh.finished", + self._on_controller_refresh + ) + controller.register_event_callback( + "expected_selection_changed", + self._on_expected_selection_change + ) + + selection_model = folders_view.selectionModel() + selection_model.selectionChanged.connect(self._on_selection_change) + + folders_model.refreshed.connect(self._on_model_refresh) + + self._controller = controller + self._folders_view = folders_view + self._folders_model = folders_model + self._folders_proxy_model = folders_proxy_model + self._folders_label_delegate = folders_label_delegate + + self._expected_selection = None + + def set_name_filter(self, name): + """Set filter of folder name. + + Args: + name (str): The string filter. + """ + + self._folders_proxy_model.setFilterFixedString(name) + + def set_merged_products_selection(self, items): + """ + + Args: + items (list[dict[str, Any]]): List of merged items with folder + ids. 
+ """ + + self._folders_model.set_merged_products_selection(items) + + def refresh(self): + self._folders_model.refresh() + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self._set_project_name(project_name) + + def _set_project_name(self, project_name): + self._folders_model.set_project_name(project_name) + + def _clear(self): + self._folders_model.clear() + + def _on_folders_refresh_finished(self, event): + if event["sender"] != FOLDERS_MODEL_SENDER_NAME: + self._set_project_name(event["project_name"]) + + def _on_controller_refresh(self): + self._update_expected_selection() + + def _on_model_refresh(self): + if self._expected_selection: + self._set_expected_selection() + self._folders_proxy_model.sort(0) + self.refreshed.emit() + + def _get_selected_item_ids(self): + selection_model = self._folders_view.selectionModel() + item_ids = [] + for index in selection_model.selectedIndexes(): + item_id = index.data(FOLDER_ID_ROLE) + if item_id is not None: + item_ids.append(item_id) + return item_ids + + def _on_selection_change(self): + item_ids = self._get_selected_item_ids() + self._controller.set_selected_folders(item_ids) + + # Expected selection handling + def _on_expected_selection_change(self, event): + self._update_expected_selection(event.data) + + def _update_expected_selection(self, expected_data=None): + if expected_data is None: + expected_data = self._controller.get_expected_selection_data() + + folder_data = expected_data.get("folder") + if not folder_data or not folder_data["current"]: + return + + folder_id = folder_data["id"] + self._expected_selection = folder_id + if not self._folders_model.is_refreshing: + self._set_expected_selection() + + def _set_expected_selection(self): + folder_id = self._expected_selection + selected_ids = self._get_selected_item_ids() + self._expected_selection = None + skip_selection = ( + folder_id is None + or ( + folder_id in selected_ids + and len(selected_ids) == 1 + ) + ) + if not skip_selection: + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + proxy_index = self._folders_proxy_model.mapFromSource(index) + self._folders_view.setCurrentIndex(proxy_index) + self._controller.expected_folder_selected(folder_id) diff --git a/openpype/tools/ayon_loader/ui/info_widget.py b/openpype/tools/ayon_loader/ui/info_widget.py new file mode 100644 index 0000000000..b7d1b0811f --- /dev/null +++ b/openpype/tools/ayon_loader/ui/info_widget.py @@ -0,0 +1,141 @@ +import datetime + +from qtpy import QtWidgets + +from openpype.tools.utils.lib import format_version + + +class VersionTextEdit(QtWidgets.QTextEdit): + """QTextEdit that displays version specific information. + + This also overrides the context menu to add actions like copying + source path to clipboard or copying the raw data of the version + to clipboard. + + """ + def __init__(self, controller, parent): + super(VersionTextEdit, self).__init__(parent=parent) + + self._version_item = None + self._product_item = None + + self._controller = controller + + # Reset + self.set_current_item() + + def set_current_item(self, product_item=None, version_item=None): + """ + + Args: + product_item (Union[ProductItem, None]): Product item. + version_item (Union[VersionItem, None]): Version item to display. 
+ """ + + self._product_item = product_item + self._version_item = version_item + + if version_item is None: + # Reset state to empty + self.setText("") + return + + version_label = format_version(abs(version_item.version)) + if version_item.version < 0: + version_label = "Hero version {}".format(version_label) + + # Define readable creation timestamp + created = version_item.published_time + created = datetime.datetime.strptime(created, "%Y%m%dT%H%M%SZ") + created = datetime.datetime.strftime(created, "%b %d %Y %H:%M") + + comment = version_item.comment or "No comment" + source = version_item.source or "No source" + + self.setHtml( + ( + "

<h2>{product_name}</h2>" + "<h3>{version_label}</h3>" + "<b>Comment</b><br>" + "{comment}<br><br>" + + "<b>Created</b><br>" + "{created}<br><br>" + + "<b>Source</b><br>
" + "{source}" + ).format( + product_name=product_item.product_name, + version_label=version_label, + comment=comment, + created=created, + source=source, + ) + ) + + def contextMenuEvent(self, event): + """Context menu with additional actions""" + menu = self.createStandardContextMenu() + + # Add additional actions when any text, so we can assume + # the version is set. + source = None + if self._version_item is not None: + source = self._version_item.source + + if source: + menu.addSeparator() + action = QtWidgets.QAction( + "Copy source path to clipboard", menu + ) + action.triggered.connect(self._on_copy_source) + menu.addAction(action) + + menu.exec_(event.globalPos()) + + def _on_copy_source(self): + """Copy formatted source path to clipboard.""" + + source = self._version_item.source + if not source: + return + + filled_source = self._controller.fill_root_in_source(source) + clipboard = QtWidgets.QApplication.clipboard() + clipboard.setText(filled_source) + + +class InfoWidget(QtWidgets.QWidget): + """A Widget that display information about a specific version""" + def __init__(self, controller, parent): + super(InfoWidget, self).__init__(parent=parent) + + label_widget = QtWidgets.QLabel("Version Info", self) + info_text_widget = VersionTextEdit(controller, self) + info_text_widget.setReadOnly(True) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(label_widget, 0) + layout.addWidget(info_text_widget, 1) + + self._controller = controller + + self._info_text_widget = info_text_widget + self._label_widget = label_widget + + def set_selected_version_info(self, project_name, items): + if not items or not project_name: + self._info_text_widget.set_current_item() + return + first_item = next(iter(items)) + product_item = self._controller.get_product_item( + project_name, + first_item["product_id"], + ) + version_id = first_item["version_id"] + version_item = None + if product_item is not None: + version_item = product_item.version_items.get(version_id) + + self._info_text_widget.set_current_item(product_item, version_item) diff --git a/openpype/tools/ayon_loader/ui/product_group_dialog.py b/openpype/tools/ayon_loader/ui/product_group_dialog.py new file mode 100644 index 0000000000..5737ce58a4 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/product_group_dialog.py @@ -0,0 +1,45 @@ +from qtpy import QtWidgets + +from openpype.tools.utils import PlaceholderLineEdit + + +class ProductGroupDialog(QtWidgets.QDialog): + def __init__(self, controller, parent): + super(ProductGroupDialog, self).__init__(parent) + self.setWindowTitle("Grouping products") + self.setMinimumWidth(250) + self.setModal(True) + + main_label = QtWidgets.QLabel("Group Name", self) + + group_name_input = PlaceholderLineEdit(self) + group_name_input.setPlaceholderText("Remain blank to ungroup..") + + group_btn = QtWidgets.QPushButton("Apply", self) + group_btn.setAutoDefault(True) + group_btn.setDefault(True) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(main_label, 0) + layout.addWidget(group_name_input, 0) + layout.addWidget(group_btn, 0) + + group_btn.clicked.connect(self._on_apply_click) + + self._project_name = None + self._product_ids = set() + + self._controller = controller + self._group_btn = group_btn + self._group_name_input = group_name_input + + def set_product_ids(self, project_name, product_ids): + self._project_name = project_name + self._product_ids = product_ids + + def _on_apply_click(self): + group_name = 
self._group_name_input.text().strip() or None + self._controller.change_products_group( + self._project_name, self._product_ids, group_name + ) + self.close() diff --git a/openpype/tools/ayon_loader/ui/product_types_widget.py b/openpype/tools/ayon_loader/ui/product_types_widget.py new file mode 100644 index 0000000000..a84a7ff846 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/product_types_widget.py @@ -0,0 +1,220 @@ +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.tools.ayon_utils.widgets import get_qt_icon + +PRODUCT_TYPE_ROLE = QtCore.Qt.UserRole + 1 + + +class ProductTypesQtModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + filter_changed = QtCore.Signal() + + def __init__(self, controller): + super(ProductTypesQtModel, self).__init__() + self._controller = controller + + self._refreshing = False + self._bulk_change = False + self._items_by_name = {} + + def is_refreshing(self): + return self._refreshing + + def get_filter_info(self): + """Product types filtering info. + + Returns: + dict[str, bool]: Filtering value by product type name. False value + means to hide product type. + """ + + return { + name: item.checkState() == QtCore.Qt.Checked + for name, item in self._items_by_name.items() + } + + def refresh(self, project_name): + self._refreshing = True + product_type_items = self._controller.get_product_type_items( + project_name) + + items_to_remove = set(self._items_by_name.keys()) + new_items = [] + for product_type_item in product_type_items: + name = product_type_item.name + items_to_remove.discard(name) + item = self._items_by_name.get(product_type_item.name) + if item is None: + item = QtGui.QStandardItem(name) + item.setData(name, PRODUCT_TYPE_ROLE) + item.setEditable(False) + item.setCheckable(True) + new_items.append(item) + self._items_by_name[name] = item + + item.setCheckState( + QtCore.Qt.Checked + if product_type_item.checked + else QtCore.Qt.Unchecked + ) + icon = get_qt_icon(product_type_item.icon) + item.setData(icon, QtCore.Qt.DecorationRole) + + root_item = self.invisibleRootItem() + if new_items: + root_item.appendRows(new_items) + + for name in items_to_remove: + item = self._items_by_name.pop(name) + root_item.removeRow(item.row()) + + self._refreshing = False + self.refreshed.emit() + + def setData(self, index, value, role=None): + checkstate_changed = False + if role is None: + role = QtCore.Qt.EditRole + elif role == QtCore.Qt.CheckStateRole: + checkstate_changed = True + output = super(ProductTypesQtModel, self).setData(index, value, role) + if checkstate_changed and not self._bulk_change: + self.filter_changed.emit() + return output + + def change_state_for_all(self, checked): + if self._items_by_name: + self.change_states(checked, self._items_by_name.keys()) + + def change_states(self, checked, product_types): + product_types = set(product_types) + if not product_types: + return + + if checked is None: + state = None + elif checked: + state = QtCore.Qt.Checked + else: + state = QtCore.Qt.Unchecked + + self._bulk_change = True + + changed = False + for product_type in product_types: + item = self._items_by_name.get(product_type) + if item is None: + continue + new_state = state + item_checkstate = item.checkState() + if new_state is None: + if item_checkstate == QtCore.Qt.Checked: + new_state = QtCore.Qt.Unchecked + else: + new_state = QtCore.Qt.Checked + elif item_checkstate == new_state: + continue + changed = True + item.setCheckState(new_state) + + self._bulk_change = False + + if changed: + self.filter_changed.emit() + + 
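The checkbox states gathered by `ProductTypesQtModel` are exposed through `get_filter_info()` as a plain `{product_type_name: enabled}` mapping, so downstream filtering of product rows does not need Qt at all. The sketch below is illustrative only and is not part of this PR; `ProductItemLike` and `filter_product_items` are hypothetical stand-ins showing how such a mapping could be applied, with product types missing from the mapping treated as visible.

```python
from typing import Dict, Iterable, List


class ProductItemLike:
    """Minimal stand-in for a product item, keeping only the fields used here."""

    def __init__(self, product_name: str, product_type: str):
        self.product_name = product_name
        self.product_type = product_type


def filter_product_items(
    product_items: Iterable[ProductItemLike],
    filter_info: Dict[str, bool],
) -> List[ProductItemLike]:
    """Keep only items whose product type is enabled in the filter mapping.

    Product types that are not present in the mapping default to visible.
    """
    return [
        item
        for item in product_items
        if filter_info.get(item.product_type, True)
    ]


# Example usage with a mapping shaped like get_filter_info() output.
items = [
    ProductItemLike("characterMain", "model"),
    ProductItemLike("characterMain", "rig"),
]
visible = filter_product_items(items, {"rig": False})
print([item.product_name for item in visible])  # only the "model" item remains
```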
+class ProductTypesView(QtWidgets.QListView): + filter_changed = QtCore.Signal() + + def __init__(self, controller, parent): + super(ProductTypesView, self).__init__(parent) + + self.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + self.setAlternatingRowColors(True) + self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + product_types_model = ProductTypesQtModel(controller) + product_types_proxy_model = QtCore.QSortFilterProxyModel() + product_types_proxy_model.setSourceModel(product_types_model) + + self.setModel(product_types_proxy_model) + + product_types_model.refreshed.connect(self._on_refresh_finished) + product_types_model.filter_changed.connect(self._on_filter_change) + self.customContextMenuRequested.connect(self._on_context_menu) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + + self._controller = controller + + self._product_types_model = product_types_model + self._product_types_proxy_model = product_types_proxy_model + + def get_filter_info(self): + return self._product_types_model.get_filter_info() + + def _on_project_change(self, event): + project_name = event["project_name"] + self._product_types_model.refresh(project_name) + + def _on_refresh_finished(self): + self.filter_changed.emit() + + def _on_filter_change(self): + if not self._product_types_model.is_refreshing(): + self.filter_changed.emit() + + def _change_selection_state(self, checkstate): + selection_model = self.selectionModel() + product_types = { + index.data(PRODUCT_TYPE_ROLE) + for index in selection_model.selectedIndexes() + } + product_types.discard(None) + self._product_types_model.change_states(checkstate, product_types) + + def _on_enable_all(self): + self._product_types_model.change_state_for_all(True) + + def _on_disable_all(self): + self._product_types_model.change_state_for_all(False) + + def _on_context_menu(self, pos): + menu = QtWidgets.QMenu(self) + + # Add enable all action + action_check_all = QtWidgets.QAction(menu) + action_check_all.setText("Enable All") + action_check_all.triggered.connect(self._on_enable_all) + # Add disable all action + action_uncheck_all = QtWidgets.QAction(menu) + action_uncheck_all.setText("Disable All") + action_uncheck_all.triggered.connect(self._on_disable_all) + + menu.addAction(action_check_all) + menu.addAction(action_uncheck_all) + + # Get mouse position + global_pos = self.viewport().mapToGlobal(pos) + menu.exec_(global_pos) + + def event(self, event): + if event.type() == QtCore.QEvent.KeyPress: + if event.key() == QtCore.Qt.Key_Space: + self._change_selection_state(None) + return True + + if event.key() == QtCore.Qt.Key_Backspace: + self._change_selection_state(False) + return True + + if event.key() == QtCore.Qt.Key_Return: + self._change_selection_state(True) + return True + + return super(ProductTypesView, self).event(event) diff --git a/openpype/tools/ayon_loader/ui/products_delegates.py b/openpype/tools/ayon_loader/ui/products_delegates.py new file mode 100644 index 0000000000..6729468bfa --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_delegates.py @@ -0,0 +1,191 @@ +import numbers +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.tools.utils.lib import format_version + +from .products_model import ( + PRODUCT_ID_ROLE, + VERSION_NAME_EDIT_ROLE, + VERSION_ID_ROLE, + PRODUCT_IN_SCENE_ROLE, +) + + +class VersionComboBox(QtWidgets.QComboBox): + value_changed = QtCore.Signal(str) + + def __init__(self, product_id, parent): + super(VersionComboBox, 
self).__init__(parent) + self._product_id = product_id + self._items_by_id = {} + + self._current_id = None + + self.currentIndexChanged.connect(self._on_index_change) + + def update_versions(self, version_items, current_version_id): + model = self.model() + root_item = model.invisibleRootItem() + version_items = list(reversed(version_items)) + version_ids = [ + version_item.version_id + for version_item in version_items + ] + if current_version_id not in version_ids and version_ids: + current_version_id = version_ids[0] + self._current_id = current_version_id + + to_remove = set(self._items_by_id.keys()) - set(version_ids) + for item_id in to_remove: + item = self._items_by_id.pop(item_id) + root_item.removeRow(item.row()) + + for idx, version_item in enumerate(version_items): + version_id = version_item.version_id + + item = self._items_by_id.get(version_id) + if item is None: + label = format_version( + abs(version_item.version), version_item.is_hero + ) + item = QtGui.QStandardItem(label) + item.setData(version_id, QtCore.Qt.UserRole) + self._items_by_id[version_id] = item + + if item.row() != idx: + root_item.insertRow(idx, item) + + index = version_ids.index(current_version_id) + if self.currentIndex() != index: + self.setCurrentIndex(index) + + def _on_index_change(self): + idx = self.currentIndex() + value = self.itemData(idx) + if value == self._current_id: + return + self._current_id = value + self.value_changed.emit(self._product_id) + + +class VersionDelegate(QtWidgets.QStyledItemDelegate): + """A delegate that display version integer formatted as version string.""" + + version_changed = QtCore.Signal() + + def __init__(self, *args, **kwargs): + super(VersionDelegate, self).__init__(*args, **kwargs) + self._editor_by_product_id = {} + + def displayText(self, value, locale): + if not isinstance(value, numbers.Integral): + return "N/A" + return format_version(abs(value), value < 0) + + def paint(self, painter, option, index): + fg_color = index.data(QtCore.Qt.ForegroundRole) + if fg_color: + if isinstance(fg_color, QtGui.QBrush): + fg_color = fg_color.color() + elif isinstance(fg_color, QtGui.QColor): + pass + else: + fg_color = None + + if not fg_color: + return super(VersionDelegate, self).paint(painter, option, index) + + if option.widget: + style = option.widget.style() + else: + style = QtWidgets.QApplication.style() + + style.drawControl( + style.CE_ItemViewItem, option, painter, option.widget + ) + + painter.save() + + text = self.displayText( + index.data(QtCore.Qt.DisplayRole), option.locale + ) + pen = painter.pen() + pen.setColor(fg_color) + painter.setPen(pen) + + text_rect = style.subElementRect(style.SE_ItemViewItemText, option) + text_margin = style.proxy().pixelMetric( + style.PM_FocusFrameHMargin, option, option.widget + ) + 1 + + painter.drawText( + text_rect.adjusted(text_margin, 0, - text_margin, 0), + option.displayAlignment, + text + ) + + painter.restore() + + def createEditor(self, parent, option, index): + product_id = index.data(PRODUCT_ID_ROLE) + if not product_id: + return + + editor = VersionComboBox(product_id, parent) + self._editor_by_product_id[product_id] = editor + editor.value_changed.connect(self._on_editor_change) + + return editor + + def _on_editor_change(self, product_id): + editor = self._editor_by_product_id[product_id] + + # Update model data + self.commitData.emit(editor) + # Display model data + self.version_changed.emit() + + def setEditorData(self, editor, index): + editor.clear() + + # Current value of the index + versions = 
index.data(VERSION_NAME_EDIT_ROLE) or [] + version_id = index.data(VERSION_ID_ROLE) + editor.update_versions(versions, version_id) + + def setModelData(self, editor, model, index): + """Apply the integer version back in the model""" + + version_id = editor.itemData(editor.currentIndex()) + model.setData(index, version_id, VERSION_NAME_EDIT_ROLE) + + +class LoadedInSceneDelegate(QtWidgets.QStyledItemDelegate): + """Delegate for Loaded in Scene state columns. + + Shows "Yes" or "No" for 1 or 0 values, or "N/A" for other values. + Colorizes green or dark grey based on values. + """ + + def __init__(self, *args, **kwargs): + super(LoadedInSceneDelegate, self).__init__(*args, **kwargs) + self._colors = { + 1: QtGui.QColor(80, 170, 80), + 0: QtGui.QColor(90, 90, 90), + } + self._default_color = QtGui.QColor(90, 90, 90) + + def displayText(self, value, locale): + if value == 0: + return "No" + elif value == 1: + return "Yes" + return "N/A" + + def initStyleOption(self, option, index): + super(LoadedInSceneDelegate, self).initStyleOption(option, index) + + # Colorize based on value + value = index.data(PRODUCT_IN_SCENE_ROLE) + color = self._colors.get(value, self._default_color) + option.palette.setBrush(QtGui.QPalette.Text, color) diff --git a/openpype/tools/ayon_loader/ui/products_model.py b/openpype/tools/ayon_loader/ui/products_model.py new file mode 100644 index 0000000000..741f15766b --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_model.py @@ -0,0 +1,590 @@ +import collections + +import qtawesome +from qtpy import QtGui, QtCore + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.widgets import get_qt_icon + +PRODUCTS_MODEL_SENDER_NAME = "qt_products_model" + +GROUP_TYPE_ROLE = QtCore.Qt.UserRole + 1 +MERGED_COLOR_ROLE = QtCore.Qt.UserRole + 2 +FOLDER_LABEL_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_ID_ROLE = QtCore.Qt.UserRole + 4 +PRODUCT_ID_ROLE = QtCore.Qt.UserRole + 5 +PRODUCT_NAME_ROLE = QtCore.Qt.UserRole + 6 +PRODUCT_TYPE_ROLE = QtCore.Qt.UserRole + 7 +PRODUCT_TYPE_ICON_ROLE = QtCore.Qt.UserRole + 8 +PRODUCT_IN_SCENE_ROLE = QtCore.Qt.UserRole + 9 +VERSION_ID_ROLE = QtCore.Qt.UserRole + 10 +VERSION_HERO_ROLE = QtCore.Qt.UserRole + 11 +VERSION_NAME_ROLE = QtCore.Qt.UserRole + 12 +VERSION_NAME_EDIT_ROLE = QtCore.Qt.UserRole + 13 +VERSION_PUBLISH_TIME_ROLE = QtCore.Qt.UserRole + 14 +VERSION_AUTHOR_ROLE = QtCore.Qt.UserRole + 15 +VERSION_FRAME_RANGE_ROLE = QtCore.Qt.UserRole + 16 +VERSION_DURATION_ROLE = QtCore.Qt.UserRole + 17 +VERSION_HANDLES_ROLE = QtCore.Qt.UserRole + 18 +VERSION_STEP_ROLE = QtCore.Qt.UserRole + 19 +VERSION_AVAILABLE_ROLE = QtCore.Qt.UserRole + 20 +VERSION_THUMBNAIL_ID_ROLE = QtCore.Qt.UserRole + 21 + + +class ProductsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + version_changed = QtCore.Signal() + column_labels = [ + "Product name", + "Product type", + "Folder", + "Version", + "Time", + "Author", + "Frames", + "Duration", + "Handles", + "Step", + "In scene", + "Availability", + ] + merged_items_colors = [ + ("#{0:02x}{1:02x}{2:02x}".format(*c), QtGui.QColor(*c)) + for c in [ + (55, 161, 222), # Light Blue + (231, 176, 0), # Yellow + (154, 13, 255), # Purple + (130, 184, 30), # Light Green + (211, 79, 63), # Light Red + (179, 181, 182), # Grey + (194, 57, 179), # Pink + (0, 120, 215), # Dark Blue + (0, 204, 106), # Dark Green + (247, 99, 12), # Orange + ] + ] + + version_col = column_labels.index("Version") + published_time_col = column_labels.index("Time") + folders_label_col = 
column_labels.index("Folder") + in_scene_col = column_labels.index("In scene") + + def __init__(self, controller): + super(ProductsModel, self).__init__() + self.setColumnCount(len(self.column_labels)) + for idx, label in enumerate(self.column_labels): + self.setHeaderData(idx, QtCore.Qt.Horizontal, label) + self._controller = controller + + # Variables to store 'QStandardItem' + self._items_by_id = {} + self._group_items_by_name = {} + self._merged_items_by_id = {} + + # product item objects (they have version information) + self._product_items_by_id = {} + self._grouping_enabled = True + self._reset_merge_color = False + self._color_iterator = self._color_iter() + self._group_icon = None + + self._last_project_name = None + self._last_folder_ids = [] + + def get_product_item_indexes(self): + return [ + item.index() + for item in self._items_by_id.values() + ] + + def get_product_item_by_id(self, product_id): + """ + + Args: + product_id (str): Product id. + + Returns: + Union[ProductItem, None]: Product item with version information. + """ + + return self._product_items_by_id.get(product_id) + + def set_enable_grouping(self, enable_grouping): + if enable_grouping is self._grouping_enabled: + return + self._grouping_enabled = enable_grouping + # Ignore change if groups are not available + self.refresh(self._last_project_name, self._last_folder_ids) + + def flags(self, index): + # Make the version column editable + if index.column() == self.version_col and index.data(PRODUCT_ID_ROLE): + return ( + QtCore.Qt.ItemIsEnabled + | QtCore.Qt.ItemIsSelectable + | QtCore.Qt.ItemIsEditable + ) + if index.column() != 0: + index = self.index(index.row(), 0, index.parent()) + return super(ProductsModel, self).flags(index) + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + if not index.isValid(): + return None + + col = index.column() + if col == 0: + return super(ProductsModel, self).data(index, role) + + if role == QtCore.Qt.DecorationRole: + if col == 1: + role = PRODUCT_TYPE_ICON_ROLE + else: + return None + + if ( + role == VERSION_NAME_EDIT_ROLE + or (role == QtCore.Qt.EditRole and col == self.version_col) + ): + index = self.index(index.row(), 0, index.parent()) + product_id = index.data(PRODUCT_ID_ROLE) + product_item = self._product_items_by_id.get(product_id) + if product_item is None: + return None + return list(product_item.version_items.values()) + + if role == QtCore.Qt.EditRole: + return None + + if role == QtCore.Qt.DisplayRole: + if not index.data(PRODUCT_ID_ROLE): + return None + if col == self.version_col: + role = VERSION_NAME_ROLE + elif col == 1: + role = PRODUCT_TYPE_ROLE + elif col == 2: + role = FOLDER_LABEL_ROLE + elif col == 4: + role = VERSION_PUBLISH_TIME_ROLE + elif col == 5: + role = VERSION_AUTHOR_ROLE + elif col == 6: + role = VERSION_FRAME_RANGE_ROLE + elif col == 7: + role = VERSION_DURATION_ROLE + elif col == 8: + role = VERSION_HANDLES_ROLE + elif col == 9: + role = VERSION_STEP_ROLE + elif col == 10: + role = PRODUCT_IN_SCENE_ROLE + elif col == 11: + role = VERSION_AVAILABLE_ROLE + else: + return None + + index = self.index(index.row(), 0, index.parent()) + + return super(ProductsModel, self).data(index, role) + + def setData(self, index, value, role=None): + if not index.isValid(): + return False + + if role is None: + role = QtCore.Qt.EditRole + + col = index.column() + if col == self.version_col and role == QtCore.Qt.EditRole: + role = VERSION_NAME_EDIT_ROLE + + if role == VERSION_NAME_EDIT_ROLE: + if col != 0: + index = 
self.index(index.row(), 0, index.parent()) + product_id = index.data(PRODUCT_ID_ROLE) + product_item = self._product_items_by_id[product_id] + final_version_item = None + for v_id, version_item in product_item.version_items.items(): + if v_id == value: + final_version_item = version_item + break + + if final_version_item is None: + return False + if index.data(VERSION_ID_ROLE) == final_version_item.version_id: + return True + item = self.itemFromIndex(index) + self._set_version_data_to_product_item(item, final_version_item) + self.version_changed.emit() + return True + return super(ProductsModel, self).setData(index, value, role) + + def _get_next_color(self): + return next(self._color_iterator) + + def _color_iter(self): + while True: + for color in self.merged_items_colors: + if self._reset_merge_color: + self._reset_merge_color = False + break + yield color + + def _clear(self): + root_item = self.invisibleRootItem() + root_item.removeRows(0, root_item.rowCount()) + + self._items_by_id = {} + self._group_items_by_name = {} + self._merged_items_by_id = {} + self._product_items_by_id = {} + self._reset_merge_color = True + + def _get_group_icon(self): + if self._group_icon is None: + self._group_icon = qtawesome.icon( + "fa.object-group", + color=get_default_entity_icon_color() + ) + return self._group_icon + + def _get_group_model_item(self, group_name): + model_item = self._group_items_by_name.get(group_name) + if model_item is None: + model_item = QtGui.QStandardItem(group_name) + model_item.setData( + self._get_group_icon(), QtCore.Qt.DecorationRole + ) + model_item.setData(0, GROUP_TYPE_ROLE) + model_item.setEditable(False) + model_item.setColumnCount(self.columnCount()) + self._group_items_by_name[group_name] = model_item + return model_item + + def _get_merged_model_item(self, path, count, hex_color): + model_item = self._merged_items_by_id.get(path) + if model_item is None: + model_item = QtGui.QStandardItem() + model_item.setData(1, GROUP_TYPE_ROLE) + model_item.setData(hex_color, MERGED_COLOR_ROLE) + model_item.setEditable(False) + model_item.setColumnCount(self.columnCount()) + self._merged_items_by_id[path] = model_item + label = "{} ({})".format(path, count) + model_item.setData(label, QtCore.Qt.DisplayRole) + return model_item + + def _set_version_data_to_product_item(self, model_item, version_item): + """ + + Args: + model_item (QtGui.QStandardItem): Item which should have values + from version item. + version_item (VersionItem): Item from entities model with + information about version. 
+ """ + + model_item.setData(version_item.version_id, VERSION_ID_ROLE) + model_item.setData(version_item.version, VERSION_NAME_ROLE) + model_item.setData(version_item.version_id, VERSION_ID_ROLE) + model_item.setData(version_item.is_hero, VERSION_HERO_ROLE) + model_item.setData( + version_item.published_time, VERSION_PUBLISH_TIME_ROLE + ) + model_item.setData(version_item.author, VERSION_AUTHOR_ROLE) + model_item.setData(version_item.frame_range, VERSION_FRAME_RANGE_ROLE) + model_item.setData(version_item.duration, VERSION_DURATION_ROLE) + model_item.setData(version_item.handles, VERSION_HANDLES_ROLE) + model_item.setData(version_item.step, VERSION_STEP_ROLE) + model_item.setData( + version_item.thumbnail_id, VERSION_THUMBNAIL_ID_ROLE) + + def _get_product_model_item(self, product_item): + model_item = self._items_by_id.get(product_item.product_id) + versions = list(product_item.version_items.values()) + versions.sort() + last_version = versions[-1] + if model_item is None: + product_id = product_item.product_id + model_item = QtGui.QStandardItem(product_item.product_name) + model_item.setEditable(False) + icon = get_qt_icon(product_item.product_icon) + product_type_icon = get_qt_icon(product_item.product_type_icon) + model_item.setColumnCount(self.columnCount()) + model_item.setData(icon, QtCore.Qt.DecorationRole) + model_item.setData(product_id, PRODUCT_ID_ROLE) + model_item.setData(product_item.product_name, PRODUCT_NAME_ROLE) + model_item.setData(product_item.product_type, PRODUCT_TYPE_ROLE) + model_item.setData(product_type_icon, PRODUCT_TYPE_ICON_ROLE) + model_item.setData(product_item.folder_id, FOLDER_ID_ROLE) + + self._product_items_by_id[product_id] = product_item + self._items_by_id[product_id] = model_item + + model_item.setData(product_item.folder_label, FOLDER_LABEL_ROLE) + in_scene = 1 if product_item.product_in_scene else 0 + model_item.setData(in_scene, PRODUCT_IN_SCENE_ROLE) + + self._set_version_data_to_product_item(model_item, last_version) + return model_item + + def get_last_project_name(self): + return self._last_project_name + + def refresh(self, project_name, folder_ids): + self._clear() + + self._last_project_name = project_name + self._last_folder_ids = folder_ids + + product_items = self._controller.get_product_items( + project_name, + folder_ids, + sender=PRODUCTS_MODEL_SENDER_NAME + ) + product_items_by_id = { + product_item.product_id: product_item + for product_item in product_items + } + + # Prepare product groups + product_name_matches_by_group = collections.defaultdict(dict) + for product_item in product_items_by_id.values(): + group_name = None + if self._grouping_enabled: + group_name = product_item.group_name + + product_name = product_item.product_name + group = product_name_matches_by_group[group_name] + if product_name not in group: + group[product_name] = [product_item] + continue + group[product_name].append(product_item) + + group_names = set(product_name_matches_by_group.keys()) + + root_item = self.invisibleRootItem() + new_root_items = [] + merged_paths = set() + for group_name in group_names: + key_parts = [] + if group_name: + key_parts.append(group_name) + + groups = product_name_matches_by_group[group_name] + merged_product_items = {} + top_items = [] + group_product_types = set() + for product_name, product_items in groups.items(): + group_product_types |= {p.product_type for p in product_items} + if len(product_items) == 1: + top_items.append(product_items[0]) + else: + path = "/".join(key_parts + [product_name]) + merged_paths.add(path) 
+ merged_product_items[path] = ( + product_name, + product_items, + ) + + parent_item = None + if group_name: + parent_item = self._get_group_model_item(group_name) + parent_item.setData( + "|".join(group_product_types), PRODUCT_TYPE_ROLE) + + new_items = [] + if parent_item is not None and parent_item.row() < 0: + new_root_items.append(parent_item) + + for product_item in top_items: + item = self._get_product_model_item(product_item) + new_items.append(item) + + for path_info in merged_product_items.values(): + product_name, product_items = path_info + (merged_color_hex, merged_color_qt) = self._get_next_color() + merged_color = qtawesome.icon( + "fa.circle", color=merged_color_qt) + merged_item = self._get_merged_model_item( + product_name, len(product_items), merged_color_hex) + merged_item.setData(merged_color, QtCore.Qt.DecorationRole) + new_items.append(merged_item) + + merged_product_types = set() + new_merged_items = [] + for product_item in product_items: + item = self._get_product_model_item(product_item) + new_merged_items.append(item) + merged_product_types.add(product_item.product_type) + + merged_item.setData( + "|".join(merged_product_types), PRODUCT_TYPE_ROLE) + if new_merged_items: + merged_item.appendRows(new_merged_items) + + if not new_items: + continue + + if parent_item is None: + new_root_items.extend(new_items) + else: + parent_item.appendRows(new_items) + + if new_root_items: + root_item.appendRows(new_root_items) + + self.refreshed.emit() + # --------------------------------- + # This implementation does not call '_clear' at the start + # but is more complex and probably slower + # --------------------------------- + # def _remove_items(self, items): + # if not items: + # return + # root_item = self.invisibleRootItem() + # for item in items: + # row = item.row() + # if row < 0: + # continue + # parent = item.parent() + # if parent is None: + # parent = root_item + # parent.removeRow(row) + # + # def _remove_group_items(self, group_names): + # group_items = [ + # self._group_items_by_name.pop(group_name) + # for group_name in group_names + # ] + # self._remove_items(group_items) + # + # def _remove_merged_items(self, paths): + # merged_items = [ + # self._merged_items_by_id.pop(path) + # for path in paths + # ] + # self._remove_items(merged_items) + # + # def _remove_product_items(self, product_ids): + # product_items = [] + # for product_id in product_ids: + # self._product_items_by_id.pop(product_id) + # product_items.append(self._items_by_id.pop(product_id)) + # self._remove_items(product_items) + # + # def _add_to_new_items(self, item, parent_item, new_items, root_item): + # if item.row() < 0: + # new_items.append(item) + # else: + # item_parent = item.parent() + # if item_parent is not parent_item: + # if item_parent is None: + # item_parent = root_item + # item_parent.takeRow(item.row()) + # new_items.append(item) + + # def refresh(self, project_name, folder_ids): + # product_items = self._controller.get_product_items( + # project_name, + # folder_ids, + # sender=PRODUCTS_MODEL_SENDER_NAME + # ) + # product_items_by_id = { + # product_item.product_id: product_item + # for product_item in product_items + # } + # # Remove product items that are not available + # product_ids_to_remove = ( + # set(self._items_by_id.keys()) - set(product_items_by_id.keys()) + # ) + # self._remove_product_items(product_ids_to_remove) + # + # # Prepare product groups + # product_name_matches_by_group = collections.defaultdict(dict) + # for product_item in 
product_items_by_id.values(): + # group_name = None + # if self._grouping_enabled: + # group_name = product_item.group_name + # + # product_name = product_item.product_name + # group = product_name_matches_by_group[group_name] + # if product_name not in group: + # group[product_name] = [product_item] + # continue + # group[product_name].append(product_item) + # + # group_names = set(product_name_matches_by_group.keys()) + # + # root_item = self.invisibleRootItem() + # new_root_items = [] + # merged_paths = set() + # for group_name in group_names: + # key_parts = [] + # if group_name: + # key_parts.append(group_name) + # + # groups = product_name_matches_by_group[group_name] + # merged_product_items = {} + # top_items = [] + # for product_name, product_items in groups.items(): + # if len(product_items) == 1: + # top_items.append(product_items[0]) + # else: + # path = "/".join(key_parts + [product_name]) + # merged_paths.add(path) + # merged_product_items[path] = product_items + # + # parent_item = None + # if group_name: + # parent_item = self._get_group_model_item(group_name) + # + # new_items = [] + # if parent_item is not None and parent_item.row() < 0: + # new_root_items.append(parent_item) + # + # for product_item in top_items: + # item = self._get_product_model_item(product_item) + # self._add_to_new_items( + # item, parent_item, new_items, root_item + # ) + # + # for path, product_items in merged_product_items.items(): + # merged_item = self._get_merged_model_item(path) + # self._add_to_new_items( + # merged_item, parent_item, new_items, root_item + # ) + # + # new_merged_items = [] + # for product_item in product_items: + # item = self._get_product_model_item(product_item) + # self._add_to_new_items( + # item, merged_item, new_merged_items, root_item + # ) + # + # if new_merged_items: + # merged_item.appendRows(new_merged_items) + # + # if not new_items: + # continue + # + # if parent_item is not None: + # parent_item.appendRows(new_items) + # continue + # + # new_root_items.extend(new_items) + # + # root_item.appendRows(new_root_items) + # + # merged_item_ids_to_remove = ( + # set(self._merged_items_by_id.keys()) - merged_paths + # ) + # group_names_to_remove = ( + # set(self._group_items_by_name.keys()) - set(group_names) + # ) + # self._remove_merged_items(merged_item_ids_to_remove) + # self._remove_group_items(group_names_to_remove) diff --git a/openpype/tools/ayon_loader/ui/products_widget.py b/openpype/tools/ayon_loader/ui/products_widget.py new file mode 100644 index 0000000000..2d4959dc19 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/products_widget.py @@ -0,0 +1,400 @@ +import collections + +from qtpy import QtWidgets, QtCore + +from openpype.tools.utils import ( + RecursiveSortFilterProxyModel, + DeselectableTreeView, +) +from openpype.tools.utils.delegates import PrettyTimeDelegate + +from .products_model import ( + ProductsModel, + PRODUCTS_MODEL_SENDER_NAME, + PRODUCT_TYPE_ROLE, + GROUP_TYPE_ROLE, + MERGED_COLOR_ROLE, + FOLDER_ID_ROLE, + PRODUCT_ID_ROLE, + VERSION_ID_ROLE, + VERSION_THUMBNAIL_ID_ROLE, +) +from .products_delegates import VersionDelegate, LoadedInSceneDelegate +from .actions_utils import show_actions_menu + + +class ProductsProxyModel(RecursiveSortFilterProxyModel): + def __init__(self, parent=None): + super(ProductsProxyModel, self).__init__(parent) + + self._product_type_filters = {} + self._ascending_sort = True + + def set_product_type_filters(self, product_type_filters): + self._product_type_filters = product_type_filters + 
self.invalidateFilter() + + def filterAcceptsRow(self, source_row, source_parent): + source_model = self.sourceModel() + index = source_model.index(source_row, 0, source_parent) + product_types_s = source_model.data(index, PRODUCT_TYPE_ROLE) + product_types = [] + if product_types_s: + product_types = product_types_s.split("|") + + for product_type in product_types: + if not self._product_type_filters.get(product_type, True): + return False + return super(ProductsProxyModel, self).filterAcceptsRow( + source_row, source_parent) + + def lessThan(self, left, right): + l_model = left.model() + r_model = right.model() + left_group_type = l_model.data(left, GROUP_TYPE_ROLE) + right_group_type = r_model.data(right, GROUP_TYPE_ROLE) + # Groups are always on top, merged product types are below + # and items without group at the bottom + # QUESTION Do we need to do it this way? + if left_group_type != right_group_type: + if left_group_type is None: + output = False + elif right_group_type is None: + output = True + else: + output = left_group_type < right_group_type + if not self._ascending_sort: + output = not output + return output + return super(ProductsProxyModel, self).lessThan(left, right) + + def sort(self, column, order=None): + if order is None: + order = QtCore.Qt.AscendingOrder + self._ascending_sort = order == QtCore.Qt.AscendingOrder + super(ProductsProxyModel, self).sort(column, order) + + +class ProductsWidget(QtWidgets.QWidget): + refreshed = QtCore.Signal() + merged_products_selection_changed = QtCore.Signal() + selection_changed = QtCore.Signal() + version_changed = QtCore.Signal() + default_widths = ( + 200, # Product name + 90, # Product type + 130, # Folder label + 60, # Version + 125, # Time + 75, # Author + 75, # Frames + 60, # Duration + 55, # Handles + 10, # Step + 25, # Loaded in scene + 65, # Site info (maybe?) 
+ ) + + def __init__(self, controller, parent): + super(ProductsWidget, self).__init__(parent) + + self._controller = controller + + products_view = DeselectableTreeView(self) + # TODO - define custom object name in style + products_view.setObjectName("SubsetView") + products_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + products_view.setAllColumnsShowFocus(True) + # TODO - add context menu + products_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + products_view.setSortingEnabled(True) + # Sort by product type + products_view.sortByColumn(1, QtCore.Qt.AscendingOrder) + products_view.setAlternatingRowColors(True) + + products_model = ProductsModel(controller) + products_proxy_model = ProductsProxyModel() + products_proxy_model.setSourceModel(products_model) + + products_view.setModel(products_proxy_model) + + for idx, width in enumerate(self.default_widths): + products_view.setColumnWidth(idx, width) + + version_delegate = VersionDelegate() + products_view.setItemDelegateForColumn( + products_model.version_col, version_delegate) + + time_delegate = PrettyTimeDelegate() + products_view.setItemDelegateForColumn( + products_model.published_time_col, time_delegate) + + in_scene_delegate = LoadedInSceneDelegate() + products_view.setItemDelegateForColumn( + products_model.in_scene_col, in_scene_delegate) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(products_view, 1) + + products_proxy_model.rowsInserted.connect(self._on_rows_inserted) + products_proxy_model.rowsMoved.connect(self._on_rows_moved) + products_model.refreshed.connect(self._on_refresh) + products_view.customContextMenuRequested.connect( + self._on_context_menu) + products_view.selectionModel().selectionChanged.connect( + self._on_selection_change) + products_model.version_changed.connect(self._on_version_change) + + controller.register_event_callback( + "selection.folders.changed", + self._on_folders_selection_change, + ) + controller.register_event_callback( + "products.refresh.finished", + self._on_products_refresh_finished + ) + controller.register_event_callback( + "products.group.changed", + self._on_group_changed + ) + + self._products_view = products_view + self._products_model = products_model + self._products_proxy_model = products_proxy_model + + self._version_delegate = version_delegate + self._time_delegate = time_delegate + + self._selected_project_name = None + self._selected_folder_ids = set() + + self._selected_merged_products = [] + self._selected_versions_info = [] + + # Set initial state of widget + # - Hide folders column + self._update_folders_label_visible() + # - Hide in scene column if is not supported (this won't change) + products_view.setColumnHidden( + products_model.in_scene_col, + not controller.is_loaded_products_supported() + ) + + def set_name_filter(self, name): + """Set filter of product name. + + Args: + name (str): The string filter. + """ + + self._products_proxy_model.setFilterFixedString(name) + + def set_product_type_filter(self, product_type_filters): + """ + + Args: + product_type_filters (dict[str, bool]): The filter of product + types. 
+ """ + + self._products_proxy_model.set_product_type_filters( + product_type_filters + ) + + def set_enable_grouping(self, enable_grouping): + self._products_model.set_enable_grouping(enable_grouping) + + def get_selected_merged_products(self): + return self._selected_merged_products + + def get_selected_version_info(self): + return self._selected_versions_info + + def refresh(self): + self._refresh_model() + + def _fill_version_editor(self): + model = self._products_proxy_model + index_queue = collections.deque() + for row in range(model.rowCount()): + index_queue.append((row, None)) + + version_col = self._products_model.version_col + while index_queue: + (row, parent_index) = index_queue.popleft() + args = [row, 0] + if parent_index is not None: + args.append(parent_index) + index = model.index(*args) + rows = model.rowCount(index) + for row in range(rows): + index_queue.append((row, index)) + + product_id = model.data(index, PRODUCT_ID_ROLE) + if product_id is not None: + args[1] = version_col + v_index = model.index(*args) + self._products_view.openPersistentEditor(v_index) + + def _on_refresh(self): + self._fill_version_editor() + self.refreshed.emit() + + def _on_rows_inserted(self): + self._fill_version_editor() + + def _on_rows_moved(self): + self._fill_version_editor() + + def _refresh_model(self): + self._products_model.refresh( + self._selected_project_name, + self._selected_folder_ids + ) + + def _on_context_menu(self, point): + selection_model = self._products_view.selectionModel() + model = self._products_view.model() + project_name = self._products_model.get_last_project_name() + + version_ids = set() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + while indexes_queue: + index = indexes_queue.popleft() + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + version_id = model.data(index, VERSION_ID_ROLE) + if version_id is not None: + version_ids.add(version_id) + + action_items = self._controller.get_versions_action_items( + project_name, version_ids) + + # Prepare global point where to show the menu + global_point = self._products_view.mapToGlobal(point) + + result = show_actions_menu( + action_items, + global_point, + len(version_ids) == 1, + self + ) + action_item, options = result + if action_item is None or options is None: + return + + self._controller.trigger_action_item( + action_item.identifier, + options, + action_item.project_name, + version_ids=action_item.version_ids, + representation_ids=action_item.representation_ids, + ) + + def _on_selection_change(self): + selected_merged_products = [] + selection_model = self._products_view.selectionModel() + model = self._products_view.model() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + + # Helper for 'version_items' to avoid duplicated items + all_product_ids = set() + selected_version_ids = set() + # Version items contains information about selected version items + selected_versions_info = [] + while indexes_queue: + index = indexes_queue.popleft() + if index.column() != 0: + continue + + group_type = model.data(index, GROUP_TYPE_ROLE) + if group_type is None: + product_id = model.data(index, PRODUCT_ID_ROLE) + # Skip duplicates - when group and item are selected the item + # would be in the loop multiple times + if product_id in all_product_ids: + continue + + all_product_ids.add(product_id) + + version_id = model.data(index, VERSION_ID_ROLE) + 
selected_version_ids.add(version_id) + + thumbnail_id = model.data(index, VERSION_THUMBNAIL_ID_ROLE) + selected_versions_info.append({ + "folder_id": model.data(index, FOLDER_ID_ROLE), + "product_id": product_id, + "version_id": version_id, + "thumbnail_id": thumbnail_id, + }) + continue + + if group_type == 0: + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + continue + + if group_type != 1: + continue + + item_folder_ids = set() + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + + folder_id = model.data(child_index, FOLDER_ID_ROLE) + item_folder_ids.add(folder_id) + + if not item_folder_ids: + continue + + hex_color = model.data(index, MERGED_COLOR_ROLE) + item_data = { + "color": hex_color, + "folder_ids": item_folder_ids + } + selected_merged_products.append(item_data) + + prev_selected_merged_products = self._selected_merged_products + self._selected_merged_products = selected_merged_products + self._selected_versions_info = selected_versions_info + + if selected_merged_products != prev_selected_merged_products: + self.merged_products_selection_changed.emit() + self.selection_changed.emit() + self._controller.set_selected_versions(selected_version_ids) + + def _on_version_change(self): + self._on_selection_change() + + def _on_folders_selection_change(self, event): + self._selected_project_name = event["project_name"] + self._selected_folder_ids = event["folder_ids"] + self._refresh_model() + self._update_folders_label_visible() + + def _update_folders_label_visible(self): + folders_label_hidden = len(self._selected_folder_ids) <= 1 + self._products_view.setColumnHidden( + self._products_model.folders_label_col, + folders_label_hidden + ) + + def _on_products_refresh_finished(self, event): + if event["sender"] != PRODUCTS_MODEL_SENDER_NAME: + self._refresh_model() + + def _on_group_changed(self, event): + if event["project_name"] != self._selected_project_name: + return + folder_ids = event["folder_ids"] + if not set(folder_ids).intersection(set(self._selected_folder_ids)): + return + self.refresh() diff --git a/openpype/tools/ayon_loader/ui/repres_widget.py b/openpype/tools/ayon_loader/ui/repres_widget.py new file mode 100644 index 0000000000..7de582e629 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/repres_widget.py @@ -0,0 +1,338 @@ +import collections + +from qtpy import QtWidgets, QtGui, QtCore +import qtawesome + +from openpype.style import get_default_entity_icon_color +from openpype.tools.ayon_utils.widgets import get_qt_icon +from openpype.tools.utils import DeselectableTreeView + +from .actions_utils import show_actions_menu + +REPRESENTAION_NAME_ROLE = QtCore.Qt.UserRole + 1 +REPRESENTATION_ID_ROLE = QtCore.Qt.UserRole + 2 +PRODUCT_NAME_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_LABEL_ROLE = QtCore.Qt.UserRole + 4 +GROUP_TYPE_ROLE = QtCore.Qt.UserRole + 5 + + +class RepresentationsModel(QtGui.QStandardItemModel): + refreshed = QtCore.Signal() + colums_info = [ + ("Name", 120), + ("Product name", 125), + ("Folder", 125), + # ("Active site", 85), + # ("Remote site", 85) + ] + column_labels = [label for label, _ in colums_info] + column_widths = [width for _, width in colums_info] + folder_column = column_labels.index("Product name") + + def __init__(self, controller): + super(RepresentationsModel, self).__init__() + + self.setColumnCount(len(self.column_labels)) + + for idx, label in enumerate(self.column_labels): + 
self.setHeaderData(idx, QtCore.Qt.Horizontal, label) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + controller.register_event_callback( + "selection.versions.changed", + self._on_version_change + ) + self._selected_project_name = None + self._selected_version_ids = None + + self._group_icon = None + + self._items_by_id = {} + self._groups_items_by_name = {} + + self._controller = controller + + def refresh(self): + repre_items = self._controller.get_representation_items( + self._selected_project_name, self._selected_version_ids + ) + self._fill_items(repre_items) + self.refreshed.emit() + + def data(self, index, role=None): + if role is None: + role = QtCore.Qt.DisplayRole + + col = index.column() + if col != 0: + if role == QtCore.Qt.DecorationRole: + return None + + if role == QtCore.Qt.DisplayRole: + if col == 1: + role = PRODUCT_NAME_ROLE + elif col == 2: + role = FOLDER_LABEL_ROLE + index = self.index(index.row(), 0, index.parent()) + return super(RepresentationsModel, self).data(index, role) + + def setData(self, index, value, role=None): + if role is None: + role = QtCore.Qt.EditRole + return super(RepresentationsModel, self).setData(index, value, role) + + def _clear_items(self): + self._items_by_id = {} + root_item = self.invisibleRootItem() + root_item.removeRows(0, root_item.rowCount()) + + def _get_repre_item(self, repre_item): + repre_id = repre_item.representation_id + repre_name = repre_item.representation_name + repre_icon = repre_item.representation_icon + item = self._items_by_id.get(repre_id) + is_new_item = False + if item is None: + is_new_item = True + item = QtGui.QStandardItem() + self._items_by_id[repre_id] = item + item.setColumnCount(self.columnCount()) + item.setEditable(False) + + icon = get_qt_icon(repre_icon) + item.setData(repre_name, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + item.setData(repre_name, REPRESENTAION_NAME_ROLE) + item.setData(repre_id, REPRESENTATION_ID_ROLE) + item.setData(repre_item.product_name, PRODUCT_NAME_ROLE) + item.setData(repre_item.folder_label, FOLDER_LABEL_ROLE) + return is_new_item, item + + def _get_group_icon(self): + if self._group_icon is None: + self._group_icon = qtawesome.icon( + "fa.folder", + color=get_default_entity_icon_color() + ) + return self._group_icon + + def _get_group_item(self, repre_name): + item = self._groups_items_by_name.get(repre_name) + if item is not None: + return False, item + + # TODO add color + item = QtGui.QStandardItem() + item.setColumnCount(self.columnCount()) + item.setData(repre_name, QtCore.Qt.DisplayRole) + item.setData(self._get_group_icon(), QtCore.Qt.DecorationRole) + item.setData(0, GROUP_TYPE_ROLE) + item.setEditable(False) + self._groups_items_by_name[repre_name] = item + return True, item + + def _fill_items(self, repre_items): + items_to_remove = set(self._items_by_id.keys()) + repre_items_by_name = collections.defaultdict(list) + for repre_item in repre_items: + items_to_remove.discard(repre_item.representation_id) + repre_name = repre_item.representation_name + repre_items_by_name[repre_name].append(repre_item) + + root_item = self.invisibleRootItem() + for repre_id in items_to_remove: + item = self._items_by_id.pop(repre_id) + parent_item = item.parent() + if parent_item is None: + parent_item = root_item + parent_item.removeRow(item.row()) + + group_names = set() + new_root_items = [] + for repre_name, repre_name_items in repre_items_by_name.items(): + group_item = None + parent_is_group = 
False + if len(repre_name_items) > 1: + group_names.add(repre_name) + is_new_group, group_item = self._get_group_item(repre_name) + if is_new_group: + new_root_items.append(group_item) + parent_is_group = True + + new_group_items = [] + for repre_item in repre_name_items: + is_new_item, item = self._get_repre_item(repre_item) + item_parent = item.parent() + if item_parent is None: + item_parent = root_item + + if not is_new_item: + if parent_is_group: + if item_parent is group_item: + continue + elif item_parent is root_item: + continue + item_parent.takeRow(item.row()) + is_new_item = True + + if is_new_item: + new_group_items.append(item) + + if not new_group_items: + continue + + if group_item is not None: + group_item.appendRows(new_group_items) + else: + new_root_items.extend(new_group_items) + + if new_root_items: + root_item.appendRows(new_root_items) + + for group_name in set(self._groups_items_by_name) - group_names: + item = self._groups_items_by_name.pop(group_name) + parent_item = item.parent() + if parent_item is None: + parent_item = root_item + parent_item.removeRow(item.row()) + + def _on_project_change(self, event): + self._selected_project_name = event["project_name"] + + def _on_version_change(self, event): + self._selected_version_ids = event["version_ids"] + self.refresh() + + +class RepresentationsWidget(QtWidgets.QWidget): + def __init__(self, controller, parent): + super(RepresentationsWidget, self).__init__(parent) + + repre_view = DeselectableTreeView(self) + repre_view.setSelectionMode( + QtWidgets.QAbstractItemView.ExtendedSelection + ) + repre_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + repre_view.setSortingEnabled(True) + repre_view.setAlternatingRowColors(True) + + repre_model = RepresentationsModel(controller) + repre_proxy_model = QtCore.QSortFilterProxyModel() + repre_proxy_model.setSourceModel(repre_model) + repre_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + repre_view.setModel(repre_proxy_model) + + for idx, width in enumerate(repre_model.column_widths): + repre_view.setColumnWidth(idx, width) + + main_layout = QtWidgets.QVBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(repre_view, 1) + + repre_view.customContextMenuRequested.connect( + self._on_context_menu) + repre_view.selectionModel().selectionChanged.connect( + self._on_selection_change) + repre_model.refreshed.connect(self._on_model_refresh) + + controller.register_event_callback( + "selection.project.changed", + self._on_project_change + ) + controller.register_event_callback( + "selection.folders.changed", + self._on_folder_change + ) + + self._controller = controller + self._selected_project_name = None + self._selected_multiple_folders = None + + self._repre_view = repre_view + self._repre_model = repre_model + self._repre_proxy_model = repre_proxy_model + + self._set_multiple_folders_selected(False) + + def refresh(self): + self._repre_model.refresh() + + def _on_folder_change(self, event): + self._set_multiple_folders_selected(len(event["folder_ids"]) > 1) + + def _on_project_change(self, event): + self._selected_project_name = event["project_name"] + + def _set_multiple_folders_selected(self, selected_multiple_folders): + if selected_multiple_folders == self._selected_multiple_folders: + return + self._selected_multiple_folders = selected_multiple_folders + self._repre_view.setColumnHidden( + self._repre_model.folder_column, + not self._selected_multiple_folders + ) + + def _on_model_refresh(self): + 
self._repre_proxy_model.sort(0) + + def _get_selected_repre_indexes(self): + selection_model = self._repre_view.selectionModel() + model = self._repre_view.model() + indexes_queue = collections.deque() + indexes_queue.extend(selection_model.selectedIndexes()) + + selected_indexes = [] + while indexes_queue: + index = indexes_queue.popleft() + if index.column() != 0: + continue + + group_type = model.data(index, GROUP_TYPE_ROLE) + if group_type is None: + selected_indexes.append(index) + + elif group_type == 0: + for row in range(model.rowCount(index)): + child_index = model.index(row, 0, index) + indexes_queue.append(child_index) + + return selected_indexes + + def _get_selected_repre_ids(self): + repre_ids = { + index.data(REPRESENTATION_ID_ROLE) + for index in self._get_selected_repre_indexes() + } + repre_ids.discard(None) + return repre_ids + + def _on_selection_change(self): + selected_repre_ids = self._get_selected_repre_ids() + self._controller.set_selected_representations(selected_repre_ids) + + def _on_context_menu(self, point): + repre_ids = self._get_selected_repre_ids() + action_items = self._controller.get_representations_action_items( + self._selected_project_name, repre_ids + ) + global_point = self._repre_view.mapToGlobal(point) + result = show_actions_menu( + action_items, + global_point, + len(repre_ids) == 1, + self + ) + action_item, options = result + if action_item is None or options is None: + return + + self._controller.trigger_action_item( + action_item.identifier, + options, + action_item.project_name, + version_ids=action_item.version_ids, + representation_ids=action_item.representation_ids, + ) diff --git a/openpype/tools/ayon_loader/ui/window.py b/openpype/tools/ayon_loader/ui/window.py new file mode 100644 index 0000000000..a6d40d52e7 --- /dev/null +++ b/openpype/tools/ayon_loader/ui/window.py @@ -0,0 +1,511 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.resources import get_openpype_icon_filepath +from openpype.style import load_stylesheet +from openpype.tools.utils import ( + PlaceholderLineEdit, + ErrorMessageBox, + ThumbnailPainterWidget, + RefreshButton, + GoToCurrentButton, +) +from openpype.tools.utils.lib import center_window +from openpype.tools.ayon_utils.widgets import ProjectsCombobox +from openpype.tools.ayon_loader.control import LoaderController + +from .folders_widget import LoaderFoldersWidget +from .products_widget import ProductsWidget +from .product_types_widget import ProductTypesView +from .product_group_dialog import ProductGroupDialog +from .info_widget import InfoWidget +from .repres_widget import RepresentationsWidget + + +class LoadErrorMessageBox(ErrorMessageBox): + def __init__(self, messages, parent=None): + self._messages = messages + super(LoadErrorMessageBox, self).__init__("Loading failed", parent) + + def _create_top_widget(self, parent_widget): + label_widget = QtWidgets.QLabel(parent_widget) + label_widget.setText( + "Failed to load items" + ) + return label_widget + + def _get_report_data(self): + report_data = [] + for exc_msg, tb_text, repre, product, version in self._messages: + report_message = ( + "During load error happened on Product: \"{product}\"" + " Representation: \"{repre}\" Version: {version}" + "\n\nError message: {message}" + ).format( + product=product, + repre=repre, + version=version, + message=exc_msg + ) + if tb_text: + report_message += "\n\n{}".format(tb_text) + report_data.append(report_message) + return report_data + + def _create_content(self, content_layout): + item_name_template 
= ( + "Product: {}
" + "Version: {}
" + "Representation: {}
" + ) + exc_msg_template = "{}" + + for exc_msg, tb_text, repre, product, version in self._messages: + line = self._create_line() + content_layout.addWidget(line) + + item_name = item_name_template.format(product, version, repre) + item_name_widget = QtWidgets.QLabel( + item_name.replace("\n", "
"), self + ) + item_name_widget.setWordWrap(True) + content_layout.addWidget(item_name_widget) + + exc_msg = exc_msg_template.format(exc_msg.replace("\n", "
")) + message_label_widget = QtWidgets.QLabel(exc_msg, self) + message_label_widget.setWordWrap(True) + content_layout.addWidget(message_label_widget) + + if tb_text: + line = self._create_line() + tb_widget = self._create_traceback_widget(tb_text, self) + content_layout.addWidget(line) + content_layout.addWidget(tb_widget) + + +class RefreshHandler: + def __init__(self): + self._project_refreshed = False + self._folders_refreshed = False + self._products_refreshed = False + + @property + def project_refreshed(self): + return self._products_refreshed + + @property + def folders_refreshed(self): + return self._folders_refreshed + + @property + def products_refreshed(self): + return self._products_refreshed + + def reset(self): + self._project_refreshed = False + self._folders_refreshed = False + self._products_refreshed = False + + def set_project_refreshed(self): + self._project_refreshed = True + + def set_folders_refreshed(self): + self._folders_refreshed = True + + def set_products_refreshed(self): + self._products_refreshed = True + + +class LoaderWindow(QtWidgets.QWidget): + def __init__(self, controller=None, parent=None): + super(LoaderWindow, self).__init__(parent) + + icon = QtGui.QIcon(get_openpype_icon_filepath()) + self.setWindowIcon(icon) + self.setWindowTitle("AYON Loader") + self.setFocusPolicy(QtCore.Qt.StrongFocus) + self.setAttribute(QtCore.Qt.WA_DeleteOnClose, False) + self.setWindowFlags(self.windowFlags() | QtCore.Qt.Window) + + if controller is None: + controller = LoaderController() + + main_splitter = QtWidgets.QSplitter(self) + + context_splitter = QtWidgets.QSplitter(main_splitter) + context_splitter.setOrientation(QtCore.Qt.Vertical) + + # Context selection widget + context_widget = QtWidgets.QWidget(context_splitter) + + context_top_widget = QtWidgets.QWidget(context_widget) + projects_combobox = ProjectsCombobox( + controller, + context_top_widget, + handle_expected_selection=True + ) + projects_combobox.set_select_item_visible(True) + projects_combobox.set_libraries_separator_visible(True) + projects_combobox.set_standard_filter_enabled( + controller.is_standard_projects_filter_enabled() + ) + + go_to_current_btn = GoToCurrentButton(context_top_widget) + refresh_btn = RefreshButton(context_top_widget) + + context_top_layout = QtWidgets.QHBoxLayout(context_top_widget) + context_top_layout.setContentsMargins(0, 0, 0, 0,) + context_top_layout.addWidget(projects_combobox, 1) + context_top_layout.addWidget(go_to_current_btn, 0) + context_top_layout.addWidget(refresh_btn, 0) + + folders_filter_input = PlaceholderLineEdit(context_widget) + folders_filter_input.setPlaceholderText("Folder name filter...") + + folders_widget = LoaderFoldersWidget(controller, context_widget) + + product_types_widget = ProductTypesView(controller, context_splitter) + + context_layout = QtWidgets.QVBoxLayout(context_widget) + context_layout.setContentsMargins(0, 0, 0, 0) + context_layout.addWidget(context_top_widget, 0) + context_layout.addWidget(folders_filter_input, 0) + context_layout.addWidget(folders_widget, 1) + + context_splitter.addWidget(context_widget) + context_splitter.addWidget(product_types_widget) + context_splitter.setStretchFactor(0, 65) + context_splitter.setStretchFactor(1, 35) + + # Product + version selection item + products_wrap_widget = QtWidgets.QWidget(main_splitter) + + products_inputs_widget = QtWidgets.QWidget(products_wrap_widget) + + products_filter_input = PlaceholderLineEdit(products_inputs_widget) + products_filter_input.setPlaceholderText("Product name 
filter...") + product_group_checkbox = QtWidgets.QCheckBox( + "Enable grouping", products_inputs_widget) + product_group_checkbox.setChecked(True) + + products_widget = ProductsWidget(controller, products_wrap_widget) + + products_inputs_layout = QtWidgets.QHBoxLayout(products_inputs_widget) + products_inputs_layout.setContentsMargins(0, 0, 0, 0) + products_inputs_layout.addWidget(products_filter_input, 1) + products_inputs_layout.addWidget(product_group_checkbox, 0) + + products_wrap_layout = QtWidgets.QVBoxLayout(products_wrap_widget) + products_wrap_layout.setContentsMargins(0, 0, 0, 0) + products_wrap_layout.addWidget(products_inputs_widget, 0) + products_wrap_layout.addWidget(products_widget, 1) + + right_panel_splitter = QtWidgets.QSplitter(main_splitter) + right_panel_splitter.setOrientation(QtCore.Qt.Vertical) + + thumbnails_widget = ThumbnailPainterWidget(right_panel_splitter) + thumbnails_widget.set_use_checkboard(False) + + info_widget = InfoWidget(controller, right_panel_splitter) + + repre_widget = RepresentationsWidget(controller, right_panel_splitter) + + right_panel_splitter.addWidget(thumbnails_widget) + right_panel_splitter.addWidget(info_widget) + right_panel_splitter.addWidget(repre_widget) + + right_panel_splitter.setStretchFactor(0, 1) + right_panel_splitter.setStretchFactor(1, 1) + right_panel_splitter.setStretchFactor(2, 2) + + main_splitter.addWidget(context_splitter) + main_splitter.addWidget(products_wrap_widget) + main_splitter.addWidget(right_panel_splitter) + + main_splitter.setStretchFactor(0, 4) + main_splitter.setStretchFactor(1, 6) + main_splitter.setStretchFactor(2, 1) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.addWidget(main_splitter) + + show_timer = QtCore.QTimer() + show_timer.setInterval(1) + + show_timer.timeout.connect(self._on_show_timer) + + projects_combobox.refreshed.connect(self._on_projects_refresh) + folders_widget.refreshed.connect(self._on_folders_refresh) + products_widget.refreshed.connect(self._on_products_refresh) + folders_filter_input.textChanged.connect( + self._on_folder_filter_change + ) + product_types_widget.filter_changed.connect( + self._on_product_type_filter_change + ) + products_filter_input.textChanged.connect( + self._on_product_filter_change + ) + product_group_checkbox.stateChanged.connect( + self._on_product_group_change + ) + products_widget.merged_products_selection_changed.connect( + self._on_merged_products_selection_change + ) + products_widget.selection_changed.connect( + self._on_products_selection_change + ) + go_to_current_btn.clicked.connect( + self._on_go_to_current_context_click + ) + refresh_btn.clicked.connect( + self._on_refresh_click + ) + controller.register_event_callback( + "load.finished", + self._on_load_finished, + ) + controller.register_event_callback( + "selection.project.changed", + self._on_project_selection_changed, + ) + controller.register_event_callback( + "selection.folders.changed", + self._on_folders_selection_changed, + ) + controller.register_event_callback( + "selection.versions.changed", + self._on_versions_selection_changed, + ) + controller.register_event_callback( + "controller.reset.started", + self._on_controller_reset_start, + ) + controller.register_event_callback( + "controller.reset.finished", + self._on_controller_reset_finish, + ) + + self._group_dialog = ProductGroupDialog(controller, self) + + self._main_splitter = main_splitter + + self._go_to_current_btn = go_to_current_btn + self._refresh_btn = refresh_btn + self._projects_combobox = 
projects_combobox + + self._folders_filter_input = folders_filter_input + self._folders_widget = folders_widget + + self._product_types_widget = product_types_widget + + self._products_filter_input = products_filter_input + self._product_group_checkbox = product_group_checkbox + self._products_widget = products_widget + + self._right_panel_splitter = right_panel_splitter + self._thumbnails_widget = thumbnails_widget + self._info_widget = info_widget + self._repre_widget = repre_widget + + self._controller = controller + self._refresh_handler = RefreshHandler() + self._first_show = True + self._reset_on_show = True + self._show_counter = 0 + self._show_timer = show_timer + self._selected_project_name = None + self._selected_folder_ids = set() + self._selected_version_ids = set() + + self._products_widget.set_enable_grouping( + self._product_group_checkbox.isChecked() + ) + + def refresh(self): + self._controller.reset() + + def showEvent(self, event): + super(LoaderWindow, self).showEvent(event) + + if self._first_show: + self._on_first_show() + + self._show_timer.start() + + def keyPressEvent(self, event): + modifiers = event.modifiers() + ctrl_pressed = QtCore.Qt.ControlModifier & modifiers + + # Grouping products on pressing Ctrl + G + if ( + ctrl_pressed + and event.key() == QtCore.Qt.Key_G + and not event.isAutoRepeat() + ): + self._show_group_dialog() + event.setAccepted(True) + return + + super(LoaderWindow, self).keyPressEvent(event) + + def _on_first_show(self): + self._first_show = False + # width, height = 1800, 900 + width, height = 1500, 750 + + self.resize(width, height) + + mid_width = int(width / 1.8) + sides_width = int((width - mid_width) * 0.5) + self._main_splitter.setSizes( + [sides_width, mid_width, sides_width] + ) + + thumbnail_height = int(height / 3.6) + info_height = int((height - thumbnail_height) * 0.5) + self._right_panel_splitter.setSizes( + [thumbnail_height, info_height, info_height] + ) + self.setStyleSheet(load_stylesheet()) + center_window(self) + + def _on_show_timer(self): + if self._show_counter < 2: + self._show_counter += 1 + return + + self._show_counter = 0 + self._show_timer.stop() + + if self._reset_on_show: + self._reset_on_show = False + self._controller.reset() + + def _show_group_dialog(self): + project_name = self._projects_combobox.get_selected_project_name() + if not project_name: + return + + product_ids = { + i["product_id"] + for i in self._products_widget.get_selected_version_info() + } + if not product_ids: + return + + self._group_dialog.set_product_ids(project_name, product_ids) + self._group_dialog.show() + + def _on_folder_filter_change(self, text): + self._folders_widget.set_name_filter(text) + + def _on_product_group_change(self): + self._products_widget.set_enable_grouping( + self._product_group_checkbox.isChecked() + ) + + def _on_product_filter_change(self, text): + self._products_widget.set_name_filter(text) + + def _on_product_type_filter_change(self): + self._products_widget.set_product_type_filter( + self._product_types_widget.get_filter_info() + ) + + def _on_merged_products_selection_change(self): + items = self._products_widget.get_selected_merged_products() + self._folders_widget.set_merged_products_selection(items) + + def _on_products_selection_change(self): + items = self._products_widget.get_selected_version_info() + self._info_widget.set_selected_version_info( + self._projects_combobox.get_selected_project_name(), + items + ) + + def _on_go_to_current_context_click(self): + context = 
self._controller.get_current_context() + self._controller.set_expected_selection( + context["project_name"], + context["folder_id"], + ) + + def _on_refresh_click(self): + self._controller.reset() + + def _on_controller_reset_start(self): + self._refresh_handler.reset() + + def _on_controller_reset_finish(self): + context = self._controller.get_current_context() + project_name = context["project_name"] + self._go_to_current_btn.setVisible(bool(project_name)) + self._projects_combobox.set_current_context_project(project_name) + if not self._refresh_handler.project_refreshed: + self._projects_combobox.refresh() + + def _on_load_finished(self, event): + error_info = event["error_info"] + if not error_info: + return + + box = LoadErrorMessageBox(error_info, self) + box.show() + + def _on_project_selection_changed(self, event): + self._selected_project_name = event["project_name"] + + def _on_folders_selection_changed(self, event): + self._selected_folder_ids = set(event["folder_ids"]) + self._update_thumbnails() + + def _on_versions_selection_changed(self, event): + self._selected_version_ids = set(event["version_ids"]) + self._update_thumbnails() + + def _update_thumbnails(self): + project_name = self._selected_project_name + thumbnail_ids = set() + if self._selected_version_ids: + thumbnail_id_by_entity_id = ( + self._controller.get_version_thumbnail_ids( + project_name, + self._selected_version_ids + ) + ) + thumbnail_ids = set(thumbnail_id_by_entity_id.values()) + elif self._selected_folder_ids: + thumbnail_id_by_entity_id = ( + self._controller.get_folder_thumbnail_ids( + project_name, + self._selected_folder_ids + ) + ) + thumbnail_ids = set(thumbnail_id_by_entity_id.values()) + + thumbnail_ids.discard(None) + + if not thumbnail_ids: + self._thumbnails_widget.set_current_thumbnails(None) + return + + thumbnail_paths = set() + for thumbnail_id in thumbnail_ids: + thumbnail_path = self._controller.get_thumbnail_path( + project_name, thumbnail_id) + thumbnail_paths.add(thumbnail_path) + thumbnail_paths.discard(None) + self._thumbnails_widget.set_current_thumbnail_paths(thumbnail_paths) + + def _on_projects_refresh(self): + self._refresh_handler.set_project_refreshed() + if not self._refresh_handler.folders_refreshed: + self._folders_widget.refresh() + + def _on_folders_refresh(self): + self._refresh_handler.set_folders_refreshed() + if not self._refresh_handler.products_refreshed: + self._products_widget.refresh() + + def _on_products_refresh(self): + self._refresh_handler.set_products_refreshed() diff --git a/openpype/tools/ayon_push_to_project/__init__.py b/openpype/tools/ayon_push_to_project/__init__.py new file mode 100644 index 0000000000..83df110c96 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/__init__.py @@ -0,0 +1,6 @@ +from .control import PushToContextController + + +__all__ = ( + "PushToContextController", +) diff --git a/openpype/tools/ayon_push_to_project/control.py b/openpype/tools/ayon_push_to_project/control.py new file mode 100644 index 0000000000..0a19136701 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/control.py @@ -0,0 +1,344 @@ +import threading + +from openpype.client import ( + get_asset_by_id, + get_subset_by_id, + get_version_by_id, + get_representations, +) +from openpype.settings import get_project_settings +from openpype.lib import prepare_template_data +from openpype.lib.events import QueuedEventSystem +from openpype.pipeline.create import get_subset_name_template +from openpype.tools.ayon_utils.models import ProjectsModel, HierarchyModel + 
+from .models import ( + PushToProjectSelectionModel, + UserPublishValuesModel, + IntegrateModel, +) + + +class PushToContextController: + def __init__(self, project_name=None, version_id=None): + self._event_system = self._create_event_system() + + self._projects_model = ProjectsModel(self) + self._hierarchy_model = HierarchyModel(self) + self._integrate_model = IntegrateModel(self) + + self._selection_model = PushToProjectSelectionModel(self) + self._user_values = UserPublishValuesModel(self) + + self._src_project_name = None + self._src_version_id = None + self._src_asset_doc = None + self._src_subset_doc = None + self._src_version_doc = None + self._src_label = None + + self._submission_enabled = False + self._process_thread = None + self._process_item_id = None + + self.set_source(project_name, version_id) + + # Events system + def emit_event(self, topic, data=None, source=None): + """Use implemented event system to trigger event.""" + + if data is None: + data = {} + self._event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self._event_system.add_callback(topic, callback) + + def set_source(self, project_name, version_id): + """Set source project and version. + + Args: + project_name (Union[str, None]): Source project name. + version_id (Union[str, None]): Source version id. + """ + + if ( + project_name == self._src_project_name + and version_id == self._src_version_id + ): + return + + self._src_project_name = project_name + self._src_version_id = version_id + self._src_label = None + asset_doc = None + subset_doc = None + version_doc = None + if project_name and version_id: + version_doc = get_version_by_id(project_name, version_id) + + if version_doc: + subset_doc = get_subset_by_id(project_name, version_doc["parent"]) + + if subset_doc: + asset_doc = get_asset_by_id(project_name, subset_doc["parent"]) + + self._src_asset_doc = asset_doc + self._src_subset_doc = subset_doc + self._src_version_doc = version_doc + if asset_doc: + self._user_values.set_new_folder_name(asset_doc["name"]) + variant = self._get_src_variant() + if variant: + self._user_values.set_variant(variant) + + comment = version_doc["data"].get("comment") + if comment: + self._user_values.set_comment(comment) + + self._emit_event( + "source.changed", + { + "project_name": project_name, + "version_id": version_id + } + ) + + def get_source_label(self): + """Get source label. + + Returns: + str: Label describing source project and version as path. 
+ """ + + if self._src_label is None: + self._src_label = self._prepare_source_label() + return self._src_label + + def get_project_items(self, sender=None): + return self._projects_model.get_project_items(sender) + + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_task_items(self, project_name, folder_id, sender=None): + return self._hierarchy_model.get_task_items( + project_name, folder_id, sender + ) + + def get_user_values(self): + return self._user_values.get_data() + + def set_user_value_folder_name(self, folder_name): + self._user_values.set_new_folder_name(folder_name) + self._invalidate() + + def set_user_value_variant(self, variant): + self._user_values.set_variant(variant) + self._invalidate() + + def set_user_value_comment(self, comment): + self._user_values.set_comment(comment) + self._invalidate() + + def set_selected_project(self, project_name): + self._selection_model.set_selected_project(project_name) + self._invalidate() + + def set_selected_folder(self, folder_id): + self._selection_model.set_selected_folder(folder_id) + self._invalidate() + + def set_selected_task(self, task_id, task_name): + self._selection_model.set_selected_task(task_id, task_name) + + def get_process_item_status(self, item_id): + return self._integrate_model.get_item_status(item_id) + + # Processing methods + def submit(self, wait=True): + if not self._submission_enabled: + return + + if self._process_thread is not None: + return + + item_id = self._integrate_model.create_process_item( + self._src_project_name, + self._src_version_id, + self._selection_model.get_selected_project_name(), + self._selection_model.get_selected_folder_id(), + self._selection_model.get_selected_task_name(), + self._user_values.variant, + comment=self._user_values.comment, + new_folder_name=self._user_values.new_folder_name, + dst_version=1 + ) + + self._process_item_id = item_id + self._emit_event("submit.started") + if wait: + self._submit_callback() + self._process_item_id = None + return item_id + + thread = threading.Thread(target=self._submit_callback) + self._process_thread = thread + thread.start() + return item_id + + def wait_for_process_thread(self): + if self._process_thread is None: + return + self._process_thread.join() + self._process_thread = None + + def _prepare_source_label(self): + if not self._src_project_name or not self._src_version_id: + return "Source is not defined" + + asset_doc = self._src_asset_doc + if not asset_doc: + return "Source is invalid" + + folder_path_parts = list(asset_doc["data"]["parents"]) + folder_path_parts.append(asset_doc["name"]) + folder_path = "/".join(folder_path_parts) + subset_doc = self._src_subset_doc + version_doc = self._src_version_doc + return "Source: {}/{}/{}/v{:0>3}".format( + self._src_project_name, + folder_path, + subset_doc["name"], + version_doc["name"] + ) + + def _get_task_info_from_repre_docs(self, asset_doc, repre_docs): + asset_tasks = asset_doc["data"].get("tasks") or {} + found_comb = [] + for repre_doc in repre_docs: + context = repre_doc["context"] + task_info = context.get("task") + if task_info is None: + continue + + task_name = None + task_type = None + if isinstance(task_info, str): + task_name = task_info + asset_task_info = asset_tasks.get(task_info) or {} + task_type = asset_task_info.get("type") + + elif isinstance(task_info, dict): + task_name = task_info.get("name") + task_type = task_info.get("type") + + if task_name and task_type: + return 
task_name, task_type + + if task_name: + found_comb.append((task_name, task_type)) + + for task_name, task_type in found_comb: + return task_name, task_type + return None, None + + def _get_src_variant(self): + project_name = self._src_project_name + version_doc = self._src_version_doc + asset_doc = self._src_asset_doc + repre_docs = get_representations( + project_name, version_ids=[version_doc["_id"]] + ) + task_name, task_type = self._get_task_info_from_repre_docs( + asset_doc, repre_docs + ) + + project_settings = get_project_settings(project_name) + subset_doc = self._src_subset_doc + family = subset_doc["data"].get("family") + if not family: + family = subset_doc["data"]["families"][0] + template = get_subset_name_template( + self._src_project_name, + family, + task_name, + task_type, + None, + project_settings=project_settings + ) + template_low = template.lower() + variant_placeholder = "{variant}" + if ( + variant_placeholder not in template_low + or (not task_name and "{task" in template_low) + ): + return "" + + idx = template_low.index(variant_placeholder) + template_s = template[:idx] + template_e = template[idx + len(variant_placeholder):] + fill_data = prepare_template_data({ + "family": family, + "task": task_name + }) + try: + subset_s = template_s.format(**fill_data) + subset_e = template_e.format(**fill_data) + except Exception as exc: + print("Failed format", exc) + return "" + + subset_name = self._src_subset_doc["name"] + if ( + (subset_s and not subset_name.startswith(subset_s)) + or (subset_e and not subset_name.endswith(subset_e)) + ): + return "" + + if subset_s: + subset_name = subset_name[len(subset_s):] + if subset_e: + subset_name = subset_name[:len(subset_e)] + return subset_name + + def _check_submit_validations(self): + if not self._user_values.is_valid: + return False + + if not self._selection_model.get_selected_project_name(): + return False + + if ( + not self._user_values.new_folder_name + and not self._selection_model.get_selected_folder_id() + ): + return False + return True + + def _invalidate(self): + submission_enabled = self._check_submit_validations() + if submission_enabled == self._submission_enabled: + return + self._submission_enabled = submission_enabled + self._emit_event( + "submission.enabled.changed", + {"enabled": submission_enabled} + ) + + def _submit_callback(self): + process_item_id = self._process_item_id + if process_item_id is None: + return + self._integrate_model.integrate_item(process_item_id) + self._emit_event("submit.finished", {}) + if process_item_id == self._process_item_id: + self._process_item_id = None + + def _emit_event(self, topic, data=None): + if data is None: + data = {} + self.emit_event(topic, data, "controller") + + def _create_event_system(self): + return QueuedEventSystem() diff --git a/openpype/tools/ayon_push_to_project/main.py b/openpype/tools/ayon_push_to_project/main.py new file mode 100644 index 0000000000..e36940e488 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/main.py @@ -0,0 +1,32 @@ +import click + +from openpype.tools.utils import get_openpype_qt_app +from openpype.tools.ayon_push_to_project.ui import PushToContextSelectWindow + + +def main_show(project_name, version_id): + app = get_openpype_qt_app() + + window = PushToContextSelectWindow() + window.show() + window.set_source(project_name, version_id) + + app.exec_() + + +@click.command() +@click.option("--project", help="Source project name") +@click.option("--version", help="Source version id") +def main(project, version): + """Run 
PushToProject tool to integrate version in different project. + + Args: + project (str): Source project name. + version (str): Version id. + """ + + main_show(project, version) + + +if __name__ == "__main__": + main() diff --git a/openpype/tools/ayon_push_to_project/models/__init__.py b/openpype/tools/ayon_push_to_project/models/__init__.py new file mode 100644 index 0000000000..99355b4296 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/models/__init__.py @@ -0,0 +1,10 @@ +from .selection import PushToProjectSelectionModel +from .user_values import UserPublishValuesModel +from .integrate import IntegrateModel + + +__all__ = ( + "PushToProjectSelectionModel", + "UserPublishValuesModel", + "IntegrateModel", +) diff --git a/openpype/tools/ayon_push_to_project/models/integrate.py b/openpype/tools/ayon_push_to_project/models/integrate.py new file mode 100644 index 0000000000..976d8cb4f0 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/models/integrate.py @@ -0,0 +1,1214 @@ +import os +import re +import copy +import socket +import itertools +import datetime +import sys +import traceback +import uuid + +from bson.objectid import ObjectId + +from openpype.client import ( + get_project, + get_assets, + get_asset_by_id, + get_subset_by_id, + get_subset_by_name, + get_version_by_id, + get_last_version_by_subset_id, + get_version_by_name, + get_representations, +) +from openpype.client.operations import ( + OperationsSession, + new_asset_document, + new_subset_document, + new_version_doc, + new_representation_doc, + prepare_version_update_data, + prepare_representation_update_data, +) +from openpype.modules import ModulesManager +from openpype.lib import ( + StringTemplate, + get_openpype_username, + get_formatted_current_time, + source_hash, +) + +from openpype.lib.file_transaction import FileTransaction +from openpype.settings import get_project_settings +from openpype.pipeline import Anatomy +from openpype.pipeline.version_start import get_versioning_start +from openpype.pipeline.template_data import get_template_data +from openpype.pipeline.publish import get_publish_template_name +from openpype.pipeline.create import get_subset_name + +UNKNOWN = object() + + +class PushToProjectError(Exception): + pass + + +class FileItem(object): + def __init__(self, path): + self.path = path + + @property + def is_valid_file(self): + return os.path.exists(self.path) and os.path.isfile(self.path) + + +class SourceFile(FileItem): + def __init__(self, path, frame=None, udim=None): + super(SourceFile, self).__init__(path) + self.frame = frame + self.udim = udim + + def __repr__(self): + subparts = [self.__class__.__name__] + if self.frame is not None: + subparts.append("frame: {}".format(self.frame)) + if self.udim is not None: + subparts.append("UDIM: {}".format(self.udim)) + + return "<{}> '{}'".format(" - ".join(subparts), self.path) + + +class ResourceFile(FileItem): + def __init__(self, path, relative_path): + super(ResourceFile, self).__init__(path) + self.relative_path = relative_path + + def __repr__(self): + return "<{}> '{}'".format(self.__class__.__name__, self.relative_path) + + @property + def is_valid_file(self): + if not self.relative_path: + return False + return super(ResourceFile, self).is_valid_file + + +class ProjectPushItem: + def __init__( + self, + src_project_name, + src_version_id, + dst_project_name, + dst_folder_id, + dst_task_name, + variant, + comment, + new_folder_name, + dst_version, + item_id=None, + ): + if not item_id: + item_id = uuid.uuid4().hex + 
self.src_project_name = src_project_name + self.src_version_id = src_version_id + self.dst_project_name = dst_project_name + self.dst_folder_id = dst_folder_id + self.dst_task_name = dst_task_name + self.dst_version = dst_version + self.variant = variant + self.new_folder_name = new_folder_name + self.comment = comment or "" + self.item_id = item_id + self._repr_value = None + + @property + def _repr(self): + if not self._repr_value: + self._repr_value = "|".join([ + self.src_project_name, + self.src_version_id, + self.dst_project_name, + str(self.dst_folder_id), + str(self.new_folder_name), + str(self.dst_task_name), + str(self.dst_version) + ]) + return self._repr_value + + def __repr__(self): + return "<{} - {}>".format(self.__class__.__name__, self._repr) + + def to_data(self): + return { + "src_project_name": self.src_project_name, + "src_version_id": self.src_version_id, + "dst_project_name": self.dst_project_name, + "dst_folder_id": self.dst_folder_id, + "dst_task_name": self.dst_task_name, + "dst_version": self.dst_version, + "variant": self.variant, + "comment": self.comment, + "new_folder_name": self.new_folder_name, + "item_id": self.item_id, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class StatusMessage: + def __init__(self, message, level): + self.message = message + self.level = level + + def __str__(self): + return "{}: {}".format(self.level.upper(), self.message) + + def __repr__(self): + return "<{} - {}> {}".format( + self.__class__.__name__, self.level.upper, self.message + ) + + +class ProjectPushItemStatus: + def __init__( + self, + started=False, + failed=False, + finished=False, + fail_reason=None, + full_traceback=None + ): + self.started = started + self.failed = failed + self.finished = finished + self.fail_reason = fail_reason + self.full_traceback = full_traceback + + def set_failed(self, fail_reason, exc_info=None): + """Set status as failed. + + Attribute 'fail_reason' can change automatically based on passed value. + Reason is unset if 'failed' is 'False' and is set do default reason if + is set to 'True' and reason is not set. + + Args: + fail_reason (str): Reason why failed. + exc_info(tuple): Exception info. + """ + + failed = True + if not fail_reason and not exc_info: + failed = False + + full_traceback = None + if exc_info is not None: + full_traceback = "".join(traceback.format_exception(*exc_info)) + if not fail_reason: + fail_reason = "Failed without specified reason" + + self.failed = failed + self.fail_reason = fail_reason or None + self.full_traceback = full_traceback + + def to_data(self): + return { + "started": self.started, + "failed": self.failed, + "finished": self.finished, + "fail_reason": self.fail_reason, + "full_traceback": self.full_traceback, + } + + @classmethod + def from_data(cls, data): + return cls(**data) + + +class ProjectPushRepreItem: + """Representation item. + + Representation item based on representation document and project roots. + + Representation document may have reference to: + - source files: Files defined with publish template + - resource files: Files that should be in publish directory + but filenames are not template based. + + Args: + repre_doc (Dict[str, Ant]): Representation document. + roots (Dict[str, str]): Project roots (based on project anatomy). 
+ """ + + def __init__(self, repre_doc, roots): + self._repre_doc = repre_doc + self._roots = roots + self._src_files = None + self._resource_files = None + self._frame = UNKNOWN + + @property + def repre_doc(self): + return self._repre_doc + + @property + def src_files(self): + if self._src_files is None: + self.get_source_files() + return self._src_files + + @property + def resource_files(self): + if self._resource_files is None: + self.get_source_files() + return self._resource_files + + @staticmethod + def _clean_path(path): + new_value = path.replace("\\", "/") + while "//" in new_value: + new_value = new_value.replace("//", "/") + return new_value + + @staticmethod + def _get_relative_path(path, src_dirpath): + dirpath, basename = os.path.split(path) + if not dirpath.lower().startswith(src_dirpath.lower()): + return None + + relative_dir = dirpath[len(src_dirpath):].lstrip("/") + if relative_dir: + relative_path = "/".join([relative_dir, basename]) + else: + relative_path = basename + return relative_path + + @property + def frame(self): + """First frame of representation files. + + This value will be in representation document context if is sequence. + + Returns: + Union[int, None]: First frame in representation files based on + source files or None if frame is not part of filename. + """ + + if self._frame is UNKNOWN: + frame = None + for src_file in self.src_files: + src_frame = src_file.frame + if ( + src_frame is not None + and (frame is None or src_frame < frame) + ): + frame = src_frame + self._frame = frame + return self._frame + + @staticmethod + def validate_source_files(src_files, resource_files): + if not src_files: + raise AssertionError(( + "Couldn't figure out source files from representation." + " Found resource files {}" + ).format(", ".join(str(i) for i in resource_files))) + + invalid_items = [ + item + for item in itertools.chain(src_files, resource_files) + if not item.is_valid_file + ] + if invalid_items: + raise AssertionError(( + "Source files that were not found on disk: {}" + ).format(", ".join(str(i) for i in invalid_items))) + + def get_source_files(self): + if self._src_files is not None: + return self._src_files, self._resource_files + + repre_context = self._repre_doc["context"] + if "frame" in repre_context or "udim" in repre_context: + src_files, resource_files = self._get_source_files_with_frames() + else: + src_files, resource_files = self._get_source_files() + + self.validate_source_files(src_files, resource_files) + + self._src_files = src_files + self._resource_files = resource_files + return self._src_files, self._resource_files + + def _get_source_files_with_frames(self): + frame_placeholder = "__frame__" + udim_placeholder = "__udim__" + src_files = [] + resource_files = [] + template = self._repre_doc["data"]["template"] + # Remove padding from 'udim' and 'frame' formatting keys + # - "{frame:0>4}" -> "{frame}" + for key in ("udim", "frame"): + sub_part = "{" + key + "[^}]*}" + replacement = "{{{}}}".format(key) + template = re.sub(sub_part, replacement, template) + + repre_context = self._repre_doc["context"] + fill_repre_context = copy.deepcopy(repre_context) + if "frame" in fill_repre_context: + fill_repre_context["frame"] = frame_placeholder + + if "udim" in fill_repre_context: + fill_repre_context["udim"] = udim_placeholder + + fill_roots = fill_repre_context["root"] + for root_name in tuple(fill_roots.keys()): + fill_roots[root_name] = "{{root[{}]}}".format(root_name) + repre_path = StringTemplate.format_template( + template, 
fill_repre_context)
+ repre_path = self._clean_path(repre_path)
+ src_dirpath, src_basename = os.path.split(repre_path)
+ src_basename = (
+ re.escape(src_basename)
+ .replace(frame_placeholder, "(?P<frame>[0-9]+)")
+ .replace(udim_placeholder, "(?P<udim>[0-9]+)")
+ )
+ src_basename_regex = re.compile("^{}$".format(src_basename))
+ for file_info in self._repre_doc["files"]:
+ filepath_template = self._clean_path(file_info["path"])
+ filepath = self._clean_path(
+ filepath_template.format(root=self._roots)
+ )
+ dirpath, basename = os.path.split(filepath_template)
+ if (
+ dirpath.lower() != src_dirpath.lower()
+ or not src_basename_regex.match(basename)
+ ):
+ relative_path = self._get_relative_path(filepath, src_dirpath)
+ resource_files.append(ResourceFile(filepath, relative_path))
+ continue
+
+ filepath = os.path.join(src_dirpath, basename)
+ frame = None
+ udim = None
+ for item in src_basename_regex.finditer(basename):
+ group_name = item.lastgroup
+ value = item.group(group_name)
+ if group_name == "frame":
+ frame = int(value)
+ elif group_name == "udim":
+ udim = value
+
+ src_files.append(SourceFile(filepath, frame, udim))
+
+ return src_files, resource_files
+
+ def _get_source_files(self):
+ src_files = []
+ resource_files = []
+ template = self._repre_doc["data"]["template"]
+ repre_context = self._repre_doc["context"]
+ fill_repre_context = copy.deepcopy(repre_context)
+ fill_roots = fill_repre_context["root"]
+ for root_name in tuple(fill_roots.keys()):
+ fill_roots[root_name] = "{{root[{}]}}".format(root_name)
+ repre_path = StringTemplate.format_template(template,
+ fill_repre_context)
+ repre_path = self._clean_path(repre_path)
+ src_dirpath = os.path.dirname(repre_path)
+ for file_info in self._repre_doc["files"]:
+ filepath_template = self._clean_path(file_info["path"])
+ filepath = self._clean_path(
+ filepath_template.format(root=self._roots))
+
+ if filepath_template.lower() == repre_path.lower():
+ src_files.append(
+ SourceFile(repre_path.format(root=self._roots))
+ )
+ else:
+ relative_path = self._get_relative_path(
+ filepath_template, src_dirpath
+ )
+ resource_files.append(
+ ResourceFile(filepath, relative_path)
+ )
+ return src_files, resource_files
+
+
+class ProjectPushItemProcess:
+ """
+ Args:
+ model (IntegrateModel): Model which is processing item.
+ item (ProjectPushItem): Item which is being processed.
+ """
+
+ # TODO where to get host?!!!
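The `_get_source_files_with_frames` method above classifies a representation's files by turning the template-based basename into a regex with named groups (`(?P<frame>…)`, `(?P<udim>…)`); any file that does not match the pattern is treated as a resource file. Below is a minimal, standalone sketch of that idea for the frame case only; the function name and sample filenames are illustrative and not taken from the OpenPype codebase.

```python
import re


def classify_sequence_files(basename_template, filenames):
    """Split filenames into (frame-numbered sources, other resources)."""
    # Drop any padding from the placeholder: "{frame:0>4}" -> "{frame}"
    template = re.sub(r"\{frame[^}]*\}", "{frame}", basename_template)
    placeholder = "__frame__"
    escaped = re.escape(template.format(frame=placeholder))
    # Build a regex with a named "frame" group from the escaped basename
    pattern = re.compile(
        "^{}$".format(escaped.replace(placeholder, r"(?P<frame>[0-9]+)"))
    )
    sources, resources = [], []
    for name in filenames:
        match = pattern.match(name)
        if match:
            sources.append((name, int(match.group("frame"))))
        else:
            resources.append(name)
    return sources, resources


print(classify_sequence_files(
    "renderMain.{frame:0>4}.exr",
    ["renderMain.1001.exr", "renderMain.1002.exr", "workfile_backup.ma"],
))
# -> ([("renderMain.1001.exr", 1001), ("renderMain.1002.exr", 1002)],
#     ["workfile_backup.ma"])
```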
+ host_name = "republisher" + + def __init__(self, model, item): + self._model = model + self._item = item + + self._src_asset_doc = None + self._src_subset_doc = None + self._src_version_doc = None + self._src_repre_items = None + + self._project_doc = None + self._anatomy = None + self._asset_doc = None + self._created_asset_doc = None + self._task_info = None + self._subset_doc = None + self._version_doc = None + + self._family = None + self._subset_name = None + + self._project_settings = None + self._template_name = None + + self._status = ProjectPushItemStatus() + self._operations = OperationsSession() + self._file_transaction = FileTransaction() + + self._messages = [] + + @property + def item_id(self): + return self._item.item_id + + @property + def started(self): + return self._status.started + + def get_status_data(self): + return self._status.to_data() + + def integrate(self): + self._status.started = True + try: + self._log_info("Process started") + self._fill_source_variables() + self._log_info("Source entities were found") + self._fill_destination_project() + self._log_info("Destination project was found") + self._fill_or_create_destination_asset() + self._log_info("Destination asset was determined") + self._determine_family() + self._determine_publish_template_name() + self._determine_subset_name() + self._make_sure_subset_exists() + self._make_sure_version_exists() + self._log_info("Prerequirements were prepared") + self._integrate_representations() + self._log_info("Integration finished") + + except PushToProjectError as exc: + if not self._status.failed: + self._status.set_failed(str(exc)) + + except Exception as exc: + _exc, _value, _tb = sys.exc_info() + self._status.set_failed( + "Unhandled error happened: {}".format(str(exc)), + (_exc, _value, _tb) + ) + + finally: + self._status.finished = True + self._emit_event( + "push.finished.changed", + { + "finished": True, + "item_id": self.item_id, + } + ) + + def _emit_event(self, topic, data): + self._model.emit_event(topic, data) + + # Loggin helpers + # TODO better logging + def _add_message(self, message, level): + message_obj = StatusMessage(message, level) + self._messages.append(message_obj) + self._emit_event( + "push.message.added", + { + "message": message, + "level": level, + "item_id": self.item_id, + } + ) + print(message_obj) + return message_obj + + def _log_debug(self, message): + return self._add_message(message, "debug") + + def _log_info(self, message): + return self._add_message(message, "info") + + def _log_warning(self, message): + return self._add_message(message, "warning") + + def _log_error(self, message): + return self._add_message(message, "error") + + def _log_critical(self, message): + return self._add_message(message, "critical") + + def _fill_source_variables(self): + src_project_name = self._item.src_project_name + src_version_id = self._item.src_version_id + + project_doc = get_project(src_project_name) + if not project_doc: + self._status.set_failed( + f"Source project \"{src_project_name}\" was not found" + ) + + self._emit_event( + "push.failed.changed", + {"item_id": self.item_id} + ) + raise PushToProjectError(self._status.fail_reason) + + self._log_debug(f"Project '{src_project_name}' found") + + version_doc = get_version_by_id(src_project_name, src_version_id) + if not version_doc: + self._status.set_failed(( + f"Source version with id \"{src_version_id}\"" + f" was not found in project \"{src_project_name}\"" + )) + raise PushToProjectError(self._status.fail_reason) + + subset_id = 
version_doc["parent"] + subset_doc = get_subset_by_id(src_project_name, subset_id) + if not subset_doc: + self._status.set_failed(( + f"Could find subset with id \"{subset_id}\"" + f" in project \"{src_project_name}\"" + )) + raise PushToProjectError(self._status.fail_reason) + + asset_id = subset_doc["parent"] + asset_doc = get_asset_by_id(src_project_name, asset_id) + if not asset_doc: + self._status.set_failed(( + f"Could find asset with id \"{asset_id}\"" + f" in project \"{src_project_name}\"" + )) + raise PushToProjectError(self._status.fail_reason) + + anatomy = Anatomy(src_project_name) + + repre_docs = get_representations( + src_project_name, + version_ids=[src_version_id] + ) + repre_items = [ + ProjectPushRepreItem(repre_doc, anatomy.roots) + for repre_doc in repre_docs + ] + self._log_debug(( + f"Found {len(repre_items)} representations on" + f" version {src_version_id} in project '{src_project_name}'" + )) + if not repre_items: + self._status.set_failed( + "Source version does not have representations" + f" (Version id: {src_version_id})" + ) + raise PushToProjectError(self._status.fail_reason) + + self._src_asset_doc = asset_doc + self._src_subset_doc = subset_doc + self._src_version_doc = version_doc + self._src_repre_items = repre_items + + def _fill_destination_project(self): + # --- Destination entities --- + dst_project_name = self._item.dst_project_name + # Validate project existence + dst_project_doc = get_project(dst_project_name) + if not dst_project_doc: + self._status.set_failed( + f"Destination project '{dst_project_name}' was not found" + ) + raise PushToProjectError(self._status.fail_reason) + + self._log_debug( + f"Destination project '{dst_project_name}' found" + ) + self._project_doc = dst_project_doc + self._anatomy = Anatomy(dst_project_name) + self._project_settings = get_project_settings( + self._item.dst_project_name + ) + + def _create_asset( + self, + src_asset_doc, + project_doc, + parent_asset_doc, + asset_name + ): + parent_id = None + parents = [] + tools = [] + if parent_asset_doc: + parent_id = parent_asset_doc["_id"] + parents = list(parent_asset_doc["data"]["parents"]) + parents.append(parent_asset_doc["name"]) + _tools = parent_asset_doc["data"].get("tools_env") + if _tools: + tools = list(_tools) + + asset_name_low = asset_name.lower() + other_asset_docs = get_assets( + project_doc["name"], fields=["_id", "name", "data.visualParent"] + ) + for other_asset_doc in other_asset_docs: + other_name = other_asset_doc["name"] + other_parent_id = other_asset_doc["data"].get("visualParent") + if other_name.lower() != asset_name_low: + continue + + if other_parent_id != parent_id: + self._status.set_failed(( + f"Asset with name \"{other_name}\" already" + " exists in different hierarchy." 
+ )) + raise PushToProjectError(self._status.fail_reason) + + self._log_debug(( + f"Found already existing asset with name \"{other_name}\"" + f" which match requested name \"{asset_name}\"" + )) + return get_asset_by_id(project_doc["name"], other_asset_doc["_id"]) + + data_keys = ( + "clipIn", + "clipOut", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd", + "resolutionWidth", + "resolutionHeight", + "fps", + "pixelAspect", + ) + asset_data = { + "visualParent": parent_id, + "parents": parents, + "tasks": {}, + "tools_env": tools + } + src_asset_data = src_asset_doc["data"] + for key in data_keys: + if key in src_asset_data: + asset_data[key] = src_asset_data[key] + + asset_doc = new_asset_document( + asset_name, + project_doc["_id"], + parent_id, + parents, + data=asset_data + ) + self._operations.create_entity( + project_doc["name"], + asset_doc["type"], + asset_doc + ) + self._log_info( + f"Creating new asset with name \"{asset_name}\"" + ) + self._created_asset_doc = asset_doc + return asset_doc + + def _fill_or_create_destination_asset(self): + dst_project_name = self._item.dst_project_name + dst_folder_id = self._item.dst_folder_id + dst_task_name = self._item.dst_task_name + new_folder_name = self._item.new_folder_name + if not dst_folder_id and not new_folder_name: + self._status.set_failed( + "Push item does not have defined destination asset" + ) + raise PushToProjectError(self._status.fail_reason) + + # Get asset document + parent_asset_doc = None + if dst_folder_id: + parent_asset_doc = get_asset_by_id( + self._item.dst_project_name, self._item.dst_folder_id + ) + if not parent_asset_doc: + self._status.set_failed( + f"Could find asset with id \"{dst_folder_id}\"" + f" in project \"{dst_project_name}\"" + ) + raise PushToProjectError(self._status.fail_reason) + + if not new_folder_name: + asset_doc = parent_asset_doc + else: + asset_doc = self._create_asset( + self._src_asset_doc, + self._project_doc, + parent_asset_doc, + new_folder_name + ) + self._asset_doc = asset_doc + if not dst_task_name: + self._task_info = {} + return + + asset_path_parts = list(asset_doc["data"]["parents"]) + asset_path_parts.append(asset_doc["name"]) + asset_path = "/".join(asset_path_parts) + asset_tasks = asset_doc.get("data", {}).get("tasks") or {} + task_info = asset_tasks.get(dst_task_name) + if not task_info: + self._status.set_failed( + f"Could find task with name \"{dst_task_name}\"" + f" on asset \"{asset_path}\"" + f" in project \"{dst_project_name}\"" + ) + raise PushToProjectError(self._status.fail_reason) + + # Create copy of task info to avoid changing data in asset document + task_info = copy.deepcopy(task_info) + task_info["name"] = dst_task_name + # Fill rest of task information based on task type + task_type = task_info["type"] + task_type_info = self._project_doc["config"]["tasks"].get( + task_type, {}) + task_info.update(task_type_info) + self._task_info = task_info + + def _determine_family(self): + subset_doc = self._src_subset_doc + family = subset_doc["data"].get("family") + families = subset_doc["data"].get("families") + if not family and families: + family = families[0] + + if not family: + self._status.set_failed( + "Couldn't figure out family from source subset" + ) + raise PushToProjectError(self._status.fail_reason) + + self._log_debug( + f"Publishing family is '{family}' (Based on source subset)" + ) + self._family = family + + def _determine_publish_template_name(self): + template_name = get_publish_template_name( + self._item.dst_project_name, + 
self.host_name, + self._family, + self._task_info.get("name"), + self._task_info.get("type"), + project_settings=self._project_settings + ) + self._log_debug( + f"Using template '{template_name}' for integration" + ) + self._template_name = template_name + + def _determine_subset_name(self): + family = self._family + asset_doc = self._asset_doc + task_info = self._task_info + subset_name = get_subset_name( + family, + self._item.variant, + task_info.get("name"), + asset_doc, + project_name=self._item.dst_project_name, + host_name=self.host_name, + project_settings=self._project_settings + ) + self._log_info( + f"Push will be integrating to subset with name '{subset_name}'" + ) + self._subset_name = subset_name + + def _make_sure_subset_exists(self): + project_name = self._item.dst_project_name + asset_id = self._asset_doc["_id"] + subset_name = self._subset_name + family = self._family + subset_doc = get_subset_by_name(project_name, subset_name, asset_id) + if subset_doc: + self._subset_doc = subset_doc + return subset_doc + + data = { + "families": [family] + } + subset_doc = new_subset_document( + subset_name, family, asset_id, data + ) + self._operations.create_entity(project_name, "subset", subset_doc) + self._subset_doc = subset_doc + + def _make_sure_version_exists(self): + """Make sure version document exits in database.""" + + project_name = self._item.dst_project_name + version = self._item.dst_version + src_version_doc = self._src_version_doc + subset_doc = self._subset_doc + subset_id = subset_doc["_id"] + src_data = src_version_doc["data"] + families = subset_doc["data"].get("families") + if not families: + families = [subset_doc["data"]["family"]] + + version_data = { + "families": list(families), + "fps": src_data.get("fps"), + "source": src_data.get("source"), + "machine": socket.gethostname(), + "comment": self._item.comment or "", + "author": get_openpype_username(), + "time": get_formatted_current_time(), + } + if version is None: + last_version_doc = get_last_version_by_subset_id( + project_name, subset_id + ) + if last_version_doc: + version = int(last_version_doc["name"]) + 1 + else: + version = get_versioning_start( + project_name, + self.host_name, + task_name=self._task_info["name"], + task_type=self._task_info["type"], + family=families[0], + subset=subset_doc["name"] + ) + + existing_version_doc = get_version_by_name( + project_name, version, subset_id + ) + # Update existing version + if existing_version_doc: + version_doc = new_version_doc( + version, subset_id, version_data, existing_version_doc["_id"] + ) + update_data = prepare_version_update_data( + existing_version_doc, version_doc + ) + if update_data: + self._operations.update_entity( + project_name, + "version", + existing_version_doc["_id"], + update_data + ) + self._version_doc = version_doc + + return + + version_doc = new_version_doc( + version, subset_id, version_data + ) + self._operations.create_entity(project_name, "version", version_doc) + + self._version_doc = version_doc + + def _integrate_representations(self): + try: + self._real_integrate_representations() + except Exception: + self._operations.clear() + self._file_transaction.rollback() + raise + + def _real_integrate_representations(self): + version_doc = self._version_doc + version_id = version_doc["_id"] + existing_repres = get_representations( + self._item.dst_project_name, + version_ids=[version_id] + ) + existing_repres_by_low_name = { + repre_doc["name"].lower(): repre_doc + for repre_doc in existing_repres + } + template_name = 
self._template_name + anatomy = self._anatomy + formatting_data = get_template_data( + self._project_doc, + self._asset_doc, + self._task_info.get("name"), + self.host_name + ) + formatting_data.update({ + "subset": self._subset_name, + "family": self._family, + "version": version_doc["name"] + }) + + path_template = anatomy.templates[template_name]["path"].replace( + "\\", "/" + ) + file_template = StringTemplate( + anatomy.templates[template_name]["file"] + ) + self._log_info("Preparing files to transfer") + processed_repre_items = self._prepare_file_transactions( + anatomy, template_name, formatting_data, file_template + ) + self._file_transaction.process() + self._log_info("Preparing database changes") + self._prepare_database_operations( + version_id, + processed_repre_items, + path_template, + existing_repres_by_low_name + ) + self._log_info("Finalization") + self._operations.commit() + self._file_transaction.finalize() + + def _prepare_file_transactions( + self, anatomy, template_name, formatting_data, file_template + ): + processed_repre_items = [] + for repre_item in self._src_repre_items: + repre_doc = repre_item.repre_doc + repre_name = repre_doc["name"] + repre_format_data = copy.deepcopy(formatting_data) + repre_format_data["representation"] = repre_name + for src_file in repre_item.src_files: + ext = os.path.splitext(src_file.path)[-1] + repre_format_data["ext"] = ext[1:] + break + + # Re-use 'output' from source representation + repre_output_name = repre_doc["context"].get("output") + if repre_output_name is not None: + repre_format_data["output"] = repre_output_name + + template_obj = anatomy.templates_obj[template_name]["folder"] + folder_path = template_obj.format_strict(formatting_data) + repre_context = folder_path.used_values + folder_path_rootless = folder_path.rootless + repre_filepaths = [] + published_path = None + for src_file in repre_item.src_files: + file_data = copy.deepcopy(repre_format_data) + frame = src_file.frame + if frame is not None: + file_data["frame"] = frame + + udim = src_file.udim + if udim is not None: + file_data["udim"] = udim + + filename = file_template.format_strict(file_data) + dst_filepath = os.path.normpath( + os.path.join(folder_path, filename) + ) + dst_rootless_path = os.path.normpath( + os.path.join(folder_path_rootless, filename) + ) + if published_path is None or frame == repre_item.frame: + published_path = dst_filepath + repre_context.update(filename.used_values) + + repre_filepaths.append((dst_filepath, dst_rootless_path)) + self._file_transaction.add(src_file.path, dst_filepath) + + for resource_file in repre_item.resource_files: + dst_filepath = os.path.normpath( + os.path.join(folder_path, resource_file.relative_path) + ) + dst_rootless_path = os.path.normpath( + os.path.join( + folder_path_rootless, resource_file.relative_path + ) + ) + repre_filepaths.append((dst_filepath, dst_rootless_path)) + self._file_transaction.add(resource_file.path, dst_filepath) + processed_repre_items.append( + (repre_item, repre_filepaths, repre_context, published_path) + ) + return processed_repre_items + + def _prepare_database_operations( + self, + version_id, + processed_repre_items, + path_template, + existing_repres_by_low_name + ): + modules_manager = ModulesManager() + sync_server_module = modules_manager.get("sync_server") + if sync_server_module is None or not sync_server_module.enabled: + sites = [{ + "name": "studio", + "created_dt": datetime.datetime.now() + }] + else: + sites = sync_server_module.compute_resource_sync_sites( + 
project_name=self._item.dst_project_name + ) + + added_repre_names = set() + for item in processed_repre_items: + (repre_item, repre_filepaths, repre_context, published_path) = item + repre_name = repre_item.repre_doc["name"] + added_repre_names.add(repre_name.lower()) + new_repre_data = { + "path": published_path, + "template": path_template + } + new_repre_files = [] + for (path, rootless_path) in repre_filepaths: + new_repre_files.append({ + "_id": ObjectId(), + "path": rootless_path, + "size": os.path.getsize(path), + "hash": source_hash(path), + "sites": sites + }) + + existing_repre = existing_repres_by_low_name.get( + repre_name.lower() + ) + entity_id = None + if existing_repre: + entity_id = existing_repre["_id"] + new_repre_doc = new_representation_doc( + repre_name, + version_id, + repre_context, + data=new_repre_data, + entity_id=entity_id + ) + new_repre_doc["files"] = new_repre_files + if not existing_repre: + self._operations.create_entity( + self._item.dst_project_name, + new_repre_doc["type"], + new_repre_doc + ) + else: + update_data = prepare_representation_update_data( + existing_repre, new_repre_doc + ) + if update_data: + self._operations.update_entity( + self._item.dst_project_name, + new_repre_doc["type"], + new_repre_doc["_id"], + update_data + ) + + existing_repre_names = set(existing_repres_by_low_name.keys()) + for repre_name in (existing_repre_names - added_repre_names): + repre_doc = existing_repres_by_low_name[repre_name] + self._operations.update_entity( + self._item.dst_project_name, + repre_doc["type"], + repre_doc["_id"], + {"type": "archived_representation"} + ) + + +class IntegrateModel: + def __init__(self, controller): + self._controller = controller + self._process_items = {} + + def reset(self): + self._process_items = {} + + def emit_event(self, topic, data=None, source=None): + self._controller.emit_event(topic, data, source) + + def create_process_item( + self, + src_project_name, + src_version_id, + dst_project_name, + dst_folder_id, + dst_task_name, + variant, + comment, + new_folder_name, + dst_version, + ): + """Create new item for integration. + + Args: + src_project_name (str): Source project name. + src_version_id (str): Source version id. + dst_project_name (str): Destination project name. + dst_folder_id (str): Destination folder id. + dst_task_name (str): Destination task name. + variant (str): Variant name. + comment (Union[str, None]): Comment. + new_folder_name (Union[str, None]): New folder name. + dst_version (int): Destination version number. + + Returns: + str: Item id. The id can be used to trigger integration or get + status information. + """ + + item = ProjectPushItem( + src_project_name, + src_version_id, + dst_project_name, + dst_folder_id, + dst_task_name, + variant, + comment=comment, + new_folder_name=new_folder_name, + dst_version=dst_version + ) + process_item = ProjectPushItemProcess(self, item) + self._process_items[item.item_id] = process_item + return item.item_id + + def integrate_item(self, item_id): + """Start integration of item. + + Args: + item_id (str): Item id which should be integrated. + """ + + item = self._process_items.get(item_id) + if item is None or item.started: + return + item.integrate() + + def get_item_status(self, item_id): + """Status of an item. + + Args: + item_id (str): Item id for which status should be returned. + + Returns: + dict[str, Any]: Status data. 
+ """ + + item = self._process_items.get(item_id) + if item is not None: + return item.get_status_data() + return None diff --git a/openpype/tools/ayon_push_to_project/models/selection.py b/openpype/tools/ayon_push_to_project/models/selection.py new file mode 100644 index 0000000000..19f1c6d37d --- /dev/null +++ b/openpype/tools/ayon_push_to_project/models/selection.py @@ -0,0 +1,72 @@ +class PushToProjectSelectionModel(object): + """Model handling selection changes. + + Triggering events: + - "selection.project.changed" + - "selection.folder.changed" + - "selection.task.changed" + """ + + event_source = "push-to-project.selection.model" + + def __init__(self, controller): + self._controller = controller + + self._project_name = None + self._folder_id = None + self._task_name = None + self._task_id = None + + def get_selected_project_name(self): + return self._project_name + + def set_selected_project(self, project_name): + if project_name == self._project_name: + return + + self._project_name = project_name + self._controller.emit_event( + "selection.project.changed", + {"project_name": project_name}, + self.event_source + ) + + def get_selected_folder_id(self): + return self._folder_id + + def set_selected_folder(self, folder_id): + if folder_id == self._folder_id: + return + + self._folder_id = folder_id + self._controller.emit_event( + "selection.folder.changed", + { + "project_name": self._project_name, + "folder_id": folder_id, + }, + self.event_source + ) + + def get_selected_task_name(self): + return self._task_name + + def get_selected_task_id(self): + return self._task_id + + def set_selected_task(self, task_id, task_name): + if task_id == self._task_id: + return + + self._task_name = task_name + self._task_id = task_id + self._controller.emit_event( + "selection.task.changed", + { + "project_name": self._project_name, + "folder_id": self._folder_id, + "task_name": task_name, + "task_id": task_id, + }, + self.event_source + ) diff --git a/openpype/tools/ayon_push_to_project/models/user_values.py b/openpype/tools/ayon_push_to_project/models/user_values.py new file mode 100644 index 0000000000..2a4faeb136 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/models/user_values.py @@ -0,0 +1,110 @@ +import re + +from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS + + +class UserPublishValuesModel: + """Helper object to validate values required for push to different project. + + Args: + controller (PushToContextController): Event system to catch + and emit events. 
+ """ + + folder_name_regex = re.compile("^[a-zA-Z0-9_.]+$") + variant_regex = re.compile("^[{}]+$".format(SUBSET_NAME_ALLOWED_SYMBOLS)) + + def __init__(self, controller): + self._controller = controller + self._new_folder_name = None + self._variant = None + self._comment = None + self._is_variant_valid = False + self._is_new_folder_name_valid = False + + self.set_new_folder_name("") + self.set_variant("") + self.set_comment("") + + @property + def new_folder_name(self): + return self._new_folder_name + + @property + def variant(self): + return self._variant + + @property + def comment(self): + return self._comment + + @property + def is_variant_valid(self): + return self._is_variant_valid + + @property + def is_new_folder_name_valid(self): + return self._is_new_folder_name_valid + + @property + def is_valid(self): + return self.is_variant_valid and self.is_new_folder_name_valid + + def get_data(self): + return { + "new_folder_name": self._new_folder_name, + "variant": self._variant, + "comment": self._comment, + "is_variant_valid": self._is_variant_valid, + "is_new_folder_name_valid": self._is_new_folder_name_valid, + "is_valid": self.is_valid + } + + def set_variant(self, variant): + if variant == self._variant: + return + + self._variant = variant + is_valid = False + if variant: + is_valid = self.variant_regex.match(variant) is not None + self._is_variant_valid = is_valid + + self._controller.emit_event( + "variant.changed", + { + "variant": variant, + "is_valid": self._is_variant_valid, + }, + "user_values" + ) + + def set_new_folder_name(self, folder_name): + if self._new_folder_name == folder_name: + return + + self._new_folder_name = folder_name + is_valid = True + if folder_name: + is_valid = ( + self.folder_name_regex.match(folder_name) is not None + ) + self._is_new_folder_name_valid = is_valid + self._controller.emit_event( + "new_folder_name.changed", + { + "new_folder_name": self._new_folder_name, + "is_valid": self._is_new_folder_name_valid, + }, + "user_values" + ) + + def set_comment(self, comment): + if comment == self._comment: + return + self._comment = comment + self._controller.emit_event( + "comment.changed", + {"comment": comment}, + "user_values" + ) diff --git a/openpype/tools/ayon_push_to_project/ui/__init__.py b/openpype/tools/ayon_push_to_project/ui/__init__.py new file mode 100644 index 0000000000..1e86475530 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/ui/__init__.py @@ -0,0 +1,6 @@ +from .window import PushToContextSelectWindow + + +__all__ = ( + "PushToContextSelectWindow", +) diff --git a/openpype/tools/ayon_push_to_project/ui/window.py b/openpype/tools/ayon_push_to_project/ui/window.py new file mode 100644 index 0000000000..535c01c643 --- /dev/null +++ b/openpype/tools/ayon_push_to_project/ui/window.py @@ -0,0 +1,432 @@ +from qtpy import QtWidgets, QtGui, QtCore + +from openpype.style import load_stylesheet, get_app_icon_path +from openpype.tools.utils import ( + PlaceholderLineEdit, + SeparatorWidget, + set_style_property, +) +from openpype.tools.ayon_utils.widgets import ( + ProjectsCombobox, + FoldersWidget, + TasksWidget, +) +from openpype.tools.ayon_push_to_project.control import ( + PushToContextController, +) + + +class PushToContextSelectWindow(QtWidgets.QWidget): + def __init__(self, controller=None): + super(PushToContextSelectWindow, self).__init__() + if controller is None: + controller = PushToContextController() + self._controller = controller + + self.setWindowTitle("Push to project (select context)") + 
self.setWindowIcon(QtGui.QIcon(get_app_icon_path())) + + main_context_widget = QtWidgets.QWidget(self) + + header_widget = QtWidgets.QWidget(main_context_widget) + + header_label = QtWidgets.QLabel( + controller.get_source_label(), + header_widget + ) + + header_layout = QtWidgets.QHBoxLayout(header_widget) + header_layout.setContentsMargins(0, 0, 0, 0) + header_layout.addWidget(header_label) + + main_splitter = QtWidgets.QSplitter( + QtCore.Qt.Horizontal, main_context_widget + ) + + context_widget = QtWidgets.QWidget(main_splitter) + + projects_combobox = ProjectsCombobox(controller, context_widget) + projects_combobox.set_select_item_visible(True) + projects_combobox.set_standard_filter_enabled(True) + + context_splitter = QtWidgets.QSplitter( + QtCore.Qt.Vertical, context_widget + ) + + folders_widget = FoldersWidget(controller, context_splitter) + folders_widget.set_deselectable(True) + tasks_widget = TasksWidget(controller, context_splitter) + + context_splitter.addWidget(folders_widget) + context_splitter.addWidget(tasks_widget) + + context_layout = QtWidgets.QVBoxLayout(context_widget) + context_layout.setContentsMargins(0, 0, 0, 0) + context_layout.addWidget(projects_combobox, 0) + context_layout.addWidget(context_splitter, 1) + + # --- Inputs widget --- + inputs_widget = QtWidgets.QWidget(main_splitter) + + folder_name_input = PlaceholderLineEdit(inputs_widget) + folder_name_input.setPlaceholderText("< Name of new folder >") + folder_name_input.setObjectName("ValidatedLineEdit") + + variant_input = PlaceholderLineEdit(inputs_widget) + variant_input.setPlaceholderText("< Variant >") + variant_input.setObjectName("ValidatedLineEdit") + + comment_input = PlaceholderLineEdit(inputs_widget) + comment_input.setPlaceholderText("< Publish comment >") + + inputs_layout = QtWidgets.QFormLayout(inputs_widget) + inputs_layout.setContentsMargins(0, 0, 0, 0) + inputs_layout.addRow("New folder name", folder_name_input) + inputs_layout.addRow("Variant", variant_input) + inputs_layout.addRow("Comment", comment_input) + + main_splitter.addWidget(context_widget) + main_splitter.addWidget(inputs_widget) + + # --- Buttons widget --- + btns_widget = QtWidgets.QWidget(self) + cancel_btn = QtWidgets.QPushButton("Cancel", btns_widget) + publish_btn = QtWidgets.QPushButton("Publish", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.setContentsMargins(0, 0, 0, 0) + btns_layout.addStretch(1) + btns_layout.addWidget(cancel_btn, 0) + btns_layout.addWidget(publish_btn, 0) + + sep_1 = SeparatorWidget(parent=main_context_widget) + sep_2 = SeparatorWidget(parent=main_context_widget) + main_context_layout = QtWidgets.QVBoxLayout(main_context_widget) + main_context_layout.addWidget(header_widget, 0) + main_context_layout.addWidget(sep_1, 0) + main_context_layout.addWidget(main_splitter, 1) + main_context_layout.addWidget(sep_2, 0) + main_context_layout.addWidget(btns_widget, 0) + + # NOTE This was added in hurry + # - should be reorganized and changed styles + overlay_widget = QtWidgets.QFrame(self) + overlay_widget.setObjectName("OverlayFrame") + + overlay_label = QtWidgets.QLabel(overlay_widget) + overlay_label.setAlignment(QtCore.Qt.AlignCenter) + + overlay_btns_widget = QtWidgets.QWidget(overlay_widget) + overlay_btns_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + # Add try again button (requires changes in controller) + overlay_try_btn = QtWidgets.QPushButton( + "Try again", overlay_btns_widget + ) + overlay_close_btn = QtWidgets.QPushButton( + "Close", 
overlay_btns_widget + ) + + overlay_btns_layout = QtWidgets.QHBoxLayout(overlay_btns_widget) + overlay_btns_layout.addStretch(1) + overlay_btns_layout.addWidget(overlay_try_btn, 0) + overlay_btns_layout.addWidget(overlay_close_btn, 0) + overlay_btns_layout.addStretch(1) + + overlay_layout = QtWidgets.QVBoxLayout(overlay_widget) + overlay_layout.addWidget(overlay_label, 0) + overlay_layout.addWidget(overlay_btns_widget, 0) + overlay_layout.setAlignment(QtCore.Qt.AlignCenter) + + main_layout = QtWidgets.QStackedLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(main_context_widget) + main_layout.addWidget(overlay_widget) + main_layout.setStackingMode(QtWidgets.QStackedLayout.StackAll) + main_layout.setCurrentWidget(main_context_widget) + + show_timer = QtCore.QTimer() + show_timer.setInterval(0) + + main_thread_timer = QtCore.QTimer() + main_thread_timer.setInterval(10) + + user_input_changed_timer = QtCore.QTimer() + user_input_changed_timer.setInterval(200) + user_input_changed_timer.setSingleShot(True) + + main_thread_timer.timeout.connect(self._on_main_thread_timer) + show_timer.timeout.connect(self._on_show_timer) + user_input_changed_timer.timeout.connect(self._on_user_input_timer) + folder_name_input.textChanged.connect(self._on_new_asset_change) + variant_input.textChanged.connect(self._on_variant_change) + comment_input.textChanged.connect(self._on_comment_change) + + publish_btn.clicked.connect(self._on_select_click) + cancel_btn.clicked.connect(self._on_close_click) + overlay_close_btn.clicked.connect(self._on_close_click) + overlay_try_btn.clicked.connect(self._on_try_again_click) + + controller.register_event_callback( + "new_folder_name.changed", + self._on_controller_new_asset_change + ) + controller.register_event_callback( + "variant.changed", self._on_controller_variant_change + ) + controller.register_event_callback( + "comment.changed", self._on_controller_comment_change + ) + controller.register_event_callback( + "submission.enabled.changed", self._on_submission_change + ) + controller.register_event_callback( + "source.changed", self._on_controller_source_change + ) + controller.register_event_callback( + "submit.started", self._on_controller_submit_start + ) + controller.register_event_callback( + "submit.finished", self._on_controller_submit_end + ) + controller.register_event_callback( + "push.message.added", self._on_push_message + ) + + self._main_layout = main_layout + + self._main_context_widget = main_context_widget + + self._header_label = header_label + self._main_splitter = main_splitter + + self._projects_combobox = projects_combobox + self._folders_widget = folders_widget + self._tasks_widget = tasks_widget + + self._variant_input = variant_input + self._folder_name_input = folder_name_input + self._comment_input = comment_input + + self._publish_btn = publish_btn + + self._overlay_widget = overlay_widget + self._overlay_close_btn = overlay_close_btn + self._overlay_try_btn = overlay_try_btn + self._overlay_label = overlay_label + + self._user_input_changed_timer = user_input_changed_timer + # Store current value on input text change + # The value is unset when is passed to controller + # The goal is to have controll over changes happened during user change + # in UI and controller auto-changes + self._variant_input_text = None + self._new_folder_name_input_text = None + self._comment_input_text = None + + self._first_show = True + self._show_timer = show_timer + self._show_counter = 0 + + self._main_thread_timer = 
main_thread_timer + self._main_thread_timer_can_stop = True + self._last_submit_message = None + self._process_item_id = None + + self._variant_is_valid = None + self._folder_is_valid = None + + publish_btn.setEnabled(False) + overlay_close_btn.setVisible(False) + overlay_try_btn.setVisible(False) + + # Support of public api function of controller + def set_source(self, project_name, version_id): + """Set source project and version. + + Call the method on controller. + + Args: + project_name (Union[str, None]): Name of project. + version_id (Union[str, None]): Version id. + """ + + self._controller.set_source(project_name, version_id) + + def showEvent(self, event): + super(PushToContextSelectWindow, self).showEvent(event) + if self._first_show: + self._first_show = False + self._on_first_show() + + def refresh(self): + user_values = self._controller.get_user_values() + new_folder_name = user_values["new_folder_name"] + variant = user_values["variant"] + self._folder_name_input.setText(new_folder_name or "") + self._variant_input.setText(variant or "") + self._invalidate_variant(user_values["is_variant_valid"]) + self._invalidate_new_folder_name( + new_folder_name, user_values["is_new_folder_name_valid"] + ) + + self._projects_combobox.refresh() + + def _on_first_show(self): + width = 740 + height = 640 + inputs_width = 360 + self.setStyleSheet(load_stylesheet()) + self.resize(width, height) + self._main_splitter.setSizes([width - inputs_width, inputs_width]) + self._show_timer.start() + + def _on_show_timer(self): + if self._show_counter < 3: + self._show_counter += 1 + return + self._show_timer.stop() + + self._show_counter = 0 + + self.refresh() + + def _on_new_asset_change(self, text): + self._new_folder_name_input_text = text + self._user_input_changed_timer.start() + + def _on_variant_change(self, text): + self._variant_input_text = text + self._user_input_changed_timer.start() + + def _on_comment_change(self, text): + self._comment_input_text = text + self._user_input_changed_timer.start() + + def _on_user_input_timer(self): + folder_name = self._new_folder_name_input_text + if folder_name is not None: + self._new_folder_name_input_text = None + self._controller.set_user_value_folder_name(folder_name) + + variant = self._variant_input_text + if variant is not None: + self._variant_input_text = None + self._controller.set_user_value_variant(variant) + + comment = self._comment_input_text + if comment is not None: + self._comment_input_text = None + self._controller.set_user_value_comment(comment) + + def _on_controller_new_asset_change(self, event): + folder_name = event["new_folder_name"] + if ( + self._new_folder_name_input_text is None + and folder_name != self._folder_name_input.text() + ): + self._folder_name_input.setText(folder_name) + + self._invalidate_new_folder_name(folder_name, event["is_valid"]) + + def _on_controller_variant_change(self, event): + is_valid = event["is_valid"] + variant = event["variant"] + if ( + self._variant_input_text is None + and variant != self._variant_input.text() + ): + self._variant_input.setText(variant) + + self._invalidate_variant(is_valid) + + def _on_controller_comment_change(self, event): + comment = event["comment"] + if ( + self._comment_input_text is None + and comment != self._comment_input.text() + ): + self._comment_input.setText(comment) + + def _on_controller_source_change(self): + self._header_label.setText(self._controller.get_source_label()) + + def _invalidate_new_folder_name(self, folder_name, is_valid): + 
self._tasks_widget.setVisible(not folder_name) + if self._folder_is_valid is is_valid: + return + self._folder_is_valid = is_valid + state = "" + if folder_name: + if is_valid is True: + state = "valid" + elif is_valid is False: + state = "invalid" + set_style_property( + self._folder_name_input, "state", state + ) + + def _invalidate_variant(self, is_valid): + if self._variant_is_valid is is_valid: + return + self._variant_is_valid = is_valid + state = "valid" if is_valid else "invalid" + set_style_property(self._variant_input, "state", state) + + def _on_submission_change(self, event): + self._publish_btn.setEnabled(event["enabled"]) + + def _on_close_click(self): + self.close() + + def _on_select_click(self): + self._process_item_id = self._controller.submit(wait=False) + + def _on_try_again_click(self): + self._process_item_id = None + self._last_submit_message = None + + self._overlay_close_btn.setVisible(False) + self._overlay_try_btn.setVisible(False) + self._main_layout.setCurrentWidget(self._main_context_widget) + + def _on_main_thread_timer(self): + if self._last_submit_message: + self._overlay_label.setText(self._last_submit_message) + self._last_submit_message = None + + process_status = self._controller.get_process_item_status( + self._process_item_id + ) + push_failed = process_status["failed"] + fail_traceback = process_status["full_traceback"] + if self._main_thread_timer_can_stop: + self._main_thread_timer.stop() + self._overlay_close_btn.setVisible(True) + if push_failed and not fail_traceback: + self._overlay_try_btn.setVisible(True) + + if push_failed: + message = "Push Failed:\n{}".format(process_status["fail_reason"]) + if fail_traceback: + message += "\n{}".format(fail_traceback) + self._overlay_label.setText(message) + set_style_property(self._overlay_close_btn, "state", "error") + + if self._main_thread_timer_can_stop: + # Join thread in controller + self._controller.wait_for_process_thread() + # Reset process item to None + self._process_item_id = None + + def _on_controller_submit_start(self): + self._main_thread_timer_can_stop = False + self._main_thread_timer.start() + self._main_layout.setCurrentWidget(self._overlay_widget) + self._overlay_label.setText("Submittion started") + + def _on_controller_submit_end(self): + self._main_thread_timer_can_stop = True + + def _on_push_message(self, event): + self._last_submit_message = event["message"] diff --git a/openpype/tools/ayon_sceneinventory/__init__.py b/openpype/tools/ayon_sceneinventory/__init__.py new file mode 100644 index 0000000000..5412e2fea2 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/__init__.py @@ -0,0 +1,6 @@ +from .control import SceneInventoryController + + +__all__ = ( + "SceneInventoryController", +) diff --git a/openpype/tools/ayon_sceneinventory/control.py b/openpype/tools/ayon_sceneinventory/control.py new file mode 100644 index 0000000000..e98b0e307b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/control.py @@ -0,0 +1,134 @@ +import ayon_api + +from openpype.lib.events import QueuedEventSystem +from openpype.host import ILoadHost +from openpype.pipeline import ( + registered_host, + get_current_context, +) +from openpype.tools.ayon_utils.models import HierarchyModel + +from .models import SiteSyncModel + + +class SceneInventoryController: + """This is a temporary controller for AYON. + + Goal of this temporary controller is to provide a way to get current + context instead of using 'AvalonMongoDB' object (or 'legacy_io'). 
+ + Also provides (hopefully) cleaner api for site sync. + """ + + def __init__(self, host=None): + if host is None: + host = registered_host() + self._host = host + self._current_context = None + self._current_project = None + self._current_folder_id = None + self._current_folder_set = False + + self._site_sync_model = SiteSyncModel(self) + # Switch dialog requirements + self._hierarchy_model = HierarchyModel(self) + self._event_system = self._create_event_system() + + def emit_event(self, topic, data=None, source=None): + if data is None: + data = {} + self._event_system.emit(topic, data, source) + + def register_event_callback(self, topic, callback): + self._event_system.add_callback(topic, callback) + + def reset(self): + self._current_context = None + self._current_project = None + self._current_folder_id = None + self._current_folder_set = False + + self._site_sync_model.reset() + self._hierarchy_model.reset() + + def get_current_context(self): + if self._current_context is None: + if hasattr(self._host, "get_current_context"): + self._current_context = self._host.get_current_context() + else: + self._current_context = get_current_context() + return self._current_context + + def get_current_project_name(self): + if self._current_project is None: + self._current_project = self.get_current_context()["project_name"] + return self._current_project + + def get_current_folder_id(self): + if self._current_folder_set: + return self._current_folder_id + + context = self.get_current_context() + project_name = context["project_name"] + folder_path = context.get("folder_path") + folder_name = context.get("asset_name") + folder_id = None + if folder_path: + folder = ayon_api.get_folder_by_path(project_name, folder_path) + if folder: + folder_id = folder["id"] + elif folder_name: + for folder in ayon_api.get_folders( + project_name, folder_names=[folder_name] + ): + folder_id = folder["id"] + break + + self._current_folder_id = folder_id + self._current_folder_set = True + return self._current_folder_id + + def get_containers(self): + host = self._host + if isinstance(host, ILoadHost): + return host.get_containers() + elif hasattr(host, "ls"): + return host.ls() + return [] + + # Site Sync methods + def is_sync_server_enabled(self): + return self._site_sync_model.is_sync_server_enabled() + + def get_sites_information(self): + return self._site_sync_model.get_sites_information() + + def get_site_provider_icons(self): + return self._site_sync_model.get_site_provider_icons() + + def get_representations_site_progress(self, representation_ids): + return self._site_sync_model.get_representations_site_progress( + representation_ids + ) + + def resync_representations(self, representation_ids, site_type): + return self._site_sync_model.resync_representations( + representation_ids, site_type + ) + + # Switch dialog methods + def get_folder_items(self, project_name, sender=None): + return self._hierarchy_model.get_folder_items(project_name, sender) + + def get_folder_label(self, folder_id): + if not folder_id: + return None + project_name = self.get_current_project_name() + folder_item = self._hierarchy_model.get_folder_item( + project_name, folder_id) + if folder_item is None: + return None + return folder_item.label + + def _create_event_system(self): + return QueuedEventSystem() diff --git a/openpype/tools/ayon_sceneinventory/model.py b/openpype/tools/ayon_sceneinventory/model.py new file mode 100644 index 0000000000..16924b0a7e --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/model.py @@ -0,0 
+1,622 @@ +import collections +import re +import logging +import uuid +import copy + +from collections import defaultdict + +from qtpy import QtCore, QtGui +import qtawesome + +from openpype.client import ( + get_assets, + get_subsets, + get_versions, + get_last_version_by_subset_id, + get_representations, +) +from openpype.pipeline import ( + get_current_project_name, + schema, + HeroVersionType, +) +from openpype.style import get_default_entity_icon_color +from openpype.tools.utils.models import TreeModel, Item + + +def walk_hierarchy(node): + """Recursively yield group node.""" + for child in node.children(): + if child.get("isGroupNode"): + yield child + + for _child in walk_hierarchy(child): + yield _child + + +class InventoryModel(TreeModel): + """The model for the inventory""" + + Columns = [ + "Name", + "version", + "count", + "family", + "group", + "loader", + "objectName", + "active_site", + "remote_site", + ] + active_site_col = Columns.index("active_site") + remote_site_col = Columns.index("remote_site") + + OUTDATED_COLOR = QtGui.QColor(235, 30, 30) + CHILD_OUTDATED_COLOR = QtGui.QColor(200, 160, 30) + GRAYOUT_COLOR = QtGui.QColor(160, 160, 160) + + UniqueRole = QtCore.Qt.UserRole + 2 # unique label role + + def __init__(self, controller, parent=None): + super(InventoryModel, self).__init__(parent) + self.log = logging.getLogger(self.__class__.__name__) + + self._controller = controller + + self._hierarchy_view = False + + self._default_icon_color = get_default_entity_icon_color() + + site_icons = self._controller.get_site_provider_icons() + + self._site_icons = { + provider: QtGui.QIcon(icon_path) + for provider, icon_path in site_icons.items() + } + + def outdated(self, item): + value = item.get("version") + if isinstance(value, HeroVersionType): + return False + + if item.get("version") == item.get("highest_version"): + return False + return True + + def data(self, index, role): + if not index.isValid(): + return + + item = index.internalPointer() + + if role == QtCore.Qt.FontRole: + # Make top-level entries bold + if item.get("isGroupNode") or item.get("isNotSet"): # group-item + font = QtGui.QFont() + font.setBold(True) + return font + + if role == QtCore.Qt.ForegroundRole: + # Set the text color to the OUTDATED_COLOR when the + # collected version is not the same as the highest version + key = self.Columns[index.column()] + if key == "version": # version + if item.get("isGroupNode"): # group-item + if self.outdated(item): + return self.OUTDATED_COLOR + + if self._hierarchy_view: + # If current group is not outdated, check if any + # outdated children. + for _node in walk_hierarchy(item): + if self.outdated(_node): + return self.CHILD_OUTDATED_COLOR + else: + + if self._hierarchy_view: + # Although this is not a group item, we still need + # to distinguish which one contain outdated child. 
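
For reference, the outdated flag driving this colouring reduces to a single comparison: an item is outdated when its collected version differs from the highest version known for its product, and hero versions are never flagged. A self-contained sketch of that rule (the `HeroVersion` class and the dictionaries below are illustrative stand-ins, not the real `HeroVersionType` or model items):

```python
class HeroVersion(int):
    """Stand-in for HeroVersionType; only the isinstance check matters here."""


def is_outdated(item):
    version = item.get("version")
    # Hero versions always follow a concrete version, so they are never flagged.
    if isinstance(version, HeroVersion):
        return False
    # Anything that differs from the highest known version is outdated,
    # including items where one of the two values is missing.
    return version != item.get("highest_version")


print(is_outdated({"version": 3, "highest_version": 5}))               # True
print(is_outdated({"version": 5, "highest_version": 5}))               # False
print(is_outdated({"version": HeroVersion(2), "highest_version": 5}))  # False
```
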
+ for _node in walk_hierarchy(item): + if self.outdated(_node): + return self.CHILD_OUTDATED_COLOR.darker(150) + + return self.GRAYOUT_COLOR + + if key == "Name" and not item.get("isGroupNode"): + return self.GRAYOUT_COLOR + + # Add icons + if role == QtCore.Qt.DecorationRole: + if index.column() == 0: + # Override color + color = item.get("color", self._default_icon_color) + if item.get("isGroupNode"): # group-item + return qtawesome.icon("fa.folder", color=color) + if item.get("isNotSet"): + return qtawesome.icon("fa.exclamation-circle", color=color) + + return qtawesome.icon("fa.file-o", color=color) + + if index.column() == 3: + # Family icon + return item.get("familyIcon", None) + + column_name = self.Columns[index.column()] + + if column_name == "group" and item.get("group"): + return qtawesome.icon("fa.object-group", + color=get_default_entity_icon_color()) + + if item.get("isGroupNode"): + if column_name == "active_site": + provider = item.get("active_site_provider") + return self._site_icons.get(provider) + + if column_name == "remote_site": + provider = item.get("remote_site_provider") + return self._site_icons.get(provider) + + if role == QtCore.Qt.DisplayRole and item.get("isGroupNode"): + column_name = self.Columns[index.column()] + progress = None + if column_name == "active_site": + progress = item.get("active_site_progress", 0) + elif column_name == "remote_site": + progress = item.get("remote_site_progress", 0) + if progress is not None: + return "{}%".format(max(progress, 0) * 100) + + if role == self.UniqueRole: + return item["representation"] + item.get("objectName", "") + + return super(InventoryModel, self).data(index, role) + + def set_hierarchy_view(self, state): + """Set whether to display subsets in hierarchy view.""" + state = bool(state) + + if state != self._hierarchy_view: + self._hierarchy_view = state + + def refresh(self, selected=None, containers=None): + """Refresh the model""" + + # for debugging or testing, injecting items from outside + if containers is None: + containers = self._controller.get_containers() + + self.clear() + if not selected or not self._hierarchy_view: + self._add_containers(containers) + return + + # Filter by cherry-picked items + self._add_containers(( + container + for container in containers + if container["objectName"] in selected + )) + + def _add_containers(self, containers, parent=None): + """Add the items to the model. + + The items should be formatted similar to `api.ls()` returns, an item + is then represented as: + {"filename_v001.ma": [full/filename/of/loaded/filename_v001.ma, + full/filename/of/loaded/filename_v001.ma], + "nodetype" : "reference", + "node": "referenceNode1"} + + Note: When performing an additional call to `add_items` it will *not* + group the new items with previously existing item groups of the + same type. + + Args: + containers (generator): Container items. + parent (Item, optional): Set this item as parent for the added + items when provided. Defaults to the root of the model. 
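
The method below first collapses containers that point at the same representation into one group row, keying purely on the `representation` id stored in each container dictionary. A rough illustration with hand-written container data (the ids and names are made up):

```python
from collections import defaultdict

# Hypothetical containers, shaped like a host's ls()/get_containers() output.
containers = [
    {"objectName": "chairModel_01", "representation": "repre-a", "namespace": "chair_01"},
    {"objectName": "chairModel_02", "representation": "repre-a", "namespace": "chair_02"},
    {"objectName": "treeModel_01", "representation": "repre-b", "namespace": "tree_01"},
]

grouped = defaultdict(list)
for container in containers:
    grouped[container["representation"]].append(container)

for repre_id, group in grouped.items():
    # One group header per representation, one child row per loaded container.
    print(repre_id, "->", [c["namespace"] for c in group])
```
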
+ + Returns: + node.Item: root node which has children added based on the data + """ + + project_name = get_current_project_name() + + self.beginResetModel() + + # Group by representation + grouped = defaultdict(lambda: {"containers": list()}) + for container in containers: + repre_id = container["representation"] + grouped[repre_id]["containers"].append(container) + + ( + repres_by_id, + versions_by_id, + products_by_id, + folders_by_id, + ) = self._query_entities(project_name, set(grouped.keys())) + # Add to model + not_found = defaultdict(list) + not_found_ids = [] + for repre_id, group_dict in sorted(grouped.items()): + group_containers = group_dict["containers"] + representation = repres_by_id.get(repre_id) + if not representation: + not_found["representation"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + version = versions_by_id.get(representation["parent"]) + if not version: + not_found["version"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + product = products_by_id.get(version["parent"]) + if not product: + not_found["product"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + folder = folders_by_id.get(product["parent"]) + if not folder: + not_found["folder"].extend(group_containers) + not_found_ids.append(repre_id) + continue + + group_dict.update({ + "representation": representation, + "version": version, + "subset": product, + "asset": folder + }) + + for _repre_id in not_found_ids: + grouped.pop(_repre_id) + + for where, group_containers in not_found.items(): + # create the group header + group_node = Item() + name = "< NOT FOUND - {} >".format(where) + group_node["Name"] = name + group_node["representation"] = name + group_node["count"] = len(group_containers) + group_node["isGroupNode"] = False + group_node["isNotSet"] = True + + self.add_child(group_node, parent=parent) + + for container in group_containers: + item_node = Item() + item_node.update(container) + item_node["Name"] = container.get("objectName", "NO NAME") + item_node["isNotFound"] = True + self.add_child(item_node, parent=group_node) + + # TODO Use product icons + family_icon = qtawesome.icon( + "fa.folder", color="#0091B2" + ) + # Prepare site sync specific data + progress_by_id = self._controller.get_representations_site_progress( + set(grouped.keys()) + ) + sites_info = self._controller.get_sites_information() + + for repre_id, group_dict in sorted(grouped.items()): + group_containers = group_dict["containers"] + representation = group_dict["representation"] + version = group_dict["version"] + subset = group_dict["subset"] + asset = group_dict["asset"] + + # Get the primary family + maj_version, _ = schema.get_schema_version(subset["schema"]) + if maj_version < 3: + src_doc = version + else: + src_doc = subset + + prim_family = src_doc["data"].get("family") + if not prim_family: + families = src_doc["data"].get("families") + if families: + prim_family = families[0] + + # Store the highest available version so the model can know + # whether current version is currently up-to-date. 
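
The primary family resolved just above depends on the project schema generation: pre-3 documents keep the family on the version, newer ones keep it on the product (subset), with a `families` list as fallback. Roughly, in isolation (function name and sample documents are illustrative):

```python
def resolve_primary_family(product_doc, version_doc, schema_major):
    """Sketch of the family lookup used for the group rows."""
    # Old schemas (< 3) stored the family on the version document,
    # newer ones keep it on the product (subset) document.
    src_doc = version_doc if schema_major < 3 else product_doc
    family = src_doc["data"].get("family")
    if not family:
        families = src_doc["data"].get("families") or []
        family = families[0] if families else None
    return family


print(resolve_primary_family(
    {"data": {"families": ["model", "review"]}},
    {"data": {}},
    schema_major=3,
))  # -> "model"
```
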
+ highest_version = get_last_version_by_subset_id( + project_name, version["parent"] + ) + + # create the group header + group_node = Item() + group_node["Name"] = "{}_{}: ({})".format( + asset["name"], subset["name"], representation["name"] + ) + group_node["representation"] = repre_id + group_node["version"] = version["name"] + group_node["highest_version"] = highest_version["name"] + group_node["family"] = prim_family or "" + group_node["familyIcon"] = family_icon + group_node["count"] = len(group_containers) + group_node["isGroupNode"] = True + group_node["group"] = subset["data"].get("subsetGroup") + + # Site sync specific data + progress = progress_by_id[repre_id] + group_node.update(sites_info) + group_node["active_site_progress"] = progress["active_site"] + group_node["remote_site_progress"] = progress["remote_site"] + + self.add_child(group_node, parent=parent) + + for container in group_containers: + item_node = Item() + item_node.update(container) + + # store the current version on the item + item_node["version"] = version["name"] + + # Remapping namespace to item name. + # Noted that the name key is capital "N", by doing this, we + # can view namespace in GUI without changing container data. + item_node["Name"] = container["namespace"] + + self.add_child(item_node, parent=group_node) + + self.endResetModel() + + return self._root_item + + def _query_entities(self, project_name, repre_ids): + """Query entities for representations from containers. + + Returns: + tuple[dict, dict, dict, dict]: Representation, version, product + and folder documents by id. + """ + + repres_by_id = {} + versions_by_id = {} + products_by_id = {} + folders_by_id = {} + output = ( + repres_by_id, + versions_by_id, + products_by_id, + folders_by_id, + ) + + filtered_repre_ids = set() + for repre_id in repre_ids: + # Filter out invalid representation ids + # NOTE: This is added because scenes from OpenPype did contain + # ObjectId from mongo. 
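
The guard that follows exists because scenes published through OpenPype's Mongo backend can still reference 24-character ObjectId strings, which the UUID-keyed AYON queries cannot resolve; those ids are simply skipped. The check boils down to "does it parse as a UUID" (the helper name and sample ids below are made up):

```python
import uuid


def keep_valid_repre_ids(repre_ids):
    """Keep ids that parse as UUIDs; legacy Mongo ObjectIds are dropped."""
    valid = set()
    for repre_id in repre_ids:
        try:
            uuid.UUID(repre_id)
        except (TypeError, ValueError):
            continue
        valid.add(repre_id)
    return valid


print(keep_valid_repre_ids([
    "8c0d4f10-6c2f-4d8a-9a5e-1f2b3c4d5e6f",  # AYON-style UUID -> kept
    "5f3e9b2a1c4d5e6f7a8b9c0d",              # 24-char ObjectId -> dropped
]))
```
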
+ try: + uuid.UUID(repre_id) + filtered_repre_ids.add(repre_id) + except ValueError: + continue + if not filtered_repre_ids: + return output + + repre_docs = get_representations(project_name, repre_ids) + repres_by_id.update({ + repre_doc["_id"]: repre_doc + for repre_doc in repre_docs + }) + version_ids = { + repre_doc["parent"] for repre_doc in repres_by_id.values() + } + if not version_ids: + return output + + version_docs = get_versions(project_name, version_ids, hero=True) + versions_by_id.update({ + version_doc["_id"]: version_doc + for version_doc in version_docs + }) + hero_versions_by_subversion_id = collections.defaultdict(list) + for version_doc in versions_by_id.values(): + if version_doc["type"] != "hero_version": + continue + subversion = version_doc["version_id"] + hero_versions_by_subversion_id[subversion].append(version_doc) + + if hero_versions_by_subversion_id: + subversion_ids = set( + hero_versions_by_subversion_id.keys() + ) + subversion_docs = get_versions(project_name, subversion_ids) + for subversion_doc in subversion_docs: + subversion_id = subversion_doc["_id"] + subversion_ids.discard(subversion_id) + h_version_docs = hero_versions_by_subversion_id[subversion_id] + for version_doc in h_version_docs: + version_doc["name"] = HeroVersionType( + subversion_doc["name"] + ) + version_doc["data"] = copy.deepcopy( + subversion_doc["data"] + ) + + for subversion_id in subversion_ids: + h_version_docs = hero_versions_by_subversion_id[subversion_id] + for version_doc in h_version_docs: + versions_by_id.pop(version_doc["_id"]) + + product_ids = { + version_doc["parent"] + for version_doc in versions_by_id.values() + } + if not product_ids: + return output + product_docs = get_subsets(project_name, product_ids) + products_by_id.update({ + product_doc["_id"]: product_doc + for product_doc in product_docs + }) + folder_ids = { + product_doc["parent"] + for product_doc in products_by_id.values() + } + if not folder_ids: + return output + + folder_docs = get_assets(project_name, folder_ids) + folders_by_id.update({ + folder_doc["_id"]: folder_doc + for folder_doc in folder_docs + }) + return output + + +class FilterProxyModel(QtCore.QSortFilterProxyModel): + """Filter model to where key column's value is in the filtered tags""" + + def __init__(self, *args, **kwargs): + super(FilterProxyModel, self).__init__(*args, **kwargs) + self._filter_outdated = False + self._hierarchy_view = False + + def filterAcceptsRow(self, row, parent): + model = self.sourceModel() + source_index = model.index(row, self.filterKeyColumn(), parent) + + # Always allow bottom entries (individual containers), since their + # parent group hidden if it wouldn't have been validated. 
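
Text typed into the filter is matched as literal text, not as a regular expression: a few lines below, the pattern is passed through `re.escape()` before the case-insensitive `re.search` in `_matches`. A tiny demonstration of what the escaping buys (sample strings are made up):

```python
import re

typed_text = "chair.main"            # what the user typed into the filter
pattern = re.escape(typed_text)      # "chair\\.main" - the dot is now literal

print(bool(re.search(pattern, "asset_chair.main_v003", re.IGNORECASE)))  # True
print(bool(re.search(pattern, "asset_chair_main_v003", re.IGNORECASE)))  # False
# Without re.escape the second call would also match, because "." would
# behave as "any character" and happily swallow the underscore.
```
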
+ rows = model.rowCount(source_index) + if not rows: + return True + + # Filter by regex + if hasattr(self, "filterRegExp"): + regex = self.filterRegExp() + else: + regex = self.filterRegularExpression() + pattern = regex.pattern() + if pattern: + pattern = re.escape(pattern) + + if not self._matches(row, parent, pattern): + return False + + if self._filter_outdated: + # When filtering to outdated we filter the up to date entries + # thus we "allow" them when they are outdated + if not self._is_outdated(row, parent): + return False + + return True + + def set_filter_outdated(self, state): + """Set whether to show the outdated entries only.""" + state = bool(state) + + if state != self._filter_outdated: + self._filter_outdated = bool(state) + self.invalidateFilter() + + def set_hierarchy_view(self, state): + state = bool(state) + + if state != self._hierarchy_view: + self._hierarchy_view = state + + def _is_outdated(self, row, parent): + """Return whether row is outdated. + + A row is considered outdated if it has "version" and "highest_version" + data and in the internal data structure, and they are not of an + equal value. + + """ + def outdated(node): + version = node.get("version", None) + highest = node.get("highest_version", None) + + # Always allow indices that have no version data at all + if version is None and highest is None: + return True + + # If either a version or highest is present but not the other + # consider the item invalid. + if not self._hierarchy_view: + # Skip this check if in hierarchy view, or the child item + # node will be hidden even it's actually outdated. + if version is None or highest is None: + return False + return version != highest + + index = self.sourceModel().index(row, self.filterKeyColumn(), parent) + + # The scene contents are grouped by "representation", e.g. the same + # "representation" loaded twice is grouped under the same header. + # Since the version check filters these parent groups we skip that + # check for the individual children. + has_parent = index.parent().isValid() + if has_parent and not self._hierarchy_view: + return True + + # Filter to those that have the different version numbers + node = index.internalPointer() + if outdated(node): + return True + + if self._hierarchy_view: + for _node in walk_hierarchy(node): + if outdated(_node): + return True + + return False + + def _matches(self, row, parent, pattern): + """Return whether row matches regex pattern. 
+ + Args: + row (int): row number in model + parent (QtCore.QModelIndex): parent index + pattern (regex.pattern): pattern to check for in key + + Returns: + bool + + """ + model = self.sourceModel() + column = self.filterKeyColumn() + role = self.filterRole() + + def matches(row, parent, pattern): + index = model.index(row, column, parent) + key = model.data(index, role) + if re.search(pattern, key, re.IGNORECASE): + return True + + if matches(row, parent, pattern): + return True + + # Also allow if any of the children matches + source_index = model.index(row, column, parent) + rows = model.rowCount(source_index) + + if any( + matches(idx, source_index, pattern) + for idx in range(rows) + ): + return True + + if not self._hierarchy_view: + return False + + for idx in range(rows): + child_index = model.index(idx, column, source_index) + child_rows = model.rowCount(child_index) + return any( + self._matches(child_idx, child_index, pattern) + for child_idx in range(child_rows) + ) + + return True diff --git a/openpype/tools/ayon_sceneinventory/models/__init__.py b/openpype/tools/ayon_sceneinventory/models/__init__.py new file mode 100644 index 0000000000..c861d3c1a0 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/models/__init__.py @@ -0,0 +1,6 @@ +from .site_sync import SiteSyncModel + + +__all__ = ( + "SiteSyncModel", +) diff --git a/openpype/tools/ayon_sceneinventory/models/site_sync.py b/openpype/tools/ayon_sceneinventory/models/site_sync.py new file mode 100644 index 0000000000..b8c9443230 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/models/site_sync.py @@ -0,0 +1,176 @@ +from openpype.client import get_representations +from openpype.modules import ModulesManager + +NOT_SET = object() + + +class SiteSyncModel: + def __init__(self, controller): + self._controller = controller + + self._sync_server_module = NOT_SET + self._sync_server_enabled = None + self._active_site = NOT_SET + self._remote_site = NOT_SET + self._active_site_provider = NOT_SET + self._remote_site_provider = NOT_SET + + def reset(self): + self._sync_server_module = NOT_SET + self._sync_server_enabled = None + self._active_site = NOT_SET + self._remote_site = NOT_SET + self._active_site_provider = NOT_SET + self._remote_site_provider = NOT_SET + + def is_sync_server_enabled(self): + """Site sync is enabled. + + Returns: + bool: Is enabled or not. + """ + + self._cache_sync_server_module() + return self._sync_server_enabled + + def get_site_provider_icons(self): + """Icon paths per provider. + + Returns: + dict[str, str]: Path by provider name. 
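
`SiteSyncModel` caches every lookup behind the module-level `NOT_SET = object()` sentinel rather than `None`, because `None` is itself a meaningful cached result (for example, when site sync is disabled). A toy illustration of that pattern, with invented names and a stubbed query:

```python
_NOT_SET = object()  # unique sentinel, distinct from any real value including None


class _LazySiteCache:
    """Toy illustration of the caching pattern, not the real model."""

    def __init__(self):
        self._active_site = _NOT_SET

    def get_active_site(self):
        # An "is None" check would not work here: None is a legitimate
        # cached result and must not trigger another lookup.
        if self._active_site is _NOT_SET:
            self._active_site = self._query_active_site()
        return self._active_site

    def _query_active_site(self):
        print("expensive lookup runs only once")
        return None  # pretend site sync is disabled


cache = _LazySiteCache()
cache.get_active_site()  # triggers the lookup
cache.get_active_site()  # second call is served from the cache
```
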
+ """ + + site_sync = self._get_sync_server_module() + if site_sync is None: + return {} + return site_sync.get_site_icons() + + def get_sites_information(self): + return { + "active_site": self._get_active_site(), + "active_site_provider": self._get_active_site_provider(), + "remote_site": self._get_remote_site(), + "remote_site_provider": self._get_remote_site_provider() + } + + def get_representations_site_progress(self, representation_ids): + """Get progress of representations sync.""" + + representation_ids = set(representation_ids) + output = { + repre_id: { + "active_site": 0, + "remote_site": 0, + } + for repre_id in representation_ids + } + if not self.is_sync_server_enabled(): + return output + + project_name = self._controller.get_current_project_name() + site_sync = self._get_sync_server_module() + repre_docs = get_representations(project_name, representation_ids) + active_site = self._get_active_site() + remote_site = self._get_remote_site() + + for repre_doc in repre_docs: + repre_output = output[repre_doc["_id"]] + result = site_sync.get_progress_for_repre( + repre_doc, active_site, remote_site + ) + repre_output["active_site"] = result[active_site] + repre_output["remote_site"] = result[remote_site] + + return output + + def resync_representations(self, representation_ids, site_type): + """ + + Args: + representation_ids (Iterable[str]): Representation ids. + site_type (Literal[active_site, remote_site]): Site type. + """ + + project_name = self._controller.get_current_project_name() + site_sync = self._get_sync_server_module() + active_site = self._get_active_site() + remote_site = self._get_remote_site() + progress = self.get_representations_site_progress( + representation_ids + ) + for repre_id in representation_ids: + repre_progress = progress.get(repre_id) + if not repre_progress: + continue + + if site_type == "active_site": + # check opposite from added site, must be 1 or unable to sync + check_progress = repre_progress["remote_site"] + site = active_site + else: + check_progress = repre_progress["active_site"] + site = remote_site + + if check_progress == 1: + site_sync.add_site( + project_name, repre_id, site, force=True + ) + + def _get_sync_server_module(self): + self._cache_sync_server_module() + return self._sync_server_module + + def _cache_sync_server_module(self): + if self._sync_server_module is not NOT_SET: + return self._sync_server_module + manager = ModulesManager() + site_sync = manager.modules_by_name.get("sync_server") + sync_enabled = site_sync is not None and site_sync.enabled + self._sync_server_module = site_sync + self._sync_server_enabled = sync_enabled + + def _get_active_site(self): + if self._active_site is NOT_SET: + self._cache_sites() + return self._active_site + + def _get_remote_site(self): + if self._remote_site is NOT_SET: + self._cache_sites() + return self._remote_site + + def _get_active_site_provider(self): + if self._active_site_provider is NOT_SET: + self._cache_sites() + return self._active_site_provider + + def _get_remote_site_provider(self): + if self._remote_site_provider is NOT_SET: + self._cache_sites() + return self._remote_site_provider + + def _cache_sites(self): + site_sync = self._get_sync_server_module() + active_site = None + remote_site = None + active_site_provider = None + remote_site_provider = None + if site_sync is not None: + project_name = self._controller.get_current_project_name() + active_site = site_sync.get_active_site(project_name) + remote_site = site_sync.get_remote_site(project_name) + 
active_site_provider = "studio" + remote_site_provider = "studio" + if active_site != "studio": + active_site_provider = site_sync.get_active_provider( + project_name, active_site + ) + if remote_site != "studio": + remote_site_provider = site_sync.get_active_provider( + project_name, remote_site + ) + + self._active_site = active_site + self._remote_site = remote_site + self._active_site_provider = active_site_provider + self._remote_site_provider = remote_site_provider diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py b/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py new file mode 100644 index 0000000000..4c07832829 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/__init__.py @@ -0,0 +1,6 @@ +from .dialog import SwitchAssetDialog + + +__all__ = ( + "SwitchAssetDialog", +) diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py b/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py new file mode 100644 index 0000000000..2ebed7f89b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/dialog.py @@ -0,0 +1,1333 @@ +import collections +import logging + +from qtpy import QtWidgets, QtCore +import qtawesome + +from openpype.client import ( + get_assets, + get_subset_by_name, + get_subsets, + get_versions, + get_hero_versions, + get_last_versions, + get_representations, +) +from openpype.pipeline.load import ( + discover_loader_plugins, + switch_container, + get_repres_contexts, + loaders_from_repre_context, + LoaderSwitchNotImplementedError, + IncompatibleLoaderError, + LoaderNotFoundError +) + +from .widgets import ( + ButtonWithMenu, + SearchComboBox +) +from .folders_input import FoldersField + +log = logging.getLogger("SwitchAssetDialog") + + +class ValidationState: + def __init__(self): + self.folder_ok = True + self.product_ok = True + self.repre_ok = True + + @property + def all_ok(self): + return ( + self.folder_ok + and self.product_ok + and self.repre_ok + ) + + +class SwitchAssetDialog(QtWidgets.QDialog): + """Widget to support asset switching""" + + MIN_WIDTH = 550 + + switched = QtCore.Signal() + + def __init__(self, controller, parent=None, items=None): + super(SwitchAssetDialog, self).__init__(parent) + + self.setWindowTitle("Switch selected items ...") + + # Force and keep focus dialog + self.setModal(True) + + folders_field = FoldersField(controller, self) + products_combox = SearchComboBox(self) + repres_combobox = SearchComboBox(self) + + products_combox.set_placeholder("") + repres_combobox.set_placeholder("") + + folder_label = QtWidgets.QLabel(self) + product_label = QtWidgets.QLabel(self) + repre_label = QtWidgets.QLabel(self) + + current_folder_btn = QtWidgets.QPushButton("Use current folder", self) + + accept_icon = qtawesome.icon("fa.check", color="white") + accept_btn = ButtonWithMenu(self) + accept_btn.setIcon(accept_icon) + + main_layout = QtWidgets.QGridLayout(self) + # Folder column + main_layout.addWidget(current_folder_btn, 0, 0) + main_layout.addWidget(folders_field, 1, 0) + main_layout.addWidget(folder_label, 2, 0) + # Product column + main_layout.addWidget(products_combox, 1, 1) + main_layout.addWidget(product_label, 2, 1) + # Representation column + main_layout.addWidget(repres_combobox, 1, 2) + main_layout.addWidget(repre_label, 2, 2) + # Btn column + main_layout.addWidget(accept_btn, 1, 3) + main_layout.setColumnStretch(0, 1) + main_layout.setColumnStretch(1, 1) + main_layout.setColumnStretch(2, 1) + main_layout.setColumnStretch(3, 0) + + show_timer = 
QtCore.QTimer() + show_timer.setInterval(0) + show_timer.setSingleShot(False) + + show_timer.timeout.connect(self._on_show_timer) + folders_field.value_changed.connect( + self._combobox_value_changed + ) + products_combox.currentIndexChanged.connect( + self._combobox_value_changed + ) + repres_combobox.currentIndexChanged.connect( + self._combobox_value_changed + ) + accept_btn.clicked.connect(self._on_accept) + current_folder_btn.clicked.connect(self._on_current_folder) + + self._show_timer = show_timer + self._show_counter = 0 + + self._current_folder_btn = current_folder_btn + + self._folders_field = folders_field + self._products_combox = products_combox + self._representations_box = repres_combobox + + self._folder_label = folder_label + self._product_label = product_label + self._repre_label = repre_label + + self._accept_btn = accept_btn + + self.setMinimumWidth(self.MIN_WIDTH) + + # Set default focus to accept button so you don't directly type in + # first asset field, this also allows to see the placeholder value. + accept_btn.setFocus() + + self._folder_docs_by_id = {} + self._product_docs_by_id = {} + self._version_docs_by_id = {} + self._repre_docs_by_id = {} + + self._missing_folder_ids = set() + self._missing_product_ids = set() + self._missing_version_ids = set() + self._missing_repre_ids = set() + self._missing_docs = False + + self._inactive_folder_ids = set() + self._inactive_product_ids = set() + self._inactive_repre_ids = set() + + self._init_folder_id = None + self._init_product_name = None + self._init_repre_name = None + + self._fill_check = False + + self._project_name = controller.get_current_project_name() + self._folder_id = controller.get_current_folder_id() + + self._current_folder_btn.setEnabled(self._folder_id is not None) + + self._controller = controller + + self._items = items + self._prepare_content_data() + + def showEvent(self, event): + super(SwitchAssetDialog, self).showEvent(event) + self._show_timer.start() + + def refresh(self, init_refresh=False): + """Build the need comboboxes with content""" + if not self._fill_check and not init_refresh: + return + + self._fill_check = False + + validation_state = ValidationState() + self._folders_field.refresh() + # Set other comboboxes to empty if any document is missing or + # any folder of loaded representations is archived. 
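
The checks below cascade: products are only validated once the folder level passed, and representations only once both folder and product passed, so a single failure leaves the accept button disabled. A rough sketch of how the three flags gate each other (the `validate` helper and its boolean inputs are illustrative):

```python
class ValidationState:
    """Mirror of the dialog's validation flags (sketch)."""

    def __init__(self):
        self.folder_ok = True
        self.product_ok = True
        self.repre_ok = True

    @property
    def all_ok(self):
        return self.folder_ok and self.product_ok and self.repre_ok


def validate(state, folder_exists, product_exists, repre_exists):
    # Each level is only worth checking when everything above it passed,
    # mirroring how the dialog fills the product/representation comboboxes.
    state.folder_ok = folder_exists
    if state.folder_ok:
        state.product_ok = product_exists
    if state.folder_ok and state.product_ok:
        state.repre_ok = repre_exists
    return state


state = validate(ValidationState(), True, False, True)
print(state.folder_ok, state.product_ok, state.repre_ok, state.all_ok)
# True False True False -> the accept button stays disabled
```
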
+ self._is_folder_ok(validation_state) + if validation_state.folder_ok: + product_values = self._get_product_box_values() + self._fill_combobox(product_values, "product") + self._is_product_ok(validation_state) + + if validation_state.folder_ok and validation_state.product_ok: + repre_values = sorted(self._representations_box_values()) + self._fill_combobox(repre_values, "repre") + self._is_repre_ok(validation_state) + + # Fill comboboxes with values + self.set_labels() + + self.apply_validations(validation_state) + + self._build_loaders_menu() + + if init_refresh: + # pre select context if possible + self._folders_field.set_selected_item(self._init_folder_id) + self._products_combox.set_valid_value(self._init_product_name) + self._representations_box.set_valid_value(self._init_repre_name) + + self._fill_check = True + + def set_labels(self): + folder_label = self._folders_field.get_selected_folder_label() + product_label = self._products_combox.get_valid_value() + repre_label = self._representations_box.get_valid_value() + + default = "*No changes" + self._folder_label.setText(folder_label or default) + self._product_label.setText(product_label or default) + self._repre_label.setText(repre_label or default) + + def apply_validations(self, validation_state): + error_msg = "*Please select" + error_sheet = "border: 1px solid red;" + + product_sheet = None + repre_sheet = None + accept_state = "" + if validation_state.folder_ok is False: + self._folder_label.setText(error_msg) + elif validation_state.product_ok is False: + product_sheet = error_sheet + self._product_label.setText(error_msg) + elif validation_state.repre_ok is False: + repre_sheet = error_sheet + self._repre_label.setText(error_msg) + + if validation_state.all_ok: + accept_state = "1" + + self._folders_field.set_valid(validation_state.folder_ok) + self._products_combox.setStyleSheet(product_sheet or "") + self._representations_box.setStyleSheet(repre_sheet or "") + + self._accept_btn.setEnabled(validation_state.all_ok) + self._set_style_property(self._accept_btn, "state", accept_state) + + def find_last_versions(self, product_ids): + project_name = self._project_name + return get_last_versions( + project_name, + subset_ids=product_ids, + fields=["_id", "parent", "type"] + ) + + def _on_show_timer(self): + if self._show_counter == 2: + self._show_timer.stop() + self.refresh(True) + else: + self._show_counter += 1 + + def _prepare_content_data(self): + repre_ids = { + item["representation"] + for item in self._items + } + + project_name = self._project_name + repres = list(get_representations( + project_name, + representation_ids=repre_ids, + archived=True, + )) + repres_by_id = {str(repre["_id"]): repre for repre in repres} + + content_repre_docs_by_id = {} + inactive_repre_ids = set() + missing_repre_ids = set() + version_ids = set() + for repre_id in repre_ids: + repre_doc = repres_by_id.get(repre_id) + if repre_doc is None: + missing_repre_ids.add(repre_id) + elif repres_by_id[repre_id]["type"] == "archived_representation": + inactive_repre_ids.add(repre_id) + version_ids.add(repre_doc["parent"]) + else: + content_repre_docs_by_id[repre_id] = repre_doc + version_ids.add(repre_doc["parent"]) + + version_docs = get_versions( + project_name, + version_ids=version_ids, + hero=True + ) + content_version_docs_by_id = {} + for version_doc in version_docs: + version_id = version_doc["_id"] + content_version_docs_by_id[version_id] = version_doc + + missing_version_ids = set() + product_ids = set() + for version_id in version_ids: + 
version_doc = content_version_docs_by_id.get(version_id) + if version_doc is None: + missing_version_ids.add(version_id) + else: + product_ids.add(version_doc["parent"]) + + product_docs = get_subsets( + project_name, subset_ids=product_ids, archived=True + ) + product_docs_by_id = {sub["_id"]: sub for sub in product_docs} + + folder_ids = set() + inactive_product_ids = set() + missing_product_ids = set() + content_product_docs_by_id = {} + for product_id in product_ids: + product_doc = product_docs_by_id.get(product_id) + if product_doc is None: + missing_product_ids.add(product_id) + elif product_doc["type"] == "archived_subset": + folder_ids.add(product_doc["parent"]) + inactive_product_ids.add(product_id) + else: + folder_ids.add(product_doc["parent"]) + content_product_docs_by_id[product_id] = product_doc + + folder_docs = get_assets( + project_name, asset_ids=folder_ids, archived=True + ) + folder_docs_by_id = { + folder_doc["_id"]: folder_doc + for folder_doc in folder_docs + } + + missing_folder_ids = set() + inactive_folder_ids = set() + content_folder_docs_by_id = {} + for folder_id in folder_ids: + folder_doc = folder_docs_by_id.get(folder_id) + if folder_doc is None: + missing_folder_ids.add(folder_id) + elif folder_doc["type"] == "archived_asset": + inactive_folder_ids.add(folder_id) + else: + content_folder_docs_by_id[folder_id] = folder_doc + + # stash context values, works only for single representation + init_folder_id = None + init_product_name = None + init_repre_name = None + if len(repres) == 1: + init_repre_doc = repres[0] + init_version_doc = content_version_docs_by_id.get( + init_repre_doc["parent"]) + init_product_doc = None + init_folder_doc = None + if init_version_doc: + init_product_doc = content_product_docs_by_id.get( + init_version_doc["parent"] + ) + if init_product_doc: + init_folder_doc = content_folder_docs_by_id.get( + init_product_doc["parent"] + ) + if init_folder_doc: + init_repre_name = init_repre_doc["name"] + init_product_name = init_product_doc["name"] + init_folder_id = init_folder_doc["_id"] + + self._init_folder_id = init_folder_id + self._init_product_name = init_product_name + self._init_repre_name = init_repre_name + + self._folder_docs_by_id = content_folder_docs_by_id + self._product_docs_by_id = content_product_docs_by_id + self._version_docs_by_id = content_version_docs_by_id + self._repre_docs_by_id = content_repre_docs_by_id + + self._missing_folder_ids = missing_folder_ids + self._missing_product_ids = missing_product_ids + self._missing_version_ids = missing_version_ids + self._missing_repre_ids = missing_repre_ids + self._missing_docs = ( + bool(missing_folder_ids) + or bool(missing_version_ids) + or bool(missing_product_ids) + or bool(missing_repre_ids) + ) + + self._inactive_folder_ids = inactive_folder_ids + self._inactive_product_ids = inactive_product_ids + self._inactive_repre_ids = inactive_repre_ids + + def _combobox_value_changed(self, *args, **kwargs): + self.refresh() + + def _build_loaders_menu(self): + repre_ids = self._get_current_output_repre_ids() + loaders = self._get_loaders(repre_ids) + # Get and destroy the action group + self._accept_btn.clear_actions() + + if not loaders: + return + + # Build new action group + group = QtWidgets.QActionGroup(self._accept_btn) + + for loader in loaders: + # Label + label = getattr(loader, "label", None) + if label is None: + label = loader.__name__ + + action = group.addAction(label) + # action = QtWidgets.QAction(label) + action.setData(loader) + + # Support font-awesome icons 
using the `.icon` and `.color` + # attributes on plug-ins. + icon = getattr(loader, "icon", None) + if icon is not None: + try: + key = "fa.{0}".format(icon) + color = getattr(loader, "color", "white") + action.setIcon(qtawesome.icon(key, color=color)) + + except Exception as exc: + print("Unable to set icon for loader {}: {}".format( + loader, str(exc) + )) + + self._accept_btn.add_action(action) + + group.triggered.connect(self._on_action_clicked) + + def _on_action_clicked(self, action): + loader_plugin = action.data() + self._trigger_switch(loader_plugin) + + def _get_loaders(self, repre_ids): + repre_contexts = None + if repre_ids: + repre_contexts = get_repres_contexts(repre_ids) + + if not repre_contexts: + return list() + + available_loaders = [] + for loader_plugin in discover_loader_plugins(): + # Skip loaders without switch method + if not hasattr(loader_plugin, "switch"): + continue + + # Skip utility loaders + if ( + hasattr(loader_plugin, "is_utility") + and loader_plugin.is_utility + ): + continue + available_loaders.append(loader_plugin) + + loaders = None + for repre_context in repre_contexts.values(): + _loaders = set(loaders_from_repre_context( + available_loaders, repre_context + )) + if loaders is None: + loaders = _loaders + else: + loaders = _loaders.intersection(loaders) + + if not loaders: + break + + if loaders is None: + loaders = [] + else: + loaders = list(loaders) + + return loaders + + def _fill_combobox(self, values, combobox_type): + if combobox_type == "product": + combobox_widget = self._products_combox + elif combobox_type == "repre": + combobox_widget = self._representations_box + else: + return + selected_value = combobox_widget.get_valid_value() + + # Fill combobox + if values is not None: + combobox_widget.populate(list(sorted(values))) + if selected_value and selected_value in values: + index = None + for idx in range(combobox_widget.count()): + if selected_value == str(combobox_widget.itemText(idx)): + index = idx + break + if index is not None: + combobox_widget.setCurrentIndex(index) + + def _set_style_property(self, widget, name, value): + cur_value = widget.property(name) + if cur_value == value: + return + widget.setProperty(name, value) + widget.style().polish(widget) + + def _get_current_output_repre_ids(self): + # NOTE hero versions are not used because it is expected that + # hero version has same representations as latests + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.currentText() + selected_repre = self._representations_box.currentText() + + # Nothing is selected + # [ ] [ ] [ ] + if ( + not selected_folder_id + and not selected_product_name + and not selected_repre + ): + return list(self._repre_docs_by_id.keys()) + + # Everything is selected + # [x] [x] [x] + if selected_folder_id and selected_product_name and selected_repre: + return self._get_current_output_repre_ids_xxx( + selected_folder_id, selected_product_name, selected_repre + ) + + # [x] [x] [ ] + # If folder and product is selected + if selected_folder_id and selected_product_name: + return self._get_current_output_repre_ids_xxo( + selected_folder_id, selected_product_name + ) + + # [x] [ ] [x] + # If folder and repre is selected + if selected_folder_id and selected_repre: + return self._get_current_output_repre_ids_xox( + selected_folder_id, selected_repre + ) + + # [x] [ ] [ ] + # If folder and product is selected + if selected_folder_id: + return 
self._get_current_output_repre_ids_xoo(selected_folder_id) + + # [ ] [x] [x] + if selected_product_name and selected_repre: + return self._get_current_output_repre_ids_oxx( + selected_product_name, selected_repre + ) + + # [ ] [x] [ ] + if selected_product_name: + return self._get_current_output_repre_ids_oxo( + selected_product_name + ) + + # [ ] [ ] [x] + return self._get_current_output_repre_ids_oox(selected_repre) + + def _get_current_output_repre_ids_xxx( + self, folder_id, selected_product_name, selected_repre + ): + project_name = self._project_name + product_doc = get_subset_by_name( + project_name, + selected_product_name, + folder_id, + fields=["_id"] + ) + + product_id = product_doc["_id"] + last_versions_by_product_id = self.find_last_versions([product_id]) + version_doc = last_versions_by_product_id.get(product_id) + if not version_doc: + return [] + + repre_docs = get_representations( + project_name, + version_ids=[version_doc["_id"]], + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xxo(self, folder_id, product_name): + project_name = self._project_name + product_doc = get_subset_by_name( + project_name, + product_name, + folder_id, + fields=["_id"] + ) + if not product_doc: + return [] + + repre_names = set() + for repre_doc in self._repre_docs_by_id.values(): + repre_names.add(repre_doc["name"]) + + # TODO where to take version ids? + version_ids = [] + repre_docs = get_representations( + project_name, + representation_names=repre_names, + version_ids=version_ids, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xox(self, folder_id, selected_repre): + product_names = { + product_doc["name"] + for product_doc in self._product_docs_by_id.values() + } + + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=[folder_id], + subset_names=product_names, + fields=["_id", "name"] + ) + product_name_by_id = { + product_doc["_id"]: product_doc["name"] + for product_doc in product_docs + } + product_ids = list(product_name_by_id.keys()) + last_versions_by_product_id = self.find_last_versions(product_ids) + last_version_id_by_product_name = {} + for product_id, last_version in last_versions_by_product_id.items(): + product_name = product_name_by_id[product_id] + last_version_id_by_product_name[product_name] = ( + last_version["_id"] + ) + + repre_docs = get_representations( + project_name, + version_ids=last_version_id_by_product_name.values(), + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_xoo(self, folder_id): + project_name = self._project_name + repres_by_product_name = collections.defaultdict(set) + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + product_name = product_doc["name"] + repres_by_product_name[product_name].add(repre_doc["name"]) + + product_docs = list(get_subsets( + project_name, + asset_ids=[folder_id], + subset_names=repres_by_product_name.keys(), + fields=["_id", "name"] + )) + product_name_by_id = { + product_doc["_id"]: product_doc["name"] + for product_doc in product_docs + } + product_ids = list(product_name_by_id.keys()) + last_versions_by_product_id = self.find_last_versions(product_ids) + 
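
The `_xxx`/`_xxo`/`_xox`/... suffixes used above encode which of the three comboboxes hold a selection, in folder/product/representation order: "x" means selected, "o" means left at the placeholder. One compact way to picture that dispatch (the handler table below is a hypothetical mock, not the real methods):

```python
def combination_key(folder_id, product_name, repre_name):
    """Encode which of the three fields are set, e.g. 'xxo' or 'oox'."""
    return "".join(
        "x" if value else "o"
        for value in (folder_id, product_name, repre_name)
    )


# Hypothetical handler table mirroring the _get_current_output_repre_ids_* split.
handlers = {
    "xxx": "folder + product + representation selected",
    "xxo": "folder + product selected",
    "xox": "folder + representation selected",
    "xoo": "only folder selected",
    "oxx": "product + representation selected",
    "oxo": "only product selected",
    "oox": "only representation selected",
    "ooo": "nothing selected - keep current representations",
}

print(handlers[combination_key("folder-id", "", "exr")])  # folder + representation selected
print(handlers[combination_key(None, None, None)])        # nothing selected - keep current representations
```
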
last_version_id_by_product_name = {} + for product_id, last_version in last_versions_by_product_id.items(): + product_name = product_name_by_id[product_id] + last_version_id_by_product_name[product_name] = ( + last_version["_id"] + ) + + repre_names_by_version_id = {} + for product_name, repre_names in repres_by_product_name.items(): + version_id = last_version_id_by_product_name.get(product_name) + # This should not happen but why to crash? + if version_id is not None: + repre_names_by_version_id[version_id] = list(repre_names) + + repre_docs = get_representations( + project_name, + names_by_version_ids=repre_names_by_version_id, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oxx( + self, product_name, selected_repre + ): + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[product_name], + fields=["_id"] + ) + product_ids = [product_doc["_id"] for product_doc in product_docs] + last_versions_by_product_id = self.find_last_versions(product_ids) + last_version_ids = [ + last_version["_id"] + for last_version in last_versions_by_product_id.values() + ] + repre_docs = get_representations( + project_name, + version_ids=last_version_ids, + representation_names=[selected_repre], + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oxo(self, product_name): + project_name = self._project_name + product_docs = get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[product_name], + fields=["_id", "parent"] + ) + product_docs_by_id = { + product_doc["_id"]: product_doc + for product_doc in product_docs + } + if not product_docs: + return list() + + last_versions_by_product_id = self.find_last_versions( + product_docs_by_id.keys() + ) + + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_names_by_folder_id = collections.defaultdict(set) + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + folder_doc = self._folder_docs_by_id[product_doc["parent"]] + folder_id = folder_doc["_id"] + repre_names_by_folder_id[folder_id].add(repre_doc["name"]) + + repre_names_by_version_id = {} + for last_version_id, product_id in product_id_by_version_id.items(): + product_doc = product_docs_by_id[product_id] + folder_id = product_doc["parent"] + repre_names = repre_names_by_folder_id.get(folder_id) + if not repre_names: + continue + repre_names_by_version_id[last_version_id] = repre_names + + repre_docs = get_representations( + project_name, + names_by_version_ids=repre_names_by_version_id, + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_current_output_repre_ids_oox(self, selected_repre): + project_name = self._project_name + repre_docs = get_representations( + project_name, + representation_names=[selected_repre], + version_ids=self._version_docs_by_id.keys(), + fields=["_id"] + ) + return [repre_doc["_id"] for repre_doc in repre_docs] + + def _get_product_box_values(self): + project_name = self._project_name + selected_folder_id = self._folders_field.get_selected_folder_id() + if selected_folder_id: + 
folder_ids = [selected_folder_id] + else: + folder_ids = list(self._folder_docs_by_id.keys()) + + product_docs = get_subsets( + project_name, + asset_ids=folder_ids, + fields=["parent", "name"] + ) + + product_names_by_parent_id = collections.defaultdict(set) + for product_doc in product_docs: + product_names_by_parent_id[product_doc["parent"]].add( + product_doc["name"] + ) + + possible_product_names = None + for product_names in product_names_by_parent_id.values(): + if possible_product_names is None: + possible_product_names = product_names + else: + possible_product_names = possible_product_names.intersection( + product_names) + + if not possible_product_names: + break + + if not possible_product_names: + return [] + return list(possible_product_names) + + def _representations_box_values(self): + # NOTE hero versions are not used because it is expected that + # hero version has same representations as latests + project_name = self._project_name + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.currentText() + + # If nothing is selected + # [ ] [ ] [?] + if not selected_folder_id and not selected_product_name: + # Find all representations of selection's products + possible_repres = get_representations( + project_name, + version_ids=self._version_docs_by_id.keys(), + fields=["parent", "name"] + ) + + possible_repres_by_parent = collections.defaultdict(set) + for repre in possible_repres: + possible_repres_by_parent[repre["parent"]].add(repre["name"]) + + output_repres = None + for repre_names in possible_repres_by_parent.values(): + if output_repres is None: + output_repres = repre_names + else: + output_repres = (output_repres & repre_names) + + if not output_repres: + break + + return list(output_repres or list()) + + # [x] [x] [?] + if selected_folder_id and selected_product_name: + product_doc = get_subset_by_name( + project_name, + selected_product_name, + selected_folder_id, + fields=["_id"] + ) + + product_id = product_doc["_id"] + last_versions_by_product_id = self.find_last_versions([product_id]) + version_doc = last_versions_by_product_id.get(product_id) + repre_docs = get_representations( + project_name, + version_ids=[version_doc["_id"]], + fields=["name"] + ) + return [ + repre_doc["name"] + for repre_doc in repre_docs + ] + + # [x] [ ] [?] 
+ # If only folder is selected + if selected_folder_id: + # Filter products by names from content + product_names = { + product_doc["name"] + for product_doc in self._product_docs_by_id.values() + } + + product_docs = get_subsets( + project_name, + asset_ids=[selected_folder_id], + subset_names=product_names, + fields=["_id"] + ) + product_ids = { + product_doc["_id"] + for product_doc in product_docs + } + if not product_ids: + return list() + + last_versions_by_product_id = self.find_last_versions(product_ids) + product_id_by_version_id = {} + for product_id, last_version in ( + last_versions_by_product_id.items() + ): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_docs = list(get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + )) + if not repre_docs: + return list() + + repre_names_by_parent = collections.defaultdict(set) + for repre_doc in repre_docs: + repre_names_by_parent[repre_doc["parent"]].add( + repre_doc["name"] + ) + + available_repres = None + for repre_names in repre_names_by_parent.values(): + if available_repres is None: + available_repres = repre_names + continue + + available_repres = available_repres.intersection(repre_names) + + return list(available_repres) + + # [ ] [x] [?] + product_docs = list(get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[selected_product_name], + fields=["_id", "parent"] + )) + if not product_docs: + return list() + + product_docs_by_id = { + product_doc["_id"]: product_doc + for product_doc in product_docs + } + last_versions_by_product_id = self.find_last_versions( + product_docs_by_id.keys() + ) + + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + if not product_id_by_version_id: + return list() + + repre_docs = list( + get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + ) + ) + if not repre_docs: + return list() + + repre_names_by_folder_id = collections.defaultdict(set) + for repre_doc in repre_docs: + product_id = product_id_by_version_id[repre_doc["parent"]] + folder_id = product_docs_by_id[product_id]["parent"] + repre_names_by_folder_id[folder_id].add(repre_doc["name"]) + + available_repres = None + for repre_names in repre_names_by_folder_id.values(): + if available_repres is None: + available_repres = repre_names + continue + + available_repres = available_repres.intersection(repre_names) + + return list(available_repres) + + def _is_folder_ok(self, validation_state): + selected_folder_id = self._folders_field.get_selected_folder_id() + if ( + selected_folder_id is None + and (self._missing_docs or self._inactive_folder_ids) + ): + validation_state.folder_ok = False + + def _is_product_ok(self, validation_state): + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.get_valid_value() + + # [?] [x] [?] + # If product is selected then must be ok + if selected_product_name is not None: + return + + # [ ] [ ] [?] + if selected_folder_id is None: + # If there were archived products and folder is not selected + if self._inactive_product_ids: + validation_state.product_ok = False + return + + # [x] [ ] [?] 
+ project_name = self._project_name + product_docs = get_subsets( + project_name, asset_ids=[selected_folder_id], fields=["name"] + ) + + product_names = set( + product_doc["name"] + for product_doc in product_docs + ) + + for product_doc in self._product_docs_by_id.values(): + if product_doc["name"] not in product_names: + validation_state.product_ok = False + break + + def _is_repre_ok(self, validation_state): + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.get_valid_value() + selected_repre = self._representations_box.get_valid_value() + + # [?] [?] [x] + # If product is selected then must be ok + if selected_repre is not None: + return + + # [ ] [ ] [ ] + if selected_folder_id is None and selected_product_name is None: + if ( + self._inactive_repre_ids + or self._missing_version_ids + or self._missing_repre_ids + ): + validation_state.repre_ok = False + return + + # [x] [x] [ ] + project_name = self._project_name + if ( + selected_folder_id is not None + and selected_product_name is not None + ): + product_doc = get_subset_by_name( + project_name, + selected_product_name, + selected_folder_id, + fields=["_id"] + ) + product_id = product_doc["_id"] + last_versions_by_product_id = self.find_last_versions([product_id]) + last_version = last_versions_by_product_id.get(product_id) + if not last_version: + validation_state.repre_ok = False + return + + repre_docs = get_representations( + project_name, + version_ids=[last_version["_id"]], + fields=["name"] + ) + + repre_names = set( + repre_doc["name"] + for repre_doc in repre_docs + ) + for repre_doc in self._repre_docs_by_id.values(): + if repre_doc["name"] not in repre_names: + validation_state.repre_ok = False + break + return + + # [x] [ ] [ ] + if selected_folder_id is not None: + product_docs = list(get_subsets( + project_name, + asset_ids=[selected_folder_id], + fields=["_id", "name"] + )) + + product_name_by_id = {} + product_ids = set() + for product_doc in product_docs: + product_id = product_doc["_id"] + product_ids.add(product_id) + product_name_by_id[product_id] = product_doc["name"] + + last_versions_by_product_id = self.find_last_versions(product_ids) + + product_id_by_version_id = {} + for product_id, last_version in ( + last_versions_by_product_id.items() + ): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + repre_docs = get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + ) + repres_by_product_name = collections.defaultdict(set) + for repre_doc in repre_docs: + product_id = product_id_by_version_id[repre_doc["parent"]] + product_name = product_name_by_id[product_id] + repres_by_product_name[product_name].add(repre_doc["name"]) + + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + repre_names = repres_by_product_name[product_doc["name"]] + if repre_doc["name"] not in repre_names: + validation_state.repre_ok = False + break + return + + # [ ] [x] [ ] + # Product documents + product_docs = get_subsets( + project_name, + asset_ids=self._folder_docs_by_id.keys(), + subset_names=[selected_product_name], + fields=["_id", "name", "parent"] + ) + product_docs_by_id = {} + for product_doc in product_docs: + product_docs_by_id[product_doc["_id"]] = product_doc + + last_versions_by_product_id = self.find_last_versions( + 
product_docs_by_id.keys() + ) + product_id_by_version_id = {} + for product_id, last_version in last_versions_by_product_id.items(): + version_id = last_version["_id"] + product_id_by_version_id[version_id] = product_id + + repre_docs = get_representations( + project_name, + version_ids=product_id_by_version_id.keys(), + fields=["name", "parent"] + ) + repres_by_folder_id = collections.defaultdict(set) + for repre_doc in repre_docs: + product_id = product_id_by_version_id[repre_doc["parent"]] + folder_id = product_docs_by_id[product_id]["parent"] + repres_by_folder_id[folder_id].add(repre_doc["name"]) + + for repre_doc in self._repre_docs_by_id.values(): + version_doc = self._version_docs_by_id[repre_doc["parent"]] + product_doc = self._product_docs_by_id[version_doc["parent"]] + folder_id = product_doc["parent"] + repre_names = repres_by_folder_id[folder_id] + if repre_doc["name"] not in repre_names: + validation_state.repre_ok = False + break + + def _on_current_folder(self): + # Set initial folder as current. + folder_id = self._controller.get_current_folder_id() + if not folder_id: + return + + selected_folder_id = self._folders_field.get_selected_folder_id() + if folder_id == selected_folder_id: + return + + self._folders_field.set_selected_item(folder_id) + self._combobox_value_changed() + + def _on_accept(self): + self._trigger_switch() + + def _trigger_switch(self, loader=None): + # Use None when not a valid value or when placeholder value + selected_folder_id = self._folders_field.get_selected_folder_id() + selected_product_name = self._products_combox.get_valid_value() + selected_representation = self._representations_box.get_valid_value() + + project_name = self._project_name + if selected_folder_id: + folder_ids = {selected_folder_id} + else: + folder_ids = set(self._folder_docs_by_id.keys()) + + product_names = None + if selected_product_name: + product_names = [selected_product_name] + + product_docs = list(get_subsets( + project_name, + subset_names=product_names, + asset_ids=folder_ids + )) + product_ids = set() + product_docs_by_parent_and_name = collections.defaultdict(dict) + for product_doc in product_docs: + product_ids.add(product_doc["_id"]) + folder_id = product_doc["parent"] + name = product_doc["name"] + product_docs_by_parent_and_name[folder_id][name] = product_doc + + # versions + _version_docs = get_versions(project_name, subset_ids=product_ids) + version_docs = list(reversed( + sorted(_version_docs, key=lambda item: item["name"]) + )) + + hero_version_docs = list(get_hero_versions( + project_name, subset_ids=product_ids + )) + + version_ids = set() + version_docs_by_parent_id = {} + for version_doc in version_docs: + parent_id = version_doc["parent"] + if parent_id not in version_docs_by_parent_id: + version_ids.add(version_doc["_id"]) + version_docs_by_parent_id[parent_id] = version_doc + + hero_version_docs_by_parent_id = {} + for hero_version_doc in hero_version_docs: + version_ids.add(hero_version_doc["_id"]) + parent_id = hero_version_doc["parent"] + hero_version_docs_by_parent_id[parent_id] = hero_version_doc + + repre_docs = get_representations( + project_name, version_ids=version_ids + ) + repre_docs_by_parent_id_by_name = collections.defaultdict(dict) + for repre_doc in repre_docs: + parent_id = repre_doc["parent"] + name = repre_doc["name"] + repre_docs_by_parent_id_by_name[parent_id][name] = repre_doc + + for container in self._items: + self._switch_container( + container, + loader, + selected_folder_id, + selected_product_name, + 
selected_representation, + product_docs_by_parent_and_name, + version_docs_by_parent_id, + hero_version_docs_by_parent_id, + repre_docs_by_parent_id_by_name, + ) + + self.switched.emit() + + self.close() + + def _switch_container( + self, + container, + loader, + selected_folder_id, + product_name, + selected_representation, + product_docs_by_parent_and_name, + version_docs_by_parent_id, + hero_version_docs_by_parent_id, + repre_docs_by_parent_id_by_name, + ): + container_repre_id = container["representation"] + container_repre = self._repre_docs_by_id[container_repre_id] + container_repre_name = container_repre["name"] + container_version_id = container_repre["parent"] + + container_version = self._version_docs_by_id[container_version_id] + + container_product_id = container_version["parent"] + container_product = self._product_docs_by_id[container_product_id] + + if selected_folder_id: + folder_id = selected_folder_id + else: + folder_id = container_product["parent"] + + products_by_name = product_docs_by_parent_and_name[folder_id] + if product_name: + product_doc = products_by_name[product_name] + else: + product_doc = products_by_name[container_product["name"]] + + repre_doc = None + product_id = product_doc["_id"] + if container_version["type"] == "hero_version": + hero_version = hero_version_docs_by_parent_id.get( + product_id + ) + if hero_version: + _repres = repre_docs_by_parent_id_by_name.get( + hero_version["_id"] + ) + if selected_representation: + repre_doc = _repres.get(selected_representation) + else: + repre_doc = _repres.get(container_repre_name) + + if not repre_doc: + version_doc = version_docs_by_parent_id[product_id] + version_id = version_doc["_id"] + repres_by_name = repre_docs_by_parent_id_by_name[version_id] + if selected_representation: + repre_doc = repres_by_name[selected_representation] + else: + repre_doc = repres_by_name[container_repre_name] + + error = None + try: + switch_container(container, repre_doc, loader) + except ( + LoaderSwitchNotImplementedError, + IncompatibleLoaderError, + LoaderNotFoundError, + ) as exc: + error = str(exc) + except Exception: + error = ( + "Switch asset failed. " + "Search console log for more details." + ) + if error is not None: + log.warning(( + "Couldn't switch asset." + "See traceback for more information." + ), exc_info=True) + dialog = QtWidgets.QMessageBox(self) + dialog.setWindowTitle("Switch asset failed") + dialog.setText(error) + dialog.exec_() diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py b/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py new file mode 100644 index 0000000000..699c62371a --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/folders_input.py @@ -0,0 +1,307 @@ +from qtpy import QtWidgets, QtCore +import qtawesome + +from openpype.tools.utils import ( + PlaceholderLineEdit, + BaseClickableFrame, + set_style_property, +) +from openpype.tools.ayon_utils.widgets import FoldersWidget + +NOT_SET = object() + + +class ClickableLineEdit(QtWidgets.QLineEdit): + """QLineEdit capturing left mouse click. + + Triggers `clicked` signal on mouse click. 
+ """ + clicked = QtCore.Signal() + + def __init__(self, *args, **kwargs): + super(ClickableLineEdit, self).__init__(*args, **kwargs) + self.setReadOnly(True) + self._mouse_pressed = False + + def mousePressEvent(self, event): + if event.button() == QtCore.Qt.LeftButton: + self._mouse_pressed = True + event.accept() + + def mouseMoveEvent(self, event): + event.accept() + + def mouseReleaseEvent(self, event): + if self._mouse_pressed: + self._mouse_pressed = False + if self.rect().contains(event.pos()): + self.clicked.emit() + event.accept() + + def mouseDoubleClickEvent(self, event): + event.accept() + + +class ControllerWrap: + def __init__(self, controller): + self._controller = controller + self._selected_folder_id = None + + def emit_event(self, *args, **kwargs): + self._controller.emit_event(*args, **kwargs) + + def register_event_callback(self, *args, **kwargs): + self._controller.register_event_callback(*args, **kwargs) + + def get_current_project_name(self): + return self._controller.get_current_project_name() + + def get_folder_items(self, *args, **kwargs): + return self._controller.get_folder_items(*args, **kwargs) + + def set_selected_folder(self, folder_id): + self._selected_folder_id = folder_id + + def get_selected_folder_id(self): + return self._selected_folder_id + + +class FoldersDialog(QtWidgets.QDialog): + """Dialog to select asset for a context of instance.""" + + def __init__(self, controller, parent): + super(FoldersDialog, self).__init__(parent) + self.setWindowTitle("Select folder") + + filter_input = PlaceholderLineEdit(self) + filter_input.setPlaceholderText("Filter folders..") + + controller_wrap = ControllerWrap(controller) + folders_widget = FoldersWidget(controller_wrap, self) + folders_widget.set_deselectable(True) + + ok_btn = QtWidgets.QPushButton("OK", self) + cancel_btn = QtWidgets.QPushButton("Cancel", self) + + btns_layout = QtWidgets.QHBoxLayout() + btns_layout.addStretch(1) + btns_layout.addWidget(ok_btn) + btns_layout.addWidget(cancel_btn) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(filter_input, 0) + layout.addWidget(folders_widget, 1) + layout.addLayout(btns_layout, 0) + + folders_widget.double_clicked.connect(self._on_ok_clicked) + folders_widget.refreshed.connect(self._on_folders_refresh) + filter_input.textChanged.connect(self._on_filter_change) + ok_btn.clicked.connect(self._on_ok_clicked) + cancel_btn.clicked.connect(self._on_cancel_clicked) + + self._filter_input = filter_input + self._ok_btn = ok_btn + self._cancel_btn = cancel_btn + + self._folders_widget = folders_widget + self._controller_wrap = controller_wrap + + # Set selected folder only when user confirms the dialog + self._selected_folder_id = None + self._selected_folder_label = None + + self._folder_id_to_select = NOT_SET + + self._first_show = True + self._default_height = 500 + + def showEvent(self, event): + """Refresh asset model on show.""" + super(FoldersDialog, self).showEvent(event) + if self._first_show: + self._first_show = False + self._on_first_show() + + def refresh(self): + project_name = self._controller_wrap.get_current_project_name() + self._folders_widget.set_project_name(project_name) + + def _on_first_show(self): + center = self.rect().center() + size = self.size() + size.setHeight(self._default_height) + + self.resize(size) + new_pos = self.mapToGlobal(center) + new_pos.setX(new_pos.x() - int(self.width() / 2)) + new_pos.setY(new_pos.y() - int(self.height() / 2)) + self.move(new_pos) + + def _on_folders_refresh(self): + if 
self._folder_id_to_select is NOT_SET: + return + self._folders_widget.set_selected_folder(self._folder_id_to_select) + self._folder_id_to_select = NOT_SET + + def _on_filter_change(self, text): + """Trigger change of filter of folders.""" + + self._folders_widget.set_name_filter(text) + + def _on_cancel_clicked(self): + self.done(0) + + def _on_ok_clicked(self): + self._selected_folder_id = ( + self._folders_widget.get_selected_folder_id() + ) + self._selected_folder_label = ( + self._folders_widget.get_selected_folder_label() + ) + self.done(1) + + def set_selected_folder(self, folder_id): + """Change preselected folder before showing the dialog. + + This also resets model and clean filter. + """ + + if ( + self._folders_widget.is_refreshing + or self._folders_widget.get_project_name() is None + ): + self._folder_id_to_select = folder_id + else: + self._folders_widget.set_selected_folder(folder_id) + + def get_selected_folder_id(self): + """Get selected folder id. + + Returns: + Union[str, None]: Selected folder id or None if nothing + is selected. + """ + return self._selected_folder_id + + def get_selected_folder_label(self): + return self._selected_folder_label + + +class FoldersField(BaseClickableFrame): + """Field where asset name of selected instance/s is showed. + + Click on the field will trigger `FoldersDialog`. + """ + value_changed = QtCore.Signal() + + def __init__(self, controller, parent): + super(FoldersField, self).__init__(parent) + self.setObjectName("AssetNameInputWidget") + + # Don't use 'self' for parent! + # - this widget has specific styles + dialog = FoldersDialog(controller, parent) + + name_input = ClickableLineEdit(self) + name_input.setObjectName("AssetNameInput") + + icon = qtawesome.icon("fa.window-maximize", color="white") + icon_btn = QtWidgets.QPushButton(self) + icon_btn.setIcon(icon) + icon_btn.setObjectName("AssetNameInputButton") + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.setSpacing(0) + layout.addWidget(name_input, 1) + layout.addWidget(icon_btn, 0) + + # Make sure all widgets are vertically extended to highest widget + for widget in ( + name_input, + icon_btn + ): + w_size_policy = widget.sizePolicy() + w_size_policy.setVerticalPolicy( + QtWidgets.QSizePolicy.MinimumExpanding) + widget.setSizePolicy(w_size_policy) + + size_policy = self.sizePolicy() + size_policy.setVerticalPolicy(QtWidgets.QSizePolicy.Maximum) + self.setSizePolicy(size_policy) + + name_input.clicked.connect(self._mouse_release_callback) + icon_btn.clicked.connect(self._mouse_release_callback) + dialog.finished.connect(self._on_dialog_finish) + + self._controller = controller + self._dialog = dialog + self._name_input = name_input + self._icon_btn = icon_btn + + self._selected_folder_id = None + self._selected_folder_label = None + self._selected_items = [] + self._is_valid = True + + def refresh(self): + self._dialog.refresh() + + def is_valid(self): + """Is asset valid.""" + return self._is_valid + + def get_selected_folder_id(self): + """Selected asset names.""" + return self._selected_folder_id + + def get_selected_folder_label(self): + return self._selected_folder_label + + def set_text(self, text): + """Set text in text field. + + Does not change selected items (assets). + """ + self._name_input.setText(text) + + def set_valid(self, is_valid): + state = "" + if not is_valid: + state = "invalid" + self._set_state_property(state) + + def set_selected_item(self, folder_id=None, folder_label=None): + """Set folder for selection. 
+ + Args: + folder_id (Optional[str]): Folder id to select. + folder_label (Optional[str]): Folder label. + """ + + self._selected_folder_id = folder_id + if not folder_id: + folder_label = None + elif folder_id and not folder_label: + folder_label = self._controller.get_folder_label(folder_id) + self._selected_folder_label = folder_label + self.set_text(folder_label if folder_label else "") + + def _on_dialog_finish(self, result): + if not result: + return + + folder_id = self._dialog.get_selected_folder_id() + folder_label = self._dialog.get_selected_folder_label() + self.set_selected_item(folder_id, folder_label) + + self.value_changed.emit() + + def _mouse_release_callback(self): + self._dialog.set_selected_folder(self._selected_folder_id) + self._dialog.open() + + def _set_state_property(self, state): + set_style_property(self, "state", state) + set_style_property(self._name_input, "state", state) + set_style_property(self._icon_btn, "state", state) diff --git a/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py b/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py new file mode 100644 index 0000000000..50a49e0ce1 --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/switch_dialog/widgets.py @@ -0,0 +1,94 @@ +from qtpy import QtWidgets, QtCore + +from openpype import style + + +class ButtonWithMenu(QtWidgets.QToolButton): + def __init__(self, parent=None): + super(ButtonWithMenu, self).__init__(parent) + + self.setObjectName("ButtonWithMenu") + + self.setPopupMode(QtWidgets.QToolButton.MenuButtonPopup) + menu = QtWidgets.QMenu(self) + + self.setMenu(menu) + + self._menu = menu + self._actions = [] + + def menu(self): + return self._menu + + def clear_actions(self): + if self._menu is not None: + self._menu.clear() + self._actions = [] + + def add_action(self, action): + self._actions.append(action) + self._menu.addAction(action) + + def _on_action_trigger(self): + action = self.sender() + if action not in self._actions: + return + action.trigger() + + +class SearchComboBox(QtWidgets.QComboBox): + """Searchable ComboBox with empty placeholder value as first value""" + + def __init__(self, parent): + super(SearchComboBox, self).__init__(parent) + + self.setEditable(True) + self.setInsertPolicy(QtWidgets.QComboBox.NoInsert) + + combobox_delegate = QtWidgets.QStyledItemDelegate(self) + self.setItemDelegate(combobox_delegate) + + completer = self.completer() + completer.setCompletionMode( + QtWidgets.QCompleter.PopupCompletion + ) + completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive) + + completer_view = completer.popup() + completer_view.setObjectName("CompleterView") + completer_delegate = QtWidgets.QStyledItemDelegate(completer_view) + completer_view.setItemDelegate(completer_delegate) + completer_view.setStyleSheet(style.load_stylesheet()) + + self._combobox_delegate = combobox_delegate + + self._completer_delegate = completer_delegate + self._completer = completer + + def set_placeholder(self, placeholder): + self.lineEdit().setPlaceholderText(placeholder) + + def populate(self, items): + self.clear() + self.addItems([""]) # ensure first item is placeholder + self.addItems(items) + + def get_valid_value(self): + """Return the current text if it's a valid value else None + + Note: The empty placeholder value is valid and returns as "" + + """ + + text = self.currentText() + lookup = set(self.itemText(i) for i in range(self.count())) + if text not in lookup: + return None + + return text or None + + def set_valid_value(self, value): + """Try to locate 'value' and 
pre-select it in dropdown.""" + index = self.findText(value) + if index > -1: + self.setCurrentIndex(index) diff --git a/openpype/tools/ayon_sceneinventory/view.py b/openpype/tools/ayon_sceneinventory/view.py new file mode 100644 index 0000000000..039b498b1b --- /dev/null +++ b/openpype/tools/ayon_sceneinventory/view.py @@ -0,0 +1,825 @@ +import uuid +import collections +import logging +import itertools +from functools import partial + +from qtpy import QtWidgets, QtCore +import qtawesome + +from openpype.client import ( + get_version_by_id, + get_versions, + get_hero_versions, + get_representation_by_id, + get_representations, +) +from openpype import style +from openpype.pipeline import ( + HeroVersionType, + update_container, + remove_container, + discover_inventory_actions, +) +from openpype.tools.utils.lib import ( + iter_model_rows, + format_version +) + +from .switch_dialog import SwitchAssetDialog +from .model import InventoryModel + + +DEFAULT_COLOR = "#fb9c15" + +log = logging.getLogger("SceneInventory") + + +class SceneInventoryView(QtWidgets.QTreeView): + data_changed = QtCore.Signal() + hierarchy_view_changed = QtCore.Signal(bool) + + def __init__(self, controller, parent): + super(SceneInventoryView, self).__init__(parent=parent) + + # view settings + self.setIndentation(12) + self.setAlternatingRowColors(True) + self.setSortingEnabled(True) + self.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) + self.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) + + self.customContextMenuRequested.connect(self._show_right_mouse_menu) + + self._hierarchy_view = False + self._selected = None + + self._controller = controller + + def _set_hierarchy_view(self, enabled): + if enabled == self._hierarchy_view: + return + self._hierarchy_view = enabled + self.hierarchy_view_changed.emit(enabled) + + def _enter_hierarchy(self, items): + self._selected = set(i["objectName"] for i in items) + self._set_hierarchy_view(True) + self.data_changed.emit() + self.expandToDepth(1) + self.setStyleSheet(""" + QTreeView { + border-color: #fb9c15; + } + """) + + def _leave_hierarchy(self): + self._set_hierarchy_view(False) + self.data_changed.emit() + self.setStyleSheet("QTreeView {}") + + def _build_item_menu_for_selection(self, items, menu): + # Exclude items that are "NOT FOUND" since setting versions, updating + # and removal won't work for those items. 
+ items = [item for item in items if not item.get("isNotFound")] + if not items: + return + + # An item might not have a representation, for example when an item + # is listed as "NOT FOUND" + repre_ids = set() + for item in items: + repre_id = item["representation"] + try: + uuid.UUID(repre_id) + repre_ids.add(repre_id) + except ValueError: + pass + + project_name = self._controller.get_current_project_name() + repre_docs = get_representations( + project_name, representation_ids=repre_ids, fields=["parent"] + ) + + version_ids = { + repre_doc["parent"] + for repre_doc in repre_docs + } + + loaded_versions = get_versions( + project_name, version_ids=version_ids, hero=True + ) + + loaded_hero_versions = [] + versions_by_parent_id = collections.defaultdict(list) + subset_ids = set() + for version in loaded_versions: + if version["type"] == "hero_version": + loaded_hero_versions.append(version) + else: + parent_id = version["parent"] + versions_by_parent_id[parent_id].append(version) + subset_ids.add(parent_id) + + all_versions = get_versions( + project_name, subset_ids=subset_ids, hero=True + ) + hero_versions = [] + versions = [] + for version in all_versions: + if version["type"] == "hero_version": + hero_versions.append(version) + else: + versions.append(version) + + has_loaded_hero_versions = len(loaded_hero_versions) > 0 + has_available_hero_version = len(hero_versions) > 0 + has_outdated = False + + for version in versions: + parent_id = version["parent"] + current_versions = versions_by_parent_id[parent_id] + for current_version in current_versions: + if current_version["name"] < version["name"]: + has_outdated = True + break + + if has_outdated: + break + + switch_to_versioned = None + if has_loaded_hero_versions: + def _on_switch_to_versioned(items): + repre_ids = { + item["representation"] + for item in items + } + + repre_docs = get_representations( + project_name, + representation_ids=repre_ids, + fields=["parent"] + ) + + version_ids = set() + version_id_by_repre_id = {} + for repre_doc in repre_docs: + version_id = repre_doc["parent"] + repre_id = str(repre_doc["_id"]) + version_id_by_repre_id[repre_id] = version_id + version_ids.add(version_id) + + hero_versions = get_hero_versions( + project_name, + version_ids=version_ids, + fields=["version_id"] + ) + + hero_src_version_ids = set() + for hero_version in hero_versions: + version_id = hero_version["version_id"] + hero_src_version_ids.add(version_id) + hero_version_id = hero_version["_id"] + for _repre_id, current_version_id in ( + version_id_by_repre_id.items() + ): + if current_version_id == hero_version_id: + version_id_by_repre_id[_repre_id] = version_id + + version_docs = get_versions( + project_name, + version_ids=hero_src_version_ids, + fields=["name"] + ) + version_name_by_id = {} + for version_doc in version_docs: + version_name_by_id[version_doc["_id"]] = \ + version_doc["name"] + + # Specify version per item to update to + update_items = [] + update_versions = [] + for item in items: + repre_id = item["representation"] + version_id = version_id_by_repre_id.get(repre_id) + version_name = version_name_by_id.get(version_id) + if version_name is not None: + update_items.append(item) + update_versions.append(version_name) + self._update_containers(update_items, update_versions) + + update_icon = qtawesome.icon( + "fa.asterisk", + color=DEFAULT_COLOR + ) + switch_to_versioned = QtWidgets.QAction( + update_icon, + "Switch to versioned", + menu + ) + switch_to_versioned.triggered.connect( + lambda: 
_on_switch_to_versioned(items) + ) + + update_to_latest_action = None + if has_outdated or has_loaded_hero_versions: + update_icon = qtawesome.icon( + "fa.angle-double-up", + color=DEFAULT_COLOR + ) + update_to_latest_action = QtWidgets.QAction( + update_icon, + "Update to latest", + menu + ) + update_to_latest_action.triggered.connect( + lambda: self._update_containers(items, version=-1) + ) + + change_to_hero = None + if has_available_hero_version: + # TODO change icon + change_icon = qtawesome.icon( + "fa.asterisk", + color="#00b359" + ) + change_to_hero = QtWidgets.QAction( + change_icon, + "Change to hero", + menu + ) + change_to_hero.triggered.connect( + lambda: self._update_containers(items, + version=HeroVersionType(-1)) + ) + + # set version + set_version_icon = qtawesome.icon("fa.hashtag", color=DEFAULT_COLOR) + set_version_action = QtWidgets.QAction( + set_version_icon, + "Set version", + menu + ) + set_version_action.triggered.connect( + lambda: self._show_version_dialog(items)) + + # switch folder + switch_folder_icon = qtawesome.icon("fa.sitemap", color=DEFAULT_COLOR) + switch_folder_action = QtWidgets.QAction( + switch_folder_icon, + "Switch Folder", + menu + ) + switch_folder_action.triggered.connect( + lambda: self._show_switch_dialog(items)) + + # remove + remove_icon = qtawesome.icon("fa.remove", color=DEFAULT_COLOR) + remove_action = QtWidgets.QAction(remove_icon, "Remove items", menu) + remove_action.triggered.connect( + lambda: self._show_remove_warning_dialog(items)) + + # add the actions + if switch_to_versioned: + menu.addAction(switch_to_versioned) + + if update_to_latest_action: + menu.addAction(update_to_latest_action) + + if change_to_hero: + menu.addAction(change_to_hero) + + menu.addAction(set_version_action) + menu.addAction(switch_folder_action) + + menu.addSeparator() + + menu.addAction(remove_action) + + self._handle_sync_server(menu, repre_ids) + + def _handle_sync_server(self, menu, repre_ids): + """Adds actions for download/upload when SyncServer is enabled + + Args: + menu (OptionMenu) + repre_ids (list) of object_ids + + Returns: + (OptionMenu) + """ + + if not self._controller.is_sync_server_enabled(): + return + + menu.addSeparator() + + download_icon = qtawesome.icon("fa.download", color=DEFAULT_COLOR) + download_active_action = QtWidgets.QAction( + download_icon, + "Download", + menu + ) + download_active_action.triggered.connect( + lambda: self._add_sites(repre_ids, "active_site")) + + upload_icon = qtawesome.icon("fa.upload", color=DEFAULT_COLOR) + upload_remote_action = QtWidgets.QAction( + upload_icon, + "Upload", + menu + ) + upload_remote_action.triggered.connect( + lambda: self._add_sites(repre_ids, "remote_site")) + + menu.addAction(download_active_action) + menu.addAction(upload_remote_action) + + def _add_sites(self, repre_ids, site_type): + """(Re)sync all 'repre_ids' to specific site. + + It checks if opposite site has fully available content to limit + accidents. (ReSync active when no remote >> losing active content) + + Args: + repre_ids (list) + site_type (Literal[active_site, remote_site]): Site type. 
+ """ + + self._controller.resync_representations(repre_ids, site_type) + + self.data_changed.emit() + + def _build_item_menu(self, items=None): + """Create menu for the selected items""" + + if not items: + items = [] + + menu = QtWidgets.QMenu(self) + + # add the actions + self._build_item_menu_for_selection(items, menu) + + # These two actions should be able to work without selection + # expand all items + expandall_action = QtWidgets.QAction(menu, text="Expand all items") + expandall_action.triggered.connect(self.expandAll) + + # collapse all items + collapse_action = QtWidgets.QAction(menu, text="Collapse all items") + collapse_action.triggered.connect(self.collapseAll) + + menu.addAction(expandall_action) + menu.addAction(collapse_action) + + custom_actions = self._get_custom_actions(containers=items) + if custom_actions: + submenu = QtWidgets.QMenu("Actions", self) + for action in custom_actions: + color = action.color or DEFAULT_COLOR + icon = qtawesome.icon("fa.%s" % action.icon, color=color) + action_item = QtWidgets.QAction(icon, action.label, submenu) + action_item.triggered.connect( + partial(self._process_custom_action, action, items)) + + submenu.addAction(action_item) + + menu.addMenu(submenu) + + # go back to flat view + back_to_flat_action = None + if self._hierarchy_view: + back_to_flat_icon = qtawesome.icon("fa.list", color=DEFAULT_COLOR) + back_to_flat_action = QtWidgets.QAction( + back_to_flat_icon, + "Back to Full-View", + menu + ) + back_to_flat_action.triggered.connect(self._leave_hierarchy) + + # send items to hierarchy view + enter_hierarchy_icon = qtawesome.icon("fa.indent", color="#d8d8d8") + enter_hierarchy_action = QtWidgets.QAction( + enter_hierarchy_icon, + "Cherry-Pick (Hierarchy)", + menu + ) + enter_hierarchy_action.triggered.connect( + lambda: self._enter_hierarchy(items)) + + if items: + menu.addAction(enter_hierarchy_action) + + if back_to_flat_action is not None: + menu.addAction(back_to_flat_action) + + return menu + + def _get_custom_actions(self, containers): + """Get the registered Inventory Actions + + Args: + containers(list): collection of containers + + Returns: + list: collection of filter and initialized actions + """ + + def sorter(Plugin): + """Sort based on order attribute of the plugin""" + return Plugin.order + + # Fedd an empty dict if no selection, this will ensure the compat + # lookup always work, so plugin can interact with Scene Inventory + # reversely. + containers = containers or [dict()] + + # Check which action will be available in the menu + Plugins = discover_inventory_actions() + compatible = [p() for p in Plugins if + any(p.is_compatible(c) for c in containers)] + + return sorted(compatible, key=sorter) + + def _process_custom_action(self, action, containers): + """Run action and if results are returned positive update the view + + If the result is list or dict, will select view items by the result. 
+ + Args: + action (InventoryAction): Inventory Action instance + containers (list): Data of currently selected items + + Returns: + None + """ + + result = action.process(containers) + if result: + self.data_changed.emit() + + if isinstance(result, (list, set)): + self._select_items_by_action(result) + + if isinstance(result, dict): + self._select_items_by_action( + result["objectNames"], result["options"] + ) + + def _select_items_by_action(self, object_names, options=None): + """Select view items by the result of action + + Args: + object_names (list or set): A list/set of container object name + options (dict): GUI operation options. + + Returns: + None + + """ + options = options or dict() + + if options.get("clear", True): + self.clearSelection() + + object_names = set(object_names) + if ( + self._hierarchy_view + and not self._selected.issuperset(object_names) + ): + # If any container not in current cherry-picked view, update + # view before selecting them. + self._selected.update(object_names) + self.data_changed.emit() + + model = self.model() + selection_model = self.selectionModel() + + select_mode = { + "select": QtCore.QItemSelectionModel.Select, + "deselect": QtCore.QItemSelectionModel.Deselect, + "toggle": QtCore.QItemSelectionModel.Toggle, + }[options.get("mode", "select")] + + for index in iter_model_rows(model, 0): + item = index.data(InventoryModel.ItemRole) + if item.get("isGroupNode"): + continue + + name = item.get("objectName") + if name in object_names: + self.scrollTo(index) # Ensure item is visible + flags = select_mode | QtCore.QItemSelectionModel.Rows + selection_model.select(index, flags) + + object_names.remove(name) + + if len(object_names) == 0: + break + + def _show_right_mouse_menu(self, pos): + """Display the menu when at the position of the item clicked""" + + globalpos = self.viewport().mapToGlobal(pos) + + if not self.selectionModel().hasSelection(): + print("No selection") + # Build menu without selection, feed an empty list + menu = self._build_item_menu() + menu.exec_(globalpos) + return + + active = self.currentIndex() # index under mouse + active = active.sibling(active.row(), 0) # get first column + + # move index under mouse + indices = self.get_indices() + if active in indices: + indices.remove(active) + + indices.append(active) + + # Extend to the sub-items + all_indices = self._extend_to_children(indices) + items = [dict(i.data(InventoryModel.ItemRole)) for i in all_indices + if i.parent().isValid()] + + if self._hierarchy_view: + # Ensure no group item + items = [n for n in items if not n.get("isGroupNode")] + + menu = self._build_item_menu(items) + menu.exec_(globalpos) + + def get_indices(self): + """Get the selected rows""" + selection_model = self.selectionModel() + return selection_model.selectedRows() + + def _extend_to_children(self, indices): + """Extend the indices to the children indices. + + Top-level indices are extended to its children indices. Sub-items + are kept as is. + + Args: + indices (list): The indices to extend. 
+ + Returns: + list: The children indices + + """ + def get_children(i): + model = i.model() + rows = model.rowCount(parent=i) + for row in range(rows): + child = model.index(row, 0, parent=i) + yield child + + subitems = set() + for i in indices: + valid_parent = i.parent().isValid() + if valid_parent and i not in subitems: + subitems.add(i) + + if self._hierarchy_view: + # Assume this is a group item + for child in get_children(i): + subitems.add(child) + else: + # is top level item + for child in get_children(i): + subitems.add(child) + + return list(subitems) + + def _show_version_dialog(self, items): + """Create a dialog with the available versions for the selected file + + Args: + items (list): list of items to run the "set_version" for + + Returns: + None + """ + + active = items[-1] + + project_name = self._controller.get_current_project_name() + # Get available versions for active representation + repre_doc = get_representation_by_id( + project_name, + active["representation"], + fields=["parent"] + ) + + repre_version_doc = get_version_by_id( + project_name, + repre_doc["parent"], + fields=["parent"] + ) + + version_docs = list(get_versions( + project_name, + subset_ids=[repre_version_doc["parent"]], + hero=True + )) + hero_version = None + standard_versions = [] + for version_doc in version_docs: + if version_doc["type"] == "hero_version": + hero_version = version_doc + else: + standard_versions.append(version_doc) + versions = list(reversed( + sorted(standard_versions, key=lambda item: item["name"]) + )) + if hero_version: + _version_id = hero_version["version_id"] + for _version in versions: + if _version["_id"] != _version_id: + continue + + hero_version["name"] = HeroVersionType( + _version["name"] + ) + hero_version["data"] = _version["data"] + break + + # Get index among the listed versions + current_item = None + current_version = active["version"] + if isinstance(current_version, HeroVersionType): + current_item = hero_version + else: + for version in versions: + if version["name"] == current_version: + current_item = version + break + + all_versions = [] + if hero_version: + all_versions.append(hero_version) + all_versions.extend(versions) + + if current_item: + index = all_versions.index(current_item) + else: + index = 0 + + versions_by_label = dict() + labels = [] + for version in all_versions: + is_hero = version["type"] == "hero_version" + label = format_version(version["name"], is_hero) + labels.append(label) + versions_by_label[label] = version["name"] + + label, state = QtWidgets.QInputDialog.getItem( + self, + "Set version..", + "Set version number to", + labels, + current=index, + editable=False + ) + if not state: + return + + if label: + version = versions_by_label[label] + self._update_containers(items, version) + + def _show_switch_dialog(self, items): + """Display Switch dialog""" + dialog = SwitchAssetDialog(self._controller, self, items) + dialog.switched.connect(self.data_changed.emit) + dialog.show() + + def _show_remove_warning_dialog(self, items): + """Prompt a dialog to inform the user the action will remove items""" + + accept = QtWidgets.QMessageBox.Ok + buttons = accept | QtWidgets.QMessageBox.Cancel + + state = QtWidgets.QMessageBox.question( + self, + "Are you sure?", + "Are you sure you want to remove {} item(s)".format(len(items)), + buttons=buttons, + defaultButton=accept + ) + + if state != accept: + return + + for item in items: + remove_container(item) + self.data_changed.emit() + + def _show_version_error_dialog(self, version, items): + 
"""Shows QMessageBox when version switch doesn't work + + Args: + version: str or int or None + """ + if version == -1: + version_str = "latest" + elif isinstance(version, HeroVersionType): + version_str = "hero" + elif isinstance(version, int): + version_str = "v{:03d}".format(version) + else: + version_str = version + + dialog = QtWidgets.QMessageBox(self) + dialog.setIcon(QtWidgets.QMessageBox.Warning) + dialog.setStyleSheet(style.load_stylesheet()) + dialog.setWindowTitle("Update failed") + + switch_btn = dialog.addButton( + "Switch Folder", + QtWidgets.QMessageBox.ActionRole + ) + switch_btn.clicked.connect(lambda: self._show_switch_dialog(items)) + + dialog.addButton(QtWidgets.QMessageBox.Cancel) + + msg = ( + "Version update to '{}' failed as representation doesn't exist." + "\n\nPlease update to version with a valid representation" + " OR \n use 'Switch Folder' button to change folder." + ).format(version_str) + dialog.setText(msg) + dialog.exec_() + + def update_all(self): + """Update all items that are currently 'outdated' in the view""" + # Get the source model through the proxy model + model = self.model().sourceModel() + + # Get all items from outdated groups + outdated_items = [] + for index in iter_model_rows(model, + column=0, + include_root=False): + item = index.data(model.ItemRole) + + if not item.get("isGroupNode"): + continue + + # Only the group nodes contain the "highest_version" data and as + # such we find only the groups and take its children. + if not model.outdated(item): + continue + + # Collect all children which we want to update + children = item.children() + outdated_items.extend(children) + + if not outdated_items: + log.info("Nothing to update.") + return + + # Trigger update to latest + self._update_containers(outdated_items, version=-1) + + def _update_containers(self, items, version): + """Helper to update items to given version (or version per item) + + If at least one item is specified this will always try to refresh + the inventory even if errors occurred on any of the items. + + Arguments: + items (list): Items to update + version (int or list): Version to set to. + This can be a list specifying a version for each item. + Like `update_container` version -1 sets the latest version + and HeroTypeVersion instances set the hero version. + + """ + + if isinstance(version, (list, tuple)): + # We allow a unique version to be specified per item. 
In that case
+            # the length must match with the items
+            assert len(items) == len(version), (
+                "Number of items mismatches number of versions: "
+                "{} items - {} versions".format(len(items), len(version))
+            )
+            versions = version
+        else:
+            # Repeat the same version infinitely
+            versions = itertools.repeat(version)
+
+        # Trigger update to latest
+        try:
+            for item, item_version in zip(items, versions):
+                try:
+                    update_container(item, item_version)
+                except AssertionError:
+                    self._show_version_error_dialog(item_version, [item])
+                    log.warning("Update failed", exc_info=True)
+        finally:
+            # Always update the scene inventory view, even if errors occurred
+            self.data_changed.emit()
diff --git a/openpype/tools/ayon_sceneinventory/window.py b/openpype/tools/ayon_sceneinventory/window.py
new file mode 100644
index 0000000000..427bf4c50d
--- /dev/null
+++ b/openpype/tools/ayon_sceneinventory/window.py
@@ -0,0 +1,200 @@
+from qtpy import QtWidgets, QtCore, QtGui
+import qtawesome
+
+from openpype import style, resources
+from openpype.tools.utils.delegates import VersionDelegate
+from openpype.tools.utils.lib import (
+    preserve_expanded_rows,
+    preserve_selection,
+)
+from openpype.tools.ayon_sceneinventory import SceneInventoryController
+
+from .model import (
+    InventoryModel,
+    FilterProxyModel
+)
+from .view import SceneInventoryView
+
+
+class ControllerVersionDelegate(VersionDelegate):
+    """Version delegate that uses a controller to get the project name.
+
+    The original VersionDelegate uses an 'AvalonMongoDB' object instead. Don't
+    worry about the variable name; the object is stored in the '_dbcon' attribute.
+    """
+
+    def get_project_name(self):
+        return self._dbcon.get_current_project_name()
+
+
+class SceneInventoryWindow(QtWidgets.QDialog):
+    """Scene Inventory window"""
+
+    def __init__(self, controller=None, parent=None):
+        super(SceneInventoryWindow, self).__init__(parent)
+
+        if controller is None:
+            controller = SceneInventoryController()
+
+        project_name = controller.get_current_project_name()
+        icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
+        self.setWindowIcon(icon)
+        self.setWindowTitle("Scene Inventory - {}".format(project_name))
+        self.setObjectName("SceneInventory")
+
+        self.resize(1100, 480)
+
+        # region control
+
+        filter_label = QtWidgets.QLabel("Search", self)
+        text_filter = QtWidgets.QLineEdit(self)
+
+        outdated_only_checkbox = QtWidgets.QCheckBox(
+            "Filter to outdated", self
+        )
+        outdated_only_checkbox.setToolTip("Show outdated files only")
+        outdated_only_checkbox.setChecked(False)
+
+        icon = qtawesome.icon("fa.arrow-up", color="white")
+        update_all_button = QtWidgets.QPushButton(self)
+        update_all_button.setToolTip("Update all outdated to latest version")
+        update_all_button.setIcon(icon)
+
+        icon = qtawesome.icon("fa.refresh", color="white")
+        refresh_button = QtWidgets.QPushButton(self)
+        refresh_button.setToolTip("Refresh")
+        refresh_button.setIcon(icon)
+
+        control_layout = QtWidgets.QHBoxLayout()
+        control_layout.addWidget(filter_label)
+        control_layout.addWidget(text_filter)
+        control_layout.addWidget(outdated_only_checkbox)
+        control_layout.addWidget(update_all_button)
+        control_layout.addWidget(refresh_button)
+
+        model = InventoryModel(controller)
+        proxy = FilterProxyModel()
+        proxy.setSourceModel(model)
+        proxy.setDynamicSortFilter(True)
+        proxy.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive)
+
+        view = SceneInventoryView(controller, self)
+        view.setModel(proxy)
+
+        sync_enabled = controller.is_sync_server_enabled()
+        view.setColumnHidden(model.active_site_col, not 
sync_enabled) + view.setColumnHidden(model.remote_site_col, not sync_enabled) + + # set some nice default widths for the view + view.setColumnWidth(0, 250) # name + view.setColumnWidth(1, 55) # version + view.setColumnWidth(2, 55) # count + view.setColumnWidth(3, 150) # family + view.setColumnWidth(4, 120) # group + view.setColumnWidth(5, 150) # loader + + # apply delegates + version_delegate = ControllerVersionDelegate(controller, self) + column = model.Columns.index("version") + view.setItemDelegateForColumn(column, version_delegate) + + layout = QtWidgets.QVBoxLayout(self) + layout.addLayout(control_layout) + layout.addWidget(view) + + show_timer = QtCore.QTimer() + show_timer.setInterval(0) + show_timer.setSingleShot(False) + + # signals + show_timer.timeout.connect(self._on_show_timer) + text_filter.textChanged.connect(self._on_text_filter_change) + outdated_only_checkbox.stateChanged.connect( + self._on_outdated_state_change + ) + view.hierarchy_view_changed.connect( + self._on_hierarchy_view_change + ) + view.data_changed.connect(self._on_refresh_request) + refresh_button.clicked.connect(self._on_refresh_request) + update_all_button.clicked.connect(self._on_update_all) + + self._show_timer = show_timer + self._show_counter = 0 + self._controller = controller + self._update_all_button = update_all_button + self._outdated_only_checkbox = outdated_only_checkbox + self._view = view + self._model = model + self._proxy = proxy + self._version_delegate = version_delegate + + self._first_show = True + self._first_refresh = True + + def showEvent(self, event): + super(SceneInventoryWindow, self).showEvent(event) + if self._first_show: + self._first_show = False + self.setStyleSheet(style.load_stylesheet()) + + self._show_counter = 0 + self._show_timer.start() + + def keyPressEvent(self, event): + """Custom keyPressEvent. + + Override keyPressEvent to do nothing so that Maya's panels won't + take focus when pressing "SHIFT" whilst mouse is over viewport or + outliner. This way users don't accidentally perform Maya commands + whilst trying to name an instance. 
+ + """ + + def _on_refresh_request(self): + """Signal callback to trigger 'refresh' without any arguments.""" + + self.refresh() + + def refresh(self, containers=None): + self._first_refresh = False + self._controller.reset() + with preserve_expanded_rows( + tree_view=self._view, + role=self._model.UniqueRole + ): + with preserve_selection( + tree_view=self._view, + role=self._model.UniqueRole, + current_index=False + ): + kwargs = {"containers": containers} + # TODO do not touch view's inner attribute + if self._view._hierarchy_view: + kwargs["selected"] = self._view._selected + self._model.refresh(**kwargs) + + def _on_show_timer(self): + if self._show_counter < 3: + self._show_counter += 1 + return + self._show_timer.stop() + self.refresh() + + def _on_hierarchy_view_change(self, enabled): + self._proxy.set_hierarchy_view(enabled) + self._model.set_hierarchy_view(enabled) + + def _on_text_filter_change(self, text_filter): + if hasattr(self._proxy, "setFilterRegExp"): + self._proxy.setFilterRegExp(text_filter) + else: + self._proxy.setFilterRegularExpression(text_filter) + + def _on_outdated_state_change(self): + self._proxy.set_filter_outdated( + self._outdated_only_checkbox.isChecked() + ) + + def _on_update_all(self): + self._view.update_all() diff --git a/openpype/tools/ayon_utils/models/__init__.py b/openpype/tools/ayon_utils/models/__init__.py index 1434282c5b..69722b5e21 100644 --- a/openpype/tools/ayon_utils/models/__init__.py +++ b/openpype/tools/ayon_utils/models/__init__.py @@ -12,6 +12,7 @@ from .hierarchy import ( HierarchyModel, HIERARCHY_MODEL_SENDER, ) +from .thumbnails import ThumbnailsModel __all__ = ( @@ -26,4 +27,6 @@ __all__ = ( "TaskItem", "HierarchyModel", "HIERARCHY_MODEL_SENDER", + + "ThumbnailsModel", ) diff --git a/openpype/tools/ayon_utils/models/hierarchy.py b/openpype/tools/ayon_utils/models/hierarchy.py index 93f4c48d98..fc6b8e1eb7 100644 --- a/openpype/tools/ayon_utils/models/hierarchy.py +++ b/openpype/tools/ayon_utils/models/hierarchy.py @@ -29,17 +29,21 @@ class FolderItem: parent_id (Union[str, None]): Parent folder id. If 'None' then project is parent. name (str): Name of folder. - label (str): Folder label. - icon_name (str): Name of icon from font awesome. - icon_color (str): Hex color string that will be used for icon. + path (str): Folder path. + folder_type (str): Type of folder. + label (Union[str, None]): Folder label. + icon (Union[dict[str, Any], None]): Icon definition. """ def __init__( - self, entity_id, parent_id, name, label, icon + self, entity_id, parent_id, name, path, folder_type, label, icon ): self.entity_id = entity_id self.parent_id = parent_id self.name = name + self.path = path + self.folder_type = folder_type + self.label = label or name if not icon: icon = { "type": "awesome-font", @@ -47,7 +51,6 @@ class FolderItem: "color": get_default_entity_icon_color() } self.icon = icon - self.label = label or name def to_data(self): """Converts folder item to data. @@ -60,6 +63,8 @@ class FolderItem: "entity_id": self.entity_id, "parent_id": self.parent_id, "name": self.name, + "path": self.path, + "folder_type": self.folder_type, "label": self.label, "icon": self.icon, } @@ -91,8 +96,7 @@ class TaskItem: name (str): Name of task. task_type (str): Type of task. parent_id (str): Parent folder id. - icon_name (str): Name of icon from font awesome. - icon_color (str): Hex color string that will be used for icon. + icon (Union[dict[str, Any], None]): Icon definitions. 
""" def __init__( @@ -184,12 +188,31 @@ def _get_task_items_from_tasks(tasks): def _get_folder_item_from_hierarchy_item(item): + name = item["name"] + path_parts = list(item["parents"]) + path_parts.append(name) + return FolderItem( item["id"], item["parentId"], - item["name"], + name, + "/".join(path_parts), + item["folderType"], item["label"], - None + None, + ) + + +def _get_folder_item_from_entity(entity): + name = entity["name"] + return FolderItem( + entity["id"], + entity["parentId"], + name, + entity["path"], + entity["folderType"], + entity["label"] or name, + None, ) @@ -224,13 +247,84 @@ class HierarchyModel(object): self._tasks_by_id.reset() def refresh_project(self, project_name): + """Force to refresh folder items for a project. + + Args: + project_name (str): Name of project to refresh. + """ + self._refresh_folders_cache(project_name) def get_folder_items(self, project_name, sender): + """Get folder items by project name. + + The folders are cached per project name. If the cache is not valid + then the folders are queried from server. + + Args: + project_name (str): Name of project where to look for folders. + sender (Union[str, None]): Who requested the folder ids. + + Returns: + dict[str, FolderItem]: Folder items by id. + """ + if not self._folders_items[project_name].is_valid: self._refresh_folders_cache(project_name, sender) return self._folders_items[project_name].get_data() + def get_folder_items_by_id(self, project_name, folder_ids): + """Get folder items by ids. + + This function will query folders if they are not in cache. But the + queried items are not added to cache back. + + Args: + project_name (str): Name of project where to look for folders. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Union[FolderItem, None]]: Folder items by id. + """ + + folder_ids = set(folder_ids) + if self._folders_items[project_name].is_valid: + cache_data = self._folders_items[project_name].get_data() + return { + folder_id: cache_data.get(folder_id) + for folder_id in folder_ids + } + folders = ayon_api.get_folders( + project_name, + folder_ids=folder_ids, + fields=["id", "name", "label", "parentId", "path", "folderType"] + ) + # Make sure all folder ids are in output + output = {folder_id: None for folder_id in folder_ids} + output.update({ + folder["id"]: _get_folder_item_from_entity(folder) + for folder in folders + }) + return output + + def get_folder_item(self, project_name, folder_id): + """Get folder items by id. + + This function will query folder if they is not in cache. But the + queried items are not added to cache back. + + Args: + project_name (str): Name of project where to look for folders. + folder_id (str): Folder id. + + Returns: + Union[FolderItem, None]: Folder item. + """ + items = self.get_folder_items_by_id( + project_name, [folder_id] + ) + return items.get(folder_id) + def get_task_items(self, project_name, folder_id, sender): if not project_name or not folder_id: return [] @@ -240,23 +334,65 @@ class HierarchyModel(object): self._refresh_tasks_cache(project_name, folder_id, sender) return task_cache.get_data() + def get_folder_entities(self, project_name, folder_ids): + """Get folder entities by ids. + + Args: + project_name (str): Project name. + folder_ids (Iterable[str]): Folder ids. + + Returns: + dict[str, Any]: Folder entities by id. 
+ """ + + output = {} + folder_ids = set(folder_ids) + if not project_name or not folder_ids: + return output + + folder_ids_to_query = set() + for folder_id in folder_ids: + cache = self._folders_by_id[project_name][folder_id] + if cache.is_valid: + output[folder_id] = cache.get_data() + elif folder_id: + folder_ids_to_query.add(folder_id) + else: + output[folder_id] = None + self._query_folder_entities(project_name, folder_ids_to_query) + for folder_id in folder_ids_to_query: + cache = self._folders_by_id[project_name][folder_id] + output[folder_id] = cache.get_data() + return output + def get_folder_entity(self, project_name, folder_id): - cache = self._folders_by_id[project_name][folder_id] - if not cache.is_valid: - entity = None - if folder_id: - entity = ayon_api.get_folder_by_id(project_name, folder_id) - cache.update_data(entity) - return cache.get_data() + output = self.get_folder_entities(project_name, {folder_id}) + return output[folder_id] + + def get_task_entities(self, project_name, task_ids): + output = {} + task_ids = set(task_ids) + if not project_name or not task_ids: + return output + + task_ids_to_query = set() + for task_id in task_ids: + cache = self._tasks_by_id[project_name][task_id] + if cache.is_valid: + output[task_id] = cache.get_data() + elif task_id: + task_ids_to_query.add(task_id) + else: + output[task_id] = None + self._query_task_entities(project_name, task_ids_to_query) + for task_id in task_ids_to_query: + cache = self._tasks_by_id[project_name][task_id] + output[task_id] = cache.get_data() + return output def get_task_entity(self, project_name, task_id): - cache = self._tasks_by_id[project_name][task_id] - if not cache.is_valid: - entity = None - if task_id: - entity = ayon_api.get_task_by_id(project_name, task_id) - cache.update_data(entity) - return cache.get_data() + output = self.get_task_entities(project_name, {task_id}) + return output[task_id] @contextlib.contextmanager def _folder_refresh_event_manager(self, project_name, sender): @@ -326,6 +462,25 @@ class HierarchyModel(object): hierachy_queue.extend(item["children"] or []) return folder_items + def _query_folder_entities(self, project_name, folder_ids): + if not project_name or not folder_ids: + return + project_cache = self._folders_by_id[project_name] + folders = ayon_api.get_folders(project_name, folder_ids=folder_ids) + for folder in folders: + folder_id = folder["id"] + project_cache[folder_id].update_data(folder) + + def _query_task_entities(self, project_name, task_ids): + if not project_name or not task_ids: + return + + project_cache = self._tasks_by_id[project_name] + tasks = ayon_api.get_tasks(project_name, task_ids=task_ids) + for task in tasks: + task_id = task["id"] + project_cache[task_id].update_data(task) + def _refresh_tasks_cache(self, project_name, folder_id, sender=None): if folder_id in self._tasks_refreshing: return diff --git a/openpype/tools/ayon_utils/models/projects.py b/openpype/tools/ayon_utils/models/projects.py index ae3eeecea4..4ad53fbbfa 100644 --- a/openpype/tools/ayon_utils/models/projects.py +++ b/openpype/tools/ayon_utils/models/projects.py @@ -29,13 +29,14 @@ class ProjectItem: is parent. 
""" - def __init__(self, name, active, icon=None): + def __init__(self, name, active, is_library, icon=None): self.name = name self.active = active + self.is_library = is_library if icon is None: icon = { "type": "awesome-font", - "name": "fa.map", + "name": "fa.book" if is_library else "fa.map", "color": get_default_entity_icon_color(), } self.icon = icon @@ -50,6 +51,7 @@ class ProjectItem: return { "name": self.name, "active": self.active, + "is_library": self.is_library, "icon": self.icon, } @@ -78,7 +80,7 @@ def _get_project_items_from_entitiy(projects): """ return [ - ProjectItem(project["name"], project["active"]) + ProjectItem(project["name"], project["active"], project["library"]) for project in projects ] @@ -141,5 +143,5 @@ class ProjectsModel(object): self._projects_cache.update_data(project_items) def _query_projects(self): - projects = ayon_api.get_projects(fields=["name", "active"]) + projects = ayon_api.get_projects(fields=["name", "active", "library"]) return _get_project_items_from_entitiy(projects) diff --git a/openpype/tools/ayon_utils/models/thumbnails.py b/openpype/tools/ayon_utils/models/thumbnails.py new file mode 100644 index 0000000000..40892338df --- /dev/null +++ b/openpype/tools/ayon_utils/models/thumbnails.py @@ -0,0 +1,118 @@ +import collections + +import ayon_api + +from openpype.client.server.thumbnails import AYONThumbnailCache + +from .cache import NestedCacheItem + + +class ThumbnailsModel: + entity_cache_lifetime = 240 # In seconds + + def __init__(self): + self._thumbnail_cache = AYONThumbnailCache() + self._paths_cache = collections.defaultdict(dict) + self._folders_cache = NestedCacheItem( + levels=2, lifetime=self.entity_cache_lifetime) + self._versions_cache = NestedCacheItem( + levels=2, lifetime=self.entity_cache_lifetime) + + def reset(self): + self._paths_cache = collections.defaultdict(dict) + self._folders_cache.reset() + self._versions_cache.reset() + + def get_thumbnail_path(self, project_name, thumbnail_id): + return self._get_thumbnail_path(project_name, thumbnail_id) + + def get_folder_thumbnail_ids(self, project_name, folder_ids): + project_cache = self._folders_cache[project_name] + output = {} + missing_cache = set() + for folder_id in folder_ids: + cache = project_cache[folder_id] + if cache.is_valid: + output[folder_id] = cache.get_data() + else: + missing_cache.add(folder_id) + self._query_folder_thumbnail_ids(project_name, missing_cache) + for folder_id in missing_cache: + cache = project_cache[folder_id] + output[folder_id] = cache.get_data() + return output + + def get_version_thumbnail_ids(self, project_name, version_ids): + project_cache = self._versions_cache[project_name] + output = {} + missing_cache = set() + for version_id in version_ids: + cache = project_cache[version_id] + if cache.is_valid: + output[version_id] = cache.get_data() + else: + missing_cache.add(version_id) + self._query_version_thumbnail_ids(project_name, missing_cache) + for version_id in missing_cache: + cache = project_cache[version_id] + output[version_id] = cache.get_data() + return output + + def _get_thumbnail_path(self, project_name, thumbnail_id): + if not thumbnail_id: + return None + + project_cache = self._paths_cache[project_name] + if thumbnail_id in project_cache: + return project_cache[thumbnail_id] + + filepath = self._thumbnail_cache.get_thumbnail_filepath( + project_name, thumbnail_id + ) + if filepath is not None: + project_cache[thumbnail_id] = filepath + return filepath + + # 'ayon_api' had a bug, public function + # 
'get_thumbnail_by_id' did not return output of + # 'ServerAPI' method. + con = ayon_api.get_server_api_connection() + result = con.get_thumbnail_by_id(project_name, thumbnail_id) + if result is None: + pass + + elif result.is_valid: + filepath = self._thumbnail_cache.store_thumbnail( + project_name, + thumbnail_id, + result.content, + result.content_type + ) + project_cache[thumbnail_id] = filepath + return filepath + + def _query_folder_thumbnail_ids(self, project_name, folder_ids): + if not project_name or not folder_ids: + return + + folders = ayon_api.get_folders( + project_name, + folder_ids=folder_ids, + fields=["id", "thumbnailId"] + ) + project_cache = self._folders_cache[project_name] + for folder in folders: + project_cache[folder["id"]] = folder["thumbnailId"] + + def _query_version_thumbnail_ids(self, project_name, version_ids): + if not project_name or not version_ids: + return + + versions = ayon_api.get_versions( + project_name, + version_ids=version_ids, + fields=["id", "thumbnailId"] + ) + project_cache = self._versions_cache[project_name] + for version in versions: + project_cache[version["id"]] = version["thumbnailId"] diff --git a/openpype/tools/ayon_utils/widgets/__init__.py b/openpype/tools/ayon_utils/widgets/__init__.py index 59aef98faf..432a249a73 100644 --- a/openpype/tools/ayon_utils/widgets/__init__.py +++ b/openpype/tools/ayon_utils/widgets/__init__.py @@ -8,11 +8,13 @@ from .projects_widget import ( from .folders_widget import ( FoldersWidget, FoldersModel, + FOLDERS_MODEL_SENDER_NAME, ) from .tasks_widget import ( TasksWidget, TasksModel, + TASKS_MODEL_SENDER_NAME, ) from .utils import ( get_qt_icon, @@ -28,9 +30,11 @@ __all__ = ( "FoldersWidget", "FoldersModel", + "FOLDERS_MODEL_SENDER_NAME", "TasksWidget", "TasksModel", + "TASKS_MODEL_SENDER_NAME", "get_qt_icon", "RefreshThread", diff --git a/openpype/tools/ayon_utils/widgets/folders_widget.py b/openpype/tools/ayon_utils/widgets/folders_widget.py index 4f44881081..322553c51c 100644 --- a/openpype/tools/ayon_utils/widgets/folders_widget.py +++ b/openpype/tools/ayon_utils/widgets/folders_widget.py @@ -4,14 +4,16 @@ from qtpy import QtWidgets, QtGui, QtCore from openpype.tools.utils import ( RecursiveSortFilterProxyModel, - DeselectableTreeView, + TreeView, ) from .utils import RefreshThread, get_qt_icon -SENDER_NAME = "qt_folders_model" -ITEM_ID_ROLE = QtCore.Qt.UserRole + 1 -ITEM_NAME_ROLE = QtCore.Qt.UserRole + 2 +FOLDERS_MODEL_SENDER_NAME = "qt_folders_model" +FOLDER_ID_ROLE = QtCore.Qt.UserRole + 1 +FOLDER_NAME_ROLE = QtCore.Qt.UserRole + 2 +FOLDER_PATH_ROLE = QtCore.Qt.UserRole + 3 +FOLDER_TYPE_ROLE = QtCore.Qt.UserRole + 4 class FoldersModel(QtGui.QStandardItemModel): @@ -84,6 +86,15 @@ class FoldersModel(QtGui.QStandardItemModel): return QtCore.QModelIndex() return self.indexFromItem(item) + def get_project_name(self): + """Project name which model currently use. + + Returns: + Union[str, None]: Currently used project name. + """ + + return self._last_project_name + def set_project_name(self, project_name): """Refresh folders items. @@ -112,7 +123,7 @@ class FoldersModel(QtGui.QStandardItemModel): project_name, self._controller.get_folder_items, project_name, - SENDER_NAME + FOLDERS_MODEL_SENDER_NAME ) self._current_refresh_thread = thread self._refresh_threads[thread.id] = thread @@ -142,6 +153,22 @@ class FoldersModel(QtGui.QStandardItemModel): self._fill_items(thread.get_result()) + def _fill_item_data(self, item, folder_item): + """ + + Args: + item (QtGui.QStandardItem): Item to fill data. 
+ folder_item (FolderItem): Folder item. + """ + + icon = get_qt_icon(folder_item.icon) + item.setData(folder_item.entity_id, FOLDER_ID_ROLE) + item.setData(folder_item.name, FOLDER_NAME_ROLE) + item.setData(folder_item.path, FOLDER_PATH_ROLE) + item.setData(folder_item.folder_type, FOLDER_TYPE_ROLE) + item.setData(folder_item.label, QtCore.Qt.DisplayRole) + item.setData(icon, QtCore.Qt.DecorationRole) + def _fill_items(self, folder_items_by_id): if not folder_items_by_id: if folder_items_by_id is not None: @@ -178,7 +205,7 @@ class FoldersModel(QtGui.QStandardItemModel): folder_ids_to_add = set(folder_items) for row_idx in reversed(range(parent_item.rowCount())): child_item = parent_item.child(row_idx) - child_id = child_item.data(ITEM_ID_ROLE) + child_id = child_item.data(FOLDER_ID_ROLE) if child_id in ids_to_remove: removed_items.append(parent_item.takeRow(row_idx)) else: @@ -195,11 +222,7 @@ class FoldersModel(QtGui.QStandardItemModel): else: is_new = self._parent_id_by_id[item_id] != parent_id - icon = get_qt_icon(folder_item.icon) - item.setData(item_id, ITEM_ID_ROLE) - item.setData(folder_item.name, ITEM_NAME_ROLE) - item.setData(folder_item.label, QtCore.Qt.DisplayRole) - item.setData(icon, QtCore.Qt.DecorationRole) + self._fill_item_data(item, folder_item) if is_new: new_items.append(item) self._items_by_id[item_id] = item @@ -248,10 +271,14 @@ class FoldersWidget(QtWidgets.QWidget): the expected selection. Defaults to False. """ + double_clicked = QtCore.Signal(QtGui.QMouseEvent) + selection_changed = QtCore.Signal() + refreshed = QtCore.Signal() + def __init__(self, controller, parent, handle_expected_selection=False): super(FoldersWidget, self).__init__(parent) - folders_view = DeselectableTreeView(self) + folders_view = TreeView(self) folders_view.setHeaderHidden(True) folders_model = FoldersModel(controller) @@ -284,7 +311,7 @@ class FoldersWidget(QtWidgets.QWidget): selection_model = folders_view.selectionModel() selection_model.selectionChanged.connect(self._on_selection_change) - + folders_view.double_clicked.connect(self.double_clicked) folders_model.refreshed.connect(self._on_model_refresh) self._controller = controller @@ -295,7 +322,27 @@ class FoldersWidget(QtWidgets.QWidget): self._handle_expected_selection = handle_expected_selection self._expected_selection = None - def set_name_filer(self, name): + @property + def is_refreshing(self): + """Model is refreshing. + + Returns: + bool: True if model is refreshing. + """ + + return self._folders_model.is_refreshing + + @property + def has_content(self): + """Has at least one folder. + + Returns: + bool: True if model has at least one folder. + """ + + return self._folders_model.has_content + + def set_name_filter(self, name): """Set filter of folder name. Args: @@ -312,16 +359,108 @@ class FoldersWidget(QtWidgets.QWidget): self._folders_model.refresh() - def _on_project_selection_change(self, event): - project_name = event["project_name"] - self._set_project_name(project_name) + def get_project_name(self): + """Project name in which folders widget currently is. + + Returns: + Union[str, None]: Currently used project name. + """ + + return self._folders_model.get_project_name() + + def set_project_name(self, project_name): + """Set project name. + + Do not use this method when controller is handling selection of + project using 'selection.project.changed' event. + + Args: + project_name (str): Project name. 
+ """ - def _set_project_name(self, project_name): self._folders_model.set_project_name(project_name) + def get_selected_folder_id(self): + """Get selected folder id. + + Returns: + Union[str, None]: Folder id which is selected. + """ + + return self._get_selected_item_id() + + def get_selected_folder_label(self): + """Selected folder label. + + Returns: + Union[str, None]: Selected folder label. + """ + + item_id = self._get_selected_item_id() + return self.get_folder_label(item_id) + + def get_folder_label(self, folder_id): + """Folder label for a given folder id. + + Returns: + Union[str, None]: Folder label. + """ + + index = self._folders_model.get_index_by_id(folder_id) + if index.isValid(): + return index.data(QtCore.Qt.DisplayRole) + return None + + def set_selected_folder(self, folder_id): + """Change selection. + + Args: + folder_id (Union[str, None]): Folder id or None to deselect. + """ + + if folder_id is None: + self._folders_view.clearSelection() + return True + + if folder_id == self._get_selected_item_id(): + return True + index = self._folders_model.get_index_by_id(folder_id) + if not index.isValid(): + return False + + proxy_index = self._folders_proxy_model.mapFromSource(index) + if not proxy_index.isValid(): + return False + + selection_model = self._folders_view.selectionModel() + selection_model.setCurrentIndex( + proxy_index, QtCore.QItemSelectionModel.SelectCurrent + ) + return True + + def set_deselectable(self, enabled): + """Set deselectable mode. + + Items in view can be deselected. + + Args: + enabled (bool): Enable deselectable mode. + """ + + self._folders_view.set_deselectable(enabled) + + def _get_selected_index(self): + return self._folders_model.get_index_by_id( + self.get_selected_folder_id() + ) + + def _on_project_selection_change(self, event): + project_name = event["project_name"] + self.set_project_name(project_name) + def _on_folders_refresh_finished(self, event): - if event["sender"] != SENDER_NAME: - self._set_project_name(event["project_name"]) + if event["sender"] != FOLDERS_MODEL_SENDER_NAME: + self.set_project_name(event["project_name"]) def _on_controller_refresh(self): self._update_expected_selection() @@ -330,11 +469,12 @@ class FoldersWidget(QtWidgets.QWidget): if self._expected_selection: self._set_expected_selection() self._folders_proxy_model.sort(0) + self.refreshed.emit() def _get_selected_item_id(self): selection_model = self._folders_view.selectionModel() for index in selection_model.selectedIndexes(): - item_id = index.data(ITEM_ID_ROLE) + item_id = index.data(FOLDER_ID_ROLE) if item_id is not None: return item_id return None @@ -342,6 +482,7 @@ class FoldersWidget(QtWidgets.QWidget): def _on_selection_change(self): item_id = self._get_selected_item_id() self._controller.set_selected_folder(item_id) + self.selection_changed.emit() # Expected selection handling def _on_expected_selection_change(self, event): @@ -369,12 +510,6 @@ class FoldersWidget(QtWidgets.QWidget): folder_id = self._expected_selection self._expected_selection = None - if ( - folder_id is not None - and folder_id != self._get_selected_item_id() - ): - index = self._folders_model.get_index_by_id(folder_id) - if index.isValid(): - proxy_index = self._folders_proxy_model.mapFromSource(index) - self._folders_view.setCurrentIndex(proxy_index) + if folder_id is not None: + self.set_selected_folder(folder_id) self._controller.expected_folder_selected(folder_id) diff --git a/openpype/tools/ayon_utils/widgets/projects_widget.py 
b/openpype/tools/ayon_utils/widgets/projects_widget.py index 818d574910..be18cfe3ed 100644 --- a/openpype/tools/ayon_utils/widgets/projects_widget.py +++ b/openpype/tools/ayon_utils/widgets/projects_widget.py @@ -5,6 +5,9 @@ from .utils import RefreshThread, get_qt_icon PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 1 PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 2 +PROJECT_IS_LIBRARY_ROLE = QtCore.Qt.UserRole + 3 +PROJECT_IS_CURRENT_ROLE = QtCore.Qt.UserRole + 4 +LIBRARY_PROJECT_SEPARATOR_ROLE = QtCore.Qt.UserRole + 5 class ProjectsModel(QtGui.QStandardItemModel): @@ -15,10 +18,23 @@ class ProjectsModel(QtGui.QStandardItemModel): self._controller = controller self._project_items = {} + self._has_libraries = False self._empty_item = None self._empty_item_added = False + self._select_item = None + self._select_item_added = False + self._select_item_visible = None + + self._libraries_sep_item = None + self._libraries_sep_item_added = False + self._libraries_sep_item_visible = False + + self._current_context_project = None + + self._selected_project = None + self._is_refreshing = False self._refresh_thread = None @@ -32,21 +48,63 @@ class ProjectsModel(QtGui.QStandardItemModel): def has_content(self): return len(self._project_items) > 0 + def set_select_item_visible(self, visible): + if self._select_item_visible is visible: + return + self._select_item_visible = visible + + if self._selected_project is None: + self._add_select_item() + + def set_libraries_separator_visible(self, visible): + if self._libraries_sep_item_visible is visible: + return + self._libraries_sep_item_visible = visible + + def set_selected_project(self, project_name): + if not self._select_item_visible: + return + + self._selected_project = project_name + if project_name is None: + self._add_select_item() + else: + self._remove_select_item() + + def set_current_context_project(self, project_name): + if project_name == self._current_context_project: + return + self._unset_current_context_project(self._current_context_project) + self._current_context_project = project_name + self._set_current_context_project(project_name) + + def _set_current_context_project(self, project_name): + item = self._project_items.get(project_name) + if item is None: + return + item.setData(True, PROJECT_IS_CURRENT_ROLE) + + def _unset_current_context_project(self, project_name): + item = self._project_items.get(project_name) + if item is None: + return + item.setData(False, PROJECT_IS_CURRENT_ROLE) + def _add_empty_item(self): + if self._empty_item_added: + return + self._empty_item_added = True item = self._get_empty_item() - if not self._empty_item_added: - root_item = self.invisibleRootItem() - root_item.appendRow(item) - self._empty_item_added = True + root_item = self.invisibleRootItem() + root_item.appendRow(item) def _remove_empty_item(self): if not self._empty_item_added: return - + self._empty_item_added = False root_item = self.invisibleRootItem() item = self._get_empty_item() root_item.takeRow(item.row()) - self._empty_item_added = False def _get_empty_item(self): if self._empty_item is None: @@ -55,6 +113,61 @@ class ProjectsModel(QtGui.QStandardItemModel): self._empty_item = item return self._empty_item + def _get_library_sep_item(self): + if self._libraries_sep_item is not None: + return self._libraries_sep_item + + item = QtGui.QStandardItem() + item.setData("Libraries", QtCore.Qt.DisplayRole) + item.setData(True, LIBRARY_PROJECT_SEPARATOR_ROLE) + item.setFlags(QtCore.Qt.NoItemFlags) + self._libraries_sep_item = item + return item + + 
def _add_library_sep_item(self): + if ( + not self._libraries_sep_item_visible + or self._libraries_sep_item_added + ): + return + self._libraries_sep_item_added = True + item = self._get_library_sep_item() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_library_sep_item(self): + if ( + not self._libraries_sep_item_added + ): + return + self._libraries_sep_item_added = False + item = self._get_library_sep_item() + root_item = self.invisibleRootItem() + root_item.takeRow(item.row()) + + def _add_select_item(self): + if self._select_item_added: + return + self._select_item_added = True + item = self._get_select_item() + root_item = self.invisibleRootItem() + root_item.appendRow(item) + + def _remove_select_item(self): + if not self._select_item_added: + return + self._select_item_added = False + root_item = self.invisibleRootItem() + item = self._get_select_item() + root_item.takeRow(item.row()) + + def _get_select_item(self): + if self._select_item is None: + item = QtGui.QStandardItem("< Select project >") + item.setEditable(False) + self._select_item = item + return self._select_item + def _refresh(self): if self._is_refreshing: return @@ -80,44 +193,118 @@ class ProjectsModel(QtGui.QStandardItemModel): self.refreshed.emit() def _fill_items(self, project_items): - items_to_remove = set(self._project_items.keys()) + new_project_names = { + project_item.name + for project_item in project_items + } + + # Handle "Select item" visibility + if self._select_item_visible: + # Add select project. if previously selected project is not in + # project items + if self._selected_project not in new_project_names: + self._add_select_item() + else: + self._remove_select_item() + + root_item = self.invisibleRootItem() + + items_to_remove = set(self._project_items.keys()) - new_project_names + for project_name in items_to_remove: + item = self._project_items.pop(project_name) + root_item.takeRow(item.row()) + + has_library_project = False new_items = [] for project_item in project_items: project_name = project_item.name - items_to_remove.discard(project_name) item = self._project_items.get(project_name) + if project_item.is_library: + has_library_project = True if item is None: item = QtGui.QStandardItem() + item.setEditable(False) new_items.append(item) icon = get_qt_icon(project_item.icon) item.setData(project_name, QtCore.Qt.DisplayRole) item.setData(icon, QtCore.Qt.DecorationRole) item.setData(project_name, PROJECT_NAME_ROLE) item.setData(project_item.active, PROJECT_IS_ACTIVE_ROLE) + item.setData(project_item.is_library, PROJECT_IS_LIBRARY_ROLE) + is_current = project_name == self._current_context_project + item.setData(is_current, PROJECT_IS_CURRENT_ROLE) self._project_items[project_name] = item - root_item = self.invisibleRootItem() + self._set_current_context_project(self._current_context_project) + + self._has_libraries = has_library_project + if new_items: root_item.appendRows(new_items) - for project_name in items_to_remove: - item = self._project_items.pop(project_name) - root_item.removeRow(item.row()) - if self.has_content(): + # Make sure "No projects" item is removed self._remove_empty_item() + if has_library_project: + self._add_library_sep_item() + else: + self._remove_library_sep_item() else: + # Keep only "No projects" item self._add_empty_item() + self._remove_select_item() + self._remove_library_sep_item() class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): def __init__(self, *args, **kwargs): super(ProjectSortFilterProxy, 
self).__init__(*args, **kwargs) self._filter_inactive = True + self._filter_standard = False + self._filter_library = False + self._sort_by_type = True # Disable case sensitivity self.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + def _type_sort(self, l_index, r_index): + if not self._sort_by_type: + return None + + l_is_library = l_index.data(PROJECT_IS_LIBRARY_ROLE) + r_is_library = r_index.data(PROJECT_IS_LIBRARY_ROLE) + # Both hare project items + if l_is_library is not None and r_is_library is not None: + if l_is_library is r_is_library: + return None + if l_is_library: + return False + return True + + if l_index.data(LIBRARY_PROJECT_SEPARATOR_ROLE): + if r_is_library is None: + return False + return r_is_library + + if r_index.data(LIBRARY_PROJECT_SEPARATOR_ROLE): + if l_is_library is None: + return True + return l_is_library + return None + def lessThan(self, left_index, right_index): + # Current project always on top + # - make sure this is always first, before any other sorting + # e.g. type sort would move the item lower + if left_index.data(PROJECT_IS_CURRENT_ROLE): + return True + if right_index.data(PROJECT_IS_CURRENT_ROLE): + return False + + # Library separator should be before library projects + result = self._type_sort(left_index, right_index) + if result is not None: + return result + if left_index.data(PROJECT_NAME_ROLE) is None: return True @@ -137,21 +324,43 @@ class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): def filterAcceptsRow(self, source_row, source_parent): index = self.sourceModel().index(source_row, 0, source_parent) + project_name = index.data(PROJECT_NAME_ROLE) + if project_name is None: + return True + string_pattern = self.filterRegularExpression().pattern() + if string_pattern: + return string_pattern.lower() in project_name.lower() + + # Current project keep always visible + default = super(ProjectSortFilterProxy, self).filterAcceptsRow( + source_row, source_parent + ) + if not default: + return default + + # Make sure current project is visible + if index.data(PROJECT_IS_CURRENT_ROLE): + return True + if ( self._filter_inactive and not index.data(PROJECT_IS_ACTIVE_ROLE) ): return False - if string_pattern: - project_name = index.data(PROJECT_IS_ACTIVE_ROLE) - if project_name is not None: - return string_pattern.lower() in project_name.lower() + if ( + self._filter_standard + and not index.data(PROJECT_IS_LIBRARY_ROLE) + ): + return False - return super(ProjectSortFilterProxy, self).filterAcceptsRow( - source_row, source_parent - ) + if ( + self._filter_library + and index.data(PROJECT_IS_LIBRARY_ROLE) + ): + return False + return True def _custom_index_filter(self, index): return bool(index.data(PROJECT_IS_ACTIVE_ROLE)) @@ -159,14 +368,35 @@ class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel): def is_active_filter_enabled(self): return self._filter_inactive - def set_active_filter_enabled(self, value): - if self._filter_inactive == value: + def set_active_filter_enabled(self, enabled): + if self._filter_inactive == enabled: return - self._filter_inactive = value + self._filter_inactive = enabled self.invalidateFilter() + def set_library_filter_enabled(self, enabled): + if self._filter_library == enabled: + return + self._filter_library = enabled + self.invalidateFilter() + + def set_standard_filter_enabled(self, enabled): + if self._filter_standard == enabled: + return + self._filter_standard = enabled + self.invalidateFilter() + + def set_sort_by_type(self, enabled): + if self._sort_by_type is enabled: + return + 
self._sort_by_type = enabled + self.invalidate() + class ProjectsCombobox(QtWidgets.QWidget): + refreshed = QtCore.Signal() + selection_changed = QtCore.Signal() + def __init__(self, controller, parent, handle_expected_selection=False): super(ProjectsCombobox, self).__init__(parent) @@ -203,6 +433,7 @@ class ProjectsCombobox(QtWidgets.QWidget): self._controller = controller self._listen_selection_change = True + self._select_item_visible = False self._handle_expected_selection = handle_expected_selection self._expected_selection = None @@ -252,7 +483,7 @@ class ProjectsCombobox(QtWidgets.QWidget): self._listen_selection_change = listen - def get_current_project_name(self): + def get_selected_project_name(self): """Name of selected project. Returns: @@ -264,17 +495,57 @@ class ProjectsCombobox(QtWidgets.QWidget): return None return self._projects_combobox.itemData(idx, PROJECT_NAME_ROLE) + def set_current_context_project(self, project_name): + self._projects_model.set_current_context_project(project_name) + self._projects_proxy_model.invalidateFilter() + + def _update_select_item_visiblity(self, **kwargs): + if not self._select_item_visible: + return + if "project_name" not in kwargs: + project_name = self.get_selected_project_name() + else: + project_name = kwargs.get("project_name") + + # Hide the item if a project is selected + self._projects_model.set_selected_project(project_name) + + def set_select_item_visible(self, visible): + self._select_item_visible = visible + self._projects_model.set_select_item_visible(visible) + self._update_select_item_visiblity() + + def set_libraries_separator_visible(self, visible): + self._projects_model.set_libraries_separator_visible(visible) + + def is_active_filter_enabled(self): + return self._projects_proxy_model.is_active_filter_enabled() + + def set_active_filter_enabled(self, enabled): + return self._projects_proxy_model.set_active_filter_enabled(enabled) + + def set_standard_filter_enabled(self, enabled): + return self._projects_proxy_model.set_standard_filter_enabled(enabled) + + def set_library_filter_enabled(self, enabled): + return self._projects_proxy_model.set_library_filter_enabled(enabled) + def _on_current_index_changed(self, idx): if not self._listen_selection_change: return project_name = self._projects_combobox.itemData( idx, PROJECT_NAME_ROLE) + self._update_select_item_visiblity(project_name=project_name) self._controller.set_selected_project(project_name) + self.selection_changed.emit() def _on_model_refresh(self): self._projects_proxy_model.sort(0) + self._projects_proxy_model.invalidateFilter() if self._expected_selection: self._set_expected_selection() + self._update_select_item_visiblity() + self.refreshed.emit() def _on_projects_refresh_finished(self, event): if event["sender"] != PROJECTS_MODEL_SENDER: @@ -292,7 +563,7 @@ class ProjectsCombobox(QtWidgets.QWidget): return project_name = self._expected_selection if project_name is not None: - if project_name != self.get_current_project_name(): + if project_name != self.get_selected_project_name(): self.set_selection(project_name) else: # Fake project change diff --git a/openpype/tools/ayon_utils/widgets/tasks_widget.py b/openpype/tools/ayon_utils/widgets/tasks_widget.py index 0af506863a..d01b3a7917 100644 --- a/openpype/tools/ayon_utils/widgets/tasks_widget.py +++ b/openpype/tools/ayon_utils/widgets/tasks_widget.py @@ -5,7 +5,7 @@ from openpype.tools.utils import DeselectableTreeView from .utils import RefreshThread, get_qt_icon -SENDER_NAME = "qt_tasks_model" 
+TASKS_MODEL_SENDER_NAME = "qt_tasks_model" ITEM_ID_ROLE = QtCore.Qt.UserRole + 1 PARENT_ID_ROLE = QtCore.Qt.UserRole + 2 ITEM_NAME_ROLE = QtCore.Qt.UserRole + 3 @@ -296,6 +296,9 @@ class TasksWidget(QtWidgets.QWidget): handle_expected_selection (Optional[bool]): Handle expected selection. """ + refreshed = QtCore.Signal() + selection_changed = QtCore.Signal() + def __init__(self, controller, parent, handle_expected_selection=False): super(TasksWidget, self).__init__(parent) @@ -362,7 +365,7 @@ class TasksWidget(QtWidgets.QWidget): # Refresh only if current folder id is the same if ( - event["sender"] == SENDER_NAME + event["sender"] == TASKS_MODEL_SENDER_NAME or event["folder_id"] != self._selected_folder_id ): return @@ -380,6 +383,7 @@ class TasksWidget(QtWidgets.QWidget): if not self._set_expected_selection(): self._on_selection_change() self._tasks_proxy_model.sort(0) + self.refreshed.emit() def _get_selected_item_ids(self): selection_model = self._tasks_view.selectionModel() @@ -400,6 +404,7 @@ class TasksWidget(QtWidgets.QWidget): parent_id, task_id, task_name = self._get_selected_item_ids() self._controller.set_selected_task(task_id, task_name) + self.selection_changed.emit() # Expected selection handling def _on_expected_selection_change(self, event): diff --git a/openpype/tools/ayon_workfiles/widgets/files_widget_published.py b/openpype/tools/ayon_workfiles/widgets/files_widget_published.py index bc59447777..576cf18d73 100644 --- a/openpype/tools/ayon_workfiles/widgets/files_widget_published.py +++ b/openpype/tools/ayon_workfiles/widgets/files_widget_published.py @@ -5,9 +5,10 @@ from openpype.style import ( get_default_entity_icon_color, get_disabled_entity_icon_color, ) +from openpype.tools.utils import TreeView from openpype.tools.utils.delegates import PrettyTimeDelegate -from .utils import TreeView, BaseOverlayFrame +from .utils import BaseOverlayFrame REPRE_ID_ROLE = QtCore.Qt.UserRole + 1 @@ -306,7 +307,7 @@ class PublishedFilesWidget(QtWidgets.QWidget): selection_model = view.selectionModel() selection_model.selectionChanged.connect(self._on_selection_change) - view.double_clicked_left.connect(self._on_left_double_click) + view.double_clicked.connect(self._on_mouse_double_click) controller.register_event_callback( "expected_selection_changed", @@ -350,8 +351,9 @@ class PublishedFilesWidget(QtWidgets.QWidget): repre_id = self.get_selected_repre_id() self._controller.set_selected_representation_id(repre_id) - def _on_left_double_click(self): - self.save_as_requested.emit() + def _on_mouse_double_click(self, event): + if event.button() == QtCore.Qt.LeftButton: + self.save_as_requested.emit() def _on_expected_selection_change(self, event): if ( diff --git a/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py b/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py index e8ccd094d1..3a8e90f933 100644 --- a/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py +++ b/openpype/tools/ayon_workfiles/widgets/files_widget_workarea.py @@ -5,10 +5,9 @@ from openpype.style import ( get_default_entity_icon_color, get_disabled_entity_icon_color, ) +from openpype.tools.utils import TreeView from openpype.tools.utils.delegates import PrettyTimeDelegate -from .utils import TreeView - FILENAME_ROLE = QtCore.Qt.UserRole + 1 FILEPATH_ROLE = QtCore.Qt.UserRole + 2 DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3 @@ -271,7 +270,7 @@ class WorkAreaFilesWidget(QtWidgets.QWidget): selection_model = view.selectionModel() 
selection_model.selectionChanged.connect(self._on_selection_change) - view.double_clicked_left.connect(self._on_left_double_click) + view.double_clicked.connect(self._on_mouse_double_click) view.customContextMenuRequested.connect(self._on_context_menu) controller.register_event_callback( @@ -333,8 +332,9 @@ class WorkAreaFilesWidget(QtWidgets.QWidget): filepath = self.get_selected_path() self._controller.set_selected_workfile_path(filepath) - def _on_left_double_click(self): - self.open_current_requested.emit() + def _on_mouse_double_click(self, event): + if event.button() == QtCore.Qt.LeftButton: + self.open_current_requested.emit() def _on_context_menu(self, point): index = self._view.indexAt(point) diff --git a/openpype/tools/ayon_workfiles/widgets/folders_widget.py b/openpype/tools/ayon_workfiles/widgets/folders_widget.py index b35845f4b6..b04f8e4098 100644 --- a/openpype/tools/ayon_workfiles/widgets/folders_widget.py +++ b/openpype/tools/ayon_workfiles/widgets/folders_widget.py @@ -264,7 +264,7 @@ class FoldersWidget(QtWidgets.QWidget): self._expected_selection = None - def set_name_filer(self, name): + def set_name_filter(self, name): self._folders_proxy_model.setFilterFixedString(name) def _clear(self): diff --git a/openpype/tools/ayon_workfiles/widgets/utils.py b/openpype/tools/ayon_workfiles/widgets/utils.py index 6a61239f8d..9171638546 100644 --- a/openpype/tools/ayon_workfiles/widgets/utils.py +++ b/openpype/tools/ayon_workfiles/widgets/utils.py @@ -1,70 +1,4 @@ from qtpy import QtWidgets, QtCore -from openpype.tools.flickcharm import FlickCharm - - -class TreeView(QtWidgets.QTreeView): - """Ultimate TreeView with flick charm and double click signals. - - Tree view have deselectable mode, which allows to deselect items by - clicking on item area without any items. - - Todos: - Add to tools utils. 
- """ - - double_clicked_left = QtCore.Signal() - double_clicked_right = QtCore.Signal() - - def __init__(self, *args, **kwargs): - super(TreeView, self).__init__(*args, **kwargs) - self._deselectable = False - - self._flick_charm_activated = False - self._flick_charm = FlickCharm(parent=self) - self._before_flick_scroll_mode = None - - def is_deselectable(self): - return self._deselectable - - def set_deselectable(self, deselectable): - self._deselectable = deselectable - - deselectable = property(is_deselectable, set_deselectable) - - def mousePressEvent(self, event): - if self._deselectable: - index = self.indexAt(event.pos()) - if not index.isValid(): - # clear the selection - self.clearSelection() - # clear the current index - self.setCurrentIndex(QtCore.QModelIndex()) - super(TreeView, self).mousePressEvent(event) - - def mouseDoubleClickEvent(self, event): - if event.button() == QtCore.Qt.LeftButton: - self.double_clicked_left.emit() - - elif event.button() == QtCore.Qt.RightButton: - self.double_clicked_right.emit() - - return super(TreeView, self).mouseDoubleClickEvent(event) - - def activate_flick_charm(self): - if self._flick_charm_activated: - return - self._flick_charm_activated = True - self._before_flick_scroll_mode = self.verticalScrollMode() - self._flick_charm.activateOn(self) - self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) - - def deactivate_flick_charm(self): - if not self._flick_charm_activated: - return - self._flick_charm_activated = False - self._flick_charm.deactivateFrom(self) - if self._before_flick_scroll_mode is not None: - self.setVerticalScrollMode(self._before_flick_scroll_mode) class BaseOverlayFrame(QtWidgets.QFrame): diff --git a/openpype/tools/ayon_workfiles/widgets/window.py b/openpype/tools/ayon_workfiles/widgets/window.py index ef352c8b18..6218d2dd06 100644 --- a/openpype/tools/ayon_workfiles/widgets/window.py +++ b/openpype/tools/ayon_workfiles/widgets/window.py @@ -338,7 +338,7 @@ class WorkfilesToolWindow(QtWidgets.QWidget): self._side_panel.set_published_mode(published_mode) def _on_folder_filter_change(self, text): - self._folder_widget.set_name_filer(text) + self._folder_widget.set_name_filter(text) def _on_go_to_current_clicked(self): self._controller.go_to_current_context() diff --git a/openpype/tools/publisher/widgets/card_view_widgets.py b/openpype/tools/publisher/widgets/card_view_widgets.py index eae8e0420a..5cdd429cd4 100644 --- a/openpype/tools/publisher/widgets/card_view_widgets.py +++ b/openpype/tools/publisher/widgets/card_view_widgets.py @@ -797,6 +797,7 @@ class InstanceCardView(AbstractInstanceView): widget.set_active(value) else: self._select_item_clear(instance_id, group_name, instance_widget) + self.selection_changed.emit() self.active_changed.emit() def _on_widget_selection(self, instance_id, group_name, selection_type): diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 39e78c01bb..312cf1dd5c 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -388,6 +388,45 @@ class PublisherWindow(QtWidgets.QDialog): def controller(self): return self._controller + def show_and_publish(self, comment=None): + """Show the window and start publishing. + + The method will reset controller and start the publishing afterwards. + + Todos: + Move validations from '_on_publish_clicked' and change of + 'comment' value in controller to controller so it can be + simplified. + + Args: + comment (Optional[str]): Comment to be set to publish. 
+ If is set to 'None' a comment is not changed at all. + """ + + self._reset_on_show = False + self._reset_on_first_show = False + + if comment is not None: + self.set_comment(comment) + self.make_sure_is_visible() + # Reset controller + self._controller.reset() + # Fake publish click to trigger save validation and propagate + # comment to controller + self._on_publish_clicked() + + def set_comment(self, comment): + """Change comment text. + + Todos: + Be able to set the comment via controller. + + Args: + comment (str): Comment text. + """ + + self._comment_input.setText(comment) + def make_sure_is_visible(self): if self._window_is_visible: self.setWindowState(QtCore.Qt.WindowActive) diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py index a822339ccf..9f083d8eb7 100644 --- a/openpype/tools/push_to_project/control_integrate.py +++ b/openpype/tools/push_to_project/control_integrate.py @@ -1051,6 +1051,11 @@ class ProjectPushItemProcess: repre_format_data["ext"] = ext[1:] break + # Re-use 'output' from source representation + repre_output_name = repre_doc["context"].get("output") + if repre_output_name is not None: + repre_format_data["output"] = repre_output_name + template_obj = anatomy.templates_obj[template_name]["folder"] folder_path = template_obj.format_strict(formatting_data) repre_context = folder_path.used_values diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py index 018088e916..50d50f467a 100644 --- a/openpype/tools/utils/__init__.py +++ b/openpype/tools/utils/__init__.py @@ -20,7 +20,10 @@ from .widgets import ( RefreshButton, GoToCurrentButton, ) -from .views import DeselectableTreeView +from .views import ( + DeselectableTreeView, + TreeView, +) from .error_dialog import ErrorMessageBox from .lib import ( WrappedCallbackItem, @@ -43,6 +46,7 @@ from .overlay_messages import ( MessageOverlayObject, ) from .multiselection_combobox import MultiSelectionComboBox +from .thumbnail_paint_widget import ThumbnailPainterWidget __all__ = ( @@ -70,6 +74,7 @@ __all__ = ( "GoToCurrentButton", "DeselectableTreeView", + "TreeView", "ErrorMessageBox", @@ -90,4 +95,6 @@ __all__ = ( "MessageOverlayObject", "MultiSelectionComboBox", + + "ThumbnailPainterWidget", ) diff --git a/openpype/tools/utils/delegates.py b/openpype/tools/utils/delegates.py index c71c87f9b0..c51323e556 100644 --- a/openpype/tools/utils/delegates.py +++ b/openpype/tools/utils/delegates.py @@ -24,9 +24,12 @@ class VersionDelegate(QtWidgets.QStyledItemDelegate): lock = False def __init__(self, dbcon, *args, **kwargs): - self.dbcon = dbcon + self._dbcon = dbcon super(VersionDelegate, self).__init__(*args, **kwargs) + def get_project_name(self): + return self._dbcon.active_project() + def displayText(self, value, locale): if isinstance(value, HeroVersionType): return lib.format_version(value, True) @@ -120,7 +123,7 @@ class VersionDelegate(QtWidgets.QStyledItemDelegate): "Version is not integer" ) - project_name = self.dbcon.active_project() + project_name = self.get_project_name() # Add all available versions to the editor parent_id = item["version_document"]["parent"] version_docs = [ diff --git a/openpype/tools/utils/host_tools.py b/openpype/tools/utils/host_tools.py index 2ebc973a47..cc20774349 100644 --- a/openpype/tools/utils/host_tools.py +++ b/openpype/tools/utils/host_tools.py @@ -86,12 +86,22 @@ class HostToolsHelper: def get_loader_tool(self, parent): """Create, cache and return loader tool window.""" if self._loader_tool 
is None: - from openpype.tools.loader import LoaderWindow - host = registered_host() ILoadHost.validate_load_methods(host) + if AYON_SERVER_ENABLED: + from openpype.tools.ayon_loader.ui import LoaderWindow + from openpype.tools.ayon_loader import LoaderController - loader_window = LoaderWindow(parent=parent or self._parent) + controller = LoaderController(host=host) + loader_window = LoaderWindow( + controller=controller, + parent=parent or self._parent + ) + + else: + from openpype.tools.loader import LoaderWindow + + loader_window = LoaderWindow(parent=parent or self._parent) self._loader_tool = loader_window return self._loader_tool @@ -109,7 +119,7 @@ class HostToolsHelper: if use_context is None: use_context = False - if use_context: + if not AYON_SERVER_ENABLED and use_context: context = {"asset": get_current_asset_name()} loader_tool.set_context(context, refresh=True) else: @@ -161,14 +171,23 @@ class HostToolsHelper: def get_scene_inventory_tool(self, parent): """Create, cache and return scene inventory tool window.""" if self._scene_inventory_tool is None: - from openpype.tools.sceneinventory import SceneInventoryWindow - host = registered_host() ILoadHost.validate_load_methods(host) - scene_inventory_window = SceneInventoryWindow( - parent=parent or self._parent - ) + if AYON_SERVER_ENABLED: + from openpype.tools.ayon_sceneinventory.window import ( + SceneInventoryWindow) + + scene_inventory_window = SceneInventoryWindow( + parent=parent or self._parent + ) + + else: + from openpype.tools.sceneinventory import SceneInventoryWindow + + scene_inventory_window = SceneInventoryWindow( + parent=parent or self._parent + ) self._scene_inventory_tool = scene_inventory_window return self._scene_inventory_tool @@ -187,6 +206,9 @@ class HostToolsHelper: def get_library_loader_tool(self, parent): """Create, cache and return library loader tool window.""" + if AYON_SERVER_ENABLED: + return self.get_loader_tool(parent) + if self._library_loader_tool is None: from openpype.tools.libraryloader import LibraryLoaderWindow @@ -199,6 +221,9 @@ class HostToolsHelper: def show_library_loader(self, parent=None): """Loader tool for loading representations from library project.""" + if AYON_SERVER_ENABLED: + return self.show_loader(parent) + with qt_app_context(): library_loader_tool = self.get_library_loader_tool(parent) library_loader_tool.show() @@ -271,7 +296,8 @@ class HostToolsHelper: ILoadHost.validate_load_methods(host) publisher_window = PublisherWindow( - controller=controller, parent=parent or self._parent + controller=controller, + parent=parent or self._parent ) self._publisher_tool = publisher_window diff --git a/openpype/tools/utils/images/__init__.py b/openpype/tools/utils/images/__init__.py new file mode 100644 index 0000000000..3f437fcc8c --- /dev/null +++ b/openpype/tools/utils/images/__init__.py @@ -0,0 +1,56 @@ +import os +from qtpy import QtGui + +IMAGES_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__))) + + +def get_image_path(filename): + """Get image path from './images'. + + Returns: + Union[str, None]: Path to image file or None if not found. + """ + + path = os.path.join(IMAGES_DIR, filename) + if os.path.exists(path): + return path + return None + + +def get_image(filename): + """Load image from './images' as QImage. + + Returns: + Union[QtGui.QImage, None]: QImage or None if not found. + """ + + path = get_image_path(filename) + if path: + return QtGui.QImage(path) + return None + + +def get_pixmap(filename): + """Load image from './images' as QPixmap. 
+ + Returns: + Union[QtGui.QPixmap, None]: QPixmap or None if not found. + """ + + path = get_image_path(filename) + if path: + return QtGui.QPixmap(path) + return None + + +def get_icon(filename): + """Load image from './images' as QIcon. + + Returns: + Union[QtGui.QIcon, None]: QIcon or None if not found. + """ + + pix = get_pixmap(filename) + if pix: + return QtGui.QIcon(pix) + return None diff --git a/openpype/tools/utils/images/thumbnail.png b/openpype/tools/utils/images/thumbnail.png new file mode 100644 index 0000000000..adea862e5b Binary files /dev/null and b/openpype/tools/utils/images/thumbnail.png differ diff --git a/openpype/tools/utils/thumbnail_paint_widget.py b/openpype/tools/utils/thumbnail_paint_widget.py new file mode 100644 index 0000000000..130942aaf0 --- /dev/null +++ b/openpype/tools/utils/thumbnail_paint_widget.py @@ -0,0 +1,366 @@ +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.style import get_objected_colors + +from .lib import paint_image_with_color +from .images import get_image + + +class ThumbnailPainterWidget(QtWidgets.QWidget): + """Widget for painting of thumbnails. + + The widget use is to paint thumbnail or multiple thumbnails in a defined + area. Is not meant to show them in a grid but in overlay. + + It is expected that there is a logic that will provide thumbnails to + paint and set them using 'set_current_thumbnails' or + 'set_current_thumbnail_paths'. + """ + + width_ratio = 3.0 + height_ratio = 2.0 + border_width = 1 + max_thumbnails = 3 + offset_sep = 4 + checker_boxes_count = 20 + + def __init__(self, parent): + super(ThumbnailPainterWidget, self).__init__(parent) + + border_color = get_objected_colors("bg-buttons").get_qcolor() + thumbnail_bg_color = get_objected_colors("bg-view").get_qcolor() + + default_image = get_image("thumbnail.png") + default_pix = paint_image_with_color(default_image, border_color) + + self._border_color = border_color + self._thumbnail_bg_color = thumbnail_bg_color + self._default_pix = default_pix + + self._cached_pix = None + self._current_pixes = None + self._has_pixes = False + + self._bg_color = QtCore.Qt.transparent + self._use_checker = True + self._checker_color_1 = QtGui.QColor(89, 89, 89) + self._checker_color_2 = QtGui.QColor(188, 187, 187) + + def set_background_color(self, color): + self._bg_color = color + self.repaint() + + def set_use_checkboard(self, use_checker): + if self._use_checker is use_checker: + return + self._use_checker = use_checker + self.repaint() + + def set_checker_colors(self, color_1, color_2): + self._checker_color_1 = color_1 + self._checker_color_2 = color_2 + self.repaint() + + def set_border_color(self, color): + """Change border color. + + Args: + color (QtGui.QColor): Color to set. + """ + + self._border_color = color + self._default_pix = None + self.clear_cache() + + def set_thumbnail_bg_color(self, color): + """Change background color. + + Args: + color (QtGui.QColor): Color to set. + """ + + self._thumbnail_bg_color = color + self.clear_cache() + + @property + def has_pixes(self): + """Has set thumbnails. + + Returns: + bool: True if widget has thumbnails to paint. + """ + + return self._has_pixes + + def clear_cache(self): + """Clear cache of resized thumbnails and repaint widget.""" + + self._cached_pix = None + self.repaint() + + def set_current_thumbnails(self, pixmaps=None): + """Set current thumbnails. + + Args: + pixmaps (Optional[List[QtGui.QPixmap]]): List of pixmaps. 
+ """ + + self._current_pixes = pixmaps or None + self._has_pixes = self._current_pixes is not None + self.clear_cache() + + def set_current_thumbnail_paths(self, thumbnail_paths=None): + """Set current thumbnails. + + Set current thumbnails using paths to a files. + + Args: + thumbnail_paths (Optional[List[str]]): List of paths to thumbnail + sources. + """ + + pixes = [] + if thumbnail_paths: + for thumbnail_path in thumbnail_paths: + pixes.append(QtGui.QPixmap(thumbnail_path)) + + self.set_current_thumbnails(pixes) + + def paintEvent(self, event): + if self._cached_pix is None: + self._cache_pix() + + painter = QtGui.QPainter() + painter.begin(self) + painter.setRenderHint(QtGui.QPainter.Antialiasing) + painter.drawPixmap(0, 0, self._cached_pix) + painter.end() + + def resizeEvent(self, event): + self._cached_pix = None + super(ThumbnailPainterWidget, self).resizeEvent(event) + + def _get_default_pix(self): + if self._default_pix is None: + default_image = get_image("thumbnail") + default_pix = paint_image_with_color( + default_image, self._border_color) + self._default_pix = default_pix + return self._default_pix + + def _paint_tile(self, width, height): + if not self._use_checker: + tile_pix = QtGui.QPixmap(width, width) + tile_pix.fill(self._bg_color) + return tile_pix + + checker_size = int(float(width) / self.checker_boxes_count) + if checker_size < 1: + checker_size = 1 + + checker_pix = QtGui.QPixmap(checker_size * 2, checker_size * 2) + checker_pix.fill(QtCore.Qt.transparent) + checker_painter = QtGui.QPainter() + checker_painter.begin(checker_pix) + checker_painter.setPen(QtCore.Qt.NoPen) + checker_painter.setBrush(self._checker_color_1) + checker_painter.drawRect( + 0, 0, checker_pix.width(), checker_pix.height() + ) + checker_painter.setBrush(self._checker_color_2) + checker_painter.drawRect( + 0, 0, checker_size, checker_size + ) + checker_painter.drawRect( + checker_size, checker_size, checker_size, checker_size + ) + checker_painter.end() + return checker_pix + + def _paint_default_pix(self, pix_width, pix_height): + full_border_width = 2 * self.border_width + width = pix_width - full_border_width + height = pix_height - full_border_width + if width > 100: + width = int(width * 0.6) + height = int(height * 0.6) + + scaled_pix = self._get_default_pix().scaled( + width, + height, + QtCore.Qt.KeepAspectRatio, + QtCore.Qt.SmoothTransformation + ) + pos_x = int( + (pix_width - scaled_pix.width()) / 2 + ) + pos_y = int( + (pix_height - scaled_pix.height()) / 2 + ) + new_pix = QtGui.QPixmap(pix_width, pix_height) + new_pix.fill(QtCore.Qt.transparent) + pix_painter = QtGui.QPainter() + pix_painter.begin(new_pix) + render_hints = ( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + if hasattr(QtGui.QPainter, "HighQualityAntialiasing"): + render_hints |= QtGui.QPainter.HighQualityAntialiasing + + pix_painter.setRenderHints(render_hints) + pix_painter.drawPixmap(pos_x, pos_y, scaled_pix) + pix_painter.end() + return new_pix + + def _draw_thumbnails(self, thumbnails, pix_width, pix_height): + full_border_width = 2 * self.border_width + + checker_pix = self._paint_tile(pix_width, pix_height) + + backgrounded_images = [] + for src_pix in thumbnails: + scaled_pix = src_pix.scaled( + pix_width - full_border_width, + pix_height - full_border_width, + QtCore.Qt.KeepAspectRatio, + QtCore.Qt.SmoothTransformation + ) + pos_x = int( + (pix_width - scaled_pix.width()) / 2 + ) + pos_y = int( + (pix_height - scaled_pix.height()) / 2 + ) + + new_pix = 
QtGui.QPixmap(pix_width, pix_height) + new_pix.fill(QtCore.Qt.transparent) + pix_painter = QtGui.QPainter() + pix_painter.begin(new_pix) + render_hints = ( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + if hasattr(QtGui.QPainter, "HighQualityAntialiasing"): + render_hints |= QtGui.QPainter.HighQualityAntialiasing + pix_painter.setRenderHints(render_hints) + + tiled_rect = QtCore.QRectF( + pos_x, pos_y, scaled_pix.width(), scaled_pix.height() + ) + pix_painter.drawTiledPixmap( + tiled_rect, + checker_pix, + QtCore.QPointF(0.0, 0.0) + ) + pix_painter.drawPixmap(pos_x, pos_y, scaled_pix) + pix_painter.end() + backgrounded_images.append(new_pix) + return backgrounded_images + + def _paint_dash_line(self, painter, rect): + pen = QtGui.QPen() + pen.setWidth(1) + pen.setBrush(QtCore.Qt.darkGray) + pen.setStyle(QtCore.Qt.DashLine) + + new_rect = rect.adjusted(1, 1, -1, -1) + painter.setPen(pen) + painter.setBrush(QtCore.Qt.transparent) + # painter.drawRect(rect) + painter.drawRect(new_rect) + + def _cache_pix(self): + rect = self.rect() + rect_width = rect.width() + rect_height = rect.height() + + pix_x_offset = 0 + pix_y_offset = 0 + expected_height = int( + (rect_width / self.width_ratio) * self.height_ratio + ) + if expected_height > rect_height: + expected_height = rect_height + expected_width = int( + (rect_height / self.height_ratio) * self.width_ratio + ) + pix_x_offset = (rect_width - expected_width) / 2 + else: + expected_width = rect_width + pix_y_offset = (rect_height - expected_height) / 2 + + if self._current_pixes is None: + used_default_pix = True + pixes_to_draw = None + pixes_len = 1 + else: + used_default_pix = False + pixes_to_draw = self._current_pixes + if len(pixes_to_draw) > self.max_thumbnails: + pixes_to_draw = pixes_to_draw[:-self.max_thumbnails] + pixes_len = len(pixes_to_draw) + + width_offset, height_offset = self._get_pix_offset_size( + expected_width, expected_height, pixes_len + ) + pix_width = expected_width - width_offset + pix_height = expected_height - height_offset + + if used_default_pix: + thumbnail_images = [self._paint_default_pix(pix_width, pix_height)] + else: + thumbnail_images = self._draw_thumbnails( + pixes_to_draw, pix_width, pix_height + ) + + if pixes_len == 1: + width_offset_part = 0 + height_offset_part = 0 + else: + width_offset_part = int(float(width_offset) / (pixes_len - 1)) + height_offset_part = int(float(height_offset) / (pixes_len - 1)) + full_width_offset = width_offset + pix_x_offset + + final_pix = QtGui.QPixmap(rect_width, rect_height) + final_pix.fill(QtCore.Qt.transparent) + + bg_pen = QtGui.QPen() + bg_pen.setWidth(self.border_width) + bg_pen.setColor(self._border_color) + + final_painter = QtGui.QPainter() + final_painter.begin(final_pix) + render_hints = ( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + if hasattr(QtGui.QPainter, "HighQualityAntialiasing"): + render_hints |= QtGui.QPainter.HighQualityAntialiasing + + final_painter.setRenderHints(render_hints) + + final_painter.setBrush(QtGui.QBrush(self._thumbnail_bg_color)) + final_painter.setPen(bg_pen) + final_painter.drawRect(rect) + + for idx, pix in enumerate(thumbnail_images): + x_offset = full_width_offset - (width_offset_part * idx) + y_offset = (height_offset_part * idx) + pix_y_offset + final_painter.drawPixmap(x_offset, y_offset, pix) + + # Draw drop enabled dashes + if used_default_pix: + self._paint_dash_line(final_painter, rect) + + final_painter.end() + + self._cached_pix = final_pix + + def 
_get_pix_offset_size(self, width, height, image_count): + if image_count == 1: + return 0, 0 + + part_width = width / self.offset_sep + part_height = height / self.offset_sep + return part_width, part_height diff --git a/openpype/tools/utils/views.py b/openpype/tools/utils/views.py index 01919d6745..596a47ede9 100644 --- a/openpype/tools/utils/views.py +++ b/openpype/tools/utils/views.py @@ -1,4 +1,6 @@ from openpype.resources import get_image_path +from openpype.tools.flickcharm import FlickCharm + from qtpy import QtWidgets, QtCore, QtGui, QtSvg @@ -57,3 +59,63 @@ class TreeViewSpinner(QtWidgets.QTreeView): self.paint_empty(event) else: super(TreeViewSpinner, self).paintEvent(event) + + +class TreeView(QtWidgets.QTreeView): + """Ultimate TreeView with flick charm and double click signals. + + Tree view have deselectable mode, which allows to deselect items by + clicking on item area without any items. + + Todos: + Add refresh animation. + """ + + double_clicked = QtCore.Signal(QtGui.QMouseEvent) + + def __init__(self, *args, **kwargs): + super(TreeView, self).__init__(*args, **kwargs) + self._deselectable = False + + self._flick_charm_activated = False + self._flick_charm = FlickCharm(parent=self) + self._before_flick_scroll_mode = None + + def is_deselectable(self): + return self._deselectable + + def set_deselectable(self, deselectable): + self._deselectable = deselectable + + deselectable = property(is_deselectable, set_deselectable) + + def mousePressEvent(self, event): + if self._deselectable: + index = self.indexAt(event.pos()) + if not index.isValid(): + # clear the selection + self.clearSelection() + # clear the current index + self.setCurrentIndex(QtCore.QModelIndex()) + super(TreeView, self).mousePressEvent(event) + + def mouseDoubleClickEvent(self, event): + self.double_clicked.emit(event) + + return super(TreeView, self).mouseDoubleClickEvent(event) + + def activate_flick_charm(self): + if self._flick_charm_activated: + return + self._flick_charm_activated = True + self._before_flick_scroll_mode = self.verticalScrollMode() + self._flick_charm.activateOn(self) + self.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) + + def deactivate_flick_charm(self): + if not self._flick_charm_activated: + return + self._flick_charm_activated = False + self._flick_charm.deactivateFrom(self) + if self._before_flick_scroll_mode is not None: + self.setVerticalScrollMode(self._before_flick_scroll_mode) diff --git a/openpype/vendor/python/common/ayon_api/_api.py b/openpype/vendor/python/common/ayon_api/_api.py index 22e137d6e5..9f89d3d59e 100644 --- a/openpype/vendor/python/common/ayon_api/_api.py +++ b/openpype/vendor/python/common/ayon_api/_api.py @@ -602,12 +602,12 @@ def delete_installer(*args, **kwargs): def download_installer(*args, **kwargs): con = get_server_api_connection() - con.download_installer(*args, **kwargs) + return con.download_installer(*args, **kwargs) def upload_installer(*args, **kwargs): con = get_server_api_connection() - con.upload_installer(*args, **kwargs) + return con.upload_installer(*args, **kwargs) # Dependency packages @@ -753,12 +753,12 @@ def get_secrets(*args, **kwargs): def get_secret(*args, **kwargs): con = get_server_api_connection() - return con.delete_secret(*args, **kwargs) + return con.get_secret(*args, **kwargs) def save_secret(*args, **kwargs): con = get_server_api_connection() - return con.delete_secret(*args, **kwargs) + return con.save_secret(*args, **kwargs) def delete_secret(*args, **kwargs): @@ -978,12 +978,14 @@ def 
delete_project(project_name): def get_thumbnail_by_id(project_name, thumbnail_id): con = get_server_api_connection() - con.get_thumbnail_by_id(project_name, thumbnail_id) + return con.get_thumbnail_by_id(project_name, thumbnail_id) def get_thumbnail(project_name, entity_type, entity_id, thumbnail_id=None): con = get_server_api_connection() - con.get_thumbnail(project_name, entity_type, entity_id, thumbnail_id) + return con.get_thumbnail( + project_name, entity_type, entity_id, thumbnail_id + ) def get_folder_thumbnail(project_name, folder_id, thumbnail_id=None): diff --git a/openpype/vendor/python/common/ayon_api/graphql_queries.py b/openpype/vendor/python/common/ayon_api/graphql_queries.py index 2435fc8a17..cedb3ed2ac 100644 --- a/openpype/vendor/python/common/ayon_api/graphql_queries.py +++ b/openpype/vendor/python/common/ayon_api/graphql_queries.py @@ -144,6 +144,7 @@ def product_types_query(fields): query_queue.append((k, v, field)) return query + def project_product_types_query(fields): query = GraphQlQuery("ProjectProductTypes") project_query = query.add_field("project") @@ -175,6 +176,8 @@ def folders_graphql_query(fields): parent_folder_ids_var = query.add_variable("parentFolderIds", "[String!]") folder_paths_var = query.add_variable("folderPaths", "[String!]") folder_names_var = query.add_variable("folderNames", "[String!]") + folder_types_var = query.add_variable("folderTypes", "[String!]") + statuses_var = query.add_variable("folderStatuses", "[String!]") has_products_var = query.add_variable("folderHasProducts", "Boolean!") project_field = query.add_field("project") @@ -185,6 +188,8 @@ def folders_graphql_query(fields): folders_field.set_filter("parentIds", parent_folder_ids_var) folders_field.set_filter("names", folder_names_var) folders_field.set_filter("paths", folder_paths_var) + folders_field.set_filter("folderTypes", folder_types_var) + folders_field.set_filter("statuses", statuses_var) folders_field.set_filter("hasProducts", has_products_var) nested_fields = fields_to_dict(fields) diff --git a/openpype/vendor/python/common/ayon_api/server_api.py b/openpype/vendor/python/common/ayon_api/server_api.py index 511a239a83..3bac59c192 100644 --- a/openpype/vendor/python/common/ayon_api/server_api.py +++ b/openpype/vendor/python/common/ayon_api/server_api.py @@ -75,6 +75,7 @@ from .utils import ( TransferProgress, create_dependency_package_basename, ThumbnailContent, + get_default_timeout, ) PatternType = type(re.compile("")) @@ -351,7 +352,6 @@ class ServerAPI(object): timeout (Optional[float]): Timeout for requests. max_retries (Optional[int]): Number of retries for requests. """ - _default_timeout = 10.0 _default_max_retries = 3 def __init__( @@ -500,20 +500,13 @@ class ServerAPI(object): def get_default_timeout(cls): """Default value for requests timeout. - First looks for environment variable SERVER_TIMEOUT_ENV_KEY which - can affect timeout value. If not available then use class - attribute '_default_timeout'. + Utils function 'get_default_timeout' is used by default. Returns: float: Timeout value in seconds. """ - try: - return float(os.environ.get(SERVER_TIMEOUT_ENV_KEY)) - except (ValueError, TypeError): - pass - - return cls._default_timeout + return get_default_timeout() @classmethod def get_default_max_retries(cls): @@ -662,13 +655,10 @@ class ServerAPI(object): as default variant. Args: - variant (Literal['production', 'staging']): Settings variant name. + variant (str): Settings variant name. It is possible to use + 'production', 'staging' or name of dev bundle. 
""" - if variant not in ("production", "staging"): - raise ValueError(( - "Invalid variant name {}. Expected 'production' or 'staging'" - ).format(variant)) self._default_settings_variant = variant default_settings_variant = property( @@ -938,8 +928,8 @@ class ServerAPI(object): int(re_match.group("major")), int(re_match.group("minor")), int(re_match.group("patch")), - re_match.group("prerelease"), - re_match.group("buildmetadata") + re_match.group("prerelease") or "", + re_match.group("buildmetadata") or "", ) return self._server_version_tuple @@ -1140,31 +1130,41 @@ class ServerAPI(object): response = None new_response = None - for _ in range(max_retries): + for retry_idx in reversed(range(max_retries)): try: response = function(url, **kwargs) break except ConnectionRefusedError: + if retry_idx == 0: + self.log.warning( + "Connection error happened.", exc_info=True + ) + # Server may be restarting new_response = RestApiResponse( None, {"detail": "Unable to connect the server. Connection refused"} ) + except requests.exceptions.Timeout: # Connection timed out new_response = RestApiResponse( None, {"detail": "Connection timed out."} ) + except requests.exceptions.ConnectionError: - # Other connection error (ssl, etc) - does not make sense to - # try call server again + # Log warning only on last attempt + if retry_idx == 0: + self.log.warning( + "Connection error happened.", exc_info=True + ) + new_response = RestApiResponse( None, {"detail": "Unable to connect the server. Connection error"} ) - break time.sleep(0.1) @@ -1349,7 +1349,9 @@ class ServerAPI(object): status=None, description=None, summary=None, - payload=None + payload=None, + progress=None, + retries=None ): kwargs = { key: value @@ -1360,9 +1362,27 @@ class ServerAPI(object): ("description", description), ("summary", summary), ("payload", payload), + ("progress", progress), + ("retries", retries), ) if value is not None } + # 'progress' and 'retries' are available since 0.5.x server version + major, minor, _, _, _ = self.server_version_tuple + if (major, minor) < (0, 5): + args = [] + if progress is not None: + args.append("progress") + if retries is not None: + args.append("retries") + fields = ", ".join("'{}'".format(f) for f in args) + ending = "s" if len(args) > 1 else "" + raise ValueError(( + "Your server version '{}' does not support update" + " of {} field{} on event. The fields are supported since" + " server version '0.5'." + ).format(self.get_server_version(), fields, ending)) + response = self.patch( "events/{}".format(event_id), **kwargs @@ -1434,6 +1454,7 @@ class ServerAPI(object): description=None, sequential=None, events_filter=None, + max_retries=None, ): """Enroll job based on events. @@ -1475,8 +1496,12 @@ class ServerAPI(object): in target event. sequential (Optional[bool]): The source topic must be processed in sequence. - events_filter (Optional[ayon_server.sqlfilter.Filter]): A dict-like - with conditions to filter the source event. + events_filter (Optional[dict[str, Any]]): Filtering conditions + to filter the source event. For more technical specifications + look to server backed 'ayon_server.sqlfilter.Filter'. + TODO: Add example of filters. + max_retries (Optional[int]): How many times can be event retried. + Default value is based on server (3 at the time of this PR). 
Returns: Union[None, dict[str, Any]]: None if there is no event matching @@ -1487,6 +1512,7 @@ class ServerAPI(object): "sourceTopic": source_topic, "targetTopic": target_topic, "sender": sender, + "maxRetries": max_retries, } if sequential is not None: kwargs["sequential"] = sequential @@ -2236,6 +2262,34 @@ class ServerAPI(object): response.raise_for_status("Failed to create/update dependency") return response.data + def _get_dependency_package_route( + self, filename=None, platform_name=None + ): + major, minor, patch, _, _ = self.server_version_tuple + if (major, minor, patch) <= (0, 2, 0): + # Backwards compatibility for AYON server 0.2.0 and lower + self.log.warning(( + "Using deprecated dependency package route." + " Please update your AYON server to version 0.2.1 or higher." + " Backwards compatibility for this route will be removed" + " in future releases of ayon-python-api." + )) + if platform_name is None: + platform_name = platform.system().lower() + base = "dependencies" + if not filename: + return base + return "{}/{}/{}".format(base, filename, platform_name) + + if (major, minor) <= (0, 3): + endpoint = "desktop/dependency_packages" + else: + endpoint = "desktop/dependencyPackages" + + if filename: + return "{}/{}".format(endpoint, filename) + return endpoint + def get_dependency_packages(self): """Information about dependency packages on server. @@ -2263,33 +2317,11 @@ class ServerAPI(object): server. """ - endpoint = "desktop/dependencyPackages" - major, minor, _, _, _ = self.server_version_tuple - if major == 0 and minor <= 3: - endpoint = "desktop/dependency_packages" - + endpoint = self._get_dependency_package_route() result = self.get(endpoint) result.raise_for_status() return result.data - def _get_dependency_package_route( - self, filename=None, platform_name=None - ): - major, minor, patch, _, _ = self.server_version_tuple - if major == 0 and (minor > 2 or (minor == 2 and patch >= 1)): - base = "desktop/dependency_packages" - if not filename: - return base - return "{}/{}".format(base, filename) - - # Backwards compatibility for AYON server 0.2.0 and lower - if platform_name is None: - platform_name = platform.system().lower() - base = "dependencies" - if not filename: - return base - return "{}/{}/{}".format(base, filename, platform_name) - def create_dependency_package( self, filename, @@ -3515,7 +3547,9 @@ class ServerAPI(object): folder_ids=None, folder_paths=None, folder_names=None, + folder_types=None, parent_ids=None, + statuses=None, active=True, fields=None, own_attributes=False @@ -3536,8 +3570,12 @@ class ServerAPI(object): for filtering. folder_names (Optional[Iterable[str]]): Folder names used for filtering. + folder_types (Optional[Iterable[str]]): Folder types used + for filtering. parent_ids (Optional[Iterable[str]]): Ids of folder parents. Use 'None' if folder is direct child of project. + statuses (Optional[Iterable[str]]): Folder statuses used + for filtering. active (Optional[bool]): Filter active/inactive folders. Both are returned if is set to None. 
fields (Optional[Iterable[str]]): Fields to be queried for @@ -3574,6 +3612,18 @@ class ServerAPI(object): return filters["folderNames"] = list(folder_names) + if folder_types is not None: + folder_types = set(folder_types) + if not folder_types: + return + filters["folderTypes"] = list(folder_types) + + if statuses is not None: + statuses = set(statuses) + if not statuses: + return + filters["folderStatuses"] = list(statuses) + if parent_ids is not None: parent_ids = set(parent_ids) if not parent_ids: @@ -4312,9 +4362,6 @@ class ServerAPI(object): fields.remove("attrib") fields |= self.get_attributes_fields_for_type("version") - if active is not None: - fields.add("active") - # Make sure fields have minimum required fields fields |= {"id", "version"} @@ -4323,6 +4370,9 @@ class ServerAPI(object): use_rest = True fields = {"id"} + if active is not None: + fields.add("active") + if own_attributes: fields.add("ownAttrib") @@ -5845,19 +5895,22 @@ class ServerAPI(object): """Helper method to get links from server for entity types. Example output: - [ - { - "id": "59a212c0d2e211eda0e20242ac120002", - "linkType": "reference", - "description": "reference link between folders", - "projectName": "my_project", - "author": "frantadmin", - "entityId": "b1df109676db11ed8e8c6c9466b19aa8", - "entityType": "folder", - "direction": "out" - }, + { + "59a212c0d2e211eda0e20242ac120001": [ + { + "id": "59a212c0d2e211eda0e20242ac120002", + "linkType": "reference", + "description": "reference link between folders", + "projectName": "my_project", + "author": "frantadmin", + "entityId": "b1df109676db11ed8e8c6c9466b19aa8", + "entityType": "folder", + "direction": "out" + }, + ... + ], ... - ] + } Args: project_name (str): Project where links are. diff --git a/openpype/vendor/python/common/ayon_api/utils.py b/openpype/vendor/python/common/ayon_api/utils.py index 314d13faec..502d24f713 100644 --- a/openpype/vendor/python/common/ayon_api/utils.py +++ b/openpype/vendor/python/common/ayon_api/utils.py @@ -1,3 +1,4 @@ +import os import re import datetime import uuid @@ -15,6 +16,7 @@ except ImportError: import requests import unidecode +from .constants import SERVER_TIMEOUT_ENV_KEY from .exceptions import UrlError REMOVED_VALUE = object() @@ -27,6 +29,23 @@ RepresentationParents = collections.namedtuple( ) +def get_default_timeout(): + """Default value for requests timeout. + + First looks for environment variable SERVER_TIMEOUT_ENV_KEY which + can affect timeout value. If not available then use 10.0 s. + + Returns: + float: Timeout value in seconds. + """ + + try: + return float(os.environ.get(SERVER_TIMEOUT_ENV_KEY)) + except (ValueError, TypeError): + pass + return 10.0 + + class ThumbnailContent: """Wrapper for thumbnail content. @@ -231,30 +250,36 @@ def _try_parse_url(url): return None -def _try_connect_to_server(url): +def _try_connect_to_server(url, timeout=None): + if timeout is None: + timeout = get_default_timeout() try: # TODO add validation if the url lead to Ayon server - # - thiw won't validate if the url lead to 'google.com' - requests.get(url) + # - this won't validate if the url lead to 'google.com' + requests.get(url, timeout=timeout) except BaseException: return False return True -def login_to_server(url, username, password): +def login_to_server(url, username, password, timeout=None): """Use login to the server to receive token. Args: url (str): Server url. username (str): User's username. password (str): User's password. + timeout (Optional[float]): Timeout for request. 
Value from + 'get_default_timeout' is used if not specified. Returns: Union[str, None]: User's token if login was successfull. Otherwise 'None'. """ + if timeout is None: + timeout = get_default_timeout() headers = {"Content-Type": "application/json"} response = requests.post( "{}/api/auth/login".format(url), @@ -262,7 +287,8 @@ def login_to_server(url, username, password): json={ "name": username, "password": password - } + }, + timeout=timeout, ) token = None # 200 - success @@ -273,47 +299,67 @@ def login_to_server(url, username, password): return token -def logout_from_server(url, token): +def logout_from_server(url, token, timeout=None): """Logout from server and throw token away. Args: url (str): Url from which should be logged out. token (str): Token which should be used to log out. + timeout (Optional[float]): Timeout for request. Value from + 'get_default_timeout' is used if not specified. """ + if timeout is None: + timeout = get_default_timeout() headers = { "Content-Type": "application/json", "Authorization": "Bearer {}".format(token) } requests.post( url + "/api/auth/logout", - headers=headers + headers=headers, + timeout=timeout, ) -def is_token_valid(url, token): +def is_token_valid(url, token, timeout=None): """Check if token is valid. + Token can be a user token or service api key. + Args: url (str): Server url. token (str): User's token. + timeout (Optional[float]): Timeout for request. Value from + 'get_default_timeout' is used if not specified. Returns: bool: True if token is valid. """ - headers = { + if timeout is None: + timeout = get_default_timeout() + + base_headers = { "Content-Type": "application/json", - "Authorization": "Bearer {}".format(token) } - response = requests.get( - "{}/api/users/me".format(url), - headers=headers - ) - return response.status_code == 200 + for header_value in ( + {"Authorization": "Bearer {}".format(token)}, + {"X-Api-Key": token}, + ): + headers = base_headers.copy() + headers.update(header_value) + response = requests.get( + "{}/api/users/me".format(url), + headers=headers, + timeout=timeout, + ) + if response.status_code == 200: + return True + return False -def validate_url(url): +def validate_url(url, timeout=None): """Validate url if is valid and server is available. Validation checks if can be parsed as url and contains scheme. @@ -334,6 +380,7 @@ def validate_url(url): Args: url (str): Server url. + timeout (Optional[int]): Timeout in seconds for connection to server. Returns: Url which was used to connect to server. 
@@ -369,10 +416,10 @@ def validate_url(url): # - this will trigger UrlError if both will crash if not parsed_url.scheme: new_url = "https://" + modified_url - if _try_connect_to_server(new_url): + if _try_connect_to_server(new_url, timeout=timeout): return new_url - if _try_connect_to_server(modified_url): + if _try_connect_to_server(modified_url, timeout=timeout): return modified_url hints = [] diff --git a/openpype/vendor/python/common/ayon_api/version.py b/openpype/vendor/python/common/ayon_api/version.py index f3826a6407..ac4f32997f 100644 --- a/openpype/vendor/python/common/ayon_api/version.py +++ b/openpype/vendor/python/common/ayon_api/version.py @@ -1,2 +1,2 @@ """Package declaring Python API for Ayon server.""" -__version__ = "0.4.1" +__version__ = "0.5.1" diff --git a/openpype/version.py b/openpype/version.py index 399c1404b1..4c58a5098f 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.17.2-nightly.2" +__version__ = "3.17.4" diff --git a/pyproject.toml b/pyproject.toml index 2460185bdd..633dafece1 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.17.1" # OpenPype +version = "3.17.4" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json index e40b8d41f6..db7f86e357 100644 --- a/server_addon/applications/server/applications.json +++ b/server_addon/applications/server/applications.json @@ -7,6 +7,26 @@ "host_name": "maya", "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", "variants": [ + { + "name": "2024", + "label": "2024", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2024\"\n}", + "use_python_2": false + }, { "name": "2023", "label": "2023", @@ -45,18 +65,47 @@ "linux": [] }, "environment": "{\n \"MAYA_VERSION\": \"2022\"\n}", + "use_python_2": true + } + ] + }, + "mayapy": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": \"C\"\n}\n", + "variants": [ + { + "name": "2024", + "label": "2024", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2024\\bin\\mayapy.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2024/bin/mayapy" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2024\"\n}", "use_python_2": false }, { - "name": "2020", - "label": "2020", + "name": "2023", + "label": "2023", "executables": { "windows": [ - "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\mayapy.exe" ], "darwin": [], "linux": [ - "/usr/autodesk/maya2020/bin/maya" + "/usr/autodesk/maya2023/bin/mayapy" ] }, "arguments": { @@ -64,48 +113,8 @@ "darwin": [], "linux": [] }, - "environment": "{\n \"MAYA_VERSION\": \"2020\"\n}", - 
"use_python_2": true - }, - { - "name": "2019", - "label": "2019", - "executables": { - "windows": [ - "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" - ], - "darwin": [], - "linux": [ - "/usr/autodesk/maya2019/bin/maya" - ] - }, - "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": "{\n \"MAYA_VERSION\": \"2019\"\n}", - "use_python_2": true - }, - { - "name": "2018", - "label": "2018", - "executables": { - "windows": [ - "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" - ], - "darwin": [], - "linux": [ - "/usr/autodesk/maya2018/bin/maya" - ] - }, - "arguments": { - "windows": [], - "darwin": [], - "linux": [] - }, - "environment": "{\n \"MAYA_VERSION\": \"2018\"\n}", - "use_python_2": true + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false } ] }, diff --git a/server_addon/applications/server/settings.py b/server_addon/applications/server/settings.py index fd481b6ce8..be9a2ea07e 100644 --- a/server_addon/applications/server/settings.py +++ b/server_addon/applications/server/settings.py @@ -115,9 +115,7 @@ class ToolGroupModel(BaseSettingsModel): name: str = Field("", title="Name") label: str = Field("", title="Label") environment: str = Field("{}", title="Environments", widget="textarea") - variants: list[ToolVariantModel] = Field( - default_factory=ToolVariantModel - ) + variants: list[ToolVariantModel] = Field(default_factory=list) @validator("environment") def validate_json(cls, value): diff --git a/server_addon/applications/server/version.py b/server_addon/applications/server/version.py index 485f44ac21..b3f4756216 100644 --- a/server_addon/applications/server/version.py +++ b/server_addon/applications/server/version.py @@ -1 +1 @@ -__version__ = "0.1.1" +__version__ = "0.1.2" diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py index 5e047b7013..27dc0b232f 100644 --- a/server_addon/blender/server/settings/publish_plugins.py +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -103,7 +103,7 @@ class PublishPuginsModel(BaseSettingsModel): default_factory=ValidatePluginModel, title="Extract FBX" ) - ExtractABC: ValidatePluginModel = Field( + ExtractModelABC: ValidatePluginModel = Field( default_factory=ValidatePluginModel, title="Extract ABC" ) @@ -197,10 +197,10 @@ DEFAULT_BLENDER_PUBLISH_SETTINGS = { "optional": True, "active": False }, - "ExtractABC": { + "ExtractModelABC": { "enabled": True, "optional": True, - "active": False + "active": True }, "ExtractBlendAnimation": { "enabled": True, diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py index ca8f7e63ed..433d0ef2f0 100644 --- a/server_addon/core/server/settings/main.py +++ b/server_addon/core/server/settings/main.py @@ -12,6 +12,27 @@ from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_VALUES from .tools import GlobalToolsModel, DEFAULT_TOOLS_VALUES +class DiskMappingItemModel(BaseSettingsModel): + _layout = "expanded" + source: str = Field("", title="Source") + destination: str = Field("", title="Destination") + + +class DiskMappingModel(BaseSettingsModel): + windows: list[DiskMappingItemModel] = Field( + title="Windows", + default_factory=list, + ) + linux: list[DiskMappingItemModel] = Field( + title="Linux", + default_factory=list, + ) + darwin: list[DiskMappingItemModel] = Field( + title="MacOS", + default_factory=list, + ) + + class ImageIOFileRuleModel(BaseSettingsModel): name: str = Field("", title="Rule 
name") pattern: str = Field("", title="Regex pattern") @@ -97,6 +118,10 @@ class CoreSettings(BaseSettingsModel): widget="textarea", scope=["studio"], ) + disk_mapping: DiskMappingModel = Field( + default_factory=DiskMappingModel, + title="Disk mapping", + ) tools: GlobalToolsModel = Field( default_factory=GlobalToolsModel, title="Tools" diff --git a/server_addon/core/server/settings/tools.py b/server_addon/core/server/settings/tools.py index 7befc795e4..d7c7b367b7 100644 --- a/server_addon/core/server/settings/tools.py +++ b/server_addon/core/server/settings/tools.py @@ -487,6 +487,17 @@ DEFAULT_TOOLS_VALUES = { "task_types": [], "task_names": [], "template_name": "publish_online" + }, + { + "families": [ + "tycache" + ], + "hosts": [ + "max" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_tycache" } ], "hero_template_name_profiles": [ diff --git a/server_addon/core/server/version.py b/server_addon/core/server/version.py index b3f4756216..ae7362549b 100644 --- a/server_addon/core/server/version.py +++ b/server_addon/core/server/version.py @@ -1 +1 @@ -__version__ = "0.1.2" +__version__ = "0.1.3" diff --git a/server_addon/houdini/server/settings/general.py b/server_addon/houdini/server/settings/general.py new file mode 100644 index 0000000000..aee44f1648 --- /dev/null +++ b/server_addon/houdini/server/settings/general.py @@ -0,0 +1,50 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class HoudiniVarModel(BaseSettingsModel): + _layout = "expanded" + var: str = Field("", title="Var") + value: str = Field("", title="Value") + is_directory: bool = Field(False, title="Treat as directory") + + +class UpdateHoudiniVarcontextModel(BaseSettingsModel): + """Sync vars with context changes. + + If a value is treated as a directory on update + it will be ensured the folder exists. 
+ """ + + enabled: bool = Field(title="Enabled") + # TODO this was dynamic dictionary '{var: path}' + houdini_vars: list[HoudiniVarModel] = Field( + default_factory=list, + title="Houdini Vars" + ) + + +class GeneralSettingsModel(BaseSettingsModel): + add_self_publish_button: bool = Field( + False, + title="Add Self Publish Button" + ) + update_houdini_var_context: UpdateHoudiniVarcontextModel = Field( + default_factory=UpdateHoudiniVarcontextModel, + title="Update Houdini Vars on context change" + ) + + +DEFAULT_GENERAL_SETTINGS = { + "add_self_publish_button": False, + "update_houdini_var_context": { + "enabled": True, + "houdini_vars": [ + { + "var": "JOB", + "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}", # noqa + "is_directory": True + } + ] + } +} diff --git a/server_addon/houdini/server/settings/main.py b/server_addon/houdini/server/settings/main.py index fdb6838f5c..0c2e160c87 100644 --- a/server_addon/houdini/server/settings/main.py +++ b/server_addon/houdini/server/settings/main.py @@ -4,7 +4,10 @@ from ayon_server.settings import ( MultiplatformPathModel, MultiplatformPathListModel, ) - +from .general import ( + GeneralSettingsModel, + DEFAULT_GENERAL_SETTINGS +) from .imageio import HoudiniImageIOModel from .publish_plugins import ( PublishPluginsModel, @@ -52,6 +55,10 @@ class ShelvesModel(BaseSettingsModel): class HoudiniSettings(BaseSettingsModel): + general: GeneralSettingsModel = Field( + default_factory=GeneralSettingsModel, + title="General" + ) imageio: HoudiniImageIOModel = Field( default_factory=HoudiniImageIOModel, title="Color Management (ImageIO)" @@ -73,6 +80,7 @@ class HoudiniSettings(BaseSettingsModel): DEFAULT_VALUES = { + "general": DEFAULT_GENERAL_SETTINGS, "shelves": [], "create": DEFAULT_HOUDINI_CREATE_SETTINGS, "publish": DEFAULT_HOUDINI_PUBLISH_SETTINGS diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py index 58240b0205..a19fb4bca9 100644 --- a/server_addon/houdini/server/settings/publish_plugins.py +++ b/server_addon/houdini/server/settings/publish_plugins.py @@ -151,6 +151,17 @@ class ValidateWorkfilePathsModel(BaseSettingsModel): ) +class CollectRopFrameRangeModel(BaseSettingsModel): + """Collect Frame Range + + Disable this if you want the publisher to + ignore start and end handles specified in the + asset data for publish instances + """ + use_asset_handles: bool = Field( + title="Use asset handles") + + class BasicValidateModel(BaseSettingsModel): enabled: bool = Field(title="Enabled") optional: bool = Field(title="Optional") @@ -158,6 +169,11 @@ class BasicValidateModel(BaseSettingsModel): class PublishPluginsModel(BaseSettingsModel): + CollectRopFrameRange: CollectRopFrameRangeModel = Field( + default_factory=CollectRopFrameRangeModel, + title="Collect Rop Frame Range.", + section="Collectors" + ) ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field( default_factory=ValidateWorkfilePathsModel, title="Validate workfile paths settings.") @@ -179,6 +195,9 @@ class PublishPluginsModel(BaseSettingsModel): DEFAULT_HOUDINI_PUBLISH_SETTINGS = { + "CollectRopFrameRange": { + "use_asset_handles": True + }, "ValidateWorkfilePaths": { "enabled": True, "optional": True, diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py index ae7362549b..0a8da88258 100644 --- a/server_addon/houdini/server/version.py +++ b/server_addon/houdini/server/version.py @@ -1 +1 @@ -__version__ = "0.1.3" +__version__ = "0.1.6" diff 
--git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py index 11e2b8a36c..84e873589d 100644 --- a/server_addon/maya/server/settings/creators.py +++ b/server_addon/maya/server/settings/creators.py @@ -1,6 +1,7 @@ from pydantic import Field from ayon_server.settings import BaseSettingsModel +from ayon_server.settings import task_types_enum class CreateLookModel(BaseSettingsModel): @@ -120,6 +121,16 @@ class CreateVrayProxyModel(BaseSettingsModel): default_factory=list, title="Default Products") +class CreateMultishotLayout(BasicCreatorModel): + shotParent: str = Field(title="Shot Parent Folder") + groupLoadedAssets: bool = Field(title="Group Loaded Assets") + task_type: list[str] = Field( + title="Task types", + enum_resolver=task_types_enum + ) + task_name: str = Field(title="Task name (regex)") + + class CreatorsModel(BaseSettingsModel): CreateLook: CreateLookModel = Field( default_factory=CreateLookModel, diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py index 6c5baa3900..dd8d4a0a37 100644 --- a/server_addon/maya/server/settings/publishers.py +++ b/server_addon/maya/server/settings/publishers.py @@ -433,6 +433,10 @@ class PublishersModel(BaseSettingsModel): default_factory=ValidateRenderSettingsModel, title="Validate Render Settings" ) + ValidateResolution: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Resolution Setting" + ) ValidateCurrentRenderLayerIsRenderable: BasicValidateModel = Field( default_factory=BasicValidateModel, title="Validate Current Render Layer Has Renderable Camera" @@ -902,6 +906,11 @@ DEFAULT_PUBLISH_SETTINGS = { "redshift_render_attributes": [], "renderman_render_attributes": [] }, + "ValidateResolution": { + "enabled": True, + "optional": True, + "active": True + }, "ValidateCurrentRenderLayerIsRenderable": { "enabled": True, "optional": False, diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py index de699158fd..90ce344d3e 100644 --- a/server_addon/maya/server/version.py +++ b/server_addon/maya/server/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring addon version.""" -__version__ = "0.1.4" +__version__ = "0.1.5" diff --git a/server_addon/nuke/server/settings/imageio.py b/server_addon/nuke/server/settings/imageio.py index 811b12104b..15ccd4e89a 100644 --- a/server_addon/nuke/server/settings/imageio.py +++ b/server_addon/nuke/server/settings/imageio.py @@ -14,10 +14,30 @@ class NodesModel(BaseSettingsModel): default_factory=list, title="Used in plugins" ) - nukeNodeClass: str = Field( + nuke_node_class: str = Field( title="Nuke Node Class", ) + +class RequiredNodesModel(NodesModel): + knobs: list[KnobModel] = Field( + default_factory=list, + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class OverrideNodesModel(NodesModel): + subsets: list[str] = Field( + default_factory=list, + title="Subsets" + ) + knobs: list[KnobModel] = Field( default_factory=list, title="Knobs", @@ -31,13 +51,11 @@ class NodesModel(BaseSettingsModel): class NodesSetting(BaseSettingsModel): - # TODO: rename `requiredNodes` to `required_nodes` - requiredNodes: list[NodesModel] = Field( + required_nodes: list[RequiredNodesModel] = Field( title="Plugin required", default_factory=list ) - # TODO: rename `overrideNodes` to `override_nodes` - 
overrideNodes: list[NodesModel] = Field( + override_nodes: list[OverrideNodesModel] = Field( title="Plugin's node overrides", default_factory=list ) @@ -46,38 +64,40 @@ class NodesSetting(BaseSettingsModel): def ocio_configs_switcher_enum(): return [ {"value": "nuke-default", "label": "nuke-default"}, - {"value": "spi-vfx", "label": "spi-vfx"}, - {"value": "spi-anim", "label": "spi-anim"}, - {"value": "aces_0.1.1", "label": "aces_0.1.1"}, - {"value": "aces_0.7.1", "label": "aces_0.7.1"}, - {"value": "aces_1.0.1", "label": "aces_1.0.1"}, - {"value": "aces_1.0.3", "label": "aces_1.0.3"}, - {"value": "aces_1.1", "label": "aces_1.1"}, - {"value": "aces_1.2", "label": "aces_1.2"}, - {"value": "aces_1.3", "label": "aces_1.3"}, - {"value": "custom", "label": "custom"} + {"value": "spi-vfx", "label": "spi-vfx (11)"}, + {"value": "spi-anim", "label": "spi-anim (11)"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1 (11)"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1 (11)"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1 (11)"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3 (11, 12)"}, + {"value": "aces_1.1", "label": "aces_1.1 (12, 13)"}, + {"value": "aces_1.2", "label": "aces_1.2 (13, 14)"}, + {"value": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1", + "label": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)"}, + {"value": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1", + "label": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)"}, ] class WorkfileColorspaceSettings(BaseSettingsModel): """Nuke workfile colorspace preset. """ - colorManagement: Literal["Nuke", "OCIO"] = Field( - title="Color Management" + color_management: Literal["Nuke", "OCIO"] = Field( + title="Color Management Workflow" ) - OCIO_config: str = Field( - title="OpenColorIO Config", - description="Switch between OCIO configs", + native_ocio_config: str = Field( + title="Native OpenColorIO Config", + description="Switch between native OCIO configs", enum_resolver=ocio_configs_switcher_enum, conditionalEnum=True ) - workingSpaceLUT: str = Field( + working_space: str = Field( title="Working Space" ) - monitorLut: str = Field( - title="Monitor" + thumbnail_space: str = Field( + title="Thumbnail Space" ) @@ -182,12 +202,10 @@ class ImageIOSettings(BaseSettingsModel): title="Nodes" ) """# TODO: enhance settings with host api: - - [ ] old settings are using `regexInputs` key but we - need to rename to `regex_inputs` - [ ] no need for `inputs` middle part. 
It can stay directly on `regex_inputs` """ - regexInputs: RegexInputsModel = Field( + regex_inputs: RegexInputsModel = Field( default_factory=RegexInputsModel, title="Assign colorspace to read nodes via rules" ) @@ -201,18 +219,18 @@ DEFAULT_IMAGEIO_SETTINGS = { "viewerProcess": "rec709" }, "workfile": { - "colorManagement": "Nuke", - "OCIO_config": "nuke-default", - "workingSpaceLUT": "linear", - "monitorLut": "sRGB", + "color_management": "Nuke", + "native_ocio_config": "nuke-default", + "working_space": "linear", + "thumbnail_space": "sRGB", }, "nodes": { - "requiredNodes": [ + "required_nodes": [ { "plugins": [ "CreateWriteRender" ], - "nukeNodeClass": "Write", + "nuke_node_class": "Write", "knobs": [ { "type": "text", @@ -264,7 +282,7 @@ DEFAULT_IMAGEIO_SETTINGS = { "plugins": [ "CreateWritePrerender" ], - "nukeNodeClass": "Write", + "nuke_node_class": "Write", "knobs": [ { "type": "text", @@ -316,7 +334,7 @@ DEFAULT_IMAGEIO_SETTINGS = { "plugins": [ "CreateWriteImage" ], - "nukeNodeClass": "Write", + "nuke_node_class": "Write", "knobs": [ { "type": "text", @@ -360,9 +378,9 @@ DEFAULT_IMAGEIO_SETTINGS = { ] } ], - "overrideNodes": [] + "override_nodes": [] }, - "regexInputs": { + "regex_inputs": { "inputs": [ { "regex": "(beauty).*(?=.exr)", diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py index 19206149b6..81663fa5aa 100644 --- a/server_addon/nuke/server/settings/publish_plugins.py +++ b/server_addon/nuke/server/settings/publish_plugins.py @@ -155,8 +155,10 @@ class IntermediateOutputModel(BaseSettingsModel): title="Filter", default_factory=BakingStreamFilterModel) read_raw: bool = Field(title="Read raw switch") viewer_process_override: str = Field(title="Viewer process override") - bake_viewer_process: bool = Field(title="Bake view process") - bake_viewer_input_process: bool = Field(title="Bake viewer input process") + bake_viewer_process: bool = Field(title="Bake viewer process") + bake_viewer_input_process: bool = Field( + title="Bake viewer input process node (LUT)" + ) reformat_nodes_config: ReformatNodesConfigModel = Field( default_factory=ReformatNodesConfigModel, title="Reformat Nodes") @@ -236,7 +238,7 @@ class PublishPuginsModel(BaseSettingsModel): default_factory=CollectInstanceDataModel, section="Collectors" ) - ValidateCorrectAssetName: OptionalPluginModel = Field( + ValidateCorrectAssetContext: OptionalPluginModel = Field( title="Validate Correct Folder Name", default_factory=OptionalPluginModel, section="Validators" @@ -261,8 +263,8 @@ class PublishPuginsModel(BaseSettingsModel): title="Validate Backdrop", default_factory=OptionalPluginModel ) - ValidateScript: OptionalPluginModel = Field( - title="Validate Script", + ValidateScriptAttributes: OptionalPluginModel = Field( + title="Validate workfile attributes", default_factory=OptionalPluginModel ) ExtractThumbnail: ExtractThumbnailModel = Field( @@ -308,7 +310,7 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "write" ] }, - "ValidateCorrectAssetName": { + "ValidateCorrectAssetContext": { "enabled": True, "optional": True, "active": True @@ -343,7 +345,7 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "optional": True, "active": True }, - "ValidateScript": { + "ValidateScriptAttributes": { "enabled": True, "optional": True, "active": True @@ -407,12 +409,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "text": "Lanczos6" }, { - "type": "bool", + "type": "boolean", "name": "black_outside", "boolean": True }, { - "type": "bool", + "type": "boolean", "name": "pbb", "boolean": 
False } @@ -427,7 +429,7 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "enabled": False }, "ExtractReviewDataMov": { - "enabled": True, + "enabled": False, "viewer_lut_raw": False, "outputs": [ { @@ -463,12 +465,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "text": "Lanczos6" }, { - "type": "bool", + "type": "boolean", "name": "black_outside", "boolean": True }, { - "type": "bool", + "type": "boolean", "name": "pbb", "boolean": False } @@ -518,12 +520,12 @@ DEFAULT_PUBLISH_PLUGIN_SETTINGS = { "text": "Lanczos6" }, { - "type": "bool", + "type": "boolean", "name": "black_outside", "boolean": True }, { - "type": "bool", + "type": "boolean", "name": "pbb", "boolean": False } diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py index ae7362549b..bbab0242f6 100644 --- a/server_addon/nuke/server/version.py +++ b/server_addon/nuke/server/version.py @@ -1 +1 @@ -__version__ = "0.1.3" +__version__ = "0.1.4" diff --git a/server_addon/openpype/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml index 6d5ac92ca7..40da8f6716 100644 --- a/server_addon/openpype/client/pyproject.toml +++ b/server_addon/openpype/client/pyproject.toml @@ -8,7 +8,6 @@ aiohttp_json_rpc = "*" # TVPaint server aiohttp-middlewares = "^2.0.0" wsrpc_aiohttp = "^3.1.1" # websocket server clique = "1.6.*" -shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"} gazu = "^0.9.3" google-api-python-client = "^1.12.8" # sync server google support (should be separate?) jsonschema = "^2.6.0" diff --git a/tests/integration/hosts/maya/input/startup/userSetup.py b/tests/integration/hosts/maya/input/startup/userSetup.py new file mode 100644 index 0000000000..eb6e2411b5 --- /dev/null +++ b/tests/integration/hosts/maya/input/startup/userSetup.py @@ -0,0 +1,28 @@ +import logging +import sys + +from maya import cmds + +import pyblish.util + + +def setup_pyblish_logging(): + log = logging.getLogger("pyblish") + handler = logging.StreamHandler(sys.stdout) + formatter = logging.Formatter( + "pyblish (%(levelname)s) (line: %(lineno)d) %(name)s:" + "\n%(message)s" + ) + handler.setFormatter(formatter) + log.addHandler(handler) + + +def _run_publish_test_deferred(): + try: + setup_pyblish_logging() + pyblish.util.publish() + finally: + cmds.quit(force=True) + + +cmds.evalDeferred("_run_publish_test_deferred()", lowestPriority=True) diff --git a/tests/integration/hosts/maya/lib.py b/tests/integration/hosts/maya/lib.py index e7480e25fa..f27d516605 100644 --- a/tests/integration/hosts/maya/lib.py +++ b/tests/integration/hosts/maya/lib.py @@ -33,16 +33,16 @@ class MayaHostFixtures(HostFixtures): yield dest_path @pytest.fixture(scope="module") - def startup_scripts(self, monkeypatch_session, download_test_data): + def startup_scripts(self, monkeypatch_session): """Points Maya to userSetup file from input data""" - startup_path = os.path.join(download_test_data, - "input", - "startup") + startup_path = os.path.join( + os.path.dirname(__file__), "input", "startup" + ) original_pythonpath = os.environ.get("PYTHONPATH") - monkeypatch_session.setenv("PYTHONPATH", - "{}{}{}".format(startup_path, - os.pathsep, - original_pythonpath)) + monkeypatch_session.setenv( + "PYTHONPATH", + "{}{}{}".format(startup_path, os.pathsep, original_pythonpath) + ) @pytest.fixture(scope="module") def skip_compare_folders(self): diff --git a/tests/integration/hosts/nuke/test_publish_in_nuke.py b/tests/integration/hosts/nuke/test_publish_in_nuke.py index bfd84e4fd5..b7bb8716c0 100644 --- 
a/tests/integration/hosts/nuke/test_publish_in_nuke.py +++ b/tests/integration/hosts/nuke/test_publish_in_nuke.py @@ -68,7 +68,7 @@ class TestPublishInNuke(NukeLocalPublishTestClass): name="workfileTest_task")) failures.append( - DBAssert.count_of_types(dbcon, "representation", 4)) + DBAssert.count_of_types(dbcon, "representation", 3)) additional_args = {"context.subset": "workfileTest_task", "context.ext": "nk"} @@ -85,7 +85,7 @@ class TestPublishInNuke(NukeLocalPublishTestClass): additional_args = {"context.subset": "renderTest_taskMain", "name": "thumbnail"} failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, + DBAssert.count_of_types(dbcon, "representation", 0, additional_args=additional_args)) additional_args = {"context.subset": "renderTest_taskMain", diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index e82e438e54..277b332e19 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -105,7 +105,7 @@ class ModuleUnitTest(BaseTest): yield path @pytest.fixture(scope="module") - def env_var(self, monkeypatch_session, download_test_data): + def env_var(self, monkeypatch_session, download_test_data, mongo_url): """Sets temporary env vars from json file.""" env_url = os.path.join(download_test_data, "input", "env_vars", "env_var.json") @@ -129,6 +129,9 @@ class ModuleUnitTest(BaseTest): monkeypatch_session.setenv(key, str(value)) #reset connection to openpype DB with new env var + if mongo_url: + monkeypatch_session.setenv("OPENPYPE_MONGO", mongo_url) + import openpype.settings.lib as sett_lib sett_lib._SETTINGS_HANDLER = None sett_lib._LOCAL_SETTINGS_HANDLER = None @@ -150,8 +153,7 @@ class ModuleUnitTest(BaseTest): request, mongo_url): """Restore prepared MongoDB dumps into selected DB.""" backup_dir = os.path.join(download_test_data, "input", "dumps") - - uri = mongo_url or os.environ.get("OPENPYPE_MONGO") + uri = os.environ.get("OPENPYPE_MONGO") db_handler = DBHandler(uri) db_handler.setup_from_dump(self.TEST_DB_NAME, backup_dir, overwrite=True, diff --git a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py index aace8cf7e3..1f7f551237 100644 --- a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py +++ b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py @@ -37,7 +37,7 @@ class TestPipelinePublishPlugins(TestPipeline): # files are the same as those used in `test_pipeline_colorspace` TEST_FILES = [ ( - "1Lf-mFxev7xiwZCWfImlRcw7Fj8XgNQMh", + "1csqimz8bbNcNgxtEXklLz6GRv91D3KgA", "test_pipeline_colorspace.zip", "" ) @@ -123,8 +123,7 @@ class TestPipelinePublishPlugins(TestPipeline): def test_get_colorspace_settings(self, context, config_path_asset): expected_config_template = ( - "{root[work]}/{project[name]}" - "/{hierarchy}/{asset}/config/aces.ocio" + "{root[work]}/{project[name]}/config/aces.ocio" ) expected_file_rules = { "comp_review": { @@ -177,16 +176,16 @@ class TestPipelinePublishPlugins(TestPipeline): # load plugin function for testing plugin = publish_plugins.ColormanagedPyblishPluginMixin() plugin.log = log + context.data["imageioSettings"] = (config_data_nuke, file_rules_nuke) plugin.set_representation_colorspace( - representation_nuke, context, - colorspace_settings=(config_data_nuke, file_rules_nuke) + representation_nuke, context ) # load plugin function for testing plugin = publish_plugins.ColormanagedPyblishPluginMixin() plugin.log = log + context.data["imageioSettings"] = (config_data_hiero, file_rules_hiero) 
plugin.set_representation_colorspace( - representation_hiero, context, - colorspace_settings=(config_data_hiero, file_rules_hiero) + representation_hiero, context ) colorspace_data_nuke = representation_nuke.get("colorspaceData") diff --git a/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py b/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py new file mode 100644 index 0000000000..56ac2a5d28 --- /dev/null +++ b/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py @@ -0,0 +1,118 @@ +import unittest +from openpype.pipeline.colorspace import convert_colorspace_enumerator_item + + +class TestConvertColorspaceEnumeratorItem(unittest.TestCase): + def setUp(self): + self.config_items = { + "colorspaces": { + "sRGB": { + "aliases": ["sRGB_1"], + "family": "colorspace", + "categories": ["colors"], + "equalitygroup": "equalitygroup", + }, + "Rec.709": { + "aliases": ["rec709_1", "rec709_2"], + }, + }, + "looks": { + "sRGB_to_Rec.709": { + "process_space": "sRGB", + }, + }, + "displays_views": { + "sRGB (ACES)": { + "view": "sRGB", + "display": "ACES", + }, + "Rec.709 (ACES)": { + "view": "Rec.709", + "display": "ACES", + }, + }, + "roles": { + "compositing_linear": { + "colorspace": "linear", + }, + }, + } + + def test_valid_item(self): + colorspace_item_data = convert_colorspace_enumerator_item( + "colorspaces::sRGB", self.config_items) + self.assertEqual( + colorspace_item_data, + { + "name": "sRGB", + "type": "colorspaces", + "aliases": ["sRGB_1"], + "family": "colorspace", + "categories": ["colors"], + "equalitygroup": "equalitygroup" + } + ) + + alias_item_data = convert_colorspace_enumerator_item( + "aliases::rec709_1", self.config_items) + self.assertEqual( + alias_item_data, + { + "aliases": ["rec709_1", "rec709_2"], + "name": "Rec.709", + "type": "colorspace" + } + ) + + display_view_item_data = convert_colorspace_enumerator_item( + "displays_views::sRGB (ACES)", self.config_items) + self.assertEqual( + display_view_item_data, + { + "type": "displays_views", + "name": "sRGB (ACES)", + "view": "sRGB", + "display": "ACES" + } + ) + + role_item_data = convert_colorspace_enumerator_item( + "roles::compositing_linear", self.config_items) + self.assertEqual( + role_item_data, + { + "name": "compositing_linear", + "type": "roles", + "colorspace": "linear" + } + ) + + look_item_data = convert_colorspace_enumerator_item( + "looks::sRGB_to_Rec.709", self.config_items) + self.assertEqual( + look_item_data, + { + "type": "looks", + "name": "sRGB_to_Rec.709", + "process_space": "sRGB" + } + ) + + def test_invalid_item(self): + config_items = { + "RGB": { + "sRGB": {"red": 255, "green": 255, "blue": 255}, + "AdobeRGB": {"red": 255, "green": 255, "blue": 255}, + } + } + with self.assertRaises(KeyError): + convert_colorspace_enumerator_item("RGB::invalid", config_items) + + def test_missing_config_data(self): + config_items = {} + with self.assertRaises(KeyError): + convert_colorspace_enumerator_item("RGB::sRGB", config_items) + + +if __name__ == '__main__': + unittest.main() diff --git a/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py b/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py new file mode 100644 index 0000000000..c221712d70 --- /dev/null +++ b/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py @@ -0,0 +1,121 @@ +import unittest + +from openpype.pipeline.colorspace import get_colorspaces_enumerator_items + + 
+class TestGetColorspacesEnumeratorItems(unittest.TestCase): + def setUp(self): + self.config_items = { + "colorspaces": { + "sRGB": { + "aliases": ["sRGB_1"], + }, + "Rec.709": { + "aliases": ["rec709_1", "rec709_2"], + }, + }, + "looks": { + "sRGB_to_Rec.709": { + "process_space": "sRGB", + }, + }, + "displays_views": { + "sRGB (ACES)": { + "view": "sRGB", + "display": "ACES", + }, + "Rec.709 (ACES)": { + "view": "Rec.709", + "display": "ACES", + }, + }, + "roles": { + "compositing_linear": { + "colorspace": "linear", + }, + }, + } + + def test_colorspaces(self): + result = get_colorspaces_enumerator_items(self.config_items) + expected = [ + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ] + self.assertEqual(result, expected) + + def test_aliases(self): + result = get_colorspaces_enumerator_items( + self.config_items, include_aliases=True) + expected = [ + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ("aliases::rec709_1", "[alias] rec709_1 (Rec.709)"), + ("aliases::rec709_2", "[alias] rec709_2 (Rec.709)"), + ("aliases::sRGB_1", "[alias] sRGB_1 (sRGB)"), + ] + self.assertEqual(result, expected) + + def test_looks(self): + result = get_colorspaces_enumerator_items( + self.config_items, include_looks=True) + expected = [ + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ("looks::sRGB_to_Rec.709", "[look] sRGB_to_Rec.709 (sRGB)"), + ] + self.assertEqual(result, expected) + + def test_display_views(self): + result = get_colorspaces_enumerator_items( + self.config_items, include_display_views=True) + expected = [ + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ("displays_views::Rec.709 (ACES)", "[view (display)] Rec.709 (ACES)"), # noqa: E501 + ("displays_views::sRGB (ACES)", "[view (display)] sRGB (ACES)"), + + ] + self.assertEqual(result, expected) + + def test_roles(self): + result = get_colorspaces_enumerator_items( + self.config_items, include_roles=True) + expected = [ + ("roles::compositing_linear", "[role] compositing_linear (linear)"), # noqa: E501 + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ] + self.assertEqual(result, expected) + + def test_all(self): + message_config_keys = ", ".join( + "'{}':{}".format( + key, + set(self.config_items.get(key, {}).keys()) + ) for key in self.config_items.keys() + ) + print("Testing with config: [{}]".format(message_config_keys)) + result = get_colorspaces_enumerator_items( + self.config_items, + include_aliases=True, + include_looks=True, + include_roles=True, + include_display_views=True, + ) + expected = [ + ("roles::compositing_linear", "[role] compositing_linear (linear)"), # noqa: E501 + ("colorspaces::Rec.709", "[colorspace] Rec.709"), + ("colorspaces::sRGB", "[colorspace] sRGB"), + ("aliases::rec709_1", "[alias] rec709_1 (Rec.709)"), + ("aliases::rec709_2", "[alias] rec709_2 (Rec.709)"), + ("aliases::sRGB_1", "[alias] sRGB_1 (sRGB)"), + ("looks::sRGB_to_Rec.709", "[look] sRGB_to_Rec.709 (sRGB)"), + ("displays_views::Rec.709 (ACES)", "[view (display)] Rec.709 (ACES)"), # noqa: E501 + ("displays_views::sRGB (ACES)", "[view (display)] sRGB (ACES)"), + ] + self.assertEqual(result, expected) + + +if __name__ == "__main__": + unittest.main() diff --git a/website/docs/admin_hosts_houdini.md b/website/docs/admin_hosts_houdini.md index 64c54db591..18c390e07f 100644 --- 
a/website/docs/admin_hosts_houdini.md
+++ b/website/docs/admin_hosts_houdini.md
@@ -3,9 +3,36 @@ id: admin_hosts_houdini
 title: Houdini
 sidebar_label: Houdini
 ---
+## General Settings
+### Houdini Vars
+
+Allows admins to define a list of vars (e.g. JOB) with (dynamic) values that will be updated on context changes, e.g. when switching to another asset or task.
+
+Using template keys is supported, but capitalization variants of formatting keys are not, e.g. `{Asset}` and `{ASSET}` won't work.
+
+
+:::note
+If the `Treat as directory` toggle is activated, OpenPype treats the given value as a folder path.
+
+If that folder does not exist on a context change, it is created by this feature so that the path always points to an existing folder.
+:::
+
+Disabling the `Update Houdini vars on context change` feature leaves all Houdini vars unmanaged, so no updates happen on context changes.
+
+> If `$JOB` is present in the Houdini var list and has an empty value, OpenPype will set its value to `$HIP`.
+
+
+:::note
+For consistency reasons we always force all vars to be uppercase,
+e.g. `myvar` will become `MYVAR`.
+:::
+
+![update-houdini-vars-context-change](assets/houdini/update-houdini-vars-context-change.png)
+
+
 ## Shelves Manager
 You can add your custom shelf set into Houdini by setting your shelf sets, shelves and tools in **Houdini -> Shelves Manager**.
 ![Custom menu definition](assets/houdini-admin_shelvesmanager.png)
-The Shelf Set Path is used to load a .shelf file to generate your shelf set. If the path is specified, you don't have to set the shelves and tools.
\ No newline at end of file
+The Shelf Set Path is used to load a .shelf file to generate your shelf set. If the path is specified, you don't have to set the shelves and tools.
diff --git a/website/docs/assets/houdini/update-houdini-vars-context-change.png b/website/docs/assets/houdini/update-houdini-vars-context-change.png
new file mode 100644
index 0000000000..74ac8d86c9
Binary files /dev/null and b/website/docs/assets/houdini/update-houdini-vars-context-change.png differ
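
For illustration, below is a minimal sketch of how the "Update Houdini Vars on context change" settings documented above could be filled in, mirroring the `GeneralSettingsModel` / `HoudiniVarModel` fields (`var`, `value`, `is_directory`) added in `server_addon/houdini/server/settings/general.py`. The `SHOTCACHE` entry and its path template are hypothetical examples, not values shipped with the addon:

```python
# Hypothetical admin-side values for "Update Houdini Vars on context change".
# The structure mirrors GeneralSettingsModel / HoudiniVarModel from general.py;
# the SHOTCACHE entry is an illustrative assumption, not a shipped default.
example_houdini_general_settings = {
    "add_self_publish_button": False,
    "update_houdini_var_context": {
        "enabled": True,
        "houdini_vars": [
            {
                # If this value were left empty, OpenPype would fall back to $HIP.
                "var": "JOB",
                "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task[name]}",  # noqa
                "is_directory": True,
            },
            {
                # Var names are forced to uppercase on update,
                # e.g. "shotcache" becomes "SHOTCACHE".
                "var": "SHOTCACHE",
                "value": "{root[work]}/{project[name]}/{hierarchy}/{asset}/cache",  # noqa
                "is_directory": True,
            },
        ],
    },
}
```

On each context change the integration would re-format these values with the new context data and, for entries marked as directories, ensure the resulting folder exists before setting the var.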