diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index fe86a8400b..54a4ee6ac0 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,19 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.15.9-nightly.1 + - 3.15.8 + - 3.15.8-nightly.3 + - 3.15.8-nightly.2 + - 3.15.8-nightly.1 + - 3.15.7 + - 3.15.7-nightly.3 + - 3.15.7-nightly.2 + - 3.15.7-nightly.1 + - 3.15.6 + - 3.15.6-nightly.3 + - 3.15.6-nightly.2 + - 3.15.6-nightly.1 - 3.15.5 - 3.15.5-nightly.2 - 3.15.5-nightly.1 @@ -122,19 +135,6 @@ body: - 3.14.2-nightly.5 - 3.14.2-nightly.4 - 3.14.2-nightly.3 - - 3.14.2-nightly.2 - - 3.14.2-nightly.1 - - 3.14.1 - - 3.14.1-nightly.4 - - 3.14.1-nightly.3 - - 3.14.1-nightly.2 - - 3.14.1-nightly.1 - - 3.14.0 - - 3.14.0-nightly.1 - - 3.13.1-nightly.3 - - 3.13.1-nightly.2 - - 3.13.1-nightly.1 - - 3.13.0 validations: required: true - type: dropdown diff --git a/.github/workflows/update_bug_report.yml b/.github/workflows/update_bug_report.yml index 7a1bfb7bfd..1e5da414bb 100644 --- a/.github/workflows/update_bug_report.yml +++ b/.github/workflows/update_bug_report.yml @@ -18,10 +18,16 @@ jobs: uses: ynput/gha-populate-form-version@main with: github_token: ${{ secrets.YNPUT_BOT_TOKEN }} - github_user: ${{ secrets.CI_USER }} - github_email: ${{ secrets.CI_EMAIL }} registry: github dropdown: _version limit_to: 100 form: .github/ISSUE_TEMPLATE/bug_report.yml commit_message: 'chore(): update bug report / version' + dry_run: no-push + + - name: Push to protected develop branch + uses: CasperWA/push-protected@v2.10.0 + with: + token: ${{ secrets.YNPUT_BOT_TOKEN }} + branch: develop + unprotect_reviews: true \ No newline at end of file diff --git a/.gitignore b/.gitignore index 18e7cd7bf2..50f52f65a3 100644 --- a/.gitignore +++ b/.gitignore @@ -112,3 +112,9 @@ tools/run_eventserver.* tools/dev_* .github_changelog_generator + + +# Addons +######## +/openpype/addons/* +!/openpype/addons/README.md diff --git a/.gitmodules b/.gitmodules index fe93791c4e..4de92471f7 100644 --- a/.gitmodules +++ b/.gitmodules @@ -4,4 +4,7 @@ [submodule "tools/modules/powershell/PSWriteColor"] path = tools/modules/powershell/PSWriteColor - url = https://github.com/EvotecIT/PSWriteColor.git \ No newline at end of file + url = https://github.com/EvotecIT/PSWriteColor.git +[submodule "openpype/hosts/unreal/integration"] + path = openpype/hosts/unreal/integration + url = https://github.com/ynput/ayon-unreal-plugin.git diff --git a/CHANGELOG.md b/CHANGELOG.md index 16deaaa4fd..a33904735b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,849 @@ # Changelog +## [3.15.8](https://github.com/ynput/OpenPype/tree/3.15.8) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.7...3.15.8) + +### **🆕 New features** + + +
+Publisher: Show instances in report page #4915 + +Show publish instances in report page. Also added basic log view with logs grouped by instance. Validation error detail now has 2 columns, one with error details and the second with logs. Crashed state shows fast access to report action buttons. Success will show only logs. Publish frame is shrunk automatically on publish stop. + + +___ + +
+ + +
+Fusion - Loader plugins updates #4920 + +Updates to some Fusion loader plugins: The sequence loader can now load footage from the image and online families. The FBX loader can now import all formats Fusion's FBX node can read. You can now import the content of another workfile into your current comp with the workfile loader. + + +___ + +
+ + +
+Fusion: deadline farm rendering #4955 + +Enabling Fusion for deadline farm rendering. + + +___ + +
+ + +
+AfterEffects: set frame range and resolution #4983 + +Frame information (frame start, duration, fps) and resolution (width and height) are applied to the selected composition from the Asset Management System (Ftrack or DB) automatically when a published instance is created. It is also possible to explicitly propagate both values from the DB to the selected composition via newly added menu buttons. + + +___ + +
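For reference, the menu buttons described above appear to call the `set_settings` helper added to `openpype.hosts.aftereffects.api` later in this diff; a minimal usage sketch, assuming a running After Effects session connected to OpenPype and at least one composition selected:

```python
# Hedged sketch based on the signature added in this PR:
# set_settings(frames, resolution, comp_ids=None, print_msg=True)
from openpype.hosts.aftereffects.api import set_settings

# Push both frame information and resolution from the asset (DB/Ftrack)
# onto the currently selected compositions and show an alert in AE.
set_settings(frames=True, resolution=True, comp_ids=None, print_msg=True)
```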
+ + +
+Publish: Enhance automated publish plugin settings #4986 + +Added a plugin option to define the settings category where to look for a plugin's settings, and added the public helper functions `get_plugin_settings` and `apply_plugin_settings_automatically` to apply settings. + + +___ + +
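The helper names above come straight from this entry, but their module path and signatures are not shown in this hunk; the sketch below is an assumption-laden illustration of how automatic settings application might be wired up, not code from this PR:

```python
# Assumed import path and signatures -- illustration only.
from openpype.pipeline.publish import (
    get_plugin_settings,
    apply_plugin_settings_automatically,
)


def apply_settings_for(plugin_cls, project_settings, log):
    # Look up the plugin's settings (the category the plugin declares is
    # assumed to be resolved inside the helper) and copy matching keys
    # from the settings onto class attributes.
    plugin_settings = get_plugin_settings(plugin_cls, project_settings, log)
    apply_plugin_settings_automatically(plugin_cls, plugin_settings, log)
```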
+ +### **🚀 Enhancements** + + +
+Load Rig References - Change Rig to Animation in Animation instance #4877 + +We are using the template builder to build an animation scene. All the rig placeholders are imported correctly, but the automatically created animation instances retain the rig family in their names and subsets. In our example, we need animationMain instead of rigMain, because this name will be used in the following steps like lighting.Here is the result we need. I checked, and it's not a template builder problem, because even if I load a rig as a reference, the result is the same. For me, since we are in the animation instance, it makes more sense to have animation instead of rig in the name. The naming is just fine if we use create from the Openpype menu. + + +___ + +
+ + +
+Enhancement: Resolve prelaunch code refactoring and update defaults #4916 + +The main reason for this PR is the wrong default settings in `openpype/settings/defaults/system_settings/applications.json` for the Resolve host. The `bin` folder should not be part of the macOS and Linux `RESOLVE_PYTHON3_PATH` variable. The rest of this PR is some code cleanup of the Resolve prelaunch hook to simplify further development. Also added a .gitignore entry for VS Code workspace files. + + +___ + +
+ + +
+Unreal: 🚚 move Unreal plugin to separate repository #4980 + +To support the Epic Marketplace we have to move the AYON Unreal integration plugins to a separate repository. This replaces the current files with a git submodule, so the change should be functionally without impact. The new repository lives here: https://github.com/ynput/ayon-unreal-plugin + + +___ + +
+ + +
+General: Lib code cleanup #5003 + +Small cleanup in lib files in openpype. + + +___ + +
+ + +
+Allow to open with djv by extension instead of representation name #5004 + +Filter open in djv action by extension instead of representation. + + +___ + +
+ + +
+DJV open action `extensions` as `set` #5005 + +Change `extensions` attribute to `set`. + + +___ + +
+ + +
+Nuke: extract thumbnail with multiple reposition nodes #5011 + +Added support for multiple reposition nodes. + + +___ + +
+ + +
+Enhancement: Improve logging levels and messages for artist facing publish reports #5018 + +Tweak the logging levels and messages to try and only show those logs that an artist should see and could understand. Move anything that's slightly more involved into a "debug" message instead. + + +___ + +
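An illustrative sketch of the intent (not code from this PR): keep artist-facing messages at info level and push technical detail down to debug inside a pyblish plugin:

```python
import pyblish.api


class ExampleValidator(pyblish.api.InstancePlugin):
    """Illustration only; not a plugin touched by this PR."""

    order = pyblish.api.ValidatorOrder
    label = "Example Validator"

    def process(self, instance):
        # Artist-facing: short and understandable.
        self.log.info("Validating subset '%s'", instance.data["subset"])
        # Developer-facing detail belongs in debug.
        self.log.debug("Full instance data: %s", dict(instance.data))
```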
+ +### **🐛 Bug fixes** + + +
+Bugfix/frame variable fix #4978 + +Renamed variables to match OpenPype terminology to reduce confusion and add consistency. +___ + +
+ + +
+Global: plugins cleanup plugin will leave beauty rendered files #4790 + +Attempt to mark more files to be cleaned up explicitly in intermediate `renders` folder in work area for farm jobs. + + +___ + +
+ + +
+Fix: Download last workfile doesn't work if not already downloaded #4942 + +Some optimization condition is messing with the feature: if the published workfile is not already downloaded, it won't download it... + + +___ + +
+ + +
+Unreal: Fix transform when loading layout to match existing assets #4972 + +Fixed transform when loading layout to match existing assets. + + +___ + +
+ + +
+fix the bug of fbx loaders in Max #4977 + +Fixes the FBX loaders not being able to parent to the CON instances while importing cameras (and models) published from other DCCs such as Maya. + + +___ + +
+ + +
+AfterEffects: allow returning stub with not saved workfile #4984 + +Allows to use Workfile app to Save first empty workfile. + + +___ + +
+ + +
+Blender: Fix Alembic loading #4985 + +Fixed problem occurring when trying to load an Alembic model in Blender. + + +___ + +
+ + +
+Unreal: Addon Py2 compatibility #4994 + +Fixed Python 2 compatibility of unreal addon. + + +___ + +
+ + +
+Nuke: fixed missing files key in representation #4999 + +Issue with missing keys once rendering target set to existing frames is fixed. Instance has to be evaluated in validation for missing files. + + +___ + +
+ + +
+Unreal: Fix the frame range when loading camera #5002 + +The keyframes of the camera, when loaded, were not using the correct frame range. + + +___ + +
+ + +
+Fusion: fixing frame range targeting #5013 + +Frame range targeting at Rendering instances is now following configured options. + + +___ + +
+ + +
+Deadline: fix selection from multiple webservices #5015 + +Multiple different Deadline webservices can be configured. First they must be configured in System Settings, then they can be configured per project in `project_settings/deadline/deadline_servers`. Only a single webservice can be the target of a publish though. + + +___ + +
+ +### **Merged pull requests** + + +
+3dsmax: Refactored publish plugins to use proper implementation of pymxs #4988 + + +___ + +
+ + + + +## [3.15.7](https://github.com/ynput/OpenPype/tree/3.15.7) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.6...3.15.7) + +### **🆕 New features** + + +
+Addons directory #4893 + +This adds a directory for Addons, for easier distribution of studio specific code. + + +___ + +
+ + +
+Kitsu - Add "image", "online" and "plate" to review families #4923 + +This PR adds "image", "online" and "plate" to the review families so they also can be uploaded to Kitsu.It also adds the `Add review to Kitsu` tag to the default png review. Without it the user would manually need to add it for single image uploads to Kitsu and might confuse users (it confused me first for a while as movies did work). + + +___ + +
+ + +
+Feature/remove and load inv action #4930 + +Added the ability to remove and load a container, as a way to reset it. This can be useful in cases where a container breaks in a way that can be fixed by removing it, then reloading it. Also added the ability to add `InventoryAction` plugins by placing them in `openpype/plugins/inventory`. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Load Rig References - Change Rig to Animation in Animation instance #4877 + +We are using the template builder to build an animation scene. All the rig placeholders are imported correctly, but the automatically created animation instances retain the rig family in their names and subsets. In our example, we need animationMain instead of rigMain, because this name will be used in the following steps like lighting.Here is the result we need. I checked, and it's not a template builder problem, because even if I load a rig as a reference, the result is the same. For me, since we are in the animation instance, it makes more sense to have animation instead of rig in the name. The naming is just fine if we use create from the Openpype menu. + + +___ + +
+ + +
+Maya template builder - preserve all references when importing a template #4797 + +When building a template with Maya template builder, we import the template and also the references inside the template file. This causes some problems: +- We cannot use the references to version assets imported by the template. +- When we import the file, the internal reference files are also imported. As a side effect, Maya complains about a reference that no longer exists.`// Error: file: /xxx/maya/2023.3/linux/scripts/AETemplates/AEtransformRelated.mel line 58: Reference node 'turntable_mayaSceneMain_01_RN' is not associated with a reference file.` + + +___ + +
+ + +
+Unreal: Renaming the integration plugin to Ayon. #4646 + +Renamed the .h and .cpp files to Ayon. Also renamed the classes with the Ayon keyword. + + +___ + +
+ + +
+3dsMax: render dialogue needs to be closed #4729 + +Make sure the render setup dialog is in a closed state for the update of resolution and other render settings + + +___ + +
+ + +
+Maya Template Builder - Remove default cameras from renderable cameras #4815 + +When we build an asset workfile with build workfile from template inside Maya, we load our turntable camera. But then we end up with 2 renderable cameras: **persp** and the one imported from the template. We need to remove the **persp** camera (or any other default camera) from the renderable cameras when building the work file. + + +___ + +
+ + +
+Validators for Frame Range in Max #4914 + +Switch Render Frame Range Type to 3 for specific ranges (the initial setup for the range type is 4). Reset Frame Range will also set the frame range for render settings. The Render Collector won't take the frame range from context data but takes the range directly from the render settings. Added validators for the render frame range type and frame range respectively, with repair actions. + + +___ + +
+ + +
+Fusion: Saver creator settings #4943 + +Adding Saver creator settings and enhanced rendering path with template. + + +___ + +
+ + +
+General: Project Anatomy on creators #4962 + +Anatomy object of current project is available on `CreateContext` and create plugins. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate shader name - OP-5903 #4971 + +Running the plugin would error with: +``` +// TypeError: 'str' object cannot be interpreted as an integer +```Fixed and added setting `active`. + + +___ + +
+ + +
+Houdini: Fix slow Houdini launch due to shelves generation #4829 + +Shelf generation during Houdini startup would add an insane amount of delay for the Houdini UI to launch correctly. By deferring the shelf generation this takes away the 5+ minutes of delay for the Houdini UI to launch. + + +___ + +
+ + +
+Fusion - Fixed "optional validation" #4912 + +Added OptionalPyblishPluginMixin and is_active checks for all publish tools that should be optional + + +___ + +
+ + +
+Bug: add missing `pyblish.util` import #4937 + +Remote publishing was missing the import of `remote_publish`. This is adding it back. + + +___ + +
+ + +
+Unreal: Fix missing 'object_path' property #4938 + +Epic removed the `object_path` property from `AssetData`. This PR fixes usages of that property.Fixes #4936 + + +___ + +
+ + +
+Remove obsolete global validator #4939 + +Removing `Validate Sequence Frames` validator from global plugins as it wasn't handling correctly many things and was by mistake enabled, breaking functionality on Deadline. + + +___ + +
+ + +
+General: fix build_workfile get_linked_assets missing project_name arg #4940 + +Linked assets collection doesn't work within `build_workfile` because the `get_linked_assets` function call has a missing `project_name` argument. +- Added the `project_name` arg to the `get_linked_assets` function call. + + +___ + +
+ + +
+General: fix Scene Inventory switch version error dialog missing parent arg on init #4941 + +QuickFix for the switch version error dialog to set inventory widget as parent. + + +___ + +
+ + +
+Unreal: Fix camera frame range #4956 + +Fix the frame range of the level sequence for the Camera in Unreal. + + +___ + +
+ + +
+Unreal: Fix missing parameter when updating Alembic StaticMesh #4957 + +Fix an error when updating an Alembic StaticMesh in Unreal, due to a missing parameter in a function call. + + +___ + +
+ + +
+Unreal: Fix render extraction #4963 + +Fix a problem with the extraction of renders in Unreal. + + +___ + +
+ + +
+Unreal: Remove Python 3.8 syntax from addon #4965 + +Removed Python 3.8 syntax from addon. + + +___ + +
+ + +
+Ftrack: Fix editorial task creation #4966 + +Fix key assignment on instance data during editorial publishing in ftrack hierarchy integration. + + +___ + +
+ +### **Merged pull requests** + + +
+Add "shortcut" to Scripts Menu Definition #4927 + +Add the possibility to associate a shorcut for an entry in the script menu definition with the key "shortcut" + + +___ + +
+ + + + +## [3.15.6](https://github.com/ynput/OpenPype/tree/3.15.6) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.5...3.15.6) + +### **🆕 New features** + + +
+Substance Painter Integration #4283 + +This implements a part of #4205 by implementing a Substance Painter integration + +Status: +- [x] Implement Host +- [x] start substance with last workfile using `AddLastWorkfileToLaunchArgs` prelaunch hook +- [x] Implement Qt tools +- [x] Implement loaders +- [x] Implemented a Set project mesh loader (this is relatively special case because a Project will always have exactly one mesh - a Substance Painter project cannot exist without a mesh). +- [x] Implement project open callback +- [x] On project open it notifies the user if the loaded model is outdated +- [x] Implement publishing logic +- [x] Workfile publishing +- [x] Export Texture Sets +- [x] Support OCIO using #4195 (draft brach is set up - see comment) +- [ ] Likely needs more testing on the OCIO front +- [x] Validate all outputs of the Export template are exported/generated +- [x] Allow validation to be optional **(issue: there's no API method to detect what maps will be exported without doing an actual export to disk)** +- [x] Support extracting/integration if not all outputs are generated +- [x] Support multiple materials/texture sets per instance +- [ ] Add validator that can enforce only a single texture set output if studio prefers that. +- [ ] Implement Export File Format (extensions) override in Creator +- [ ] Add settings so Admin can choose which extensions are available. + + +___ + +
+ + +
+Data Exchange: Geometry in 3dsMax #4555 + +Introduces and updates a creator, extractors and loaders for model family + +Introduces new creator, extractors and loaders for model family while adding model families into the existing max scene loader and extractor +- [x] creators +- [x] adding model family into max scene loader and extractor +- [x] fbx loader +- [x] fbx extractor +- [x] usd loader +- [x] usd extractor +- [x] validator for model family +- [x] obj loader(update function) +- [x] fix the update function of the loader as #4675 +- [x] Add documentation + + +___ + +
+ + +
+AfterEffects: add review flag to each instance #4884 + +Adds `mark_for_review` flag to the Creator to allow artists to disable review if necessary.Exposed this flag in Settings, by default set to True (eg. same behavior as previously). + + +___ + +
+ +### **🚀 Enhancements** + + +
+Houdini: Fix Validate Output Node (VDB) #4819 + +- Removes plug-in that was a duplicate of this plug-in. +- Optimize logging of many prims slightly +- Fix error reporting like https://github.com/ynput/OpenPype/pull/4818 did + + +___ + +
+ + +
+Houdini: Add null node as output indicator when using TAB search #4834 + + +___ + +
+ + +
+Houdini: Don't error in collect review if camera is not set correctly #4874 + +Do not raise an error in collector when invalid path is set as camera path. Allow camera path to not be set correctly in review instance until validation so it's nicely shown in a validation report. + + +___ + +
+ + +
+Project packager: Backup and restore can store only database #4879 + +The Pack project functionality has an option to zip only the project database without project files. Unpack project can skip the project copy if the folder is not found. Added helper functions to `openpype.client.mongo` that can also be used for tests as a replacement for mongo dump. + + +___ + +
+ + +
+Houdini: ExtractOpenGL for Review instance not optional #4881 + +Don't make ExtractOpenGL for the review instance optional. + + +___ + +
+ + +
+Publisher: Small style changes #4894 + +Small changes in styles and form of publisher UI. + + +___ + +
+ + +
+Houdini: Workfile icon in new publisher #4898 + +Fix icon for the workfile instance in new publisher + + +___ + +
+ + +
+Fusion: Simplify creator icons code #4899 + +Simplify code for setting the icons for the Fusion creators + + +___ + +
+ + +
+Enhancement: Fix PySide 6.5 support for loader #4900 + +Fixes PySide 6.5 support in Loader. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Attributes #4917 + +This plugin was broken due to bad fetching of data and wrong repair action. + + +___ + +
+ + +
+Fix: Locally copied version of last published workfile is not incremented #4722 + +### Fix 1 +When copied, the local workfile version keeps the published version number, when it must be +1 to follow OP's naming convention. + +### Fix 2 +The local workfile version's name is built from anatomy. This avoids getting workfiles with their publish template naming. + +### Fix 3 +In the case a subset has at least two tasks with published workfiles, for example `Modeling` and `Rigging`, launching `Rigging` was getting the first one with `next` and trying to find representations, therefore `workfileModeling`, and trying to match the current `task_name` (`Rigging`) with the `representation["context"]["task"]["name"]` of a Modeling representation, which ended up setting `workfile_representation` to `None` and exiting the process. + +Trying to find the `task_name` in the `subset['name']` fixes it. + +### Fix 4 +Fetch input dependencies of the workfile. + +Replacing https://github.com/ynput/OpenPype/pull/4102 for changes to bring this home. +___ + +
+ + +
+Maya: soft-fail when pan/zoom locked on camera when playblasting #4929 + +When pan/zoom enabled attribute on camera is locked, playblasting with pan/zoom fails because it is trying to restore it. This is fixing it by skipping over with warning. + + +___ + +
+ +### **Merged pull requests** + + +
+Maya Load References - Add Display Handle Setting #4904 + +When we load a reference in Maya using OpenPype loader, display handle is checked by default and prevent us to select easily the object in the viewport. I understand that some productions like to keep this option, so I propose to add display handle to the reference loader settings. + + +___ + +
+ + +
+Photoshop: add autocreators for review and flat image #4871 + +Review and flatten image (produced when no instance of the `image` family was created) were created somehow magically. This PR introduces two new auto creators which allow artists to disable review or flatten image. For all `image` instances a `Review` flag was added to provide functionality to create a separate review per `image` instance. Previously it was only possible to have a separate instance of the `review` family. Review is not enabled on the `image` family by default (eg. follows the original behavior). The Review auto creator is enabled by default as it was before. The Flatten image creator must be set in Settings in `project_settings/photoshop/create/AutoImageCreator`. + + +___ + +
+ + + + ## [3.15.5](https://github.com/ynput/OpenPype/tree/3.15.5) diff --git a/inno_setup.iss b/inno_setup.iss index 3adde52a8b..418bedbd4d 100644 --- a/inno_setup.iss +++ b/inno_setup.iss @@ -14,10 +14,10 @@ AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93} AppName={#MyAppName} AppVersion={#AppVer} AppVerName={#MyAppName} version {#AppVer} -AppPublisher=Orbi Tools s.r.o -AppPublisherURL=http://pype.club -AppSupportURL=http://pype.club -AppUpdatesURL=http://pype.club +AppPublisher=Ynput s.r.o +AppPublisherURL=https://ynput.io +AppSupportURL=https://ynput.io +AppUpdatesURL=https://ynput.io DefaultDirName={autopf}\{#MyAppName}\{#AppVer} UsePreviousAppDir=no DisableProgramGroupPage=yes diff --git a/openpype/addons/README.md b/openpype/addons/README.md new file mode 100644 index 0000000000..92b8b8c07c --- /dev/null +++ b/openpype/addons/README.md @@ -0,0 +1,3 @@ +This directory is for storing external addons that needs to be included in the pipeline when distributed. + +The directory is ignored by Git, but included in the zip and installation files. diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index 2a35db869a..c54acbc203 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -25,6 +25,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): "blender", "photoshop", "tvpaint", + "substancepainter", "aftereffects" ] diff --git a/openpype/hooks/pre_host_set_ocio.py b/openpype/hooks/pre_host_set_ocio.py new file mode 100644 index 0000000000..3620d88db6 --- /dev/null +++ b/openpype/hooks/pre_host_set_ocio.py @@ -0,0 +1,37 @@ +from openpype.lib import PreLaunchHook + +from openpype.pipeline.colorspace import get_imageio_config +from openpype.pipeline.template_data import get_template_data + + +class PreLaunchHostSetOCIO(PreLaunchHook): + """Set OCIO environment for the host""" + + order = 0 + app_groups = ["substancepainter"] + + def execute(self): + """Hook entry method.""" + + anatomy_data = get_template_data( + project_doc=self.data["project_doc"], + asset_doc=self.data["asset_doc"], + task_name=self.data["task_name"], + host_name=self.host_name, + system_settings=self.data["system_settings"] + ) + + ocio_config = get_imageio_config( + project_name=self.data["project_doc"]["name"], + host_name=self.host_name, + project_settings=self.data["project_settings"], + anatomy_data=anatomy_data, + anatomy=self.data["anatomy"] + ) + + if ocio_config: + ocio_path = ocio_config["path"] + self.log.info(f"Setting OCIO config path: {ocio_path}") + self.launch_context.env["OCIO"] = ocio_path + else: + self.log.debug("OCIO not set or enabled") diff --git a/openpype/hosts/aftereffects/api/__init__.py b/openpype/hosts/aftereffects/api/__init__.py index a7137ba8fb..28062cc35d 100644 --- a/openpype/hosts/aftereffects/api/__init__.py +++ b/openpype/hosts/aftereffects/api/__init__.py @@ -4,9 +4,8 @@ Anything that isn't defined here is INTERNAL and unreliable for external use. 
""" -from .launch_logic import ( +from .ws_stub import ( get_stub, - stub, ) from .pipeline import ( @@ -18,7 +17,8 @@ from .pipeline import ( from .lib import ( maintained_selection, get_extension_manifest_path, - get_asset_settings + get_asset_settings, + set_settings ) from .plugin import ( @@ -27,9 +27,8 @@ from .plugin import ( __all__ = [ - # launch_logic + # ws_stub "get_stub", - "stub", # pipeline "ls", @@ -39,6 +38,7 @@ __all__ = [ "maintained_selection", "get_extension_manifest_path", "get_asset_settings", + "set_settings", # plugin "AfterEffectsLoader" diff --git a/openpype/hosts/aftereffects/api/extension.zxp b/openpype/hosts/aftereffects/api/extension.zxp index b436f0ca0b..50fda416f8 100644 Binary files a/openpype/hosts/aftereffects/api/extension.zxp and b/openpype/hosts/aftereffects/api/extension.zxp differ diff --git a/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml b/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml index f96e80c503..9f65720ef0 100644 --- a/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml +++ b/openpype/hosts/aftereffects/api/extension/CSXS/manifest.xml @@ -1,6 +1,6 @@ - + diff --git a/openpype/hosts/aftereffects/api/extension/index.html b/openpype/hosts/aftereffects/api/extension/index.html index 52a7c4964f..291965559f 100644 --- a/openpype/hosts/aftereffects/api/extension/index.html +++ b/openpype/hosts/aftereffects/api/extension/index.html @@ -2,7 +2,7 @@ - + @@ -25,11 +25,11 @@ - + - + - + - + - + + + + + + + - - + + + - - + @@ -107,6 +143,6 @@ - + - \ No newline at end of file + diff --git a/openpype/hosts/aftereffects/api/extension/js/main.js b/openpype/hosts/aftereffects/api/extension/js/main.js index bb0f3b1f0c..ffc41f0937 100644 --- a/openpype/hosts/aftereffects/api/extension/js/main.js +++ b/openpype/hosts/aftereffects/api/extension/js/main.js @@ -4,7 +4,7 @@ indent: 4, maxerr: 50 */ var csInterface = new CSInterface(); - + log.warn("script start"); WSRPC.DEBUG = false; @@ -14,7 +14,7 @@ WSRPC.TRACE = false; async function startUp(url){ promis = runEvalScript("getEnv('" + url + "')"); - var res = await promis; + var res = await promis; log.warn("res: " + res); promis = runEvalScript("getEnv('OPENPYPE_DEBUG')"); @@ -56,7 +56,7 @@ function get_extension_version(){ } function main(websocket_url){ - // creates connection to 'websocket_url', registers routes + // creates connection to 'websocket_url', registers routes var default_url = 'ws://localhost:8099/ws/'; if (websocket_url == ''){ @@ -66,7 +66,7 @@ function main(websocket_url){ RPC.connect(); - log.warn("connected"); + log.warn("connected"); RPC.addRoute('AfterEffects.open', function (data) { log.warn('Server called client route "open":', data); @@ -88,7 +88,7 @@ function main(websocket_url){ }); RPC.addRoute('AfterEffects.get_active_document_name', function (data) { - log.warn('Server called client route ' + + log.warn('Server called client route ' + '"get_active_document_name":', data); return runEvalScript("getActiveDocumentName()") .then(function(result){ @@ -98,7 +98,7 @@ function main(websocket_url){ }); RPC.addRoute('AfterEffects.get_active_document_full_name', function (data){ - log.warn('Server called client route ' + + log.warn('Server called client route ' + '"get_active_document_full_name":', data); return runEvalScript("getActiveDocumentFullName()") .then(function(result){ @@ -118,7 +118,7 @@ function main(websocket_url){ }); }); - + RPC.addRoute('AfterEffects.get_selected_items', function (data) { log.warn('Server called client route 
"get_selected_items":', data); return runEvalScript("getSelectedItems(" + data.comps + "," + @@ -194,23 +194,25 @@ function main(websocket_url){ }); }); - RPC.addRoute('AfterEffects.get_work_area', function (data) { - log.warn('Server called client route "get_work_area":', data); - return runEvalScript("getWorkArea(" + data.item_id + ")") + RPC.addRoute('AfterEffects.get_comp_properties', function (data) { + log.warn('Server called client route "get_comp_properties":', data); + return runEvalScript("getCompProperties(" + data.item_id + ")") .then(function(result){ - log.warn("getWorkArea: " + result); + log.warn("get_comp_properties: " + result); return result; }); }); - RPC.addRoute('AfterEffects.set_work_area', function (data) { + RPC.addRoute('AfterEffects.set_comp_properties', function (data) { log.warn('Server called client route "set_work_area":', data); - return runEvalScript("setWorkArea(" + data.item_id + ',' + + return runEvalScript("setCompProperties(" + data.item_id + ',' + data.start + ',' + data.duration + ',' + - data.frame_rate + ")") + data.frame_rate + ',' + + data.width + ',' + + data.height + ")") .then(function(result){ - log.warn("getWorkArea: " + result); + log.warn("set_comp_properties: " + result); return result; }); }); @@ -255,7 +257,7 @@ function main(websocket_url){ RPC.addRoute('AfterEffects.import_background', function (data) { log.warn('Server called client route "import_background":', data); - return runEvalScript("importBackground(" + data.comp_id + ", " + + return runEvalScript("importBackground(" + data.comp_id + ", " + "'" + data.comp_name + "', " + JSON.stringify(data.files) + ")") .then(function(result){ @@ -266,7 +268,7 @@ function main(websocket_url){ RPC.addRoute('AfterEffects.reload_background', function (data) { log.warn('Server called client route "reload_background":', data); - return runEvalScript("reloadBackground(" + data.comp_id + ", " + + return runEvalScript("reloadBackground(" + data.comp_id + ", " + "'" + data.comp_name + "', " + JSON.stringify(data.files) + ")") .then(function(result){ @@ -314,6 +316,16 @@ function main(websocket_url){ log.warn('Server called client route "close":', data); return runEvalScript("close()"); }); + + RPC.addRoute('AfterEffects.print_msg', function (data) { + log.warn('Server called client route "print_msg":', data); + var escaped_msg = EscapeStringForJSX(data.msg); + return runEvalScript("printMsg('" + escaped_msg +"')") + .then(function(result){ + log.warn("print_msg: " + result); + return result; + }); + }); } /** main entry point **/ @@ -323,17 +335,17 @@ startUp("WEBSOCKET_URL"); 'use strict'; var csInterface = new CSInterface(); - - + + function init() { - + themeManager.init(); - + $("#btn_test").click(function () { csInterface.evalScript('sayHello()'); }); } - + init(); }()); diff --git a/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx b/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx index 5c1d163439..7d0b20bbb4 100644 --- a/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx +++ b/openpype/hosts/aftereffects/api/extension/jsx/hostscript.jsx @@ -1,7 +1,7 @@ /*jslint vars: true, plusplus: true, devel: true, nomen: true, regexp: true, indent: 4, maxerr: 50 */ /*global $, Folder*/ -#include "../js/libs/json.js"; +//@include "../js/libs/json.js" /* All public API function should return JSON! 
*/ @@ -29,13 +29,13 @@ function getEnv(variable){ function getMetadata(){ /** * Returns payload in 'Label' field of project's metadata - * + * **/ if (ExternalObject.AdobeXMPScript === undefined){ ExternalObject.AdobeXMPScript = new ExternalObject('lib:AdobeXMPScript'); } - + var proj = app.project; var meta = new XMPMeta(app.project.xmpPacket); var schemaNS = XMPMeta.getNamespaceURI("xmp"); @@ -53,7 +53,7 @@ function getMetadata(){ function imprint(payload){ /** * Stores payload in 'Label' field of project's metadata - * + * * Args: * payload (string): json content */ @@ -61,14 +61,14 @@ function imprint(payload){ ExternalObject.AdobeXMPScript = new ExternalObject('lib:AdobeXMPScript'); } - + var proj = app.project; var meta = new XMPMeta(app.project.xmpPacket); var schemaNS = XMPMeta.getNamespaceURI("xmp"); var label = "xmp:Label"; meta.setProperty(schemaNS, label, payload); - + app.project.xmpPacket = meta.serialize(); } @@ -116,14 +116,14 @@ function getItems(comps, folders, footages){ /** * Returns JSON representation of compositions and * if 'collectLayers' then layers in comps too. - * + * * Args: * comps (bool): return selected compositions * folders (bool): return folders * footages (bool): return FootageItem * Returns: * (list) of JSON items - */ + */ var items = [] for (i = 1; i <= app.project.items.length; ++i){ var item = app.project.items[i]; @@ -142,14 +142,14 @@ function getItems(comps, folders, footages){ function getSelectedItems(comps, folders, footages){ /** * Returns list of selected items from Project menu - * + * * Args: * comps (bool): return selected compositions * folders (bool): return folders * footages (bool): return FootageItem * Returns: * (list) of JSON items - */ + */ var items = [] for (i = 0; i < app.project.selection.length; ++i){ var item = app.project.selection[i]; @@ -166,9 +166,9 @@ function getSelectedItems(comps, folders, footages){ function _getItem(item, comps, folders, footages){ /** - * Auxiliary function as project items and selections + * Auxiliary function as project items and selections * are indexed in different way :/ - * Refactor + * Refactor */ var item_type = ''; if (item instanceof FolderItem){ @@ -189,7 +189,7 @@ function _getItem(item, comps, folders, footages){ return "{}"; } } - + var item = {"name": item.name, "id": item.id, "type": item_type}; @@ -200,7 +200,7 @@ function importFile(path, item_name, import_options){ /** * Imports file (image tested for now) as a FootageItem. 
* Creates new composition - * + * * Args: * path (string): absolute path to image file * item_name (string): label for composition @@ -218,7 +218,7 @@ function importFile(path, item_name, import_options){ app.beginUndoGroup("Import File"); fp = new File(path); if (fp.exists){ - try { + try { im_opt = new ImportOptions(fp); importAsType = import_options["ImportAsType"]; @@ -234,18 +234,18 @@ function importFile(path, item_name, import_options){ } if (importAsType.indexOf('PROJECT') > 0){ im_opt.importAs = ImportAsType.PROJECT; - } - + } + } if ('sequence' in import_options){ im_opt.sequence = true; } - + comp = app.project.importFile(im_opt); if (app.project.selection.length == 2 && app.project.selection[0] instanceof FolderItem){ - comp.parentFolder = app.project.selection[0] + comp.parentFolder = app.project.selection[0] } } catch (error) { return _prepareError(error.toString() + importOptions.file.fsName); @@ -283,14 +283,14 @@ function setLabelColor(comp_id, color_idx){ function replaceItem(comp_id, path, item_name){ /** * Replaces loaded file with new file and updates name - * + * * Args: * comp_id (int): id of composition, not a index! * path (string): absolute path to new file * item_name (string): new composition name */ app.beginUndoGroup("Replace File"); - + fp = new File(path); if (!fp.exists){ return _prepareError("File " + path + " not found."); @@ -303,7 +303,7 @@ function replaceItem(comp_id, path, item_name){ }else{ item.replace(fp); } - + item.name = item_name; } catch (error) { return _prepareError(error.toString() + path); @@ -319,7 +319,7 @@ function replaceItem(comp_id, path, item_name){ function renameItem(item_id, new_name){ /** * Renames item with 'item_id' to 'new_name' - * + * * Args: * item_id (int): id to search item * new_name (str) @@ -335,7 +335,7 @@ function renameItem(item_id, new_name){ function deleteItem(item_id){ /** * Delete any 'item_id' - * + * * Not restricted only to comp, it could delete * any item with 'id' */ @@ -347,38 +347,76 @@ function deleteItem(item_id){ } } -function getWorkArea(comp_id){ +function getCompProperties(comp_id){ /** - * Returns information about workarea - are that will be - * rendered. All calculation will be done in OpenPype, - * easier to modify without redeploy of extension. - * + * Returns information about composition - are that will be + * rendered. 
+ * * Returns * (dict) */ - var item = app.project.itemByID(comp_id); - if (item){ - return JSON.stringify({ - "workAreaStart": item.displayStartFrame, - "workAreaDuration": item.duration, - "frameRate": item.frameRate}); - }else{ + var comp = app.project.itemByID(comp_id); + if (!comp){ return _prepareError("There is no composition with "+ comp_id); } + + return JSON.stringify({ + "id": comp.id, + "name": comp.name, + "frameStart": comp.displayStartFrame, + "framesDuration": comp.duration * comp.frameRate, + "frameRate": comp.frameRate, + "width": comp.width, + "height": comp.height}); } -function setWorkArea(comp_id, workAreaStart, workAreaDuration, frameRate){ +function setCompProperties(comp_id, frameStart, framesCount, frameRate, + width, height){ /** * Sets work area info from outside (from Ftrack via OpenPype) */ - var item = app.project.itemByID(comp_id); - if (item){ - item.displayStartTime = workAreaStart; - item.duration = workAreaDuration; - item.frameRate = frameRate; - }else{ + var comp = app.project.itemByID(comp_id); + if (!comp){ return _prepareError("There is no composition with "+ comp_id); } + + app.beginUndoGroup('change comp properties'); + if (frameStart && framesCount && frameRate){ + comp.displayStartFrame = frameStart; + comp.duration = framesCount / frameRate; + comp.frameRate = frameRate; + } + if (width && height){ + var widthOld = comp.width; + var widthNew = width; + var widthDelta = widthNew - widthOld; + + var heightOld = comp.height; + var heightNew = height; + var heightDelta = heightNew - heightOld; + + var offset = [widthDelta / 2, heightDelta / 2]; + + comp.width = widthNew; + comp.height = heightNew; + + for (var i = 1, il = comp.numLayers; i <= il; i++) { + var layer = comp.layer(i); + var positionProperty = layer.property('ADBE Transform Group').property('ADBE Position'); + + if (positionProperty.numKeys > 0) { + for (var j = 1, jl = positionProperty.numKeys; j <= jl; j++) { + var keyValue = positionProperty.keyValue(j); + positionProperty.setValueAtKey(j, keyValue + offset); + } + } else { + var positionValue = positionProperty.value; + positionProperty.setValue(positionValue + offset); + } + } + } + + app.endUndoGroup(); } function save(){ @@ -504,7 +542,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){ * Args: * comp_id (int): id of target composition * item_id (int): FootageItem.id - * found_comp (CompItem, optional): to limit querying if + * found_comp (CompItem, optional): to limit quering if * comp already found previously */ var comp = found_comp || app.project.itemByID(comp_id); @@ -749,7 +787,7 @@ function render(target_folder, comp_id){ var om1 = app.project.renderQueue.item(i).outputModule(1); var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space? 
- + var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE ); var targetFolder = new Folder(target_folder); @@ -763,7 +801,7 @@ function render(target_folder, comp_id){ render_item.render = false; } } - + } app.beginSuppressDialogs(); app.project.renderQueue.render(); @@ -779,6 +817,10 @@ function getAppVersion(){ return _prepareSingleValue(app.version); } +function printMsg(msg){ + alert(msg); +} + function _prepareSingleValue(value){ return JSON.stringify({"result": value}) } diff --git a/openpype/hosts/aftereffects/api/launch_logic.py b/openpype/hosts/aftereffects/api/launch_logic.py index c428043d99..77c2b0b6ca 100644 --- a/openpype/hosts/aftereffects/api/launch_logic.py +++ b/openpype/hosts/aftereffects/api/launch_logic.py @@ -1,49 +1,77 @@ import os +import sys import subprocess import collections import logging import asyncio import functools +import traceback + from wsrpc_aiohttp import ( WebSocketRoute, WebSocketAsync ) -from qtpy import QtCore +from qtpy import QtCore, QtWidgets from openpype.lib import Logger -from openpype.pipeline import legacy_io from openpype.tools.utils import host_tools +from openpype.tests.lib import is_in_tests +from openpype.pipeline import install_host, legacy_io +from openpype.modules import ModulesManager from openpype.tools.adobe_webserver.app import WebServerTool -from .ws_stub import AfterEffectsServerStub +from .ws_stub import get_stub +from .lib import set_settings log = logging.getLogger(__name__) log.setLevel(logging.DEBUG) -class ConnectionNotEstablishedYet(Exception): - pass +def safe_excepthook(*args): + traceback.print_exception(*args) -def get_stub(): - """ - Convenience function to get server RPC stub to call methods directed - for host (Photoshop). - It expects already created connection, started from client. 
- Currently created when panel is opened (PS: Window>Extensions>Avalon) - :return: where functions could be called from - """ - ae_stub = AfterEffectsServerStub() - if not ae_stub.client: - raise ConnectionNotEstablishedYet("Connection is not created yet") +def main(*subprocess_args): + """Main entrypoint to AE launching, called from pre hook.""" + sys.excepthook = safe_excepthook - return ae_stub + from openpype.hosts.aftereffects.api import AfterEffectsHost + host = AfterEffectsHost() + install_host(host) -def stub(): - return get_stub() + os.environ["OPENPYPE_LOG_NO_COLORS"] = "False" + app = QtWidgets.QApplication([]) + app.setQuitOnLastWindowClosed(False) + + launcher = ProcessLauncher(subprocess_args) + launcher.start() + + if os.environ.get("HEADLESS_PUBLISH"): + manager = ModulesManager() + webpublisher_addon = manager["webpublisher"] + + launcher.execute_in_main_thread( + functools.partial( + webpublisher_addon.headless_publish, + log, + "CloseAE", + is_in_tests() + ) + ) + + elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True): + save = False + if os.getenv("WORKFILES_SAVE_AS"): + save = True + + launcher.execute_in_main_thread( + lambda: host_tools.show_tool_by_name("workfiles", save=save) + ) + + sys.exit(app.exec_()) def show_tool_by_name(tool_name): @@ -55,6 +83,7 @@ def show_tool_by_name(tool_name): class ProcessLauncher(QtCore.QObject): + """Launches webserver, connects to it, runs main thread.""" route_name = "AfterEffects" _main_thread_callbacks = collections.deque() @@ -296,6 +325,15 @@ class AfterEffectsRoute(WebSocketRoute): async def sceneinventory_route(self): self._tool_route("sceneinventory") + async def setresolution_route(self): + self._settings_route(False, True) + + async def setframes_route(self): + self._settings_route(True, False) + + async def setall_route(self): + self._settings_route(True, True) + async def experimental_tools_route(self): self._tool_route("experimental_tools") @@ -309,3 +347,13 @@ class AfterEffectsRoute(WebSocketRoute): # Required return statement. return "nothing" + + def _settings_route(self, frames, resolution): + partial_method = functools.partial(set_settings, + frames, + resolution) + + ProcessLauncher.execute_in_main_thread(partial_method) + + # Required return statement. 
+ return "nothing" diff --git a/openpype/hosts/aftereffects/api/lib.py b/openpype/hosts/aftereffects/api/lib.py index a39af5c81f..e8352c382b 100644 --- a/openpype/hosts/aftereffects/api/lib.py +++ b/openpype/hosts/aftereffects/api/lib.py @@ -1,69 +1,17 @@ import os -import sys import re import json import contextlib -import traceback import logging -from functools import partial -from qtpy import QtWidgets - -from openpype.pipeline import install_host -from openpype.modules import ModulesManager - -from openpype.tools.utils import host_tools -from openpype.tests.lib import is_in_tests -from .launch_logic import ProcessLauncher, get_stub +from openpype.pipeline.context_tools import get_current_context +from openpype.client import get_asset_by_name +from .ws_stub import get_stub log = logging.getLogger(__name__) log.setLevel(logging.DEBUG) -def safe_excepthook(*args): - traceback.print_exception(*args) - - -def main(*subprocess_args): - sys.excepthook = safe_excepthook - - from openpype.hosts.aftereffects.api import AfterEffectsHost - - host = AfterEffectsHost() - install_host(host) - - os.environ["OPENPYPE_LOG_NO_COLORS"] = "False" - app = QtWidgets.QApplication([]) - app.setQuitOnLastWindowClosed(False) - - launcher = ProcessLauncher(subprocess_args) - launcher.start() - - if os.environ.get("HEADLESS_PUBLISH"): - manager = ModulesManager() - webpublisher_addon = manager["webpublisher"] - - launcher.execute_in_main_thread( - partial( - webpublisher_addon.headless_publish, - log, - "CloseAE", - is_in_tests() - ) - ) - - elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True): - save = False - if os.getenv("WORKFILES_SAVE_AS"): - save = True - - launcher.execute_in_main_thread( - lambda: host_tools.show_tool_by_name("workfiles", save=save) - ) - - sys.exit(app.exec_()) - - @contextlib.contextmanager def maintained_selection(): """Maintain selection during context.""" @@ -145,13 +93,13 @@ def get_asset_settings(asset_doc): """ asset_data = asset_doc["data"] - fps = asset_data.get("fps") - frame_start = asset_data.get("frameStart") - frame_end = asset_data.get("frameEnd") - handle_start = asset_data.get("handleStart") - handle_end = asset_data.get("handleEnd") - resolution_width = asset_data.get("resolutionWidth") - resolution_height = asset_data.get("resolutionHeight") + fps = asset_data.get("fps", 0) + frame_start = asset_data.get("frameStart", 0) + frame_end = asset_data.get("frameEnd", 0) + handle_start = asset_data.get("handleStart", 0) + handle_end = asset_data.get("handleEnd", 0) + resolution_width = asset_data.get("resolutionWidth", 0) + resolution_height = asset_data.get("resolutionHeight", 0) duration = (frame_end - frame_start + 1) + handle_start + handle_end return { @@ -164,3 +112,49 @@ def get_asset_settings(asset_doc): "resolutionHeight": resolution_height, "duration": duration } + + +def set_settings(frames, resolution, comp_ids=None, print_msg=True): + """Sets number of frames and resolution to selected comps. 
+ + Args: + frames (bool): True if set frame info + resolution (bool): True if set resolution + comp_ids (list): specific composition ids, if empty + it tries to look for currently selected + print_msg (bool): True throw JS alert with msg + """ + frame_start = frames_duration = fps = width = height = None + current_context = get_current_context() + + asset_doc = get_asset_by_name(current_context["project_name"], + current_context["asset_name"]) + settings = get_asset_settings(asset_doc) + + msg = '' + if frames: + frame_start = settings["frameStart"] - settings["handleStart"] + frames_duration = settings["duration"] + fps = settings["fps"] + msg += f"frame start:{frame_start}, duration:{frames_duration}, "\ + f"fps:{fps}" + if resolution: + width = settings["resolutionWidth"] + height = settings["resolutionHeight"] + msg += f"width:{width} and height:{height}" + + stub = get_stub() + if not comp_ids: + comps = stub.get_selected_items(True, False, False) + comp_ids = [comp.id for comp in comps] + if not comp_ids: + stub.print_msg("Select at least one composition to apply settings.") + return + + for comp_id in comp_ids: + msg = f"Setting for comp {comp_id} " + msg + log.debug(msg) + stub.set_comp_properties(comp_id, frame_start, frames_duration, + fps, width, height) + if print_msg: + stub.print_msg(msg) diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index 95f6f3235b..27aee8c7ce 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -8,10 +8,7 @@ from openpype.lib import Logger, register_event_callback from openpype.pipeline import ( register_loader_plugin_path, register_creator_plugin_path, - deregister_loader_plugin_path, - deregister_creator_plugin_path, AVALON_CONTAINER_ID, - legacy_io, ) from openpype.pipeline.load import any_outdated_containers import openpype.hosts.aftereffects @@ -23,7 +20,8 @@ from openpype.host import ( IPublishHost ) -from .launch_logic import get_stub, ConnectionNotEstablishedYet +from .launch_logic import get_stub +from .ws_stub import ConnectionNotEstablishedYet log = Logger.get_logger(__name__) @@ -60,9 +58,6 @@ class AfterEffectsHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): print("Not connected yet, ignoring") return - if not stub.get_active_document_name(): - return - self._stub = stub return self._stub diff --git a/openpype/hosts/aftereffects/api/ws_stub.py b/openpype/hosts/aftereffects/api/ws_stub.py index f094c7fa2a..576c997f49 100644 --- a/openpype/hosts/aftereffects/api/ws_stub.py +++ b/openpype/hosts/aftereffects/api/ws_stub.py @@ -11,6 +11,10 @@ from wsrpc_aiohttp import WebSocketAsync from openpype.tools.adobe_webserver.app import WebServerTool +class ConnectionNotEstablishedYet(Exception): + pass + + @attr.s class AEItem(object): """ @@ -24,8 +28,8 @@ class AEItem(object): # all imported elements, single for # regular image, array for Backgrounds members = attr.ib(factory=list) - workAreaStart = attr.ib(default=None) - workAreaDuration = attr.ib(default=None) + frameStart = attr.ib(default=None) + framesDuration = attr.ib(default=None) frameRate = attr.ib(default=None) file_name = attr.ib(default=None) instance_id = attr.ib(default=None) # New Publisher @@ -355,42 +359,50 @@ class AfterEffectsServerStub(): return self._handle_return(res) - def get_work_area(self, item_id): - """ Get work are information for render purposes + def get_comp_properties(self, comp_id): + """ Get composition information for render purposes + + Returns 
startFrame, frameDuration, fps, width, height. + Args: - item_id (int): + comp_id (int): Returns: (AEItem) """ res = self.websocketserver.call(self.client.call - ('AfterEffects.get_work_area', - item_id=item_id + ('AfterEffects.get_comp_properties', + item_id=comp_id )) records = self._to_records(self._handle_return(res)) if records: return records.pop() - def set_work_area(self, item, start, duration, frame_rate): + def set_comp_properties(self, comp_id, start, duration, frame_rate, + width, height): """ Set work area to predefined values (from Ftrack). Work area directs what gets rendered. Beware of rounding, AE expects seconds, not frames directly. Args: - item (dict): - start (float): workAreaStart in seconds - duration (float): in seconds + comp_id (int): + start (int): workAreaStart in frames + duration (int): in frames frame_rate (float): frames in seconds + width (int): resolution width + height (int): resolution height """ res = self.websocketserver.call(self.client.call - ('AfterEffects.set_work_area', - item_id=item.id, + ('AfterEffects.set_comp_properties', + item_id=comp_id, start=start, duration=duration, - frame_rate=frame_rate)) + frame_rate=frame_rate, + width=width, + height=height)) return self._handle_return(res) def save(self): @@ -554,6 +566,12 @@ class AfterEffectsServerStub(): return self._handle_return(res) + def print_msg(self, msg): + """Triggers Javascript alert dialog.""" + self.websocketserver.call(self.client.call + ('AfterEffects.print_msg', + msg=msg)) + def _handle_return(self, res): """Wraps return, throws ValueError if 'error' key is present.""" if res and isinstance(res, str) and res != "undefined": @@ -608,8 +626,8 @@ class AfterEffectsServerStub(): d.get('name'), d.get('type'), d.get('members'), - d.get('workAreaStart'), - d.get('workAreaDuration'), + d.get('frameStart'), + d.get('framesDuration'), d.get('frameRate'), d.get('file_name'), d.get("instance_id"), @@ -618,3 +636,18 @@ class AfterEffectsServerStub(): ret.append(item) return ret + + +def get_stub(): + """ + Convenience function to get server RPC stub to call methods directed + for host (Photoshop). + It expects already created connection, started from client. 
+ Currently created when panel is opened (PS: Window>Extensions>Avalon) + :return: where functions could be called from + """ + ae_stub = AfterEffectsServerStub() + if not ae_stub.client: + raise ConnectionNotEstablishedYet("Connection is not created yet") + + return ae_stub diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py index c20b0ec51b..fa79fac78f 100644 --- a/openpype/hosts/aftereffects/plugins/create/create_render.py +++ b/openpype/hosts/aftereffects/plugins/create/create_render.py @@ -9,6 +9,7 @@ from openpype.pipeline import ( CreatorError ) from openpype.hosts.aftereffects.api.pipeline import cache_and_get_instances +from openpype.hosts.aftereffects.api.lib import set_settings from openpype.lib import prepare_template_data from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS @@ -26,15 +27,20 @@ class RenderCreator(Creator): create_allow_context_change = True - def __init__(self, project_settings, *args, **kwargs): - super(RenderCreator, self).__init__(project_settings, *args, **kwargs) - self._default_variants = (project_settings["aftereffects"] - ["create"] - ["RenderCreator"] - ["defaults"]) + # Settings + default_variants = [] + mark_for_review = True def create(self, subset_name_from_ui, data, pre_create_data): stub = api.get_stub() # only after After Effects is up + + try: + _ = stub.get_active_document_full_name() + except ValueError: + raise CreatorError( + "Please save workfile via Workfile app first!" + ) + if pre_create_data.get("use_selection"): comps = stub.get_selected_items( comps=True, folders=False, footages=False @@ -44,8 +50,8 @@ class RenderCreator(Creator): if not comps: raise CreatorError( - "Nothing to create. Select composition " - "if 'useSelection' or create at least " + "Nothing to create. Select composition in Project Bin if " + "'Use selection' is toggled or create at least " "one composition." 
) use_composition_name = (pre_create_data.get("use_composition_name") or @@ -82,28 +88,44 @@ class RenderCreator(Creator): use_farm = pre_create_data["farm"] new_instance.creator_attributes["farm"] = use_farm + review = pre_create_data["mark_for_review"] + new_instance.creator_attributes["mark_for_review"] = review + api.get_stub().imprint(new_instance.id, new_instance.data_to_store()) self._add_instance_to_context(new_instance) stub.rename_item(comp.id, subset_name) - - def get_default_variants(self): - return self._default_variants - - def get_instance_attr_defs(self): - return [BoolDef("farm", label="Render on farm")] + set_settings(True, True, [comp.id], print_msg=False) def get_pre_create_attr_defs(self): output = [ - BoolDef("use_selection", default=True, label="Use selection"), + BoolDef("use_selection", + tooltip="Composition for publishable instance should be " + "selected by default.", + default=True, label="Use selection"), BoolDef("use_composition_name", label="Use composition name in subset"), UISeparatorDef(), - BoolDef("farm", label="Render on farm") + BoolDef("farm", label="Render on farm"), + BoolDef( + "mark_for_review", + label="Review", + default=self.mark_for_review + ) ] return output + def get_instance_attr_defs(self): + return [ + BoolDef("farm", label="Render on farm"), + BoolDef( + "mark_for_review", + label="Review", + default=False + ) + ] + def get_icon(self): return resources.get_openpype_splash_filepath() @@ -143,6 +165,13 @@ class RenderCreator(Creator): api.get_stub().rename_item(comp_id, new_comp_name) + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["aftereffects"]["create"]["RenderCreator"] + ) + + self.mark_for_review = plugin_settings["mark_for_review"] + def get_detail_description(self): return """Creator for Render instances @@ -201,4 +230,7 @@ class RenderCreator(Creator): instance_data["creator_attributes"] = {"farm": is_old_farm} instance_data["family"] = self.family + if instance_data["creator_attributes"].get("mark_for_review") is None: + instance_data["creator_attributes"]["mark_for_review"] = True + return instance_data diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py index 6153a426cf..aa46461915 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py @@ -66,19 +66,19 @@ class CollectAERender(publish.AbstractCollectRender): comp_id = int(inst.data["members"][0]) - work_area_info = CollectAERender.get_stub().get_work_area(comp_id) + comp_info = CollectAERender.get_stub().get_comp_properties( + comp_id) - if not work_area_info: + if not comp_info: self.log.warning("Orphaned instance, deleting metadata") - inst_id = inst.get("instance_id") or str(comp_id) + inst_id = inst.data.get("instance_id") or str(comp_id) CollectAERender.get_stub().remove_instance(inst_id) continue - frame_start = work_area_info.workAreaStart - frame_end = round(work_area_info.workAreaStart + - float(work_area_info.workAreaDuration) * - float(work_area_info.frameRate)) - 1 - fps = work_area_info.frameRate + frame_start = comp_info.frameStart + frame_end = round(comp_info.frameStart + + comp_info.framesDuration) - 1 + fps = comp_info.frameRate # TODO add resolution when supported by extension task_name = inst.data.get("task") # legacy @@ -88,10 +88,11 @@ class CollectAERender(publish.AbstractCollectRender): raise ValueError("No file extension set 
in Render Queue") render_item = render_q[0] + instance_families = inst.data.get("families", []) subset_name = inst.data["subset"] instance = AERenderInstance( family="render", - families=inst.data.get("families", []), + families=instance_families, version=version, time="", source=current_file, @@ -109,6 +110,7 @@ class CollectAERender(publish.AbstractCollectRender): tileRendering=False, tilesX=0, tilesY=0, + review="review" in instance_families, frameStart=frame_start, frameEnd=frame_end, frameStep=1, @@ -139,6 +141,9 @@ class CollectAERender(publish.AbstractCollectRender): instance.toBeRenderedOn = "deadline" instance.renderer = "aerender" instance.farm = True # to skip integrate + if "review" in instance.families: + # to skip ExtractReview locally + instance.families.remove("review") instances.append(instance) instances_to_remove.append(inst) @@ -218,15 +223,4 @@ class CollectAERender(publish.AbstractCollectRender): if fam not in instance.families: instance.families.append(fam) - settings = get_project_settings(os.getenv("AVALON_PROJECT")) - reviewable_subset_filter = (settings["deadline"] - ["publish"] - ["ProcessSubmittedJobOnFarm"] - ["aov_filter"].get(self.hosts[0])) - for aov_pattern in reviewable_subset_filter: - if re.match(aov_pattern, instance.subset): - instance.families.append("review") - instance.review = True - break - return instance diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_review.py b/openpype/hosts/aftereffects/plugins/publish/collect_review.py new file mode 100644 index 0000000000..a933b9fed2 --- /dev/null +++ b/openpype/hosts/aftereffects/plugins/publish/collect_review.py @@ -0,0 +1,25 @@ +""" +Requires: + None + +Provides: + instance -> family ("review") +""" +import pyblish.api + + +class CollectReview(pyblish.api.ContextPlugin): + """Add review to families if instance created with 'mark_for_review' flag + """ + label = "Collect Review" + hosts = ["aftereffects"] + order = pyblish.api.CollectorOrder + 0.1 + + def process(self, context): + for instance in context: + creator_attributes = instance.data.get("creator_attributes") or {} + if ( + creator_attributes.get("mark_for_review") + and "review" not in instance.data["families"] + ): + instance.data["families"].append("review") diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index d535329eb4..c70aa41dbe 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -66,33 +66,9 @@ class ExtractLocalRender(publish.Extractor): first_repre = not representations if instance.data["review"] and first_repre: repre_data["tags"] = ["review"] + thumbnail_path = os.path.join(staging_dir, files[0]) + instance.data["thumbnailSource"] = thumbnail_path representations.append(repre_data) instance.data["representations"] = representations - - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Generate thumbnail. 
- thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") - - args = [ - ffmpeg_path, "-y", - "-i", first_file_path, - "-vf", "scale=300:-1", - "-vframes", "1", - thumbnail_path - ] - self.log.debug("Thumbnail args:: {}".format(args)) - try: - output = run_subprocess(args) - except TypeError: - self.log.warning("Error in creating thumbnail") - six.reraise(*sys.exc_info()) - - instance.data["representations"].append({ - "name": "thumbnail", - "ext": "jpg", - "files": os.path.basename(thumbnail_path), - "stagingDir": staging_dir, - "tags": ["thumbnail"] - }) diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py index 1b2e800769..c1d73eff02 100644 --- a/openpype/hosts/blender/plugins/load/load_abc.py +++ b/openpype/hosts/blender/plugins/load/load_abc.py @@ -65,37 +65,19 @@ class CacheModelLoader(plugin.AssetLoader): imported = lib.get_selection() - empties = [obj for obj in imported if obj.type == 'EMPTY'] - - container = None - - for empty in empties: - if not empty.parent: - container = empty - break - - assert container, "No asset group found" - # Children must be linked before parents, # otherwise the hierarchy will break objects = [] - nodes = list(container.children) - for obj in nodes: + for obj in imported: obj.parent = asset_group - bpy.data.objects.remove(container) - - for obj in nodes: + for obj in imported: objects.append(obj) - nodes.extend(list(obj.children)) + imported.extend(list(obj.children)) objects.reverse() - for obj in objects: - parent.objects.link(obj) - collection.objects.unlink(obj) - for obj in objects: name = obj.name obj.name = f"{group_name}:{name}" @@ -138,13 +120,14 @@ class CacheModelLoader(plugin.AssetLoader): group_name = plugin.asset_name(asset, subset, unique_number) namespace = namespace or f"{asset}_{unique_number}" - avalon_container = bpy.data.collections.get(AVALON_CONTAINERS) - if not avalon_container: - avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS) - bpy.context.scene.collection.children.link(avalon_container) + avalon_containers = bpy.data.collections.get(AVALON_CONTAINERS) + if not avalon_containers: + avalon_containers = bpy.data.collections.new( + name=AVALON_CONTAINERS) + bpy.context.scene.collection.children.link(avalon_containers) asset_group = bpy.data.objects.new(group_name, object_data=None) - avalon_container.objects.link(asset_group) + avalon_containers.objects.link(asset_group) objects = self._process(libpath, asset_group, group_name) diff --git a/openpype/hosts/fusion/api/__init__.py b/openpype/hosts/fusion/api/__init__.py index 495fe286d5..dba55a98d9 100644 --- a/openpype/hosts/fusion/api/__init__.py +++ b/openpype/hosts/fusion/api/__init__.py @@ -13,6 +13,7 @@ from .lib import ( update_frame_range, set_asset_framerange, get_current_comp, + get_bmd_library, comp_lock_and_undo_chunk ) diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index 40cc4d2963..cba8c38c2f 100644 --- a/openpype/hosts/fusion/api/lib.py +++ b/openpype/hosts/fusion/api/lib.py @@ -256,8 +256,11 @@ def switch_item(container, @contextlib.contextmanager -def maintained_selection(): - comp = get_current_comp() +def maintained_selection(comp=None): + """Reset comp selection from before the context after the context""" + if comp is None: + comp = get_current_comp() + previous_selection = comp.GetToolList(True).values() try: yield @@ -269,6 +272,33 @@ def maintained_selection(): flow.Select(tool, True) +@contextlib.contextmanager +def 
maintained_comp_range(comp=None, + global_start=True, + global_end=True, + render_start=True, + render_end=True): + """Reset comp frame ranges from before the context after the context""" + if comp is None: + comp = get_current_comp() + + comp_attrs = comp.GetAttrs() + preserve_attrs = {} + if global_start: + preserve_attrs["COMPN_GlobalStart"] = comp_attrs["COMPN_GlobalStart"] + if global_end: + preserve_attrs["COMPN_GlobalEnd"] = comp_attrs["COMPN_GlobalEnd"] + if render_start: + preserve_attrs["COMPN_RenderStart"] = comp_attrs["COMPN_RenderStart"] + if render_end: + preserve_attrs["COMPN_RenderEnd"] = comp_attrs["COMPN_RenderEnd"] + + try: + yield + finally: + comp.SetAttrs(preserve_attrs) + + def get_frame_path(path): """Get filename for the Fusion Saver with padded number as '#' @@ -309,6 +339,12 @@ def get_fusion_module(): return fusion +def get_bmd_library(): + """Get bmd library""" + bmd = getattr(sys.modules["__main__"], "bmd", None) + return bmd + + def get_current_comp(): """Get current comp in this session""" fusion = get_fusion_module() diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index cedc4029fa..04898d0a45 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -1,3 +1,4 @@ +from copy import deepcopy import os from openpype.hosts.fusion.api import ( @@ -11,15 +12,13 @@ from openpype.lib import ( ) from openpype.pipeline import ( legacy_io, - Creator, + Creator as NewCreator, CreatedInstance, -) -from openpype.client import ( - get_asset_by_name, + Anatomy ) -class CreateSaver(Creator): +class CreateSaver(NewCreator): identifier = "io.openpype.creators.fusion.saver" label = "Render (saver)" name = "render" @@ -28,9 +27,29 @@ class CreateSaver(Creator): description = "Fusion Saver to generate image sequence" icon = "fa5.eye" - instance_attributes = ["reviewable"] + instance_attributes = [ + "reviewable" + ] + default_variants = [ + "Main", + "Mask" + ] + + # TODO: This should be renamed together with Nuke so it is aligned + temp_rendering_path_template = ( + "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}") def create(self, subset_name, instance_data, pre_create_data): + self.pass_pre_attributes_to_instance( + instance_data, + pre_create_data + ) + + instance_data.update({ + "id": "pyblish.avalon.instance", + "subset": subset_name + }) + # TODO: Add pre_create attributes to choose file format? 
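The `maintained_comp_range` context manager introduced in `openpype/hosts/fusion/api/lib.py` above restores the comp's global and render ranges when the block exits. A minimal usage sketch, assuming an interactive Fusion session where `get_current_comp()` returns the active composition (the frame numbers and render flags here are only illustrative):

```python
from openpype.hosts.fusion.api.lib import (
    get_current_comp,
    maintained_comp_range,
)

comp = get_current_comp()

# Temporarily narrow the render range for a one-off render; the original
# COMPN_GlobalStart/End and COMPN_RenderStart/End values are restored on exit.
with maintained_comp_range(comp):
    comp.SetAttrs({
        "COMPN_RenderStart": 1001,
        "COMPN_RenderEnd": 1010,
    })
    comp.Render({"Wait": True})
```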
file_format = "OpenEXRFormat" @@ -39,7 +58,6 @@ class CreateSaver(Creator): args = (-32768, -32768) # Magical position numbers saver = comp.AddTool("Saver", *args) - instance_data["subset"] = subset_name self._update_tool_with_data(saver, data=instance_data) saver["OutputFormat"] = file_format @@ -78,7 +96,7 @@ class CreateSaver(Creator): for tool in tools: data = self.get_managed_tool_data(tool) if not data: - data = self._collect_unmanaged_saver(tool) + continue # Add instance created_instance = CreatedInstance.from_existing(data, self) @@ -125,60 +143,35 @@ class CreateSaver(Creator): original_subset = tool.GetData("openpype.subset") subset = data["subset"] if original_subset != subset: - # Subset change detected - # Update output filepath - workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"]) - filename = f"{subset}..exr" - filepath = os.path.join(workdir, "render", subset, filename) - tool["Clip"] = filepath + self._configure_saver_tool(data, tool, subset) - # Rename tool - if tool.Name != subset: - print(f"Renaming {tool.Name} -> {subset}") - tool.SetAttrs({"TOOLS_Name": subset}) + def _configure_saver_tool(self, data, tool, subset): + formatting_data = deepcopy(data) - def _collect_unmanaged_saver(self, tool): - # TODO: this should not be done this way - this should actually - # get the data as stored on the tool explicitly (however) - # that would disallow any 'regular saver' to be collected - # unless the instance data is stored on it to begin with - - print("Collecting unmanaged saver..") - comp = tool.Comp() - - # Allow regular non-managed savers to also be picked up - project = legacy_io.Session["AVALON_PROJECT"] - asset = legacy_io.Session["AVALON_ASSET"] - task = legacy_io.Session["AVALON_TASK"] - - asset_doc = get_asset_by_name(project_name=project, asset_name=asset) - - path = tool["Clip"][comp.TIME_UNDEFINED] - fname = os.path.basename(path) - fname, _ext = os.path.splitext(fname) - variant = fname.rstrip(".") - subset = self.get_subset_name( - variant=variant, - task_name=task, - asset_doc=asset_doc, - project_name=project, + # get frame padding from anatomy templates + anatomy = Anatomy() + frame_padding = int( + anatomy.templates["render"].get("frame_padding", 4) ) - attrs = tool.GetAttrs() - passthrough = attrs["TOOLB_PassThrough"] - return { - # Required data - "project": project, - "asset": asset, - "subset": subset, - "task": task, - "variant": variant, - "active": not passthrough, - "family": self.family, - # Unique identifier for instance and this creator - "id": "pyblish.avalon.instance", - "creator_identifier": self.identifier, - } + # Subset change detected + workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"]) + formatting_data.update({ + "workdir": workdir, + "frame": "0" * frame_padding, + "ext": "exr" + }) + + # build file path to render + filepath = self.temp_rendering_path_template.format( + **formatting_data) + + tool["Clip"] = os.path.normpath(filepath) + + # Rename tool + if tool.Name != subset: + print(f"Renaming {tool.Name} -> {subset}") + tool.SetAttrs({"TOOLS_Name": subset}) def get_managed_tool_data(self, tool): """Return data of the tool if it matches creator identifier""" @@ -206,20 +199,25 @@ class CreateSaver(Creator): attr_defs = [ self._get_render_target_enum(), self._get_reviewable_bool(), + self._get_frame_range_enum() ] return attr_defs def get_instance_attr_defs(self): """Settings for publish page""" - attr_defs = [ - self._get_render_target_enum(), - self._get_reviewable_bool(), - ] - return attr_defs + return 
self.get_pre_create_attr_defs() + + def pass_pre_attributes_to_instance( + self, + instance_data, + pre_create_data + ): + creator_attrs = instance_data["creator_attributes"] = {} + for pass_key in pre_create_data.keys(): + creator_attrs[pass_key] = pre_create_data[pass_key] # These functions below should be moved to another file # so it can be used by other plugins. plugin.py ? - def _get_render_target_enum(self): rendering_targets = { "local": "Local machine rendering", @@ -232,9 +230,44 @@ class CreateSaver(Creator): "render_target", items=rendering_targets, label="Render target" ) + def _get_frame_range_enum(self): + frame_range_options = { + "asset_db": "Current asset context", + "render_range": "From render in/out", + "comp_range": "From composition timeline" + } + + return EnumDef( + "frame_range_source", + items=frame_range_options, + label="Frame range source" + ) + def _get_reviewable_bool(self): return BoolDef( "review", default=("reviewable" in self.instance_attributes), label="Review", ) + + def apply_settings( + self, + project_settings, + system_settings + ): + """Method called on initialization of plugin to apply settings.""" + + # plugin settings + plugin_settings = ( + project_settings["fusion"]["create"][self.__class__.__name__] + ) + + # individual attributes + self.instance_attributes = plugin_settings.get( + "instance_attributes") or self.instance_attributes + self.default_variants = plugin_settings.get( + "default_variants") or self.default_variants + self.temp_rendering_path_template = ( + plugin_settings.get("temp_rendering_path_template") + or self.temp_rendering_path_template + ) diff --git a/openpype/hosts/fusion/plugins/load/load_fbx.py b/openpype/hosts/fusion/plugins/load/load_fbx.py index b8f501ae7e..c73ad78394 100644 --- a/openpype/hosts/fusion/plugins/load/load_fbx.py +++ b/openpype/hosts/fusion/plugins/load/load_fbx.py @@ -1,4 +1,3 @@ - from openpype.pipeline import ( load, get_representation_path, @@ -6,7 +5,7 @@ from openpype.pipeline import ( from openpype.hosts.fusion.api import ( imprint_container, get_current_comp, - comp_lock_and_undo_chunk + comp_lock_and_undo_chunk, ) @@ -15,7 +14,21 @@ class FusionLoadFBXMesh(load.LoaderPlugin): families = ["*"] representations = ["*"] - extensions = {"fbx"} + extensions = { + "3ds", + "amc", + "aoa", + "asf", + "bvh", + "c3d", + "dae", + "dxf", + "fbx", + "htr", + "mcd", + "obj", + "trc", + } label = "Load FBX mesh" order = -10 @@ -27,23 +40,24 @@ class FusionLoadFBXMesh(load.LoaderPlugin): def load(self, context, name, namespace, data): # Fallback to asset name when namespace is None if namespace is None: - namespace = context['asset']['name'] + namespace = context["asset"]["name"] # Create the Loader with the filename path set comp = get_current_comp() with comp_lock_and_undo_chunk(comp, "Create tool"): - path = self.fname args = (-32768, -32768) tool = comp.AddTool(self.tool_type, *args) tool["ImportFile"] = path - imprint_container(tool, - name=name, - namespace=namespace, - context=context, - loader=self.__class__.__name__) + imprint_container( + tool, + name=name, + namespace=namespace, + context=context, + loader=self.__class__.__name__, + ) def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/fusion/plugins/load/load_sequence.py b/openpype/hosts/fusion/plugins/load/load_sequence.py index 38fd41c8b2..552e282587 100644 --- a/openpype/hosts/fusion/plugins/load/load_sequence.py +++ b/openpype/hosts/fusion/plugins/load/load_sequence.py @@ -3,17 +3,14 
@@ import contextlib import openpype.pipeline.load as load from openpype.pipeline.load import ( get_representation_context, - get_representation_path_from_context + get_representation_path_from_context, ) from openpype.hosts.fusion.api import ( imprint_container, get_current_comp, - comp_lock_and_undo_chunk -) -from openpype.lib.transcoding import ( - IMAGE_EXTENSIONS, - VIDEO_EXTENSIONS + comp_lock_and_undo_chunk, ) +from openpype.lib.transcoding import IMAGE_EXTENSIONS, VIDEO_EXTENSIONS comp = get_current_comp() @@ -57,20 +54,23 @@ def preserve_trim(loader, log=None): try: yield finally: - length = loader.GetAttrs()["TOOLIT_Clip_Length"][1] - 1 if trim_from_start > length: trim_from_start = length if log: - log.warning("Reducing trim in to %d " - "(because of less frames)" % trim_from_start) + log.warning( + "Reducing trim in to %d " + "(because of less frames)" % trim_from_start + ) remainder = length - trim_from_start if trim_from_end > remainder: trim_from_end = remainder if log: - log.warning("Reducing trim in to %d " - "(because of less frames)" % trim_from_end) + log.warning( + "Reducing trim in to %d " + "(because of less frames)" % trim_from_end + ) loader["ClipTimeStart"][time] = trim_from_start loader["ClipTimeEnd"][time] = length - trim_from_end @@ -109,11 +109,15 @@ def loader_shift(loader, frame, relative=True): # Shifting global in will try to automatically compensate for the change # in the "ClipTimeStart" and "HoldFirstFrame" inputs, so we preserve those # input values to "just shift" the clip - with preserve_inputs(loader, inputs=["ClipTimeStart", - "ClipTimeEnd", - "HoldFirstFrame", - "HoldLastFrame"]): - + with preserve_inputs( + loader, + inputs=[ + "ClipTimeStart", + "ClipTimeEnd", + "HoldFirstFrame", + "HoldLastFrame", + ], + ): # GlobalIn cannot be set past GlobalOut or vice versa # so we must apply them in the order of the shift. 
         if shift > 0:
@@ -129,7 +133,14 @@
 class FusionLoadSequence(load.LoaderPlugin):
     """Load image sequence into Fusion"""
 
-    families = ["imagesequence", "review", "render", "plate"]
+    families = [
+        "imagesequence",
+        "review",
+        "render",
+        "plate",
+        "image",
+        "online",
+    ]
     representations = ["*"]
     extensions = set(
         ext.lstrip(".") for ext in IMAGE_EXTENSIONS.union(VIDEO_EXTENSIONS)
@@ -143,7 +154,7 @@ class FusionLoadSequence(load.LoaderPlugin):
     def load(self, context, name, namespace, data):
         # Fallback to asset name when namespace is None
         if namespace is None:
-            namespace = context['asset']['name']
+            namespace = context["asset"]["name"]
 
         # Use the first file for now
         path = get_representation_path_from_context(context)
@@ -151,7 +162,6 @@ class FusionLoadSequence(load.LoaderPlugin):
         # Create the Loader with the filename path set
         comp = get_current_comp()
         with comp_lock_and_undo_chunk(comp, "Create Loader"):
-
             args = (-32768, -32768)
             tool = comp.AddTool("Loader", *args)
             tool["Clip"] = path
@@ -160,11 +170,13 @@ class FusionLoadSequence(load.LoaderPlugin):
             start = self._get_start(context["version"], tool)
             loader_shift(tool, start, relative=False)
 
-            imprint_container(tool,
-                              name=name,
-                              namespace=namespace,
-                              context=context,
-                              loader=self.__class__.__name__)
+            imprint_container(
+                tool,
+                name=name,
+                namespace=namespace,
+                context=context,
+                loader=self.__class__.__name__,
+            )
 
     def switch(self, container, representation):
         self.update(container, representation)
@@ -222,24 +234,28 @@ class FusionLoadSequence(load.LoaderPlugin):
         start = self._get_start(context["version"], tool)
 
         with comp_lock_and_undo_chunk(comp, "Update Loader"):
-
             # Update the loader's path whilst preserving some values
             with preserve_trim(tool, log=self.log):
-                with preserve_inputs(tool,
-                                     inputs=("HoldFirstFrame",
-                                             "HoldLastFrame",
-                                             "Reverse",
-                                             "Depth",
-                                             "KeyCode",
-                                             "TimeCodeOffset")):
+                with preserve_inputs(
+                    tool,
+                    inputs=(
+                        "HoldFirstFrame",
+                        "HoldLastFrame",
+                        "Reverse",
+                        "Depth",
+                        "KeyCode",
+                        "TimeCodeOffset",
+                    ),
+                ):
                     tool["Clip"] = path
 
             # Set the global in to the start frame of the sequence
             global_in_changed = loader_shift(tool, start, relative=False)
             if global_in_changed:
                 # Log this change to the user
-                self.log.debug("Changed '%s' global in: %d" % (tool.Name,
-                                                               start))
+                self.log.debug(
+                    "Changed '%s' global in: %d" % (tool.Name, start)
+                )
 
             # Update the imprinted representation
             tool.SetData("avalon.representation", str(representation["_id"]))
@@ -264,9 +280,11 @@ class FusionLoadSequence(load.LoaderPlugin):
         # Get frame start without handles
         start = data.get("frameStart")
         if start is None:
-            self.log.warning("Missing start frame for version "
-                             "assuming starts at frame 0 for: "
-                             "{}".format(tool.Name))
+            self.log.warning(
+                "Missing start frame for version "
+                "assuming starts at frame 0 for: "
+                "{}".format(tool.Name)
+            )
             return 0
 
         # Use `handleStart` if the data is available
diff --git a/openpype/hosts/fusion/plugins/load/load_workfile.py b/openpype/hosts/fusion/plugins/load/load_workfile.py
new file mode 100644
index 0000000000..b49d104a15
--- /dev/null
+++ b/openpype/hosts/fusion/plugins/load/load_workfile.py
@@ -0,0 +1,32 @@
+"""Import workfiles into your current comp.
+As all imported nodes are free floating and will probably be changed there +is no update or reload function added for this plugin +""" + +from openpype.pipeline import load + +from openpype.hosts.fusion.api import ( + get_current_comp, + get_bmd_library, +) + + +class FusionLoadWorkfile(load.LoaderPlugin): + """Load the content of a workfile into Fusion""" + + families = ["workfile"] + representations = ["*"] + extensions = {"comp"} + + label = "Load Workfile" + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name, namespace, data): + # Get needed elements + bmd = get_bmd_library() + comp = get_current_comp() + + # Paste the content of the file into the current comp + comp.Paste(bmd.readfile(self.fname)) diff --git a/openpype/hosts/fusion/plugins/publish/collect_comp_frame_range.py b/openpype/hosts/fusion/plugins/publish/collect_comp_frame_range.py index fbd7606cd7..24a9a92337 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_comp_frame_range.py +++ b/openpype/hosts/fusion/plugins/publish/collect_comp_frame_range.py @@ -35,9 +35,10 @@ class CollectFusionCompFrameRanges(pyblish.api.ContextPlugin): # Store comp render ranges start, end, global_start, global_end = get_comp_render_range(comp) - context.data["frameStart"] = int(start) - context.data["frameEnd"] = int(end) - context.data["frameStartHandle"] = int(global_start) - context.data["frameEndHandle"] = int(global_end) - context.data["handleStart"] = int(start) - int(global_start) - context.data["handleEnd"] = int(global_end) - int(end) + + context.data.update({ + "renderFrameStart": int(start), + "renderFrameEnd": int(end), + "compFrameStart": int(global_start), + "compFrameEnd": int(global_end) + }) diff --git a/openpype/hosts/fusion/plugins/publish/collect_expected_frames.py b/openpype/hosts/fusion/plugins/publish/collect_expected_frames.py deleted file mode 100644 index 0ba777629f..0000000000 --- a/openpype/hosts/fusion/plugins/publish/collect_expected_frames.py +++ /dev/null @@ -1,50 +0,0 @@ -import pyblish.api -from openpype.pipeline import publish -import os - - -class CollectFusionExpectedFrames( - pyblish.api.InstancePlugin, publish.ColormanagedPyblishPluginMixin -): - """Collect all frames needed to publish expected frames""" - - order = pyblish.api.CollectorOrder + 0.5 - label = "Collect Expected Frames" - hosts = ["fusion"] - families = ["render"] - - def process(self, instance): - context = instance.context - - frame_start = context.data["frameStartHandle"] - frame_end = context.data["frameEndHandle"] - path = instance.data["path"] - output_dir = instance.data["outputDir"] - - basename = os.path.basename(path) - head, ext = os.path.splitext(basename) - files = [ - f"{head}{str(frame).zfill(4)}{ext}" - for frame in range(frame_start, frame_end + 1) - ] - repre = { - "name": ext[1:], - "ext": ext[1:], - "frameStart": f"%0{len(str(frame_end))}d" % frame_start, - "files": files, - "stagingDir": output_dir, - } - - self.set_representation_colorspace( - representation=repre, - context=context, - ) - - # review representation - if instance.data.get("review", False): - repre["tags"] = ["review"] - - # add the repre to the instance - if "representations" not in instance.data: - instance.data["representations"] = [] - instance.data["representations"].append(repre) diff --git a/openpype/hosts/fusion/plugins/publish/collect_fusion_version.py b/openpype/hosts/fusion/plugins/publish/collect_fusion_version.py deleted file mode 100644 index 65d8386f33..0000000000 --- 
a/openpype/hosts/fusion/plugins/publish/collect_fusion_version.py +++ /dev/null @@ -1,22 +0,0 @@ -import pyblish.api - - -class CollectFusionVersion(pyblish.api.ContextPlugin): - """Collect current comp""" - - order = pyblish.api.CollectorOrder - label = "Collect Fusion Version" - hosts = ["fusion"] - - def process(self, context): - """Collect all image sequence tools""" - - comp = context.data.get("currentComp") - if not comp: - raise RuntimeError("No comp previously collected, unable to " - "retrieve Fusion version.") - - version = comp.GetApp().Version - context.data["fusionVersion"] = version - - self.log.info("Fusion version: %s" % version) diff --git a/openpype/hosts/fusion/plugins/publish/collect_inputs.py b/openpype/hosts/fusion/plugins/publish/collect_inputs.py index 1bb3cd1220..a6628300db 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_inputs.py +++ b/openpype/hosts/fusion/plugins/publish/collect_inputs.py @@ -113,4 +113,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin): inputs = [c["representation"] for c in containers] instance.data["inputRepresentations"] = inputs - self.log.info("Collected inputs: %s" % inputs) + self.log.debug("Collected inputs: %s" % inputs) diff --git a/openpype/hosts/fusion/plugins/publish/collect_instances.py b/openpype/hosts/fusion/plugins/publish/collect_instances.py index af227f03db..6016baa2a9 100644 --- a/openpype/hosts/fusion/plugins/publish/collect_instances.py +++ b/openpype/hosts/fusion/plugins/publish/collect_instances.py @@ -1,5 +1,3 @@ -import os - import pyblish.api @@ -24,23 +22,63 @@ class CollectInstanceData(pyblish.api.InstancePlugin): creator_attributes = instance.data["creator_attributes"] instance.data.update(creator_attributes) - # Include start and end render frame in label - subset = instance.data["subset"] + frame_range_source = creator_attributes.get("frame_range_source") + instance.data["frame_range_source"] = frame_range_source + + # get asset frame ranges to all instances + # render family instances `asset_db` render target start = context.data["frameStart"] end = context.data["frameEnd"] - label = "{subset} ({start}-{end})".format(subset=subset, - start=int(start), - end=int(end)) + handle_start = context.data["handleStart"] + handle_end = context.data["handleEnd"] + start_with_handle = start - handle_start + end_with_handle = end + handle_end + + # conditions for render family instances + if frame_range_source == "render_range": + # set comp render frame ranges + start = context.data["renderFrameStart"] + end = context.data["renderFrameEnd"] + handle_start = 0 + handle_end = 0 + start_with_handle = start + end_with_handle = end + + if frame_range_source == "comp_range": + comp_start = context.data["compFrameStart"] + comp_end = context.data["compFrameEnd"] + render_start = context.data["renderFrameStart"] + render_end = context.data["renderFrameEnd"] + # set comp frame ranges + start = render_start + end = render_end + handle_start = render_start - comp_start + handle_end = comp_end - render_end + start_with_handle = comp_start + end_with_handle = comp_end + + # Include start and end render frame in label + subset = instance.data["subset"] + label = ( + "{subset} ({start}-{end}) [{handle_start}-{handle_end}]" + ).format( + subset=subset, + start=int(start), + end=int(end), + handle_start=int(handle_start), + handle_end=int(handle_end) + ) + instance.data.update({ "label": label, # todo: Allow custom frame range per instance - "frameStart": context.data["frameStart"], - "frameEnd": 
context.data["frameEnd"], - "frameStartHandle": context.data["frameStartHandle"], - "frameEndHandle": context.data["frameStartHandle"], - "handleStart": context.data["handleStart"], - "handleEnd": context.data["handleEnd"], + "frameStart": start, + "frameEnd": end, + "frameStartHandle": start_with_handle, + "frameEndHandle": end_with_handle, + "handleStart": handle_start, + "handleEnd": handle_end, "fps": context.data["fps"], }) @@ -49,31 +87,3 @@ class CollectInstanceData(pyblish.api.InstancePlugin): if instance.data.get("review", False): self.log.info("Adding review family..") instance.data["families"].append("review") - - if instance.data["family"] == "render": - # TODO: This should probably move into a collector of - # its own for the "render" family - from openpype.hosts.fusion.api.lib import get_frame_path - comp = context.data["currentComp"] - - # This is only the case for savers currently but not - # for workfile instances. So we assume saver here. - tool = instance.data["transientData"]["tool"] - path = tool["Clip"][comp.TIME_UNDEFINED] - - filename = os.path.basename(path) - head, padding, tail = get_frame_path(filename) - ext = os.path.splitext(path)[1] - assert tail == ext, ("Tail does not match %s" % ext) - - instance.data.update({ - "path": path, - "outputDir": os.path.dirname(path), - "ext": ext, # todo: should be redundant? - - # Backwards compatibility: embed tool in instance.data - "tool": tool - }) - - # Add tool itself as member - instance.append(tool) diff --git a/openpype/hosts/fusion/plugins/publish/collect_render.py b/openpype/hosts/fusion/plugins/publish/collect_render.py new file mode 100644 index 0000000000..a20a142701 --- /dev/null +++ b/openpype/hosts/fusion/plugins/publish/collect_render.py @@ -0,0 +1,209 @@ +import os +import attr +import pyblish.api + +from openpype.pipeline import publish +from openpype.pipeline.publish import RenderInstance +from openpype.hosts.fusion.api.lib import get_frame_path + + +@attr.s +class FusionRenderInstance(RenderInstance): + # extend generic, composition name is needed + fps = attr.ib(default=None) + projectEntity = attr.ib(default=None) + stagingDir = attr.ib(default=None) + app_version = attr.ib(default=None) + tool = attr.ib(default=None) + workfileComp = attr.ib(default=None) + publish_attributes = attr.ib(default={}) + frameStartHandle = attr.ib(default=None) + frameEndHandle = attr.ib(default=None) + + +class CollectFusionRender( + publish.AbstractCollectRender, + publish.ColormanagedPyblishPluginMixin +): + + order = pyblish.api.CollectorOrder + 0.09 + label = "Collect Fusion Render" + hosts = ["fusion"] + + def get_instances(self, context): + + comp = context.data.get("currentComp") + comp_frame_format_prefs = comp.GetPrefs("Comp.FrameFormat") + aspect_x = comp_frame_format_prefs["AspectX"] + aspect_y = comp_frame_format_prefs["AspectY"] + + instances = [] + instances_to_remove = [] + + current_file = context.data["currentFile"] + version = context.data["version"] + + project_entity = context.data["projectEntity"] + + for inst in context: + if not inst.data.get("active", True): + continue + + family = inst.data["family"] + if family != "render": + continue + + task_name = context.data["task"] + tool = inst.data["transientData"]["tool"] + + instance_families = inst.data.get("families", []) + subset_name = inst.data["subset"] + instance = FusionRenderInstance( + family="render", + tool=tool, + workfileComp=comp, + families=instance_families, + version=version, + time="", + source=current_file, + 
label=inst.data["label"], + subset=subset_name, + asset=inst.data["asset"], + task=task_name, + attachTo=False, + setMembers='', + publish=True, + name=subset_name, + resolutionWidth=comp_frame_format_prefs.get("Width"), + resolutionHeight=comp_frame_format_prefs.get("Height"), + pixelAspect=aspect_x / aspect_y, + tileRendering=False, + tilesX=0, + tilesY=0, + review="review" in instance_families, + frameStart=inst.data["frameStart"], + frameEnd=inst.data["frameEnd"], + handleStart=inst.data["handleStart"], + handleEnd=inst.data["handleEnd"], + frameStartHandle=inst.data["frameStartHandle"], + frameEndHandle=inst.data["frameEndHandle"], + frameStep=1, + fps=comp_frame_format_prefs.get("Rate"), + app_version=comp.GetApp().Version, + publish_attributes=inst.data.get("publish_attributes", {}) + ) + + render_target = inst.data["creator_attributes"]["render_target"] + + # Add render target family + render_target_family = f"render.{render_target}" + if render_target_family not in instance.families: + instance.families.append(render_target_family) + + # Add render target specific data + if render_target in {"local", "frames"}: + instance.projectEntity = project_entity + + if render_target == "farm": + fam = "render.farm" + if fam not in instance.families: + instance.families.append(fam) + instance.toBeRenderedOn = "deadline" + instance.farm = True # to skip integrate + if "review" in instance.families: + # to skip ExtractReview locally + instance.families.remove("review") + + # add new instance to the list and remove the original + # instance since it is not needed anymore + instances.append(instance) + instances_to_remove.append(inst) + + for instance in instances_to_remove: + context.remove(instance) + + return instances + + def post_collecting_action(self): + for instance in self._context: + if "render.frames" in instance.data.get("families", []): + # adding representation data to the instance + self._update_for_frames(instance) + + def get_expected_files(self, render_instance): + """ + Returns list of rendered files that should be created by + Deadline. These are not published directly, they are source + for later 'submit_publish_job'. + + Args: + render_instance (RenderInstance): to pull anatomy and parts used + in url + + Returns: + (list) of absolute urls to rendered file + """ + start = render_instance.frameStart - render_instance.handleStart + end = render_instance.frameEnd + render_instance.handleEnd + + path = ( + render_instance.tool["Clip"] + [render_instance.workfileComp.TIME_UNDEFINED] + ) + output_dir = os.path.dirname(path) + render_instance.outputDir = output_dir + + basename = os.path.basename(path) + + head, padding, ext = get_frame_path(basename) + + expected_files = [] + for frame in range(start, end + 1): + expected_files.append( + os.path.join( + output_dir, + f"{head}{str(frame).zfill(padding)}{ext}" + ) + ) + + return expected_files + + def _update_for_frames(self, instance): + """Updating instance for render.frames family + + Adding representation data to the instance. Also setting + colorspaceData to the representation based on file rules. 
+ """ + + expected_files = instance.data["expectedFiles"] + + start = instance.data["frameStart"] - instance.data["handleStart"] + + path = expected_files[0] + basename = os.path.basename(path) + staging_dir = os.path.dirname(path) + _, padding, ext = get_frame_path(basename) + + repre = { + "name": ext[1:], + "ext": ext[1:], + "frameStart": f"%0{padding}d" % start, + "files": [os.path.basename(f) for f in expected_files], + "stagingDir": staging_dir, + } + + self.set_representation_colorspace( + representation=repre, + context=instance.context, + ) + + # review representation + if instance.data.get("review", False): + repre["tags"] = ["review"] + + # add the repre to the instance + if "representations" not in instance.data: + instance.data["representations"] = [] + instance.data["representations"].append(repre) + + return instance diff --git a/openpype/hosts/fusion/plugins/publish/collect_renders.py b/openpype/hosts/fusion/plugins/publish/collect_renders.py deleted file mode 100644 index 7f38e68447..0000000000 --- a/openpype/hosts/fusion/plugins/publish/collect_renders.py +++ /dev/null @@ -1,25 +0,0 @@ -import pyblish.api - - -class CollectFusionRenders(pyblish.api.InstancePlugin): - """Collect current saver node's render Mode - - Options: - local (Render locally) - frames (Use existing frames) - - """ - - order = pyblish.api.CollectorOrder + 0.4 - label = "Collect Renders" - hosts = ["fusion"] - families = ["render"] - - def process(self, instance): - render_target = instance.data["render_target"] - family = instance.data["family"] - - # add targeted family to families - instance.data["families"].append( - "{}.{}".format(family, render_target) - ) diff --git a/openpype/hosts/fusion/plugins/publish/extract_render_local.py b/openpype/hosts/fusion/plugins/publish/extract_render_local.py index 5a0140c525..25c101cf00 100644 --- a/openpype/hosts/fusion/plugins/publish/extract_render_local.py +++ b/openpype/hosts/fusion/plugins/publish/extract_render_local.py @@ -1,8 +1,12 @@ +import os import logging import contextlib +import collections import pyblish.api -from openpype.hosts.fusion.api import comp_lock_and_undo_chunk +from openpype.pipeline import publish +from openpype.hosts.fusion.api import comp_lock_and_undo_chunk +from openpype.hosts.fusion.api.lib import get_frame_path, maintained_comp_range log = logging.getLogger(__name__) @@ -38,7 +42,10 @@ def enabled_savers(comp, savers): saver.SetAttrs({"TOOLB_PassThrough": original_state}) -class FusionRenderLocal(pyblish.api.InstancePlugin): +class FusionRenderLocal( + pyblish.api.InstancePlugin, + publish.ColormanagedPyblishPluginMixin +): """Render the current Fusion composition locally.""" order = pyblish.api.ExtractorOrder - 0.2 @@ -46,11 +53,16 @@ class FusionRenderLocal(pyblish.api.InstancePlugin): hosts = ["fusion"] families = ["render.local"] + is_rendered_key = "_fusionrenderlocal_has_rendered" + def process(self, instance): - context = instance.context # Start render - self.render_once(context) + result = self.render(instance) + if result is False: + raise RuntimeError(f"Comp render failed for {instance}") + + self._add_representation(instance) # Log render status self.log.info( @@ -61,39 +73,48 @@ class FusionRenderLocal(pyblish.api.InstancePlugin): ) ) - def render_once(self, context): - """Render context comp only once, even with more render instances""" + def render(self, instance): + """Render instance. - # This plug-in assumes all render nodes get rendered at the same time - # to speed up the rendering. 
The check below makes sure that we only - # execute the rendering once and not for each instance. - key = f"__hasRun{self.__class__.__name__}" + We try to render the minimal amount of times by combining the instances + that have a matching frame range in one Fusion render. Then for the + batch of instances we store whether the render succeeded or failed. - savers_to_render = [ - # Get the saver tool from the instance - instance[0] for instance in context if - # Only active instances - instance.data.get("publish", True) and - # Only render.local instances - "render.local" in instance.data["families"] - ] + """ - if key not in context.data: - # We initialize as false to indicate it wasn't successful yet - # so we can keep track of whether Fusion succeeded - context.data[key] = False + if self.is_rendered_key in instance.data: + # This instance was already processed in batch with another + # instance, so we just return the render result directly + self.log.debug(f"Instance {instance} was already rendered") + return instance.data[self.is_rendered_key] - current_comp = context.data["currentComp"] - frame_start = context.data["frameStartHandle"] - frame_end = context.data["frameEndHandle"] + instances_by_frame_range = self.get_render_instances_by_frame_range( + instance.context + ) - self.log.info("Starting Fusion render") - self.log.info(f"Start frame: {frame_start}") - self.log.info(f"End frame: {frame_end}") - saver_names = ", ".join(saver.Name for saver in savers_to_render) - self.log.info(f"Rendering tools: {saver_names}") + # Render matching batch of instances that share the same frame range + frame_range = self.get_instance_render_frame_range(instance) + render_instances = instances_by_frame_range[frame_range] - with comp_lock_and_undo_chunk(current_comp): + # We initialize render state false to indicate it wasn't successful + # yet to keep track of whether Fusion succeeded. 
This is for cases + # where an error below this might cause the comp render result not + # to be stored for the instances of this batch + for render_instance in render_instances: + render_instance.data[self.is_rendered_key] = False + + savers_to_render = [inst.data["tool"] for inst in render_instances] + current_comp = instance.context.data["currentComp"] + frame_start, frame_end = frame_range + + self.log.info( + f"Starting Fusion render frame range {frame_start}-{frame_end}" + ) + saver_names = ", ".join(saver.Name for saver in savers_to_render) + self.log.info(f"Rendering tools: {saver_names}") + + with comp_lock_and_undo_chunk(current_comp): + with maintained_comp_range(current_comp): with enabled_savers(current_comp, savers_to_render): result = current_comp.Render( { @@ -103,7 +124,76 @@ class FusionRenderLocal(pyblish.api.InstancePlugin): } ) - context.data[key] = bool(result) + # Store the render state for all the rendered instances + for render_instance in render_instances: + render_instance.data[self.is_rendered_key] = bool(result) - if context.data[key] is False: - raise RuntimeError("Comp render failed") + return result + + def _add_representation(self, instance): + """Add representation to instance""" + + expected_files = instance.data["expectedFiles"] + + start = instance.data["frameStart"] - instance.data["handleStart"] + + path = expected_files[0] + _, padding, ext = get_frame_path(path) + + staging_dir = os.path.dirname(path) + + repre = { + "name": ext[1:], + "ext": ext[1:], + "frameStart": f"%0{padding}d" % start, + "files": [os.path.basename(f) for f in expected_files], + "stagingDir": staging_dir, + } + + self.set_representation_colorspace( + representation=repre, + context=instance.context, + ) + + # review representation + if instance.data.get("review", False): + repre["tags"] = ["review"] + + # add the repre to the instance + if "representations" not in instance.data: + instance.data["representations"] = [] + instance.data["representations"].append(repre) + + return instance + + def get_render_instances_by_frame_range(self, context): + """Return enabled render.local instances grouped by their frame range. 
+ + Arguments: + context (pyblish.Context): The pyblish context + + Returns: + dict: (start, end): instances mapping + + """ + + instances_to_render = [ + instance for instance in context if + # Only active instances + instance.data.get("publish", True) and + # Only render.local instances + "render.local" in instance.data.get("families", []) + ] + + # Instances by frame ranges + instances_by_frame_range = collections.defaultdict(list) + for instance in instances_to_render: + start, end = self.get_instance_render_frame_range(instance) + instances_by_frame_range[(start, end)].append(instance) + + return dict(instances_by_frame_range) + + def get_instance_render_frame_range(self, instance): + start = instance.data["frameStartHandle"] + end = instance.data["frameEndHandle"] + return start, end diff --git a/openpype/hosts/fusion/plugins/publish/increment_current_file.py b/openpype/hosts/fusion/plugins/publish/increment_current_file.py index 42891446f7..08a65bf52d 100644 --- a/openpype/hosts/fusion/plugins/publish/increment_current_file.py +++ b/openpype/hosts/fusion/plugins/publish/increment_current_file.py @@ -1,29 +1,39 @@ import pyblish.api +from openpype.pipeline import OptionalPyblishPluginMixin +from openpype.pipeline import KnownPublishError -class FusionIncrementCurrentFile(pyblish.api.ContextPlugin): + +class FusionIncrementCurrentFile( + pyblish.api.ContextPlugin, OptionalPyblishPluginMixin +): """Increment the current file. Saves the current file with an increased version number. """ - label = "Increment current file" + label = "Increment workfile version" order = pyblish.api.IntegratorOrder + 9.0 hosts = ["fusion"] - families = ["workfile"] optional = True def process(self, context): + if not self.is_active(context.data): + return from openpype.lib import version_up from openpype.pipeline.publish import get_errored_plugins_from_context errored_plugins = get_errored_plugins_from_context(context) - if any(plugin.__name__ == "FusionSubmitDeadline" - for plugin in errored_plugins): - raise RuntimeError("Skipping incrementing current file because " - "submission to render farm failed.") + if any( + plugin.__name__ == "FusionSubmitDeadline" + for plugin in errored_plugins + ): + raise KnownPublishError( + "Skipping incrementing current file because " + "submission to render farm failed." 
+ ) comp = context.data.get("currentComp") assert comp, "Must have comp" diff --git a/openpype/hosts/fusion/plugins/publish/save_scene.py b/openpype/hosts/fusion/plugins/publish/save_scene.py index a249c453d8..0798e7c8b7 100644 --- a/openpype/hosts/fusion/plugins/publish/save_scene.py +++ b/openpype/hosts/fusion/plugins/publish/save_scene.py @@ -17,5 +17,5 @@ class FusionSaveComp(pyblish.api.ContextPlugin): current = comp.GetAttrs().get("COMPS_FileName", "") assert context.data['currentFile'] == current - self.log.info("Saving current file..") + self.log.info("Saving current file: {}".format(current)) comp.Save() diff --git a/openpype/hosts/fusion/plugins/publish/validate_background_depth.py b/openpype/hosts/fusion/plugins/publish/validate_background_depth.py index db2c4f0dd9..6908889eb4 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_background_depth.py +++ b/openpype/hosts/fusion/plugins/publish/validate_background_depth.py @@ -1,12 +1,17 @@ import pyblish.api -from openpype.pipeline.publish import RepairAction -from openpype.pipeline import PublishValidationError +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin, + PublishValidationError, +) from openpype.hosts.fusion.api.action import SelectInvalidAction -class ValidateBackgroundDepth(pyblish.api.InstancePlugin): +class ValidateBackgroundDepth( + pyblish.api.InstancePlugin, OptionalPyblishPluginMixin +): """Validate if all Background tool are set to float32 bit""" order = pyblish.api.ValidatorOrder @@ -15,11 +20,10 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin): families = ["render"] optional = True - actions = [SelectInvalidAction, RepairAction] + actions = [SelectInvalidAction, publish.RepairAction] @classmethod def get_invalid(cls, instance): - context = instance.context comp = context.data.get("currentComp") assert comp, "Must have Comp object" @@ -31,12 +35,16 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin): return [i for i in backgrounds if i.GetInput("Depth") != 4.0] def process(self, instance): + if not self.is_active(instance.data): + return + invalid = self.get_invalid(instance) if invalid: raise PublishValidationError( "Found {} Backgrounds tools which" " are not set to float32".format(len(invalid)), - title=self.label) + title=self.label, + ) @classmethod def repair(cls, instance): diff --git a/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py b/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py index 8a91f23578..35c92163eb 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py +++ b/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py @@ -21,7 +21,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - tool = instance[0] + tool = instance.data["tool"] create_dir = tool.GetInput("CreateDir") if create_dir == 0.0: cls.log.error( diff --git a/openpype/hosts/fusion/plugins/publish/validate_expected_frames_existence.py b/openpype/hosts/fusion/plugins/publish/validate_expected_frames_existence.py index c208b8ef15..3f84f59678 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_expected_frames_existence.py +++ b/openpype/hosts/fusion/plugins/publish/validate_expected_frames_existence.py @@ -14,7 +14,7 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin): order = pyblish.api.ValidatorOrder label = "Validate Expected Frames Exists" - families = ["render"] + families = ["render.frames"] hosts = 
["fusion"] actions = [RepairAction, SelectInvalidAction] @@ -23,31 +23,20 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin): if non_existing_frames is None: non_existing_frames = [] - if instance.data.get("render_target") == "frames": - tool = instance[0] + tool = instance.data["tool"] - frame_start = instance.data["frameStart"] - frame_end = instance.data["frameEnd"] - path = instance.data["path"] - output_dir = instance.data["outputDir"] + expected_files = instance.data["expectedFiles"] - basename = os.path.basename(path) - head, ext = os.path.splitext(basename) - files = [ - f"{head}{str(frame).zfill(4)}{ext}" - for frame in range(frame_start, frame_end + 1) - ] + for file in expected_files: + if not os.path.exists(file): + cls.log.error( + f"Missing file: {file}" + ) + non_existing_frames.append(file) - for file in files: - if not os.path.exists(os.path.join(output_dir, file)): - cls.log.error( - f"Missing file: {os.path.join(output_dir, file)}" - ) - non_existing_frames.append(file) - - if len(non_existing_frames) > 0: - cls.log.error(f"Some of {tool.Name}'s files does not exist") - return [tool] + if len(non_existing_frames) > 0: + cls.log.error(f"Some of {tool.Name}'s files does not exist") + return [tool] def process(self, instance): non_existing_frames = [] @@ -67,8 +56,7 @@ class ValidateLocalFramesExistence(pyblish.api.InstancePlugin): def repair(cls, instance): invalid = cls.get_invalid(instance) if invalid: - tool = invalid[0] - + tool = instance.data["tool"] # Change render target to local to render locally tool.SetData("openpype.creator_attributes.render_target", "local") diff --git a/openpype/hosts/fusion/plugins/publish/validate_filename_has_extension.py b/openpype/hosts/fusion/plugins/publish/validate_filename_has_extension.py index bbba2dde6e..537e43c875 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_filename_has_extension.py +++ b/openpype/hosts/fusion/plugins/publish/validate_filename_has_extension.py @@ -30,11 +30,11 @@ class ValidateFilenameHasExtension(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - path = instance.data["path"] + path = instance.data["expectedFiles"][0] fname, ext = os.path.splitext(path) if not ext: - tool = instance[0] + tool = instance.data["tool"] cls.log.error("%s has no extension specified" % tool.Name) return [tool] diff --git a/openpype/hosts/fusion/plugins/publish/validate_instance_frame_range.py b/openpype/hosts/fusion/plugins/publish/validate_instance_frame_range.py new file mode 100644 index 0000000000..06cd0ca186 --- /dev/null +++ b/openpype/hosts/fusion/plugins/publish/validate_instance_frame_range.py @@ -0,0 +1,41 @@ +import pyblish.api + +from openpype.pipeline import PublishValidationError + + +class ValidateInstanceFrameRange(pyblish.api.InstancePlugin): + """Validate instance frame range is within comp's global render range.""" + + order = pyblish.api.ValidatorOrder + label = "Validate Filename Has Extension" + families = ["render"] + hosts = ["fusion"] + + def process(self, instance): + + context = instance.context + global_start = context.data["compFrameStart"] + global_end = context.data["compFrameEnd"] + + render_start = instance.data["frameStartHandle"] + render_end = instance.data["frameEndHandle"] + + if render_start < global_start or render_end > global_end: + + message = ( + f"Instance {instance} render frame range " + f"({render_start}-{render_end}) is outside of the comp's " + f"global render range ({global_start}-{global_end}) and thus " + f"can't be rendered. 
" + ) + description = ( + f"{message}\n\n" + f"Either update the comp's global range or the instance's " + f"frame range to ensure the comp's frame range includes the " + f"to render frame range for the instance." + ) + raise PublishValidationError( + title="Frame range outside of comp range", + message=message, + description=description + ) diff --git a/openpype/hosts/fusion/plugins/publish/validate_saver_has_input.py b/openpype/hosts/fusion/plugins/publish/validate_saver_has_input.py index e02125f531..faf2102a8b 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_saver_has_input.py +++ b/openpype/hosts/fusion/plugins/publish/validate_saver_has_input.py @@ -20,7 +20,7 @@ class ValidateSaverHasInput(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - saver = instance[0] + saver = instance.data["tool"] if not saver.Input.GetConnectedOutput(): return [saver] diff --git a/openpype/hosts/fusion/plugins/publish/validate_saver_passthrough.py b/openpype/hosts/fusion/plugins/publish/validate_saver_passthrough.py index 56f2e7e6b8..9004976dc5 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_saver_passthrough.py +++ b/openpype/hosts/fusion/plugins/publish/validate_saver_passthrough.py @@ -37,7 +37,7 @@ class ValidateSaverPassthrough(pyblish.api.ContextPlugin): def is_invalid(self, instance): - saver = instance[0] + saver = instance.data["tool"] attr = saver.GetAttrs() active = not attr["TOOLB_PassThrough"] diff --git a/openpype/hosts/houdini/api/action.py b/openpype/hosts/houdini/api/action.py new file mode 100644 index 0000000000..27e8ce55bb --- /dev/null +++ b/openpype/hosts/houdini/api/action.py @@ -0,0 +1,46 @@ +import pyblish.api +import hou + +from openpype.pipeline.publish import get_errored_instances_from_context + + +class SelectInvalidAction(pyblish.api.Action): + """Select invalid nodes in Maya when plug-in failed. + + To retrieve the invalid nodes this assumes a static `get_invalid()` + method is available on the plugin. 
+ + """ + label = "Select invalid" + on = "failed" # This action is only available on a failed plug-in + icon = "search" # Icon from Awesome Icon + + def process(self, context, plugin): + + errored_instances = get_errored_instances_from_context(context) + + # Apply pyblish.logic to get the instances for the plug-in + instances = pyblish.api.instances_by_plugin(errored_instances, plugin) + + # Get the invalid nodes for the plug-ins + self.log.info("Finding invalid nodes..") + invalid = list() + for instance in instances: + invalid_nodes = plugin.get_invalid(instance) + if invalid_nodes: + if isinstance(invalid_nodes, (list, tuple)): + invalid.extend(invalid_nodes) + else: + self.log.warning("Plug-in returned to be invalid, " + "but has no selectable nodes.") + + hou.clearAllSelected() + if invalid: + self.log.info("Selecting invalid nodes: {}".format( + ", ".join(node.path() for node in invalid) + )) + for node in invalid: + node.setSelected(True) + node.setCurrent(True) + else: + self.log.info("No invalid nodes found.") diff --git a/openpype/hosts/houdini/api/creator_node_shelves.py b/openpype/hosts/houdini/api/creator_node_shelves.py index 3638e14296..7c6122cffe 100644 --- a/openpype/hosts/houdini/api/creator_node_shelves.py +++ b/openpype/hosts/houdini/api/creator_node_shelves.py @@ -12,26 +12,43 @@ import tempfile import logging import os +from openpype.client import get_asset_by_name from openpype.pipeline import registered_host from openpype.pipeline.create import CreateContext from openpype.resources import get_openpype_icon_filepath import hou +import stateutils +import soptoolutils +import loptoolutils +import cop2toolutils + log = logging.getLogger(__name__) +CATEGORY_GENERIC_TOOL = { + hou.sopNodeTypeCategory(): soptoolutils.genericTool, + hou.cop2NodeTypeCategory(): cop2toolutils.genericTool, + hou.lopNodeTypeCategory(): loptoolutils.genericTool +} + + CREATE_SCRIPT = """ from openpype.hosts.houdini.api.creator_node_shelves import create_interactive -create_interactive("{identifier}") +create_interactive("{identifier}", **kwargs) """ -def create_interactive(creator_identifier): +def create_interactive(creator_identifier, **kwargs): """Create a Creator using its identifier interactively. This is used by the generated shelf tools as callback when a user selects the creator from the node tab search menu. + The `kwargs` should be what Houdini passes to the tool create scripts + context. For more information see: + https://www.sidefx.com/docs/houdini/hom/tool_script.html#arguments + Args: creator_identifier (str): The creator identifier of the Creator plugin to create. @@ -58,6 +75,33 @@ def create_interactive(creator_identifier): host = registered_host() context = CreateContext(host) + creator = context.manual_creators.get(creator_identifier) + if not creator: + raise RuntimeError("Invalid creator identifier: " + "{}".format(creator_identifier)) + + # TODO: Once more elaborate unique create behavior should exist per Creator + # instead of per network editor area then we should move this from here + # to a method on the Creators for which this could be the default + # implementation. 
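The new Houdini `SelectInvalidAction` above follows the same contract as its Maya counterpart: the failed plug-in must expose a `get_invalid()` classmethod that returns `hou` nodes. A hypothetical validator wiring it up might look like the sketch below; the plugin name, the `instance_node` lookup and the bypass check are illustrative only and not part of this changeset:

```python
import pyblish.api
import hou

from openpype.pipeline import PublishValidationError
from openpype.hosts.houdini.api.action import SelectInvalidAction


class ValidateRopNotBypassed(pyblish.api.InstancePlugin):
    """Hypothetical example: fail when the instance's ROP node is bypassed."""

    order = pyblish.api.ValidatorOrder
    label = "Validate ROP Not Bypassed"
    hosts = ["houdini"]
    actions = [SelectInvalidAction]

    @classmethod
    def get_invalid(cls, instance):
        # Assumes the instance stores its node path under "instance_node".
        rop = hou.node(instance.data.get("instance_node", ""))
        if rop is not None and rop.isBypassed():
            return [rop]
        return []

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                "Bypassed ROP nodes: {}".format(
                    ", ".join(node.path() for node in invalid)
                )
            )
```

Because the action only calls `get_invalid()` and `node.setSelected(True)`, any validator that returns selectable `hou` nodes can reuse it unchanged.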
+ pane = stateutils.activePane(kwargs) + if isinstance(pane, hou.NetworkEditor): + pwd = pane.pwd() + subset_name = creator.get_subset_name( + variant=variant, + task_name=context.get_current_task_name(), + asset_doc=get_asset_by_name( + project_name=context.get_current_project_name(), + asset_name=context.get_current_asset_name() + ), + project_name=context.get_current_project_name(), + host_name=context.host_name + ) + + tool_fn = CATEGORY_GENERIC_TOOL.get(pwd.childTypeCategory()) + if tool_fn is not None: + out_null = tool_fn(kwargs, "null") + out_null.setName("OUT_{}".format(subset_name), unique_name=True) before = context.instances_by_id.copy() @@ -135,12 +179,20 @@ def install(): log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath)) tools = [] + with shelves_change_block(): for identifier, creator in create_context.manual_creators.items(): - # TODO: Allow the creator plug-in itself to override the categories - # for where they are shown, by e.g. defining - # `Creator.get_network_categories()` + # Allow the creator plug-in itself to override the categories + # for where they are shown with `Creator.get_network_categories()` + if not hasattr(creator, "get_network_categories"): + log.debug("Creator {} has no `get_network_categories` method " + "and will not be added to TAB search.") + continue + + network_categories = creator.get_network_categories() + if not network_categories: + continue key = "openpype_create.{}".format(identifier) log.debug(f"Registering {key}") @@ -153,17 +205,13 @@ def install(): creator.label ), "help_url": None, - "network_categories": [ - hou.ropNodeTypeCategory(), - hou.sopNodeTypeCategory() - ], + "network_categories": network_categories, "viewer_categories": [], "cop_viewer_categories": [], "network_op_type": None, "viewer_op_type": None, "locations": ["OpenPype"] } - label = "Create {}".format(creator.label) tool = hou.shelves.tool(key) if tool: diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 61274e6028..b8b8fefb52 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -81,7 +81,13 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): # TODO: make sure this doesn't trigger when # opening with last workfile. _set_context_settings() - shelves.generate_shelves() + + if not IS_HEADLESS: + import hdefereval # noqa, hdefereval is only available in ui mode + # Defer generation of shelves due to issue on Windows where shelf + # initialization during start up delays Houdini UI by minutes + # making it extremely slow to launch. + hdefereval.executeDeferred(shelves.generate_shelves) if not IS_HEADLESS: import hdefereval # noqa, hdefereval is only available in ui mode diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 340a7f0770..1e7eaa7e22 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -276,3 +276,19 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): color = hou.Color((0.616, 0.871, 0.769)) node.setUserData('nodeshape', shape) node.setColor(color) + + def get_network_categories(self): + """Return in which network view type this creator should show. + + The node type categories returned here will be used to define where + the creator will show up in the TAB search for nodes in Houdini's + Network View. + + This can be overridden in inherited classes to define where that + particular Creator should be visible in the TAB search. 
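+        By default only the ROP network category is returned.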
+ + Returns: + list: List of houdini node type categories + + """ + return [hou.ropNodeTypeCategory()] diff --git a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py index fec64eb4a1..8c8a5e9eed 100644 --- a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py +++ b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance, CreatorError +import hou + class CreateAlembicCamera(plugin.HoudiniCreator): """Single baked camera from Alembic ROP.""" @@ -47,3 +49,9 @@ class CreateAlembicCamera(plugin.HoudiniCreator): self.lock_parameters(instance_node, to_lock) instance_node.parm("trange").set(1) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.objNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_composite.py b/openpype/hosts/houdini/plugins/create/create_composite.py index 45af2b0630..9d4f7969bb 100644 --- a/openpype/hosts/houdini/plugins/create/create_composite.py +++ b/openpype/hosts/houdini/plugins/create/create_composite.py @@ -1,7 +1,9 @@ # -*- coding: utf-8 -*- """Creator plugin for creating composite sequences.""" from openpype.hosts.houdini.api import plugin -from openpype.pipeline import CreatedInstance +from openpype.pipeline import CreatedInstance, CreatorError + +import hou class CreateCompositeSequence(plugin.HoudiniCreator): @@ -35,8 +37,20 @@ class CreateCompositeSequence(plugin.HoudiniCreator): "copoutput": filepath } + if self.selected_nodes: + if len(self.selected_nodes) > 1: + raise CreatorError("More than one item selected.") + path = self.selected_nodes[0].path() + parms["coppath"] = path + instance_node.setParms(parms) # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.cop2NodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py index 6b6b277422..df74070fee 100644 --- a/openpype/hosts/houdini/plugins/create/create_pointcache.py +++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreatePointCache(plugin.HoudiniCreator): """Alembic ROP to pointcache""" @@ -49,3 +51,9 @@ class CreatePointCache(plugin.HoudiniCreator): # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.sopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_usd.py b/openpype/hosts/houdini/plugins/create/create_usd.py index 51ed8237c5..e05d254863 100644 --- a/openpype/hosts/houdini/plugins/create/create_usd.py +++ b/openpype/hosts/houdini/plugins/create/create_usd.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreateUSD(plugin.HoudiniCreator): """Universal Scene Description""" @@ -13,7 +15,6 @@ class CreateUSD(plugin.HoudiniCreator): enabled = False def create(self, subset_name, instance_data, pre_create_data): - import hou # noqa instance_data.pop("active", None) instance_data.update({"node_type": "usd"}) @@ 
-43,3 +44,9 @@ class CreateUSD(plugin.HoudiniCreator): "id", ] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.lopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py index 1a5011745f..c015cebd49 100644 --- a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py +++ b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreateVDBCache(plugin.HoudiniCreator): """OpenVDB from Geometry ROP""" @@ -34,3 +36,9 @@ class CreateVDBCache(plugin.HoudiniCreator): parms["soppath"] = self.selected_nodes[0].path() instance_node.setParms(parms) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.sopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_workfile.py b/openpype/hosts/houdini/plugins/create/create_workfile.py index 0c6d840810..1a8537adcd 100644 --- a/openpype/hosts/houdini/plugins/create/create_workfile.py +++ b/openpype/hosts/houdini/plugins/create/create_workfile.py @@ -14,7 +14,7 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator): identifier = "io.openpype.creators.houdini.workfile" label = "Workfile" family = "workfile" - icon = "document" + icon = "fa5.file" default_variant = "Main" diff --git a/openpype/hosts/houdini/plugins/publish/collect_frames.py b/openpype/hosts/houdini/plugins/publish/collect_frames.py index 6c695f64e9..059793e3c5 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_frames.py +++ b/openpype/hosts/houdini/plugins/publish/collect_frames.py @@ -8,7 +8,6 @@ import pyblish.api from openpype.hosts.houdini.api import lib - class CollectFrames(pyblish.api.InstancePlugin): """Collect all frames which would be saved from the ROP nodes""" @@ -34,8 +33,10 @@ class CollectFrames(pyblish.api.InstancePlugin): self.log.warning("Using current frame: {}".format(hou.frame())) output = output_parm.eval() - _, ext = lib.splitext(output, - allowed_multidot_extensions=[".ass.gz"]) + _, ext = lib.splitext( + output, + allowed_multidot_extensions=[".ass.gz"] + ) file_name = os.path.basename(output) result = file_name diff --git a/openpype/hosts/houdini/plugins/publish/collect_inputs.py b/openpype/hosts/houdini/plugins/publish/collect_inputs.py index 6411376ea3..e92a42f2e8 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_inputs.py +++ b/openpype/hosts/houdini/plugins/publish/collect_inputs.py @@ -117,4 +117,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin): inputs = [c["representation"] for c in containers] instance.data["inputRepresentations"] = inputs - self.log.info("Collected inputs: %s" % inputs) + self.log.debug("Collected inputs: %s" % inputs) diff --git a/openpype/hosts/houdini/plugins/publish/collect_instances.py b/openpype/hosts/houdini/plugins/publish/collect_instances.py index bb85630552..5d5347f96e 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_instances.py +++ b/openpype/hosts/houdini/plugins/publish/collect_instances.py @@ -55,7 +55,9 @@ class CollectInstances(pyblish.api.ContextPlugin): has_family = node.evalParm("family") assert has_family, "'%s' is missing 'family'" % node.name() - self.log.info("processing {}".format(node)) + self.log.info( + "Processing legacy instance node {}".format(node.path()) + ) data = lib.read(node) # Check bypass state 
and reverse diff --git a/openpype/hosts/houdini/plugins/publish/collect_review_data.py b/openpype/hosts/houdini/plugins/publish/collect_review_data.py index 8118e6d558..3efb75e66c 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_review_data.py +++ b/openpype/hosts/houdini/plugins/publish/collect_review_data.py @@ -19,6 +19,9 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin): instance.data["handleEnd"] = 0 instance.data["fps"] = instance.context.data["fps"] + # Enable ftrack functionality + instance.data.setdefault("families", []).append('ftrack') + # Get the camera from the rop node to collect the focal length ropnode_path = instance.data["instance_node"] ropnode = hou.node(ropnode_path) @@ -26,8 +29,9 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin): camera_path = ropnode.parm("camera").eval() camera_node = hou.node(camera_path) if not camera_node: - raise RuntimeError("No valid camera node found on review node: " - "{}".format(camera_path)) + self.log.warning("No valid camera node found on review node: " + "{}".format(camera_path)) + return # Collect focal length. focal_length_parm = camera_node.parm("focal") @@ -49,5 +53,3 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin): # Store focal length in `burninDataMembers` burnin_members = instance.data.setdefault("burninDataMembers", {}) burnin_members["focalLength"] = focal_length - - instance.data.setdefault("families", []).append('ftrack') diff --git a/openpype/hosts/houdini/plugins/publish/collect_workfile.py b/openpype/hosts/houdini/plugins/publish/collect_workfile.py index a6e94ec29e..aa533bcf1b 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_workfile.py +++ b/openpype/hosts/houdini/plugins/publish/collect_workfile.py @@ -32,5 +32,4 @@ class CollectWorkfile(pyblish.api.InstancePlugin): "stagingDir": folder, }] - self.log.info('Collected instance: {}'.format(file)) - self.log.info('staging Dir: {}'.format(folder)) + self.log.debug('Collected workfile instance: {}'.format(file)) diff --git a/openpype/hosts/houdini/plugins/publish/extract_opengl.py b/openpype/hosts/houdini/plugins/publish/extract_opengl.py index c26d0813a6..6c36dec5f5 100644 --- a/openpype/hosts/houdini/plugins/publish/extract_opengl.py +++ b/openpype/hosts/houdini/plugins/publish/extract_opengl.py @@ -2,27 +2,20 @@ import os import pyblish.api -from openpype.pipeline import ( - publish, - OptionalPyblishPluginMixin -) +from openpype.pipeline import publish from openpype.hosts.houdini.api.lib import render_rop import hou -class ExtractOpenGL(publish.Extractor, - OptionalPyblishPluginMixin): +class ExtractOpenGL(publish.Extractor): order = pyblish.api.ExtractorOrder - 0.01 label = "Extract OpenGL" families = ["review"] hosts = ["houdini"] - optional = True def process(self, instance): - if not self.is_active(instance.data): - return ropnode = hou.node(instance.data.get("instance_node")) output = ropnode.evalParm("picture") diff --git a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml deleted file mode 100644 index 0f92560bf7..0000000000 --- a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - -Scene setting - -## Invalid input node - -VDB input must have the same number of VDBs, points, primitives and vertices as output. 
- - - -### __Detailed Info__ (optional) - -A VDB is an inherited type of Prim, holds the following data: - - Primitives: 1 - - Points: 1 - - Vertices: 1 - - VDBs: 1 - - - \ No newline at end of file diff --git a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml new file mode 100644 index 0000000000..eb83bfffe3 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml @@ -0,0 +1,28 @@ + + + +Invalid VDB + +## Invalid VDB output + +All primitives of the output geometry must be VDBs, no other primitive +types are allowed. That means that regardless of the amount of VDBs in the +geometry it will have an equal amount of VDBs, points, primitives and +vertices since each VDB primitive is one point, one vertex and one VDB. + +This validation only checks the geometry on the first frame of the export +frame range. + + + + + +### Detailed Info + +ROP node `{rop_path}` is set to export SOP path `{sop_path}`. + +{message} + + + + \ No newline at end of file diff --git a/openpype/hosts/houdini/plugins/publish/save_scene.py b/openpype/hosts/houdini/plugins/publish/save_scene.py index d6e07ccab0..703d3e4895 100644 --- a/openpype/hosts/houdini/plugins/publish/save_scene.py +++ b/openpype/hosts/houdini/plugins/publish/save_scene.py @@ -20,7 +20,7 @@ class SaveCurrentScene(pyblish.api.ContextPlugin): ) if host.has_unsaved_changes(): - self.log.info("Saving current file {}...".format(current_file)) + self.log.info("Saving current file: {}".format(current_file)) host.save_workfile(current_file) else: self.log.debug("No unsaved changes, skipping file save..") diff --git a/openpype/hosts/houdini/plugins/publish/validate_scene_review.py b/openpype/hosts/houdini/plugins/publish/validate_scene_review.py index ade01d4b90..a44b7e1597 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_scene_review.py +++ b/openpype/hosts/houdini/plugins/publish/validate_scene_review.py @@ -16,15 +16,19 @@ class ValidateSceneReview(pyblish.api.InstancePlugin): label = "Scene Setting for review" def process(self, instance): - invalid = self.get_invalid_scene_path(instance) report = [] - if invalid: - report.append( - "Scene path does not exist: '%s'" % invalid[0], - ) + instance_node = hou.node(instance.data.get("instance_node")) - invalid = self.get_invalid_resolution(instance) + invalid = self.get_invalid_scene_path(instance_node) + if invalid: + report.append(invalid) + + invalid = self.get_invalid_camera_path(instance_node) + if invalid: + report.append(invalid) + + invalid = self.get_invalid_resolution(instance_node) if invalid: report.extend(invalid) @@ -33,26 +37,36 @@ class ValidateSceneReview(pyblish.api.InstancePlugin): "\n\n".join(report), title=self.label) - def get_invalid_scene_path(self, instance): - - node = hou.node(instance.data.get("instance_node")) - scene_path_parm = node.parm("scenepath") + def get_invalid_scene_path(self, rop_node): + scene_path_parm = rop_node.parm("scenepath") scene_path_node = scene_path_parm.evalAsNode() if not scene_path_node: - return [scene_path_parm.evalAsString()] + path = scene_path_parm.evalAsString() + return "Scene path does not exist: '{}'".format(path) - def get_invalid_resolution(self, instance): - node = hou.node(instance.data.get("instance_node")) + def get_invalid_camera_path(self, rop_node): + camera_path_parm = rop_node.parm("camera") + camera_node = camera_path_parm.evalAsNode() + path = camera_path_parm.evalAsString() + if not 
camera_node: + return "Camera path does not exist: '{}'".format(path) + type_name = camera_node.type().name() + if type_name != "cam": + return "Camera path is not a camera: '{}' (type: {})".format( + path, type_name + ) + + def get_invalid_resolution(self, rop_node): # The resolution setting is only used when Override Camera Resolution # is enabled. So we skip validation if it is disabled. - override = node.parm("tres").eval() + override = rop_node.parm("tres").eval() if not override: return invalid = [] - res_width = node.parm("res1").eval() - res_height = node.parm("res2").eval() + res_width = rop_node.parm("res1").eval() + res_height = rop_node.parm("res2").eval() if res_width == 0: invalid.append("Override Resolution width is set to zero.") if res_height == 0: diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py deleted file mode 100644 index 1f9ccc9c42..0000000000 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py +++ /dev/null @@ -1,52 +0,0 @@ -# -*- coding: utf-8 -*- -import pyblish.api -from openpype.pipeline import ( - PublishValidationError -) - - -class ValidateVDBInputNode(pyblish.api.InstancePlugin): - """Validate that the node connected to the output node is of type VDB. - - Regardless of the amount of VDBs create the output will need to have an - equal amount of VDBs, points, primitives and vertices - - A VDB is an inherited type of Prim, holds the following data: - - Primitives: 1 - - Points: 1 - - Vertices: 1 - - VDBs: 1 - - """ - - order = pyblish.api.ValidatorOrder + 0.1 - families = ["vdbcache"] - hosts = ["houdini"] - label = "Validate Input Node (VDB)" - - def process(self, instance): - invalid = self.get_invalid(instance) - if invalid: - raise PublishValidationError( - self, - "Node connected to the output node is not of type VDB", - title=self.label - ) - - @classmethod - def get_invalid(cls, instance): - - node = instance.data["output_node"] - - prims = node.geometry().prims() - nr_of_prims = len(prims) - - nr_of_points = len(node.geometry().points()) - if nr_of_points != nr_of_prims: - cls.log.error("The number of primitives and points do not match") - return [instance] - - for prim in prims: - if prim.numVertices() != 1: - cls.log.error("Found primitive with more than 1 vertex!") - return [instance] diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py index f9f88b3bf9..674782179c 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py @@ -1,14 +1,73 @@ # -*- coding: utf-8 -*- +import contextlib + import pyblish.api import hou -from openpype.pipeline import PublishValidationError + +from openpype.pipeline import PublishXmlValidationError +from openpype.hosts.houdini.api.action import SelectInvalidAction + + +def group_consecutive_numbers(nums): + """ + Args: + nums (list): List of sorted integer numbers. 
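+            For example, [1, 2, 3, 7, 9, 10] yields "1-3", "7" and "9-10".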
+ + Yields: + str: Group ranges as {start}-{end} if more than one number in the range + else it yields {end} + + """ + start = None + end = None + + def _result(a, b): + if a == b: + return "{}".format(a) + else: + return "{}-{}".format(a, b) + + for num in nums: + if start is None: + start = num + end = num + elif num == end + 1: + end = num + else: + yield _result(start, end) + start = num + end = num + if start is not None: + yield _result(start, end) + + +@contextlib.contextmanager +def update_mode_context(mode): + original = hou.updateModeSetting() + try: + hou.setUpdateMode(mode) + yield + finally: + hou.setUpdateMode(original) + + +def get_geometry_at_frame(sop_node, frame, force=True): + """Return geometry at frame but force a cooked value.""" + with update_mode_context(hou.updateMode.AutoUpdate): + sop_node.cook(force=force, frame_range=(frame, frame)) + return sop_node.geometryAtFrame(frame) class ValidateVDBOutputNode(pyblish.api.InstancePlugin): """Validate that the node connected to the output node is of type VDB. - Regardless of the amount of VDBs create the output will need to have an - equal amount of VDBs, points, primitives and vertices + All primitives of the output geometry must be VDBs, no other primitive + types are allowed. That means that regardless of the amount of VDBs in the + geometry it will have an equal amount of VDBs, points, primitives and + vertices since each VDB primitive is one point, one vertex and one VDB. + + This validation only checks the geometry on the first frame of the export + frame range for optimization purposes. A VDB is an inherited type of Prim, holds the following data: - Primitives: 1 @@ -22,54 +81,95 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): families = ["vdbcache"] hosts = ["houdini"] label = "Validate Output Node (VDB)" + actions = [SelectInvalidAction] def process(self, instance): - invalid = self.get_invalid(instance) - if invalid: - raise PublishValidationError( - "Node connected to the output node is not" " of type VDB!", - title=self.label + invalid_nodes, message = self.get_invalid_with_message(instance) + if invalid_nodes: + + # instance_node is str, but output_node is hou.Node so we convert + output = instance.data.get("output_node") + output_path = output.path() if output else None + + raise PublishXmlValidationError( + self, + "Invalid VDB content: {}".format(message), + formatting_data={ + "message": message, + "rop_path": instance.data.get("instance_node"), + "sop_path": output_path + } ) @classmethod - def get_invalid(cls, instance): + def get_invalid_with_message(cls, instance): - node = instance.data["output_node"] + node = instance.data.get("output_node") if node is None: - cls.log.error( + instance_node = instance.data.get("instance_node") + error = ( "SOP path is not correctly set on " - "ROP node '%s'." % instance.data.get("instance_node") + "ROP node `{}`.".format(instance_node) ) - return [instance] + return [hou.node(instance_node), error] frame = instance.data.get("frameStart", 0) - geometry = node.geometryAtFrame(frame) + geometry = get_geometry_at_frame(node, frame) if geometry is None: # No geometry data on this node, maybe the node hasn't cooked? - cls.log.error( - "SOP node has no geometry data. " - "Is it cooked? %s" % node.path() + error = ( + "SOP node `{}` has no geometry data. 
" + "Was it unable to cook?".format(node.path()) ) - return [node] + return [node, error] - prims = geometry.prims() - nr_of_prims = len(prims) + num_prims = geometry.intrinsicValue("primitivecount") + num_points = geometry.intrinsicValue("pointcount") + if num_prims == 0 and num_points == 0: + # Since we are only checking the first frame it doesn't mean there + # won't be VDB prims in a few frames. As such we'll assume for now + # the user knows what he or she is doing + cls.log.warning( + "SOP node `{}` has no primitives on start frame {}. " + "Validation is skipped and it is assumed elsewhere in the " + "frame range VDB prims and only VDB prims will exist." + "".format(node.path(), int(frame)) + ) + return [None, None] - # All primitives must be hou.VDB - invalid_prim = False - for prim in prims: - if not isinstance(prim, hou.VDB): - cls.log.error("Found non-VDB primitive: %s" % prim) - invalid_prim = True - if invalid_prim: - return [instance] + num_vdb_prims = geometry.countPrimType(hou.primType.VDB) + cls.log.debug("Detected {} VDB primitives".format(num_vdb_prims)) + if num_prims != num_vdb_prims: + # There's at least one primitive that is not a VDB. + # Search them and report them to the artist. + prims = geometry.prims() + invalid_prims = [prim for prim in prims + if not isinstance(prim, hou.VDB)] + if invalid_prims: + # Log prim numbers as consecutive ranges so logging isn't very + # slow for large number of primitives + error = ( + "Found non-VDB primitives for `{}`. " + "Primitive indices {} are not VDB primitives.".format( + node.path(), + ", ".join(group_consecutive_numbers( + prim.number() for prim in invalid_prims + )) + ) + ) + return [node, error] - nr_of_points = len(geometry.points()) - if nr_of_points != nr_of_prims: - cls.log.error("The number of primitives and points do not match") - return [instance] + if num_points != num_vdb_prims: + # We have points unrelated to the VDB primitives. + error = ( + "The number of primitives and points do not match in '{}'. 
" + "This likely means you have unconnected points, which we do " + "not allow in the VDB output.".format(node.path())) + return [node, error] - for prim in prims: - if prim.numVertices() != 1: - cls.log.error("Found primitive with more than 1 vertex!") - return [instance] + return [None, None] + + @classmethod + def get_invalid(cls, instance): + nodes, _ = cls.get_invalid_with_message(instance) + return nodes diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py index 7707cc2dba..543c8e1407 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py @@ -28,18 +28,37 @@ class ValidateWorkfilePaths( if not self.is_active(instance.data): return invalid = self.get_invalid() - self.log.info( - "node types to check: {}".format(", ".join(self.node_types))) - self.log.info( - "prohibited vars: {}".format(", ".join(self.prohibited_vars)) + self.log.debug( + "Checking node types: {}".format(", ".join(self.node_types))) + self.log.debug( + "Searching prohibited vars: {}".format( + ", ".join(self.prohibited_vars) + ) ) - if invalid: - for param in invalid: - self.log.error( - "{}: {}".format(param.path(), param.unexpandedString())) - raise PublishValidationError( - "Invalid paths found", title=self.label) + if invalid: + all_container_vars = set() + for param in invalid: + value = param.unexpandedString() + contained_vars = [ + var for var in self.prohibited_vars + if var in value + ] + all_container_vars.update(contained_vars) + + self.log.error( + "Parm {} contains prohibited vars {}: {}".format( + param.path(), + ", ".join(contained_vars), + value) + ) + + message = ( + "Prohibited vars {} found in parameter values".format( + ", ".join(all_container_vars) + ) + ) + raise PublishValidationError(message, title=self.label) @classmethod def get_invalid(cls): @@ -63,7 +82,7 @@ class ValidateWorkfilePaths( def repair(cls, instance): invalid = cls.get_invalid() for param in invalid: - cls.log.info("processing: {}".format(param.path())) + cls.log.info("Processing: {}".format(param.path())) cls.log.info("Replacing {} for {}".format( param.unexpandedString(), hou.text.expandString(param.unexpandedString()))) diff --git a/openpype/hosts/max/api/lib.py b/openpype/hosts/max/api/lib.py index 27d4598a3a..e2af0720ec 100644 --- a/openpype/hosts/max/api/lib.py +++ b/openpype/hosts/max/api/lib.py @@ -145,7 +145,7 @@ def get_default_render_folder(project_setting=None): ["default_render_image_folder"]) -def set_framerange(start_frame, end_frame): +def set_render_frame_range(start_frame, end_frame): """ Note: Frame range can be specified in different types. Possible values are: @@ -157,10 +157,10 @@ def set_framerange(start_frame, end_frame): Todo: Current type is hard-coded, there should be a custom setting for this. """ - rt.rendTimeType = 4 + rt.rendTimeType = 3 if start_frame is not None and end_frame is not None: - frame_range = "{0}-{1}".format(start_frame, end_frame) - rt.rendPickupFrames = frame_range + rt.rendStart = int(start_frame) + rt.rendEnd = int(end_frame) def get_multipass_setting(project_setting=None): @@ -180,10 +180,16 @@ def set_scene_resolution(width: int, height: int): None """ + # make sure the render dialog is closed + # for the update of resolution + # Changing the Render Setup dialog settingsshould be done + # with the actual Render Setup dialog in a closed state. 
+ if rt.renderSceneDialog.isOpen(): + rt.renderSceneDialog.close() + rt.renderWidth = width rt.renderHeight = height - def reset_scene_resolution(): """Apply the scene resolution from the project definition @@ -246,10 +252,15 @@ def reset_frame_range(fps: bool = True): fps_number = float(data_fps["data"]["fps"]) rt.frameRate = fps_number frame_range = get_frame_range() - frame_start = frame_range["frameStart"] - int(frame_range["handleStart"]) - frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"]) - frange_cmd = f"animationRange = interval {frame_start} {frame_end}" + frame_start_handle = frame_range["frameStart"] - int( + frame_range["handleStart"] + ) + frame_end_handle = frame_range["frameEnd"] + int(frame_range["handleEnd"]) + frange_cmd = ( + f"animationRange = interval {frame_start_handle} {frame_end_handle}" + ) rt.execute(frange_cmd) + set_render_frame_range(frame_start_handle, frame_end_handle) def set_context_setting(): @@ -266,6 +277,7 @@ def set_context_setting(): None """ reset_scene_resolution() + reset_frame_range() def get_max_version(): diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py index 350eb97661..8224d589ad 100644 --- a/openpype/hosts/max/api/lib_renderproducts.py +++ b/openpype/hosts/max/api/lib_renderproducts.py @@ -36,8 +36,9 @@ class RenderProducts(object): container) context = get_current_project_asset() - startFrame = context["data"].get("frameStart") - endFrame = context["data"].get("frameEnd") + 1 + # TODO: change the frame range follows the current render setting + startFrame = int(rt.rendStart) + endFrame = int(rt.rendEnd) + 1 img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa full_render_list = self.beauty_render_product(output_file, diff --git a/openpype/hosts/max/api/lib_rendersettings.py b/openpype/hosts/max/api/lib_rendersettings.py index 4940265a23..91e4a5bf9b 100644 --- a/openpype/hosts/max/api/lib_rendersettings.py +++ b/openpype/hosts/max/api/lib_rendersettings.py @@ -6,7 +6,7 @@ from openpype.pipeline import legacy_io from openpype.pipeline.context_tools import get_current_project_asset from openpype.hosts.max.api.lib import ( - set_framerange, + set_render_frame_range, get_current_renderer, get_default_render_folder ) @@ -68,7 +68,7 @@ class RenderSettings(object): # Set Frame Range frame_start = context["data"].get("frame_start") frame_end = context["data"].get("frame_end") - set_framerange(frame_start, frame_end) + set_render_frame_range(frame_start, frame_end) # get the production render renderer_class = get_current_renderer() renderer = str(renderer_class).split(":")[0] @@ -105,6 +105,9 @@ class RenderSettings(object): rt.rendSaveFile = True + if rt.renderSceneDialog.isOpen(): + rt.renderSceneDialog.close() + def arnold_setup(self): # get Arnold RenderView run in the background # for setting up renderable camera diff --git a/openpype/hosts/max/api/pipeline.py b/openpype/hosts/max/api/pipeline.py index dacc402318..50fe30b299 100644 --- a/openpype/hosts/max/api/pipeline.py +++ b/openpype/hosts/max/api/pipeline.py @@ -52,6 +52,7 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, INewPublisher): def context_setting(): return lib.set_context_setting() + rt.callbacks.addScript(rt.Name('systemPostNew'), context_setting) diff --git a/openpype/hosts/max/plugins/create/create_model.py b/openpype/hosts/max/plugins/create/create_model.py new file mode 100644 index 0000000000..e7ae3af9db --- /dev/null +++ 
b/openpype/hosts/max/plugins/create/create_model.py @@ -0,0 +1,28 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for model.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreateModel(plugin.MaxCreator): + identifier = "io.openpype.creators.max.model" + label = "Model" + family = "model" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + from pymxs import runtime as rt + instance = super(CreateModel, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + container = rt.getNodeByName(instance.data.get("instance_node")) + # TODO: Disable "Add to Containers?" Panel + # parent the selected cameras into the container + sel_obj = None + if self.selected_nodes: + sel_obj = list(self.selected_nodes) + for obj in sel_obj: + obj.parent = container + # for additional work on the node: + # instance_node = rt.getNodeByName(instance.get("instance_node")) diff --git a/openpype/hosts/max/plugins/create/create_render.py b/openpype/hosts/max/plugins/create/create_render.py index 269fff2e32..68ae5eac72 100644 --- a/openpype/hosts/max/plugins/create/create_render.py +++ b/openpype/hosts/max/plugins/create/create_render.py @@ -27,6 +27,11 @@ class CreateRender(plugin.MaxCreator): # for additional work on the node: # instance_node = rt.getNodeByName(instance.get("instance_node")) + # make sure the render dialog is closed + # for the update of resolution + # Changing the Render Setup dialog settings should be done + # with the actual Render Setup dialog in a closed state. + # set viewport camera for rendering(mandatory for deadline) RenderSettings().set_render_camera(sel_obj) # set output paths for rendering(mandatory for deadline) diff --git a/openpype/hosts/max/plugins/load/load_camera_fbx.py b/openpype/hosts/max/plugins/load/load_camera_fbx.py index 3a6947798e..0c5dd762cf 100644 --- a/openpype/hosts/max/plugins/load/load_camera_fbx.py +++ b/openpype/hosts/max/plugins/load/load_camera_fbx.py @@ -20,28 +20,25 @@ class FbxLoader(load.LoaderPlugin): from pymxs import runtime as rt filepath = os.path.normpath(self.fname) + rt.FBXImporterSetParam("Animation", True) + rt.FBXImporterSetParam("Camera", True) + rt.FBXImporterSetParam("AxisConversionMethod", True) + rt.FBXImporterSetParam("Preserveinstances", True) + rt.importFile( + filepath, + rt.name("noPrompt"), + using=rt.FBXIMP) - fbx_import_cmd = ( - f""" + container = rt.getNodeByName(f"{name}") + if not container: + container = rt.container() + container.name = f"{name}" -FBXImporterSetParam "Animation" true -FBXImporterSetParam "Cameras" true -FBXImporterSetParam "AxisConversionMethod" true -FbxExporterSetParam "UpAxis" "Y" -FbxExporterSetParam "Preserveinstances" true - -importFile @"{filepath}" #noPrompt using:FBXIMP - """) - - self.log.debug(f"Executing command: {fbx_import_cmd}") - rt.execute(fbx_import_cmd) - - container_name = f"{name}_CON" - - asset = rt.getNodeByName(f"{name}") + for selection in rt.getCurrentSelection(): + selection.Parent = container return containerise( - name, [asset], context, loader=self.__class__.__name__) + name, [container], context, loader=self.__class__.__name__) def update(self, container, representation): from pymxs import runtime as rt diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index 460f4822a6..4b19cd671f 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ 
-10,7 +10,9 @@ class MaxSceneLoader(load.LoaderPlugin): """Max Scene Loader""" families = ["camera", - "maxScene"] + "maxScene", + "model"] + representations = ["max"] order = -8 icon = "code-fork" diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py new file mode 100644 index 0000000000..95ee014e07 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model.py @@ -0,0 +1,109 @@ + +import os +from openpype.pipeline import ( + load, get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ModelAbcLoader(load.LoaderPlugin): + """Loading model with the Alembic loader.""" + + families = ["model"] + label = "Load Model(Alembic)" + representations = ["abc"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + file_path = os.path.normpath(self.fname) + + abc_before = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + abc_import_cmd = (f""" +AlembicImport.ImportToRoot = false +AlembicImport.CustomAttributes = true +AlembicImport.UVs = true +AlembicImport.VertexColors = true + +importFile @"{file_path}" #noPrompt + """) + + self.log.debug(f"Executing command: {abc_import_cmd}") + rt.execute(abc_import_cmd) + + abc_after = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + # This should yield new AlembicContainer node + abc_containers = abc_after.difference(abc_before) + + if len(abc_containers) != 1: + self.log.error("Something failed when loading.") + + abc_container = abc_containers.pop() + + return containerise( + name, [abc_container], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + rt.select(node.Children) + + for alembic in rt.selection: + abc = rt.getNodeByName(alembic.name) + rt.select(abc.Children) + for abc_con in rt.selection: + container = rt.getNodeByName(abc_con.name) + container.source = path + rt.select(container.Children) + for abc_obj in rt.selection: + alembic_obj = rt.getNodeByName(abc_obj.name) + alembic_obj.source = path + + with maintained_selection(): + rt.select(node) + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) + + @staticmethod + def get_container_children(parent, type_name): + from pymxs import runtime as rt + + def list_children(node): + children = [] + for c in node.Children: + children.append(c) + children += list_children(c) + return children + + filtered = [] + for child in list_children(parent): + class_type = str(rt.classOf(child.baseObject)) + if class_type == type_name: + filtered.append(child) + + return filtered diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py new file mode 100644 index 0000000000..01e6acae12 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_fbx.py @@ -0,0 +1,75 @@ +import os +from openpype.pipeline import ( + load, + get_representation_path +) 
+from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class FbxModelLoader(load.LoaderPlugin): + """Fbx Model Loader""" + + families = ["model"] + representations = ["fbx"] + order = -9 + icon = "code-fork" + color = "white" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + filepath = os.path.normpath(self.fname) + rt.FBXImporterSetParam("Animation", False) + rt.FBXImporterSetParam("Cameras", False) + rt.FBXImporterSetParam("Preserveinstances", True) + rt.importFile( + filepath, + rt.name("noPrompt"), + using=rt.FBXIMP) + + container = rt.getNodeByName(f"{name}") + if not container: + container = rt.container() + container.name = f"{name}" + + for selection in rt.getCurrentSelection(): + selection.Parent = container + + return containerise( + name, [container], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + rt.select(node.Children) + fbx_reimport_cmd = ( + f""" +FBXImporterSetParam "Animation" false +FBXImporterSetParam "Cameras" false +FBXImporterSetParam "AxisConversionMethod" true +FbxExporterSetParam "UpAxis" "Y" +FbxExporterSetParam "Preserveinstances" true + +importFile @"{path}" #noPrompt using:FBXIMP + """) + rt.execute(fbx_reimport_cmd) + + with maintained_selection(): + rt.select(node) + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py new file mode 100644 index 0000000000..c55e462111 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_obj.py @@ -0,0 +1,68 @@ +import os +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ObjLoader(load.LoaderPlugin): + """Obj Loader""" + + families = ["model"] + representations = ["obj"] + order = -9 + icon = "code-fork" + color = "white" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + filepath = os.path.normpath(self.fname) + self.log.debug(f"Executing command to import..") + + rt.execute(f'importFile @"{filepath}" #noPrompt using:ObjImp') + # create "missing" container for obj import + container = rt.container() + container.name = f"{name}" + + # get current selection + for selection in rt.getCurrentSelection(): + selection.Parent = container + + asset = rt.getNodeByName(f"{name}") + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + + instance_name, _ = node_name.split("_") + container = rt.getNodeByName(instance_name) + for n in container.Children: + rt.delete(n) + + rt.execute(f'importFile @"{path}" #noPrompt using:ObjImp') 
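+        # re-parent whatever the OBJ import selected under the existing
+        # container so the loaded geometry stays grouped, mirroring load()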
+ # get current selection + for selection in rt.getCurrentSelection(): + selection.Parent = container + + with maintained_selection(): + rt.select(node) + + lib.imprint(node_name, { + "representation": str(representation["_id"]) + }) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py new file mode 100644 index 0000000000..143f91f40b --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_usd.py @@ -0,0 +1,78 @@ +import os +from openpype.pipeline import ( + load, get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ModelUSDLoader(load.LoaderPlugin): + """Loading model with the USD loader.""" + + families = ["model"] + label = "Load Model(USD)" + representations = ["usda"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + # asset_filepath + filepath = os.path.normpath(self.fname) + import_options = rt.USDImporter.CreateOptions() + base_filename = os.path.basename(filepath) + filename, ext = os.path.splitext(base_filename) + log_filepath = filepath.replace(ext, "txt") + + rt.LogPath = log_filepath + rt.LogLevel = rt.name('info') + rt.USDImporter.importFile(filepath, + importOptions=import_options) + + asset = rt.getNodeByName(f"{name}") + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + for n in node.Children: + for r in n.Children: + rt.delete(r) + rt.delete(n) + instance_name, _ = node_name.split("_") + + import_options = rt.USDImporter.CreateOptions() + base_filename = os.path.basename(path) + _, ext = os.path.splitext(base_filename) + log_filepath = path.replace(ext, "txt") + + rt.LogPath = log_filepath + rt.LogLevel = rt.name('info') + rt.USDImporter.importFile(path, + importOptions=import_options) + + asset = rt.getNodeByName(f"{instance_name}") + asset.Parent = node + + with maintained_selection(): + rt.select(node) + + lib.imprint(node_name, { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py index f7a72ece25..b3e12adc7b 100644 --- a/openpype/hosts/max/plugins/load/load_pointcache.py +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -15,8 +15,7 @@ from openpype.hosts.max.api import lib class AbcLoader(load.LoaderPlugin): """Alembic loader.""" - families = ["model", - "camera", + families = ["camera", "animation", "pointcache"] label = "Load Alembic" diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py index b040467522..00e00a8eb5 100644 --- a/openpype/hosts/max/plugins/publish/collect_render.py +++ b/openpype/hosts/max/plugins/publish/collect_render.py @@ -46,7 +46,6 @@ 
class CollectRender(pyblish.api.InstancePlugin): self.log.debug(f"Setting {version_int} to context.") context.data["version"] = version_int - # setup the plugin as 3dsmax for the internal renderer data = { "subset": instance.name, @@ -59,8 +58,8 @@ class CollectRender(pyblish.api.InstancePlugin): "source": filepath, "expectedFiles": render_layer_files, "plugin": "3dsmax", - "frameStart": context.data['frameStart'], - "frameEnd": context.data['frameEnd'], + "frameStart": int(rt.rendStart), + "frameEnd": int(rt.rendEnd), "version": version_int, "farm": True } diff --git a/openpype/hosts/max/plugins/publish/extract_camera_abc.py b/openpype/hosts/max/plugins/publish/extract_camera_abc.py index 8c23ff9878..6b3bb178a3 100644 --- a/openpype/hosts/max/plugins/publish/extract_camera_abc.py +++ b/openpype/hosts/max/plugins/publish/extract_camera_abc.py @@ -1,18 +1,11 @@ import os import pyblish.api -from openpype.pipeline import ( - publish, - OptionalPyblishPluginMixin -) +from openpype.pipeline import publish, OptionalPyblishPluginMixin from pymxs import runtime as rt -from openpype.hosts.max.api import ( - maintained_selection, - get_all_children -) +from openpype.hosts.max.api import maintained_selection, get_all_children -class ExtractCameraAlembic(publish.Extractor, - OptionalPyblishPluginMixin): +class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin): """ Extract Camera with AlembicExport """ @@ -38,38 +31,33 @@ class ExtractCameraAlembic(publish.Extractor, path = os.path.join(stagingdir, filename) # We run the render - self.log.info("Writing alembic '%s' to '%s'" % (filename, - stagingdir)) + self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir)) - export_cmd = ( - f""" -AlembicExport.ArchiveType = #ogawa -AlembicExport.CoordinateSystem = #maya -AlembicExport.StartFrame = {start} -AlembicExport.EndFrame = {end} -AlembicExport.CustomAttributes = true - -exportFile @"{path}" #noPrompt selectedOnly:on using:AlembicExport - - """) - - self.log.debug(f"Executing command: {export_cmd}") + rt.AlembicExport.ArchiveType = rt.name("ogawa") + rt.AlembicExport.CoordinateSystem = rt.name("maya") + rt.AlembicExport.StartFrame = start + rt.AlembicExport.EndFrame = end + rt.AlembicExport.CustomAttributes = True with maintained_selection(): # select and export rt.select(get_all_children(rt.getNodeByName(container))) - rt.execute(export_cmd) + rt.exportFile( + path, + rt.name("noPrompt"), + selectedOnly=True, + using=rt.AlembicExport, + ) self.log.info("Performing Extraction ...") if "representations" not in instance.data: instance.data["representations"] = [] representation = { - 'name': 'abc', - 'ext': 'abc', - 'files': filename, + "name": "abc", + "ext": "abc", + "files": filename, "stagingDir": stagingdir, } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, - path)) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, path)) diff --git a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py index 7e92f355ed..4b4b349e19 100644 --- a/openpype/hosts/max/plugins/publish/extract_camera_fbx.py +++ b/openpype/hosts/max/plugins/publish/extract_camera_fbx.py @@ -1,18 +1,11 @@ import os import pyblish.api -from openpype.pipeline import ( - publish, - OptionalPyblishPluginMixin -) +from openpype.pipeline import publish, OptionalPyblishPluginMixin from pymxs import runtime as rt -from openpype.hosts.max.api import ( - 
maintained_selection, - get_all_children -) +from openpype.hosts.max.api import maintained_selection, get_all_children -class ExtractCameraFbx(publish.Extractor, - OptionalPyblishPluginMixin): +class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin): """ Extract Camera with FbxExporter """ @@ -33,43 +26,35 @@ class ExtractCameraFbx(publish.Extractor, filename = "{name}.fbx".format(**instance.data) filepath = os.path.join(stagingdir, filename) - self.log.info("Writing fbx file '%s' to '%s'" % (filename, - filepath)) + self.log.info("Writing fbx file '%s' to '%s'" % (filename, filepath)) - # Need to export: - # Animation = True - # Cameras = True - # AxisConversionMethod - fbx_export_cmd = ( - f""" - -FBXExporterSetParam "Animation" true -FBXExporterSetParam "Cameras" true -FBXExporterSetParam "AxisConversionMethod" "Animation" -FbxExporterSetParam "UpAxis" "Y" -FbxExporterSetParam "Preserveinstances" true - -exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP - - """) - - self.log.debug(f"Executing command: {fbx_export_cmd}") + rt.FBXExporterSetParam("Animation", True) + rt.FBXExporterSetParam("Cameras", True) + rt.FBXExporterSetParam("AxisConversionMethod", "Animation") + rt.FBXExporterSetParam("UpAxis", "Y") + rt.FBXExporterSetParam("Preserveinstances", True) with maintained_selection(): # select and export rt.select(get_all_children(rt.getNodeByName(container))) - rt.execute(fbx_export_cmd) + rt.exportFile( + filepath, + rt.name("noPrompt"), + selectedOnly=True, + using=rt.FBXEXP, + ) self.log.info("Performing Extraction ...") if "representations" not in instance.data: instance.data["representations"] = [] representation = { - 'name': 'fbx', - 'ext': 'fbx', - 'files': filename, + "name": "fbx", + "ext": "fbx", + "files": filename, "stagingDir": stagingdir, } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, - filepath)) + self.log.info( + "Extracted instance '%s' to: %s" % (instance.name, filepath) + ) diff --git a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py index 969f87be48..f0c2aff7f3 100644 --- a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py +++ b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py @@ -1,18 +1,11 @@ import os import pyblish.api -from openpype.pipeline import ( - publish, - OptionalPyblishPluginMixin -) +from openpype.pipeline import publish, OptionalPyblishPluginMixin from pymxs import runtime as rt -from openpype.hosts.max.api import ( - maintained_selection, - get_all_children -) +from openpype.hosts.max.api import get_all_children -class ExtractMaxSceneRaw(publish.Extractor, - OptionalPyblishPluginMixin): +class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin): """ Extract Raw Max Scene with SaveSelected """ @@ -20,8 +13,7 @@ class ExtractMaxSceneRaw(publish.Extractor, order = pyblish.api.ExtractorOrder - 0.2 label = "Extract Max Scene (Raw)" hosts = ["max"] - families = ["camera", - "maxScene"] + families = ["camera", "maxScene", "model"] optional = True def process(self, instance): @@ -36,26 +28,23 @@ class ExtractMaxSceneRaw(publish.Extractor, filename = "{name}.max".format(**instance.data) max_path = os.path.join(stagingdir, filename) - self.log.info("Writing max file '%s' to '%s'" % (filename, - max_path)) + self.log.info("Writing max file '%s' to '%s'" % (filename, max_path)) if "representations" not in instance.data: 
instance.data["representations"] = [] - # saving max scene - with maintained_selection(): - # need to figure out how to select the camera - rt.select(get_all_children(rt.getNodeByName(container))) - rt.execute(f'saveNodes selection "{max_path}" quiet:true') + nodes = get_all_children(rt.getNodeByName(container)) + rt.saveNodes(nodes, max_path, quiet=True) self.log.info("Performing Extraction ...") representation = { - 'name': 'max', - 'ext': 'max', - 'files': filename, + "name": "max", + "ext": "max", + "files": filename, "stagingDir": stagingdir, } instance.data["representations"].append(representation) - self.log.info("Extracted instance '%s' to: %s" % (instance.name, - max_path)) + self.log.info( + "Extracted instance '%s' to: %s" % (instance.name, max_path) + ) diff --git a/openpype/hosts/max/plugins/publish/extract_model.py b/openpype/hosts/max/plugins/publish/extract_model.py new file mode 100644 index 0000000000..4c7c98e2cc --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model.py @@ -0,0 +1,64 @@ +import os +import pyblish.api +from openpype.pipeline import publish, OptionalPyblishPluginMixin +from pymxs import runtime as rt +from openpype.hosts.max.api import maintained_selection, get_all_children + + +class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin): + """ + Extract Geometry in Alembic Format + """ + + order = pyblish.api.ExtractorOrder - 0.1 + label = "Extract Geometry (Alembic)" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.abc".format(**instance.data) + filepath = os.path.join(stagingdir, filename) + + # We run the render + self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir)) + + rt.AlembicExport.ArchiveType = rt.name("ogawa") + rt.AlembicExport.CoordinateSystem = rt.name("maya") + rt.AlembicExport.CustomAttributes = True + rt.AlembicExport.UVs = True + rt.AlembicExport.VertexColors = True + rt.AlembicExport.PreserveInstances = True + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.exportFile( + filepath, + rt.name("noPrompt"), + selectedOnly=True, + using=rt.AlembicExport, + ) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "abc", + "ext": "abc", + "files": filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + self.log.info( + "Extracted instance '%s' to: %s" % (instance.name, filepath) + ) diff --git a/openpype/hosts/max/plugins/publish/extract_model_fbx.py b/openpype/hosts/max/plugins/publish/extract_model_fbx.py new file mode 100644 index 0000000000..e6ccb24cdd --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_fbx.py @@ -0,0 +1,63 @@ +import os +import pyblish.api +from openpype.pipeline import publish, OptionalPyblishPluginMixin +from pymxs import runtime as rt +from openpype.hosts.max.api import maintained_selection, get_all_children + + +class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin): + """ + Extract Geometry in FBX Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract FBX" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not 
self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.fbx".format(**instance.data) + filepath = os.path.join(stagingdir, filename) + self.log.info("Writing FBX '%s' to '%s'" % (filepath, stagingdir)) + + rt.FBXExporterSetParam("Animation", False) + rt.FBXExporterSetParam("Cameras", False) + rt.FBXExporterSetParam("Lights", False) + rt.FBXExporterSetParam("PointCache", False) + rt.FBXExporterSetParam("AxisConversionMethod", "Animation") + rt.FBXExporterSetParam("UpAxis", "Y") + rt.FBXExporterSetParam("Preserveinstances", True) + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.exportFile( + filepath, + rt.name("noPrompt"), + selectedOnly=True, + using=rt.FBXEXP, + ) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "fbx", + "ext": "fbx", + "files": filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + self.log.info( + "Extracted instance '%s' to: %s" % (instance.name, filepath) + ) diff --git a/openpype/hosts/max/plugins/publish/extract_model_obj.py b/openpype/hosts/max/plugins/publish/extract_model_obj.py new file mode 100644 index 0000000000..ed3d68c990 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_obj.py @@ -0,0 +1,56 @@ +import os +import pyblish.api +from openpype.pipeline import publish, OptionalPyblishPluginMixin +from pymxs import runtime as rt +from openpype.hosts.max.api import maintained_selection, get_all_children + + +class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin): + """ + Extract Geometry in OBJ Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract OBJ" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.obj".format(**instance.data) + filepath = os.path.join(stagingdir, filename) + self.log.info("Writing OBJ '%s' to '%s'" % (filepath, stagingdir)) + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.exportFile( + filepath, + rt.name("noPrompt"), + selectedOnly=True, + using=rt.ObjExp, + ) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + "name": "obj", + "ext": "obj", + "files": filename, + "stagingDir": stagingdir, + } + + instance.data["representations"].append(representation) + self.log.info( + "Extracted instance '%s' to: %s" % (instance.name, filepath) + ) diff --git a/openpype/hosts/max/plugins/publish/extract_model_usd.py b/openpype/hosts/max/plugins/publish/extract_model_usd.py new file mode 100644 index 0000000000..0bed2d855e --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_usd.py @@ -0,0 +1,114 @@ +import os +import pyblish.api +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection +) + + +class ExtractModelUSD(publish.Extractor, + OptionalPyblishPluginMixin): + """ + Extract Geometry in 
USDA Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract Geometry (USD)" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + asset_filename = "{name}.usda".format(**instance.data) + asset_filepath = os.path.join(stagingdir, + asset_filename) + self.log.info("Writing USD '%s' to '%s'" % (asset_filepath, + stagingdir)) + + log_filename = "{name}.txt".format(**instance.data) + log_filepath = os.path.join(stagingdir, + log_filename) + self.log.info("Writing log '%s' to '%s'" % (log_filepath, + stagingdir)) + + # get the nodes which need to be exported + export_options = self.get_export_options(log_filepath) + with maintained_selection(): + # select and export + node_list = self.get_node_list(container) + rt.USDExporter.ExportFile(asset_filepath, + exportOptions=export_options, + contentSource=rt.name("selected"), + nodeList=node_list) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usda', + 'ext': 'usda', + 'files': asset_filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + log_representation = { + 'name': 'txt', + 'ext': 'txt', + 'files': log_filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(log_representation) + + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + asset_filepath)) + + def get_node_list(self, container): + """ + Get the target nodes which are + the children of the container + """ + node_list = [] + + container_node = rt.getNodeByName(container) + target_node = container_node.Children + rt.select(target_node) + for sel in rt.selection: + node_list.append(sel) + + return node_list + + def get_export_options(self, log_path): + """Set Export Options for USD Exporter""" + + export_options = rt.USDExporter.createOptions() + + export_options.Meshes = True + export_options.Shapes = False + export_options.Lights = False + export_options.Cameras = False + export_options.Materials = False + export_options.MeshFormat = rt.name('fromScene') + export_options.FileFormat = rt.name('ascii') + export_options.UpAxis = rt.name('y') + export_options.LogLevel = rt.name('info') + export_options.LogPath = log_path + export_options.PreserveEdgeOrientation = True + export_options.TimeMode = rt.name('current') + + rt.USDexporter.UIOptions = export_options + + return export_options diff --git a/openpype/hosts/max/plugins/publish/extract_pointcache.py b/openpype/hosts/max/plugins/publish/extract_pointcache.py index 75d8a7972c..8658cecb1b 100644 --- a/openpype/hosts/max/plugins/publish/extract_pointcache.py +++ b/openpype/hosts/max/plugins/publish/extract_pointcache.py @@ -41,10 +41,7 @@ import os import pyblish.api from openpype.pipeline import publish from pymxs import runtime as rt -from openpype.hosts.max.api import ( - maintained_selection, - get_all_children -) +from openpype.hosts.max.api import maintained_selection, get_all_children class ExtractAlembic(publish.Extractor): @@ -66,35 +63,30 @@ class ExtractAlembic(publish.Extractor): path = os.path.join(parent_dir, file_name) # We run the render - self.log.info("Writing alembic '%s' to '%s'" % (file_name, - parent_dir)) + self.log.info("Writing alembic '%s' to '%s'" % (file_name, 
parent_dir))

-        abc_export_cmd = (
-            f"""
-AlembicExport.ArchiveType = #ogawa
-AlembicExport.CoordinateSystem = #maya
-AlembicExport.StartFrame = {start}
-AlembicExport.EndFrame = {end}
-
-exportFile @"{path}" #noPrompt selectedOnly:on using:AlembicExport
-
-        """)
-
-        self.log.debug(f"Executing command: {abc_export_cmd}")
+        rt.AlembicExport.ArchiveType = rt.name("ogawa")
+        rt.AlembicExport.CoordinateSystem = rt.name("maya")
+        rt.AlembicExport.StartFrame = start
+        rt.AlembicExport.EndFrame = end

         with maintained_selection():
             # select and export
-            rt.select(get_all_children(rt.getNodeByName(container)))
-            rt.execute(abc_export_cmd)
+            rt.exportFile(
+                path,
+                rt.name("noPrompt"),
+                selectedOnly=True,
+                using=rt.AlembicExport,
+            )

         if "representations" not in instance.data:
             instance.data["representations"] = []

         representation = {
-            'name': 'abc',
-            'ext': 'abc',
-            'files': file_name,
+            "name": "abc",
+            "ext": "abc",
+            "files": file_name,
             "stagingDir": parent_dir,
         }
         instance.data["representations"].append(representation)
diff --git a/openpype/hosts/max/plugins/publish/validate_frame_range.py b/openpype/hosts/max/plugins/publish/validate_frame_range.py
new file mode 100644
index 0000000000..21e847405e
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_frame_range.py
@@ -0,0 +1,64 @@
+import pyblish.api
+
+from pymxs import runtime as rt
+from openpype.pipeline import (
+    OptionalPyblishPluginMixin
+)
+from openpype.pipeline.publish import (
+    RepairAction,
+    ValidateContentsOrder,
+    PublishValidationError
+)
+
+
+class ValidateFrameRange(pyblish.api.InstancePlugin,
+                         OptionalPyblishPluginMixin):
+    """Validates the frame ranges.
+
+    This is an optional validator checking if the frame range on instance
+    matches the frame range specified for the asset.
+
+    It also validates render frame ranges of render layers.
+
+    Repair action will change everything to match the asset frame range.
+
+    This can be turned off by the artist to allow custom ranges.
+    """
+
+    label = "Validate Frame Range"
+    order = ValidateContentsOrder
+    families = ["maxrender"]
+    hosts = ["max"]
+    optional = True
+    actions = [RepairAction]
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            self.log.info("Skipping validation...")
+            return
+        context = instance.context
+
+        frame_start = int(context.data.get("frameStart"))
+        frame_end = int(context.data.get("frameEnd"))
+
+        inst_frame_start = int(instance.data.get("frameStart"))
+        inst_frame_end = int(instance.data.get("frameEnd"))
+
+        errors = []
+        if frame_start != inst_frame_start:
+            errors.append(
+                f"Start frame ({inst_frame_start}) on instance does not match "  # noqa
+                f"with the start frame ({frame_start}) set on the asset data. ")  # noqa
+        if frame_end != inst_frame_end:
+            errors.append(
+                f"End frame ({inst_frame_end}) on instance does not match "
+                f"with the end frame ({frame_end}) from the asset data. 
") + + if errors: + errors.append("You can use repair action to fix it.") + raise PublishValidationError("\n".join(errors)) + + @classmethod + def repair(cls, instance): + rt.rendStart = instance.context.data.get("frameStart") + rt.rendEnd = instance.context.data.get("frameEnd") diff --git a/openpype/hosts/max/plugins/publish/validate_model_contents.py b/openpype/hosts/max/plugins/publish/validate_model_contents.py new file mode 100644 index 0000000000..dd782674ff --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_model_contents.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateModelContent(pyblish.api.InstancePlugin): + """Validates Model instance contents. + + A model instance may only hold either geometry-related + object(excluding Shapes) or editable meshes. + """ + + order = pyblish.api.ValidatorOrder + families = ["model"] + hosts = ["max"] + label = "Model Contents" + + def process(self, instance): + invalid = self.get_invalid(instance) + if invalid: + raise PublishValidationError("Model instance must only include" + "Geometry and Editable Mesh") + + def get_invalid(self, instance): + """ + Get invalid nodes if the instance is not camera + """ + invalid = list() + container = instance.data["instance_node"] + self.log.info("Validating look content for " + "{}".format(container)) + + con = rt.getNodeByName(container) + selection_list = list(con.Children) or rt.getCurrentSelection() + for sel in selection_list: + if rt.classOf(sel) in rt.Camera.classes: + invalid.append(sel) + if rt.classOf(sel) in rt.Light.classes: + invalid.append(sel) + if rt.classOf(sel) in rt.Shape.classes: + invalid.append(sel) + + return invalid diff --git a/openpype/hosts/max/plugins/publish/validate_resolution_setting.py b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py new file mode 100644 index 0000000000..5fcb843b20 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_resolution_setting.py @@ -0,0 +1,65 @@ +import pyblish.api +from openpype.pipeline import ( + PublishValidationError, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api.lib import reset_scene_resolution + +from openpype.pipeline.context_tools import ( + get_current_project_asset, + get_current_project +) + + +class ValidateResolutionSetting(pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin): + """Validate the resolution setting aligned with DB""" + + order = pyblish.api.ValidatorOrder - 0.01 + families = ["maxrender"] + hosts = ["max"] + label = "Validate Resolution Setting" + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + width, height = self.get_db_resolution(instance) + current_width = rt.renderwidth + current_height = rt.renderHeight + if current_width != width and current_height != height: + raise PublishValidationError("Resolution Setting " + "not matching resolution " + "set on asset or shot.") + if current_width != width: + raise PublishValidationError("Width in Resolution Setting " + "not matching resolution set " + "on asset or shot.") + + if current_height != height: + raise PublishValidationError("Height in Resolution Setting " + "not matching resolution set " + "on asset or shot.") + + def get_db_resolution(self, instance): + data = ["data.resolutionWidth", "data.resolutionHeight"] + project_resolution = get_current_project(fields=data) + project_resolution_data = 
project_resolution["data"] + asset_resolution = get_current_project_asset(fields=data) + asset_resolution_data = asset_resolution["data"] + # Set project resolution + project_width = int( + project_resolution_data.get("resolutionWidth", 1920)) + project_height = int( + project_resolution_data.get("resolutionHeight", 1080)) + width = int( + asset_resolution_data.get("resolutionWidth", project_width)) + height = int( + asset_resolution_data.get("resolutionHeight", project_height)) + + return width, height + + @classmethod + def repair(cls, instance): + reset_scene_resolution() diff --git a/openpype/hosts/max/plugins/publish/validate_usd_plugin.py b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py new file mode 100644 index 0000000000..747147020a --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateUSDPlugin(pyblish.api.InstancePlugin): + """Validates if USD plugin is installed or loaded in Max + """ + + order = pyblish.api.ValidatorOrder - 0.01 + families = ["model"] + hosts = ["max"] + label = "USD Plugin" + + def process(self, instance): + plugin_mgr = rt.pluginManager + plugin_count = plugin_mgr.pluginDllCount + plugin_info = self.get_plugins(plugin_mgr, + plugin_count) + usd_import = "usdimport.dli" + if usd_import not in plugin_info: + raise PublishValidationError("USD Plugin {}" + " not found".format(usd_import)) + usd_export = "usdexport.dle" + if usd_export not in plugin_info: + raise PublishValidationError("USD Plugin {}" + " not found".format(usd_export)) + + def get_plugins(self, manager, count): + plugin_info_list = list() + for p in range(1, count + 1): + plugin_info = manager.pluginDllName(p) + plugin_info_list.append(plugin_info) + + return plugin_info_list diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index f814187cc1..cb01a847ba 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -190,6 +190,44 @@ def maintained_selection(): cmds.select(clear=True) +def get_custom_namespace(custom_namespace): + """Return unique namespace. + + The input namespace can contain a single group + of '#' number tokens to indicate where the namespace's + unique index should go. The amount of tokens defines + the zero padding of the number, e.g ### turns into 001. 
+ + Warning: Note that a namespace will always be + prefixed with a _ if it starts with a digit + + Example: + >>> get_custom_namespace("myspace_##_") + # myspace_01_ + >>> get_custom_namespace("##_myspace") + # _01_myspace + >>> get_custom_namespace("myspace##") + # myspace01 + + """ + split = re.split("([#]+)", custom_namespace, 1) + + if len(split) == 3: + base, padding, suffix = split + padding = "%0{}d".format(len(padding)) + else: + base = split[0] + padding = "%02d" # default padding + suffix = "" + + return unique_namespace( + base, + format=padding, + prefix="_" if not base or base[0].isdigit() else "", + suffix=suffix + ) + + def unique_namespace(namespace, format="%02d", prefix="", suffix=""): """Return unique namespace @@ -316,11 +354,13 @@ def collect_animation_data(fps=False): # get scene values as defaults frame_start = cmds.playbackOptions(query=True, minTime=True) frame_end = cmds.playbackOptions(query=True, maxTime=True) - handle_start = cmds.playbackOptions(query=True, animationStartTime=True) - handle_end = cmds.playbackOptions(query=True, animationEndTime=True) + frame_start_handle = cmds.playbackOptions( + query=True, animationStartTime=True + ) + frame_end_handle = cmds.playbackOptions(query=True, animationEndTime=True) - handle_start = frame_start - handle_start - handle_end = handle_end - frame_end + handle_start = frame_start - frame_start_handle + handle_end = frame_end_handle - frame_end # build attributes data = OrderedDict() @@ -3937,7 +3977,9 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log): return capture_preset or {} -def create_rig_animation_instance(nodes, context, namespace, log=None): +def create_rig_animation_instance( + nodes, context, namespace, options=None, log=None +): """Create an animation publish instance for loaded rigs. See the RecreateRigAnimationInstance inventory action on how to use this @@ -3947,12 +3989,16 @@ def create_rig_animation_instance(nodes, context, namespace, log=None): nodes (list): Member nodes of the rig instance. context (dict): Representation context of the rig container namespace (str): Namespace of the rig container + options (dict, optional): Additional loader data log (logging.Logger, optional): Logger to log to if provided Returns: None """ + if options is None: + options = {} + output = next((node for node in nodes if node.endswith("out_SET")), None) controls = next((node for node in nodes if @@ -3971,6 +4017,23 @@ def create_rig_animation_instance(nodes, context, namespace, log=None): asset = legacy_io.Session["AVALON_ASSET"] dependency = str(context["representation"]["_id"]) + custom_subset = options.get("animationSubsetName") + if custom_subset: + formatting_data = { + "asset_name": context['asset']['name'], + "asset_type": context['asset']['type'], + "subset": context['subset']['name'], + "family": ( + context['subset']['data'].get('family') or + context['subset']['data']['families'][0] + ) + } + namespace = get_custom_namespace( + custom_subset.format( + **formatting_data + ) + ) + if log: log.info("Creating subset: {}".format(namespace)) diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 714278ba6c..604ff101db 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -84,44 +84,6 @@ def get_reference_node_parents(ref): return parents -def get_custom_namespace(custom_namespace): - """Return unique namespace. 
- - The input namespace can contain a single group - of '#' number tokens to indicate where the namespace's - unique index should go. The amount of tokens defines - the zero padding of the number, e.g ### turns into 001. - - Warning: Note that a namespace will always be - prefixed with a _ if it starts with a digit - - Example: - >>> get_custom_namespace("myspace_##_") - # myspace_01_ - >>> get_custom_namespace("##_myspace") - # _01_myspace - >>> get_custom_namespace("myspace##") - # myspace01 - - """ - split = re.split("([#]+)", custom_namespace, 1) - - if len(split) == 3: - base, padding, suffix = split - padding = "%0{}d".format(len(padding)) - else: - base = split[0] - padding = "%02d" # default padding - suffix = "" - - return lib.unique_namespace( - base, - format=padding, - prefix="_" if not base or base[0].isdigit() else "", - suffix=suffix - ) - - class Creator(LegacyCreator): defaults = ['Main'] @@ -216,7 +178,7 @@ class ReferenceLoader(Loader): count = options.get("count") or 1 for c in range(0, count): - namespace = get_custom_namespace(custom_namespace) + namespace = lib.get_custom_namespace(custom_namespace) group_name = "{}:{}".format( namespace, custom_group_name diff --git a/openpype/hosts/maya/api/workfile_template_builder.py b/openpype/hosts/maya/api/workfile_template_builder.py index d65e4c74d2..6e6166c2ef 100644 --- a/openpype/hosts/maya/api/workfile_template_builder.py +++ b/openpype/hosts/maya/api/workfile_template_builder.py @@ -43,7 +43,24 @@ class MayaTemplateBuilder(AbstractTemplateBuilder): )) cmds.sets(name=PLACEHOLDER_SET, empty=True) - new_nodes = cmds.file(path, i=True, returnNewNodes=True) + new_nodes = cmds.file( + path, + i=True, + returnNewNodes=True, + preserveReferences=True, + loadReferenceDepth="all", + ) + + # make default cameras non-renderable + default_cameras = [cam for cam in cmds.ls(cameras=True) + if cmds.camera(cam, query=True, startupCamera=True)] + for cam in default_cameras: + if not cmds.attributeQuery("renderable", node=cam, exists=True): + self.log.debug( + "Camera {} has no attribute 'renderable'".format(cam) + ) + continue + cmds.setAttr("{}.renderable".format(cam), 0) cmds.setAttr(PLACEHOLDER_SET + ".hiddenInOutliner", True) diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index 0dbdb03bb7..f4a4a44344 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -162,9 +162,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): with parent_nodes(roots, parent=None): cmds.xform(group_name, zeroTransformPivots=True) - cmds.setAttr("{}.displayHandle".format(group_name), 1) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + + display_handle = settings['maya']['load'].get( + 'reference_loader', {} + ).get('display_handle', True) + cmds.setAttr( + "{}.displayHandle".format(group_name), display_handle + ) + colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: @@ -174,7 +180,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): (float(c[1]) / 255), (float(c[2]) / 255)) - cmds.setAttr("{}.displayHandle".format(group_name), 1) + cmds.setAttr( + "{}.displayHandle".format(group_name), display_handle + ) # get bounding box bbox = cmds.exactWorldBoundingBox(group_name) # get pivot position on world space @@ -215,7 +223,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): def _post_process_rig(self, name, 
namespace, context, options): nodes = self[:] create_rig_animation_instance( - nodes, context, namespace, log=self.log + nodes, context, namespace, options=options, log=self.log ) def _lock_camera_transforms(self, nodes): diff --git a/openpype/hosts/maya/plugins/publish/collect_inputs.py b/openpype/hosts/maya/plugins/publish/collect_inputs.py index 9c3f0f5efa..895c92762b 100644 --- a/openpype/hosts/maya/plugins/publish/collect_inputs.py +++ b/openpype/hosts/maya/plugins/publish/collect_inputs.py @@ -166,7 +166,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin): inputs = [c["representation"] for c in containers] instance.data["inputRepresentations"] = inputs - self.log.info("Collected inputs: %s" % inputs) + self.log.debug("Collected inputs: %s" % inputs) def _collect_renderlayer_inputs(self, scene_containers, instance): """Collects inputs from nodes in renderlayer, incl. shaders + camera""" diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 825a8d38c7..3ceef6f3d3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -217,7 +217,11 @@ class ExtractPlayblast(publish.Extractor): instance.data["panel"], edit=True, **viewport_defaults ) - cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom) + try: + cmds.setAttr( + "{}.panZoomEnabled".format(preset["camera"]), pan_zoom) + except RuntimeError: + self.log.warning("Cannot restore Pan/Zoom settings.") collected_files = os.listdir(stagingdir) patterns = [clique.PATTERNS["frames"]] diff --git a/openpype/hosts/maya/plugins/publish/save_scene.py b/openpype/hosts/maya/plugins/publish/save_scene.py index 45e62e7b44..495c339731 100644 --- a/openpype/hosts/maya/plugins/publish/save_scene.py +++ b/openpype/hosts/maya/plugins/publish/save_scene.py @@ -31,5 +31,5 @@ class SaveCurrentScene(pyblish.api.ContextPlugin): # remove lockfile before saving if is_workfile_lock_enabled("maya", project_name, project_settings): remove_workfile_lock(current) - self.log.info("Saving current file..") + self.log.info("Saving current file: {}".format(current)) cmds.file(save=True, force=True) diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py index 6ca9afb9a4..7ebd9d7d03 100644 --- a/openpype/hosts/maya/plugins/publish/validate_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py @@ -6,7 +6,7 @@ import pyblish.api from openpype.hosts.maya.api.lib import set_attribute from openpype.pipeline.publish import ( - RepairContextAction, + RepairAction, ValidateContentsOrder, ) @@ -26,7 +26,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin): order = ValidateContentsOrder label = "Attributes" hosts = ["maya"] - actions = [RepairContextAction] + actions = [RepairAction] optional = True attributes = None @@ -81,7 +81,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin): if node_name not in attributes: continue - for attr_name, expected in attributes.items(): + for attr_name, expected in attributes[node_name].items(): # Skip if attribute does not exist if not cmds.attributeQuery(attr_name, node=node, exists=True): diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py index 4870f27bff..63849cfd12 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py +++ 
b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py @@ -13,7 +13,6 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance): - invalid = list() if not instance.data["setMembers"]: objectset_name = instance.data['name'] @@ -22,6 +21,10 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): return invalid def process(self, instance): + # Allow renderlayer and workfile to be empty + skip_families = ["workfile", "renderlayer", "rendersetup"] + if instance.data.get("family") in skip_families: + return invalid = self.get_invalid(instance) if invalid: diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py index b3e51f011d..034db471da 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py @@ -50,7 +50,8 @@ class ValidateShaderName(pyblish.api.InstancePlugin): asset_name = instance.data.get("asset", None) # Check the number of connected shadingEngines per shape - r = re.compile(cls.regex) + regex_compile = re.compile(cls.regex) + error_message = "object {0} has invalid shader name {1}" for shape in shapes: shading_engines = cmds.listConnections(shape, destination=True, @@ -60,19 +61,18 @@ class ValidateShaderName(pyblish.api.InstancePlugin): ) for shader in shaders: - m = r.match(cls.regex, shader) + m = regex_compile.match(shader) if m is None: invalid.append(shape) - cls.log.error( - "object {0} has invalid shader name {1}".format(shape, - shader) - ) + cls.log.error(error_message.format(shape, shader)) else: - if 'asset' in r.groupindex: + if 'asset' in regex_compile.groupindex: if m.group('asset') != asset_name: invalid.append(shape) - cls.log.error(("object {0} has invalid " - "shader name {1}").format(shape, - shader)) + message = error_message + message += " with missing asset name \"{2}\"" + cls.log.error( + message.format(shape, shader, asset_name) + ) return invalid diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 64fa32a383..a439142051 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -2239,13 +2239,13 @@ class WorkfileSettings(object): handle_end = data["handleEnd"] fps = float(data["fps"]) - frame_start = int(data["frameStart"]) - handle_start - frame_end = int(data["frameEnd"]) + handle_end + frame_start_handle = int(data["frameStart"]) - handle_start + frame_end_handle = int(data["frameEnd"]) + handle_end self._root_node["lock_range"].setValue(False) self._root_node["fps"].setValue(fps) - self._root_node["first_frame"].setValue(frame_start) - self._root_node["last_frame"].setValue(frame_end) + self._root_node["first_frame"].setValue(frame_start_handle) + self._root_node["last_frame"].setValue(frame_end_handle) self._root_node["lock_range"].setValue(True) # setting active viewers diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index d649ffae7f..75b0f80d21 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -151,6 +151,7 @@ class NukeHost( def add_nuke_callbacks(): """ Adding all available nuke callbacks """ + nuke_settings = get_current_project_settings()["nuke"] workfile_settings = WorkfileSettings() # Set context settings. 
nuke.addOnCreate( @@ -169,7 +170,10 @@ def add_nuke_callbacks(): # # set apply all workfile settings on script load and save nuke.addOnScriptLoad(WorkfileSettings().set_context_settings) - nuke.addFilenameFilter(dirmap_file_name_filter) + if nuke_settings["nuke-dirmap"]["enabled"]: + log.info("Added Nuke's dirmaping callback ...") + # Add dirmap for file paths. + nuke.addFilenameFilter(dirmap_file_name_filter) log.info("Added Nuke callbacks ...") diff --git a/openpype/hosts/nuke/plugins/publish/collect_writes.py b/openpype/hosts/nuke/plugins/publish/collect_writes.py index 6697a1e59a..2d1caacdc3 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/collect_writes.py @@ -133,11 +133,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, else: representation['files'] = collected_frames - # inject colorspace data - self.set_representation_colorspace( - representation, instance.context, - colorspace=colorspace - ) + # inject colorspace data + self.set_representation_colorspace( + representation, instance.context, + colorspace=colorspace + ) instance.data["representations"].append(representation) self.log.info("Publishing rendered frames ...") diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index f391ca1e7c..21eefda249 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -5,6 +5,8 @@ import pyblish.api from openpype.pipeline import publish from openpype.hosts.nuke import api as napi +from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings + if sys.version_info[0] >= 3: unicode = str @@ -28,7 +30,7 @@ class ExtractThumbnail(publish.Extractor): bake_viewer_process = True bake_viewer_input_process = True nodes = {} - + reposition_nodes = None def process(self, instance): if instance.data.get("farm"): @@ -123,18 +125,32 @@ class ExtractThumbnail(publish.Extractor): temporary_nodes.append(rnode) previous_node = rnode - reformat_node = nuke.createNode("Reformat") - ref_node = self.nodes.get("Reformat", None) - if ref_node: - for k, v in ref_node: - self.log.debug("k, v: {0}:{1}".format(k, v)) - if isinstance(v, unicode): - v = str(v) - reformat_node[k].setValue(v) + if self.reposition_nodes is None: + # [deprecated] create reformat node old way + reformat_node = nuke.createNode("Reformat") + ref_node = self.nodes.get("Reformat", None) + if ref_node: + for k, v in ref_node: + self.log.debug("k, v: {0}:{1}".format(k, v)) + if isinstance(v, unicode): + v = str(v) + reformat_node[k].setValue(v) - reformat_node.setInput(0, previous_node) - previous_node = reformat_node - temporary_nodes.append(reformat_node) + reformat_node.setInput(0, previous_node) + previous_node = reformat_node + temporary_nodes.append(reformat_node) + else: + # create reformat node new way + for repo_node in self.reposition_nodes: + node_class = repo_node["node_class"] + knobs = repo_node["knobs"] + node = nuke.createNode(node_class) + set_node_knobs_from_settings(node, knobs) + + # connect in order + node.setInput(0, previous_node) + previous_node = node + temporary_nodes.append(node) # only create colorspace baking if toggled on if bake_viewer_process: diff --git a/openpype/hosts/photoshop/plugins/create/workfile_creator.py b/openpype/hosts/photoshop/lib.py similarity index 83% rename from openpype/hosts/photoshop/plugins/create/workfile_creator.py rename to openpype/hosts/photoshop/lib.py index 
f5d56adcbc..ae7a33b7b6 100644 --- a/openpype/hosts/photoshop/plugins/create/workfile_creator.py +++ b/openpype/hosts/photoshop/lib.py @@ -7,28 +7,26 @@ from openpype.pipeline import ( from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances -class PSWorkfileCreator(AutoCreator): - identifier = "workfile" - family = "workfile" - - default_variant = "Main" - +class PSAutoCreator(AutoCreator): + """Generic autocreator to extend.""" def get_instance_attr_defs(self): return [] def collect_instances(self): for instance_data in cache_and_get_instances(self): creator_id = instance_data.get("creator_identifier") + if creator_id == self.identifier: - subset_name = instance_data["subset"] - instance = CreatedInstance( - self.family, subset_name, instance_data, self + instance = CreatedInstance.from_existing( + instance_data, self ) self._add_instance_to_context(instance) def update_instances(self, update_list): - # nothing to change on workfiles - pass + self.log.debug("update_list:: {}".format(update_list)) + for created_inst, _changes in update_list: + api.stub().imprint(created_inst.get("instance_id"), + created_inst.data_to_store()) def create(self, options=None): existing_instance = None @@ -58,6 +56,9 @@ class PSWorkfileCreator(AutoCreator): project_name, host_name, None )) + if not self.active_on_create: + data["active"] = False + new_instance = CreatedInstance( self.family, subset_name, data, self ) diff --git a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py new file mode 100644 index 0000000000..3bc61c8184 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py @@ -0,0 +1,120 @@ +from openpype.pipeline import CreatedInstance + +from openpype.lib import BoolDef +import openpype.hosts.photoshop.api as api +from openpype.hosts.photoshop.lib import PSAutoCreator +from openpype.pipeline.create import get_subset_name +from openpype.client import get_asset_by_name + + +class AutoImageCreator(PSAutoCreator): + """Creates flatten image from all visible layers. + + Used in simplified publishing as auto created instance. 
+ Must be enabled in Setting and template for subset name provided + """ + identifier = "auto_image" + family = "image" + + # Settings + default_variant = "" + # - Mark by default instance for review + mark_for_review = True + active_on_create = True + + def create(self, options=None): + existing_instance = None + for instance in self.create_context.instances: + if instance.creator_identifier == self.identifier: + existing_instance = instance + break + + context = self.create_context + project_name = context.get_current_project_name() + asset_name = context.get_current_asset_name() + task_name = context.get_current_task_name() + host_name = context.host_name + asset_doc = get_asset_by_name(project_name, asset_name) + + if existing_instance is None: + subset_name = get_subset_name( + self.family, self.default_variant, task_name, asset_doc, + project_name, host_name + ) + + publishable_ids = [layer.id for layer in api.stub().get_layers() + if layer.visible] + data = { + "asset": asset_name, + "task": task_name, + # ids are "virtual" layers, won't get grouped as 'members' do + # same difference in color coded layers in WP + "ids": publishable_ids + } + + if not self.active_on_create: + data["active"] = False + + creator_attributes = {"mark_for_review": self.mark_for_review} + data.update({"creator_attributes": creator_attributes}) + + new_instance = CreatedInstance( + self.family, subset_name, data, self + ) + self._add_instance_to_context(new_instance) + api.stub().imprint(new_instance.get("instance_id"), + new_instance.data_to_store()) + + elif ( # existing instance from different context + existing_instance["asset"] != asset_name + or existing_instance["task"] != task_name + ): + subset_name = get_subset_name( + self.family, self.default_variant, task_name, asset_doc, + project_name, host_name + ) + + existing_instance["asset"] = asset_name + existing_instance["task"] = task_name + existing_instance["subset"] = subset_name + + api.stub().imprint(existing_instance.get("instance_id"), + existing_instance.data_to_store()) + + def get_pre_create_attr_defs(self): + return [ + BoolDef( + "mark_for_review", + label="Review", + default=self.mark_for_review + ) + ] + + def get_instance_attr_defs(self): + return [ + BoolDef( + "mark_for_review", + label="Review" + ) + ] + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["AutoImageCreator"] + ) + + self.active_on_create = plugin_settings["active_on_create"] + self.default_variant = plugin_settings["default_variant"] + self.mark_for_review = plugin_settings["mark_for_review"] + self.enabled = plugin_settings["enabled"] + + def get_detail_description(self): + return """Creator for flatten image. + + Studio might configure simple publishing workflow. In that case + `image` instance is automatically created which will publish flat + image from all visible layers. + + Artist might disable this instance from publishing or from creating + review for it though. 
+ """ diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py b/openpype/hosts/photoshop/plugins/create/create_image.py index 3d82d6b6f0..f3165fca57 100644 --- a/openpype/hosts/photoshop/plugins/create/create_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_image.py @@ -23,6 +23,11 @@ class ImageCreator(Creator): family = "image" description = "Image creator" + # Settings + default_variants = "" + mark_for_review = False + active_on_create = True + def create(self, subset_name_from_ui, data, pre_create_data): groups_to_create = [] top_layers_to_wrap = [] @@ -94,6 +99,12 @@ class ImageCreator(Creator): data.update({"layer_name": layer_name}) data.update({"long_name": "_".join(layer_names_in_hierarchy)}) + creator_attributes = {"mark_for_review": self.mark_for_review} + data.update({"creator_attributes": creator_attributes}) + + if not self.active_on_create: + data["active"] = False + new_instance = CreatedInstance(self.family, subset_name, data, self) @@ -134,11 +145,6 @@ class ImageCreator(Creator): self.host.remove_instance(instance) self._remove_instance_from_context(instance) - def get_default_variants(self): - return [ - "Main" - ] - def get_pre_create_attr_defs(self): output = [ BoolDef("use_selection", default=True, @@ -148,10 +154,34 @@ class ImageCreator(Creator): label="Create separate instance for each selected"), BoolDef("use_layer_name", default=False, - label="Use layer name in subset") + label="Use layer name in subset"), + BoolDef( + "mark_for_review", + label="Create separate review", + default=False + ) ] return output + def get_instance_attr_defs(self): + return [ + BoolDef( + "mark_for_review", + label="Review" + ) + ] + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["ImageCreator"] + ) + + self.active_on_create = plugin_settings["active_on_create"] + self.default_variants = plugin_settings["default_variants"] + self.mark_for_review = plugin_settings["mark_for_review"] + self.enabled = plugin_settings["enabled"] + + def get_detail_description(self): return """Creator for Image instances @@ -180,6 +210,11 @@ class ImageCreator(Creator): but layer name should be used (set explicitly in UI or implicitly if multiple images should be created), it is added in capitalized form as a suffix to subset name. + + Each image could have its separate review created if necessary via + `Create separate review` toggle. + But more use case is to use separate `review` instance to create review + from all published items. """ def _handle_legacy(self, instance_data): diff --git a/openpype/hosts/photoshop/plugins/create/create_review.py b/openpype/hosts/photoshop/plugins/create/create_review.py new file mode 100644 index 0000000000..064485d465 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_review.py @@ -0,0 +1,28 @@ +from openpype.hosts.photoshop.lib import PSAutoCreator + + +class ReviewCreator(PSAutoCreator): + """Creates review instance which might be disabled from publishing.""" + identifier = "review" + family = "review" + + default_variant = "Main" + + def get_detail_description(self): + return """Auto creator for review. + + Photoshop review is created from all published images or from all + visible layers if no `image` instances got created. + + Review might be disabled by an artist (instance shouldn't be deleted as + it will get recreated in next publish either way). 
+ """ + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["ReviewCreator"] + ) + + self.default_variant = plugin_settings["default_variant"] + self.active_on_create = plugin_settings["active_on_create"] + self.enabled = plugin_settings["enabled"] diff --git a/openpype/hosts/photoshop/plugins/create/create_workfile.py b/openpype/hosts/photoshop/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d498f0549c --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_workfile.py @@ -0,0 +1,28 @@ +from openpype.hosts.photoshop.lib import PSAutoCreator + + +class WorkfileCreator(PSAutoCreator): + identifier = "workfile" + family = "workfile" + + default_variant = "Main" + + def get_detail_description(self): + return """Auto creator for workfile. + + It is expected that each publish will also publish its source workfile + for safekeeping. This creator triggers automatically without need for + an artist to remember and trigger it explicitly. + + Workfile instance could be disabled if it is not required to publish + workfile. (Instance shouldn't be deleted though as it will be recreated + in next publish automatically). + """ + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["WorkfileCreator"] + ) + + self.active_on_create = plugin_settings["active_on_create"] + self.enabled = plugin_settings["enabled"] diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py new file mode 100644 index 0000000000..ce408f8d01 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py @@ -0,0 +1,101 @@ +import pyblish.api + +from openpype.hosts.photoshop import api as photoshop +from openpype.pipeline.create import get_subset_name + + +class CollectAutoImage(pyblish.api.ContextPlugin): + """Creates auto image in non artist based publishes (Webpublisher). 
+ + 'remotepublish' should be renamed to 'autopublish' or similar in the future + """ + + label = "Collect Auto Image" + order = pyblish.api.CollectorOrder + hosts = ["photoshop"] + order = pyblish.api.CollectorOrder + 0.2 + + targets = ["remotepublish"] + + def process(self, context): + family = "image" + for instance in context: + creator_identifier = instance.data.get("creator_identifier") + if creator_identifier and creator_identifier == "auto_image": + self.log.debug("Auto image instance found, won't create new") + return + + project_name = context.data["anatomyData"]["project"]["name"] + proj_settings = context.data["project_settings"] + task_name = context.data["anatomyData"]["task"]["name"] + host_name = context.data["hostName"] + asset_doc = context.data["assetEntity"] + asset_name = asset_doc["name"] + + auto_creator = proj_settings.get( + "photoshop", {}).get( + "create", {}).get( + "AutoImageCreator", {}) + + if not auto_creator or not auto_creator["enabled"]: + self.log.debug("Auto image creator disabled, won't create new") + return + + stub = photoshop.stub() + stored_items = stub.get_layers_metadata() + for item in stored_items: + if item.get("creator_identifier") == "auto_image": + if not item.get("active"): + self.log.debug("Auto_image instance disabled") + return + + layer_items = stub.get_layers() + + publishable_ids = [layer.id for layer in layer_items + if layer.visible] + + # collect stored image instances + instance_names = [] + for layer_item in layer_items: + layer_meta_data = stub.read(layer_item, stored_items) + + # Skip layers without metadata. + if layer_meta_data is None: + continue + + # Skip containers. + if "container" in layer_meta_data["id"]: + continue + + # active might not be in legacy meta + if layer_meta_data.get("active", True) and layer_item.visible: + instance_names.append(layer_meta_data["subset"]) + + if len(instance_names) == 0: + variants = proj_settings.get( + "photoshop", {}).get( + "create", {}).get( + "CreateImage", {}).get( + "default_variants", ['']) + family = "image" + + variant = context.data.get("variant") or variants[0] + + subset_name = get_subset_name( + family, variant, task_name, asset_doc, + project_name, host_name + ) + + instance = context.create_instance(subset_name) + instance.data["family"] = family + instance.data["asset"] = asset_name + instance.data["subset"] = subset_name + instance.data["ids"] = publishable_ids + instance.data["publish"] = True + instance.data["creator_identifier"] = "auto_image" + + if auto_creator["mark_for_review"]: + instance.data["creator_attributes"] = {"mark_for_review": True} + instance.data["families"] = ["review"] + + self.log.info("auto image instance: {} ".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py new file mode 100644 index 0000000000..7de4adcaf4 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py @@ -0,0 +1,92 @@ +""" +Requires: + None + +Provides: + instance -> family ("review") +""" +import pyblish.api + +from openpype.hosts.photoshop import api as photoshop +from openpype.pipeline.create import get_subset_name + + +class CollectAutoReview(pyblish.api.ContextPlugin): + """Create review instance in non artist based workflow. + + Called only if PS is triggered in Webpublisher or in tests. 
+ """ + + label = "Collect Auto Review" + hosts = ["photoshop"] + order = pyblish.api.CollectorOrder + 0.2 + targets = ["remotepublish"] + + publish = True + + def process(self, context): + family = "review" + has_review = False + for instance in context: + if instance.data["family"] == family: + self.log.debug("Review instance found, won't create new") + has_review = True + + creator_attributes = instance.data.get("creator_attributes", {}) + if (creator_attributes.get("mark_for_review") and + "review" not in instance.data["families"]): + instance.data["families"].append("review") + + if has_review: + return + + stub = photoshop.stub() + stored_items = stub.get_layers_metadata() + for item in stored_items: + if item.get("creator_identifier") == family: + if not item.get("active"): + self.log.debug("Review instance disabled") + return + + auto_creator = context.data["project_settings"].get( + "photoshop", {}).get( + "create", {}).get( + "ReviewCreator", {}) + + if not auto_creator or not auto_creator["enabled"]: + self.log.debug("Review creator disabled, won't create new") + return + + variant = (context.data.get("variant") or + auto_creator["default_variant"]) + + project_name = context.data["anatomyData"]["project"]["name"] + proj_settings = context.data["project_settings"] + task_name = context.data["anatomyData"]["task"]["name"] + host_name = context.data["hostName"] + asset_doc = context.data["assetEntity"] + asset_name = asset_doc["name"] + + subset_name = get_subset_name( + family, + variant, + task_name, + asset_doc, + project_name, + host_name=host_name, + project_settings=proj_settings + ) + + instance = context.create_instance(subset_name) + instance.data.update({ + "subset": subset_name, + "label": subset_name, + "name": subset_name, + "family": family, + "families": [], + "representations": [], + "asset": asset_name, + "publish": self.publish + }) + + self.log.debug("auto review created::{}".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py new file mode 100644 index 0000000000..d10cf62c67 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py @@ -0,0 +1,99 @@ +import os +import pyblish.api + +from openpype.hosts.photoshop import api as photoshop +from openpype.pipeline.create import get_subset_name + + +class CollectAutoWorkfile(pyblish.api.ContextPlugin): + """Collect current script for publish.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Workfile" + hosts = ["photoshop"] + + targets = ["remotepublish"] + + def process(self, context): + family = "workfile" + file_path = context.data["currentFile"] + _, ext = os.path.splitext(file_path) + staging_dir = os.path.dirname(file_path) + base_name = os.path.basename(file_path) + workfile_representation = { + "name": ext[1:], + "ext": ext[1:], + "files": base_name, + "stagingDir": staging_dir, + } + + for instance in context: + if instance.data["family"] == family: + self.log.debug("Workfile instance found, won't create new") + instance.data.update({ + "label": base_name, + "name": base_name, + "representations": [], + }) + + # creating representation + _, ext = os.path.splitext(file_path) + instance.data["representations"].append( + workfile_representation) + + return + + stub = photoshop.stub() + stored_items = stub.get_layers_metadata() + for item in stored_items: + if item.get("creator_identifier") == family: + if not item.get("active"): + 
self.log.debug("Workfile instance disabled") + return + + project_name = context.data["anatomyData"]["project"]["name"] + proj_settings = context.data["project_settings"] + auto_creator = proj_settings.get( + "photoshop", {}).get( + "create", {}).get( + "WorkfileCreator", {}) + + if not auto_creator or not auto_creator["enabled"]: + self.log.debug("Workfile creator disabled, won't create new") + return + + # context.data["variant"] might come only from collect_batch_data + variant = (context.data.get("variant") or + auto_creator["default_variant"]) + + task_name = context.data["anatomyData"]["task"]["name"] + host_name = context.data["hostName"] + asset_doc = context.data["assetEntity"] + asset_name = asset_doc["name"] + + subset_name = get_subset_name( + family, + variant, + task_name, + asset_doc, + project_name, + host_name=host_name, + project_settings=proj_settings + ) + + # Create instance + instance = context.create_instance(subset_name) + instance.data.update({ + "subset": subset_name, + "label": base_name, + "name": base_name, + "family": family, + "families": [], + "representations": [], + "asset": asset_name + }) + + # creating representation + instance.data["representations"].append(workfile_representation) + + self.log.debug("auto workfile review created:{}".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_instances.py deleted file mode 100644 index 5bf12379b1..0000000000 --- a/openpype/hosts/photoshop/plugins/publish/collect_instances.py +++ /dev/null @@ -1,116 +0,0 @@ -import pprint - -import pyblish.api - -from openpype.settings import get_project_settings -from openpype.hosts.photoshop import api as photoshop -from openpype.lib import prepare_template_data -from openpype.pipeline import legacy_io - - -class CollectInstances(pyblish.api.ContextPlugin): - """Gather instances by LayerSet and file metadata - - Collects publishable instances from file metadata or enhance - already collected by creator (family == "image"). - - If no image instances are explicitly created, it looks if there is value - in `flatten_subset_template` (configurable in Settings), in that case it - produces flatten image with all visible layers. - - Identifier: - id (str): "pyblish.avalon.instance" - """ - - label = "Collect Instances" - order = pyblish.api.CollectorOrder - hosts = ["photoshop"] - families_mapping = { - "image": [] - } - # configurable in Settings - flatten_subset_template = "" - - def process(self, context): - instance_by_layer_id = {} - for instance in context: - if ( - instance.data["family"] == "image" and - instance.data.get("members")): - layer_id = str(instance.data["members"][0]) - instance_by_layer_id[layer_id] = instance - - stub = photoshop.stub() - layer_items = stub.get_layers() - layers_meta = stub.get_layers_metadata() - instance_names = [] - - all_layer_ids = [] - for layer_item in layer_items: - layer_meta_data = stub.read(layer_item, layers_meta) - all_layer_ids.append(layer_item.id) - - # Skip layers without metadata. - if layer_meta_data is None: - continue - - # Skip containers. 
- if "container" in layer_meta_data["id"]: - continue - - # active might not be in legacy meta - if not layer_meta_data.get("active", True): - continue - - instance = instance_by_layer_id.get(str(layer_item.id)) - if instance is None: - instance = context.create_instance(layer_meta_data["subset"]) - - instance.data["layer"] = layer_item - instance.data.update(layer_meta_data) - instance.data["families"] = self.families_mapping[ - layer_meta_data["family"] - ] - instance.data["publish"] = layer_item.visible - instance_names.append(layer_meta_data["subset"]) - - # Produce diagnostic message for any graphical - # user interface interested in visualising it. - self.log.info("Found: \"%s\" " % instance.data["name"]) - self.log.info("instance: {} ".format( - pprint.pformat(instance.data, indent=4))) - - if len(instance_names) != len(set(instance_names)): - self.log.warning("Duplicate instances found. " + - "Remove unwanted via Publisher") - - if len(instance_names) == 0 and self.flatten_subset_template: - project_name = context.data["projectEntity"]["name"] - variants = get_project_settings(project_name).get( - "photoshop", {}).get( - "create", {}).get( - "CreateImage", {}).get( - "defaults", ['']) - family = "image" - task_name = legacy_io.Session["AVALON_TASK"] - asset_name = context.data["assetEntity"]["name"] - - variant = context.data.get("variant") or variants[0] - fill_pairs = { - "variant": variant, - "family": family, - "task": task_name - } - - subset = self.flatten_subset_template.format( - **prepare_template_data(fill_pairs)) - - instance = context.create_instance(subset) - instance.data["family"] = family - instance.data["asset"] = asset_name - instance.data["subset"] = subset - instance.data["ids"] = all_layer_ids - instance.data["families"] = self.families_mapping[family] - instance.data["publish"] = True - - self.log.info("flatten instance: {} ".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_review.py b/openpype/hosts/photoshop/plugins/publish/collect_review.py index 7e598a8250..87ec4ee3f1 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_review.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_review.py @@ -14,10 +14,7 @@ from openpype.pipeline.create import get_subset_name class CollectReview(pyblish.api.ContextPlugin): - """Gather the active document as review instance. - - Triggers once even if no 'image' is published as by defaults it creates - flatten image from a workfile. + """Adds review to families for instances marked to be reviewable. 
""" label = "Collect Review" @@ -28,25 +25,8 @@ class CollectReview(pyblish.api.ContextPlugin): publish = True def process(self, context): - family = "review" - subset = get_subset_name( - family, - context.data.get("variant", ''), - context.data["anatomyData"]["task"]["name"], - context.data["assetEntity"], - context.data["anatomyData"]["project"]["name"], - host_name=context.data["hostName"], - project_settings=context.data["project_settings"] - ) - - instance = context.create_instance(subset) - instance.data.update({ - "subset": subset, - "label": subset, - "name": subset, - "family": family, - "families": [], - "representations": [], - "asset": os.environ["AVALON_ASSET"], - "publish": self.publish - }) + for instance in context: + creator_attributes = instance.data["creator_attributes"] + if (creator_attributes.get("mark_for_review") and + "review" not in instance.data["families"]): + instance.data["families"].append("review") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py index 9a5aad5569..9625464499 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py @@ -14,50 +14,19 @@ class CollectWorkfile(pyblish.api.ContextPlugin): default_variant = "Main" def process(self, context): - existing_instance = None for instance in context: if instance.data["family"] == "workfile": - self.log.debug("Workfile instance found, won't create new") - existing_instance = instance - break + file_path = context.data["currentFile"] + _, ext = os.path.splitext(file_path) + staging_dir = os.path.dirname(file_path) + base_name = os.path.basename(file_path) - family = "workfile" - # context.data["variant"] might come only from collect_batch_data - variant = context.data.get("variant") or self.default_variant - subset = get_subset_name( - family, - variant, - context.data["anatomyData"]["task"]["name"], - context.data["assetEntity"], - context.data["anatomyData"]["project"]["name"], - host_name=context.data["hostName"], - project_settings=context.data["project_settings"] - ) - - file_path = context.data["currentFile"] - staging_dir = os.path.dirname(file_path) - base_name = os.path.basename(file_path) - - # Create instance - if existing_instance is None: - instance = context.create_instance(subset) - instance.data.update({ - "subset": subset, - "label": base_name, - "name": base_name, - "family": family, - "families": [], - "representations": [], - "asset": os.environ["AVALON_ASSET"] - }) - else: - instance = existing_instance - - # creating representation - _, ext = os.path.splitext(file_path) - instance.data["representations"].append({ - "name": ext[1:], - "ext": ext[1:], - "files": base_name, - "stagingDir": staging_dir, - }) + # creating representation + _, ext = os.path.splitext(file_path) + instance.data["representations"].append({ + "name": ext[1:], + "ext": ext[1:], + "files": base_name, + "stagingDir": staging_dir, + }) + return diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index 9d7eff0211..d5416a389d 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -47,32 +47,42 @@ class ExtractReview(publish.Extractor): layers = self._get_layers_from_image_instances(instance) self.log.info("Layers image instance found: {}".format(layers)) + repre_name = "jpg" + 
repre_skeleton = { + "name": repre_name, + "ext": "jpg", + "stagingDir": staging_dir, + "tags": self.jpg_options['tags'], + } + + if instance.data["family"] != "review": + # enable creation of review, without this jpg review would clash + # with jpg of the image family + output_name = repre_name + repre_name = "{}_{}".format(repre_name, output_name) + repre_skeleton.update({"name": repre_name, + "outputName": output_name}) + if self.make_image_sequence and len(layers) > 1: self.log.info("Extract layers to image sequence.") img_list = self._save_sequence_images(staging_dir, layers) - instance.data["representations"].append({ - "name": "jpg", - "ext": "jpg", - "files": img_list, + repre_skeleton.update({ "frameStart": 0, "frameEnd": len(img_list), "fps": fps, - "stagingDir": staging_dir, - "tags": self.jpg_options['tags'], + "files": img_list, }) + instance.data["representations"].append(repre_skeleton) processed_img_names = img_list else: self.log.info("Extract layers to flatten image.") img_list = self._save_flatten_image(staging_dir, layers) - instance.data["representations"].append({ - "name": "jpg", - "ext": "jpg", - "files": img_list, # cannot be [] for single frame - "stagingDir": staging_dir, - "tags": self.jpg_options['tags'] + repre_skeleton.update({ + "files": img_list, }) + instance.data["representations"].append(repre_skeleton) processed_img_names = [img_list] ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") diff --git a/openpype/hosts/resolve/api/__init__.py b/openpype/hosts/resolve/api/__init__.py index 00a598548e..2b4546f8d6 100644 --- a/openpype/hosts/resolve/api/__init__.py +++ b/openpype/hosts/resolve/api/__init__.py @@ -24,6 +24,8 @@ from .lib import ( get_project_manager, get_current_project, get_current_timeline, + get_any_timeline, + get_new_timeline, create_bin, get_media_pool_item, create_media_pool_item, @@ -95,6 +97,8 @@ __all__ = [ "get_project_manager", "get_current_project", "get_current_timeline", + "get_any_timeline", + "get_new_timeline", "create_bin", "get_media_pool_item", "create_media_pool_item", diff --git a/openpype/hosts/resolve/api/lib.py b/openpype/hosts/resolve/api/lib.py index b3ad20df39..a44c527f13 100644 --- a/openpype/hosts/resolve/api/lib.py +++ b/openpype/hosts/resolve/api/lib.py @@ -15,6 +15,7 @@ log = Logger.get_logger(__name__) self = sys.modules[__name__] self.project_manager = None self.media_storage = None +self.current_project = None # OpenPype sequential rename variables self.rename_index = 0 @@ -85,22 +86,60 @@ def get_media_storage(): def get_current_project(): - # initialize project manager - get_project_manager() + """Get current project object. + """ + if not self.current_project: + self.current_project = get_project_manager().GetCurrentProject() - return self.project_manager.GetCurrentProject() + return self.current_project def get_current_timeline(new=False): - # get current project + """Get current timeline object. 
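# --- Editor's aside, not part of the patch -----------------------------------
# Minimal sketch showing how the timeline helpers added in this hunk are meant
# to be combined: `get_any_timeline` and `get_new_timeline` (defined just
# below) serve as fallbacks when no timeline is active, mirroring the chain
# used later in `get_current_timeline_items`. Assumes it runs inside Resolve's
# scripting environment where the OpenPype Resolve API is importable.
from openpype.hosts.resolve.api import (
    get_current_timeline,
    get_any_timeline,
    get_new_timeline,
)


def get_timeline_anyhow():
    """Return the current timeline, any existing timeline, or a new one."""
    return get_current_timeline() or get_any_timeline() or get_new_timeline()
# ------------------------------------------------------------------------------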
+ + Args: + new (bool)[optional]: [DEPRECATED] if True it will create + new timeline if none exists + + Returns: + TODO: will need to reflect future `None` + object: resolve.Timeline + """ project = get_current_project() + timeline = project.GetCurrentTimeline() + # return current timeline if any + if timeline: + return timeline + + # TODO: [deprecated] and will be removed in future if new: - media_pool = project.GetMediaPool() - new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name) - project.SetCurrentTimeline(new_timeline) + return get_new_timeline() - return project.GetCurrentTimeline() + +def get_any_timeline(): + """Get any timeline object. + + Returns: + object | None: resolve.Timeline + """ + project = get_current_project() + timeline_count = project.GetTimelineCount() + if timeline_count > 0: + return project.GetTimelineByIndex(1) + + +def get_new_timeline(): + """Get new timeline object. + + Returns: + object: resolve.Timeline + """ + project = get_current_project() + media_pool = project.GetMediaPool() + new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name) + project.SetCurrentTimeline(new_timeline) + return new_timeline def create_bin(name: str, root: object = None) -> object: @@ -312,7 +351,13 @@ def get_current_timeline_items( track_type = track_type or "video" selecting_color = selecting_color or "Chocolate" project = get_current_project() - timeline = get_current_timeline() + + # get timeline anyhow + timeline = ( + get_current_timeline() or + get_any_timeline() or + get_new_timeline() + ) selected_clips = [] # get all tracks count filtered by track type diff --git a/openpype/hosts/resolve/api/plugin.py b/openpype/hosts/resolve/api/plugin.py index 609cff60f7..e5846c2fc2 100644 --- a/openpype/hosts/resolve/api/plugin.py +++ b/openpype/hosts/resolve/api/plugin.py @@ -327,7 +327,10 @@ class ClipLoader: self.active_timeline = options["timeline"] else: # create new sequence - self.active_timeline = lib.get_current_timeline(new=True) + self.active_timeline = ( + lib.get_current_timeline() or + lib.get_new_timeline() + ) else: self.active_timeline = lib.get_current_timeline() diff --git a/openpype/hosts/resolve/hooks/pre_resolve_setup.py b/openpype/hosts/resolve/hooks/pre_resolve_setup.py index 8574b3ad01..d066fc2da2 100644 --- a/openpype/hosts/resolve/hooks/pre_resolve_setup.py +++ b/openpype/hosts/resolve/hooks/pre_resolve_setup.py @@ -1,4 +1,5 @@ import os +from pathlib import Path import platform from openpype.lib import PreLaunchHook from openpype.hosts.resolve.utils import setup @@ -6,33 +7,57 @@ from openpype.hosts.resolve.utils import setup class ResolvePrelaunch(PreLaunchHook): """ - This hook will check if current workfile path has Resolve - project inside. IF not, it initialize it and finally it pass - path to the project by environment variable to Premiere launcher - shell script. + This hook will set up the Resolve scripting environment as described in + Resolve's documentation found with the installed application at + {resolve}/Support/Developer/Scripting/README.txt + + Prepares the following environment variables: + - `RESOLVE_SCRIPT_API` + - `RESOLVE_SCRIPT_LIB` + + It adds $RESOLVE_SCRIPT_API/Modules to PYTHONPATH. + + Additionally it sets up the Python home for Python 3 based on the + RESOLVE_PYTHON3_HOME in the environment (usually defined in OpenPype's + Application environment for Resolve by the admin). For this it sets + PYTHONHOME and PATH variables. 
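# --- Editor's aside, not part of the patch -----------------------------------
# Rough sketch of the environment this hook prepares on Windows, following the
# docstring above; the concrete paths are illustrative examples only and
# RESOLVE_PYTHON3_HOME is assumed to point at a local Python 3 install.
import os

python3_home = "C:/Python39"
script_api = ("C:/ProgramData/Blackmagic Design/DaVinci Resolve"
              "/Support/Developer/Scripting")

prepared_env = {
    "RESOLVE_SCRIPT_API": script_api,
    "RESOLVE_SCRIPT_LIB": ("C:/Program Files/Blackmagic Design"
                           "/DaVinci Resolve/fusionscript.dll"),
    # $RESOLVE_SCRIPT_API/Modules is prepended to PYTHONPATH
    "PYTHONPATH": (f"{script_api}/Modules{os.pathsep}"
                   + os.environ.get("PYTHONPATH", "")),
    # Python home becomes PYTHONHOME and is prepended to PATH so Resolve
    # detects Python 3 correctly
    "PYTHONHOME": python3_home,
    "PATH": f"{python3_home}{os.pathsep}" + os.environ.get("PATH", ""),
    "OPENPYPE_LOG_NO_COLORS": "True",
}
# ------------------------------------------------------------------------------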
+ + It also defines: + - `RESOLVE_UTILITY_SCRIPTS_DIR`: Destination directory for OpenPype + Fusion scripts to be copied to for Resolve to pick them up. + - `OPENPYPE_LOG_NO_COLORS` to True to ensure OP doesn't try to + use logging with terminal colors as it fails in Resolve. + """ + app_groups = ["resolve"] def execute(self): current_platform = platform.system().lower() - PROGRAMDATA = self.launch_context.env.get("PROGRAMDATA", "") - RESOLVE_SCRIPT_API_ = { + programdata = self.launch_context.env.get("PROGRAMDATA", "") + resolve_script_api_locations = { "windows": ( - f"{PROGRAMDATA}/Blackmagic Design/" + f"{programdata}/Blackmagic Design/" "DaVinci Resolve/Support/Developer/Scripting" ), "darwin": ( "/Library/Application Support/Blackmagic Design" "/DaVinci Resolve/Developer/Scripting" ), - "linux": "/opt/resolve/Developer/Scripting" + "linux": "/opt/resolve/Developer/Scripting", } - RESOLVE_SCRIPT_API = os.path.normpath( - RESOLVE_SCRIPT_API_[current_platform]) - self.launch_context.env["RESOLVE_SCRIPT_API"] = RESOLVE_SCRIPT_API + resolve_script_api = Path( + resolve_script_api_locations[current_platform] + ) + self.log.info( + f"setting RESOLVE_SCRIPT_API variable to {resolve_script_api}" + ) + self.launch_context.env[ + "RESOLVE_SCRIPT_API" + ] = resolve_script_api.as_posix() - RESOLVE_SCRIPT_LIB_ = { + resolve_script_lib_dirs = { "windows": ( "C:/Program Files/Blackmagic Design" "/DaVinci Resolve/fusionscript.dll" @@ -41,61 +66,69 @@ class ResolvePrelaunch(PreLaunchHook): "/Applications/DaVinci Resolve/DaVinci Resolve.app" "/Contents/Libraries/Fusion/fusionscript.so" ), - "linux": "/opt/resolve/libs/Fusion/fusionscript.so" + "linux": "/opt/resolve/libs/Fusion/fusionscript.so", } - RESOLVE_SCRIPT_LIB = os.path.normpath( - RESOLVE_SCRIPT_LIB_[current_platform]) - self.launch_context.env["RESOLVE_SCRIPT_LIB"] = RESOLVE_SCRIPT_LIB + resolve_script_lib = Path(resolve_script_lib_dirs[current_platform]) + self.launch_context.env[ + "RESOLVE_SCRIPT_LIB" + ] = resolve_script_lib.as_posix() + self.log.info( + f"setting RESOLVE_SCRIPT_LIB variable to {resolve_script_lib}" + ) - # TODO: add OTIO installation from `openpype/requirements.py` + # TODO: add OTIO installation from `openpype/requirements.py` # making sure python <3.9.* is installed at provided path - python3_home = os.path.normpath( - self.launch_context.env.get("RESOLVE_PYTHON3_HOME", "")) + python3_home = Path( + self.launch_context.env.get("RESOLVE_PYTHON3_HOME", "") + ) - assert os.path.isdir(python3_home), ( + assert python3_home.is_dir(), ( "Python 3 is not installed at the provided folder path. Either " "make sure the `environments\resolve.json` is having correctly " "set `RESOLVE_PYTHON3_HOME` or make sure Python 3 is installed " f"in given path. 
\nRESOLVE_PYTHON3_HOME: `{python3_home}`" ) - self.launch_context.env["PYTHONHOME"] = python3_home - self.log.info(f"Path to Resolve Python folder: `{python3_home}`...") - - # add to the python path to path - env_path = self.launch_context.env["PATH"] - self.launch_context.env["PATH"] = os.pathsep.join([ - python3_home, - os.path.join(python3_home, "Scripts") - ] + env_path.split(os.pathsep)) - - self.log.debug(f"PATH: {self.launch_context.env['PATH']}") + python3_home_str = python3_home.as_posix() + self.launch_context.env["PYTHONHOME"] = python3_home_str + self.log.info(f"Path to Resolve Python folder: `{python3_home_str}`") # add to the PYTHONPATH env_pythonpath = self.launch_context.env["PYTHONPATH"] - self.launch_context.env["PYTHONPATH"] = os.pathsep.join([ - os.path.join(python3_home, "Lib", "site-packages"), - os.path.join(RESOLVE_SCRIPT_API, "Modules"), - ] + env_pythonpath.split(os.pathsep)) + modules_path = Path(resolve_script_api, "Modules").as_posix() + self.launch_context.env[ + "PYTHONPATH" + ] = f"{modules_path}{os.pathsep}{env_pythonpath}" self.log.debug(f"PYTHONPATH: {self.launch_context.env['PYTHONPATH']}") - RESOLVE_UTILITY_SCRIPTS_DIR_ = { + # add the pythonhome folder to PATH because on Windows + # this is needed for Py3 to be correctly detected within Resolve + env_path = self.launch_context.env["PATH"] + self.log.info(f"Adding `{python3_home_str}` to the PATH variable") + self.launch_context.env[ + "PATH" + ] = f"{python3_home_str}{os.pathsep}{env_path}" + + self.log.debug(f"PATH: {self.launch_context.env['PATH']}") + + resolve_utility_scripts_dirs = { "windows": ( - f"{PROGRAMDATA}/Blackmagic Design" + f"{programdata}/Blackmagic Design" "/DaVinci Resolve/Fusion/Scripts/Comp" ), "darwin": ( "/Library/Application Support/Blackmagic Design" "/DaVinci Resolve/Fusion/Scripts/Comp" ), - "linux": "/opt/resolve/Fusion/Scripts/Comp" + "linux": "/opt/resolve/Fusion/Scripts/Comp", } - RESOLVE_UTILITY_SCRIPTS_DIR = os.path.normpath( - RESOLVE_UTILITY_SCRIPTS_DIR_[current_platform] + resolve_utility_scripts_dir = Path( + resolve_utility_scripts_dirs[current_platform] ) # setting utility scripts dir for scripts syncing - self.launch_context.env["RESOLVE_UTILITY_SCRIPTS_DIR"] = ( - RESOLVE_UTILITY_SCRIPTS_DIR) + self.launch_context.env[ + "RESOLVE_UTILITY_SCRIPTS_DIR" + ] = resolve_utility_scripts_dir.as_posix() # remove terminal coloring tags self.launch_context.env["OPENPYPE_LOG_NO_COLORS"] = "True" diff --git a/openpype/hosts/resolve/plugins/load/load_clip.py b/openpype/hosts/resolve/plugins/load/load_clip.py index d30a7ea272..05bfb003d6 100644 --- a/openpype/hosts/resolve/plugins/load/load_clip.py +++ b/openpype/hosts/resolve/plugins/load/load_clip.py @@ -19,6 +19,7 @@ from openpype.lib.transcoding import ( IMAGE_EXTENSIONS ) + class LoadClip(plugin.TimelineItemLoader): """Load a subset to timeline as clip diff --git a/openpype/hosts/resolve/utility_scripts/__OpenPype__Menu__.py b/openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py similarity index 100% rename from openpype/hosts/resolve/utility_scripts/__OpenPype__Menu__.py rename to openpype/hosts/resolve/utility_scripts/OpenPype__Menu.py diff --git a/openpype/hosts/resolve/utility_scripts/README.markdown b/openpype/hosts/resolve/utility_scripts/README.markdown deleted file mode 100644 index 8b13789179..0000000000 --- a/openpype/hosts/resolve/utility_scripts/README.markdown +++ /dev/null @@ -1 +0,0 @@ - diff --git a/openpype/hosts/resolve/utility_scripts/OTIO_export.py 
b/openpype/hosts/resolve/utility_scripts/develop/OTIO_export.py similarity index 100% rename from openpype/hosts/resolve/utility_scripts/OTIO_export.py rename to openpype/hosts/resolve/utility_scripts/develop/OTIO_export.py diff --git a/openpype/hosts/resolve/utility_scripts/OTIO_import.py b/openpype/hosts/resolve/utility_scripts/develop/OTIO_import.py similarity index 100% rename from openpype/hosts/resolve/utility_scripts/OTIO_import.py rename to openpype/hosts/resolve/utility_scripts/develop/OTIO_import.py diff --git a/openpype/hosts/resolve/utility_scripts/OpenPype_sync_util_scripts.py b/openpype/hosts/resolve/utility_scripts/develop/OpenPype_sync_util_scripts.py similarity index 100% rename from openpype/hosts/resolve/utility_scripts/OpenPype_sync_util_scripts.py rename to openpype/hosts/resolve/utility_scripts/develop/OpenPype_sync_util_scripts.py diff --git a/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py new file mode 100644 index 0000000000..8270496f64 --- /dev/null +++ b/openpype/hosts/resolve/utility_scripts/tests/testing_timeline_op.py @@ -0,0 +1,13 @@ +#! python3 +from openpype.pipeline import install_host +from openpype.hosts.resolve import api as bmdvr +from openpype.hosts.resolve.api.lib import get_current_project + +if __name__ == "__main__": + install_host(bmdvr) + project = get_current_project() + timeline_count = project.GetTimelineCount() + print(f"Timeline count: {timeline_count}") + timeline = project.GetTimelineByIndex(timeline_count) + print(f"Timeline name: {timeline.GetName()}") + print(timeline.GetTrackCount("video")) diff --git a/openpype/hosts/resolve/utils.py b/openpype/hosts/resolve/utils.py index 5881f153ae..9a161f4865 100644 --- a/openpype/hosts/resolve/utils.py +++ b/openpype/hosts/resolve/utils.py @@ -1,6 +1,6 @@ import os import shutil -from openpype.lib import Logger +from openpype.lib import Logger, is_running_from_build RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -8,30 +8,30 @@ RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) def setup(env): log = Logger.get_logger("ResolveSetup") scripts = {} - us_env = env.get("RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR") - us_dir = env["RESOLVE_UTILITY_SCRIPTS_DIR"] + util_scripts_env = env.get("RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR") + util_scripts_dir = env["RESOLVE_UTILITY_SCRIPTS_DIR"] - us_paths = [os.path.join( + util_scripts_paths = [os.path.join( RESOLVE_ROOT_DIR, "utility_scripts" )] # collect script dirs - if us_env: - log.info("Utility Scripts Env: `{}`".format(us_env)) - us_paths = us_env.split( - os.pathsep) + us_paths + if util_scripts_env: + log.info("Utility Scripts Env: `{}`".format(util_scripts_env)) + util_scripts_paths = util_scripts_env.split( + os.pathsep) + util_scripts_paths # collect scripts from dirs - for path in us_paths: + for path in util_scripts_paths: scripts.update({path: os.listdir(path)}) - log.info("Utility Scripts Dir: `{}`".format(us_paths)) + log.info("Utility Scripts Dir: `{}`".format(util_scripts_paths)) log.info("Utility Scripts: `{}`".format(scripts)) # make sure no script file is in folder - for s in os.listdir(us_dir): - path = os.path.join(us_dir, s) + for script in os.listdir(util_scripts_dir): + path = os.path.join(util_scripts_dir, script) log.info("Removing `{}`...".format(path)) if os.path.isdir(path): shutil.rmtree(path, onerror=None) @@ -39,12 +39,17 @@ def setup(env): os.remove(path) # copy scripts into Resolve's utility scripts dir - for d, sl in 
scripts.items(): - # directory and scripts list - for s in sl: - # script in script list - src = os.path.join(d, s) - dst = os.path.join(us_dir, s) + for directory, scripts in scripts.items(): + for script in scripts: + if ( + is_running_from_build() and + script in ["tests", "develop"] + ): + # only copy those if started from build + continue + + src = os.path.join(directory, script) + dst = os.path.join(util_scripts_dir, script) log.info("Copying `{}` to `{}`...".format(src, dst)) if os.path.isdir(src): shutil.copytree( diff --git a/openpype/hosts/substancepainter/__init__.py b/openpype/hosts/substancepainter/__init__.py new file mode 100644 index 0000000000..4c33b9f507 --- /dev/null +++ b/openpype/hosts/substancepainter/__init__.py @@ -0,0 +1,10 @@ +from .addon import ( + SubstanceAddon, + SUBSTANCE_HOST_DIR, +) + + +__all__ = ( + "SubstanceAddon", + "SUBSTANCE_HOST_DIR" +) diff --git a/openpype/hosts/substancepainter/addon.py b/openpype/hosts/substancepainter/addon.py new file mode 100644 index 0000000000..2fbea139c5 --- /dev/null +++ b/openpype/hosts/substancepainter/addon.py @@ -0,0 +1,34 @@ +import os +from openpype.modules import OpenPypeModule, IHostAddon + +SUBSTANCE_HOST_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class SubstanceAddon(OpenPypeModule, IHostAddon): + name = "substancepainter" + host_name = "substancepainter" + + def initialize(self, module_settings): + self.enabled = True + + def add_implementation_envs(self, env, _app): + # Add requirements to SUBSTANCE_PAINTER_PLUGINS_PATH + plugin_path = os.path.join(SUBSTANCE_HOST_DIR, "deploy") + plugin_path = plugin_path.replace("\\", "/") + if env.get("SUBSTANCE_PAINTER_PLUGINS_PATH"): + plugin_path += os.pathsep + env["SUBSTANCE_PAINTER_PLUGINS_PATH"] + + env["SUBSTANCE_PAINTER_PLUGINS_PATH"] = plugin_path + + # Log in Substance Painter doesn't support custom terminal colors + env["OPENPYPE_LOG_NO_COLORS"] = "Yes" + + def get_launch_hook_paths(self, app): + if app.host_name != self.host_name: + return [] + return [ + os.path.join(SUBSTANCE_HOST_DIR, "hooks") + ] + + def get_workfile_extensions(self): + return [".spp", ".toc"] diff --git a/openpype/hosts/substancepainter/api/__init__.py b/openpype/hosts/substancepainter/api/__init__.py new file mode 100644 index 0000000000..937d0c429e --- /dev/null +++ b/openpype/hosts/substancepainter/api/__init__.py @@ -0,0 +1,8 @@ +from .pipeline import ( + SubstanceHost, + +) + +__all__ = [ + "SubstanceHost", +] diff --git a/openpype/hosts/substancepainter/api/colorspace.py b/openpype/hosts/substancepainter/api/colorspace.py new file mode 100644 index 0000000000..375b61b39b --- /dev/null +++ b/openpype/hosts/substancepainter/api/colorspace.py @@ -0,0 +1,157 @@ +"""Substance Painter OCIO management + +Adobe Substance 3D Painter supports OCIO color management using a per project +configuration. 
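# --- Editor's aside, not part of the patch -----------------------------------
# Minimal usage sketch for get_project_channel_data(), defined below in this
# module; it mirrors how lib.get_parsed_export_maps() later collects the
# project's export color spaces. The None-guard is an assumption for projects
# that have no data or color channel at all.
from openpype.hosts.substancepainter.api.colorspace import (
    get_project_channel_data,
)


def get_project_export_colorspaces():
    """Return the set of color space names used by the open project."""
    return {
        data["colorSpace"]
        for data in get_project_channel_data().values()
        if data is not None
    }
# ------------------------------------------------------------------------------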
Output color spaces are defined at the project level + +More information see: + - https://substance3d.adobe.com/documentation/spdoc/color-management-223053233.html # noqa + - https://substance3d.adobe.com/documentation/spdoc/color-management-with-opencolorio-225969419.html # noqa + +""" +import substance_painter.export +import substance_painter.js +import json + +from .lib import ( + get_document_structure, + get_channel_format +) + + +def _iter_document_stack_channels(): + """Yield all stack paths and channels project""" + + for material in get_document_structure()["materials"]: + material_name = material["name"] + for stack in material["stacks"]: + stack_name = stack["name"] + if stack_name: + stack_path = [material_name, stack_name] + else: + stack_path = material_name + for channel in stack["channels"]: + yield stack_path, channel + + +def _get_first_color_and_data_stack_and_channel(): + """Return first found color channel and data channel.""" + color_channel = None + data_channel = None + for stack_path, channel in _iter_document_stack_channels(): + channel_format = get_channel_format(stack_path, channel) + if channel_format["color"]: + color_channel = (stack_path, channel) + else: + data_channel = (stack_path, channel) + + if color_channel and data_channel: + return color_channel, data_channel + + return color_channel, data_channel + + +def get_project_channel_data(): + """Return colorSpace settings for the current substance painter project. + + In Substance Painter only color channels have Color Management enabled + whereas data channels have no color management applied. This can't be + changed. The artist can only customize the export color space for color + channels per bit-depth for 8 bpc, 16 bpc and 32 bpc. + + As such this returns the color space for 'data' and for per bit-depth + for color channels. + + Example output: + { + "data": {'colorSpace': 'Utility - Raw'}, + "8": {"colorSpace": "ACES - AcesCG"}, + "16": {"colorSpace": "ACES - AcesCG"}, + "16f": {"colorSpace": "ACES - AcesCG"}, + "32f": {"colorSpace": "ACES - AcesCG"} + } + + """ + + keys = ["colorSpace"] + query = {key: f"${key}" for key in keys} + + config = { + "exportPath": "/", + "exportShaderParams": False, + "defaultExportPreset": "query_preset", + + "exportPresets": [{ + "name": "query_preset", + + # List of maps making up this export preset. + "maps": [{ + "fileName": json.dumps(query), + # List of source/destination defining which channels will + # make up the texture file. + "channels": [], + "parameters": { + "fileFormat": "exr", + "bitDepth": "32f", + "dithering": False, + "sizeLog2": 4, + "paddingAlgorithm": "passthrough", + "dilationDistance": 16 + } + }] + }], + } + + def _get_query_output(config): + # Return the basename of the single output path we defined + result = substance_painter.export.list_project_textures(config) + path = next(iter(result.values()))[0] + # strip extension and slash since we know relevant json data starts + # and ends with { and } characters + path = path.strip("/\\.exr") + return json.loads(path) + + # Query for each type of channel (color and data) + color_channel, data_channel = _get_first_color_and_data_stack_and_channel() + colorspaces = {} + for key, channel_data in { + "data": data_channel, + "color": color_channel + }.items(): + if channel_data is None: + # No channel of that datatype anywhere in the Stack. 
We're + # unable to identify the output color space of the project + colorspaces[key] = None + continue + + stack, channel = channel_data + + # Stack must be a string + if not isinstance(stack, str): + # Assume iterable + stack = "/".join(stack) + + # Define the temp output config + config["exportList"] = [{"rootPath": stack}] + config_map = config["exportPresets"][0]["maps"][0] + config_map["channels"] = [ + { + "destChannel": x, + "srcChannel": x, + "srcMapType": "documentMap", + "srcMapName": channel + } for x in "RGB" + ] + + if key == "color": + # Query for each bit depth + # Color space definition can have a different OCIO config set + # for 8-bit, 16-bit and 32-bit outputs so we need to check each + # bit depth + for depth in ["8", "16", "16f", "32f"]: + config_map["parameters"]["bitDepth"] = depth # noqa + colorspaces[key + depth] = _get_query_output(config) + else: + # Data channel (not color managed) + colorspaces[key] = _get_query_output(config) + + return colorspaces diff --git a/openpype/hosts/substancepainter/api/lib.py b/openpype/hosts/substancepainter/api/lib.py new file mode 100644 index 0000000000..2cd08f862e --- /dev/null +++ b/openpype/hosts/substancepainter/api/lib.py @@ -0,0 +1,649 @@ +import os +import re +import json +from collections import defaultdict + +import substance_painter.project +import substance_painter.resource +import substance_painter.js +import substance_painter.export + +from qtpy import QtGui, QtWidgets, QtCore + + +def get_export_presets(): + """Return Export Preset resource URLs for all available Export Presets. + + Returns: + dict: {Resource url: GUI Label} + + """ + # TODO: Find more optimal way to find all export templates + + preset_resources = {} + for shelf in substance_painter.resource.Shelves.all(): + shelf_path = os.path.normpath(shelf.path()) + + presets_path = os.path.join(shelf_path, "export-presets") + if not os.path.exists(presets_path): + continue + + for filename in os.listdir(presets_path): + if filename.endswith(".spexp"): + template_name = os.path.splitext(filename)[0] + + resource = substance_painter.resource.ResourceID( + context=shelf.name(), + name=template_name + ) + resource_url = resource.url() + + preset_resources[resource_url] = template_name + + # Sort by template name + export_templates = dict(sorted(preset_resources.items(), + key=lambda x: x[1])) + + # Add default built-ins at the start + # TODO: find the built-ins automatically; scraped with https://gist.github.com/BigRoy/97150c7c6f0a0c916418207b9a2bc8f1 # noqa + result = { + "export-preset-generator://viewport2d": "2D View", # noqa + "export-preset-generator://doc-channel-normal-no-alpha": "Document channels + Normal + AO (No Alpha)", # noqa + "export-preset-generator://doc-channel-normal-with-alpha": "Document channels + Normal + AO (With Alpha)", # noqa + "export-preset-generator://sketchfab": "Sketchfab", # noqa + "export-preset-generator://adobe-standard-material": "Substance 3D Stager", # noqa + "export-preset-generator://usd": "USD PBR Metal Roughness", # noqa + "export-preset-generator://gltf": "glTF PBR Metal Roughness", # noqa + "export-preset-generator://gltf-displacement": "glTF PBR Metal Roughness + Displacement texture (experimental)" # noqa + } + result.update(export_templates) + return result + + +def _convert_stack_path_to_cmd_str(stack_path): + """Convert stack path `str` or `[str, str]` for javascript query + + Example usage: + >>> stack_path = _convert_stack_path_to_cmd_str(stack_path) + >>> cmd = 
f"alg.mapexport.channelIdentifiers({stack_path})" + >>> substance_painter.js.evaluate(cmd) + + Args: + stack_path (list or str): Path to the stack, could be + "Texture set name" or ["Texture set name", "Stack name"] + + Returns: + str: Stack path usable as argument in javascript query. + + """ + return json.dumps(stack_path) + + +def get_channel_identifiers(stack_path=None): + """Return the list of channel identifiers. + + If a context is passed (texture set/stack), + return only used channels with resolved user channels. + + Channel identifiers are: + basecolor, height, specular, opacity, emissive, displacement, + glossiness, roughness, anisotropylevel, anisotropyangle, transmissive, + scattering, reflection, ior, metallic, normal, ambientOcclusion, + diffuse, specularlevel, blendingmask, [custom user names]. + + Args: + stack_path (list or str, Optional): Path to the stack, could be + "Texture set name" or ["Texture set name", "Stack name"] + + Returns: + list: List of channel identifiers. + + """ + if stack_path is None: + stack_path = "" + else: + stack_path = _convert_stack_path_to_cmd_str(stack_path) + cmd = f"alg.mapexport.channelIdentifiers({stack_path})" + return substance_painter.js.evaluate(cmd) + + +def get_channel_format(stack_path, channel): + """Retrieve the channel format of a specific stack channel. + + See `alg.mapexport.channelFormat` (javascript API) for more details. + + The channel format data is: + "label" (str): The channel format label: could be one of + [sRGB8, L8, RGB8, L16, RGB16, L16F, RGB16F, L32F, RGB32F] + "color" (bool): True if the format is in color, False is grayscale + "floating" (bool): True if the format uses floating point + representation, false otherwise + "bitDepth" (int): Bit per color channel (could be 8, 16 or 32 bpc) + + Arguments: + stack_path (list or str): Path to the stack, could be + "Texture set name" or ["Texture set name", "Stack name"] + channel (str): Identifier of the channel to export + (see `get_channel_identifiers`) + + Returns: + dict: The channel format data. + + """ + stack_path = _convert_stack_path_to_cmd_str(stack_path) + cmd = f"alg.mapexport.channelFormat({stack_path}, '{channel}')" + return substance_painter.js.evaluate(cmd) + + +def get_document_structure(): + """Dump the document structure. + + See `alg.mapexport.documentStructure` (javascript API) for more details. + + Returns: + dict: Document structure or None when no project is open + + """ + return substance_painter.js.evaluate("alg.mapexport.documentStructure()") + + +def get_export_templates(config, format="png", strip_folder=True): + """Return export config outputs. + + This use the Javascript API `alg.mapexport.getPathsExportDocumentMaps` + which returns a different output than using the Python equivalent + `substance_painter.export.list_project_textures(config)`. + + The nice thing about the Javascript API version is that it returns the + output textures grouped by filename template. + + A downside is that it doesn't return all the UDIM tiles but per template + always returns a single file. + + Note: + The file format needs to be explicitly passed to the Javascript API + but upon exporting through the Python API the file format can be based + on the output preset. So it's likely the file extension will mismatch + + Warning: + Even though the function appears to solely get the expected outputs + the Javascript API will actually create the config's texture output + folder if it does not exist yet. As such, a valid path must be set. 
+ + Example output: + { + "DefaultMaterial": { + "$textureSet_BaseColor(_$colorSpace)(.$udim)": "DefaultMaterial_BaseColor_ACES - ACEScg.1002.png", # noqa + "$textureSet_Emissive(_$colorSpace)(.$udim)": "DefaultMaterial_Emissive_ACES - ACEScg.1002.png", # noqa + "$textureSet_Height(_$colorSpace)(.$udim)": "DefaultMaterial_Height_Utility - Raw.1002.png", # noqa + "$textureSet_Metallic(_$colorSpace)(.$udim)": "DefaultMaterial_Metallic_Utility - Raw.1002.png", # noqa + "$textureSet_Normal(_$colorSpace)(.$udim)": "DefaultMaterial_Normal_Utility - Raw.1002.png", # noqa + "$textureSet_Roughness(_$colorSpace)(.$udim)": "DefaultMaterial_Roughness_Utility - Raw.1002.png" # noqa + } + } + + Arguments: + config (dict) Export config + format (str, Optional): Output format to write to, defaults to 'png' + strip_folder (bool, Optional): Whether to strip the output folder + from the output filenames. + + Returns: + dict: The expected output maps. + + """ + folder = config["exportPath"].replace("\\", "/") + preset = config["defaultExportPreset"] + cmd = f'alg.mapexport.getPathsExportDocumentMaps("{preset}", "{folder}", "{format}")' # noqa + result = substance_painter.js.evaluate(cmd) + + if strip_folder: + for _stack, maps in result.items(): + for map_template, map_filepath in maps.items(): + map_filepath = map_filepath.replace("\\", "/") + assert map_filepath.startswith(folder) + map_filename = map_filepath[len(folder):].lstrip("/") + maps[map_template] = map_filename + + return result + + +def _templates_to_regex(templates, + texture_set, + colorspaces, + project, + mesh): + """Return regex based on a Substance Painter expot filename template. + + This converts Substance Painter export filename templates like + `$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)` into a regex + which can be used to query an output filename to help retrieve: + + - Which template filename the file belongs to. + - Which color space the file is written with. + - Which udim tile it is exactly. + + This is used by `get_parsed_export_maps` which tries to as explicitly + as possible match the filename pattern against the known possible outputs. + That's why Texture Set name, Color spaces, Project path and mesh path must + be provided. By doing so we get the best shot at correctly matching the + right template because otherwise $texture_set could basically be any string + and thus match even that of a color space or mesh. + + Arguments: + templates (list): List of templates to convert to regex. + texture_set (str): The texture set to match against. + colorspaces (list): The colorspaces defined in the current project. + project (str): Filepath of current substance project. + mesh (str): Path to mesh file used in current project. 
+ + Returns: + dict: Template: Template regex pattern + + """ + def _filename_no_ext(path): + return os.path.splitext(os.path.basename(path))[0] + + if colorspaces and any(colorspaces): + colorspace_match = "|".join(re.escape(c) for c in set(colorspaces)) + colorspace_match = f"({colorspace_match})" + else: + # No colorspace support enabled + colorspace_match = "" + + # Key to regex valid search values + key_matches = { + "$project": re.escape(_filename_no_ext(project)), + "$mesh": re.escape(_filename_no_ext(mesh)), + "$textureSet": re.escape(texture_set), + "$colorSpace": colorspace_match, + "$udim": "([0-9]{4})" + } + + # Turn the templates into regexes + regexes = {} + for template in templates: + + # We need to tweak a temp + search_regex = re.escape(template) + + # Let's assume that any ( and ) character in the file template was + # intended as an optional template key and do a simple `str.replace` + # Note: we are matching against re.escape(template) so will need to + # search for the escaped brackets. + search_regex = search_regex.replace(re.escape("("), "(") + search_regex = search_regex.replace(re.escape(")"), ")?") + + # Substitute each key into a named group + for key, key_expected_regex in key_matches.items(): + + # We want to use the template as a regex basis in the end so will + # escape the whole thing first. Note that thus we'll need to + # search for the escaped versions of the keys too. + escaped_key = re.escape(key) + key_label = key[1:] # key without $ prefix + + key_expected_grp_regex = f"(?P<{key_label}>{key_expected_regex})" + search_regex = search_regex.replace(escaped_key, + key_expected_grp_regex) + + # The filename templates don't include the extension so we add it + # to be able to match the out filename beginning to end + ext_regex = r"(?P\.[A-Za-z][A-Za-z0-9-]*)" + search_regex = rf"^{search_regex}{ext_regex}$" + + regexes[template] = search_regex + + return regexes + + +def strip_template(template, strip="._ "): + """Return static characters in a substance painter filename template. + + >>> strip_template("$textureSet_HELLO(.$udim)") + # HELLO + >>> strip_template("$mesh_$textureSet_HELLO_WORLD_$colorSpace(.$udim)") + # HELLO_WORLD + >>> strip_template("$textureSet_HELLO(.$udim)", strip=None) + # _HELLO + >>> strip_template("$mesh_$textureSet_$colorSpace(.$udim)", strip=None) + # _HELLO_ + >>> strip_template("$textureSet_HELLO(.$udim)") + # _HELLO + + Arguments: + template (str): Filename template to strip. + strip (str, optional): Characters to strip from beginning and end + of the static string in template. Defaults to: `._ `. + + Returns: + str: The static string in filename template. + + """ + # Return only characters that were part of the template that were static. + # Remove all keys + keys = ["$project", "$mesh", "$textureSet", "$udim", "$colorSpace"] + stripped_template = template + for key in keys: + stripped_template = stripped_template.replace(key, "") + + # Everything inside an optional bracket space is excluded since it's not + # static. We keep a counter to track whether we are currently iterating + # over parts of the template that are inside an 'optional' group or not. + counter = 0 + result = "" + for char in stripped_template: + if char == "(": + counter += 1 + elif char == ")": + counter -= 1 + if counter < 0: + counter = 0 + else: + if counter == 0: + result += char + + if strip: + # Strip of any trailing start/end characters. Technically these are + # static but usually start and end separators like space or underscore + # aren't wanted. 
+ result = result.strip(strip) + + return result + + +def get_parsed_export_maps(config): + """Return Export Config's expected output textures with parsed data. + + This tries to parse the texture outputs using a Python API export config. + + Parses template keys: $project, $mesh, $textureSet, $colorSpace, $udim + + Example: + {("DefaultMaterial", ""): { + "$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)": [ + { + // OUTPUT DATA FOR FILE #1 OF THE TEMPLATE + }, + { + // OUTPUT DATA FOR FILE #2 OF THE TEMPLATE + }, + ] + }, + }} + + File output data (all outputs are `str`). + 1) Parsed tokens: These are parsed tokens from the template, they will + only exist if found in the filename template and output filename. + + project: Workfile filename without extension + mesh: Filename of the loaded mesh without extension + textureSet: The texture set, e.g. "DefaultMaterial", + colorSpace: The color space, e.g. "ACES - ACEScg", + udim: The udim tile, e.g. "1001" + + 2) Template output and filepath + + filepath: Full path to the resulting texture map, e.g. + "/path/to/mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png", + output: "mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png" + Note: if template had slashes (folders) then `output` will too. + So `output` might include a folder. + + Returns: + dict: [texture_set, stack]: {template: [file1_data, file2_data]} + + """ + # Import is here to avoid recursive lib <-> colorspace imports + from .colorspace import get_project_channel_data + + outputs = substance_painter.export.list_project_textures(config) + templates = get_export_templates(config, strip_folder=False) + + # Get all color spaces set for the current project + project_colorspaces = set( + data["colorSpace"] for data in get_project_channel_data().values() + ) + + # Get current project mesh path and project path to explicitly match + # the $mesh and $project tokens + project_mesh_path = substance_painter.project.last_imported_mesh_path() + project_path = substance_painter.project.file_path() + + # Get the current export path to strip this of the beginning of filepath + # results, since filename templates don't have these we'll match without + # that part of the filename. 
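# --- Editor's aside, not part of the patch (the function body continues below)
# Usage sketch for get_parsed_export_maps(): once an export config is built,
# the returned structure documented above can be walked like this. `config` is
# assumed to be a valid Python export config for the open project.
def iter_exported_textures(config):
    from openpype.hosts.substancepainter.api.lib import get_parsed_export_maps

    parsed_maps = get_parsed_export_maps(config)
    for (texture_set, stack), templates in parsed_maps.items():
        for template, files in templates.items():
            for parsed in files:
                # `parsed` may also hold "udim", "colorSpace", "textureSet"
                # and similar tokens, depending on the filename template.
                yield texture_set, stack, template, parsed["filepath"]
# ------------------------------------------------------------------------------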
+ export_path = config["exportPath"] + export_path = export_path.replace("\\", "/") + if not export_path.endswith("/"): + export_path += "/" + + # Parse the outputs + result = {} + for key, filepaths in outputs.items(): + texture_set, stack = key + + if stack: + stack_path = f"{texture_set}/{stack}" + else: + stack_path = texture_set + + stack_templates = list(templates[stack_path].keys()) + + template_regex = _templates_to_regex(stack_templates, + texture_set=texture_set, + colorspaces=project_colorspaces, + mesh=project_mesh_path, + project=project_path) + + # Let's precompile the regexes + for template, regex in template_regex.items(): + template_regex[template] = re.compile(regex) + + stack_results = defaultdict(list) + for filepath in sorted(filepaths): + # We strip explicitly using the full parent export path instead of + # using `os.path.basename` because export template is allowed to + # have subfolders in its template which we want to match against + filepath = filepath.replace("\\", "/") + assert filepath.startswith(export_path), ( + f"Filepath {filepath} must start with folder {export_path}" + ) + filename = filepath[len(export_path):] + + for template, regex in template_regex.items(): + match = regex.match(filename) + if match: + parsed = match.groupdict(default={}) + + # Include some special outputs for convenience + parsed["filepath"] = filepath + parsed["output"] = filename + + stack_results[template].append(parsed) + break + else: + raise ValueError(f"Unable to match {filename} against any " + f"template in: {list(template_regex.keys())}") + + result[key] = dict(stack_results) + + return result + + +def load_shelf(path, name=None): + """Add shelf to substance painter (for current application session) + + This will dynamically add a Shelf for the current session. It's good + to note however that these will *not* persist on restart of the host. + + Note: + Consider the loaded shelf a static library of resources. + + The shelf will *not* be visible in application preferences in + Edit > Settings > Libraries. + + The shelf will *not* show in the Assets browser if it has no existing + assets + + The shelf will *not* be a selectable option for selecting it as a + destination to import resources too. + + """ + + # Ensure expanded path with forward slashes + path = os.path.expandvars(path) + path = os.path.abspath(path) + path = path.replace("\\", "/") + + # Path must exist + if not os.path.isdir(path): + raise ValueError(f"Path is not an existing folder: {path}") + + # This name must be unique and must only contain lowercase letters, + # numbers, underscores or hyphens. 
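# --- Editor's aside, not part of the patch (load_shelf continues below) -------
# Usage sketch: this mirrors how SubstanceHost._install_shelves() (pipeline.py
# further below) registers shelves configured in project settings; the path
# and name here are made-up examples and the folder must exist on disk.
from openpype.hosts.substancepainter.api import lib

shelf_name = lib.load_shelf("/studio/substance_shelves/project_assets",
                            name="project assets")
# The registered name is sanitized, e.g. "project assets" -> "project_assets".
# ------------------------------------------------------------------------------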
+ if name is None: + name = os.path.basename(path) + + name = name.lower() + name = re.sub(r"[^a-z0-9_\-]", "_", name) # sanitize to underscores + + if substance_painter.resource.Shelves.exists(name): + shelf = next( + shelf for shelf in substance_painter.resource.Shelves.all() + if shelf.name() == name + ) + if os.path.normpath(shelf.path()) != os.path.normpath(path): + raise ValueError(f"Shelf with name '{name}' already exists " + f"for a different path: '{shelf.path()}") + + return + + print(f"Adding Shelf '{name}' to path: {path}") + substance_painter.resource.Shelves.add(name, path) + + return name + + +def _get_new_project_action(): + """Return QAction which triggers Substance Painter's new project dialog""" + + main_window = substance_painter.ui.get_main_window() + + # Find the file menu's New file action + menubar = main_window.menuBar() + new_action = None + for action in menubar.actions(): + menu = action.menu() + if not menu: + continue + + if menu.objectName() != "file": + continue + + # Find the action with the CTRL+N key sequence + new_action = next(action for action in menu.actions() + if action.shortcut() == QtGui.QKeySequence.New) + break + + return new_action + + +def prompt_new_file_with_mesh(mesh_filepath): + """Prompts the user for a new file using Substance Painter's own dialog. + + This will set the mesh path to load to the given mesh and disables the + dialog box to disallow the user to change the path. This way we can allow + user configuration of a project but set the mesh path ourselves. + + Warning: + This is very hacky and experimental. + + Note: + If a project is currently open using the same mesh filepath it can't + accurately detect whether the user had actually accepted the new project + dialog or whether the project afterwards is still the original project, + for example when the user might have cancelled the operation. 
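# --- Editor's aside, not part of the patch (the docstring continues below) ----
# Hedged usage sketch for this helper: prompt the artist for a new project
# locked to a published mesh and verify the result. The mesh path is a made-up
# example; a falsy return means the user cancelled or the mesh could not be
# confirmed afterwards.
from openpype.hosts.substancepainter.api.lib import prompt_new_file_with_mesh

mesh_path = "/projects/example/publish/model/modelMain/v001/modelMain.fbx"
if not prompt_new_file_with_mesh(mesh_path):
    print("New project was not created with the expected mesh")
# ------------------------------------------------------------------------------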
+ + """ + + app = QtWidgets.QApplication.instance() + assert os.path.isfile(mesh_filepath), \ + f"Mesh filepath does not exist: {mesh_filepath}" + + def _setup_file_dialog(): + """Set filepath in QFileDialog and trigger accept result""" + file_dialog = app.activeModalWidget() + assert isinstance(file_dialog, QtWidgets.QFileDialog) + + # Quickly hide the dialog + file_dialog.hide() + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000) + + file_dialog.setDirectory(os.path.dirname(mesh_filepath)) + url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath)) + file_dialog.selectUrl(url) + + # Give the explorer window time to refresh to the folder and select + # the file + while not file_dialog.selectedFiles(): + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000) + print(f"Selected: {file_dialog.selectedFiles()}") + + # Set it again now we know the path is refreshed - without this + # accepting the dialog will often not trigger the correct filepath + file_dialog.setDirectory(os.path.dirname(mesh_filepath)) + url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath)) + file_dialog.selectUrl(url) + + file_dialog.done(file_dialog.Accepted) + app.processEvents(QtCore.QEventLoop.AllEvents) + + def _setup_prompt(): + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents) + dialog = app.activeModalWidget() + assert dialog.objectName() == "NewProjectDialog" + + # Set the window title + mesh = os.path.basename(mesh_filepath) + dialog.setWindowTitle(f"New Project with mesh: {mesh}") + + # Get the select mesh file button + mesh_select = dialog.findChild(QtWidgets.QPushButton, "meshSelect") + + # Hide the select mesh button to the user to block changing of mesh + mesh_select.setVisible(False) + + # Ensure UI is visually up-to-date + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents) + + # Trigger the 'select file' dialog to set the path and have the + # new file dialog to use the path. 
+ QtCore.QTimer.singleShot(10, _setup_file_dialog) + mesh_select.click() + + app.processEvents(QtCore.QEventLoop.AllEvents, 5000) + + mesh_filename = dialog.findChild(QtWidgets.QFrame, "meshFileName") + mesh_filename_label = mesh_filename.findChild(QtWidgets.QLabel) + if not mesh_filename_label.text(): + dialog.close() + raise RuntimeError(f"Failed to set mesh path: {mesh_filepath}") + + new_action = _get_new_project_action() + if not new_action: + raise RuntimeError("Unable to detect new file action..") + + QtCore.QTimer.singleShot(0, _setup_prompt) + new_action.trigger() + app.processEvents(QtCore.QEventLoop.AllEvents, 5000) + + if not substance_painter.project.is_open(): + return + + # Confirm mesh was set as expected + project_mesh = substance_painter.project.last_imported_mesh_path() + if os.path.normpath(project_mesh) != os.path.normpath(mesh_filepath): + return + + return project_mesh diff --git a/openpype/hosts/substancepainter/api/pipeline.py b/openpype/hosts/substancepainter/api/pipeline.py new file mode 100644 index 0000000000..9406fb8edb --- /dev/null +++ b/openpype/hosts/substancepainter/api/pipeline.py @@ -0,0 +1,427 @@ +# -*- coding: utf-8 -*- +"""Pipeline tools for OpenPype Substance Painter integration.""" +import os +import logging +from functools import partial + +# Substance 3D Painter modules +import substance_painter.ui +import substance_painter.event +import substance_painter.project + +import pyblish.api + +from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost +from openpype.settings import ( + get_current_project_settings, + get_system_settings +) + +from openpype.pipeline.template_data import get_template_data_with_names +from openpype.pipeline import ( + register_creator_plugin_path, + register_loader_plugin_path, + AVALON_CONTAINER_ID, + Anatomy +) +from openpype.lib import ( + StringTemplate, + register_event_callback, + emit_event, +) +from openpype.pipeline.load import any_outdated_containers +from openpype.hosts.substancepainter import SUBSTANCE_HOST_DIR + +from . import lib + +log = logging.getLogger("openpype.hosts.substance") + +PLUGINS_DIR = os.path.join(SUBSTANCE_HOST_DIR, "plugins") +PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") +LOAD_PATH = os.path.join(PLUGINS_DIR, "load") +CREATE_PATH = os.path.join(PLUGINS_DIR, "create") +INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") + +OPENPYPE_METADATA_KEY = "OpenPype" +OPENPYPE_METADATA_CONTAINERS_KEY = "containers" # child key +OPENPYPE_METADATA_CONTEXT_KEY = "context" # child key +OPENPYPE_METADATA_INSTANCES_KEY = "instances" # child key + + +class SubstanceHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): + name = "substancepainter" + + def __init__(self): + super(SubstanceHost, self).__init__() + self._has_been_setup = False + self.menu = None + self.callbacks = [] + self.shelves = [] + + def install(self): + pyblish.api.register_host("substancepainter") + + pyblish.api.register_plugin_path(PUBLISH_PATH) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) + + log.info("Installing callbacks ... ") + # register_event_callback("init", on_init) + self._register_callbacks() + # register_event_callback("before.save", before_save) + # register_event_callback("save", on_save) + register_event_callback("open", on_open) + # register_event_callback("new", on_new) + + log.info("Installing menu ... 
") + self._install_menu() + + project_settings = get_current_project_settings() + self._install_shelves(project_settings) + + self._has_been_setup = True + + def uninstall(self): + self._uninstall_shelves() + self._uninstall_menu() + self._deregister_callbacks() + + def has_unsaved_changes(self): + + if not substance_painter.project.is_open(): + return False + + return substance_painter.project.needs_saving() + + def get_workfile_extensions(self): + return [".spp", ".toc"] + + def save_workfile(self, dst_path=None): + + if not substance_painter.project.is_open(): + return False + + if not dst_path: + dst_path = self.get_current_workfile() + + full_save_mode = substance_painter.project.ProjectSaveMode.Full + substance_painter.project.save_as(dst_path, full_save_mode) + + return dst_path + + def open_workfile(self, filepath): + + if not os.path.exists(filepath): + raise RuntimeError("File does not exist: {}".format(filepath)) + + # We must first explicitly close current project before opening another + if substance_painter.project.is_open(): + substance_painter.project.close() + + substance_painter.project.open(filepath) + return filepath + + def get_current_workfile(self): + if not substance_painter.project.is_open(): + return None + + filepath = substance_painter.project.file_path() + if filepath and filepath.endswith(".spt"): + # When currently in a Substance Painter template assume our + # scene isn't saved. This can be the case directly after doing + # "New project", the path will then be the template used. This + # avoids Workfiles tool trying to save as .spt extension if the + # file hasn't been saved before. + return + + return filepath + + def get_containers(self): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) + if containers: + for key, container in containers.items(): + container["objectName"] = key + yield container + + def update_context_data(self, data, changes): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + metadata.set(OPENPYPE_METADATA_CONTEXT_KEY, data) + + def get_context_data(self): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + return metadata.get(OPENPYPE_METADATA_CONTEXT_KEY) or {} + + def _install_menu(self): + from PySide2 import QtWidgets + from openpype.tools.utils import host_tools + + parent = substance_painter.ui.get_main_window() + + menu = QtWidgets.QMenu("OpenPype") + + action = menu.addAction("Create...") + action.triggered.connect( + lambda: host_tools.show_publisher(parent=parent, + tab="create") + ) + + action = menu.addAction("Load...") + action.triggered.connect( + lambda: host_tools.show_loader(parent=parent, use_context=True) + ) + + action = menu.addAction("Publish...") + action.triggered.connect( + lambda: host_tools.show_publisher(parent=parent, + tab="publish") + ) + + action = menu.addAction("Manage...") + action.triggered.connect( + lambda: host_tools.show_scene_inventory(parent=parent) + ) + + action = menu.addAction("Library...") + action.triggered.connect( + lambda: host_tools.show_library_loader(parent=parent) + ) + + menu.addSeparator() + action = menu.addAction("Work Files...") + action.triggered.connect( + lambda: host_tools.show_workfiles(parent=parent) + ) + + substance_painter.ui.add_menu(menu) + + def on_menu_destroyed(): + 
self.menu = None + + menu.destroyed.connect(on_menu_destroyed) + + self.menu = menu + + def _uninstall_menu(self): + if self.menu: + self.menu.destroy() + self.menu = None + + def _register_callbacks(self): + # Prepare emit event callbacks + open_callback = partial(emit_event, "open") + + # Connect to the Substance Painter events + dispatcher = substance_painter.event.DISPATCHER + for event, callback in [ + (substance_painter.event.ProjectOpened, open_callback) + ]: + dispatcher.connect(event, callback) + # Keep a reference so we can deregister if needed + self.callbacks.append((event, callback)) + + def _deregister_callbacks(self): + for event, callback in self.callbacks: + substance_painter.event.DISPATCHER.disconnect(event, callback) + self.callbacks.clear() + + def _install_shelves(self, project_settings): + + shelves = project_settings["substancepainter"].get("shelves", {}) + if not shelves: + return + + # Prepare formatting data if we detect any path which might have + # template tokens like {asset} in there. + formatting_data = {} + has_formatting_entries = any("{" in path for path in shelves.values()) + if has_formatting_entries: + project_name = self.get_current_project_name() + asset_name = self.get_current_asset_name() + task_name = self.get_current_asset_name() + system_settings = get_system_settings() + formatting_data = get_template_data_with_names(project_name, + asset_name, + task_name, + system_settings) + anatomy = Anatomy(project_name) + formatting_data["root"] = anatomy.roots + + for name, path in shelves.items(): + shelf_name = None + + # Allow formatting with anatomy for the paths + if "{" in path: + path = StringTemplate.format_template(path, formatting_data) + + try: + shelf_name = lib.load_shelf(path, name=name) + except ValueError as exc: + print(f"Failed to load shelf -> {exc}") + + if shelf_name: + self.shelves.append(shelf_name) + + def _uninstall_shelves(self): + for shelf_name in self.shelves: + substance_painter.resource.Shelves.remove(shelf_name) + self.shelves.clear() + + +def on_open(): + log.info("Running callback on open..") + + if any_outdated_containers(): + from openpype.widgets import popup + + log.warning("Scene has outdated content.") + + # Get main window + parent = substance_painter.ui.get_main_window() + if parent is None: + log.info("Skipping outdated content pop-up " + "because Substance window can't be found.") + else: + + # Show outdated pop-up + def _on_show_inventory(): + from openpype.tools.utils import host_tools + host_tools.show_scene_inventory(parent=parent) + + dialog = popup.Popup(parent=parent) + dialog.setWindowTitle("Substance scene has outdated content") + dialog.setMessage("There are outdated containers in " + "your Substance scene.") + dialog.on_clicked.connect(_on_show_inventory) + dialog.show() + + +def imprint_container(container, + name, + namespace, + context, + loader): + """Imprint a loaded container with metadata. + + Containerisation enables a tracking of version, author and origin + for loaded assets. + + Arguments: + container (dict): The (substance metadata) dictionary to imprint into. + name (str): Name of resulting assembly + namespace (str): Namespace under which to host container + context (dict): Asset information + loader (load.LoaderPlugin): loader instance used to produce container. 
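# --- Editor's aside, not part of the patch (the docstring continues below) ----
# Sketch of how a Substance load plugin could use this helper together with
# set_container_metadata(), defined further below. `context`, `name`,
# `namespace` and `self` stand in for a loader's usual load() arguments and
# are assumptions here.
container = {}
imprint_container(container,
                  name=name,
                  namespace=namespace,
                  context=context,
                  loader=self)
set_container_metadata(object_name=name, container_data=container)
# ------------------------------------------------------------------------------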
+ + Returns: + None + + """ + + data = [ + ("schema", "openpype:container-2.0"), + ("id", AVALON_CONTAINER_ID), + ("name", str(name)), + ("namespace", str(namespace) if namespace else None), + ("loader", str(loader.__class__.__name__)), + ("representation", str(context["representation"]["_id"])), + ] + for key, value in data: + container[key] = value + + +def set_container_metadata(object_name, container_data, update=False): + """Helper method to directly set the data for a specific container + + Args: + object_name (str): The unique object name identifier for the container + container_data (dict): The data for the container. + Note 'objectName' data is derived from `object_name` and key in + `container_data` will be ignored. + update (bool): Whether to only update the dict data. + + """ + # The objectName is derived from the key in the metadata so won't be stored + # in the metadata in the container's data. + container_data.pop("objectName", None) + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) or {} + if update: + existing_data = containers.setdefault(object_name, {}) + existing_data.update(container_data) # mutable dict, in-place update + else: + containers[object_name] = container_data + metadata.set("containers", containers) + + +def remove_container_metadata(object_name): + """Helper method to remove the data for a specific container""" + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) + if containers: + containers.pop(object_name, None) + metadata.set("containers", containers) + + +def set_instance(instance_id, instance_data, update=False): + """Helper method to directly set the data for a specific container + + Args: + instance_id (str): Unique identifier for the instance + instance_data (dict): The instance data to store in the metaadata. + """ + set_instances({instance_id: instance_data}, update=update) + + +def set_instances(instance_data_by_id, update=False): + """Store data for multiple instances at the same time. + + This is more optimal than querying and setting them in the metadata one + by one. 
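# --- Editor's aside, not part of the patch (the docstring continues below) ----
# Minimal round-trip sketch for the instance metadata helpers in this module
# (get_instances_by_id and remove_instance are defined just below); the
# instance ids and data are made-up examples and require an open project.
set_instances({
    "instance-id-a": {"family": "textureSet", "subset": "textureSetMain"},
    "instance-id-b": {"family": "textureSet", "subset": "textureSetLook"},
}, update=True)

stored = get_instances_by_id()    # {"instance-id-a": {...}, "instance-id-b": {...}}
remove_instance("instance-id-b")  # drop a single instance again
# ------------------------------------------------------------------------------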
+ """ + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {} + + for instance_id, instance_data in instance_data_by_id.items(): + if update: + existing_data = instances.get(instance_id, {}) + existing_data.update(instance_data) + else: + instances[instance_id] = instance_data + + metadata.set("instances", instances) + + +def remove_instance(instance_id): + """Helper method to remove the data for a specific container""" + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {} + instances.pop(instance_id, None) + metadata.set("instances", instances) + + +def get_instances_by_id(): + """Return all instances stored in the project instances metadata""" + if not substance_painter.project.is_open(): + return {} + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + return metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {} + + +def get_instances(): + """Return all instances stored in the project instances as a list""" + return list(get_instances_by_id().values()) diff --git a/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py b/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py new file mode 100644 index 0000000000..e7e1849546 --- /dev/null +++ b/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py @@ -0,0 +1,36 @@ + + +def cleanup_openpype_qt_widgets(): + """ + Workaround for Substance failing to shut down correctly + when a Qt window was still open at the time of shutting down. + + This seems to work sometimes, but not all the time. + + """ + # TODO: Create a more reliable method to close down all OpenPype Qt widgets + from PySide2 import QtWidgets + import substance_painter.ui + + # Kill OpenPype Qt widgets + print("Killing OpenPype Qt widgets..") + for widget in QtWidgets.QApplication.topLevelWidgets(): + if widget.__module__.startswith("openpype."): + print(f"Deleting widget: {widget.__class__.__name__}") + substance_painter.ui.delete_ui_element(widget) + + +def start_plugin(): + from openpype.pipeline import install_host + from openpype.hosts.substancepainter.api import SubstanceHost + install_host(SubstanceHost()) + + +def close_plugin(): + from openpype.pipeline import uninstall_host + cleanup_openpype_qt_widgets() + uninstall_host() + + +if __name__ == "__main__": + start_plugin() diff --git a/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py b/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py new file mode 100644 index 0000000000..04b610b4df --- /dev/null +++ b/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py @@ -0,0 +1,43 @@ +"""Ease the OpenPype on-boarding process by loading the plug-in on first run""" + +OPENPYPE_PLUGIN_NAME = "openpype_plugin" + + +def start_plugin(): + try: + # This isn't exposed in the official API so we keep it in a try-except + from painter_plugins_ui import ( + get_settings, + LAUNCH_AT_START_KEY, + ON_STATE, + PLUGINS_MENU, + plugin_manager + ) + + # The `painter_plugins_ui` plug-in itself is also a startup plug-in + # we need to take into account that it could run either earlier or + # later than this startup script, we check whether its menu initialized + is_before_plugins_menu = PLUGINS_MENU is None + + settings = get_settings(OPENPYPE_PLUGIN_NAME) + if settings.value(LAUNCH_AT_START_KEY, None) is None: + print("Initializing OpenPype plug-in on first 
run...") + if is_before_plugins_menu: + print("- running before 'painter_plugins_ui'") + # Delay the launch to the painter_plugins_ui initialization + settings.setValue(LAUNCH_AT_START_KEY, ON_STATE) + else: + # Launch now + print("- running after 'painter_plugins_ui'") + plugin_manager(OPENPYPE_PLUGIN_NAME)(True) + + # Set the checked state in the menu to avoid confusion + action = next(action for action in PLUGINS_MENU._menu.actions() + if action.text() == OPENPYPE_PLUGIN_NAME) + if action is not None: + action.blockSignals(True) + action.setChecked(True) + action.blockSignals(False) + + except Exception as exc: + print(exc) diff --git a/openpype/hosts/substancepainter/plugins/create/create_textures.py b/openpype/hosts/substancepainter/plugins/create/create_textures.py new file mode 100644 index 0000000000..dece4b2cc1 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/create/create_textures.py @@ -0,0 +1,162 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating textures.""" + +from openpype.pipeline import CreatedInstance, Creator, CreatorError +from openpype.lib import ( + EnumDef, + UILabelDef, + NumberDef, + BoolDef +) + +from openpype.hosts.substancepainter.api.pipeline import ( + get_instances, + set_instance, + set_instances, + remove_instance +) +from openpype.hosts.substancepainter.api.lib import get_export_presets + +import substance_painter.project + + +class CreateTextures(Creator): + """Create a texture set.""" + identifier = "io.openpype.creators.substancepainter.textureset" + label = "Textures" + family = "textureSet" + icon = "picture-o" + + default_variant = "Main" + + def create(self, subset_name, instance_data, pre_create_data): + + if not substance_painter.project.is_open(): + raise CreatorError("Can't create a Texture Set instance without " + "an open project.") + + instance = self.create_instance_in_context(subset_name, + instance_data) + set_instance( + instance_id=instance["instance_id"], + instance_data=instance.data_to_store() + ) + + def collect_instances(self): + for instance in get_instances(): + if (instance.get("creator_identifier") == self.identifier or + instance.get("family") == self.family): + self.create_instance_in_context_from_existing(instance) + + def update_instances(self, update_list): + instance_data_by_id = {} + for instance, _changes in update_list: + # Persist the data + instance_id = instance.get("instance_id") + instance_data = instance.data_to_store() + instance_data_by_id[instance_id] = instance_data + set_instances(instance_data_by_id, update=True) + + def remove_instances(self, instances): + for instance in instances: + remove_instance(instance["instance_id"]) + self._remove_instance_from_context(instance) + + # Helper methods (this might get moved into Creator class) + def create_instance_in_context(self, subset_name, data): + instance = CreatedInstance( + self.family, subset_name, data, self + ) + self.create_context.creator_adds_instance(instance) + return instance + + def create_instance_in_context_from_existing(self, data): + instance = CreatedInstance.from_existing(data, self) + self.create_context.creator_adds_instance(instance) + return instance + + def get_instance_attr_defs(self): + + return [ + EnumDef("exportPresetUrl", + items=get_export_presets(), + label="Output Template"), + BoolDef("allowSkippedMaps", + label="Allow Skipped Output Maps", + tooltip="When enabled this allows the publish to ignore " + "output maps in the used output template if one " + "or more maps are skipped due to the required " + 
"channels not being present in the current file.", + default=True), + EnumDef("exportFileFormat", + items={ + None: "Based on output template", + # TODO: Get available extensions from substance API + "bmp": "bmp", + "ico": "ico", + "jpeg": "jpeg", + "jng": "jng", + "pbm": "pbm", + "pgm": "pgm", + "png": "png", + "ppm": "ppm", + "tga": "targa", + "tif": "tiff", + "wap": "wap", + "wbmp": "wbmp", + "xpm": "xpm", + "gif": "gif", + "hdr": "hdr", + "exr": "exr", + "j2k": "j2k", + "jp2": "jp2", + "pfm": "pfm", + "webp": "webp", + # TODO: Unsure why jxr format fails to export + # "jxr": "jpeg-xr", + # TODO: File formats that combine the exported textures + # like psd are not correctly supported due to + # publishing only a single file + # "psd": "psd", + # "sbsar": "sbsar", + }, + default=None, + label="File type"), + EnumDef("exportSize", + items={ + None: "Based on each Texture Set's size", + # The key is size of the texture file in log2. + # (i.e. 10 means 2^10 = 1024) + 7: "128", + 8: "256", + 9: "512", + 10: "1024", + 11: "2048", + 12: "4096" + }, + default=None, + label="Size"), + + EnumDef("exportPadding", + items={ + "passthrough": "No padding (passthrough)", + "infinite": "Dilation infinite", + "transparent": "Dilation + transparent", + "color": "Dilation + default background color", + "diffusion": "Dilation + diffusion" + }, + default="infinite", + label="Padding"), + NumberDef("exportDilationDistance", + minimum=0, + maximum=256, + decimals=0, + default=16, + label="Dilation Distance"), + UILabelDef("*only used with " + "'Dilation + ' padding"), + ] + + def get_pre_create_attr_defs(self): + # Use same attributes as for instance attributes + return self.get_instance_attr_defs() diff --git a/openpype/hosts/substancepainter/plugins/create/create_workfile.py b/openpype/hosts/substancepainter/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d7f31f9dcf --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/create/create_workfile.py @@ -0,0 +1,101 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating workfiles.""" + +from openpype.pipeline import CreatedInstance, AutoCreator +from openpype.client import get_asset_by_name + +from openpype.hosts.substancepainter.api.pipeline import ( + set_instances, + set_instance, + get_instances +) + +import substance_painter.project + + +class CreateWorkfile(AutoCreator): + """Workfile auto-creator.""" + identifier = "io.openpype.creators.substancepainter.workfile" + label = "Workfile" + family = "workfile" + icon = "document" + + default_variant = "Main" + + def create(self): + + if not substance_painter.project.is_open(): + return + + variant = self.default_variant + project_name = self.project_name + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name + + # Workfile instance should always exist and must only exist once. + # As such we'll first check if it already exists and is collected. 
+ current_instance = next( + ( + instance for instance in self.create_context.instances + if instance.creator_identifier == self.identifier + ), None) + + if current_instance is None: + self.log.info("Auto-creating workfile instance...") + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + data = { + "asset": asset_name, + "task": task_name, + "variant": variant + } + current_instance = self.create_instance_in_context(subset_name, + data) + elif ( + current_instance["asset"] != asset_name + or current_instance["task"] != task_name + ): + # Update instance context if is not the same + asset_doc = get_asset_by_name(project_name, asset_name) + subset_name = self.get_subset_name( + variant, task_name, asset_doc, project_name, host_name + ) + current_instance["asset"] = asset_name + current_instance["task"] = task_name + current_instance["subset"] = subset_name + + set_instance( + instance_id=current_instance.get("instance_id"), + instance_data=current_instance.data_to_store() + ) + + def collect_instances(self): + for instance in get_instances(): + if (instance.get("creator_identifier") == self.identifier or + instance.get("family") == self.family): + self.create_instance_in_context_from_existing(instance) + + def update_instances(self, update_list): + instance_data_by_id = {} + for instance, _changes in update_list: + # Persist the data + instance_id = instance.get("instance_id") + instance_data = instance.data_to_store() + instance_data_by_id[instance_id] = instance_data + set_instances(instance_data_by_id, update=True) + + # Helper methods (this might get moved into Creator class) + def create_instance_in_context(self, subset_name, data): + instance = CreatedInstance( + self.family, subset_name, data, self + ) + self.create_context.creator_adds_instance(instance) + return instance + + def create_instance_in_context_from_existing(self, data): + instance = CreatedInstance.from_existing(data, self) + self.create_context.creator_adds_instance(instance) + return instance diff --git a/openpype/hosts/substancepainter/plugins/load/load_mesh.py b/openpype/hosts/substancepainter/plugins/load/load_mesh.py new file mode 100644 index 0000000000..822095641d --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/load/load_mesh.py @@ -0,0 +1,124 @@ +from openpype.pipeline import ( + load, + get_representation_path, +) +from openpype.pipeline.load import LoadError +from openpype.hosts.substancepainter.api.pipeline import ( + imprint_container, + set_container_metadata, + remove_container_metadata +) +from openpype.hosts.substancepainter.api.lib import prompt_new_file_with_mesh + +import substance_painter.project +import qargparse + + +class SubstanceLoadProjectMesh(load.LoaderPlugin): + """Load mesh for project""" + + families = ["*"] + representations = ["abc", "fbx", "obj", "gltf"] + + label = "Load mesh" + order = -10 + icon = "code-fork" + color = "orange" + + options = [ + qargparse.Boolean( + "preserve_strokes", + default=True, + help="Preserve strokes positions on mesh.\n" + "(only relevant when loading into existing project)" + ), + qargparse.Boolean( + "import_cameras", + default=True, + help="Import cameras from the mesh file." 
+ ) + ] + + def load(self, context, name, namespace, data): + + # Get user inputs + import_cameras = data.get("import_cameras", True) + preserve_strokes = data.get("preserve_strokes", True) + + if not substance_painter.project.is_open(): + # Allow to 'initialize' a new project + result = prompt_new_file_with_mesh(mesh_filepath=self.fname) + if not result: + self.log.info("User cancelled new project prompt.") + return + + else: + # Reload the mesh + settings = substance_painter.project.MeshReloadingSettings( + import_cameras=import_cameras, + preserve_strokes=preserve_strokes + ) + + def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus): # noqa + if status == substance_painter.project.ReloadMeshStatus.SUCCESS: # noqa + self.log.info("Reload succeeded") + else: + raise LoadError("Reload of mesh failed") + + path = self.fname + substance_painter.project.reload_mesh(path, + settings, + on_mesh_reload) + + # Store container + container = {} + project_mesh_object_name = "_ProjectMesh_" + imprint_container(container, + name=project_mesh_object_name, + namespace=project_mesh_object_name, + context=context, + loader=self) + + # We want store some options for updating to keep consistent behavior + # from the user's original choice. We don't store 'preserve_strokes' + # as we always preserve strokes on updates. + container["options"] = { + "import_cameras": import_cameras, + } + + set_container_metadata(project_mesh_object_name, container) + + def switch(self, container, representation): + self.update(container, representation) + + def update(self, container, representation): + + path = get_representation_path(representation) + + # Reload the mesh + container_options = container.get("options", {}) + settings = substance_painter.project.MeshReloadingSettings( + import_cameras=container_options.get("import_cameras", True), + preserve_strokes=True + ) + + def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus): + if status == substance_painter.project.ReloadMeshStatus.SUCCESS: + self.log.info("Reload succeeded") + else: + raise LoadError("Reload of mesh failed") + + substance_painter.project.reload_mesh(path, settings, on_mesh_reload) + + # Update container representation + object_name = container["objectName"] + update_data = {"representation": str(representation["_id"])} + set_container_metadata(object_name, update_data, update=True) + + def remove(self, container): + + # Remove OpenPype related settings about what model was loaded + # or close the project? + # TODO: This is likely best 'hidden' away to the user because + # this will leave the project's mesh unmanaged. 
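For reference (not part of the patch), the container written by this loader ends up stored under the project metadata roughly in the shape below; all values are illustrative placeholders.

```python
# Illustrative shape of the entry written by set_container_metadata()
# for this loader (placeholder values).
from openpype.pipeline import AVALON_CONTAINER_ID

container = {
    "schema": "openpype:container-2.0",
    "id": AVALON_CONTAINER_ID,
    "name": "_ProjectMesh_",
    "namespace": "_ProjectMesh_",
    "loader": "SubstanceLoadProjectMesh",
    "representation": "<representation id>",  # placeholder
    "options": {"import_cameras": True},      # persisted for update()
}
```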
+ remove_container_metadata(container["objectName"]) diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py b/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py new file mode 100644 index 0000000000..9a37eb0d1c --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py @@ -0,0 +1,17 @@ +import pyblish.api + +from openpype.pipeline import registered_host + + +class CollectCurrentFile(pyblish.api.ContextPlugin): + """Inject the current working file into context""" + + order = pyblish.api.CollectorOrder - 0.49 + label = "Current Workfile" + hosts = ["substancepainter"] + + def process(self, context): + host = registered_host() + path = host.get_current_workfile() + context.data["currentFile"] = path + self.log.debug(f"Current workfile: {path}") diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py new file mode 100644 index 0000000000..d11abd1019 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py @@ -0,0 +1,196 @@ +import os +import copy +import pyblish.api + +from openpype.pipeline import publish + +import substance_painter.textureset +from openpype.hosts.substancepainter.api.lib import ( + get_parsed_export_maps, + strip_template +) +from openpype.pipeline.create import get_subset_name +from openpype.client import get_asset_by_name + + +class CollectTextureSet(pyblish.api.InstancePlugin): + """Extract Textures using an output template config""" + # TODO: Production-test usage of color spaces + # TODO: Detect what source data channels end up in each file + + label = "Collect Texture Set images" + hosts = ["substancepainter"] + families = ["textureSet"] + order = pyblish.api.CollectorOrder + + def process(self, instance): + + config = self.get_export_config(instance) + asset_doc = get_asset_by_name( + project_name=instance.context.data["projectName"], + asset_name=instance.data["asset"] + ) + + instance.data["exportConfig"] = config + maps = get_parsed_export_maps(config) + + # Let's break the instance into multiple instances to integrate + # a subset per generated texture or texture UDIM sequence + for (texture_set_name, stack_name), template_maps in maps.items(): + self.log.info(f"Processing {texture_set_name}/{stack_name}") + for template, outputs in template_maps.items(): + self.log.info(f"Processing {template}") + self.create_image_instance(instance, template, outputs, + asset_doc=asset_doc, + texture_set_name=texture_set_name, + stack_name=stack_name) + + def create_image_instance(self, instance, template, outputs, + asset_doc, texture_set_name, stack_name): + """Create a new instance per image or UDIM sequence. + + The new instances will be of family `image`. + + """ + + context = instance.context + first_filepath = outputs[0]["filepath"] + fnames = [os.path.basename(output["filepath"]) for output in outputs] + ext = os.path.splitext(first_filepath)[1] + assert ext.lstrip("."), f"No extension: {ext}" + + always_include_texture_set_name = False # todo: make this configurable + all_texture_sets = substance_painter.textureset.all_texture_sets() + texture_set = substance_painter.textureset.TextureSet.from_name( + texture_set_name + ) + + # Define the suffix we want to give this particular texture + # set and set up a remapped subset naming for it. 
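As a worked example (not part of the patch) of the suffix logic that follows, using hypothetical names; the final subset name additionally depends on the project's subset name template.

```python
# Mirrors the suffix composition below (hypothetical example values).
texture_set_name = "Body"
stack_name = ""               # empty for non-layered materials
map_identifier = "BaseColor"  # result of strip_template(template)
multiple_texture_sets = True

suffix = ""
if multiple_texture_sets:
    suffix += f".{texture_set_name}"
if stack_name:
    suffix += f".{stack_name}"
suffix += f".{map_identifier}"

variant = "Main" + suffix
print(variant)  # Main.Body.BaseColor
# With a typical "{family}{Variant}" style subset template the resulting
# subset would read like "textureMain.Body.BaseColor".
```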
+ suffix = "" + if always_include_texture_set_name or len(all_texture_sets) > 1: + # More than one texture set, include texture set name + suffix += f".{texture_set_name}" + if texture_set.is_layered_material() and stack_name: + # More than one stack, include stack name + suffix += f".{stack_name}" + + # Always include the map identifier + map_identifier = strip_template(template) + suffix += f".{map_identifier}" + + image_subset = get_subset_name( + # TODO: The family actually isn't 'texture' currently but for now + # this is only done so the subset name starts with 'texture' + family="texture", + variant=instance.data["variant"] + suffix, + task_name=instance.data.get("task"), + asset_doc=asset_doc, + project_name=context.data["projectName"], + host_name=context.data["hostName"], + project_settings=context.data["project_settings"] + ) + + # Prepare representation + representation = { + "name": ext.lstrip("."), + "ext": ext.lstrip("."), + "files": fnames if len(fnames) > 1 else fnames[0], + } + + # Mark as UDIM explicitly if it has UDIM tiles. + if bool(outputs[0].get("udim")): + # The representation for a UDIM sequence should have a `udim` key + # that is a list of all udim tiles (str) like: ["1001", "1002"] + # strings. See CollectTextures plug-in and Integrators. + representation["udim"] = [output["udim"] for output in outputs] + + # Set up the representation for thumbnail generation + # TODO: Simplify this once thumbnail extraction is refactored + staging_dir = os.path.dirname(first_filepath) + representation["tags"] = ["review"] + representation["stagingDir"] = staging_dir + + # Clone the instance + image_instance = context.create_instance(image_subset) + image_instance[:] = instance[:] + image_instance.data.update(copy.deepcopy(instance.data)) + image_instance.data["name"] = image_subset + image_instance.data["label"] = image_subset + image_instance.data["subset"] = image_subset + image_instance.data["family"] = "image" + image_instance.data["families"] = ["image", "textures"] + image_instance.data["representations"] = [representation] + + # Group the textures together in the loader + image_instance.data["subsetGroup"] = instance.data["subset"] + + # Store the texture set name and stack name on the instance + image_instance.data["textureSetName"] = texture_set_name + image_instance.data["textureStackName"] = stack_name + + # Store color space with the instance + # Note: The extractor will assign it to the representation + colorspace = outputs[0].get("colorSpace") + if colorspace: + self.log.debug(f"{image_subset} colorspace: {colorspace}") + image_instance.data["colorspace"] = colorspace + + # Store the instance in the original instance as a member + instance.append(image_instance) + + def get_export_config(self, instance): + """Return an export configuration dict for texture exports. + + This config can be supplied to: + - `substance_painter.export.export_project_textures` + - `substance_painter.export.list_project_textures` + + See documentation on substance_painter.export module about the + formatting of the configuration dictionary. + + Args: + instance (pyblish.api.Instance): Texture Set instance to be + published. 
+ + Returns: + dict: Export config + + """ + + creator_attrs = instance.data["creator_attributes"] + preset_url = creator_attrs["exportPresetUrl"] + self.log.debug(f"Exporting using preset: {preset_url}") + + # See: https://substance3d.adobe.com/documentation/ptpy/api/substance_painter/export # noqa + config = { # noqa + "exportShaderParams": True, + "exportPath": publish.get_instance_staging_dir(instance), + "defaultExportPreset": preset_url, + + # Custom overrides to the exporter + "exportParameters": [ + { + "parameters": { + "fileFormat": creator_attrs["exportFileFormat"], + "sizeLog2": creator_attrs["exportSize"], + "paddingAlgorithm": creator_attrs["exportPadding"], + "dilationDistance": creator_attrs["exportDilationDistance"] # noqa + } + } + ] + } + + # Create the list of Texture Sets to export. + config["exportList"] = [] + for texture_set in substance_painter.textureset.all_texture_sets(): + config["exportList"].append({"rootPath": texture_set.name()}) + + # Consider None values from the creator attributes optionals + for override in config["exportParameters"]: + parameters = override.get("parameters") + for key, value in dict(parameters).items(): + if value is None: + parameters.pop(key) + + return config diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py b/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py new file mode 100644 index 0000000000..8d98d0b014 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py @@ -0,0 +1,26 @@ +import os +import pyblish.api + + +class CollectWorkfileRepresentation(pyblish.api.InstancePlugin): + """Create a publish representation for the current workfile instance.""" + + order = pyblish.api.CollectorOrder + label = "Workfile representation" + hosts = ["substancepainter"] + families = ["workfile"] + + def process(self, instance): + + context = instance.context + current_file = context.data["currentFile"] + + folder, file = os.path.split(current_file) + filename, ext = os.path.splitext(file) + + instance.data["representations"] = [{ + "name": ext.lstrip("."), + "ext": ext.lstrip("."), + "files": file, + "stagingDir": folder, + }] diff --git a/openpype/hosts/substancepainter/plugins/publish/extract_textures.py b/openpype/hosts/substancepainter/plugins/publish/extract_textures.py new file mode 100644 index 0000000000..bb6f15ead9 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/extract_textures.py @@ -0,0 +1,62 @@ +import substance_painter.export + +from openpype.pipeline import KnownPublishError, publish + + +class ExtractTextures(publish.Extractor, + publish.ColormanagedPyblishPluginMixin): + """Extract Textures using an output template config. + + Note: + This Extractor assumes that `collect_textureset_images` has prepared + the relevant export config and has also collected the individual image + instances for publishing including its representation. That is why this + particular Extractor doesn't specify representations to integrate. 
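A minimal sketch (not part of the patch) of previewing which files such a config would generate via the second API entry point named above; the return shape is assumed to be keyed by (texture set, stack) like `result.textures` in the extractor below, and the path and preset URL are placeholders.

```python
# Sketch only: preview the output file list for an export config without
# actually exporting (return shape assumed analogous to result.textures).
import substance_painter.export
import substance_painter.textureset

config = {
    "exportShaderParams": True,
    "exportPath": "/tmp/preview",           # placeholder
    "defaultExportPreset": "<preset url>",  # placeholder
    "exportList": [
        {"rootPath": texture_set.name()}
        for texture_set in substance_painter.textureset.all_texture_sets()
    ],
}

for key, filepaths in substance_painter.export.list_project_textures(config).items():
    print(key, filepaths)
```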
+ + """ + + label = "Extract Texture Set" + hosts = ["substancepainter"] + families = ["textureSet"] + + # Run before thumbnail extractors + order = publish.Extractor.order - 0.1 + + def process(self, instance): + + config = instance.data["exportConfig"] + result = substance_painter.export.export_project_textures(config) + + if result.status != substance_painter.export.ExportStatus.Success: + raise KnownPublishError( + "Failed to export texture set: {}".format(result.message) + ) + + # Log what files we generated + for (texture_set_name, stack_name), maps in result.textures.items(): + # Log our texture outputs + self.log.info(f"Exported stack: {texture_set_name} {stack_name}") + for texture_map in maps: + self.log.info(f"Exported texture: {texture_map}") + + # We'll insert the color space data for each image instance that we + # added into this texture set. The collector couldn't do so because + # some anatomy and other instance data needs to be collected prior + context = instance.context + for image_instance in instance: + representation = next(iter(image_instance.data["representations"])) + + colorspace = image_instance.data.get("colorspace") + if not colorspace: + self.log.debug("No color space data present for instance: " + f"{image_instance}") + continue + + self.set_representation_colorspace(representation, + context=context, + colorspace=colorspace) + + # The TextureSet instance should not be integrated. It generates no + # output data. Instead the separated texture instances are generated + # from it which themselves integrate into the database. + instance.data["integrate"] = False diff --git a/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py b/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py new file mode 100644 index 0000000000..b45d66fbb1 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py @@ -0,0 +1,23 @@ +import pyblish.api + +from openpype.lib import version_up +from openpype.pipeline import registered_host + + +class IncrementWorkfileVersion(pyblish.api.ContextPlugin): + """Increment current workfile version.""" + + order = pyblish.api.IntegratorOrder + 1 + label = "Increment Workfile Version" + optional = True + hosts = ["substancepainter"] + + def process(self, context): + + assert all(result["success"] for result in context.data["results"]), ( + "Publishing not successful so version is not increased.") + + host = registered_host() + path = context.data["currentFile"] + self.log.info(f"Incrementing current workfile to: {path}") + host.save_workfile(version_up(path)) diff --git a/openpype/hosts/substancepainter/plugins/publish/save_workfile.py b/openpype/hosts/substancepainter/plugins/publish/save_workfile.py new file mode 100644 index 0000000000..9662f31922 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/save_workfile.py @@ -0,0 +1,28 @@ +import pyblish.api + +from openpype.pipeline import ( + registered_host, + KnownPublishError +) + + +class SaveCurrentWorkfile(pyblish.api.ContextPlugin): + """Save current workfile""" + + label = "Save current workfile" + order = pyblish.api.ExtractorOrder - 0.49 + hosts = ["substancepainter"] + + def process(self, context): + + host = registered_host() + current = host.get_current_workfile() + if context.data["currentFile"] != current: + raise KnownPublishError("Workfile has changed during publishing!") + + if host.has_unsaved_changes(): + self.log.info("Saving current file: {}".format(current)) + host.save_workfile() + else: + 
self.log.debug("Skipping workfile save because there are no " + "unsaved changes.") diff --git a/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py b/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py new file mode 100644 index 0000000000..b57cf4c5a2 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py @@ -0,0 +1,109 @@ +import copy +import os + +import pyblish.api + +import substance_painter.export + +from openpype.pipeline import PublishValidationError + + +class ValidateOutputMaps(pyblish.api.InstancePlugin): + """Validate all output maps for Output Template are generated. + + Output maps will be skipped by Substance Painter if it is an output + map in the Substance Output Template which uses channels that the current + substance painter project has not painted or generated. + + """ + + order = pyblish.api.ValidatorOrder + label = "Validate output maps" + hosts = ["substancepainter"] + families = ["textureSet"] + + def process(self, instance): + + config = instance.data["exportConfig"] + + # Substance Painter API does not allow to query the actual output maps + # it will generate without actually exporting the files. So we try to + # generate the smallest size / fastest export as possible + config = copy.deepcopy(config) + parameters = config["exportParameters"][0]["parameters"] + parameters["sizeLog2"] = [1, 1] # output 2x2 images (smallest) + parameters["paddingAlgorithm"] = "passthrough" # no dilation (faster) + parameters["dithering"] = False # no dithering (faster) + + result = substance_painter.export.export_project_textures(config) + if result.status != substance_painter.export.ExportStatus.Success: + raise PublishValidationError( + "Failed to export texture set: {}".format(result.message) + ) + + generated_files = set() + for texture_maps in result.textures.values(): + for texture_map in texture_maps: + generated_files.add(os.path.normpath(texture_map)) + # Directly clean up our temporary export + os.remove(texture_map) + + creator_attributes = instance.data.get("creator_attributes", {}) + allow_skipped_maps = creator_attributes.get("allowSkippedMaps", True) + error_report_missing = [] + for image_instance in instance: + + # Confirm whether the instance has its expected files generated. + # We assume there's just one representation and that it is + # the actual texture representation from the collector. + representation = next(iter(image_instance.data["representations"])) + staging_dir = representation["stagingDir"] + filenames = representation["files"] + if not isinstance(filenames, (list, tuple)): + # Convert single file to list + filenames = [filenames] + + missing = [] + for filename in filenames: + filepath = os.path.join(staging_dir, filename) + filepath = os.path.normpath(filepath) + if filepath not in generated_files: + self.log.warning(f"Missing texture: {filepath}") + missing.append(filepath) + + if not missing: + continue + + if allow_skipped_maps: + # TODO: This is changing state on the instance's which + # should not be done during validation. + self.log.warning(f"Disabling texture instance: " + f"{image_instance}") + image_instance.data["active"] = False + image_instance.data["integrate"] = False + representation.setdefault("tags", []).append("delete") + continue + else: + error_report_missing.append((image_instance, missing)) + + if error_report_missing: + + message = ( + "The Texture Set skipped exporting some output maps which are " + "defined in the Output Template. 
This happens if the Output " + "Templates exports maps from channels which you do not " + "have in your current Substance Painter project.\n\n" + "To allow this enable the *Allow Skipped Output Maps* setting " + "on the instance.\n\n" + f"Instance {instance} skipped exporting output maps:\n" + "" + ) + + for image_instance, missing in error_report_missing: + missing_str = ", ".join(missing) + message += f"- **{image_instance}** skipped: {missing_str}\n" + + raise PublishValidationError( + message=message, + title="Missing output maps" + ) diff --git a/openpype/hosts/unreal/README.md b/openpype/hosts/unreal/README.md index 0a69b9e0cf..d131105659 100644 --- a/openpype/hosts/unreal/README.md +++ b/openpype/hosts/unreal/README.md @@ -4,6 +4,6 @@ Supported Unreal Engine version is 4.26+ (mainly because of major Python changes ### Project naming Unreal doesn't support project names starting with non-alphabetic character. So names like `123_myProject` are -invalid. If OpenPype detects such name it automatically prepends letter **P** to make it valid name, so `123_myProject` +invalid. If Ayon detects such name it automatically prepends letter **P** to make it valid name, so `123_myProject` will become `P123_myProject`. There is also soft-limit on project name length to be shorter than 20 characters. -Longer names will issue warning in Unreal Editor that there might be possible side effects. \ No newline at end of file +Longer names will issue warning in Unreal Editor that there might be possible side effects. diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py index 24e2db975d..1119b5c16c 100644 --- a/openpype/hosts/unreal/addon.py +++ b/openpype/hosts/unreal/addon.py @@ -1,5 +1,5 @@ import os -from openpype.modules import OpenPypeModule, IHostAddon +from openpype.modules import IHostAddon, OpenPypeModule UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) @@ -13,15 +13,27 @@ class UnrealAddon(OpenPypeModule, IHostAddon): def add_implementation_envs(self, env, app): """Modify environments to contain all required for implementation.""" - # Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation + # Set AYON_UNREAL_PLUGIN required for Unreal implementation + # Imports are in this method for Python 2 compatiblity of an addon + from pathlib import Path - ue_plugin = "UE_5.0" if app.name[:1] == "5" else "UE_4.7" + from .lib import get_compatible_integration + + ue_version = app.name.replace("-", ".") unreal_plugin_path = os.path.join( - UNREAL_ROOT_DIR, "integration", ue_plugin, "OpenPype" + UNREAL_ROOT_DIR, "integration", "UE_{}".format(ue_version), "Ayon" ) - if not env.get("OPENPYPE_UNREAL_PLUGIN") or \ - env.get("OPENPYPE_UNREAL_PLUGIN") != unreal_plugin_path: - env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path + if not Path(unreal_plugin_path).exists(): + compatible_versions = get_compatible_integration( + ue_version, Path(UNREAL_ROOT_DIR) / "integration" + ) + if compatible_versions: + unreal_plugin_path = compatible_versions[-1] / "Ayon" + unreal_plugin_path = unreal_plugin_path.as_posix() + + if not env.get("AYON_UNREAL_PLUGIN") or \ + env.get("AYON_UNREAL_PLUGIN") != unreal_plugin_path: + env["AYON_UNREAL_PLUGIN"] = unreal_plugin_path # Set default environments if are not set via settings defaults = { diff --git a/openpype/hosts/unreal/api/__init__.py b/openpype/hosts/unreal/api/__init__.py index 2618a7677c..de0fce13d5 100644 --- a/openpype/hosts/unreal/api/__init__.py +++ b/openpype/hosts/unreal/api/__init__.py @@ -1,5 +1,5 @@ # -*- coding: utf-8 -*- 
-"""Unreal Editor OpenPype host API.""" +"""Unreal Editor Ayon host API.""" from .plugin import ( UnrealActorCreator, diff --git a/openpype/hosts/unreal/api/helpers.py b/openpype/hosts/unreal/api/helpers.py index 0b6f07f52f..e9ab3fb4c5 100644 --- a/openpype/hosts/unreal/api/helpers.py +++ b/openpype/hosts/unreal/api/helpers.py @@ -2,15 +2,15 @@ import unreal # noqa -class OpenPypeUnrealException(Exception): +class AyonUnrealException(Exception): pass @unreal.uclass() -class OpenPypeHelpers(unreal.OpenPypeLib): - """Class wrapping some useful functions for OpenPype. +class AyonHelpers(unreal.AyonLib): + """Class wrapping some useful functions for Ayon. - This class is extending native BP class in OpenPype Integration Plugin. + This class is extending native BP class in Ayon Integration Plugin. """ @@ -29,13 +29,13 @@ class OpenPypeHelpers(unreal.OpenPypeLib): Example: - OpenPypeHelpers().set_folder_color( + AyonHelpers().set_folder_color( "/Game/Path", unreal.LinearColor(a=1.0, r=1.0, g=0.5, b=0) ) Note: This will take effect only after Editor is restarted. I couldn't - find a way to refresh it. Also this saves the color definition + find a way to refresh it. Also, this saves the color definition into the project config, binding this path with color. So if you delete this path and later re-create, it will set this color again. diff --git a/openpype/hosts/unreal/api/pipeline.py b/openpype/hosts/unreal/api/pipeline.py index 1a7c626984..bb45fa8c01 100644 --- a/openpype/hosts/unreal/api/pipeline.py +++ b/openpype/hosts/unreal/api/pipeline.py @@ -14,7 +14,7 @@ from openpype.pipeline import ( register_creator_plugin_path, deregister_loader_plugin_path, deregister_creator_plugin_path, - AVALON_CONTAINER_ID, + AYON_CONTAINER_ID, ) from openpype.tools.utils import host_tools import openpype.hosts.unreal @@ -22,12 +22,13 @@ from openpype.host import HostBase, ILoadHost, IPublishHost import unreal # noqa +# Rename to Ayon once parent module renames logger = logging.getLogger("openpype.hosts.unreal") -OPENPYPE_CONTAINERS = "OpenPypeContainers" -CONTEXT_CONTAINER = "OpenPype/context.json" +AYON_CONTAINERS = "AyonContainers" +CONTEXT_CONTAINER = "Ayon/context.json" UNREAL_VERSION = semver.VersionInfo( - *os.getenv("OPENPYPE_UNREAL_VERSION").split(".") + *os.getenv("AYON_UNREAL_VERSION").split(".") ) HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.unreal.__file__)) @@ -53,14 +54,14 @@ class UnrealHost(HostBase, ILoadHost, IPublishHost): def get_containers(self): return ls() - def show_tools_popup(self): + @staticmethod + def show_tools_popup(): """Show tools popup with actions leading to show other tools.""" - show_tools_popup() - def show_tools_dialog(self): + @staticmethod + def show_tools_dialog(): """Show tools dialog with actions leading to show other tools.""" - show_tools_dialog() def update_context_data(self, data, changes): @@ -72,9 +73,10 @@ class UnrealHost(HostBase, ILoadHost, IPublishHost): with open(op_ctx, "w+") as f: json.dump(data, f) break - except IOError: + except IOError as e: if i == attempts - 1: - raise Exception("Failed to write context data. Aborting.") + raise Exception( + "Failed to write context data. Aborting.") from e unreal.log_warning("Failed to write context data. Retrying...") i += 1 time.sleep(3) @@ -95,19 +97,30 @@ def install(): print("-=" * 40) logo = '''. . - ____________ - / \\ __ \\ - \\ \\ \\/_\\ \\ - \\ \\ _____/ ______ - \\ \\ \\___// \\ \\ - \\ \\____\\ \\ \\_____\\ - \\/_____/ \\/______/ PYPE Club . 
+ · + │ + ·∙/ + ·-∙•∙-· + / \\ /∙· / \\ + ∙ \\ │ / ∙ + \\ \\ · / / + \\\\ ∙ ∙ // + \\\\/ \\// + ___ + │ │ + │ │ + │ │ + │___│ + -· + + ·-─═─-∙ A Y O N ∙-─═─-· + by YNPUT . ''' print(logo) - print("installing OpenPype for Unreal ...") + print("installing Ayon for Unreal ...") print("-=" * 40) - logger.info("installing OpenPype for Unreal") + logger.info("installing Ayon for Unreal") pyblish.api.register_host("unreal") pyblish.api.register_plugin_path(str(PUBLISH_PATH)) register_loader_plugin_path(str(LOAD_PATH)) @@ -117,7 +130,7 @@ def install(): def uninstall(): - """Uninstall Unreal configuration for Avalon.""" + """Uninstall Unreal configuration for Ayon.""" pyblish.api.deregister_plugin_path(str(PUBLISH_PATH)) deregister_loader_plugin_path(str(LOAD_PATH)) deregister_creator_plugin_path(str(CREATE_PATH)) @@ -125,14 +138,14 @@ def uninstall(): def _register_callbacks(): """ - TODO: Implement callbacks if supported by UE4 + TODO: Implement callbacks if supported by UE """ pass def _register_events(): """ - TODO: Implement callbacks if supported by UE4 + TODO: Implement callbacks if supported by UE """ pass @@ -146,32 +159,30 @@ def ls(): """ ar = unreal.AssetRegistryHelpers.get_asset_registry() # UE 5.1 changed how class name is specified - class_name = ["/Script/OpenPype", "AssetContainer"] if UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 else "AssetContainer" # noqa - openpype_containers = ar.get_assets_by_class(class_name, True) + class_name = ["/Script/Ayon", "AyonAssetContainer"] if UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 else "AyonAssetContainer" # noqa + ayon_containers = ar.get_assets_by_class(class_name, True) # get_asset_by_class returns AssetData. To get all metadata we need to # load asset. get_tag_values() work only on metadata registered in # Asset Registry Project settings (and there is no way to set it with # python short of editing ini configuration file). - for asset_data in openpype_containers: + for asset_data in ayon_containers: asset = asset_data.get_asset() data = unreal.EditorAssetLibrary.get_metadata_tag_values(asset) data["objectName"] = asset_data.asset_name - data = cast_map_to_str_dict(data) - - yield data + yield cast_map_to_str_dict(data) def ls_inst(): ar = unreal.AssetRegistryHelpers.get_asset_registry() # UE 5.1 changed how class name is specified class_name = [ - "/Script/OpenPype", - "OpenPypePublishInstance" + "/Script/Ayon", + "AyonPublishInstance" ] if ( UNREAL_VERSION.major == 5 and UNREAL_VERSION.minor > 0 - ) else "OpenPypePublishInstance" # noqa + ) else "AyonPublishInstance" # noqa instances = ar.get_assets_by_class(class_name, True) # get_asset_by_class returns AssetData. To get all metadata we need to @@ -182,13 +193,11 @@ def ls_inst(): asset = asset_data.get_asset() data = unreal.EditorAssetLibrary.get_metadata_tag_values(asset) data["objectName"] = asset_data.asset_name - data = cast_map_to_str_dict(data) - - yield data + yield cast_map_to_str_dict(data) def parse_container(container): - """To get data from container, AssetContainer must be loaded. + """To get data from container, AyonAssetContainer must be loaded. Args: container(str): path to container @@ -217,7 +226,7 @@ def containerise(name, namespace, nodes, context, loader=None, suffix="_CON"): Unreal doesn't support *groups* of assets that you can add metadata to. But it does support folders that helps to organize asset. Unfortunately those folders are just that - you cannot add any additional information - to them. 
OpenPype Integration Plugin is providing way out - Implementing + to them. Ayon Integration Plugin is providing way out - Implementing `AssetContainer` Blueprint class. This class when added to folder can handle metadata on it using standard :func:`unreal.EditorAssetLibrary.set_metadata_tag()` and @@ -226,30 +235,30 @@ def containerise(name, namespace, nodes, context, loader=None, suffix="_CON"): those assets is available as `assets` property. This is list of strings starting with asset type and ending with its path: - `Material /Game/OpenPype/Test/TestMaterial.TestMaterial` + `Material /Game/Ayon/Test/TestMaterial.TestMaterial` """ # 1 - create directory for container root = "/Game" - container_name = "{}{}".format(name, suffix) + container_name = f"{name}{suffix}" new_name = move_assets_to_path(root, container_name, nodes) # 2 - create Asset Container there - path = "{}/{}".format(root, new_name) + path = f"{root}/{new_name}" create_container(container=container_name, path=path) namespace = path data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "name": new_name, "namespace": namespace, "loader": str(loader), "representation": context["representation"]["_id"], } # 3 - imprint data - imprint("{}/{}".format(path, container_name), data) + imprint(f"{path}/{container_name}", data) return path @@ -257,7 +266,7 @@ def instantiate(root, name, data, assets=None, suffix="_INS"): """Bundles *nodes* into *container*. Marking it with metadata as publishable instance. If assets are provided, - they are moved to new path where `OpenPypePublishInstance` class asset is + they are moved to new path where `AyonPublishInstance` class asset is created and imprinted with metadata. This can then be collected for publishing by Pyblish for example. @@ -271,7 +280,7 @@ def instantiate(root, name, data, assets=None, suffix="_INS"): suffix (str): suffix string to append to instance name """ - container_name = "{}{}".format(name, suffix) + container_name = f"{name}{suffix}" # if we specify assets, create new folder and move them there. 
If not, # just create empty folder @@ -280,10 +289,10 @@ def instantiate(root, name, data, assets=None, suffix="_INS"): else: new_name = create_folder(root, name) - path = "{}/{}".format(root, new_name) + path = f"{root}/{new_name}" create_publish_instance(instance=container_name, path=path) - imprint("{}/{}".format(path, container_name), data) + imprint(f"{path}/{container_name}", data) def imprint(node, data): @@ -299,7 +308,7 @@ def imprint(node, data): loaded_asset, key, str(value) ) - with unreal.ScopedEditorTransaction("OpenPype containerising"): + with unreal.ScopedEditorTransaction("Ayon containerising"): unreal.EditorAssetLibrary.save_asset(node) @@ -366,11 +375,11 @@ def create_folder(root: str, name: str) -> str: eal = unreal.EditorAssetLibrary index = 1 while True: - if eal.does_directory_exist("{}/{}".format(root, name)): - name = "{}{}".format(name, index) + if eal.does_directory_exist(f"{root}/{name}"): + name = f"{name}{index}" index += 1 else: - eal.make_directory("{}/{}".format(root, name)) + eal.make_directory(f"{root}/{name}") break return name @@ -403,9 +412,7 @@ def move_assets_to_path(root: str, name: str, assets: List[str]) -> str: unreal.log(assets) for asset in assets: loaded = eal.load_asset(asset) - eal.rename_asset( - asset, "{}/{}/{}".format(root, name, loaded.get_name()) - ) + eal.rename_asset(asset, f"{root}/{name}/{loaded.get_name()}") return name @@ -432,17 +439,16 @@ def create_container(container: str, path: str) -> unreal.Object: ) """ - factory = unreal.AssetContainerFactory() + factory = unreal.AyonAssetContainerFactory() tools = unreal.AssetToolsHelpers().get_asset_tools() - asset = tools.create_asset(container, path, None, factory) - return asset + return tools.create_asset(container, path, None, factory) def create_publish_instance(instance: str, path: str) -> unreal.Object: - """Helper function to create OpenPype Publish Instance on given path. + """Helper function to create Ayon Publish Instance on given path. - This behaves similarly as :func:`create_openpype_container`. + This behaves similarly as :func:`create_ayon_container`. Args: path (str): Path where to create Publish Instance. 
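For reference (not part of the patch), the tags written by `imprint()` can be read back with the matching Editor Asset Library getter; the asset path below is a hypothetical container created by `containerise()`.

```python
# Sketch only: read back metadata tags imprinted on a container asset.
import unreal

asset = unreal.EditorAssetLibrary.load_asset("/Game/myAsset_CON/myAsset_CON")
representation = unreal.EditorAssetLibrary.get_metadata_tag(asset, "representation")
loader = unreal.EditorAssetLibrary.get_metadata_tag(asset, "loader")
unreal.log(f"{loader}: {representation}")
```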
@@ -460,10 +466,9 @@ def create_publish_instance(instance: str, path: str) -> unreal.Object: ) """ - factory = unreal.OpenPypePublishInstanceFactory() + factory = unreal.AyonPublishInstanceFactory() tools = unreal.AssetToolsHelpers().get_asset_tools() - asset = tools.create_asset(instance, path, None, factory) - return asset + return tools.create_asset(instance, path, None, factory) def cast_map_to_str_dict(umap) -> dict: @@ -494,11 +499,14 @@ def get_subsequences(sequence: unreal.LevelSequence): """ tracks = sequence.get_master_tracks() - subscene_track = None - for t in tracks: - if t.get_class() == unreal.MovieSceneSubTrack.static_class(): - subscene_track = t - break + subscene_track = next( + ( + t + for t in tracks + if t.get_class() == unreal.MovieSceneSubTrack.static_class() + ), + None, + ) if subscene_track is not None and subscene_track.get_sections(): return subscene_track.get_sections() return [] diff --git a/openpype/hosts/unreal/api/plugin.py b/openpype/hosts/unreal/api/plugin.py index d60050a696..26ef69af86 100644 --- a/openpype/hosts/unreal/api/plugin.py +++ b/openpype/hosts/unreal/api/plugin.py @@ -31,7 +31,7 @@ from openpype.pipeline import ( @six.add_metaclass(ABCMeta) class UnrealBaseCreator(Creator): """Base class for Unreal creator plugins.""" - root = "/Game/OpenPype/PublishInstances" + root = "/Game/Ayon/AyonPublishInstances" suffix = "_INS" @staticmethod @@ -243,5 +243,5 @@ class UnrealActorCreator(UnrealBaseCreator): class Loader(LoaderPlugin, ABC): - """This serves as skeleton for future OpenPype specific functionality""" + """This serves as skeleton for future Ayon specific functionality""" pass diff --git a/openpype/hosts/unreal/api/rendering.py b/openpype/hosts/unreal/api/rendering.py index 3b54ab08a2..efe6fc54ad 100644 --- a/openpype/hosts/unreal/api/rendering.py +++ b/openpype/hosts/unreal/api/rendering.py @@ -51,7 +51,7 @@ def start_rendering(): # instances = pipeline.ls_inst() instances = [ a for a in assets - if a.get_class().get_name() == "OpenPypePublishInstance"] + if a.get_class().get_name() == "AyonPublishInstance"] inst_data = [] @@ -64,8 +64,9 @@ def start_rendering(): project = os.environ.get("AVALON_PROJECT") anatomy = Anatomy(project) root = anatomy.roots['renders'] - except Exception: - raise Exception("Could not find render root in anatomy settings.") + except Exception as e: + raise Exception( + "Could not find render root in anatomy settings.") from e render_dir = f"{root}/{project}" @@ -121,7 +122,7 @@ def start_rendering(): job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob) job.sequence = unreal.SoftObjectPath(i["master_sequence"]) job.map = unreal.SoftObjectPath(i["master_level"]) - job.author = "OpenPype" + job.author = "Ayon" # If we have a saved configuration, copy it to the job. if config: @@ -129,7 +130,7 @@ def start_rendering(): # User data could be used to pass data to the job, that can be # read in the job's OnJobFinished callback. We could, - # for instance, pass the AvalonPublishInstance's path to the job. + # for instance, pass the AyonPublishInstance's path to the job. 
# job.user_data = "" output_dir = render_setting.get('output') diff --git a/openpype/hosts/unreal/api/tools_ui.py b/openpype/hosts/unreal/api/tools_ui.py index 8531472142..5a4c689918 100644 --- a/openpype/hosts/unreal/api/tools_ui.py +++ b/openpype/hosts/unreal/api/tools_ui.py @@ -64,7 +64,7 @@ class ToolsDialog(QtWidgets.QDialog): def __init__(self, *args, **kwargs): super(ToolsDialog, self).__init__(*args, **kwargs) - self.setWindowTitle("OpenPype tools") + self.setWindowTitle("Ayon tools") icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) self.setWindowIcon(icon) diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index efbacc3b16..f01609d314 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -186,15 +186,15 @@ class UnrealPrelaunchHook(PreLaunchHook): project_path.mkdir(parents=True, exist_ok=True) - # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for + # Set "AYON_UNREAL_PLUGIN" to current process environment for # execution of `create_unreal_project` - if self.launch_context.env.get("OPENPYPE_UNREAL_PLUGIN"): + if self.launch_context.env.get("AYON_UNREAL_PLUGIN"): self.log.info(( - f"{self.signature} using OpenPype plugin from " - f"{self.launch_context.env.get('OPENPYPE_UNREAL_PLUGIN')}" + f"{self.signature} using Ayon plugin from " + f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}" )) - env_key = "OPENPYPE_UNREAL_PLUGIN" + env_key = "AYON_UNREAL_PLUGIN" if self.launch_context.env.get(env_key): os.environ[env_key] = self.launch_context.env[env_key] @@ -213,7 +213,7 @@ class UnrealPrelaunchHook(PreLaunchHook): engine_path, project_path) - self.launch_context.env["OPENPYPE_UNREAL_VERSION"] = engine_version + self.launch_context.env["AYON_UNREAL_VERSION"] = engine_version # Append project file to launch arguments self.launch_context.launch_args.append( f"\"{project_file.as_posix()}\"") diff --git a/openpype/hosts/unreal/integration b/openpype/hosts/unreal/integration new file mode 160000 index 0000000000..ff15c70077 --- /dev/null +++ b/openpype/hosts/unreal/integration @@ -0,0 +1 @@ +Subproject commit ff15c700771e719cc5f3d561ac5d6f7590623986 diff --git a/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/.gitignore b/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/.gitignore deleted file mode 100644 index e74e6886b7..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/.gitignore +++ /dev/null @@ -1,8 +0,0 @@ -/Saved -/DerivedDataCache -/Intermediate -/Content -/Config -/Binaries -/.idea -/.vs \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/CommandletProject.uproject b/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/CommandletProject.uproject deleted file mode 100644 index 4d75e03bf3..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/CommandletProject/CommandletProject.uproject +++ /dev/null @@ -1,12 +0,0 @@ -{ - "FileVersion": 3, - "EngineAssociation": "4.27", - "Category": "", - "Description": "", - "Plugins": [ - { - "Name": "OpenPype", - "Enabled": true - } - ] -} \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/.gitignore b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/.gitignore deleted file mode 100644 index b32a6f55e5..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/.gitignore +++ /dev/null @@ -1,35 +0,0 @@ -# 
Prerequisites -*.d - -# Compiled Object files -*.slo -*.lo -*.o -*.obj - -# Precompiled Headers -*.gch -*.pch - -# Compiled Dynamic libraries -*.so -*.dylib -*.dll - -# Fortran module files -*.mod -*.smod - -# Compiled Static libraries -*.lai -*.la -*.a -*.lib - -# Executables -*.exe -*.out -*.app - -/Binaries -/Intermediate diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/DefaultOpenPypeSettings.ini b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/DefaultOpenPypeSettings.ini deleted file mode 100644 index 8a883cf1db..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/DefaultOpenPypeSettings.ini +++ /dev/null @@ -1,2 +0,0 @@ -[/Script/OpenPype.OpenPypeSettings] -FolderColor=(R=91,G=197,B=220,A=255) \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/FilterPlugin.ini b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/FilterPlugin.ini deleted file mode 100644 index ccebca2f32..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Config/FilterPlugin.ini +++ /dev/null @@ -1,8 +0,0 @@ -[FilterPlugin] -; This section lists additional files which will be packaged along with your plugin. Paths should be listed relative to the root plugin directory, and -; may include "...", "*", and "?" wildcards to match directories, files, and individual characters respectively. -; -; Examples: -; /README.txt -; /Extras/... -; /Binaries/ThirdParty/*.dll diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Content/Python/init_unreal.py b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Content/Python/init_unreal.py deleted file mode 100644 index b85f970699..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Content/Python/init_unreal.py +++ /dev/null @@ -1,30 +0,0 @@ -import unreal - -openpype_detected = True -try: - from openpype.pipeline import install_host - from openpype.hosts.unreal.api import UnrealHost - - openpype_host = UnrealHost() -except ImportError as exc: - openpype_host = None - openpype_detected = False - unreal.log_error("OpenPype: cannot load OpenPype [ {} ]".format(exc)) - -if openpype_detected: - install_host(openpype_host) - - -@unreal.uclass() -class OpenPypeIntegration(unreal.OpenPypePythonBridge): - @unreal.ufunction(override=True) - def RunInPython_Popup(self): - unreal.log_warning("OpenPype: showing tools popup") - if openpype_detected: - openpype_host.show_tools_popup() - - @unreal.ufunction(override=True) - def RunInPython_Dialog(self): - unreal.log_warning("OpenPype: showing tools dialog") - if openpype_detected: - openpype_host.show_tools_dialog() diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/OpenPype.uplugin b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/OpenPype.uplugin deleted file mode 100644 index b2cbe3cff3..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/OpenPype.uplugin +++ /dev/null @@ -1,23 +0,0 @@ -{ - "FileVersion": 3, - "Version": 1, - "VersionName": "1.0", - "FriendlyName": "OpenPype", - "Description": "OpenPype Integration", - "Category": "OpenPype.Integration", - "CreatedBy": "Ondrej Samohel", - "CreatedByURL": "https://openpype.io", - "DocsURL": "https://openpype.io/docs/artist_hosts_unreal", - "MarketplaceURL": "", - "SupportURL": "https://pype.club/", - "EngineVersion": "4.27", - "CanContainContent": true, - "Installed": true, - "Modules": [ - { - "Name": "OpenPype", - "Type": "Editor", - "LoadingPhase": "Default" - } - ] -} \ No newline at end of file diff --git 
a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/README.md b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/README.md deleted file mode 100644 index a08c1ada39..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# OpenPype Unreal Integration plugin - UE 4.x - -This is plugin for Unreal Editor, creating menu for [OpenPype](https://github.com/getavalon) tools to run. - -## How does this work - -Plugin is creating basic menu items in **Window/OpenPype** section of Unreal Editor main menu and a button -on the main toolbar with associated menu. Clicking on those menu items is calling callbacks that are -declared in c++ but needs to be implemented during Unreal Editor -startup in `Plugins/OpenPype/Content/Python/init_unreal.py` - this should be executed by Unreal Editor -automatically. diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype128.png b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype128.png deleted file mode 100644 index abe8a807ef..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype128.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype40.png b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype40.png deleted file mode 100644 index f983e7a1f2..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype40.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype512.png b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype512.png deleted file mode 100644 index 97c4d4326b..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Resources/openpype512.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/OpenPype.Build.cs b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/OpenPype.Build.cs deleted file mode 100644 index f77c1383eb..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/OpenPype.Build.cs +++ /dev/null @@ -1,59 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -using UnrealBuildTool; - -public class OpenPype : ModuleRules -{ - public OpenPype(ReadOnlyTargetRules Target) : base(Target) - { - PCHUsage = PCHUsageMode.UseExplicitOrSharedPCHs; - - PublicIncludePaths.AddRange( - new string[] { - // ... add public include paths required here ... - } - ); - - - PrivateIncludePaths.AddRange( - new string[] { - // ... add other private include paths required here ... - } - ); - - - PublicDependencyModuleNames.AddRange( - new string[] - { - "Core", - // ... add other public dependencies that you statically link with here ... - } - ); - - - PrivateDependencyModuleNames.AddRange( - new string[] - { - "GameProjectGeneration", - "Projects", - "InputCore", - "UnrealEd", - "LevelEditor", - "CoreUObject", - "Engine", - "Slate", - "SlateCore", - "AssetTools" - // ... add private dependencies that you statically link with here ... - } - ); - - - DynamicallyLoadedModuleNames.AddRange( - new string[] - { - // ... add any modules that your module loads dynamically here ... 
- } - ); - } -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp deleted file mode 100644 index abb1975027..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp +++ /dev/null @@ -1,141 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "Commandlets/Implementations/OPGenerateProjectCommandlet.h" - -#include "Editor.h" -#include "GameProjectUtils.h" -#include "OPConstants.h" -#include "Commandlets/OPActionResult.h" -#include "ProjectDescriptor.h" - -int32 UOPGenerateProjectCommandlet::Main(const FString& CommandLineParams) -{ - //Parses command line parameters & creates structure FProjectInformation - const FOPGenerateProjectParams ParsedParams = FOPGenerateProjectParams(CommandLineParams); - ProjectInformation = ParsedParams.GenerateUEProjectInformation(); - - //Creates .uproject & other UE files - EVALUATE_OP_ACTION_RESULT(TryCreateProject()); - - //Loads created .uproject - EVALUATE_OP_ACTION_RESULT(TryLoadProjectDescriptor()); - - //Adds needed plugin to .uproject - AttachPluginsToProjectDescriptor(); - - //Saves .uproject - EVALUATE_OP_ACTION_RESULT(TrySave()); - - //When we are here, there should not be problems in generating Unreal Project for OpenPype - return 0; -} - - -FOPGenerateProjectParams::FOPGenerateProjectParams(): FOPGenerateProjectParams("") -{ -} - -FOPGenerateProjectParams::FOPGenerateProjectParams(const FString& CommandLineParams): CommandLineParams( - CommandLineParams) -{ - UCommandlet::ParseCommandLine(*CommandLineParams, Tokens, Switches); -} - -FProjectInformation FOPGenerateProjectParams::GenerateUEProjectInformation() const -{ - FProjectInformation ProjectInformation = FProjectInformation(); - ProjectInformation.ProjectFilename = GetProjectFileName(); - - ProjectInformation.bShouldGenerateCode = IsSwitchPresent("GenerateCode"); - - return ProjectInformation; -} - -FString FOPGenerateProjectParams::TryGetToken(const int32 Index) const -{ - return Tokens.IsValidIndex(Index) ? Tokens[Index] : ""; -} - -FString FOPGenerateProjectParams::GetProjectFileName() const -{ - return TryGetToken(0); -} - -bool FOPGenerateProjectParams::IsSwitchPresent(const FString& Switch) const -{ - return INDEX_NONE != Switches.IndexOfByPredicate([&Switch](const FString& Item) -> bool - { - return Item.Equals(Switch); - } - ); -} - - -UOPGenerateProjectCommandlet::UOPGenerateProjectCommandlet() -{ - LogToConsole = true; -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TryCreateProject() const -{ - FText FailReason; - FText FailLog; - TArray OutCreatedFiles; - - if (!GameProjectUtils::CreateProject(ProjectInformation, FailReason, FailLog, &OutCreatedFiles)) - return FOP_ActionResult(EOP_ActionResult::ProjectNotCreated, FailReason); - return FOP_ActionResult(); -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TryLoadProjectDescriptor() -{ - FText FailReason; - const bool bLoaded = ProjectDescriptor.Load(ProjectInformation.ProjectFilename, FailReason); - - return FOP_ActionResult(bLoaded ? 
EOP_ActionResult::Ok : EOP_ActionResult::ProjectNotLoaded, FailReason); -} - -void UOPGenerateProjectCommandlet::AttachPluginsToProjectDescriptor() -{ - FPluginReferenceDescriptor OPPluginDescriptor; - OPPluginDescriptor.bEnabled = true; - OPPluginDescriptor.Name = OPConstants::OP_PluginName; - ProjectDescriptor.Plugins.Add(OPPluginDescriptor); - - FPluginReferenceDescriptor PythonPluginDescriptor; - PythonPluginDescriptor.bEnabled = true; - PythonPluginDescriptor.Name = OPConstants::PythonScript_PluginName; - ProjectDescriptor.Plugins.Add(PythonPluginDescriptor); - - FPluginReferenceDescriptor SequencerScriptingPluginDescriptor; - SequencerScriptingPluginDescriptor.bEnabled = true; - SequencerScriptingPluginDescriptor.Name = OPConstants::SequencerScripting_PluginName; - ProjectDescriptor.Plugins.Add(SequencerScriptingPluginDescriptor); - - FPluginReferenceDescriptor MovieRenderPipelinePluginDescriptor; - MovieRenderPipelinePluginDescriptor.bEnabled = true; - MovieRenderPipelinePluginDescriptor.Name = OPConstants::MovieRenderPipeline_PluginName; - ProjectDescriptor.Plugins.Add(MovieRenderPipelinePluginDescriptor); - - FPluginReferenceDescriptor EditorScriptingPluginDescriptor; - EditorScriptingPluginDescriptor.bEnabled = true; - EditorScriptingPluginDescriptor.Name = OPConstants::EditorScriptingUtils_PluginName; - ProjectDescriptor.Plugins.Add(EditorScriptingPluginDescriptor); -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TrySave() -{ - FText FailReason; - const bool bSaved = ProjectDescriptor.Save(ProjectInformation.ProjectFilename, FailReason); - - return FOP_ActionResult(bSaved ? EOP_ActionResult::Ok : EOP_ActionResult::ProjectNotSaved, FailReason); -} - -FOPGenerateProjectParams UOPGenerateProjectCommandlet::ParseParameters(const FString& Params) const -{ - FOPGenerateProjectParams ParamsResult; - - TArray Tokens, Switches; - ParseCommandLine(*Params, Tokens, Switches); - - return ParamsResult; -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp deleted file mode 100644 index 6e50ef2221..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp +++ /dev/null @@ -1,41 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
- - -#include "Commandlets/OPActionResult.h" -#include "Logging/OP_Log.h" - -EOP_ActionResult::Type& FOP_ActionResult::GetStatus() -{ - return Status; -} - -FText& FOP_ActionResult::GetReason() -{ - return Reason; -} - -FOP_ActionResult::FOP_ActionResult():Status(EOP_ActionResult::Type::Ok) -{ - -} - -FOP_ActionResult::FOP_ActionResult(const EOP_ActionResult::Type& InEnum):Status(InEnum) -{ - TryLog(); -} - -FOP_ActionResult::FOP_ActionResult(const EOP_ActionResult::Type& InEnum, const FText& InReason):Status(InEnum), Reason(InReason) -{ - TryLog(); -}; - -bool FOP_ActionResult::IsProblem() const -{ - return Status != EOP_ActionResult::Ok; -} - -void FOP_ActionResult::TryLog() const -{ - if(IsProblem()) - UE_LOG(LogCommandletOPGenerateProject, Error, TEXT("%s"), *Reason.ToString()); -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp deleted file mode 100644 index 29b1068c21..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp +++ /dev/null @@ -1 +0,0 @@ -#include "Logging/OP_Log.h" diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPype.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPype.cpp deleted file mode 100644 index 9bf7b341c5..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPype.cpp +++ /dev/null @@ -1,155 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPype.h" - -#include "ISettingsContainer.h" -#include "ISettingsModule.h" -#include "ISettingsSection.h" -#include "LevelEditor.h" -#include "OpenPypePythonBridge.h" -#include "OpenPypeSettings.h" -#include "OpenPypeStyle.h" - - -static const FName OpenPypeTabName("OpenPype"); - -#define LOCTEXT_NAMESPACE "FOpenPypeModule" - -// This function is triggered when the plugin is staring up -void FOpenPypeModule::StartupModule() -{ - if (!IsRunningCommandlet()) { - FOpenPypeStyle::Initialize(); - FOpenPypeStyle::SetIcon("Logo", "openpype40"); - - // Create the Extender that will add content to the menu - FLevelEditorModule& LevelEditorModule = FModuleManager::LoadModuleChecked("LevelEditor"); - - TSharedPtr MenuExtender = MakeShareable(new FExtender()); - TSharedPtr ToolbarExtender = MakeShareable(new FExtender()); - - MenuExtender->AddMenuExtension( - "LevelEditor", - EExtensionHook::After, - NULL, - FMenuExtensionDelegate::CreateRaw(this, &FOpenPypeModule::AddMenuEntry) - ); - ToolbarExtender->AddToolBarExtension( - "Settings", - EExtensionHook::After, - NULL, - FToolBarExtensionDelegate::CreateRaw(this, &FOpenPypeModule::AddToobarEntry)); - - - LevelEditorModule.GetMenuExtensibilityManager()->AddExtender(MenuExtender); - LevelEditorModule.GetToolBarExtensibilityManager()->AddExtender(ToolbarExtender); - - RegisterSettings(); - } -} - -void FOpenPypeModule::ShutdownModule() -{ - FOpenPypeStyle::Shutdown(); -} - - -void FOpenPypeModule::AddMenuEntry(FMenuBuilder& MenuBuilder) -{ - // Create Section - MenuBuilder.BeginSection("OpenPype", TAttribute(FText::FromString("OpenPype"))); - { - // Create a Submenu inside of the Section - MenuBuilder.AddMenuEntry( - FText::FromString("Tools..."), - FText::FromString("Pipeline tools"), - FSlateIcon(FOpenPypeStyle::GetStyleSetName(), "OpenPype.Logo"), - FUIAction(FExecuteAction::CreateRaw(this, &FOpenPypeModule::MenuPopup)) - ); - - 
MenuBuilder.AddMenuEntry( - FText::FromString("Tools dialog..."), - FText::FromString("Pipeline tools dialog"), - FSlateIcon(FOpenPypeStyle::GetStyleSetName(), "OpenPype.Logo"), - FUIAction(FExecuteAction::CreateRaw(this, &FOpenPypeModule::MenuDialog)) - ); - } - MenuBuilder.EndSection(); -} - -void FOpenPypeModule::AddToobarEntry(FToolBarBuilder& ToolbarBuilder) -{ - ToolbarBuilder.BeginSection(TEXT("OpenPype")); - { - ToolbarBuilder.AddToolBarButton( - FUIAction( - FExecuteAction::CreateRaw(this, &FOpenPypeModule::MenuPopup), - NULL, - FIsActionChecked() - - ), - NAME_None, - LOCTEXT("OpenPype_label", "OpenPype"), - LOCTEXT("OpenPype_tooltip", "OpenPype Tools"), - FSlateIcon(FOpenPypeStyle::GetStyleSetName(), "OpenPype.Logo") - ); - } - ToolbarBuilder.EndSection(); -} - -void FOpenPypeModule::RegisterSettings() -{ - ISettingsModule& SettingsModule = FModuleManager::LoadModuleChecked("Settings"); - - // Create the new category - // TODO: After the movement of the plugin from the game to editor, it might be necessary to move this! - ISettingsContainerPtr SettingsContainer = SettingsModule.GetContainer("Project"); - - UOpenPypeSettings* Settings = GetMutableDefault(); - - // Register the settings - ISettingsSectionPtr SettingsSection = SettingsModule.RegisterSettings("Project", "OpenPype", "General", - LOCTEXT("RuntimeGeneralSettingsName", - "General"), - LOCTEXT("RuntimeGeneralSettingsDescription", - "Base configuration for Open Pype Module"), - Settings - ); - - // Register the save handler to your settings, you might want to use it to - // validate those or just act to settings changes. - if (SettingsSection.IsValid()) - { - SettingsSection->OnModified().BindRaw(this, &FOpenPypeModule::HandleSettingsSaved); - } -} - -bool FOpenPypeModule::HandleSettingsSaved() -{ - UOpenPypeSettings* Settings = GetMutableDefault(); - bool ResaveSettings = false; - - // You can put any validation code in here and resave the settings in case an invalid - // value has been entered - - if (ResaveSettings) - { - Settings->SaveConfig(); - } - - return true; -} - - -void FOpenPypeModule::MenuPopup() -{ - UOpenPypePythonBridge* bridge = UOpenPypePythonBridge::Get(); - bridge->RunInPython_Popup(); -} - -void FOpenPypeModule::MenuDialog() -{ - UOpenPypePythonBridge* bridge = UOpenPypePythonBridge::Get(); - bridge->RunInPython_Dialog(); -} - -IMPLEMENT_MODULE(FOpenPypeModule, OpenPype) diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp deleted file mode 100644 index 34faba1f49..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypeLib.h" - -#include "AssetViewUtils.h" -#include "Misc/Paths.h" -#include "Misc/ConfigCacheIni.h" -#include "UObject/UnrealType.h" - -/** - * Sets color on folder icon on given path - * @param InPath - path to folder - * @param InFolderColor - color of the folder - * @warning This color will appear only after Editor restart. Is there a better way? 
- */ - -bool UOpenPypeLib::SetFolderColor(const FString& FolderPath, const FLinearColor& FolderColor, const bool& bForceAdd) -{ - if (AssetViewUtils::DoesFolderExist(FolderPath)) - { - const TSharedPtr LinearColor = MakeShared(FolderColor); - - AssetViewUtils::SaveColor(FolderPath, LinearColor, true); - UE_LOG(LogAssetData, Display, TEXT("A color {%s} has been set to folder \"%s\""), *LinearColor->ToString(), - *FolderPath) - return true; - } - - UE_LOG(LogAssetData, Display, TEXT("Setting a color {%s} to folder \"%s\" has failed! Directory doesn't exist!"), - *FolderColor.ToString(), *FolderPath) - return false; -} - -/** - * Returns all properties on given object - * @param cls - class - * @return TArray of properties - */ -TArray UOpenPypeLib::GetAllProperties(UClass* cls) -{ - TArray Ret; - if (cls != nullptr) - { - for (TFieldIterator It(cls); It; ++It) - { - FProperty* Property = *It; - if (Property->HasAnyPropertyFlags(EPropertyFlags::CPF_Edit)) - { - Ret.Add(Property->GetName()); - } - } - } - return Ret; -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp deleted file mode 100644 index 05638fbd0b..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp +++ /dev/null @@ -1,201 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "OpenPypePublishInstance.h" -#include "AssetRegistryModule.h" -#include "OpenPypeLib.h" -#include "OpenPypeSettings.h" -#include "Framework/Notifications/NotificationManager.h" -#include "Widgets/Notifications/SNotificationList.h" - -//Moves all the invalid pointers to the end to prepare them for the shrinking -#define REMOVE_INVALID_ENTRIES(VAR) VAR.CompactStable(); \ - VAR.Shrink(); - -UOpenPypePublishInstance::UOpenPypePublishInstance(const FObjectInitializer& ObjectInitializer) - : UPrimaryDataAsset(ObjectInitializer) -{ - const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked< - FAssetRegistryModule>("AssetRegistry"); - - const FPropertyEditorModule& PropertyEditorModule = FModuleManager::LoadModuleChecked( - "PropertyEditor"); - - FString Left, Right; - GetPathName().Split("/" + GetName(), &Left, &Right); - - FARFilter Filter; - Filter.PackagePaths.Emplace(FName(Left)); - - TArray FoundAssets; - AssetRegistryModule.GetRegistry().GetAssets(Filter, FoundAssets); - - for (const FAssetData& AssetData : FoundAssets) - OnAssetCreated(AssetData); - - REMOVE_INVALID_ENTRIES(AssetDataInternal) - REMOVE_INVALID_ENTRIES(AssetDataExternal) - - AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetCreated); - AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UOpenPypePublishInstance::OnAssetRemoved); - AssetRegistryModule.Get().OnAssetUpdated().AddUObject(this, &UOpenPypePublishInstance::OnAssetUpdated); - -#ifdef WITH_EDITOR - ColorOpenPypeDirs(); -#endif - -} - -void UOpenPypePublishInstance::OnAssetCreated(const FAssetData& InAssetData) -{ - TArray split; - - UObject* Asset = InAssetData.GetAsset(); - - if (!IsValid(Asset)) - { - UE_LOG(LogAssetData, Warning, TEXT("Asset \"%s\" is not valid! 
Skipping the addition."), - *InAssetData.ObjectPath.ToString()); - return; - } - - const bool result = IsUnderSameDir(Asset) && Cast(Asset) == nullptr; - - if (result) - { - if (AssetDataInternal.Emplace(Asset).IsValidId()) - { - UE_LOG(LogTemp, Log, TEXT("Added an Asset to PublishInstance - Publish Instance: %s, Asset %s"), - *this->GetName(), *Asset->GetName()); - } - } -} - -void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& InAssetData) -{ - if (Cast(InAssetData.GetAsset()) == nullptr) - { - if (AssetDataInternal.Contains(nullptr)) - { - AssetDataInternal.Remove(nullptr); - REMOVE_INVALID_ENTRIES(AssetDataInternal) - } - else - { - AssetDataExternal.Remove(nullptr); - REMOVE_INVALID_ENTRIES(AssetDataExternal) - } - } -} - -void UOpenPypePublishInstance::OnAssetUpdated(const FAssetData& InAssetData) -{ - REMOVE_INVALID_ENTRIES(AssetDataInternal); - REMOVE_INVALID_ENTRIES(AssetDataExternal); -} - -bool UOpenPypePublishInstance::IsUnderSameDir(const UObject* InAsset) const -{ - FString ThisLeft, ThisRight; - this->GetPathName().Split(this->GetName(), &ThisLeft, &ThisRight); - - return InAsset->GetPathName().StartsWith(ThisLeft); -} - -#ifdef WITH_EDITOR - -void UOpenPypePublishInstance::ColorOpenPypeDirs() -{ - FString PathName = this->GetPathName(); - - //Check whether the path contains the defined OpenPype folder - if (!PathName.Contains(TEXT("OpenPype"))) return; - - //Get the base path for open pype - FString PathLeft, PathRight; - PathName.Split(FString("OpenPype"), &PathLeft, &PathRight); - - if (PathLeft.IsEmpty() || PathRight.IsEmpty()) - { - UE_LOG(LogAssetData, Error, TEXT("Failed to retrieve the base OpenPype directory!")) - return; - } - - PathName.RemoveFromEnd(PathRight, ESearchCase::CaseSensitive); - - //Get the current settings - const UOpenPypeSettings* Settings = GetMutableDefault(); - - //Color the base folder - UOpenPypeLib::SetFolderColor(PathName, Settings->GetFolderFColor(), false); - - //Get Sub paths, iterate through them and color them according to the folder color in UOpenPypeSettings - const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked( - "AssetRegistry"); - - TArray PathList; - - AssetRegistryModule.Get().GetSubPaths(PathName, PathList, true); - - if (PathList.Num() > 0) - { - for (const FString& Path : PathList) - { - UOpenPypeLib::SetFolderColor(Path, Settings->GetFolderFColor(), false); - } - } -} - -void UOpenPypePublishInstance::SendNotification(const FString& Text) const -{ - FNotificationInfo Info{FText::FromString(Text)}; - - Info.bFireAndForget = true; - Info.bUseLargeFont = false; - Info.bUseThrobber = false; - Info.bUseSuccessFailIcons = false; - Info.ExpireDuration = 4.f; - Info.FadeOutDuration = 2.f; - - FSlateNotificationManager::Get().AddNotification(Info); - - UE_LOG(LogAssetData, Warning, - TEXT( - "Removed duplicated asset from the AssetsDataExternal in Container \"%s\", Asset is already included in the AssetDataInternal!" 
- ), *GetName() - ) -} - - -void UOpenPypePublishInstance::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) -{ - Super::PostEditChangeProperty(PropertyChangedEvent); - - if (PropertyChangedEvent.ChangeType == EPropertyChangeType::ValueSet && - PropertyChangedEvent.Property->GetFName() == GET_MEMBER_NAME_CHECKED( - UOpenPypePublishInstance, AssetDataExternal)) - { - // Check for duplicated assets - for (const auto& Asset : AssetDataInternal) - { - if (AssetDataExternal.Contains(Asset)) - { - AssetDataExternal.Remove(Asset); - return SendNotification( - "You are not allowed to add assets into AssetDataExternal which are already included in AssetDataInternal!"); - } - } - - // Check if no UOpenPypePublishInstance type assets are included - for (const auto& Asset : AssetDataExternal) - { - if (Cast(Asset.Get()) != nullptr) - { - AssetDataExternal.Remove(Asset); - return SendNotification("You are not allowed to add publish instances!"); - } - } - } -} - -#endif diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp deleted file mode 100644 index a32ebe32cb..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypePublishInstanceFactory.h" -#include "OpenPypePublishInstance.h" - -UOpenPypePublishInstanceFactory::UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer) - : UFactory(ObjectInitializer) -{ - SupportedClass = UOpenPypePublishInstance::StaticClass(); - bCreateNew = false; - bEditorImport = true; -} - -UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) -{ - check(InClass->IsChildOf(UOpenPypePublishInstance::StaticClass())); - return NewObject(InParent, InClass, InName, Flags); -} - -bool UOpenPypePublishInstanceFactory::ShouldShowInNewMenu() const { - return false; -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp deleted file mode 100644 index 6ebfc528f0..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypePythonBridge.h" - -UOpenPypePythonBridge* UOpenPypePythonBridge::Get() -{ - TArray OpenPypePythonBridgeClasses; - GetDerivedClasses(UOpenPypePythonBridge::StaticClass(), OpenPypePythonBridgeClasses); - int32 NumClasses = OpenPypePythonBridgeClasses.Num(); - if (NumClasses > 0) - { - return Cast(OpenPypePythonBridgeClasses[NumClasses - 1]->GetDefaultObject()); - } - return nullptr; -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp deleted file mode 100644 index dd4228dfd0..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
- -#include "OpenPypeSettings.h" - -#include "Interfaces/IPluginManager.h" - -/** - * Mainly is used for initializing default values if the DefaultOpenPypeSettings.ini file does not exist in the saved config - */ -UOpenPypeSettings::UOpenPypeSettings(const FObjectInitializer& ObjectInitializer) -{ - - const FString ConfigFilePath = OPENPYPE_SETTINGS_FILEPATH; - - // This has to be probably in the future set using the UE Reflection system - FColor Color; - GConfig->GetColor(TEXT("/Script/OpenPype.OpenPypeSettings"), TEXT("FolderColor"), Color, ConfigFilePath); - - FolderColor = Color; -} \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp deleted file mode 100644 index 0cc854c5ef..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp +++ /dev/null @@ -1,70 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypeStyle.h" -#include "Framework/Application/SlateApplication.h" -#include "Styling/SlateStyle.h" -#include "Styling/SlateStyleRegistry.h" - - -TUniquePtr< FSlateStyleSet > FOpenPypeStyle::OpenPypeStyleInstance = nullptr; - -void FOpenPypeStyle::Initialize() -{ - if (!OpenPypeStyleInstance.IsValid()) - { - OpenPypeStyleInstance = Create(); - FSlateStyleRegistry::RegisterSlateStyle(*OpenPypeStyleInstance); - } -} - -void FOpenPypeStyle::Shutdown() -{ - if (OpenPypeStyleInstance.IsValid()) - { - FSlateStyleRegistry::UnRegisterSlateStyle(*OpenPypeStyleInstance); - OpenPypeStyleInstance.Reset(); - } -} - -FName FOpenPypeStyle::GetStyleSetName() -{ - static FName StyleSetName(TEXT("OpenPypeStyle")); - return StyleSetName; -} - -FName FOpenPypeStyle::GetContextName() -{ - static FName ContextName(TEXT("OpenPype")); - return ContextName; -} - -#define IMAGE_BRUSH(RelativePath, ...) FSlateImageBrush( Style->RootToContentDir( RelativePath, TEXT(".png") ), __VA_ARGS__ ) - -const FVector2D Icon40x40(40.0f, 40.0f); - -TUniquePtr< FSlateStyleSet > FOpenPypeStyle::Create() -{ - TUniquePtr< FSlateStyleSet > Style = MakeUnique(GetStyleSetName()); - Style->SetContentRoot(FPaths::EnginePluginsDir() / TEXT("Marketplace/OpenPype/Resources")); - - return Style; -} - -void FOpenPypeStyle::SetIcon(const FString& StyleName, const FString& ResourcePath) -{ - FSlateStyleSet* Style = OpenPypeStyleInstance.Get(); - - FString Name(GetContextName().ToString()); - Name = Name + "." + StyleName; - Style->Set(*Name, new FSlateImageBrush(Style->RootToContentDir(ResourcePath, TEXT(".png")), Icon40x40)); - - - FSlateApplication::Get().GetRenderer()->ReloadTextureResources(); -} - -#undef IMAGE_BRUSH - -const ISlateStyle& FOpenPypeStyle::Get() -{ - check(OpenPypeStyleInstance); - return *OpenPypeStyleInstance; -} diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h deleted file mode 100644 index d1129aa070..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h +++ /dev/null @@ -1,60 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once - -#include "GameProjectUtils.h" -#include "Commandlets/OPActionResult.h" -#include "ProjectDescriptor.h" -#include "Commandlets/Commandlet.h" -#include "OPGenerateProjectCommandlet.generated.h" - -struct FProjectDescriptor; -struct FProjectInformation; - -/** -* @brief Structure which parses command line parameters and generates FProjectInformation -*/ -USTRUCT() -struct FOPGenerateProjectParams -{ - GENERATED_BODY() - -private: - FString CommandLineParams; - TArray Tokens; - TArray Switches; - -public: - FOPGenerateProjectParams(); - FOPGenerateProjectParams(const FString& CommandLineParams); - - FProjectInformation GenerateUEProjectInformation() const; - -private: - FString TryGetToken(const int32 Index) const; - FString GetProjectFileName() const; - - bool IsSwitchPresent(const FString& Switch) const; -}; - -UCLASS() -class OPENPYPE_API UOPGenerateProjectCommandlet : public UCommandlet -{ - GENERATED_BODY() - -private: - FProjectInformation ProjectInformation; - FProjectDescriptor ProjectDescriptor; - -public: - UOPGenerateProjectCommandlet(); - - virtual int32 Main(const FString& CommandLineParams) override; - -private: - FOPGenerateProjectParams ParseParameters(const FString& Params) const; - FOP_ActionResult TryCreateProject() const; - FOP_ActionResult TryLoadProjectDescriptor(); - void AttachPluginsToProjectDescriptor(); - FOP_ActionResult TrySave(); -}; - diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h deleted file mode 100644 index 322a23a3e8..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h +++ /dev/null @@ -1,83 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#pragma once - -#include "CoreMinimal.h" -#include "OPActionResult.generated.h" - -/** - * @brief This macro returns error code when is problem or does nothing when there is no problem. - * @param ActionResult FOP_ActionResult structure - */ -#define EVALUATE_OP_ACTION_RESULT(ActionResult) \ - if(ActionResult.IsProblem()) \ - return ActionResult.GetStatus(); - -/** -* @brief This enum values are humanly readable mapping of error codes. -* Here should be all error codes to be possible find what went wrong. -* TODO: In the future a web document should exists with the mapped error code & what problem occurred & how to repair it... -*/ -UENUM() -namespace EOP_ActionResult -{ - enum Type - { - Ok, - ProjectNotCreated, - ProjectNotLoaded, - ProjectNotSaved, - //....Here insert another values - - //Do not remove! 
- //Usable for looping through enum values - __Last UMETA(Hidden) - }; -} - - -/** - * @brief This struct holds action result enum and optionally reason of fail - */ -USTRUCT() -struct FOP_ActionResult -{ - GENERATED_BODY() - -public: - /** @brief Default constructor usable when there is no problem */ - FOP_ActionResult(); - - /** - * @brief This constructor initializes variables & attempts to log when is error - * @param InEnum Status - */ - FOP_ActionResult(const EOP_ActionResult::Type& InEnum); - - /** - * @brief This constructor initializes variables & attempts to log when is error - * @param InEnum Status - * @param InReason Reason of potential fail - */ - FOP_ActionResult(const EOP_ActionResult::Type& InEnum, const FText& InReason); - -private: - /** @brief Action status */ - EOP_ActionResult::Type Status; - - /** @brief Optional reason of fail */ - FText Reason; - -public: - /** - * @brief Checks if there is problematic state - * @return true when status is not equal to EOP_ActionResult::Ok - */ - bool IsProblem() const; - EOP_ActionResult::Type& GetStatus(); - FText& GetReason(); - -private: - void TryLog() const; -}; - diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h deleted file mode 100644 index 3740c5285a..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h +++ /dev/null @@ -1,4 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -DEFINE_LOG_CATEGORY_STATIC(LogCommandletOPGenerateProject, Log, All); \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OPConstants.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OPConstants.h deleted file mode 100644 index f4587f7a50..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OPConstants.h +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -namespace OPConstants -{ - const FString OP_PluginName = "OpenPype"; - const FString PythonScript_PluginName = "PythonScriptPlugin"; - const FString SequencerScripting_PluginName = "SequencerScripting"; - const FString MovieRenderPipeline_PluginName = "MovieRenderPipeline"; - const FString EditorScriptingUtils_PluginName = "EditorScriptingUtilities"; -} - - diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPype.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPype.h deleted file mode 100644 index 2454344128..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPype.h +++ /dev/null @@ -1,22 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
- -#pragma once - -#include "Engine.h" - - -class FOpenPypeModule : public IModuleInterface -{ -public: - virtual void StartupModule() override; - virtual void ShutdownModule() override; - -private: - void RegisterSettings(); - bool HandleSettingsSaved(); - - void AddMenuEntry(FMenuBuilder& MenuBuilder); - void AddToobarEntry(FToolBarBuilder& ToolbarBuilder); - void MenuPopup(); - void MenuDialog(); -}; diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeLib.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeLib.h deleted file mode 100644 index ef4d1027ea..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeLib.h +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "Engine.h" -#include "OpenPypeLib.generated.h" - - -UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypeLib : public UBlueprintFunctionLibrary -{ - - GENERATED_BODY() - -public: - UFUNCTION(BlueprintCallable, Category = Python) - static bool SetFolderColor(const FString& FolderPath, const FLinearColor& FolderColor,const bool& bForceAdd); - - UFUNCTION(BlueprintCallable, Category = Python) - static TArray GetAllProperties(UClass* cls); -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h deleted file mode 100644 index 8cfcd067c0..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h +++ /dev/null @@ -1,102 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "Engine.h" -#include "OpenPypePublishInstance.generated.h" - - -UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypePublishInstance : public UPrimaryDataAsset -{ - GENERATED_UCLASS_BODY() - -public: - /** - * Retrieves all the assets which are monitored by the Publish Instance (Monitors assets in the directory which is - * placed in) - * - * @return - Set of UObjects. Careful! They are returning raw pointers. Seems like an issue in UE5 - */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetInternalAssets() const - { - //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. - TSet ResultSet; - - for (const auto& Asset : AssetDataInternal) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - - /** - * Retrieves all the assets which have been added manually by the Publish Instance - * - * @return - TSet of assets (UObjects). Careful! They are returning raw pointers. Seems like an issue in UE5 - */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetExternalAssets() const - { - //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. - TSet ResultSet; - - for (const auto& Asset : AssetDataExternal) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - - /** - * Function for returning all the assets in the container combined. - * - * @return Returns all the internal and externally added assets into one set (TSet of UObjects). Careful! They are - * returning raw pointers. Seems like an issue in UE5 - * - * @attention If the bAddExternalAssets variable is false, external assets won't be included! 
- */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetAllAssets() const - { - const TSet>& IteratedSet = bAddExternalAssets - ? AssetDataInternal.Union(AssetDataExternal) - : AssetDataInternal; - - //Create a new TSet only with raw pointers. - TSet ResultSet; - - for (auto& Asset : IteratedSet) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - -private: - UPROPERTY(VisibleAnywhere, Category="Assets") - TSet> AssetDataInternal; - - /** - * This property allows exposing the array to include other assets from any other directory than what it's currently - * monitoring. NOTE: that these assets have to be added manually! They are not automatically registered or added! - */ - UPROPERTY(EditAnywhere, Category = "Assets") - bool bAddExternalAssets = false; - - UPROPERTY(EditAnywhere, meta=(EditCondition="bAddExternalAssets"), Category="Assets") - TSet> AssetDataExternal; - - - void OnAssetCreated(const FAssetData& InAssetData); - void OnAssetRemoved(const FAssetData& InAssetData); - void OnAssetUpdated(const FAssetData& InAssetData); - - bool IsUnderSameDir(const UObject* InAsset) const; - -#ifdef WITH_EDITOR - - void ColorOpenPypeDirs(); - - void SendNotification(const FString& Text) const; - virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override; - -#endif -}; diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h deleted file mode 100644 index 3fdb984411..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "CoreMinimal.h" -#include "Factories/Factory.h" -#include "OpenPypePublishInstanceFactory.generated.h" - -/** - * - */ -UCLASS() -class OPENPYPE_API UOpenPypePublishInstanceFactory : public UFactory -{ - GENERATED_BODY() - -public: - UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer); - virtual UObject* FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; - virtual bool ShouldShowInNewMenu() const override; -}; diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h deleted file mode 100644 index 827f76f56b..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once -#include "Engine.h" -#include "OpenPypePythonBridge.generated.h" - -UCLASS(Blueprintable) -class UOpenPypePythonBridge : public UObject -{ - GENERATED_BODY() - -public: - UFUNCTION(BlueprintCallable, Category = Python) - static UOpenPypePythonBridge* Get(); - - UFUNCTION(BlueprintImplementableEvent, Category = Python) - void RunInPython_Popup() const; - - UFUNCTION(BlueprintImplementableEvent, Category = Python) - void RunInPython_Dialog() const; - -}; diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h deleted file mode 100644 index 88defaa773..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h +++ /dev/null @@ -1,31 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#pragma once - -#include "CoreMinimal.h" -#include "OpenPypeSettings.generated.h" - -#define OPENPYPE_SETTINGS_FILEPATH IPluginManager::Get().FindPlugin("OpenPype")->GetBaseDir() / TEXT("Config") / TEXT("DefaultOpenPypeSettings.ini") - -UCLASS(Config=OpenPypeSettings, DefaultConfig) -class OPENPYPE_API UOpenPypeSettings : public UObject -{ - GENERATED_UCLASS_BODY() - - UFUNCTION(BlueprintCallable, BlueprintPure, Category = Settings) - FColor GetFolderFColor() const - { - return FolderColor; - } - - UFUNCTION(BlueprintCallable, BlueprintPure, Category = Settings) - FLinearColor GetFolderFLinearColor() const - { - return FLinearColor(FolderColor); - } - -protected: - - UPROPERTY(config, EditAnywhere, Category = Folders) - FColor FolderColor = FColor(25,45,223); -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h b/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h deleted file mode 100644 index 0e4af129d0..0000000000 --- a/openpype/hosts/unreal/integration/UE_4.7/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h +++ /dev/null @@ -1,23 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once -#include "CoreMinimal.h" - -class FSlateStyleSet; -class ISlateStyle; - - -class FOpenPypeStyle -{ -public: - static void Initialize(); - static void Shutdown(); - static const ISlateStyle& Get(); - static FName GetStyleSetName(); - static FName GetContextName(); - - static void SetIcon(const FString& StyleName, const FString& ResourcePath); - -private: - static TUniquePtr< FSlateStyleSet > Create(); - static TUniquePtr< FSlateStyleSet > OpenPypeStyleInstance; -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/.gitignore b/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/.gitignore deleted file mode 100644 index 80814ef0a6..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/.gitignore +++ /dev/null @@ -1,41 +0,0 @@ -# Prerequisites -*.d - -# Compiled Object files -*.slo -*.lo -*.o -*.obj - -# Precompiled Headers -*.gch -*.pch - -# Compiled Dynamic libraries -*.so -*.dylib -*.dll - -# Fortran module files -*.mod -*.smod - -# Compiled Static libraries -*.lai -*.la -*.a -*.lib - -# Executables -*.exe -*.out -*.app - -/Saved -/DerivedDataCache -/Intermediate -/Binaries -/Content -/Config -/.idea -/.vs \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/CommandletProject.uproject b/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/CommandletProject.uproject deleted file mode 100644 index c8dc1c673e..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/CommandletProject/CommandletProject.uproject +++ /dev/null @@ -1,20 +0,0 @@ -{ - "FileVersion": 3, - "EngineAssociation": "5.0", - "Category": "", - "Description": "", - "Plugins": [ - { - "Name": "ModelingToolsEditorMode", - "Enabled": true, - "TargetAllowList": [ - "Editor" - ] - }, - { - "Name": "OpenPype", - "Enabled": true, - "Type": "Editor" - } - ] -} \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/.gitignore b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/.gitignore deleted file mode 100644 index b32a6f55e5..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/.gitignore +++ /dev/null @@ -1,35 +0,0 @@ -# Prerequisites -*.d - -# Compiled Object files -*.slo -*.lo -*.o -*.obj - -# Precompiled Headers -*.gch -*.pch - -# Compiled Dynamic libraries -*.so -*.dylib -*.dll - -# Fortran module files -*.mod -*.smod - -# Compiled Static libraries -*.lai -*.la -*.a -*.lib - -# Executables -*.exe -*.out -*.app - -/Binaries -/Intermediate diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/DefaultOpenPypeSettings.ini b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/DefaultOpenPypeSettings.ini deleted file mode 100644 index 8a883cf1db..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/DefaultOpenPypeSettings.ini +++ /dev/null @@ -1,2 +0,0 @@ -[/Script/OpenPype.OpenPypeSettings] -FolderColor=(R=91,G=197,B=220,A=255) \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/FilterPlugin.ini b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/FilterPlugin.ini deleted file mode 100644 index ccebca2f32..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Config/FilterPlugin.ini +++ /dev/null @@ -1,8 +0,0 @@ -[FilterPlugin] -; This section lists additional files which will be packaged along with your plugin. Paths should be listed relative to the root plugin directory, and -; may include "...", "*", and "?" 
wildcards to match directories, files, and individual characters respectively. -; -; Examples: -; /README.txt -; /Extras/... -; /Binaries/ThirdParty/*.dll diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Content/Python/init_unreal.py b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Content/Python/init_unreal.py deleted file mode 100644 index b85f970699..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Content/Python/init_unreal.py +++ /dev/null @@ -1,30 +0,0 @@ -import unreal - -openpype_detected = True -try: - from openpype.pipeline import install_host - from openpype.hosts.unreal.api import UnrealHost - - openpype_host = UnrealHost() -except ImportError as exc: - openpype_host = None - openpype_detected = False - unreal.log_error("OpenPype: cannot load OpenPype [ {} ]".format(exc)) - -if openpype_detected: - install_host(openpype_host) - - -@unreal.uclass() -class OpenPypeIntegration(unreal.OpenPypePythonBridge): - @unreal.ufunction(override=True) - def RunInPython_Popup(self): - unreal.log_warning("OpenPype: showing tools popup") - if openpype_detected: - openpype_host.show_tools_popup() - - @unreal.ufunction(override=True) - def RunInPython_Dialog(self): - unreal.log_warning("OpenPype: showing tools dialog") - if openpype_detected: - openpype_host.show_tools_dialog() diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/OpenPype.uplugin b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/OpenPype.uplugin deleted file mode 100644 index ff08edc13e..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/OpenPype.uplugin +++ /dev/null @@ -1,24 +0,0 @@ -{ - "FileVersion": 3, - "Version": 1, - "VersionName": "1.0", - "FriendlyName": "OpenPype", - "Description": "OpenPype Integration", - "Category": "OpenPype.Integration", - "CreatedBy": "Ondrej Samohel", - "CreatedByURL": "https://openpype.io", - "DocsURL": "https://openpype.io/docs/artist_hosts_unreal", - "MarketplaceURL": "", - "SupportURL": "https://pype.club/", - "CanContainContent": true, - "EngineVersion": "5.0", - "IsExperimentalVersion": false, - "Installed": true, - "Modules": [ - { - "Name": "OpenPype", - "Type": "Editor", - "LoadingPhase": "Default" - } - ] -} \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/README.md b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/README.md deleted file mode 100644 index cf0aa622c2..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/README.md +++ /dev/null @@ -1,11 +0,0 @@ -# OpenPype Unreal Integration plugin - UE 5.x - -This is plugin for Unreal Editor, creating menu for [OpenPype](https://github.com/getavalon) tools to run. - -## How does this work - -Plugin is creating basic menu items in **Window/OpenPype** section of Unreal Editor main menu and a button -on the main toolbar with associated menu. Clicking on those menu items is calling callbacks that are -declared in C++ but needs to be implemented during Unreal Editor -startup in `Plugins/OpenPype/Content/Python/init_unreal.py` - this should be executed by Unreal Editor -automatically. 
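Note on the removed plugin sources: the UE 4.x and UE 5.x READMEs above describe menu and toolbar entries whose callbacks are declared in C++ but implemented on the Python side at editor startup. The sketch below condenses the deleted `Content/Python/init_unreal.py` shown earlier in this diff; the `ImportError` guard around the imports is omitted here, and all names (`UnrealHost`, `install_host`, `unreal.OpenPypePythonBridge`) come from that file.

```python
# Condensed sketch of the deleted Content/Python/init_unreal.py.
# The C++ plugin declares RunInPython_* as BlueprintImplementableEvent
# functions on UOpenPypePythonBridge; this script overrides them at
# editor startup so the menu/toolbar entries can reach the pipeline tools.
import unreal

from openpype.pipeline import install_host
from openpype.hosts.unreal.api import UnrealHost

openpype_host = UnrealHost()
install_host(openpype_host)


@unreal.uclass()
class OpenPypeIntegration(unreal.OpenPypePythonBridge):
    @unreal.ufunction(override=True)
    def RunInPython_Popup(self):
        # Invoked by the "Tools..." menu entry and the toolbar button.
        openpype_host.show_tools_popup()

    @unreal.ufunction(override=True)
    def RunInPython_Dialog(self):
        # Invoked by the "Tools dialog..." menu entry.
        openpype_host.show_tools_dialog()
```

The original file additionally wraps the imports in a try/except `ImportError` block and reports failures through `unreal.log_error`, so a missing OpenPype install degrades gracefully instead of breaking Unreal Editor startup.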
diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype128.png b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype128.png deleted file mode 100644 index abe8a807ef..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype128.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype40.png b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype40.png deleted file mode 100644 index f983e7a1f2..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype40.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype512.png b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype512.png deleted file mode 100644 index 97c4d4326b..0000000000 Binary files a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Resources/openpype512.png and /dev/null differ diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/OpenPype.Build.cs b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/OpenPype.Build.cs deleted file mode 100644 index e1087fd720..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/OpenPype.Build.cs +++ /dev/null @@ -1,65 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -using UnrealBuildTool; - -public class OpenPype : ModuleRules -{ - public OpenPype(ReadOnlyTargetRules Target) : base(Target) - { - DefaultBuildSettings = BuildSettingsVersion.V2; - bLegacyPublicIncludePaths = false; - ShadowVariableWarningLevel = WarningLevel.Error; - PCHUsage = ModuleRules.PCHUsageMode.UseExplicitOrSharedPCHs; - //IncludeOrderVersion = EngineIncludeOrderVersion.Unreal5_0; - - PublicIncludePaths.AddRange( - new string[] { - // ... add public include paths required here ... - } - ); - - - PrivateIncludePaths.AddRange( - new string[] { - // ... add other private include paths required here ... - } - ); - - - PublicDependencyModuleNames.AddRange( - new string[] - { - "Core", - "CoreUObject" - // ... add other public dependencies that you statically link with here ... - } - ); - - PrivateDependencyModuleNames.AddRange( - new string[] - { - "GameProjectGeneration", - "Projects", - "InputCore", - "EditorFramework", - "UnrealEd", - "ToolMenus", - "LevelEditor", - "CoreUObject", - "Engine", - "Slate", - "SlateCore", - "AssetTools" - // ... add private dependencies that you statically link with here ... - } - ); - - - DynamicallyLoadedModuleNames.AddRange( - new string[] - { - // ... add any modules that your module loads dynamically here ... - } - ); - } -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainer.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainer.cpp deleted file mode 100644 index 06dcd67808..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainer.cpp +++ /dev/null @@ -1,114 +0,0 @@ -// Fill out your copyright notice in the Description page of Project Settings. 
- -#include "AssetContainer.h" -#include "AssetRegistry/AssetRegistryModule.h" -#include "Misc/PackageName.h" -#include "Engine.h" -#include "Containers/UnrealString.h" - -UAssetContainer::UAssetContainer(const FObjectInitializer& ObjectInitializer) -: UAssetUserData(ObjectInitializer) -{ - FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked("AssetRegistry"); - FString path = UAssetContainer::GetPathName(); - UE_LOG(LogTemp, Warning, TEXT("UAssetContainer %s"), *path); - FARFilter Filter; - Filter.PackagePaths.Add(FName(*path)); - - AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UAssetContainer::OnAssetAdded); - AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UAssetContainer::OnAssetRemoved); - AssetRegistryModule.Get().OnAssetRenamed().AddUObject(this, &UAssetContainer::OnAssetRenamed); -} - -void UAssetContainer::OnAssetAdded(const FAssetData& AssetData) -{ - TArray split; - - // get directory of current container - FString selfFullPath = UAssetContainer::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); - - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.ObjectPath.ToString(); - UE_LOG(LogTemp, Log, TEXT("asset name %s"), *assetFName); - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - if (assetDir.StartsWith(*selfDir)) - { - // exclude self - if (assetFName != "AssetContainer") - { - assets.Add(assetPath); - assetsData.Add(AssetData); - UE_LOG(LogTemp, Log, TEXT("%s: asset added to %s"), *selfFullPath, *selfDir); - } - } -} - -void UAssetContainer::OnAssetRemoved(const FAssetData& AssetData) -{ - TArray split; - - // get directory of current container - FString selfFullPath = UAssetContainer::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); - - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.ObjectPath.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - - // take interest only in paths starting with path of current container - FString path = UAssetContainer::GetPathName(); - FString lpp = FPackageName::GetLongPackagePath(*path); - - if (assetDir.StartsWith(*selfDir)) - { - // exclude self - if (assetFName != "AssetContainer") - { - // UE_LOG(LogTemp, Warning, TEXT("%s: asset removed"), *lpp); - assets.Remove(assetPath); - assetsData.Remove(AssetData); - } - } -} - -void UAssetContainer::OnAssetRenamed(const FAssetData& AssetData, const FString& str) -{ - TArray split; - - // get directory of current container - FString selfFullPath = UAssetContainer::GetPathName(); - FString selfDir = FPackageName::GetLongPackagePath(*selfFullPath); - - // get asset path and class - FString assetPath = AssetData.GetFullName(); - FString assetFName = AssetData.ObjectPath.ToString(); - - // split path - assetPath.ParseIntoArray(split, TEXT(" "), true); - - FString assetDir = FPackageName::GetLongPackagePath(*split[1]); - if (assetDir.StartsWith(*selfDir)) - { - // exclude self - if (assetFName != "AssetContainer") - { - - assets.Remove(str); - assets.Add(assetPath); - assetsData.Remove(AssetData); - // UE_LOG(LogTemp, Warning, TEXT("%s: asset renamed %s"), *lpp, *str); - } - } -} diff --git 
a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainerFactory.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainerFactory.cpp deleted file mode 100644 index b943150bdd..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/AssetContainerFactory.cpp +++ /dev/null @@ -1,20 +0,0 @@ -#include "AssetContainerFactory.h" -#include "AssetContainer.h" - -UAssetContainerFactory::UAssetContainerFactory(const FObjectInitializer& ObjectInitializer) - : UFactory(ObjectInitializer) -{ - SupportedClass = UAssetContainer::StaticClass(); - bCreateNew = false; - bEditorImport = true; -} - -UObject* UAssetContainerFactory::FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) -{ - UAssetContainer* AssetContainer = NewObject(InParent, Class, Name, Flags); - return AssetContainer; -} - -bool UAssetContainerFactory::ShouldShowInNewMenu() const { - return false; -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp deleted file mode 100644 index abb1975027..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/Implementations/OPGenerateProjectCommandlet.cpp +++ /dev/null @@ -1,141 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "Commandlets/Implementations/OPGenerateProjectCommandlet.h" - -#include "Editor.h" -#include "GameProjectUtils.h" -#include "OPConstants.h" -#include "Commandlets/OPActionResult.h" -#include "ProjectDescriptor.h" - -int32 UOPGenerateProjectCommandlet::Main(const FString& CommandLineParams) -{ - //Parses command line parameters & creates structure FProjectInformation - const FOPGenerateProjectParams ParsedParams = FOPGenerateProjectParams(CommandLineParams); - ProjectInformation = ParsedParams.GenerateUEProjectInformation(); - - //Creates .uproject & other UE files - EVALUATE_OP_ACTION_RESULT(TryCreateProject()); - - //Loads created .uproject - EVALUATE_OP_ACTION_RESULT(TryLoadProjectDescriptor()); - - //Adds needed plugin to .uproject - AttachPluginsToProjectDescriptor(); - - //Saves .uproject - EVALUATE_OP_ACTION_RESULT(TrySave()); - - //When we are here, there should not be problems in generating Unreal Project for OpenPype - return 0; -} - - -FOPGenerateProjectParams::FOPGenerateProjectParams(): FOPGenerateProjectParams("") -{ -} - -FOPGenerateProjectParams::FOPGenerateProjectParams(const FString& CommandLineParams): CommandLineParams( - CommandLineParams) -{ - UCommandlet::ParseCommandLine(*CommandLineParams, Tokens, Switches); -} - -FProjectInformation FOPGenerateProjectParams::GenerateUEProjectInformation() const -{ - FProjectInformation ProjectInformation = FProjectInformation(); - ProjectInformation.ProjectFilename = GetProjectFileName(); - - ProjectInformation.bShouldGenerateCode = IsSwitchPresent("GenerateCode"); - - return ProjectInformation; -} - -FString FOPGenerateProjectParams::TryGetToken(const int32 Index) const -{ - return Tokens.IsValidIndex(Index) ? 
Tokens[Index] : ""; -} - -FString FOPGenerateProjectParams::GetProjectFileName() const -{ - return TryGetToken(0); -} - -bool FOPGenerateProjectParams::IsSwitchPresent(const FString& Switch) const -{ - return INDEX_NONE != Switches.IndexOfByPredicate([&Switch](const FString& Item) -> bool - { - return Item.Equals(Switch); - } - ); -} - - -UOPGenerateProjectCommandlet::UOPGenerateProjectCommandlet() -{ - LogToConsole = true; -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TryCreateProject() const -{ - FText FailReason; - FText FailLog; - TArray OutCreatedFiles; - - if (!GameProjectUtils::CreateProject(ProjectInformation, FailReason, FailLog, &OutCreatedFiles)) - return FOP_ActionResult(EOP_ActionResult::ProjectNotCreated, FailReason); - return FOP_ActionResult(); -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TryLoadProjectDescriptor() -{ - FText FailReason; - const bool bLoaded = ProjectDescriptor.Load(ProjectInformation.ProjectFilename, FailReason); - - return FOP_ActionResult(bLoaded ? EOP_ActionResult::Ok : EOP_ActionResult::ProjectNotLoaded, FailReason); -} - -void UOPGenerateProjectCommandlet::AttachPluginsToProjectDescriptor() -{ - FPluginReferenceDescriptor OPPluginDescriptor; - OPPluginDescriptor.bEnabled = true; - OPPluginDescriptor.Name = OPConstants::OP_PluginName; - ProjectDescriptor.Plugins.Add(OPPluginDescriptor); - - FPluginReferenceDescriptor PythonPluginDescriptor; - PythonPluginDescriptor.bEnabled = true; - PythonPluginDescriptor.Name = OPConstants::PythonScript_PluginName; - ProjectDescriptor.Plugins.Add(PythonPluginDescriptor); - - FPluginReferenceDescriptor SequencerScriptingPluginDescriptor; - SequencerScriptingPluginDescriptor.bEnabled = true; - SequencerScriptingPluginDescriptor.Name = OPConstants::SequencerScripting_PluginName; - ProjectDescriptor.Plugins.Add(SequencerScriptingPluginDescriptor); - - FPluginReferenceDescriptor MovieRenderPipelinePluginDescriptor; - MovieRenderPipelinePluginDescriptor.bEnabled = true; - MovieRenderPipelinePluginDescriptor.Name = OPConstants::MovieRenderPipeline_PluginName; - ProjectDescriptor.Plugins.Add(MovieRenderPipelinePluginDescriptor); - - FPluginReferenceDescriptor EditorScriptingPluginDescriptor; - EditorScriptingPluginDescriptor.bEnabled = true; - EditorScriptingPluginDescriptor.Name = OPConstants::EditorScriptingUtils_PluginName; - ProjectDescriptor.Plugins.Add(EditorScriptingPluginDescriptor); -} - -FOP_ActionResult UOPGenerateProjectCommandlet::TrySave() -{ - FText FailReason; - const bool bSaved = ProjectDescriptor.Save(ProjectInformation.ProjectFilename, FailReason); - - return FOP_ActionResult(bSaved ? EOP_ActionResult::Ok : EOP_ActionResult::ProjectNotSaved, FailReason); -} - -FOPGenerateProjectParams UOPGenerateProjectCommandlet::ParseParameters(const FString& Params) const -{ - FOPGenerateProjectParams ParamsResult; - - TArray Tokens, Switches; - ParseCommandLine(*Params, Tokens, Switches); - - return ParamsResult; -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp deleted file mode 100644 index 23ae2dd329..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Commandlets/OPActionResult.cpp +++ /dev/null @@ -1,40 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
- -#include "Commandlets/OPActionResult.h" -#include "Logging/OP_Log.h" - -EOP_ActionResult::Type& FOP_ActionResult::GetStatus() -{ - return Status; -} - -FText& FOP_ActionResult::GetReason() -{ - return Reason; -} - -FOP_ActionResult::FOP_ActionResult():Status(EOP_ActionResult::Type::Ok) -{ - -} - -FOP_ActionResult::FOP_ActionResult(const EOP_ActionResult::Type& InEnum):Status(InEnum) -{ - TryLog(); -} - -FOP_ActionResult::FOP_ActionResult(const EOP_ActionResult::Type& InEnum, const FText& InReason):Status(InEnum), Reason(InReason) -{ - TryLog(); -}; - -bool FOP_ActionResult::IsProblem() const -{ - return Status != EOP_ActionResult::Ok; -} - -void FOP_ActionResult::TryLog() const -{ - if(IsProblem()) - UE_LOG(LogCommandletOPGenerateProject, Error, TEXT("%s"), *Reason.ToString()); -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp deleted file mode 100644 index 198fb9df0c..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/Logging/OP_Log.cpp +++ /dev/null @@ -1,3 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#include "Logging/OP_Log.h" diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPype.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPype.cpp deleted file mode 100644 index 65da29da35..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPype.cpp +++ /dev/null @@ -1,140 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPype.h" - -#include "ISettingsContainer.h" -#include "ISettingsModule.h" -#include "ISettingsSection.h" -#include "OpenPypeStyle.h" -#include "OpenPypeCommands.h" -#include "OpenPypePythonBridge.h" -#include "OpenPypeSettings.h" -#include "Misc/MessageDialog.h" -#include "ToolMenus.h" - - -static const FName OpenPypeTabName("OpenPype"); - -#define LOCTEXT_NAMESPACE "FOpenPypeModule" - -// This function is triggered when the plugin is staring up -void FOpenPypeModule::StartupModule() -{ - FOpenPypeStyle::Initialize(); - FOpenPypeStyle::ReloadTextures(); - FOpenPypeCommands::Register(); - - PluginCommands = MakeShareable(new FUICommandList); - - PluginCommands->MapAction( - FOpenPypeCommands::Get().OpenPypeTools, - FExecuteAction::CreateRaw(this, &FOpenPypeModule::MenuPopup), - FCanExecuteAction()); - PluginCommands->MapAction( - FOpenPypeCommands::Get().OpenPypeToolsDialog, - FExecuteAction::CreateRaw(this, &FOpenPypeModule::MenuDialog), - FCanExecuteAction()); - - UToolMenus::RegisterStartupCallback( - FSimpleMulticastDelegate::FDelegate::CreateRaw(this, &FOpenPypeModule::RegisterMenus)); - - RegisterSettings(); -} - -void FOpenPypeModule::ShutdownModule() -{ - UToolMenus::UnRegisterStartupCallback(this); - - UToolMenus::UnregisterOwner(this); - - FOpenPypeStyle::Shutdown(); - - FOpenPypeCommands::Unregister(); -} - - -void FOpenPypeModule::RegisterSettings() -{ - ISettingsModule& SettingsModule = FModuleManager::LoadModuleChecked("Settings"); - - // Create the new category - // TODO: After the movement of the plugin from the game to editor, it might be necessary to move this! 
- ISettingsContainerPtr SettingsContainer = SettingsModule.GetContainer("Project"); - - UOpenPypeSettings* Settings = GetMutableDefault(); - - // Register the settings - ISettingsSectionPtr SettingsSection = SettingsModule.RegisterSettings("Project", "OpenPype", "General", - LOCTEXT("RuntimeGeneralSettingsName", - "General"), - LOCTEXT("RuntimeGeneralSettingsDescription", - "Base configuration for Open Pype Module"), - Settings - ); - - // Register the save handler to your settings, you might want to use it to - // validate those or just act to settings changes. - if (SettingsSection.IsValid()) - { - SettingsSection->OnModified().BindRaw(this, &FOpenPypeModule::HandleSettingsSaved); - } -} - -bool FOpenPypeModule::HandleSettingsSaved() -{ - UOpenPypeSettings* Settings = GetMutableDefault(); - bool ResaveSettings = false; - - // You can put any validation code in here and resave the settings in case an invalid - // value has been entered - - if (ResaveSettings) - { - Settings->SaveConfig(); - } - - return true; -} - -void FOpenPypeModule::RegisterMenus() -{ - // Owner will be used for cleanup in call to UToolMenus::UnregisterOwner - FToolMenuOwnerScoped OwnerScoped(this); - - { - UToolMenu* Menu = UToolMenus::Get()->ExtendMenu("LevelEditor.MainMenu.Tools"); - { - // FToolMenuSection& Section = Menu->FindOrAddSection("OpenPype"); - FToolMenuSection& Section = Menu->AddSection( - "OpenPype", - TAttribute(FText::FromString("OpenPype")), - FToolMenuInsert("Programming", EToolMenuInsertType::Before) - ); - Section.AddMenuEntryWithCommandList(FOpenPypeCommands::Get().OpenPypeTools, PluginCommands); - Section.AddMenuEntryWithCommandList(FOpenPypeCommands::Get().OpenPypeToolsDialog, PluginCommands); - } - UToolMenu* ToolbarMenu = UToolMenus::Get()->ExtendMenu("LevelEditor.LevelEditorToolBar.PlayToolBar"); - { - FToolMenuSection& Section = ToolbarMenu->FindOrAddSection("PluginTools"); - { - FToolMenuEntry& Entry = Section.AddEntry( - FToolMenuEntry::InitToolBarButton(FOpenPypeCommands::Get().OpenPypeTools)); - Entry.SetCommandList(PluginCommands); - } - } - } -} - - -void FOpenPypeModule::MenuPopup() -{ - UOpenPypePythonBridge* bridge = UOpenPypePythonBridge::Get(); - bridge->RunInPython_Popup(); -} - -void FOpenPypeModule::MenuDialog() -{ - UOpenPypePythonBridge* bridge = UOpenPypePythonBridge::Get(); - bridge->RunInPython_Dialog(); -} - -IMPLEMENT_MODULE(FOpenPypeModule, OpenPype) diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeCommands.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeCommands.cpp deleted file mode 100644 index 881814e278..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeCommands.cpp +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
- -#include "OpenPypeCommands.h" - -#define LOCTEXT_NAMESPACE "FOpenPypeModule" - -void FOpenPypeCommands::RegisterCommands() -{ - UI_COMMAND(OpenPypeTools, "OpenPype Tools", "Pipeline tools", EUserInterfaceActionType::Button, FInputChord()); - UI_COMMAND(OpenPypeToolsDialog, "OpenPype Tools Dialog", "Pipeline tools dialog", EUserInterfaceActionType::Button, FInputChord()); -} - -#undef LOCTEXT_NAMESPACE diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp deleted file mode 100644 index 34faba1f49..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeLib.cpp +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypeLib.h" - -#include "AssetViewUtils.h" -#include "Misc/Paths.h" -#include "Misc/ConfigCacheIni.h" -#include "UObject/UnrealType.h" - -/** - * Sets color on folder icon on given path - * @param InPath - path to folder - * @param InFolderColor - color of the folder - * @warning This color will appear only after Editor restart. Is there a better way? - */ - -bool UOpenPypeLib::SetFolderColor(const FString& FolderPath, const FLinearColor& FolderColor, const bool& bForceAdd) -{ - if (AssetViewUtils::DoesFolderExist(FolderPath)) - { - const TSharedPtr LinearColor = MakeShared(FolderColor); - - AssetViewUtils::SaveColor(FolderPath, LinearColor, true); - UE_LOG(LogAssetData, Display, TEXT("A color {%s} has been set to folder \"%s\""), *LinearColor->ToString(), - *FolderPath) - return true; - } - - UE_LOG(LogAssetData, Display, TEXT("Setting a color {%s} to folder \"%s\" has failed! Directory doesn't exist!"), - *FolderColor.ToString(), *FolderPath) - return false; -} - -/** - * Returns all properties on given object - * @param cls - class - * @return TArray of properties - */ -TArray UOpenPypeLib::GetAllProperties(UClass* cls) -{ - TArray Ret; - if (cls != nullptr) - { - for (TFieldIterator It(cls); It; ++It) - { - FProperty* Property = *It; - if (Property->HasAnyPropertyFlags(EPropertyFlags::CPF_Edit)) - { - Ret.Add(Property->GetName()); - } - } - } - return Ret; -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp deleted file mode 100644 index 05d5c8a87d..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstance.cpp +++ /dev/null @@ -1,202 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once - -#include "OpenPypePublishInstance.h" -#include "AssetRegistry/AssetRegistryModule.h" -#include "AssetToolsModule.h" -#include "Framework/Notifications/NotificationManager.h" -#include "OpenPypeLib.h" -#include "OpenPypeSettings.h" -#include "Widgets/Notifications/SNotificationList.h" - - -//Moves all the invalid pointers to the end to prepare them for the shrinking -#define REMOVE_INVALID_ENTRIES(VAR) VAR.CompactStable(); \ - VAR.Shrink(); - -UOpenPypePublishInstance::UOpenPypePublishInstance(const FObjectInitializer& ObjectInitializer) - : UPrimaryDataAsset(ObjectInitializer) -{ - const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked< - FAssetRegistryModule>("AssetRegistry"); - - const FPropertyEditorModule& PropertyEditorModule = FModuleManager::LoadModuleChecked( - "PropertyEditor"); - - FString Left, Right; - GetPathName().Split("/" + GetName(), &Left, &Right); - - FARFilter Filter; - Filter.PackagePaths.Emplace(FName(Left)); - - TArray FoundAssets; - AssetRegistryModule.GetRegistry().GetAssets(Filter, FoundAssets); - - for (const FAssetData& AssetData : FoundAssets) - OnAssetCreated(AssetData); - - REMOVE_INVALID_ENTRIES(AssetDataInternal) - REMOVE_INVALID_ENTRIES(AssetDataExternal) - - AssetRegistryModule.Get().OnAssetAdded().AddUObject(this, &UOpenPypePublishInstance::OnAssetCreated); - AssetRegistryModule.Get().OnAssetRemoved().AddUObject(this, &UOpenPypePublishInstance::OnAssetRemoved); - AssetRegistryModule.Get().OnAssetUpdated().AddUObject(this, &UOpenPypePublishInstance::OnAssetUpdated); - -#ifdef WITH_EDITOR - ColorOpenPypeDirs(); -#endif -} - -void UOpenPypePublishInstance::OnAssetCreated(const FAssetData& InAssetData) -{ - TArray split; - - UObject* Asset = InAssetData.GetAsset(); - - if (!IsValid(Asset)) - { - UE_LOG(LogAssetData, Warning, TEXT("Asset \"%s\" is not valid! 
Skipping the addition."), - *InAssetData.ObjectPath.ToString()); - return; - } - - const bool result = IsUnderSameDir(Asset) && Cast(Asset) == nullptr; - - if (result) - { - if (AssetDataInternal.Emplace(Asset).IsValidId()) - { - UE_LOG(LogTemp, Log, TEXT("Added an Asset to PublishInstance - Publish Instance: %s, Asset %s"), - *this->GetName(), *Asset->GetName()); - } - } -} - -void UOpenPypePublishInstance::OnAssetRemoved(const FAssetData& InAssetData) -{ - if (Cast(InAssetData.GetAsset()) == nullptr) - { - if (AssetDataInternal.Contains(nullptr)) - { - AssetDataInternal.Remove(nullptr); - REMOVE_INVALID_ENTRIES(AssetDataInternal) - } - else - { - AssetDataExternal.Remove(nullptr); - REMOVE_INVALID_ENTRIES(AssetDataExternal) - } - } -} - -void UOpenPypePublishInstance::OnAssetUpdated(const FAssetData& InAssetData) -{ - REMOVE_INVALID_ENTRIES(AssetDataInternal); - REMOVE_INVALID_ENTRIES(AssetDataExternal); -} - -bool UOpenPypePublishInstance::IsUnderSameDir(const UObject* InAsset) const -{ - FString ThisLeft, ThisRight; - this->GetPathName().Split(this->GetName(), &ThisLeft, &ThisRight); - - return InAsset->GetPathName().StartsWith(ThisLeft); -} - -#ifdef WITH_EDITOR - -void UOpenPypePublishInstance::ColorOpenPypeDirs() -{ - FString PathName = this->GetPathName(); - - //Check whether the path contains the defined OpenPype folder - if (!PathName.Contains(TEXT("OpenPype"))) return; - - //Get the base path for open pype - FString PathLeft, PathRight; - PathName.Split(FString("OpenPype"), &PathLeft, &PathRight); - - if (PathLeft.IsEmpty() || PathRight.IsEmpty()) - { - UE_LOG(LogAssetData, Error, TEXT("Failed to retrieve the base OpenPype directory!")) - return; - } - - PathName.RemoveFromEnd(PathRight, ESearchCase::CaseSensitive); - - //Get the current settings - const UOpenPypeSettings* Settings = GetMutableDefault(); - - //Color the base folder - UOpenPypeLib::SetFolderColor(PathName, Settings->GetFolderFColor(), false); - - //Get Sub paths, iterate through them and color them according to the folder color in UOpenPypeSettings - const FAssetRegistryModule& AssetRegistryModule = FModuleManager::LoadModuleChecked( - "AssetRegistry"); - - TArray PathList; - - AssetRegistryModule.Get().GetSubPaths(PathName, PathList, true); - - if (PathList.Num() > 0) - { - for (const FString& Path : PathList) - { - UOpenPypeLib::SetFolderColor(Path, Settings->GetFolderFColor(), false); - } - } -} - -void UOpenPypePublishInstance::SendNotification(const FString& Text) const -{ - FNotificationInfo Info{FText::FromString(Text)}; - - Info.bFireAndForget = true; - Info.bUseLargeFont = false; - Info.bUseThrobber = false; - Info.bUseSuccessFailIcons = false; - Info.ExpireDuration = 4.f; - Info.FadeOutDuration = 2.f; - - FSlateNotificationManager::Get().AddNotification(Info); - - UE_LOG(LogAssetData, Warning, - TEXT( - "Removed duplicated asset from the AssetsDataExternal in Container \"%s\", Asset is already included in the AssetDataInternal!" 
- ), *GetName() - ) -} - - -void UOpenPypePublishInstance::PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) -{ - Super::PostEditChangeProperty(PropertyChangedEvent); - - if (PropertyChangedEvent.ChangeType == EPropertyChangeType::ValueSet && - PropertyChangedEvent.Property->GetFName() == GET_MEMBER_NAME_CHECKED( - UOpenPypePublishInstance, AssetDataExternal)) - { - // Check for duplicated assets - for (const auto& Asset : AssetDataInternal) - { - if (AssetDataExternal.Contains(Asset)) - { - AssetDataExternal.Remove(Asset); - return SendNotification( - "You are not allowed to add assets into AssetDataExternal which are already included in AssetDataInternal!"); - } - } - - // Check if no UOpenPypePublishInstance type assets are included - for (const auto& Asset : AssetDataExternal) - { - if (Cast(Asset.Get()) != nullptr) - { - AssetDataExternal.Remove(Asset); - return SendNotification("You are not allowed to add publish instances!"); - } - } - } -} - -#endif diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp deleted file mode 100644 index a32ebe32cb..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePublishInstanceFactory.cpp +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypePublishInstanceFactory.h" -#include "OpenPypePublishInstance.h" - -UOpenPypePublishInstanceFactory::UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer) - : UFactory(ObjectInitializer) -{ - SupportedClass = UOpenPypePublishInstance::StaticClass(); - bCreateNew = false; - bEditorImport = true; -} - -UObject* UOpenPypePublishInstanceFactory::FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) -{ - check(InClass->IsChildOf(UOpenPypePublishInstance::StaticClass())); - return NewObject(InParent, InClass, InName, Flags); -} - -bool UOpenPypePublishInstanceFactory::ShouldShowInNewMenu() const { - return false; -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp deleted file mode 100644 index 6ebfc528f0..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypePythonBridge.cpp +++ /dev/null @@ -1,14 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#include "OpenPypePythonBridge.h" - -UOpenPypePythonBridge* UOpenPypePythonBridge::Get() -{ - TArray OpenPypePythonBridgeClasses; - GetDerivedClasses(UOpenPypePythonBridge::StaticClass(), OpenPypePythonBridgeClasses); - int32 NumClasses = OpenPypePythonBridgeClasses.Num(); - if (NumClasses > 0) - { - return Cast(OpenPypePythonBridgeClasses[NumClasses - 1]->GetDefaultObject()); - } - return nullptr; -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp deleted file mode 100644 index 6562a81138..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeSettings.cpp +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
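For context on the deleted `UOpenPypePythonBridge::Get()` just above: it returns the class default object of the most recently registered subclass, which is how the editor menu entries hand control over to Python. A minimal sketch of the Python side of that handshake, assuming the plugin's reflection exposes the bridge as `unreal.OpenPypePythonBridge` with snake_case overrides (`run_in_python_popup` / `run_in_python_dialog`); the class name and the bodies are illustrative, not the integration's actual entry points.

```python
import unreal


@unreal.uclass()
class OpenPypeIntegration(unreal.OpenPypePythonBridge):
    """Python subclass that UOpenPypePythonBridge::Get() will pick up."""

    @unreal.ufunction(override=True)
    def run_in_python_popup(self):
        # Illustrative only: whatever launches the pipeline popup menu goes here.
        unreal.log("OpenPype: popup menu requested from C++")

    @unreal.ufunction(override=True)
    def run_in_python_dialog(self):
        # Illustrative only: whatever opens the pipeline dialog goes here.
        unreal.log("OpenPype: dialog requested from C++")
```

Because `Get()` takes the last entry in the derived-class list, only one such subclass should normally be registered per editor session.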
- -#include "OpenPypeSettings.h" - -#include "Interfaces/IPluginManager.h" -#include "UObject/UObjectGlobals.h" - -/** - * Mainly is used for initializing default values if the DefaultOpenPypeSettings.ini file does not exist in the saved config - */ -UOpenPypeSettings::UOpenPypeSettings(const FObjectInitializer& ObjectInitializer) -{ - - const FString ConfigFilePath = OPENPYPE_SETTINGS_FILEPATH; - - // This has to be probably in the future set using the UE Reflection system - FColor Color; - GConfig->GetColor(TEXT("/Script/OpenPype.OpenPypeSettings"), TEXT("FolderColor"), Color, ConfigFilePath); - - FolderColor = Color; -} \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp deleted file mode 100644 index a4d75e048e..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Private/OpenPypeStyle.cpp +++ /dev/null @@ -1,63 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#include "OpenPypeStyle.h" -#include "OpenPype.h" -#include "Framework/Application/SlateApplication.h" -#include "Styling/SlateStyleRegistry.h" -#include "Slate/SlateGameResources.h" -#include "Interfaces/IPluginManager.h" -#include "Styling/SlateStyleMacros.h" - -#define RootToContentDir Style->RootToContentDir - -TSharedPtr FOpenPypeStyle::OpenPypeStyleInstance = nullptr; - -void FOpenPypeStyle::Initialize() -{ - if (!OpenPypeStyleInstance.IsValid()) - { - OpenPypeStyleInstance = Create(); - FSlateStyleRegistry::RegisterSlateStyle(*OpenPypeStyleInstance); - } -} - -void FOpenPypeStyle::Shutdown() -{ - FSlateStyleRegistry::UnRegisterSlateStyle(*OpenPypeStyleInstance); - ensure(OpenPypeStyleInstance.IsUnique()); - OpenPypeStyleInstance.Reset(); -} - -FName FOpenPypeStyle::GetStyleSetName() -{ - static FName StyleSetName(TEXT("OpenPypeStyle")); - return StyleSetName; -} - -const FVector2D Icon16x16(16.0f, 16.0f); -const FVector2D Icon20x20(20.0f, 20.0f); -const FVector2D Icon40x40(40.0f, 40.0f); - -TSharedRef< FSlateStyleSet > FOpenPypeStyle::Create() -{ - TSharedRef< FSlateStyleSet > Style = MakeShareable(new FSlateStyleSet("OpenPypeStyle")); - Style->SetContentRoot(IPluginManager::Get().FindPlugin("OpenPype")->GetBaseDir() / TEXT("Resources")); - - Style->Set("OpenPype.OpenPypeTools", new IMAGE_BRUSH(TEXT("openpype40"), Icon40x40)); - Style->Set("OpenPype.OpenPypeToolsDialog", new IMAGE_BRUSH(TEXT("openpype40"), Icon40x40)); - - return Style; -} - -void FOpenPypeStyle::ReloadTextures() -{ - if (FSlateApplication::IsInitialized()) - { - FSlateApplication::Get().GetRenderer()->ReloadTextureResources(); - } -} - -const ISlateStyle& FOpenPypeStyle::Get() -{ - return *OpenPypeStyleInstance; -} diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainer.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainer.h deleted file mode 100644 index 9157569c08..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainer.h +++ /dev/null @@ -1,37 +0,0 @@ -// Fill out your copyright notice in the Description page of Project Settings. 
- -#pragma once - -#include "CoreMinimal.h" -#include "UObject/NoExportTypes.h" -#include "Engine/AssetUserData.h" -#include "AssetRegistry/AssetData.h" -#include "AssetContainer.generated.h" - -/** - * - */ -UCLASS(Blueprintable) -class OPENPYPE_API UAssetContainer : public UAssetUserData -{ - GENERATED_BODY() - -public: - - UAssetContainer(const FObjectInitializer& ObjectInitalizer); - // ~UAssetContainer(); - - UPROPERTY(EditAnywhere, BlueprintReadOnly, Category="Assets") - TArray assets; - - // There seems to be no reflection option to expose array of FAssetData - /* - UPROPERTY(Transient, BlueprintReadOnly, Category = "Python", meta=(DisplayName="Assets Data")) - TArray assetsData; - */ -private: - TArray assetsData; - void OnAssetAdded(const FAssetData& AssetData); - void OnAssetRemoved(const FAssetData& AssetData); - void OnAssetRenamed(const FAssetData& AssetData, const FString& str); -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainerFactory.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainerFactory.h deleted file mode 100644 index 9095f8a3d7..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/AssetContainerFactory.h +++ /dev/null @@ -1,21 +0,0 @@ -// Fill out your copyright notice in the Description page of Project Settings. - -#pragma once - -#include "CoreMinimal.h" -#include "Factories/Factory.h" -#include "AssetContainerFactory.generated.h" - -/** - * - */ -UCLASS() -class OPENPYPE_API UAssetContainerFactory : public UFactory -{ - GENERATED_BODY() - -public: - UAssetContainerFactory(const FObjectInitializer& ObjectInitializer); - virtual UObject* FactoryCreateNew(UClass* Class, UObject* InParent, FName Name, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; - virtual bool ShouldShowInNewMenu() const override; -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h deleted file mode 100644 index 6a6c6406e7..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/Implementations/OPGenerateProjectCommandlet.h +++ /dev/null @@ -1,61 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once - - -#include "GameProjectUtils.h" -#include "Commandlets/OPActionResult.h" -#include "ProjectDescriptor.h" -#include "Commandlets/Commandlet.h" -#include "OPGenerateProjectCommandlet.generated.h" - -struct FProjectDescriptor; -struct FProjectInformation; - -/** -* @brief Structure which parses command line parameters and generates FProjectInformation -*/ -USTRUCT() -struct FOPGenerateProjectParams -{ - GENERATED_BODY() - -private: - FString CommandLineParams; - TArray Tokens; - TArray Switches; - -public: - FOPGenerateProjectParams(); - FOPGenerateProjectParams(const FString& CommandLineParams); - - FProjectInformation GenerateUEProjectInformation() const; - -private: - FString TryGetToken(const int32 Index) const; - FString GetProjectFileName() const; - - bool IsSwitchPresent(const FString& Switch) const; -}; - -UCLASS() -class OPENPYPE_API UOPGenerateProjectCommandlet : public UCommandlet -{ - GENERATED_BODY() - -private: - FProjectInformation ProjectInformation; - FProjectDescriptor ProjectDescriptor; - -public: - UOPGenerateProjectCommandlet(); - - virtual int32 Main(const FString& CommandLineParams) override; - -private: - FOPGenerateProjectParams ParseParameters(const FString& Params) const; - FOP_ActionResult TryCreateProject() const; - FOP_ActionResult TryLoadProjectDescriptor(); - void AttachPluginsToProjectDescriptor(); - FOP_ActionResult TrySave(); -}; - diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h deleted file mode 100644 index 322a23a3e8..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Commandlets/OPActionResult.h +++ /dev/null @@ -1,83 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#pragma once - -#include "CoreMinimal.h" -#include "OPActionResult.generated.h" - -/** - * @brief This macro returns error code when is problem or does nothing when there is no problem. - * @param ActionResult FOP_ActionResult structure - */ -#define EVALUATE_OP_ACTION_RESULT(ActionResult) \ - if(ActionResult.IsProblem()) \ - return ActionResult.GetStatus(); - -/** -* @brief This enum values are humanly readable mapping of error codes. -* Here should be all error codes to be possible find what went wrong. -* TODO: In the future a web document should exists with the mapped error code & what problem occurred & how to repair it... -*/ -UENUM() -namespace EOP_ActionResult -{ - enum Type - { - Ok, - ProjectNotCreated, - ProjectNotLoaded, - ProjectNotSaved, - //....Here insert another values - - //Do not remove! 
- //Usable for looping through enum values - __Last UMETA(Hidden) - }; -} - - -/** - * @brief This struct holds action result enum and optionally reason of fail - */ -USTRUCT() -struct FOP_ActionResult -{ - GENERATED_BODY() - -public: - /** @brief Default constructor usable when there is no problem */ - FOP_ActionResult(); - - /** - * @brief This constructor initializes variables & attempts to log when is error - * @param InEnum Status - */ - FOP_ActionResult(const EOP_ActionResult::Type& InEnum); - - /** - * @brief This constructor initializes variables & attempts to log when is error - * @param InEnum Status - * @param InReason Reason of potential fail - */ - FOP_ActionResult(const EOP_ActionResult::Type& InEnum, const FText& InReason); - -private: - /** @brief Action status */ - EOP_ActionResult::Type Status; - - /** @brief Optional reason of fail */ - FText Reason; - -public: - /** - * @brief Checks if there is problematic state - * @return true when status is not equal to EOP_ActionResult::Ok - */ - bool IsProblem() const; - EOP_ActionResult::Type& GetStatus(); - FText& GetReason(); - -private: - void TryLog() const; -}; - diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h deleted file mode 100644 index 3740c5285a..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/Logging/OP_Log.h +++ /dev/null @@ -1,4 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -DEFINE_LOG_CATEGORY_STATIC(LogCommandletOPGenerateProject, Log, All); \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OPConstants.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OPConstants.h deleted file mode 100644 index f4587f7a50..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OPConstants.h +++ /dev/null @@ -1,13 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -namespace OPConstants -{ - const FString OP_PluginName = "OpenPype"; - const FString PythonScript_PluginName = "PythonScriptPlugin"; - const FString SequencerScripting_PluginName = "SequencerScripting"; - const FString MovieRenderPipeline_PluginName = "MovieRenderPipeline"; - const FString EditorScriptingUtils_PluginName = "EditorScriptingUtilities"; -} - - diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPype.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPype.h deleted file mode 100644 index b89760099b..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPype.h +++ /dev/null @@ -1,25 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
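Since `UOPGenerateProjectCommandlet::Main` returns `GetStatus()` on failure and `0` on success, the `EOP_ActionResult` enum above effectively doubles as the commandlet's return code. A small sketch of how a caller could translate that code back into a readable reason, assuming the default sequential enum numbering (Ok = 0, ProjectNotCreated = 1, ...) and that the process exit code passes the value through unchanged.

```python
# Mirrors EOP_ActionResult; the values assume the default sequential numbering
# of the C++ enum above and would need updating if explicit values were added.
OP_ACTION_RESULTS = {
    0: "Ok",
    1: "ProjectNotCreated",
    2: "ProjectNotLoaded",
    3: "ProjectNotSaved",
}


def describe_commandlet_exit(return_code: int) -> str:
    """Translate the project-generation commandlet return code into a label."""
    return OP_ACTION_RESULTS.get(return_code, f"Unknown error code {return_code}")


if __name__ == "__main__":
    print(describe_commandlet_exit(1))  # -> "ProjectNotCreated"
```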
- -#pragma once - -#include "CoreMinimal.h" -#include "Modules/ModuleManager.h" - - -class FOpenPypeModule : public IModuleInterface -{ -public: - virtual void StartupModule() override; - virtual void ShutdownModule() override; - -private: - void RegisterMenus(); - void RegisterSettings(); - bool HandleSettingsSaved(); - - void MenuPopup(); - void MenuDialog(); - -private: - TSharedPtr PluginCommands; -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeCommands.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeCommands.h deleted file mode 100644 index 99b0be26f0..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeCommands.h +++ /dev/null @@ -1,24 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#pragma once - -#include "CoreMinimal.h" -#include "Framework/Commands/Commands.h" -#include "OpenPypeStyle.h" - -class FOpenPypeCommands : public TCommands -{ -public: - - FOpenPypeCommands() - : TCommands(TEXT("OpenPype"), NSLOCTEXT("Contexts", "OpenPype", "OpenPype Tools"), NAME_None, FOpenPypeStyle::GetStyleSetName()) - { - } - - // TCommands<> interface - virtual void RegisterCommands() override; - -public: - TSharedPtr< FUICommandInfo > OpenPypeTools; - TSharedPtr< FUICommandInfo > OpenPypeToolsDialog; -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeLib.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeLib.h deleted file mode 100644 index ef4d1027ea..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeLib.h +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "Engine.h" -#include "OpenPypeLib.generated.h" - - -UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypeLib : public UBlueprintFunctionLibrary -{ - - GENERATED_BODY() - -public: - UFUNCTION(BlueprintCallable, Category = Python) - static bool SetFolderColor(const FString& FolderPath, const FLinearColor& FolderColor,const bool& bForceAdd); - - UFUNCTION(BlueprintCallable, Category = Python) - static TArray GetAllProperties(UClass* cls); -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h deleted file mode 100644 index bce41ef1b1..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstance.h +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once - -#include "Engine.h" -#include "OpenPypePublishInstance.generated.h" - - -UCLASS(Blueprintable) -class OPENPYPE_API UOpenPypePublishInstance : public UPrimaryDataAsset -{ - GENERATED_UCLASS_BODY() - -public: - /** - /** - * Retrieves all the assets which are monitored by the Publish Instance (Monitors assets in the directory which is - * placed in) - * - * @return - Set of UObjects. Careful! They are returning raw pointers. Seems like an issue in UE5 - */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetInternalAssets() const - { - //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. 
- TSet ResultSet; - - for (const auto& Asset : AssetDataInternal) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - - /** - * Retrieves all the assets which have been added manually by the Publish Instance - * - * @return - TSet of assets (UObjects). Careful! They are returning raw pointers. Seems like an issue in UE5 - */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetExternalAssets() const - { - //For some reason it can only return Raw Pointers? Seems like an issue which they haven't fixed. - TSet ResultSet; - - for (const auto& Asset : AssetDataExternal) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - - /** - * Function for returning all the assets in the container combined. - * - * @return Returns all the internal and externally added assets into one set (TSet of UObjects). Careful! They are - * returning raw pointers. Seems like an issue in UE5 - * - * @attention If the bAddExternalAssets variable is false, external assets won't be included! - */ - UFUNCTION(BlueprintCallable, BlueprintPure, Category="Python") - TSet GetAllAssets() const - { - const TSet>& IteratedSet = bAddExternalAssets - ? AssetDataInternal.Union(AssetDataExternal) - : AssetDataInternal; - - //Create a new TSet only with raw pointers. - TSet ResultSet; - - for (auto& Asset : IteratedSet) - ResultSet.Add(Asset.LoadSynchronous()); - - return ResultSet; - } - -private: - UPROPERTY(VisibleAnywhere, Category="Assets") - TSet> AssetDataInternal; - - /** - * This property allows exposing the array to include other assets from any other directory than what it's currently - * monitoring. NOTE: that these assets have to be added manually! They are not automatically registered or added! - */ - UPROPERTY(EditAnywhere, Category = "Assets") - bool bAddExternalAssets = false; - - UPROPERTY(EditAnywhere, meta=(EditCondition="bAddExternalAssets"), Category="Assets") - TSet> AssetDataExternal; - - - void OnAssetCreated(const FAssetData& InAssetData); - void OnAssetRemoved(const FAssetData& InAssetData); - void OnAssetUpdated(const FAssetData& InAssetData); - - bool IsUnderSameDir(const UObject* InAsset) const; - -#ifdef WITH_EDITOR - - void ColorOpenPypeDirs(); - - void SendNotification(const FString& Text) const; - virtual void PostEditChangeProperty(FPropertyChangedEvent& PropertyChangedEvent) override; - -#endif -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h deleted file mode 100644 index 3fdb984411..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePublishInstanceFactory.h +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
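The `UOpenPypePublishInstance` asset declared above is what the publishing side queries: `GetInternalAssets`, `GetExternalAssets` and `GetAllAssets` are `BlueprintCallable`, so they are also reachable from the editor's Python. A minimal usage sketch, assuming Unreal's usual snake_case reflection names (`get_all_assets`) and the `/Game/Ayon` root used by the loaders in this changeset; the class and filter names match this plugin version and may differ after the Ayon rename.

```python
import unreal


def list_publish_instance_contents(root: str = "/Game/Ayon"):
    """Log every asset tracked by each publish instance found under ``root``."""
    ar = unreal.AssetRegistryHelpers.get_asset_registry()
    asset_filter = unreal.ARFilter(
        class_names=["OpenPypePublishInstance"],
        package_paths=[root],
        recursive_paths=True,
    )
    for asset_data in ar.get_assets(asset_filter):
        instance = asset_data.get_asset()
        # get_all_assets() unions the monitored and manually added assets,
        # honouring the bAddExternalAssets flag on the instance.
        for tracked in instance.get_all_assets():
            unreal.log(f"{instance.get_name()}: {tracked.get_path_name()}")
```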
-#pragma once - -#include "CoreMinimal.h" -#include "Factories/Factory.h" -#include "OpenPypePublishInstanceFactory.generated.h" - -/** - * - */ -UCLASS() -class OPENPYPE_API UOpenPypePublishInstanceFactory : public UFactory -{ - GENERATED_BODY() - -public: - UOpenPypePublishInstanceFactory(const FObjectInitializer& ObjectInitializer); - virtual UObject* FactoryCreateNew(UClass* InClass, UObject* InParent, FName InName, EObjectFlags Flags, UObject* Context, FFeedbackContext* Warn) override; - virtual bool ShouldShowInNewMenu() const override; -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h deleted file mode 100644 index 827f76f56b..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypePythonBridge.h +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. -#pragma once -#include "Engine.h" -#include "OpenPypePythonBridge.generated.h" - -UCLASS(Blueprintable) -class UOpenPypePythonBridge : public UObject -{ - GENERATED_BODY() - -public: - UFUNCTION(BlueprintCallable, Category = Python) - static UOpenPypePythonBridge* Get(); - - UFUNCTION(BlueprintImplementableEvent, Category = Python) - void RunInPython_Popup() const; - - UFUNCTION(BlueprintImplementableEvent, Category = Python) - void RunInPython_Dialog() const; - -}; diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h deleted file mode 100644 index b818fe0e95..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeSettings.h +++ /dev/null @@ -1,32 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. - -#pragma once - -#include "CoreMinimal.h" -#include "UObject/Object.h" -#include "OpenPypeSettings.generated.h" - -#define OPENPYPE_SETTINGS_FILEPATH IPluginManager::Get().FindPlugin("OpenPype")->GetBaseDir() / TEXT("Config") / TEXT("DefaultOpenPypeSettings.ini") - -UCLASS(Config=OpenPypeSettings, DefaultConfig) -class OPENPYPE_API UOpenPypeSettings : public UObject -{ - GENERATED_UCLASS_BODY() - - UFUNCTION(BlueprintCallable, BlueprintPure, Category = Settings) - FColor GetFolderFColor() const - { - return FolderColor; - } - - UFUNCTION(BlueprintCallable, BlueprintPure, Category = Settings) - FLinearColor GetFolderFLinearColor() const - { - return FLinearColor(FolderColor); - } - -protected: - - UPROPERTY(config, EditAnywhere, Category = Folders) - FColor FolderColor = FColor(25,45,223); -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h b/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h deleted file mode 100644 index 039abe96ef..0000000000 --- a/openpype/hosts/unreal/integration/UE_5.0/OpenPype/Source/OpenPype/Public/OpenPypeStyle.h +++ /dev/null @@ -1,19 +0,0 @@ -// Copyright 2023, Ayon, All rights reserved. 
-#pragma once -#include "CoreMinimal.h" -#include "Styling/SlateStyle.h" - -class FOpenPypeStyle -{ -public: - static void Initialize(); - static void Shutdown(); - static void ReloadTextures(); - static const ISlateStyle& Get(); - static FName GetStyleSetName(); - - -private: - static TSharedRef< class FSlateStyleSet > Create(); - static TSharedPtr< class FSlateStyleSet > OpenPypeStyleInstance; -}; \ No newline at end of file diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 05fc87b318..97771472cf 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -7,7 +7,6 @@ import json from typing import List -import openpype from distutils import dir_util import subprocess import re @@ -189,7 +188,7 @@ def create_unreal_project(project_name: str, As there is no way I know to create a project via command line, this is easiest option. Unreal project file is basically a JSON file. If we find - the `OPENPYPE_UNREAL_PLUGIN` environment variable we assume this is the + the `AYON_UNREAL_PLUGIN` environment variable we assume this is the location of the Integration Plugin and we copy its content to the project folder and enable this plugin. @@ -203,8 +202,7 @@ def create_unreal_project(project_name: str, sources. This will trigger automatically if `Binaries` directory is not found in plugin folders as this indicates this is only source distribution of the plugin. Dev mode - is also set by preset file `unreal/project_setup.json` in - **OPENPYPE_CONFIG**. + is also set in Settings. env (dict, optional): Environment to use. If not set, `os.environ`. Throws: @@ -237,7 +235,7 @@ def create_unreal_project(project_name: str, print("--- Generating a new project ...") commandlet_cmd = [f'{ue_editor_exe.as_posix()}', f'{cmdlet_project.as_posix()}', - f'-run=OPGenerateProject', + f'-run=AyonGenerateProject', f'{project_file.resolve().as_posix()}'] if dev_mode or preset["dev_mode"]: @@ -318,21 +316,73 @@ def get_path_to_uat(engine_path: Path) -> Path: if platform.system().lower() == "windows": return engine_path / "Engine/Build/BatchFiles/RunUAT.bat" - if platform.system().lower() == "linux" \ - or platform.system().lower() == "darwin": + if platform.system().lower() in ["linux", "darwin"]: return engine_path / "Engine/Build/BatchFiles/RunUAT.sh" +def get_compatible_integration( + ue_version: str, integration_root: Path) -> List[Path]: + """Get path to compatible version of integration plugin. + + This will try to get the closest compatible versions to the one + specified in sorted list. + + Args: + ue_version (str): version of the current Unreal Engine. + integration_root (Path): path to built-in integration plugins. + + Returns: + list of Path: Sorted list of paths closest to the specified + version. 
+ + """ + major, minor = ue_version.split(".") + integration_paths = [p for p in integration_root.iterdir() + if p.is_dir()] + + compatible_versions = [] + for i in integration_paths: + # parse version from path + try: + i_major, i_minor = re.search( + r"(?P\d+).(?P\d+)$", i.name).groups() + except AttributeError: + # in case there is no match, just skip to next + continue + + # consider versions with different major so different that they + # are incompatible + if int(major) != int(i_major): + continue + + compatible_versions.append(i) + + sorted(set(compatible_versions)) + return compatible_versions + + def get_path_to_cmdlet_project(ue_version: str) -> Path: - cmd_project = Path(os.path.dirname(os.path.abspath(openpype.__file__))) + cmd_project = Path( + os.path.abspath(os.getenv("OPENPYPE_ROOT"))) - # For now, only tested on Windows (For Linux and Mac it has to be implemented) - if ue_version.split(".")[0] == "4": - cmd_project /= "hosts/unreal/integration/UE_4.7" - elif ue_version.split(".")[0] == "5": - cmd_project /= "hosts/unreal/integration/UE_5.0" + # For now, only tested on Windows (For Linux and Mac + # it has to be implemented) + cmd_project /= f"openpype/hosts/unreal/integration/UE_{ue_version}" - return cmd_project / "CommandletProject/CommandletProject.uproject" + # if the integration doesn't exist for current engine version + # try to find the closest to it. + if cmd_project.exists(): + return cmd_project / "CommandletProject/CommandletProject.uproject" + + if compatible_versions := get_compatible_integration( + ue_version, cmd_project.parent + ): + return compatible_versions[-1] / "CommandletProject/CommandletProject.uproject" # noqa: E501 + else: + raise RuntimeError( + ("There are no compatible versions of Unreal " + "integration plugin compatible with running version " + f"of Unreal Engine {ue_version}")) def get_path_to_ubt(engine_path: Path, ue_version: str) -> Path: @@ -375,13 +425,13 @@ def get_build_id(engine_path: Path, ue_version: str) -> str: def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: env = env or os.environ - integration_plugin_path: Path = Path(env.get("OPENPYPE_UNREAL_PLUGIN", "")) + integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(integration_plugin_path): raise RuntimeError("Path to the integration plugin is null!") # Create a path to the plugin in the engine - op_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/OpenPype" + op_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" if not op_plugin_path.is_dir(): return False @@ -396,13 +446,13 @@ def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: def try_installing_plugin(engine_path: Path, env: dict = None) -> None: env = env or os.environ - integration_plugin_path: Path = Path(env.get("OPENPYPE_UNREAL_PLUGIN", "")) + integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(integration_plugin_path): raise RuntimeError("Path to the integration plugin is null!") # Create a path to the plugin in the engine - op_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/OpenPype" + op_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" if not op_plugin_path.is_dir(): op_plugin_path.mkdir(parents=True, exist_ok=True) @@ -423,12 +473,12 @@ def _build_and_move_plugin(engine_path: Path, uat_path: Path = get_path_to_uat(engine_path) env = env or os.environ - integration_plugin_path: Path = Path(env.get("OPENPYPE_UNREAL_PLUGIN", "")) + 
integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) if uat_path.is_file(): temp_dir: Path = integration_plugin_path.parent / "Temp" temp_dir.mkdir(exist_ok=True) - uplugin_path: Path = integration_plugin_path / "OpenPype.uplugin" + uplugin_path: Path = integration_plugin_path / "Ayon.uplugin" # in order to successfully build the plugin, # It must be built outside the Engine directory and then moved @@ -439,7 +489,7 @@ def _build_and_move_plugin(engine_path: Path, subprocess.run(build_plugin_cmd) # Copy the contents of the 'Temp' dir into the - # 'OpenPype' directory in the engine + # 'Ayon' directory in the engine dir_util.copy_tree(temp_dir.as_posix(), plugin_build_path.as_posix()) # We need to also copy the config folder. diff --git a/openpype/hosts/unreal/plugins/create/create_camera.py b/openpype/hosts/unreal/plugins/create/create_camera.py index 642924e2d6..73afb6cefd 100644 --- a/openpype/hosts/unreal/plugins/create/create_camera.py +++ b/openpype/hosts/unreal/plugins/create/create_camera.py @@ -11,7 +11,7 @@ from openpype.hosts.unreal.api.plugin import ( class CreateCamera(UnrealAssetCreator): """Create Camera.""" - identifier = "io.openpype.creators.unreal.camera" + identifier = "io.ayon.creators.unreal.camera" label = "Camera" family = "camera" icon = "fa.camera" diff --git a/openpype/hosts/unreal/plugins/create/create_layout.py b/openpype/hosts/unreal/plugins/create/create_layout.py index 1d2e800a13..e5c7b8ee19 100644 --- a/openpype/hosts/unreal/plugins/create/create_layout.py +++ b/openpype/hosts/unreal/plugins/create/create_layout.py @@ -7,7 +7,7 @@ from openpype.hosts.unreal.api.plugin import ( class CreateLayout(UnrealActorCreator): """Layout output for character rigs.""" - identifier = "io.openpype.creators.unreal.layout" + identifier = "io.ayon.creators.unreal.layout" label = "Layout" family = "layout" icon = "cubes" diff --git a/openpype/hosts/unreal/plugins/create/create_look.py b/openpype/hosts/unreal/plugins/create/create_look.py index f6c73e47e6..e15b57b2ee 100644 --- a/openpype/hosts/unreal/plugins/create/create_look.py +++ b/openpype/hosts/unreal/plugins/create/create_look.py @@ -14,7 +14,7 @@ from openpype.lib import UILabelDef class CreateLook(UnrealAssetCreator): """Shader connections defining shape look.""" - identifier = "io.openpype.creators.unreal.look" + identifier = "io.ayon.creators.unreal.look" label = "Look" family = "look" icon = "paint-brush" @@ -30,7 +30,7 @@ class CreateLook(UnrealAssetCreator): selected_asset = selection[0] - look_directory = "/Game/OpenPype/Looks" + look_directory = "/Game/Ayon/Looks" # Create the folder folder_name = create_folder(look_directory, subset_name) diff --git a/openpype/hosts/unreal/plugins/create/create_render.py b/openpype/hosts/unreal/plugins/create/create_render.py index b9c443c456..5f561e68ad 100644 --- a/openpype/hosts/unreal/plugins/create/create_render.py +++ b/openpype/hosts/unreal/plugins/create/create_render.py @@ -22,7 +22,7 @@ from openpype.lib import ( class CreateRender(UnrealAssetCreator): """Create instance for sequence for rendering""" - identifier = "io.openpype.creators.unreal.render" + identifier = "io.ayon.creators.unreal.render" label = "Render" family = "render" icon = "eye" @@ -50,7 +50,7 @@ class CreateRender(UnrealAssetCreator): # If the option to create a new level sequence is selected, # create a new level sequence and a master level. 
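One note on the new `get_compatible_integration` helper added to `lib.py` above: its version regex appears to have lost its named groups in extraction (`(?P\d+)` is not valid as written), and the `sorted(set(...))` call discards its result. A short sketch of the intended matching logic, assuming groups named `major` and `minor` and `UE_<major>.<minor>` style directory names; this is a reconstruction under those assumptions, not the verbatim source.

```python
import re
from pathlib import Path
from typing import List

# Assumed pattern: the trailing "<major>.<minor>" of a directory name such as
# "UE_5.0". The group names reconstruct what the stripped regex most likely held.
VERSION_RE = re.compile(r"(?P<major>\d+)\.(?P<minor>\d+)$")


def compatible_integrations(ue_version: str, integration_root: Path) -> List[Path]:
    """Return integration dirs sharing the engine's major version, sorted ascending."""
    major, _minor = ue_version.split(".")
    compatible = []
    for path in (p for p in integration_root.iterdir() if p.is_dir()):
        match = VERSION_RE.search(path.name)
        if not match:
            continue  # directory name carries no version, skip it
        if int(match.group("major")) != int(major):
            continue  # different major versions are treated as incompatible
        compatible.append(path)
    # The helper above calls sorted() without assigning the result, so callers
    # that take the last entry as "closest" may want an explicit sort like this.
    return sorted(compatible)
```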
- root = f"/Game/OpenPype/Sequences" + root = f"/Game/Ayon/Sequences" # Create a new folder for the sequence in root sequence_dir_name = create_folder(root, subset_name) @@ -142,9 +142,9 @@ class CreateRender(UnrealAssetCreator): # "/Game/OpenPype/" and then we split the path by "/". sel_path = selected_asset_path asset_name = sel_path.replace( - "/Game/OpenPype/", "").split("/")[0] + "/Game/Ayon/", "").split("/")[0] - search_path = f"/Game/OpenPype/{asset_name}" + search_path = f"/Game/Ayon/{asset_name}" else: search_path = Path(selected_asset_path).parent.as_posix() diff --git a/openpype/hosts/unreal/plugins/create/create_staticmeshfbx.py b/openpype/hosts/unreal/plugins/create/create_staticmeshfbx.py index 1acf7084d1..80816d8386 100644 --- a/openpype/hosts/unreal/plugins/create/create_staticmeshfbx.py +++ b/openpype/hosts/unreal/plugins/create/create_staticmeshfbx.py @@ -7,7 +7,7 @@ from openpype.hosts.unreal.api.plugin import ( class CreateStaticMeshFBX(UnrealAssetCreator): """Create Static Meshes as FBX geometry.""" - identifier = "io.openpype.creators.unreal.staticmeshfbx" + identifier = "io.ayon.creators.unreal.staticmeshfbx" label = "Static Mesh (FBX)" family = "unrealStaticMesh" icon = "cube" diff --git a/openpype/hosts/unreal/plugins/create/create_uasset.py b/openpype/hosts/unreal/plugins/create/create_uasset.py index 70f17d478b..f70ecc55b3 100644 --- a/openpype/hosts/unreal/plugins/create/create_uasset.py +++ b/openpype/hosts/unreal/plugins/create/create_uasset.py @@ -12,11 +12,13 @@ from openpype.hosts.unreal.api.plugin import ( class CreateUAsset(UnrealAssetCreator): """Create UAsset.""" - identifier = "io.openpype.creators.unreal.uasset" + identifier = "io.ayon.creators.unreal.uasset" label = "UAsset" family = "uasset" icon = "cube" + extension = ".uasset" + def create(self, subset_name, instance_data, pre_create_data): if pre_create_data.get("use_selection"): ar = unreal.AssetRegistryHelpers.get_asset_registry() @@ -37,10 +39,28 @@ class CreateUAsset(UnrealAssetCreator): f"{Path(obj).name} is not on the disk. 
Likely it needs to" "be saved first.") - if Path(sys_path).suffix != ".uasset": - raise CreatorError(f"{Path(sys_path).name} is not a UAsset.") + if Path(sys_path).suffix != self.extension: + raise CreatorError( + f"{Path(sys_path).name} is not a {self.label}.") super(CreateUAsset, self).create( subset_name, instance_data, pre_create_data) + + +class CreateUMap(CreateUAsset): + """Create Level.""" + + identifier = "io.ayon.creators.unreal.umap" + label = "Level" + family = "uasset" + extension = ".umap" + + def create(self, subset_name, instance_data, pre_create_data): + instance_data["families"] = ["umap"] + + super(CreateUMap, self).create( + subset_name, + instance_data, + pre_create_data) diff --git a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py index 496b6056ea..52eea4122a 100644 --- a/openpype/hosts/unreal/plugins/load/load_alembic_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_alembic_animation.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -68,8 +68,8 @@ class AnimationAlembicLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and openpype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and ayon container + root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" if asset: @@ -97,8 +97,8 @@ class AnimationAlembicLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -109,7 +109,7 @@ class AnimationAlembicLoader(plugin.Loader): "family": context["representation"]["context"]["family"] } unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + f"{asset_dir}/{container_name}", data) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True diff --git a/openpype/hosts/unreal/plugins/load/load_animation.py b/openpype/hosts/unreal/plugins/load/load_animation.py index 1fe0bef462..a5ecb677e8 100644 --- a/openpype/hosts/unreal/plugins/load/load_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_animation.py @@ -11,7 +11,7 @@ from unreal import MovieSceneSkeletalAnimationSection from openpype.pipeline.context_tools import get_current_project_asset from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -139,9 +139,9 @@ class AnimationFBXLoader(plugin.Loader): Returns: list(str): list of container content """ - # Create directory for asset and avalon container + # Create directory for asset and Ayon container hierarchy = context.get('asset').get('data').get('parents') - root = "/Game/OpenPype" + root = "/Game/Ayon" asset = context.get('asset').get('name') suffix = "_CON" asset_name = f"{asset}_{name}" if asset else f"{name}" @@ -156,7 +156,7 @@ class AnimationFBXLoader(plugin.Loader): package_paths=[f"{root}/{hierarchy[0]}"], recursive_paths=False) levels = ar.get_assets(_filter) - master_level = levels[0].get_editor_property('object_path') + master_level = 
levels[0].get_asset().get_path_name() hierarchy_dir = root for h in hierarchy: @@ -168,7 +168,7 @@ class AnimationFBXLoader(plugin.Loader): package_paths=[f"{hierarchy_dir}/"], recursive_paths=True) levels = ar.get_assets(_filter) - level = levels[0].get_editor_property('object_path') + level = levels[0].get_asset().get_path_name() unreal.EditorLevelLibrary.save_all_dirty_levels() unreal.EditorLevelLibrary.load_level(level) @@ -223,8 +223,8 @@ class AnimationFBXLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_camera.py b/openpype/hosts/unreal/plugins/load/load_camera.py index 2496440e5f..072b3b1467 100644 --- a/openpype/hosts/unreal/plugins/load/load_camera.py +++ b/openpype/hosts/unreal/plugins/load/load_camera.py @@ -8,7 +8,7 @@ from unreal import EditorLevelLibrary from unreal import EditorLevelUtils from openpype.client import get_assets, get_asset_by_name from openpype.pipeline import ( - AVALON_CONTAINER_ID, + AYON_CONTAINER_ID, legacy_io, ) from openpype.hosts.unreal.api import plugin @@ -100,9 +100,9 @@ class CameraLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and avalon container + # Create directory for asset and Ayon container hierarchy = context.get('asset').get('data').get('parents') - root = "/Game/OpenPype" + root = "/Game/Ayon" hierarchy_dir = root hierarchy_dir_list = [] for h in hierarchy: @@ -268,8 +268,8 @@ class CameraLoader(plugin.Loader): data = get_asset_by_name(project_name, asset)["data"] cam_seq.set_display_rate( unreal.FrameRate(data.get("fps"), 1.0)) - cam_seq.set_playback_start(0) - cam_seq.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1) + cam_seq.set_playback_start(data.get('clipIn')) + cam_seq.set_playback_end(data.get('clipOut') + 1) self._set_sequence_hierarchy( sequences[-1], cam_seq, data.get('clipIn'), data.get('clipOut')) @@ -286,13 +286,33 @@ class CameraLoader(plugin.Loader): self.fname ) + # Set range of all sections + # Changing the range of the section is not enough. We need to change + # the frame of all the keys in the section. 
+ for possessable in cam_seq.get_possessables(): + for tracks in possessable.get_tracks(): + for section in tracks.get_sections(): + section.set_range( + data.get('clipIn'), + data.get('clipOut') + 1) + for channel in section.get_all_channels(): + for key in channel.get_keys(): + old_time = key.get_time().get_editor_property( + 'frame_number') + old_time_value = old_time.get_editor_property( + 'value') + new_time = old_time_value + ( + data.get('clipIn') - data.get('frameStart') + ) + key.set_time(unreal.FrameNumber(value=new_time)) + # Create Asset Container unreal_pipeline.create_container( container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -320,7 +340,7 @@ class CameraLoader(plugin.Loader): def update(self, container, representation): ar = unreal.AssetRegistryHelpers.get_asset_registry() - root = "/Game/OpenPype" + root = "/Game/ayon" asset_dir = container.get('namespace') @@ -345,7 +365,7 @@ class CameraLoader(plugin.Loader): maps = ar.get_assets(filter) # There should be only one map in the list - EditorLevelLibrary.load_level(maps[0].get_full_name()) + EditorLevelLibrary.load_level(maps[0].get_asset().get_path_name()) level_sequence = sequences[0].get_asset() @@ -378,7 +398,7 @@ class CameraLoader(plugin.Loader): # Remove the Level Sequence from the parent. # We need to traverse the hierarchy from the master sequence to find # the level sequence. - root = "/Game/OpenPype" + root = "/Game/Ayon" namespace = container.get('namespace').replace(f"{root}/", "") ms_asset = namespace.split('/')[0] filter = unreal.ARFilter( @@ -493,7 +513,7 @@ class CameraLoader(plugin.Loader): map = maps[0] EditorLevelLibrary.save_all_dirty_levels() - EditorLevelLibrary.load_level(map.get_full_name()) + EditorLevelLibrary.load_level(map.get_asset().get_path_name()) # Remove the camera from the level. actors = EditorLevelLibrary.get_all_level_actors() @@ -503,7 +523,7 @@ class CameraLoader(plugin.Loader): EditorLevelLibrary.destroy_actor(a) EditorLevelLibrary.save_all_dirty_levels() - EditorLevelLibrary.load_level(world.get_full_name()) + EditorLevelLibrary.load_level(world.get_asset().get_path_name()) # There should be only one sequence in the path. sequence_name = sequences[0].asset_name @@ -511,7 +531,7 @@ class CameraLoader(plugin.Loader): # Remove the Level Sequence from the parent. # We need to traverse the hierarchy from the master sequence to find # the level sequence. 
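+        # Assumed example for illustration only: a namespace such as
+        # "/Game/Ayon/ep01/sh010/camera" becomes "ep01/sh010/camera" once the
+        # root is stripped, so ms_asset == "ep01", the top-level folder that
+        # is then used to look up the master sequence.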
- root = "/Game/OpenPype" + root = "/Game/Ayon" namespace = container.get('namespace').replace(f"{root}/", "") ms_asset = namespace.split('/')[0] filter = unreal.ARFilter( diff --git a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py index 6ac3531b40..3a292fdbd1 100644 --- a/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_geometrycache_abc.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -22,7 +22,8 @@ class PointCacheAlembicLoader(plugin.Loader): color = "orange" def get_task( - self, filename, asset_dir, asset_name, replace, frame_start, frame_end + self, filename, asset_dir, asset_name, replace, + frame_start=None, frame_end=None ): task = unreal.AssetImportTask() options = unreal.AbcImportSettings() @@ -51,8 +52,10 @@ class PointCacheAlembicLoader(plugin.Loader): conversion_settings.set_editor_property( 'rotation', unreal.Vector(x=-90.0, y=0.0, z=180.0)) - sampling_settings.set_editor_property('frame_start', frame_start) - sampling_settings.set_editor_property('frame_end', frame_end) + if frame_start is not None: + sampling_settings.set_editor_property('frame_start', frame_start) + if frame_end is not None: + sampling_settings.set_editor_property('frame_end', frame_end) options.geometry_cache_settings = gc_settings options.conversion_settings = conversion_settings @@ -83,8 +86,8 @@ class PointCacheAlembicLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and OpenPype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" if asset: @@ -118,8 +121,8 @@ class PointCacheAlembicLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -145,9 +148,9 @@ class PointCacheAlembicLoader(plugin.Loader): name = container["asset_name"] source_path = get_representation_path(representation) destination_path = container["namespace"] + representation["context"] - task = self.get_task(source_path, destination_path, name, True) - + task = self.get_task(source_path, destination_path, name, False) # do import fbx and replace existing data unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index 63d415a52b..d94e6e5837 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -19,7 +19,7 @@ from openpype.pipeline import ( loaders_from_representation, load_container, get_representation_path, - AVALON_CONTAINER_ID, + AYON_CONTAINER_ID, legacy_io, ) from openpype.pipeline.context_tools import get_current_project_asset @@ -37,7 +37,7 @@ class LayoutLoader(plugin.Loader): label = "Load Layout" icon = "code-fork" color = "orange" - ASSET_ROOT = "/Game/OpenPype" + ASSET_ROOT = "/Game/Ayon" def _get_asset_containers(self, path): ar = unreal.AssetRegistryHelpers.get_asset_registry() @@ -50,7 +50,7 @@ class 
LayoutLoader(plugin.Loader): # Get all the asset containers for a in asset_content: obj = ar.get_asset_by_object_path(a) - if obj.get_asset().get_class().get_name() == 'AssetContainer': + if obj.get_asset().get_class().get_name() == 'AyonAssetContainer': asset_containers.append(obj) return asset_containers @@ -338,7 +338,7 @@ class LayoutLoader(plugin.Loader): ).replace('\\', '/') _filter = unreal.ARFilter( - class_names=["AssetContainer"], + class_names=["AyonAssetContainer"], package_paths=[anim_path], recursive_paths=False) containers = ar.get_assets(_filter) @@ -519,7 +519,7 @@ class LayoutLoader(plugin.Loader): for asset in assets: obj = ar.get_asset_by_object_path(asset).get_asset() - if obj.get_class().get_name() == 'AssetContainer': + if obj.get_class().get_name() == 'AyonAssetContainer': container = obj if obj.get_class().get_name() == 'Skeleton': skeleton = obj @@ -634,7 +634,7 @@ class LayoutLoader(plugin.Loader): data = get_current_project_settings() create_sequences = data["unreal"]["level_sequences_for_layouts"] - # Create directory for asset and avalon container + # Create directory for asset and Ayon container hierarchy = context.get('asset').get('data').get('parents') root = self.ASSET_ROOT hierarchy_dir = root @@ -740,7 +740,7 @@ class LayoutLoader(plugin.Loader): loaded_assets = self._process(self.fname, asset_dir, shot) for s in sequences: - EditorAssetLibrary.save_asset(s.get_full_name()) + EditorAssetLibrary.save_asset(s.get_path_name()) EditorLevelLibrary.save_current_level() @@ -749,8 +749,8 @@ class LayoutLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -781,7 +781,7 @@ class LayoutLoader(plugin.Loader): ar = unreal.AssetRegistryHelpers.get_asset_registry() - root = "/Game/OpenPype" + root = "/Game/Ayon" asset_dir = container.get('namespace') context = representation.get("context") @@ -819,7 +819,7 @@ class LayoutLoader(plugin.Loader): recursive_paths=False) levels = ar.get_assets(filter) - layout_level = levels[0].get_editor_property('object_path') + layout_level = levels[0].get_asset().get_path_name() EditorLevelLibrary.save_all_dirty_levels() EditorLevelLibrary.load_level(layout_level) @@ -867,7 +867,7 @@ class LayoutLoader(plugin.Loader): data = get_current_project_settings() create_sequences = data["unreal"]["level_sequences_for_layouts"] - root = "/Game/OpenPype" + root = "/Game/Ayon" path = Path(container.get("namespace")) containers = unreal_pipeline.ls() @@ -919,7 +919,7 @@ class LayoutLoader(plugin.Loader): package_paths=[f"{root}/{ms_asset}"], recursive_paths=False) levels = ar.get_assets(_filter) - master_level = levels[0].get_editor_property('object_path') + master_level = levels[0].get_asset().get_path_name() sequences = [master_sequence] diff --git a/openpype/hosts/unreal/plugins/load/load_layout_existing.py b/openpype/hosts/unreal/plugins/load/load_layout_existing.py index 092b273ded..929a9a1399 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout_existing.py +++ b/openpype/hosts/unreal/plugins/load/load_layout_existing.py @@ -10,7 +10,7 @@ from openpype.pipeline import ( loaders_from_representation, load_container, get_representation_path, - AVALON_CONTAINER_ID, + AYON_CONTAINER_ID, legacy_io, ) from openpype.hosts.unreal.api import plugin @@ -28,7 +28,7 @@ class ExistingLayoutLoader(plugin.Loader): label = "Load 
Layout on Existing Scene" icon = "code-fork" color = "orange" - ASSET_ROOT = "/Game/OpenPype" + ASSET_ROOT = "/Game/Ayon" delete_unmatched_assets = True @@ -59,8 +59,8 @@ class ExistingLayoutLoader(plugin.Loader): container = obj.get_asset() data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -89,50 +89,26 @@ class ExistingLayoutLoader(plugin.Loader): raise NotImplementedError( f"Unreal version {ue_major} not supported") - def _get_transform(self, ext, import_data, lasset): - conversion = unreal.Matrix.IDENTITY.transform() - fbx_tuning = unreal.Matrix.IDENTITY.transform() + def _transform_from_basis(self, transform, basis): + """Transform a transform from a basis to a new basis.""" + # Get the basis matrix + basis_matrix = unreal.Matrix( + basis[0], + basis[1], + basis[2], + basis[3] + ) + transform_matrix = unreal.Matrix( + transform[0], + transform[1], + transform[2], + transform[3] + ) - basis = unreal.Matrix( - lasset.get('basis')[0], - lasset.get('basis')[1], - lasset.get('basis')[2], - lasset.get('basis')[3] - ).transform() - transform = unreal.Matrix( - lasset.get('transform_matrix')[0], - lasset.get('transform_matrix')[1], - lasset.get('transform_matrix')[2], - lasset.get('transform_matrix')[3] - ).transform() + new_transform = ( + basis_matrix.get_inverse() * transform_matrix * basis_matrix) - # Check for the conversion settings. We cannot access - # the alembic conversion settings, so we assume that - # the maya ones have been applied. - if ext == '.fbx': - loc = import_data.import_translation - rot = import_data.import_rotation.to_vector() - scale = import_data.import_uniform_scale - conversion = unreal.Transform( - location=[loc.x, loc.y, loc.z], - rotation=[rot.x, rot.y, rot.z], - scale=[-scale, scale, scale] - ) - fbx_tuning = unreal.Transform( - rotation=[180.0, 0.0, 90.0], - scale=[1.0, 1.0, 1.0] - ) - elif ext == '.abc': - # This is the standard conversion settings for - # alembic files from Maya. - conversion = unreal.Transform( - location=[0.0, 0.0, 0.0], - rotation=[0.0, 0.0, 0.0], - scale=[1.0, -1.0, 1.0] - ) - - new_transform = (basis.inverse() * transform * basis) - return fbx_tuning * conversion.inverse() * new_transform + return new_transform.transform() def _spawn_actor(self, obj, lasset): actor = EditorLevelLibrary.spawn_actor_from_object( @@ -140,16 +116,13 @@ class ExistingLayoutLoader(plugin.Loader): ) actor.set_actor_label(lasset.get('instance_name')) - smc = actor.get_editor_property('static_mesh_component') - mesh = smc.get_editor_property('static_mesh') - import_data = mesh.get_editor_property('asset_import_data') - filename = import_data.get_first_filename() - path = Path(filename) - transform = self._get_transform( - path.suffix, import_data, lasset) + transform = lasset.get('transform_matrix') + basis = lasset.get('basis') - actor.set_actor_transform(transform, False, True) + computed_transform = self._transform_from_basis(transform, basis) + + actor.set_actor_transform(computed_transform, False, True) @staticmethod def _get_fbx_loader(loaders, family): @@ -320,9 +293,12 @@ class ExistingLayoutLoader(plugin.Loader): containers.append(container) # Set the transform for the actor. 
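+                    # _transform_from_basis() conjugates the stored matrix by
+                    # the layout basis, i.e. it returns (B^-1 * T * B) as an
+                    # unreal.Transform, where B is built from lasset['basis']
+                    # and T from lasset['transform_matrix'] (four row lists).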
- transform = self._get_transform( - path.suffix, import_data, lasset) - actor.set_actor_transform(transform, False, True) + transform = lasset.get('transform_matrix') + basis = lasset.get('basis') + + computed_transform = self._transform_from_basis( + transform, basis) + actor.set_actor_transform(computed_transform, False, True) actors_matched.append(actor) found = True @@ -416,8 +392,8 @@ class ExistingLayoutLoader(plugin.Loader): container=container_name, path=curr_level_path) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": curr_level_path, "container_name": container_name, diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py index e316d255e9..7591d5582f 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_abc.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -70,8 +70,8 @@ class SkeletalMeshAlembicLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and openpype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and ayon container + root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" if asset: @@ -98,8 +98,8 @@ class SkeletalMeshAlembicLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -110,7 +110,7 @@ class SkeletalMeshAlembicLoader(plugin.Loader): "family": context["representation"]["context"]["family"] } unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + f"{asset_dir}/{container_name}", data) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True diff --git a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py index 227c5c9292..e9676cde3a 100644 --- a/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_skeletalmesh_fbx.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -42,8 +42,8 @@ class SkeletalMeshFBXLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and OpenPype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" if options and options.get("asset_dir"): root = options["asset_dir"] asset = context.get('asset').get('name') @@ -103,8 +103,8 @@ class SkeletalMeshFBXLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -115,7 +115,7 @@ class SkeletalMeshFBXLoader(plugin.Loader): 
"family": context["representation"]["context"]["family"] } unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + f"{asset_dir}/{container_name}", data) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py index c7841cef53..befc7b0ac9 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_abc.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -75,8 +75,8 @@ class StaticMeshAlembicLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and OpenPype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" if asset: @@ -108,8 +108,8 @@ class StaticMeshAlembicLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -119,8 +119,7 @@ class StaticMeshAlembicLoader(plugin.Loader): "parent": context["representation"]["parent"], "family": context["representation"]["context"]["family"] } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -136,7 +135,7 @@ class StaticMeshAlembicLoader(plugin.Loader): source_path = get_representation_path(representation) destination_path = container["namespace"] - task = self.get_task(source_path, destination_path, name, True) + task = self.get_task(source_path, destination_path, name, True, False) # do import fbx and replace existing data unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) diff --git a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py index 351c686095..e416256486 100644 --- a/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py +++ b/openpype/hosts/unreal/plugins/load/load_staticmesh_fbx.py @@ -4,7 +4,7 @@ import os from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -68,8 +68,8 @@ class StaticMeshFBXLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and OpenPype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" if options and options.get("asset_dir"): root = options["asset_dir"] asset = context.get('asset').get('name') @@ -81,7 +81,8 @@ class StaticMeshFBXLoader(plugin.Loader): tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - "{}/{}/{}".format(root, asset, name), suffix="") + f"{root}/{asset}/{name}", suffix="" + ) container_name += suffix @@ -96,8 +97,8 @@ class 
StaticMeshFBXLoader(plugin.Loader): container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -107,8 +108,7 @@ class StaticMeshFBXLoader(plugin.Loader): "parent": context["representation"]["parent"], "family": context["representation"]["context"]["family"] } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True diff --git a/openpype/hosts/unreal/plugins/load/load_uasset.py b/openpype/hosts/unreal/plugins/load/load_uasset.py index eccfc7b445..30f63abe39 100644 --- a/openpype/hosts/unreal/plugins/load/load_uasset.py +++ b/openpype/hosts/unreal/plugins/load/load_uasset.py @@ -5,7 +5,7 @@ import shutil from openpype.pipeline import ( get_representation_path, - AVALON_CONTAINER_ID + AYON_CONTAINER_ID ) from openpype.hosts.unreal.api import plugin from openpype.hosts.unreal.api import pipeline as unreal_pipeline @@ -21,6 +21,8 @@ class UAssetLoader(plugin.Loader): icon = "cube" color = "orange" + extension = "uasset" + def load(self, context, name, namespace, options): """Load and containerise representation into Content Browser. @@ -38,37 +40,41 @@ class UAssetLoader(plugin.Loader): list(str): list of container content """ - # Create directory for asset and OpenPype container - root = "/Game/OpenPype/Assets" + # Create directory for asset and Ayon container + root = "/Game/Ayon/Assets" asset = context.get('asset').get('name') suffix = "_CON" - if asset: - asset_name = "{}_{}".format(asset, name) - else: - asset_name = "{}".format(name) - + asset_name = f"{asset}_{name}" if asset else f"{name}" tools = unreal.AssetToolsHelpers().get_asset_tools() asset_dir, container_name = tools.create_unique_asset_name( - "{}/{}/{}".format(root, asset, name), suffix="") + f"{root}/{asset}/{name}", suffix="" + ) - container_name += suffix + unique_number = 1 + while unreal.EditorAssetLibrary.does_directory_exist( + f"{asset_dir}_{unique_number:02}" + ): + unique_number += 1 + + asset_dir = f"{asset_dir}_{unique_number:02}" + container_name = f"{container_name}_{unique_number:02}{suffix}" unreal.EditorAssetLibrary.make_directory(asset_dir) destination_path = asset_dir.replace( - "/Game", - Path(unreal.Paths.project_content_dir()).as_posix(), - 1) + "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1) - shutil.copy(self.fname, f"{destination_path}/{name}.uasset") + shutil.copy( + self.fname, + f"{destination_path}/{name}_{unique_number:02}.{self.extension}") # Create Asset Container unreal_pipeline.create_container( container=container_name, path=asset_dir) data = { - "schema": "openpype:container-2.0", - "id": AVALON_CONTAINER_ID, + "schema": "ayon:container-2.0", + "id": AYON_CONTAINER_ID, "asset": asset, "namespace": asset_dir, "container_name": container_name, @@ -76,10 +82,9 @@ class UAssetLoader(plugin.Loader): "loader": str(self.__class__.__name__), "representation": context["representation"]["_id"], "parent": context["representation"]["parent"], - "family": context["representation"]["context"]["family"] + "family": context["representation"]["context"]["family"], } - unreal_pipeline.imprint( - "{}/{}".format(asset_dir, container_name), data) + unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data) 
asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -96,10 +101,10 @@ class UAssetLoader(plugin.Loader): asset_dir = container["namespace"] name = representation["context"]["subset"] + unique_number = container["container_name"].split("_")[-2] + destination_path = asset_dir.replace( - "/Game", - Path(unreal.Paths.project_content_dir()).as_posix(), - 1) + "/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=False, include_folder=True @@ -107,22 +112,24 @@ class UAssetLoader(plugin.Loader): for asset in asset_content: obj = ar.get_asset_by_object_path(asset).get_asset() - if not obj.get_class().get_name() == 'AssetContainer': + if obj.get_class().get_name() != "AyonAssetContainer": unreal.EditorAssetLibrary.delete_asset(asset) update_filepath = get_representation_path(representation) - shutil.copy(update_filepath, f"{destination_path}/{name}.uasset") + shutil.copy( + update_filepath, + f"{destination_path}/{name}_{unique_number}.{self.extension}") - container_path = "{}/{}".format(container["namespace"], - container["objectName"]) + container_path = f'{container["namespace"]}/{container["objectName"]}' # update metadata unreal_pipeline.imprint( container_path, { "representation": str(representation["_id"]), - "parent": str(representation["parent"]) - }) + "parent": str(representation["parent"]), + } + ) asset_content = unreal.EditorAssetLibrary.list_assets( asset_dir, recursive=True, include_folder=True @@ -143,3 +150,13 @@ class UAssetLoader(plugin.Loader): if len(asset_content) == 0: unreal.EditorAssetLibrary.delete_directory(parent_path) + + +class UMapLoader(UAssetLoader): + """Load Level.""" + + families = ["uasset"] + label = "Load Level" + representations = ["umap"] + + extension = "umap" diff --git a/openpype/hosts/unreal/plugins/publish/collect_instance_members.py b/openpype/hosts/unreal/plugins/publish/collect_instance_members.py index 46ca51ab7e..de10e7b119 100644 --- a/openpype/hosts/unreal/plugins/publish/collect_instance_members.py +++ b/openpype/hosts/unreal/plugins/publish/collect_instance_members.py @@ -24,7 +24,7 @@ class CollectInstanceMembers(pyblish.api.InstancePlugin): ar = unreal.AssetRegistryHelpers.get_asset_registry() inst_path = instance.data.get('instance_path') - inst_name = instance.data.get('objectName') + inst_name = inst_path.split('/')[-1] pub_instance = ar.get_asset_by_object_path( f"{inst_path}.{inst_name}").get_asset() diff --git a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py index cb28f4bf60..dad0310dfc 100644 --- a/openpype/hosts/unreal/plugins/publish/collect_render_instances.py +++ b/openpype/hosts/unreal/plugins/publish/collect_render_instances.py @@ -3,6 +3,7 @@ from pathlib import Path import unreal +from openpype.pipeline import get_current_project_name from openpype.pipeline import Anatomy from openpype.hosts.unreal.api import pipeline import pyblish.api @@ -72,8 +73,8 @@ class CollectRenderInstances(pyblish.api.InstancePlugin): new_data["level"] = data.get("level") new_data["output"] = s.get('output') new_data["fps"] = seq.get_display_rate().numerator - new_data["frameStart"] = s.get('frame_range')[0] - new_data["frameEnd"] = s.get('frame_range')[1] + new_data["frameStart"] = int(s.get('frame_range')[0]) + new_data["frameEnd"] = int(s.get('frame_range')[1]) new_data["sequence"] = seq.get_path_name() 
new_data["master_sequence"] = data["master_sequence"] new_data["master_level"] = data["master_level"] @@ -81,12 +82,13 @@ class CollectRenderInstances(pyblish.api.InstancePlugin): self.log.debug(f"new instance data: {new_data}") try: - project = os.environ.get("AVALON_PROJECT") + project = get_current_project_name() anatomy = Anatomy(project) root = anatomy.roots['renders'] - except Exception: - raise Exception( - "Could not find render root in anatomy settings.") + except Exception as e: + raise Exception(( + "Could not find render root " + "in anatomy settings.")) from e render_dir = f"{root}/{project}/{s.get('output')}" render_path = Path(render_dir) @@ -101,8 +103,8 @@ class CollectRenderInstances(pyblish.api.InstancePlugin): new_instance.data["representations"] = [] repr = { - 'frameStart': s.get('frame_range')[0], - 'frameEnd': s.get('frame_range')[1], + 'frameStart': instance.data["frameStart"], + 'frameEnd': instance.data["frameEnd"], 'name': 'png', 'ext': 'png', 'files': frames, diff --git a/openpype/hosts/unreal/plugins/publish/extract_layout.py b/openpype/hosts/unreal/plugins/publish/extract_layout.py index cac7991f00..57e7957575 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_layout.py +++ b/openpype/hosts/unreal/plugins/publish/extract_layout.py @@ -48,7 +48,7 @@ class ExtractLayout(publish.Extractor): # Search the reference to the Asset Container for the object path = unreal.Paths.get_path(mesh.get_path_name()) filter = unreal.ARFilter( - class_names=["AssetContainer"], package_paths=[path]) + class_names=["AyonAssetContainer"], package_paths=[path]) ar = unreal.AssetRegistryHelpers.get_asset_registry() try: asset_container = ar.get_assets(filter)[0].get_asset() diff --git a/openpype/hosts/unreal/plugins/publish/extract_render.py b/openpype/hosts/unreal/plugins/publish/extract_render.py deleted file mode 100644 index 8ff38fbee0..0000000000 --- a/openpype/hosts/unreal/plugins/publish/extract_render.py +++ /dev/null @@ -1,48 +0,0 @@ -from pathlib import Path - -import unreal - -from openpype.pipeline import publish - - -class ExtractRender(publish.Extractor): - """Extract render.""" - - label = "Extract Render" - hosts = ["unreal"] - families = ["render"] - optional = True - - def process(self, instance): - # Define extract output file path - stagingdir = self.staging_dir(instance) - - # Perform extraction - self.log.info("Performing extraction..") - - # Get the render output directory - project_dir = unreal.Paths.project_dir() - render_dir = (f"{project_dir}/Saved/MovieRenders/" - f"{instance.data['subset']}") - - assert unreal.Paths.directory_exists(render_dir), \ - "Render directory does not exist" - - render_path = Path(render_dir) - - frames = [] - - for x in render_path.iterdir(): - if x.is_file() and x.suffix == '.png': - frames.append(str(x)) - - if "representations" not in instance.data: - instance.data["representations"] = [] - - render_representation = { - 'name': 'png', - 'ext': 'png', - 'files': frames, - "stagingDir": stagingdir, - } - instance.data["representations"].append(render_representation) diff --git a/openpype/hosts/unreal/plugins/publish/extract_uasset.py b/openpype/hosts/unreal/plugins/publish/extract_uasset.py index f719df2a82..48b62faa97 100644 --- a/openpype/hosts/unreal/plugins/publish/extract_uasset.py +++ b/openpype/hosts/unreal/plugins/publish/extract_uasset.py @@ -11,16 +11,17 @@ class ExtractUAsset(publish.Extractor): label = "Extract UAsset" hosts = ["unreal"] - families = ["uasset"] + families = ["uasset", "umap"] optional = True 
def process(self, instance): + extension = ( + "umap" if "umap" in instance.data.get("families") else "uasset") ar = unreal.AssetRegistryHelpers.get_asset_registry() self.log.info("Performing extraction..") - staging_dir = self.staging_dir(instance) - filename = "{}.uasset".format(instance.name) + filename = f"{instance.name}.{extension}" members = instance.data.get("members", []) @@ -36,13 +37,15 @@ class ExtractUAsset(publish.Extractor): shutil.copy(sys_path, staging_dir) + self.log.info(f"instance.data: {instance.data}") + if "representations" not in instance.data: instance.data["representations"] = [] representation = { - 'name': 'uasset', - 'ext': 'uasset', - 'files': filename, + "name": extension, + "ext": extension, + "files": filename, "stagingDir": staging_dir, } instance.data["representations"].append(representation) diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py index e6584e130f..76bb25fac3 100644 --- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py +++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py @@ -31,8 +31,8 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): frames = list(collection.indexes) current_range = (frames[0], frames[-1]) - required_range = (data["frameStart"], - data["frameEnd"]) + required_range = (data["clipIn"], + data["clipOut"]) if current_range != required_range: raise ValueError(f"Invalid frame range: {current_range} - " diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index d1740124a8..e7a690ac9c 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -93,7 +93,7 @@ class UEProjectGenerationWorker(QtCore.QObject): commandlet_cmd = [ f"{ue_editor_exe.as_posix()}", f"{cmdlet_project.as_posix()}", - "-run=OPGenerateProject", + "-run=AyonGenerateProject", f"{project_file.resolve().as_posix()}", ] @@ -286,7 +286,7 @@ class UEPluginInstallWorker(QtCore.QObject): def _build_and_move_plugin(self, plugin_build_path: Path): uat_path: Path = ue_lib.get_path_to_uat(self.engine_path) - src_plugin_dir = Path(self.env.get("OPENPYPE_UNREAL_PLUGIN", "")) + src_plugin_dir = Path(self.env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(src_plugin_dir): msg = "Path to the integration plugin is null!" @@ -300,7 +300,7 @@ class UEPluginInstallWorker(QtCore.QObject): temp_dir: Path = src_plugin_dir.parent / "Temp" temp_dir.mkdir(exist_ok=True) - uplugin_path: Path = src_plugin_dir / "OpenPype.uplugin" + uplugin_path: Path = src_plugin_dir / "Ayon.uplugin" # in order to successfully build the plugin, # It must be built outside the Engine directory and then moved @@ -332,7 +332,7 @@ class UEPluginInstallWorker(QtCore.QObject): raise RuntimeError(msg) # Copy the contents of the 'Temp' dir into the - # 'OpenPype' directory in the engine + # 'Ayon' directory in the engine dir_util.copy_tree(temp_dir.as_posix(), plugin_build_path.as_posix()) @@ -347,7 +347,7 @@ class UEPluginInstallWorker(QtCore.QObject): dir_util.remove_tree(temp_dir.as_posix()) def run(self): - src_plugin_dir = Path(self.env.get("OPENPYPE_UNREAL_PLUGIN", "")) + src_plugin_dir = Path(self.env.get("AYON_UNREAL_PLUGIN", "")) if not os.path.isdir(src_plugin_dir): msg = "Path to the integration plugin is null!" 
@@ -356,7 +356,7 @@ class UEPluginInstallWorker(QtCore.QObject): # Create a path to the plugin in the engine op_plugin_path = self.engine_path / "Engine/Plugins/Marketplace" \ - "/OpenPype" + "/Ayon" if not op_plugin_path.is_dir(): self.installing.emit("Installing and building the plugin ...") diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 9eb7724a60..06de486f2e 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- # flake8: noqa E402 -"""Pype module API.""" +"""OpenPype lib functions.""" # add vendor to sys path based on Python version import sys import os @@ -94,7 +94,8 @@ from .python_module_tools import ( modules_from_path, recursive_bases_from_class, classes_from_module, - import_module_from_dirpath + import_module_from_dirpath, + is_func_signature_supported, ) from .profiles_filtering import ( @@ -243,6 +244,7 @@ __all__ = [ "recursive_bases_from_class", "classes_from_module", "import_module_from_dirpath", + "is_func_signature_supported", "get_transcode_temp_directory", "should_convert_for_ffmpeg", diff --git a/openpype/lib/events.py b/openpype/lib/events.py index bed00fe659..dca58fcf93 100644 --- a/openpype/lib/events.py +++ b/openpype/lib/events.py @@ -6,10 +6,9 @@ import inspect import logging import weakref from uuid import uuid4 -try: - from weakref import WeakMethod -except Exception: - from openpype.lib.python_2_comp import WeakMethod + +from .python_2_comp import WeakMethod +from .python_module_tools import is_func_signature_supported class MissingEventSystem(Exception): @@ -80,40 +79,8 @@ class EventCallback(object): # Get expected arguments from function spec # - positional arguments are always preferred - expect_args = False - expect_kwargs = False - fake_event = "fake" - if hasattr(inspect, "signature"): - # Python 3 using 'Signature' object where we try to bind arg - # or kwarg. Using signature is recommended approach based on - # documentation. 
- sig = inspect.signature(func) - try: - sig.bind(fake_event) - expect_args = True - except TypeError: - pass - - try: - sig.bind(event=fake_event) - expect_kwargs = True - except TypeError: - pass - - else: - # In Python 2 'signature' is not available so 'getcallargs' is used - # - 'getcallargs' is marked as deprecated since Python 3.0 - try: - inspect.getcallargs(func, fake_event) - expect_args = True - except TypeError: - pass - - try: - inspect.getcallargs(func, event=fake_event) - expect_kwargs = True - except TypeError: - pass + expect_args = is_func_signature_supported(func, "fake") + expect_kwargs = is_func_signature_supported(func, event="fake") self._func_ref = func_ref self._func_name = func_name diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index ef456395e7..6f52efdfcc 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -190,7 +190,7 @@ def run_openpype_process(*args, **kwargs): Example: ``` - run_openpype_process("run", "") + run_detached_process("run", "") ``` Args: diff --git a/openpype/lib/python_2_comp.py b/openpype/lib/python_2_comp.py index d7137dbe9c..091c51a6f6 100644 --- a/openpype/lib/python_2_comp.py +++ b/openpype/lib/python_2_comp.py @@ -1,41 +1,44 @@ import weakref -class _weak_callable: - def __init__(self, obj, func): - self.im_self = obj - self.im_func = func +WeakMethod = getattr(weakref, "WeakMethod", None) - def __call__(self, *args, **kws): - if self.im_self is None: - return self.im_func(*args, **kws) - else: - return self.im_func(self.im_self, *args, **kws) +if WeakMethod is None: + class _WeakCallable: + def __init__(self, obj, func): + self.im_self = obj + self.im_func = func + + def __call__(self, *args, **kws): + if self.im_self is None: + return self.im_func(*args, **kws) + else: + return self.im_func(self.im_self, *args, **kws) -class WeakMethod: - """ Wraps a function or, more importantly, a bound method in - a way that allows a bound method's object to be GCed, while - providing the same interface as a normal weak reference. """ + class WeakMethod: + """ Wraps a function or, more importantly, a bound method in + a way that allows a bound method's object to be GCed, while + providing the same interface as a normal weak reference. 
""" - def __init__(self, fn): - try: - self._obj = weakref.ref(fn.im_self) - self._meth = fn.im_func - except AttributeError: - # It's not a bound method - self._obj = None - self._meth = fn + def __init__(self, fn): + try: + self._obj = weakref.ref(fn.im_self) + self._meth = fn.im_func + except AttributeError: + # It's not a bound method + self._obj = None + self._meth = fn - def __call__(self): - if self._dead(): - return None - return _weak_callable(self._getobj(), self._meth) + def __call__(self): + if self._dead(): + return None + return _WeakCallable(self._getobj(), self._meth) - def _dead(self): - return self._obj is not None and self._obj() is None + def _dead(self): + return self._obj is not None and self._obj() is None - def _getobj(self): - if self._obj is None: - return None - return self._obj() + def _getobj(self): + if self._obj is None: + return None + return self._obj() diff --git a/openpype/lib/python_module_tools.py b/openpype/lib/python_module_tools.py index 9e8e94842c..a10263f991 100644 --- a/openpype/lib/python_module_tools.py +++ b/openpype/lib/python_module_tools.py @@ -230,3 +230,70 @@ def import_module_from_dirpath(dirpath, folder_name, dst_module_name=None): dirpath, folder_name, dst_module_name ) return module + + +def is_func_signature_supported(func, *args, **kwargs): + """Check if a function signature supports passed args and kwargs. + + This check does not actually call the function, just look if function can + be called with the arguments. + + Notes: + This does NOT check if the function would work with passed arguments + only if they can be passed in. If function have *args, **kwargs + in paramaters, this will always return 'True'. + + Example: + >>> def my_function(my_number): + ... return my_number + 1 + ... + >>> is_func_signature_supported(my_function, 1) + True + >>> is_func_signature_supported(my_function, 1, 2) + False + >>> is_func_signature_supported(my_function, my_number=1) + True + >>> is_func_signature_supported(my_function, number=1) + False + >>> is_func_signature_supported(my_function, "string") + True + >>> def my_other_function(*args, **kwargs): + ... my_function(*args, **kwargs) + ... + >>> is_func_signature_supported( + ... my_other_function, + ... "string", + ... 1, + ... other=None + ... ) + True + + Args: + func (function): A function where the signature should be tested. + *args (tuple[Any]): Positional arguments for function signature. + **kwargs (dict[str, Any]): Keyword arguments for function signature. + + Returns: + bool: Function can pass in arguments. + """ + + if hasattr(inspect, "signature"): + # Python 3 using 'Signature' object where we try to bind arg + # or kwarg. Using signature is recommended approach based on + # documentation. 
+ sig = inspect.signature(func) + try: + sig.bind(*args, **kwargs) + return True + except TypeError: + pass + + else: + # In Python 2 'signature' is not available so 'getcallargs' is used + # - 'getcallargs' is marked as deprecated since Python 3.0 + try: + inspect.getcallargs(func, *args, **kwargs) + return True + except TypeError: + pass + return False diff --git a/openpype/modules/base.py b/openpype/modules/base.py index ed1eeb04cd..732525b6eb 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -311,6 +311,7 @@ def _load_modules(): # Look for OpenPype modules in paths defined with `get_module_dirs` # - dynamically imported OpenPype modules and addons module_dirs = get_module_dirs() + # Add current directory at first place # - has small differences in import logic current_dir = os.path.abspath(os.path.dirname(__file__)) @@ -318,8 +319,11 @@ def _load_modules(): module_dirs.insert(0, hosts_dir) module_dirs.insert(0, current_dir) + addons_dir = os.path.join(os.path.dirname(current_dir), "addons") + module_dirs.append(addons_dir) + processed_paths = set() - for dirpath in module_dirs: + for dirpath in frozenset(module_dirs): # Skip already processed paths if dirpath in processed_paths: continue diff --git a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py index e6ad6a9aa1..cb2b0cf156 100644 --- a/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py +++ b/openpype/modules/deadline/plugins/publish/collect_default_deadline_server.py @@ -4,7 +4,18 @@ import pyblish.api class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin): - """Collect default Deadline Webservice URL.""" + """Collect default Deadline Webservice URL. + + DL webservice addresses must be configured first in System Settings for + project settings enum to work. + + Default webservice could be overriden by + `project_settings/deadline/deadline_servers`. Currently only single url + is expected. + + This url could be overriden by some hosts directly on instances with + `CollectDeadlineServerFromInstance`. 
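+
+    Example (illustrative server name, not a shipped default):
+        `deadline_servers = ["studio_dl"]` in project settings resolves
+        `deadline_module.deadline_urls["studio_dl"]` and overrides
+        `context.data["defaultDeadline"]`.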
+ """ order = pyblish.api.CollectorOrder + 0.410 label = "Default Deadline Webservice" @@ -23,3 +34,16 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin): context.data["defaultDeadline"] = deadline_module.deadline_urls["default"] # noqa: E501 context.data["deadlinePassMongoUrl"] = self.pass_mongo_url + + deadline_servers = (context.data + ["project_settings"] + ["deadline"] + ["deadline_servers"]) + if deadline_servers: + deadline_server_name = deadline_servers[0] + deadline_webservice = deadline_module.deadline_urls.get( + deadline_server_name) + if deadline_webservice: + context.data["defaultDeadline"] = deadline_webservice + self.log.debug("Overriding from project settings with {}".format( # noqa: E501 + deadline_webservice)) diff --git a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py index 8570c759bc..a48596c6bf 100644 --- a/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_fusion_deadline.py @@ -7,9 +7,19 @@ import requests import pyblish.api from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + OpenPypePyblishPluginMixin +) +from openpype.lib import ( + BoolDef, + NumberDef +) -class FusionSubmitDeadline(pyblish.api.InstancePlugin): +class FusionSubmitDeadline( + pyblish.api.InstancePlugin, + OpenPypePyblishPluginMixin +): """Submit current Comp to Deadline Renders are submitted to a Deadline Web Service as @@ -17,12 +27,62 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin): """ - label = "Submit to Deadline" + label = "Submit Fusion to Deadline" order = pyblish.api.IntegratorOrder hosts = ["fusion"] - families = ["render.farm"] + families = ["render"] + targets = ["local"] + + # presets + priority = 50 + chunk_size = 1 + concurrent_tasks = 1 + group = "" + + @classmethod + def get_attribute_defs(cls): + return [ + NumberDef( + "priority", + label="Priority", + default=cls.priority, + decimals=0 + ), + NumberDef( + "chunk", + label="Frames Per Task", + default=cls.chunk_size, + decimals=0, + minimum=1, + maximum=1000 + ), + NumberDef( + "concurrency", + label="Concurrency", + default=cls.concurrent_tasks, + decimals=0, + minimum=1, + maximum=10 + ), + BoolDef( + "suspend_publish", + default=False, + label="Suspend publish" + ) + ] def process(self, instance): + if not instance.data.get("farm"): + self.log.debug("Skipping local instance.") + return + + attribute_values = self.get_attr_values_from_data( + instance.data) + + # add suspend_publish attributeValue to instance data + instance.data["suspend_publish"] = attribute_values[ + "suspend_publish"] + context = instance.context key = "__hasRun{}".format(self.__class__.__name__) @@ -33,36 +93,55 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin): from openpype.hosts.fusion.api.lib import get_frame_path - deadline_url = ( - context.data["system_settings"] - ["modules"] - ["deadline"] - ["DEADLINE_REST_URL"] - ) - assert deadline_url, "Requires DEADLINE_REST_URL" + # get default deadline webservice url from deadline module + deadline_url = instance.context.data["defaultDeadline"] + # if custom one is set in instance, use that + if instance.data.get("deadlineUrl"): + deadline_url = instance.data.get("deadlineUrl") + assert deadline_url, "Requires Deadline Webservice URL" # Collect all saver instances in context that are to be rendered saver_instances = [] - for instance in context[:]: - if not self.families[0] in 
instance.data.get("families"): + for instance in context: + if instance.data["family"] != "render": # Allow only saver family instances continue if not instance.data.get("publish", True): # Skip inactive instances continue + self.log.debug(instance.data["name"]) saver_instances.append(instance) if not saver_instances: - raise RuntimeError("No instances found for Deadline submittion") + raise RuntimeError("No instances found for Deadline submission") - fusion_version = int(context.data["fusionVersion"]) - filepath = context.data["currentFile"] - filename = os.path.basename(filepath) - comment = context.data.get("comment", "") + comment = instance.data.get("comment", "") deadline_user = context.data.get("deadlineUser", getpass.getuser()) + script_path = context.data["currentFile"] + + for item in context: + if "workfile" in item.data["families"]: + msg = "Workfile (scene) must be published along" + assert item.data["publish"] is True, msg + + template_data = item.data.get("anatomyData") + rep = item.data.get("representations")[0].get("name") + template_data["representation"] = rep + template_data["ext"] = rep + template_data["comment"] = None + anatomy_filled = context.data["anatomy"].format(template_data) + template_filled = anatomy_filled["publish"]["path"] + script_path = os.path.normpath(template_filled) + + self.log.info( + "Using published scene for render {}".format(script_path) + ) + + filename = os.path.basename(script_path) + # Documentation for keys available at: # https://docs.thinkboxsoftware.com # /products/deadline/8.0/1_User%20Manual/manual @@ -73,31 +152,41 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin): "BatchName": filename, # Asset dependency to wait for at least the scene file to sync. - "AssetDependency0": filepath, + "AssetDependency0": script_path, # Job name, as seen in Monitor "Name": filename, + "Priority": attribute_values.get( + "priority", self.priority), + "ChunkSize": attribute_values.get( + "chunk", self.chunk_size), + "ConcurrentTasks": attribute_values.get( + "concurrency", + self.concurrent_tasks + ), + # User, as seen in Monitor "UserName": deadline_user, - # Use a default submission pool for Fusion - "Pool": "fusion", + "Pool": instance.data.get("primaryPool"), + "SecondaryPool": instance.data.get("secondaryPool"), + "Group": self.group, "Plugin": "Fusion", "Frames": "{start}-{end}".format( - start=int(context.data["frameStart"]), - end=int(context.data["frameEnd"]) + start=int(instance.data["frameStartHandle"]), + end=int(instance.data["frameEndHandle"]) ), "Comment": comment, }, "PluginInfo": { # Input - "FlowFile": filepath, + "FlowFile": script_path, # Mandatory for Deadline - "Version": str(fusion_version), + "Version": str(instance.data["app_version"]), # Render in high quality "HighQuality": True, @@ -108,7 +197,7 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin): # Proxy: higher numbers smaller images for faster test renders # 1 = no proxy quality - "Proxy": 1, + "Proxy": 1 }, # Mandatory for Deadline, may be empty @@ -117,7 +206,9 @@ class FusionSubmitDeadline(pyblish.api.InstancePlugin): # Enable going to rendered frames from Deadline Monitor for index, instance in enumerate(saver_instances): - head, padding, tail = get_frame_path(instance.data["path"]) + head, padding, tail = get_frame_path( + instance.data["expectedFiles"][0] + ) path = "{}{}{}".format(head, "#" * padding, tail) folder, filename = os.path.split(path) payload["JobInfo"]["OutputDirectory%d" % index] = folder diff --git 
a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py index 5c598df94b..4900231783 100644 --- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py @@ -86,7 +86,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, def process(self, instance): if not instance.data.get("farm"): - self.log.info("Skipping local instance.") + self.log.debug("Skipping local instance.") return instance.data["attributeValues"] = self.get_attr_values_from_data( diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index f80bd40133..f646551a07 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -275,7 +275,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): args = [ "--headless", 'publish', - rootless_metadata_path, + '"{}"'.format(rootless_metadata_path), "--targets", "deadline", "--targets", "farm" ] @@ -438,7 +438,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "Finished copying %i files" % len(resource_files)) def _create_instances_for_aov( - self, instance_data, exp_files, additional_data + self, instance_data, exp_files, additional_data, do_not_add_review ): """Create instance for each AOV found. @@ -449,6 +449,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance_data (pyblish.plugin.Instance): skeleton data for instance (those needed) later by collector exp_files (list): list of expected files divided by aovs + additional_data (dict): + do_not_add_review (bool): explicitly skip review Returns: list of instances @@ -514,8 +516,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): app = os.environ.get("AVALON_APP", "") - preview = False - if isinstance(col, list): render_file_name = os.path.basename(col[0]) else: @@ -532,6 +532,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): new_instance = deepcopy(instance_data) new_instance["subset"] = subset_name new_instance["subsetGroup"] = group_name + + preview = preview and not do_not_add_review if preview: new_instance["review"] = True @@ -591,7 +593,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): self.log.debug("instances:{}".format(instances)) return instances - def _get_representations(self, instance, exp_files): + def _get_representations(self, instance, exp_files, do_not_add_review): """Create representations for file sequences. 
This will return representations of expected files if they are not @@ -602,6 +604,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance (dict): instance data for which we are setting representations exp_files (list): list of expected files + do_not_add_review (bool): explicitly skip review Returns: list of representations @@ -651,6 +654,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): if instance.get("slate"): frame_start -= 1 + preview = preview and not do_not_add_review rep = { "name": ext, "ext": ext, @@ -705,6 +709,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): preview = match_aov_pattern( host_name, self.aov_filter, remainder ) + preview = preview and not do_not_add_review if preview: rep.update({ "fps": instance.get("fps"), @@ -757,7 +762,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): """ if not instance.data.get("farm"): - self.log.info("Skipping local instance.") + self.log.debug("Skipping local instance.") return data = instance.data.copy() @@ -820,8 +825,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): families = [family] # pass review to families if marked as review + do_not_add_review = False if data.get("review"): families.append("review") + elif data.get("review") == False: + self.log.debug("Instance has review explicitly disabled.") + do_not_add_review = True instance_skeleton_data = { "family": family, @@ -977,7 +986,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instances = self._create_instances_for_aov( instance_skeleton_data, data.get("expectedFiles"), - additional_data + additional_data, + do_not_add_review ) self.log.info("got {} instance{}".format( len(instances), @@ -986,7 +996,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): else: representations = self._get_representations( instance_skeleton_data, - data.get("expectedFiles") + data.get("expectedFiles"), + do_not_add_review ) if "representations" not in instance_skeleton_data.keys(): diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py index 7c8ab62d4d..e1c0595830 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py @@ -26,7 +26,7 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin, def process(self, instance): if not instance.data.get("farm"): - self.log.info("Skipping local instance.") + self.log.debug("Skipping local instance.") return # get default deadline webservice url from deadline module diff --git a/openpype/modules/ftrack/ftrack_server/lib.py b/openpype/modules/ftrack/ftrack_server/lib.py index eb64063fab..2226c85ef9 100644 --- a/openpype/modules/ftrack/ftrack_server/lib.py +++ b/openpype/modules/ftrack/ftrack_server/lib.py @@ -196,7 +196,7 @@ class ProcessEventHub(SocketBaseEventHub): {"pype_data.is_processed": False} ).sort( [("pype_data.stored", pymongo.ASCENDING)] - ) + ).limit(100) found = False for event_data in not_processed_events: diff --git a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py index 9f35424d42..6daaea5f18 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py @@ -378,7 +378,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin): 
existing_tasks.append(task_name_low) for instance in instances_by_task_name[task_name_low]: - instance["ftrackTask"] = child + instance.data["ftrackTask"] = child for task_name in tasks: task_type = tasks[task_name]["type"] diff --git a/openpype/modules/ftrack/tray/login_dialog.py b/openpype/modules/ftrack/tray/login_dialog.py index f374a71178..a8abdaf191 100644 --- a/openpype/modules/ftrack/tray/login_dialog.py +++ b/openpype/modules/ftrack/tray/login_dialog.py @@ -1,5 +1,3 @@ -import os - import requests from qtpy import QtCore, QtGui, QtWidgets diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py index f8e56377bb..6e5dd056f3 100644 --- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py +++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py @@ -9,7 +9,7 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin): order = pyblish.api.IntegratorOrder label = "Kitsu Note and Status" - families = ["render", "kitsu"] + families = ["render", "image", "online", "plate", "kitsu"] # status settings set_status_note = False @@ -52,8 +52,9 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin): for instance in context: # Check if instance is a review by checking its family # Allow a match to primary family or any of families - families = set([instance.data["family"]] + - instance.data.get("families", [])) + families = set( + [instance.data["family"]] + instance.data.get("families", []) + ) if "review" not in families: continue diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py index e05ff05f50..bbed4a3024 100644 --- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py +++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py @@ -8,11 +8,10 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder + 0.01 label = "Kitsu Review" - families = ["render", "kitsu"] + families = ["render", "image", "online", "plate", "kitsu"] optional = True def process(self, instance): - # Check comment has been created comment_id = instance.data.get("kitsu_comment", {}).get("id") if not comment_id: diff --git a/openpype/modules/muster/muster.py b/openpype/modules/muster/muster.py index 77b9214a5a..0cdb1230c8 100644 --- a/openpype/modules/muster/muster.py +++ b/openpype/modules/muster/muster.py @@ -1,7 +1,9 @@ import os import json + import appdirs import requests + from openpype.modules import OpenPypeModule, ITrayModule @@ -110,16 +112,10 @@ class MusterModule(OpenPypeModule, ITrayModule): self.save_credentials(token) def save_credentials(self, token): - """ - Save credentials to JSON file - """ - data = { - 'token': token - } + """Save credentials to JSON file.""" - file = open(self.cred_path, 'w') - file.write(json.dumps(data)) - file.close() + with open(self.cred_path, "w") as f: + json.dump({'token': token}, f) def show_login(self): """ diff --git a/openpype/hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py similarity index 56% rename from openpype/hooks/pre_copy_last_published_workfile.py rename to openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py index 26b43c39cb..bbc220945c 100644 --- a/openpype/hooks/pre_copy_last_published_workfile.py +++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py @@ -1,15 +1,20 @@ import os 
import shutil -from time import sleep + from openpype.client.entities import ( - get_last_version_by_subset_id, get_representations, - get_subsets, + get_project ) + from openpype.lib import PreLaunchHook -from openpype.lib.local_settings import get_local_site_id from openpype.lib.profiles_filtering import filter_profiles -from openpype.pipeline.load.utils import get_representation_path +from openpype.modules.sync_server.sync_server import ( + download_last_published_workfile, +) +from openpype.pipeline.template_data import get_template_data +from openpype.pipeline.workfile.path_resolving import ( + get_workfile_template_key, +) from openpype.settings.lib import get_project_settings @@ -22,7 +27,11 @@ class CopyLastPublishedWorkfile(PreLaunchHook): # Before `AddLastWorkfileToLaunchArgs` order = -1 - app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"] + # any DCC could be used but TrayPublisher and other specials + app_groups = ["blender", "photoshop", "tvpaint", "aftereffects", + "nuke", "nukeassist", "nukex", "hiero", "nukestudio", + "maya", "harmony", "celaction", "flame", "fusion", + "houdini", "tvpaint"] def execute(self): """Check if local workfile doesn't exist, else copy it. @@ -31,11 +40,11 @@ class CopyLastPublishedWorkfile(PreLaunchHook): 2- Check if workfile in work area doesn't exist 3- Check if published workfile exists and is copied locally in publish 4- Substitute copied published workfile as first workfile + with incremented version by +1 Returns: None: This is a void method. """ - sync_server = self.modules_manager.get("sync_server") if not sync_server or not sync_server.enabled: self.log.debug("Sync server module is not enabled or available") @@ -53,6 +62,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook): # Get data project_name = self.data["project_name"] + asset_name = self.data["asset_name"] task_name = self.data["task_name"] task_type = self.data["task_type"] host_name = self.application.host_name @@ -68,6 +78,8 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "hosts": host_name, } last_workfile_settings = filter_profiles(profiles, filter_data) + if not last_workfile_settings: + return use_last_published_workfile = last_workfile_settings.get( "use_last_published_workfile" ) @@ -92,57 +104,27 @@ class CopyLastPublishedWorkfile(PreLaunchHook): ) return + max_retries = int((sync_server.sync_project_settings[project_name] + ["config"] + ["retry_cnt"])) + self.log.info("Trying to fetch last published workfile...") - project_doc = self.data.get("project_doc") asset_doc = self.data.get("asset_doc") anatomy = self.data.get("anatomy") - # Check it can proceed - if not project_doc and not asset_doc: - return + context_filters = { + "asset": asset_name, + "family": "workfile", + "task": {"name": task_name, "type": task_type} + } - # Get subset id - subset_id = next( - ( - subset["_id"] - for subset in get_subsets( - project_name, - asset_ids=[asset_doc["_id"]], - fields=["_id", "data.family", "data.families"], - ) - if subset["data"].get("family") == "workfile" - # Legacy compatibility - or "workfile" in subset["data"].get("families", {}) - ), - None, - ) - if not subset_id: - self.log.debug( - 'No any workfile for asset "{}".'.format(asset_doc["name"]) - ) - return + workfile_representations = list(get_representations( + project_name, + context_filters=context_filters + )) - # Get workfile representation - last_version_doc = get_last_version_by_subset_id( - project_name, subset_id, fields=["_id"] - ) - if not last_version_doc: - self.log.debug("Subset does not 
have any versions") - return - - workfile_representation = next( - ( - representation - for representation in get_representations( - project_name, version_ids=[last_version_doc["_id"]] - ) - if representation["context"]["task"]["name"] == task_name - ), - None, - ) - - if not workfile_representation: + if not workfile_representations: self.log.debug( 'No published workfile for task "{}" and host "{}".'.format( task_name, host_name @@ -150,28 +132,55 @@ class CopyLastPublishedWorkfile(PreLaunchHook): ) return - local_site_id = get_local_site_id() - sync_server.add_site( - project_name, - workfile_representation["_id"], - local_site_id, - force=True, - priority=99, - reset_timer=True, + filtered_repres = filter( + lambda r: r["context"].get("version") is not None, + workfile_representations ) - - while not sync_server.is_representation_on_site( - project_name, workfile_representation["_id"], local_site_id - ): - sleep(5) - - # Get paths - published_workfile_path = get_representation_path( - workfile_representation, root=anatomy.roots + workfile_representation = max( + filtered_repres, key=lambda r: r["context"]["version"] ) - local_workfile_dir = os.path.dirname(last_workfile) # Copy file and substitute path - self.data["last_workfile_path"] = shutil.copy( - published_workfile_path, local_workfile_dir + last_published_workfile_path = download_last_published_workfile( + host_name, + project_name, + task_name, + workfile_representation, + max_retries, + anatomy=anatomy ) + if not last_published_workfile_path: + self.log.debug( + "Couldn't download {}".format(last_published_workfile_path) + ) + return + + project_doc = self.data["project_doc"] + + project_settings = self.data["project_settings"] + template_key = get_workfile_template_key( + task_name, host_name, project_name, project_settings + ) + + # Get workfile data + workfile_data = get_template_data( + project_doc, asset_doc, task_name, host_name + ) + + extension = last_published_workfile_path.split(".")[-1] + workfile_data["version"] = ( + workfile_representation["context"]["version"] + 1) + workfile_data["ext"] = extension + + anatomy_result = anatomy.format(workfile_data) + local_workfile_path = anatomy_result[template_key]["path"] + + # Copy last published workfile to local workfile directory + shutil.copy( + last_published_workfile_path, + local_workfile_path, + ) + + self.data["last_workfile_path"] = local_workfile_path + # Keep source filepath for further path conformation + self.data["source_filepath"] = last_published_workfile_path diff --git a/openpype/modules/sync_server/sync_server.py b/openpype/modules/sync_server/sync_server.py index 5b873a37cf..98065b68a0 100644 --- a/openpype/modules/sync_server/sync_server.py +++ b/openpype/modules/sync_server/sync_server.py @@ -3,10 +3,15 @@ import os import asyncio import threading import concurrent.futures -from concurrent.futures._base import CancelledError +from time import sleep from .providers import lib +from openpype.client.entity_links import get_linked_representation_id from openpype.lib import Logger +from openpype.lib.local_settings import get_local_site_id +from openpype.modules.base import ModulesManager +from openpype.pipeline import Anatomy +from openpype.pipeline.load.utils import get_representation_path_with_anatomy from .utils import SyncStatus, ResumableError @@ -189,6 +194,97 @@ def _site_is_working(module, project_name, site_name, site_config): return handler.is_active() +def download_last_published_workfile( + host_name: str, + project_name: str, + task_name: str, 
+ workfile_representation: dict, + max_retries: int, + anatomy: Anatomy = None, +) -> str: + """Download the last published workfile + + Args: + host_name (str): Host name. + project_name (str): Project name. + task_name (str): Task name. + workfile_representation (dict): Workfile representation. + max_retries (int): complete file failure only after so many attempts + anatomy (Anatomy, optional): Anatomy (Used for optimization). + Defaults to None. + + Returns: + str: last published workfile path localized + """ + + if not anatomy: + anatomy = Anatomy(project_name) + + # Get sync server module + sync_server = ModulesManager().modules_by_name.get("sync_server") + if not sync_server or not sync_server.enabled: + print("Sync server module is disabled or unavailable.") + return + + if not workfile_representation: + print( + "Not published workfile for task '{}' and host '{}'.".format( + task_name, host_name + ) + ) + return + + last_published_workfile_path = get_representation_path_with_anatomy( + workfile_representation, anatomy + ) + if not last_published_workfile_path: + return + + # If representation isn't available on remote site, then return. + if not sync_server.is_representation_on_site( + project_name, + workfile_representation["_id"], + sync_server.get_remote_site(project_name), + ): + print( + "Representation for task '{}' and host '{}'".format( + task_name, host_name + ) + ) + return + + # Get local site + local_site_id = get_local_site_id() + + # Add workfile representation to local site + representation_ids = {workfile_representation["_id"]} + representation_ids.update( + get_linked_representation_id( + project_name, repre_id=workfile_representation["_id"] + ) + ) + for repre_id in representation_ids: + if not sync_server.is_representation_on_site(project_name, repre_id, + local_site_id): + sync_server.add_site( + project_name, + repre_id, + local_site_id, + force=True, + priority=99 + ) + sync_server.reset_timer() + print("Starting to download:{}".format(last_published_workfile_path)) + # While representation unavailable locally, wait. + while not sync_server.is_representation_on_site( + project_name, workfile_representation["_id"], local_site_id, + max_retries=max_retries + ): + sleep(5) + + return last_published_workfile_path + + class SyncServerThread(threading.Thread): """ Separate thread running synchronization server with asyncio loop. @@ -358,7 +454,6 @@ class SyncServerThread(threading.Thread): duration = time.time() - start_time self.log.debug("One loop took {:.2f}s".format(duration)) - delay = self.module.get_loop_delay(project_name) self.log.debug( "Waiting for {} seconds to new loop".format(delay) @@ -370,8 +465,8 @@ class SyncServerThread(threading.Thread): self.log.warning( "ConnectionResetError in sync loop, trying next loop", exc_info=True) - except CancelledError: - # just stopping server + except asyncio.exceptions.CancelledError: + # cancelling timer pass except ResumableError: self.log.warning( diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index 5a4fa07e98..b85b045bd9 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -838,6 +838,18 @@ class SyncServerModule(OpenPypeModule, ITrayModule): return ret_dict + def get_launch_hook_paths(self): + """Implementation for applications launch hooks. 
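# A minimal sketch of the wait-with-bail-out pattern used by
# download_last_published_workfile above: poll until the representation is
# reported on the local site, relying on the availability check itself to
# raise once a site has accumulated too many failed sync attempts (see
# is_representation_on_site further below). `is_on_site` is an illustrative
# stand-in for that bound check.
from time import sleep


def wait_for_local_availability(is_on_site, max_retries, interval=5):
    while not is_on_site(max_retries=max_retries):
        sleep(interval)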
+ + Returns: + (str): full absolut path to directory with hooks for the module + """ + + return os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "launch_hooks" + ) + # Needs to be refactored after Settings are updated # # Methods for Settings to get appriate values to fill forms # def get_configurable_items(self, scope=None): @@ -1045,9 +1057,23 @@ class SyncServerModule(OpenPypeModule, ITrayModule): self.sync_server_thread.reset_timer() def is_representation_on_site( - self, project_name, representation_id, site_name + self, project_name, representation_id, site_name, max_retries=None ): - """Checks if 'representation_id' has all files avail. on 'site_name'""" + """Checks if 'representation_id' has all files avail. on 'site_name' + + Args: + project_name (str) + representation_id (str) + site_name (str) + max_retries (int) (optional) - provide only if method used in while + loop to bail out + Returns: + (bool): True if 'representation_id' has all files correctly on the + 'site_name' + Raises: + (ValueError) Only If 'max_retries' provided if upload/download + failed too many times to limit infinite loop check. + """ representation = get_representation_by_id(project_name, representation_id, fields=["_id", "files"]) @@ -1060,6 +1086,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule): if site["name"] != site_name: continue + if max_retries: + tries = self._get_tries_count_from_rec(site) + if tries >= max_retries: + raise ValueError("Failed too many times") + if (site.get("progress") or site.get("error") or not site.get("created_dt")): return False diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index 7a2ef59a5a..d656d58adc 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -1,5 +1,6 @@ from .constants import ( AVALON_CONTAINER_ID, + AYON_CONTAINER_ID, HOST_WORKFILE_EXTENSIONS, ) @@ -99,6 +100,7 @@ uninstall = uninstall_host __all__ = ( "AVALON_CONTAINER_ID", + "AYON_CONTAINER_ID", "HOST_WORKFILE_EXTENSIONS", # --- MongoDB --- diff --git a/openpype/pipeline/constants.py b/openpype/pipeline/constants.py index e6496cbf95..755a5fb380 100644 --- a/openpype/pipeline/constants.py +++ b/openpype/pipeline/constants.py @@ -1,5 +1,5 @@ # Metadata ID of loaded container into scene -AVALON_CONTAINER_ID = "pyblish.avalon.container" +AVALON_CONTAINER_ID = AYON_CONTAINER_ID = "pyblish.avalon.container" # TODO get extensions from host implementations HOST_WORKFILE_EXTENSIONS = { diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index dede2b8fce..ada78b989d 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -35,6 +35,7 @@ from . 
import ( register_inventory_action_path, register_creator_plugin_path, deregister_loader_plugin_path, + deregister_inventory_action_path, ) @@ -54,6 +55,7 @@ PLUGINS_DIR = os.path.join(PACKAGE_DIR, "plugins") # Global plugin paths PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") +INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") def _get_modules_manager(): @@ -158,6 +160,7 @@ def install_openpype_plugins(project_name=None, host_name=None): pyblish.api.register_plugin_path(PUBLISH_PATH) pyblish.api.register_discovery_filter(filter_pyblish_plugins) register_loader_plugin_path(LOAD_PATH) + register_inventory_action_path(INVENTORY_PATH) if host_name is None: host_name = os.environ.get("AVALON_APP") @@ -223,6 +226,7 @@ def uninstall_host(): pyblish.api.deregister_plugin_path(PUBLISH_PATH) pyblish.api.deregister_discovery_filter(filter_pyblish_plugins) deregister_loader_plugin_path(LOAD_PATH) + deregister_inventory_action_path(INVENTORY_PATH) log.info("Global plug-ins unregistred") deregister_host() diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 382bbea05e..2fc0669732 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -23,7 +23,7 @@ from openpype.lib.attribute_definitions import ( get_default_values, ) from openpype.host import IPublishHost, IWorkfileHost -from openpype.pipeline import legacy_io +from openpype.pipeline import legacy_io, Anatomy from openpype.pipeline.plugin_discover import DiscoverResult from .creator_plugins import ( @@ -1383,6 +1383,8 @@ class CreateContext: self._current_task_name = None self._current_workfile_path = None + self._current_project_anatomy = None + self._host_is_valid = host_is_valid # Currently unused variable self.headless = headless @@ -1546,6 +1548,18 @@ class CreateContext: return self._current_workfile_path + def get_current_project_anatomy(self): + """Project anatomy for current project. + + Returns: + Anatomy: Anatomy object ready to be used. + """ + + if self._current_project_anatomy is None: + self._current_project_anatomy = Anatomy( + self._current_project_name) + return self._current_project_anatomy + @property def context_has_changed(self): """Host context has changed. @@ -1568,6 +1582,7 @@ class CreateContext: ) project_name = property(get_current_project_name) + project_anatomy = property(get_current_project_anatomy) @property def log(self): @@ -1680,6 +1695,8 @@ class CreateContext: self._current_task_name = task_name self._current_workfile_path = workfile_path + self._current_project_anatomy = None + def reset_plugins(self, discover_publish_plugins=True): """Reload plugins. diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py index bd3fbaf78f..9e47e9cc12 100644 --- a/openpype/pipeline/create/creator_plugins.py +++ b/openpype/pipeline/create/creator_plugins.py @@ -231,10 +231,24 @@ class BaseCreator: @property def project_name(self): - """Family that plugin represents.""" + """Current project name. + + Returns: + str: Name of a project. + """ return self.create_context.project_name + @property + def project_anatomy(self): + """Current project anatomy. + + Returns: + Anatomy: Project anatomy object. 
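# A minimal sketch of the lazy caching behind the project_anatomy property
# above: the Anatomy object is created on first access and dropped when the
# context is reset. `anatomy_factory` stands in for the Anatomy class.


class _AnatomyCacheSketch:
    def __init__(self, project_name, anatomy_factory):
        self._project_name = project_name
        self._anatomy_factory = anatomy_factory
        self._anatomy = None

    @property
    def project_anatomy(self):
        if self._anatomy is None:
            self._anatomy = self._anatomy_factory(self._project_name)
        return self._anatomy

    def reset_context(self, project_name):
        self._project_name = project_name
        # Invalidate the cache so the next access rebuilds it.
        self._anatomy = None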
+ """ + + return self.create_context.project_anatomy + @property def host(self): return self.create_context.host diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py index 36252c9f3d..0c57915c05 100644 --- a/openpype/pipeline/publish/__init__.py +++ b/openpype/pipeline/publish/__init__.py @@ -36,6 +36,10 @@ from .lib import ( context_plugin_should_run, get_instance_staging_dir, get_publish_repre_path, + + apply_plugin_settings_automatically, + get_plugin_settings, + get_publish_instance_label, ) from .abstract_expected_files import ExpectedFiles @@ -80,6 +84,10 @@ __all__ = ( "get_instance_staging_dir", "get_publish_repre_path", + "apply_plugin_settings_automatically", + "get_plugin_settings", + "get_publish_instance_label", + "ExpectedFiles", "RenderInstance", diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py index ccb2415346..6877d556c3 100644 --- a/openpype/pipeline/publish/abstract_collect_render.py +++ b/openpype/pipeline/publish/abstract_collect_render.py @@ -58,7 +58,7 @@ class RenderInstance(object): # With default values # metadata renderer = attr.ib(default="") # renderer - can be used in Deadline - review = attr.ib(default=False) # generate review from instance (bool) + review = attr.ib(default=None) # False - explicitly skip review priority = attr.ib(default=50) # job priority on farm family = attr.ib(default="renderlayer") @@ -167,16 +167,25 @@ class AbstractCollectRender(pyblish.api.ContextPlugin): frame_start_render = int(render_instance.frameStart) frame_end_render = int(render_instance.frameEnd) + # TODO: Refactor hacky frame range workaround below if (render_instance.ignoreFrameHandleCheck or int(context.data['frameStartHandle']) == frame_start_render and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501 - + # only for Harmony where frame range cannot be set by DB handle_start = context.data['handleStart'] handle_end = context.data['handleEnd'] frame_start = context.data['frameStart'] frame_end = context.data['frameEnd'] frame_start_handle = context.data['frameStartHandle'] frame_end_handle = context.data['frameEndHandle'] + elif (hasattr(render_instance, "frameStartHandle") + and hasattr(render_instance, "frameEndHandle")): + handle_start = int(render_instance.handleStart) + handle_end = int(render_instance.handleEnd) + frame_start = int(render_instance.frameStart) + frame_end = int(render_instance.frameEnd) + frame_start_handle = int(render_instance.frameStartHandle) + frame_end_handle = int(render_instance.frameEndHandle) else: handle_start = 0 handle_end = 0 diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py index 265a9c7822..471be5ddb8 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -1,19 +1,19 @@ import os import sys -import types import inspect import copy import tempfile import xml.etree.ElementTree -import six +import pyblish.util import pyblish.plugin import pyblish.api from openpype.lib import ( Logger, import_filepath, - filter_profiles + filter_profiles, + is_func_signature_supported, ) from openpype.settings import ( get_project_settings, @@ -41,7 +41,9 @@ def get_template_name_profiles( Args: project_name (str): Name of project where to look for templates. - project_settings(Dic[str, Any]): Prepared project settings. + project_settings (Dict[str, Any]): Prepared project settings. 
+ logger (Optional[logging.Logger]): Logger object to be used instead + of default logger. Returns: List[Dict[str, Any]]: Publish template profiles. @@ -102,7 +104,9 @@ def get_hero_template_name_profiles( Args: project_name (str): Name of project where to look for templates. - project_settings(Dic[str, Any]): Prepared project settings. + project_settings (Dict[str, Any]): Prepared project settings. + logger (Optional[logging.Logger]): Logger object to be used instead + of default logger. Returns: List[Dict[str, Any]]: Publish template profiles. @@ -171,9 +175,10 @@ def get_publish_template_name( project_name (str): Name of project where to look for settings. host_name (str): Name of host integration. family (str): Family for which should be found template. - task_name (str): Task name on which is intance working. - task_type (str): Task type on which is intance working. - project_setting (Dict[str, Any]): Prepared project settings. + task_name (str): Task name on which is instance working. + task_type (str): Task type on which is instance working. + project_settings (Dict[str, Any]): Prepared project settings. + hero (bool): Template is for hero version publishing. logger (logging.Logger): Custom logger used for 'filter_profiles' function. @@ -263,19 +268,18 @@ def load_help_content_from_plugin(plugin): def publish_plugins_discover(paths=None): """Find and return available pyblish plug-ins - Overridden function from `pyblish` module to be able collect crashed files - and reason of their crash. + Overridden function from `pyblish` module to be able to collect + crashed files and reason of their crash. Arguments: paths (list, optional): Paths to discover plug-ins from. If no paths are provided, all paths are searched. - """ # The only difference with `pyblish.api.discover` result = DiscoverResult(pyblish.api.Plugin) - plugins = dict() + plugins = {} plugin_names = [] allow_duplicates = pyblish.plugin.ALLOW_DUPLICATES @@ -301,7 +305,7 @@ def publish_plugins_discover(paths=None): mod_name, mod_ext = os.path.splitext(fname) - if not mod_ext == ".py": + if mod_ext != ".py": continue try: @@ -319,6 +323,14 @@ def publish_plugins_discover(paths=None): continue for plugin in pyblish.plugin.plugins_from_module(module): + # Ignore base plugin classes + # NOTE 'pyblish.api.discover' does not ignore them! + if ( + plugin is pyblish.api.Plugin + or plugin is pyblish.api.ContextPlugin + or plugin is pyblish.api.InstancePlugin + ): + continue if not allow_duplicates and plugin.__name__ in plugin_names: result.duplicated_plugins.append(plugin) log.debug("Duplicate plug-in found: %s", plugin) @@ -354,29 +366,55 @@ def publish_plugins_discover(paths=None): return result -def _get_plugin_settings(host_name, project_settings, plugin, log): +def get_plugin_settings(plugin, project_settings, log, category=None): """Get plugin settings based on host name and plugin name. + Note: + Default implementation of automated settings is passing host name + into 'category'. + Args: - host_name (str): Name of host. + plugin (pyblish.Plugin): Plugin where settings are applied. project_settings (dict[str, Any]): Project settings. - plugin (pyliblish.Plugin): Plugin where settings are applied. log (logging.Logger): Logger to log messages. + category (Optional[str]): Settings category key where to look + for plugin settings. Returns: dict[str, Any]: Plugin settings {'attribute': 'value'}. 
""" - # Use project settings from host name category when available - try: - return ( - project_settings - [host_name] - ["publish"] - [plugin.__name__] - ) - except KeyError: - pass + # Plugin can define settings category by class attribute + # - it's impossible to set `settings_category` via settings because + # obviously settings are not applied before it. + # - if `settings_category` is set the fallback category method is ignored + settings_category = getattr(plugin, "settings_category", None) + if settings_category: + try: + return ( + project_settings + [settings_category] + ["publish"] + [plugin.__name__] + ) + except KeyError: + log.warning(( + "Couldn't find plugin '{}' settings" + " under settings category '{}'" + ).format(plugin.__name__, settings_category)) + return {} + + # Use project settings based on a category name + if category: + try: + return ( + project_settings + [category] + ["publish"] + [plugin.__name__] + ) + except KeyError: + pass # Settings category determined from path # - usually path is './/plugins/publish/' @@ -385,9 +423,10 @@ def _get_plugin_settings(host_name, project_settings, plugin, log): split_path = filepath.rsplit(os.path.sep, 5) if len(split_path) < 4: - log.warning( - 'plugin path too short to extract host {}'.format(filepath) - ) + log.debug(( + "Plugin path is too short to automatically" + " extract settings category. {}" + ).format(filepath)) return {} category_from_file = split_path[-4] @@ -409,6 +448,28 @@ def _get_plugin_settings(host_name, project_settings, plugin, log): return {} +def apply_plugin_settings_automatically(plugin, settings, logger=None): + """Automatically apply plugin settings to a plugin object. + + Note: + This function was created to be able to use it in custom overrides of + 'apply_settings' class method. + + Args: + plugin (type[pyblish.api.Plugin]): Class of a plugin. + settings (dict[str, Any]): Plugin specific settings. + logger (Optional[logging.Logger]): Logger to log debug messages about + applied settings values. + """ + + for option, value in settings.items(): + if logger: + logger.debug("Plugin {} - Attr: {} -> {}".format( + option, value, plugin.__name__ + )) + setattr(plugin, option, value) + + def filter_pyblish_plugins(plugins): """Pyblish plugin filter which applies OpenPype settings. 
@@ -436,12 +497,26 @@ def filter_pyblish_plugins(plugins): # iterate over plugins for plugin in plugins[:]: # Apply settings to plugins - if hasattr(plugin, "apply_settings"): + + apply_settings_func = getattr(plugin, "apply_settings", None) + if apply_settings_func is not None: # Use classmethod 'apply_settings' # - can be used to target settings from custom settings place # - skip default behavior when successful try: - plugin.apply_settings(project_settings, system_settings) + # Support to pass only project settings + # - make sure that both settings are passed, when can be + # - that covers cases when *args are in method parameters + both_supported = is_func_signature_supported( + apply_settings_func, project_settings, system_settings + ) + project_supported = is_func_signature_supported( + apply_settings_func, project_settings + ) + if not both_supported and project_supported: + plugin.apply_settings(project_settings) + else: + plugin.apply_settings(project_settings, system_settings) except Exception: log.warning( @@ -452,13 +527,10 @@ def filter_pyblish_plugins(plugins): ) else: # Automated - plugin_settins = _get_plugin_settings( - host_name, project_settings, plugin, log + plugin_settins = get_plugin_settings( + plugin, project_settings, log, host_name ) - for option, value in plugin_settins.items(): - log.info("setting {}:{} on plugin {}".format( - option, value, plugin.__name__)) - setattr(plugin, option, value) + apply_plugin_settings_automatically(plugin, plugin_settins, log) # Remove disabled plugins if getattr(plugin, "enabled", True) is False: @@ -478,10 +550,10 @@ def find_close_plugin(close_plugin_name, log): def remote_publish(log, close_plugin_name=None, raise_error=False): """Loops through all plugins, logs to console. Used for tests. - Args: - log (openpype.lib.Logger) - close_plugin_name (str): name of plugin with responsibility to - close host app + Args: + log (Logger) + close_plugin_name (str): name of plugin with responsibility to + close host app """ # Error exit as soon as any error occurs. error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" @@ -790,3 +862,45 @@ def _validate_transient_template(project_name, template_name, anatomy): raise ValueError(("There is not set \"folder\" template in \"{}\" anatomy" # noqa " for project \"{}\"." ).format(template_name, project_name)) + + +def add_repre_files_for_cleanup(instance, repre): + """ Explicitly mark repre files to be deleted. + + Should be used on intermediate files (eg. review, thumbnails) to be + explicitly deleted. + """ + files = repre["files"] + staging_dir = repre.get("stagingDir") + if not staging_dir: + return + + if isinstance(files, str): + files = [files] + + for file_name in files: + expected_file = os.path.join(staging_dir, file_name) + instance.context.data["cleanupFullPaths"].append(expected_file) + + +def get_publish_instance_label(instance): + """Try to get label from pyblish instance. + + First are used values in instance data under 'label' and 'name' keys. Then + is used string conversion of instance object -> 'instance._name'. + + Todos: + Maybe 'subset' key could be used too. + + Args: + instance (pyblish.api.Instance): Pyblish instance. + + Returns: + str: Instance label. 
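# A minimal sketch of the apply_settings dispatch above: plugins may implement
# the classmethod with only a project-settings argument, so the two-argument
# call is made only when the signature supports it. `signature_supports`
# stands in for openpype.lib.is_func_signature_supported.


def call_apply_settings(plugin, project_settings, system_settings,
                        signature_supports):
    func = plugin.apply_settings
    both_supported = signature_supports(
        func, project_settings, system_settings
    )
    project_only_supported = signature_supports(func, project_settings)
    if not both_supported and project_only_supported:
        func(project_settings)
    else:
        func(project_settings, system_settings)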
+ """ + + return ( + instance.data.get("label") + or instance.data.get("name") + or str(instance) + ) diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py index a38896ec8e..a67c8397b1 100644 --- a/openpype/pipeline/publish/publish_plugins.py +++ b/openpype/pipeline/publish/publish_plugins.py @@ -379,7 +379,9 @@ class ColormanagedPyblishPluginMixin(object): # check if ext in lower case is in self.allowed_ext if ext.lstrip(".").lower() not in self.allowed_ext: - self.log.debug("Extension is not in allowed extensions.") + self.log.debug( + "Extension '{}' is not in allowed extensions.".format(ext) + ) return if colorspace_settings is None: @@ -393,8 +395,7 @@ class ColormanagedPyblishPluginMixin(object): self.log.warning("No colorspace management was defined") return - self.log.info("Config data is : `{}`".format( - config_data)) + self.log.debug("Config data is: `{}`".format(config_data)) project_name = context.data["projectName"] host_name = context.data["hostName"] @@ -405,8 +406,7 @@ class ColormanagedPyblishPluginMixin(object): if isinstance(filename, list): filename = filename[0] - self.log.debug("__ filename: `{}`".format( - filename)) + self.log.debug("__ filename: `{}`".format(filename)) # get matching colorspace from rules colorspace = colorspace or get_imageio_colorspace_from_filepath( @@ -415,8 +415,7 @@ class ColormanagedPyblishPluginMixin(object): file_rules=file_rules, project_settings=project_settings ) - self.log.debug("__ colorspace: `{}`".format( - colorspace)) + self.log.debug("__ colorspace: `{}`".format(colorspace)) # infuse data to representation if colorspace: diff --git a/openpype/pipeline/workfile/build_workfile.py b/openpype/pipeline/workfile/build_workfile.py index 26b17fa151..8329487839 100644 --- a/openpype/pipeline/workfile/build_workfile.py +++ b/openpype/pipeline/workfile/build_workfile.py @@ -186,7 +186,7 @@ class BuildWorkfile: if link_context_profiles: # Find and append linked assets if preset has set linked mapping - link_assets = get_linked_assets(current_asset_entity) + link_assets = get_linked_assets(project_name, current_asset_entity) if link_assets: assets.extend(link_assets) diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py index a3d7340367..896ed40f2d 100644 --- a/openpype/pipeline/workfile/workfile_template_builder.py +++ b/openpype/pipeline/workfile/workfile_template_builder.py @@ -43,6 +43,7 @@ from openpype.pipeline.load import ( get_contexts_for_repre_docs, load_with_repre_context, ) + from openpype.pipeline.create import ( discover_legacy_creator_plugins, CreateContext, @@ -1246,6 +1247,16 @@ class PlaceholderLoadMixin(object): loader_items = list(sorted(loader_items, key=lambda i: i["label"])) options = options or {} + + # Get families from all loaders excluding "*" + families = set() + for loader in loaders_by_name.values(): + families.update(loader.families) + families.discard("*") + + # Sort for readability + families = list(sorted(families)) + return [ attribute_definitions.UISeparatorDef(), attribute_definitions.UILabelDef("Main attributes"), @@ -1272,11 +1283,11 @@ class PlaceholderLoadMixin(object): " field \"inputLinks\"" ) ), - attribute_definitions.TextDef( + attribute_definitions.EnumDef( "family", label="Family", default=options.get("family"), - placeholder="model, look, ..." 
+ items=families ), attribute_definitions.TextDef( "representation", diff --git a/openpype/plugins/inventory/remove_and_load.py b/openpype/plugins/inventory/remove_and_load.py new file mode 100644 index 0000000000..ae66b95f6e --- /dev/null +++ b/openpype/plugins/inventory/remove_and_load.py @@ -0,0 +1,51 @@ +from openpype.pipeline import InventoryAction +from openpype.pipeline import get_current_project_name +from openpype.pipeline.load.plugins import discover_loader_plugins +from openpype.pipeline.load.utils import ( + get_loader_identifier, + remove_container, + load_container, +) +from openpype.client import get_representation_by_id + + +class RemoveAndLoad(InventoryAction): + """Delete inventory item and reload it.""" + + label = "Remove and load" + icon = "refresh" + + def process(self, containers): + project_name = get_current_project_name() + loaders_by_name = { + get_loader_identifier(plugin): plugin + for plugin in discover_loader_plugins(project_name=project_name) + } + for container in containers: + # Get loader + loader_name = container["loader"] + loader = loaders_by_name.get(loader_name, None) + if not loader: + raise RuntimeError( + "Failed to get loader '{}', can't remove " + "and load container".format(loader_name) + ) + + # Get representation + representation = get_representation_by_id( + project_name, container["representation"] + ) + if not representation: + self.log.warning( + "Skipping remove and load because representation id is not" + " found in database: '{}'".format( + container["representation"] + ) + ) + continue + + # Remove container + remove_container(container) + + # Load container + load_container(loader, representation) diff --git a/openpype/plugins/load/open_djv.py b/openpype/plugins/load/open_djv.py index bc5fd64b87..9c36e7f405 100644 --- a/openpype/plugins/load/open_djv.py +++ b/openpype/plugins/load/open_djv.py @@ -19,12 +19,13 @@ class OpenInDJV(load.LoaderPlugin): djv_list = existing_djv_path() families = ["*"] if djv_list else [] - representations = [ + representations = ["*"] + extensions = { "cin", "dpx", "avi", "dv", "gif", "flv", "mkv", "mov", "mpg", "mpeg", "mp4", "m4v", "mxf", "iff", "z", "ifl", "jpeg", "jpg", "jfif", "lut", "1dl", "exr", "pic", "png", "ppm", "pnm", "pgm", "pbm", "rla", "rpf", "sgi", "rgba", "rgb", "bw", "tga", "tiff", "tif", "img", "h264", - ] + } label = "Open in DJV" order = -10 diff --git a/openpype/plugins/publish/cleanup.py b/openpype/plugins/publish/cleanup.py index b90c88890d..57cc9c0ab5 100644 --- a/openpype/plugins/publish/cleanup.py +++ b/openpype/plugins/publish/cleanup.py @@ -81,7 +81,8 @@ class CleanUp(pyblish.api.InstancePlugin): staging_dir = instance.data.get("stagingDir", None) if not staging_dir: - self.log.info("Staging dir not set.") + self.log.debug("Skipping cleanup. 
Staging dir not set " + "on instance: {}.".format(instance)) return if not os.path.normpath(staging_dir).startswith(temp_root): @@ -90,7 +91,7 @@ class CleanUp(pyblish.api.InstancePlugin): return if not os.path.exists(staging_dir): - self.log.info("No staging directory found: %s" % staging_dir) + self.log.debug("No staging directory found at: %s" % staging_dir) return if instance.data.get("stagingDir_persistent"): @@ -131,7 +132,9 @@ class CleanUp(pyblish.api.InstancePlugin): try: os.remove(src) except PermissionError: - self.log.warning("Insufficient permission to delete {}".format(src)) + self.log.warning( + "Insufficient permission to delete {}".format(src) + ) continue # add dir for cleanup diff --git a/openpype/plugins/publish/collect_anatomy_context_data.py b/openpype/plugins/publish/collect_anatomy_context_data.py index 55ce8e06f4..508b01447b 100644 --- a/openpype/plugins/publish/collect_anatomy_context_data.py +++ b/openpype/plugins/publish/collect_anatomy_context_data.py @@ -67,5 +67,6 @@ class CollectAnatomyContextData(pyblish.api.ContextPlugin): # Store context.data["anatomyData"] = anatomy_data - self.log.info("Global anatomy Data collected") - self.log.debug(json.dumps(anatomy_data, indent=4)) + self.log.debug("Global Anatomy Context Data collected:\n{}".format( + json.dumps(anatomy_data, indent=4) + )) diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py index 4fbb93324b..128ad90b4f 100644 --- a/openpype/plugins/publish/collect_anatomy_instance_data.py +++ b/openpype/plugins/publish/collect_anatomy_instance_data.py @@ -46,17 +46,17 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): follow_workfile_version = False def process(self, context): - self.log.info("Collecting anatomy data for all instances.") + self.log.debug("Collecting anatomy data for all instances.") project_name = context.data["projectName"] self.fill_missing_asset_docs(context, project_name) self.fill_latest_versions(context, project_name) self.fill_anatomy_data(context) - self.log.info("Anatomy Data collection finished.") + self.log.debug("Anatomy Data collection finished.") def fill_missing_asset_docs(self, context, project_name): - self.log.debug("Qeurying asset documents for instances.") + self.log.debug("Querying asset documents for instances.") context_asset_doc = context.data.get("assetEntity") @@ -271,7 +271,7 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): instance_name = instance.data["name"] instance_label = instance.data.get("label") if instance_label: - instance_name += "({})".format(instance_label) + instance_name += " ({})".format(instance_label) self.log.debug("Anatomy data for instance {}: {}".format( instance_name, json.dumps(anatomy_data, indent=4) diff --git a/openpype/plugins/publish/collect_anatomy_object.py b/openpype/plugins/publish/collect_anatomy_object.py index 725cae2b14..f792cf3abd 100644 --- a/openpype/plugins/publish/collect_anatomy_object.py +++ b/openpype/plugins/publish/collect_anatomy_object.py @@ -30,6 +30,6 @@ class CollectAnatomyObject(pyblish.api.ContextPlugin): context.data["anatomy"] = Anatomy(project_name) - self.log.info( + self.log.debug( "Anatomy object collected for project \"{}\".".format(project_name) ) diff --git a/openpype/plugins/publish/collect_custom_staging_dir.py b/openpype/plugins/publish/collect_custom_staging_dir.py index b749b251c0..669c4873e0 100644 --- a/openpype/plugins/publish/collect_custom_staging_dir.py +++ 
b/openpype/plugins/publish/collect_custom_staging_dir.py @@ -65,6 +65,6 @@ class CollectCustomStagingDir(pyblish.api.InstancePlugin): else: result_str = "Not adding" - self.log.info("{} custom staging dir for instance with '{}'".format( + self.log.debug("{} custom staging dir for instance with '{}'".format( result_str, family )) diff --git a/openpype/plugins/publish/collect_from_create_context.py b/openpype/plugins/publish/collect_from_create_context.py index 5fcf8feb56..4888476fff 100644 --- a/openpype/plugins/publish/collect_from_create_context.py +++ b/openpype/plugins/publish/collect_from_create_context.py @@ -92,5 +92,5 @@ class CollectFromCreateContext(pyblish.api.ContextPlugin): instance.data["transientData"] = transient_data - self.log.info("collected instance: {}".format(instance.data)) - self.log.info("parsing data: {}".format(in_data)) + self.log.debug("collected instance: {}".format(instance.data)) + self.log.debug("parsing data: {}".format(in_data)) diff --git a/openpype/plugins/publish/collect_rendered_files.py b/openpype/plugins/publish/collect_rendered_files.py index 8f8d0a5eeb..6c8d1e9ca5 100644 --- a/openpype/plugins/publish/collect_rendered_files.py +++ b/openpype/plugins/publish/collect_rendered_files.py @@ -13,6 +13,7 @@ import json import pyblish.api from openpype.pipeline import legacy_io, KnownPublishError +from openpype.pipeline.publish.lib import add_repre_files_for_cleanup class CollectRenderedFiles(pyblish.api.ContextPlugin): @@ -89,6 +90,7 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): # now we can just add instances from json file and we are done for instance_data in data.get("instances"): + self.log.info(" - processing instance for {}".format( instance_data.get("subset"))) instance = self._context.create_instance( @@ -107,6 +109,8 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): self._fill_staging_dir(repre_data, anatomy) representations.append(repre_data) + add_repre_files_for_cleanup(instance, repre_data) + instance.data["representations"] = representations # add audio if in metadata data @@ -157,6 +161,8 @@ class CollectRenderedFiles(pyblish.api.ContextPlugin): os.environ.update(session_data) session_is_set = True self._process_path(data, anatomy) + context.data["cleanupFullPaths"].append(path) + context.data["cleanupEmptyDirs"].append(os.path.dirname(path)) except Exception as e: self.log.error(e, exc_info=True) raise Exception("Error") from e diff --git a/openpype/plugins/publish/collect_scene_version.py b/openpype/plugins/publish/collect_scene_version.py index fdbcb3cb9d..cd3231a07d 100644 --- a/openpype/plugins/publish/collect_scene_version.py +++ b/openpype/plugins/publish/collect_scene_version.py @@ -48,10 +48,13 @@ class CollectSceneVersion(pyblish.api.ContextPlugin): if '' in filename: return + self.log.debug( + "Collecting scene version from filename: {}".format(filename) + ) + version = get_version_from_path(filename) assert version, "Cannot determine version" rootVersion = int(version) context.data['version'] = rootVersion - self.log.info("{}".format(type(rootVersion))) self.log.info('Scene Version: %s' % context.data.get('version')) diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index a12e8d18b4..6a8ae958d2 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -19,6 +19,7 @@ from openpype.lib import ( should_convert_for_ffmpeg ) from openpype.lib.profiles_filtering import filter_profiles +from 
openpype.pipeline.publish.lib import add_repre_files_for_cleanup class ExtractBurnin(publish.Extractor): @@ -353,6 +354,8 @@ class ExtractBurnin(publish.Extractor): # Add new representation to instance instance.data["representations"].append(new_repre) + add_repre_files_for_cleanup(instance, new_repre) + # Cleanup temp staging dir after procesisng of output definitions if do_convert: temp_dir = repre["stagingDir"] @@ -517,8 +520,8 @@ class ExtractBurnin(publish.Extractor): """ if "burnin" not in (repre.get("tags") or []): - self.log.info(( - "Representation \"{}\" don't have \"burnin\" tag. Skipped." + self.log.debug(( + "Representation \"{}\" does not have \"burnin\" tag. Skipped." ).format(repre["name"])) return False diff --git a/openpype/plugins/publish/extract_color_transcode.py b/openpype/plugins/publish/extract_color_transcode.py index 58e0350a2e..45b10620d1 100644 --- a/openpype/plugins/publish/extract_color_transcode.py +++ b/openpype/plugins/publish/extract_color_transcode.py @@ -336,13 +336,13 @@ class ExtractOIIOTranscode(publish.Extractor): if repre.get("ext") not in self.supported_exts: self.log.debug(( - "Representation '{}' of unsupported extension. Skipped." - ).format(repre["name"])) + "Representation '{}' has unsupported extension: '{}'. Skipped." + ).format(repre["name"], repre.get("ext"))) return False if not repre.get("files"): self.log.debug(( - "Representation '{}' have empty files. Skipped." + "Representation '{}' has empty files. Skipped." ).format(repre["name"])) return False diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index 1062683319..d04893fa7e 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -23,7 +23,11 @@ from openpype.lib.transcoding import ( convert_input_paths_for_ffmpeg, get_transcode_temp_directory, ) -from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline.publish import ( + KnownPublishError, + get_publish_instance_label, +) +from openpype.pipeline.publish.lib import add_repre_files_for_cleanup class ExtractReview(pyblish.api.InstancePlugin): @@ -92,8 +96,8 @@ class ExtractReview(pyblish.api.InstancePlugin): host_name = instance.context.data["hostName"] family = self.main_family_from_instance(instance) - self.log.info("Host: \"{}\"".format(host_name)) - self.log.info("Family: \"{}\"".format(family)) + self.log.debug("Host: \"{}\"".format(host_name)) + self.log.debug("Family: \"{}\"".format(family)) profile = filter_profiles( self.profiles, @@ -202,17 +206,8 @@ class ExtractReview(pyblish.api.InstancePlugin): return filtered_defs - @staticmethod - def get_instance_label(instance): - return ( - getattr(instance, "label", None) - or instance.data.get("label") - or instance.data.get("name") - or str(instance) - ) - def main_process(self, instance): - instance_label = self.get_instance_label(instance) + instance_label = get_publish_instance_label(instance) self.log.debug("Processing instance \"{}\"".format(instance_label)) profile_outputs = self._get_outputs_for_instance(instance) if not profile_outputs: @@ -351,7 +346,7 @@ class ExtractReview(pyblish.api.InstancePlugin): temp_data = self.prepare_temp_data(instance, repre, output_def) files_to_clean = [] if temp_data["input_is_sequence"]: - self.log.info("Filling gaps in sequence.") + self.log.debug("Checking sequence to fill gaps in sequence..") files_to_clean = self.fill_sequence_gaps( files=temp_data["origin_repre"]["files"], staging_dir=new_repre["stagingDir"], @@ 
-425,6 +420,8 @@ class ExtractReview(pyblish.api.InstancePlugin): ) instance.data["representations"].append(new_repre) + add_repre_files_for_cleanup(instance, new_repre) + def input_is_sequence(self, repre): """Deduce from representation data if input is sequence.""" # TODO GLOBAL ISSUE - Find better way how to find out if input diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index aa5497a99f..b98ab64f56 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -19,9 +19,9 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): order = pyblish.api.ExtractorOrder families = [ "imagesequence", "render", "render2d", "prerender", - "source", "clip", "take", "online" + "source", "clip", "take", "online", "image" ] - hosts = ["shell", "fusion", "resolve", "traypublisher"] + hosts = ["shell", "fusion", "resolve", "traypublisher", "substancepainter"] enabled = False # presetable attribute @@ -36,7 +36,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): ).format(subset_name)) return - self.log.info( + self.log.debug( "Processing instance with subset name {}".format(subset_name) ) @@ -89,13 +89,13 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): src_staging = os.path.normpath(repre["stagingDir"]) full_input_path = os.path.join(src_staging, input_file) - self.log.info("input {}".format(full_input_path)) + self.log.debug("input {}".format(full_input_path)) filename = os.path.splitext(input_file)[0] jpeg_file = filename + "_thumb.jpg" full_output_path = os.path.join(dst_staging, jpeg_file) if oiio_supported: - self.log.info("Trying to convert with OIIO") + self.log.debug("Trying to convert with OIIO") # If the input can read by OIIO then use OIIO method for # conversion otherwise use ffmpeg thumbnail_created = self.create_thumbnail_oiio( @@ -148,7 +148,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): def _already_has_thumbnail(self, repres): for repre in repres: - self.log.info("repre {}".format(repre)) + self.log.debug("repre {}".format(repre)) if repre["name"] == "thumbnail": return True return False @@ -173,20 +173,20 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): return filtered_repres def create_thumbnail_oiio(self, src_path, dst_path): - self.log.info("outputting {}".format(dst_path)) + self.log.info("Extracting thumbnail {}".format(dst_path)) oiio_tool_path = get_oiio_tools_path() oiio_cmd = [ oiio_tool_path, "-a", src_path, "-o", dst_path ] - self.log.info("running: {}".format(" ".join(oiio_cmd))) + self.log.debug("running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) return True except Exception: self.log.warning( - "Failed to create thubmnail using oiiotool", + "Failed to create thumbnail using oiiotool", exc_info=True ) return False diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index a92f762cde..a9c95d6065 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -39,7 +39,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): self._create_context_thumbnail(instance.context) subset_name = instance.data["subset"] - self.log.info( + self.log.debug( "Processing instance with subset name {}".format(subset_name) ) thumbnail_source = instance.data.get("thumbnailSource") @@ -104,7 +104,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): 
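# A short usage sketch for the cleanup marking shown above: after an extractor
# appends an intermediate representation (review, thumbnail), its staged files
# are registered for deletion. The representation dict is illustrative.
from openpype.pipeline.publish.lib import add_repre_files_for_cleanup


def register_intermediate_repre(instance, staging_dir, files):
    new_repre = {
        "name": "thumbnail",
        "ext": "jpg",
        "files": files,
        "stagingDir": staging_dir,
        "tags": ["thumbnail"],
    }
    instance.data["representations"].append(new_repre)
    # Mark staged files so the CleanUp plugin removes them after publish.
    add_repre_files_for_cleanup(instance, new_repre)
    return new_repre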
full_output_path = os.path.join(dst_staging, dst_filename) if oiio_supported: - self.log.info("Trying to convert with OIIO") + self.log.debug("Trying to convert with OIIO") # If the input can read by OIIO then use OIIO method for # conversion otherwise use ffmpeg thumbnail_created = self.create_thumbnail_oiio( diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index 65ce30412c..f392cf67f7 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -163,6 +163,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "Instance is marked to be processed on farm. Skipping") return + # Instance is marked to not get integrated + if not instance.data.get("integrate", True): + self.log.info("Instance is marked to skip integrating. Skipping") + return + filtered_repres = self.filter_representations(instance) # Skip instance if there are not representations to integrate # all representations should not be integrated @@ -262,7 +267,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): instance_stagingdir = instance.data.get("stagingDir") if not instance_stagingdir: - self.log.info(( + self.log.debug(( "{0} is missing reference to staging directory." " Will try to get it from representation." ).format(instance)) @@ -475,7 +480,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): update_data ) - self.log.info("Prepared subset: {}".format(subset_name)) + self.log.debug("Prepared subset: {}".format(subset_name)) return subset_doc def prepare_version(self, instance, op_session, subset_doc, project_name): @@ -516,7 +521,9 @@ class IntegrateAsset(pyblish.api.InstancePlugin): project_name, version_doc["type"], version_doc ) - self.log.info("Prepared version: v{0:03d}".format(version_doc["name"])) + self.log.debug( + "Prepared version: v{0:03d}".format(version_doc["name"]) + ) return version_doc diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py index c67ce62bf6..c238cca633 100644 --- a/openpype/plugins/publish/integrate_legacy.py +++ b/openpype/plugins/publish/integrate_legacy.py @@ -147,7 +147,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): def process(self, instance): if instance.data.get("processedWithNewIntegrator"): - self.log.info("Instance was already processed with new integrator") + self.log.debug( + "Instance was already processed with new integrator" + ) return for ef in self.exclude_families: @@ -274,7 +276,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): stagingdir = instance.data.get("stagingDir") if not stagingdir: - self.log.info(( + self.log.debug(( "{0} is missing reference to staging directory." " Will try to get it from representation." 
).format(instance)) diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index 16cc47d432..2e87d8fc86 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -20,6 +20,7 @@ import pyblish.api from openpype.client import get_versions from openpype.client.operations import OperationsSession, new_thumbnail_doc +from openpype.pipeline.publish import get_publish_instance_label InstanceFilterResult = collections.namedtuple( "InstanceFilterResult", @@ -41,7 +42,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): # Filter instances which can be used for integration filtered_instance_items = self._prepare_instances(context) if not filtered_instance_items: - self.log.info( + self.log.debug( "All instances were filtered. Thumbnail integration skipped." ) return @@ -133,7 +134,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): filtered_instances = [] for instance in context: - instance_label = self._get_instance_label(instance) + instance_label = get_publish_instance_label(instance) # Skip instances without published representations # - there is no place where to put the thumbnail published_repres = instance.data.get("published_representations") @@ -162,7 +163,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): # Skip instance if thumbnail path is not available for it if not thumbnail_path: - self.log.info(( + self.log.debug(( "Skipping thumbnail integration for instance \"{}\"." " Instance and context" " thumbnail paths are not available." @@ -248,7 +249,7 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): for instance_item in filtered_instance_items: instance, thumbnail_path, version_id = instance_item - instance_label = self._get_instance_label(instance) + instance_label = get_publish_instance_label(instance) version_doc = version_docs_by_str_id.get(version_id) if not version_doc: self.log.warning(( @@ -339,10 +340,3 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): )) op_session.commit() - - def _get_instance_label(self, instance): - return ( - instance.data.get("label") - or instance.data.get("name") - or "N/A" - ) diff --git a/openpype/plugins/publish/validate_sequence_frames.py b/openpype/plugins/publish/validate_sequence_frames.py deleted file mode 100644 index 239008ee21..0000000000 --- a/openpype/plugins/publish/validate_sequence_frames.py +++ /dev/null @@ -1,66 +0,0 @@ -import os -import re - -import clique -import pyblish.api - - -class ValidateSequenceFrames(pyblish.api.InstancePlugin): - """Ensure the sequence of frames is complete - - The files found in the folder are checked against the startFrame and - endFrame of the instance. If the first or last file is not - corresponding with the first or last frame it is flagged as invalid. - - Used regular expression pattern handles numbers in the file names - (eg "Main_beauty.v001.1001.exr", "Main_beauty_v001.1001.exr", - "Main_beauty.1001.1001.exr") but not numbers behind frames (eg. 
- "Main_beauty.1001.v001.exr") - """ - - order = pyblish.api.ValidatorOrder - label = "Validate Sequence Frames" - families = ["imagesequence", "render"] - hosts = ["shell", "unreal"] - - def process(self, instance): - representations = instance.data.get("representations") - if not representations: - return - for repr in representations: - repr_files = repr["files"] - if isinstance(repr_files, str): - continue - - ext = repr.get("ext") - if not ext: - _, ext = os.path.splitext(repr_files[0]) - elif not ext.startswith("."): - ext = ".{}".format(ext) - pattern = r"\D?(?P(?P0*)\d+){}$".format( - re.escape(ext)) - patterns = [pattern] - - collections, remainder = clique.assemble( - repr_files, minimum_items=1, patterns=patterns) - - assert not remainder, "Must not have remainder" - assert len(collections) == 1, "Must detect single collection" - collection = collections[0] - frames = list(collection.indexes) - - if instance.data.get("slate"): - # Slate is not part of the frame range - frames = frames[1:] - - current_range = (frames[0], frames[-1]) - - required_range = (instance.data["frameStart"], - instance.data["frameEnd"]) - - if current_range != required_range: - raise ValueError(f"Invalid frame range: {current_range} - " - f"expected: {required_range}") - - missing = collection.holes().indexes - assert not missing, "Missing frames: %s" % (missing,) diff --git a/openpype/resources/app_icons/substancepainter.png b/openpype/resources/app_icons/substancepainter.png new file mode 100644 index 0000000000..dc46f25d74 Binary files /dev/null and b/openpype/resources/app_icons/substancepainter.png differ diff --git a/openpype/scripts/non_python_host_launch.py b/openpype/scripts/non_python_host_launch.py index 79fb1cbb52..c95a9df314 100644 --- a/openpype/scripts/non_python_host_launch.py +++ b/openpype/scripts/non_python_host_launch.py @@ -81,9 +81,10 @@ def main(argv): host_name = os.environ["AVALON_APP"].lower() if host_name == "photoshop": + # TODO refactor launch logic according to AE from openpype.hosts.photoshop.api.lib import main elif host_name == "aftereffects": - from openpype.hosts.aftereffects.api.lib import main + from openpype.hosts.aftereffects.api.launch_logic import main elif host_name == "harmony": from openpype.hosts.harmony.api.lib import main else: diff --git a/openpype/settings/defaults/project_settings/aftereffects.json b/openpype/settings/defaults/project_settings/aftereffects.json index 669e1db0b8..6128534344 100644 --- a/openpype/settings/defaults/project_settings/aftereffects.json +++ b/openpype/settings/defaults/project_settings/aftereffects.json @@ -13,10 +13,14 @@ "RenderCreator": { "defaults": [ "Main" - ] + ], + "mark_for_review": true } }, "publish": { + "CollectReview": { + "enabled": true + }, "ValidateSceneSettings": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index fdd70f1a44..1b8c8397d7 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -45,6 +45,15 @@ "chunk_size": 10, "group": "none" }, + "FusionSubmitDeadline": { + "enabled": true, + "optional": false, + "active": true, + "priority": 50, + "chunk_size": 10, + "concurrent_tasks": 1, + "group": "" + }, "NukeSubmitDeadline": { "enabled": true, "optional": false, @@ -114,6 +123,9 @@ ], "max": [ ".*" + ], + "fusion": [ + ".*" ] } } diff --git a/openpype/settings/defaults/project_settings/fusion.json 
b/openpype/settings/defaults/project_settings/fusion.json index f974eebaca..066fc3816a 100644 --- a/openpype/settings/defaults/project_settings/fusion.json +++ b/openpype/settings/defaults/project_settings/fusion.json @@ -21,5 +21,18 @@ "copy_path": "~/.openpype/hosts/fusion/profiles", "copy_status": false, "force_sync": false + }, + "create": { + "CreateSaver": { + "temp_rendering_path_template": "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ] + } } } diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 50b62737d8..75f335f1de 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -82,7 +82,8 @@ "png": { "ext": "png", "tags": [ - "ftrackreview" + "ftrackreview", + "kitsureview" ], "burnins": [], "ffmpeg_args": { diff --git a/openpype/settings/defaults/project_settings/max.json b/openpype/settings/defaults/project_settings/max.json index d59cdf8c4a..a757e08ef5 100644 --- a/openpype/settings/defaults/project_settings/max.json +++ b/openpype/settings/defaults/project_settings/max.json @@ -19,5 +19,12 @@ "custFloats": "custFloats", "custVecs": "custVecs" } + }, + "publish": { + "ValidateFrameRange": { + "enabled": true, + "optional": true, + "active": true + } } } diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 12223216cd..a2a43eefb5 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -734,6 +734,7 @@ "ValidateShaderName": { "enabled": false, "optional": true, + "active": true, "regex": "(?P.*)_(.*)_SHD" }, "ValidateShadingEngine": { @@ -1460,7 +1461,8 @@ }, "reference_loader": { "namespace": "{asset_name}_{subset}_##_", - "group_name": "_GRP" + "group_name": "_GRP", + "display_handle": true } }, "workfile_build": { diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 85dee73176..f01bdf7d50 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -358,12 +358,12 @@ "optional": true, "active": true }, - "ValidateGizmo": { + "ValidateBackdrop": { "enabled": true, "optional": true, "active": true }, - "ValidateBackdrop": { + "ValidateGizmo": { "enabled": true, "optional": true, "active": true @@ -401,7 +401,39 @@ false ] ] - } + }, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "value": "to format" + }, + { + "type": "text", + "name": "format", + "value": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "value": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "value": true + }, + { + "type": "bool", + "name": "pbb", + "value": false + } + ] + } + ] }, "ExtractReviewData": { "enabled": false diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json index bcf21f55dd..2454691958 100644 --- a/openpype/settings/defaults/project_settings/photoshop.json +++ b/openpype/settings/defaults/project_settings/photoshop.json @@ -10,23 +10,40 @@ } }, "create": { - "CreateImage": { - "defaults": [ + "ImageCreator": { + "enabled": true, + "active_on_create": true, + 
"mark_for_review": false, + "default_variants": [ "Main" ] + }, + "AutoImageCreator": { + "enabled": false, + "active_on_create": true, + "mark_for_review": false, + "default_variant": "" + }, + "ReviewCreator": { + "enabled": true, + "active_on_create": true, + "default_variant": "" + }, + "WorkfileCreator": { + "enabled": true, + "active_on_create": true, + "default_variant": "Main" } }, "publish": { "CollectColorCodedInstances": { + "enabled": true, "create_flatten_image": "no", "flatten_subset_template": "", "color_code_mapping": [] }, - "CollectInstances": { - "flatten_subset_template": "" - }, "CollectReview": { - "publish": true + "enabled": true }, "CollectVersion": { "enabled": false diff --git a/openpype/settings/defaults/project_settings/substancepainter.json b/openpype/settings/defaults/project_settings/substancepainter.json new file mode 100644 index 0000000000..60929e85fd --- /dev/null +++ b/openpype/settings/defaults/project_settings/substancepainter.json @@ -0,0 +1,13 @@ +{ + "imageio": { + "ocio_config": { + "enabled": true, + "filepath": [] + }, + "file_rules": { + "enabled": true, + "rules": {} + } + }, + "shelves": {} +} diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index df5b5e07c6..f2fc7d933a 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -1069,8 +1069,8 @@ "RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR": [], "RESOLVE_PYTHON3_HOME": { "windows": "{LOCALAPPDATA}/Programs/Python/Python36", - "darwin": "~/Library/Python/3.6/bin", - "linux": "/opt/Python/3.6/bin" + "darwin": "/Library/Frameworks/Python.framework/Versions/3.6", + "linux": "/opt/Python/3.6" } }, "variants": { @@ -1479,6 +1479,33 @@ } } }, + "substancepainter": { + "enabled": true, + "label": "Substance Painter", + "icon": "app_icons/substancepainter.png", + "host_name": "substancepainter", + "environment": {}, + "variants": { + "8-2-0": { + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Substance 3D Painter\\Adobe Substance 3D Painter.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": {} + }, + "__dynamic_keys_labels__": { + "8-2-0": "8.2.0" + } + } + }, "unreal": { "enabled": true, "label": "Unreal Editor", diff --git a/openpype/settings/entities/enum_entity.py b/openpype/settings/entities/enum_entity.py index c0c103ea10..de3bd353eb 100644 --- a/openpype/settings/entities/enum_entity.py +++ b/openpype/settings/entities/enum_entity.py @@ -168,6 +168,7 @@ class HostsEnumEntity(BaseEnumEntity): "tvpaint", "unreal", "standalonepublisher", + "substancepainter", "traypublisher", "webpublisher" ] diff --git a/openpype/settings/entities/lib.py b/openpype/settings/entities/lib.py index 1c7dc9bed0..93abc27b0e 100644 --- a/openpype/settings/entities/lib.py +++ b/openpype/settings/entities/lib.py @@ -323,7 +323,10 @@ class SchemasHub: filled_template = self._fill_template( schema_data, template_def ) - return filled_template + new_template_def = [] + for item in filled_template: + new_template_def.extend(self.resolve_schema_data(item)) + return new_template_def def create_schema_object(self, schema_data, *args, **kwargs): """Create entity for passed schema data. 
diff --git a/openpype/settings/entities/schemas/projects_schema/schema_main.json b/openpype/settings/entities/schemas/projects_schema/schema_main.json index 8c1d8ccbdd..4315987a33 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_main.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_main.json @@ -122,6 +122,10 @@ "type": "schema", "name": "schema_project_photoshop" }, + { + "type": "schema", + "name": "schema_project_substancepainter" + }, { "type": "schema", "name": "schema_project_harmony" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json index 8dc83f5506..313e0ce8ea 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json @@ -40,7 +40,13 @@ "label": "Default Variants", "object_type": "text", "docstring": "Fill default variant(s) (like 'Main' or 'Default') used in subset name creation." - } + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review", + "default": true + } ] } ] @@ -51,6 +57,21 @@ "key": "publish", "label": "Publish plugins", "children": [ + { + "type": "dict", + "collapsible": true, + "key": "CollectReview", + "label": "Collect Review", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json index d8b5e4dc1f..6d59b5a92b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json @@ -248,6 +248,50 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "FusionSubmitDeadline", + "label": "Fusion submit to Deadline", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "number", + "key": "priority", + "label": "Priority" + }, + { + "type": "number", + "key": "chunk_size", + "label": "Frame per Task" + }, + { + "type": "number", + "key": "concurrent_tasks", + "label": "Number of concurrent tasks" + }, + { + "type": "text", + "key": "group", + "label": "Group Name" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json b/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json index 464cf2c06d..7971c62300 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json @@ -68,6 +68,50 @@ "label": "Resync profile on each launch" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "create", + "label": "Creator plugins", + "children": [ + { + "type": "dict", + "collapsible": true, + "key": "CreateSaver", + "label": "Create Saver", + "is_group": true, + "children": [ + { + "type": "text", + "key": "temp_rendering_path_template", + "label": "Temporary rendering path template" + }, + { + "type": "list", + 
"key": "default_variants", + "label": "Default variants", + "object_type": { + "type": "text" + } + }, + { + "key": "instance_attributes", + "label": "Instance attributes", + "type": "enum", + "multiselection": true, + "enum_items": [ + { + "reviewable": "Reviewable" + }, + { + "farm_rendering": "Farm rendering" + } + ] + } + ] + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json index 4fba9aff0a..42506559d0 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json @@ -73,6 +73,10 @@ } } ] + }, + { + "type": "schema", + "name": "schema_max_publish" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json index 0071e632af..f6c46aba8b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json @@ -31,16 +31,126 @@ { "type": "dict", "collapsible": true, - "key": "CreateImage", + "key": "ImageCreator", "label": "Create Image", + "checkbox_key": "enabled", "children": [ + { + "type": "label", + "label": "Manually create instance from layer or group of layers. \n Separate review could be created for this image to be sent to Asset Management System." + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review by default" + }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "AutoImageCreator", + "label": "Create Flatten Image", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create image for all visible layers, used for simplified processing. \n Separate review could be created for this image to be sent to Asset Management System." + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ReviewCreator", + "label": "Create Review", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create review instance containing all published image instances or visible layers if no image instance." 
+ }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "WorkfileCreator", + "label": "Create Workfile", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create workfile instance" + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] } ] }, @@ -56,11 +166,18 @@ "is_group": true, "key": "CollectColorCodedInstances", "label": "Collect Color Coded Instances", + "checkbox_key": "enabled", "children": [ { "type": "label", "label": "Set color for publishable layers, set its resulting family and template for subset name. \nCan create flatten image from published instances.(Applicable only for remote publishing!)" }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + }, { "key": "create_flatten_image", "label": "Create flatten image", @@ -131,40 +248,26 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "CollectInstances", - "label": "Collect Instances", - "children": [ - { - "type": "label", - "label": "Name for flatten image created if no image instance present" - }, - { - "type": "text", - "key": "flatten_subset_template", - "label": "Subset template for flatten image" - } - ] - }, { "type": "dict", "collapsible": true, "key": "CollectReview", "label": "Collect Review", + "checkbox_key": "enabled", "children": [ { "type": "boolean", - "key": "publish", - "label": "Active" - } - ] + "key": "enabled", + "label": "Enabled", + "default": true + } + ] }, { "type": "dict", "key": "CollectVersion", "label": "Collect Version", + "checkbox_key": "enabled", "children": [ { "type": "label", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json b/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json new file mode 100644 index 0000000000..79a39b8e6e --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json @@ -0,0 +1,35 @@ +{ + "type": "dict", + "collapsible": true, + "key": "substancepainter", + "label": "Substance Painter", + "is_file": true, + "children": [ + { + "key": "imageio", + "type": "dict", + "label": "Color Management (ImageIO)", + "is_group": true, + "children": [ + { + "type": "schema", + "name": "schema_imageio_config" + }, + { + "type": "schema", + "name": "schema_imageio_file_rules" + } + + ] + }, + { + "type": "dict-modifiable", + "key": "shelves", + "label": "Shelves", + "use_label_wrap": true, + "object_type": { + "type": "text" + } + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_max_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_max_publish.json new file mode 100644 index 0000000000..ea08c735a6 --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_max_publish.json @@ -0,0 +1,33 @@ +{ + "type": "dict", + "collapsible": true, + "key": "publish", + "label": "Publish plugins", + "children": [ + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ValidateFrameRange", + "label": 
"Validate Frame Range", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + } + ] + } + ] + } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json index c1895c4824..4b6b97ab4e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json @@ -111,6 +111,14 @@ { "type": "label", "label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins" + }, + { + "type": "separator" + }, + { + "type": "boolean", + "key": "display_handle", + "label": "Display Handle On Load References" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 346948c658..07c8d8715b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -126,6 +126,11 @@ "key": "optional", "label": "Optional" }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, { "type": "label", "label": "Shader name regex can use named capture group asset to validate against current asset name.

Example:
^.*(?P<asset>.+)_SHD

" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index ce9fa04c6a..3019c9b1b5 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -158,10 +158,43 @@ "label": "Nodes", "collapsible": true, "children": [ + { + "type": "label", + "label": "Nodes attribute will be deprecated in future releases. Use reposition_nodes instead." + }, { "type": "raw-json", "key": "nodes", - "label": "Nodes" + "label": "Nodes [depricated]" + }, + { + "type": "label", + "label": "Reposition knobs supported only. You can add multiple reformat nodes
and set their knobs. Order of reformat nodes is important. First reformat node
will be applied first and last reformat node will be applied last." + }, + { + "key": "reposition_nodes", + "type": "list", + "label": "Reposition nodes", + "object_type": { + "type": "dict", + "children": [ + { + "key": "node_class", + "label": "Node class", + "type": "text" + }, + { + "type": "schema_template", + "name": "template_nuke_knob_inputs", + "template_data": [ + { + "label": "Node knobs", + "key": "knobs" + } + ] + } + ] + } } ] } diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json new file mode 100644 index 0000000000..fb3b21e63f --- /dev/null +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json @@ -0,0 +1,40 @@ +{ + "type": "dict", + "key": "substancepainter", + "label": "Substance Painter", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "schema_template", + "name": "template_host_unchangables" + }, + { + "key": "environment", + "label": "Environment", + "type": "raw-json" + }, + { + "type": "dict-modifiable", + "key": "variants", + "collapsible_key": true, + "use_label_wrap": false, + "object_type": { + "type": "dict", + "collapsible": true, + "children": [ + { + "type": "schema_template", + "name": "template_host_variant_items", + "skip_paths": ["use_python_2"] + } + ] + } + } + ] +} diff --git a/openpype/settings/entities/schemas/system_schema/schema_applications.json b/openpype/settings/entities/schemas/system_schema/schema_applications.json index b17687cf71..abea37a9ab 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_applications.json +++ b/openpype/settings/entities/schemas/system_schema/schema_applications.json @@ -93,6 +93,10 @@ "type": "schema", "name": "schema_celaction" }, + { + "type": "schema", + "name": "schema_substancepainter" + }, { "type": "schema", "name": "schema_unreal" diff --git a/openpype/style/data.json b/openpype/style/data.json index bea2a3d407..7389387d97 100644 --- a/openpype/style/data.json +++ b/openpype/style/data.json @@ -26,8 +26,8 @@ "bg": "#2C313A", "bg-inputs": "#21252B", - "bg-buttons": "#434a56", - "bg-button-hover": "rgb(81, 86, 97)", + "bg-buttons": "rgb(67, 74, 86)", + "bg-buttons-hover": "rgb(81, 86, 97)", "bg-inputs-disabled": "#2C313A", "bg-buttons-disabled": "#434a56", @@ -66,7 +66,9 @@ "bg-success": "#458056", "bg-success-hover": "#55a066", "bg-error": "#AD2E2E", - "bg-error-hover": "#C93636" + "bg-error-hover": "#C93636", + "bg-info": "rgb(63, 98, 121)", + "bg-info-hover": "rgb(81, 146, 181)" }, "tab-widget": { "bg": "#21252B", @@ -94,6 +96,7 @@ "crash": "#FF6432", "success": "#458056", "warning": "#ffc671", + "progress": "rgb(194, 226, 236)", "tab-bg": "#16191d", "list-view-group": { "bg": "#434a56", diff --git a/openpype/style/style.css b/openpype/style/style.css index 29abb1d351..5ce55aa658 100644 --- a/openpype/style/style.css +++ b/openpype/style/style.css @@ -132,10 +132,11 @@ QPushButton { border-radius: 0.2em; padding: 3px 5px 3px 5px; background: {color:bg-buttons}; + min-width: 0px; /* Substance Painter fix */ } QPushButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; color: {color:font-hover}; } @@ -165,7 +166,7 @@ QToolButton { } QToolButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; color: {color:font-hover}; } @@ 
-337,7 +338,15 @@ QTabWidget::tab-bar { alignment: left; } +/* avoid QTabBar overrides in Substance Painter */ +QTabBar { + text-transform: none; + font-weight: normal; +} + QTabBar::tab { + text-transform: none; + font-weight: normal; border-top: 1px solid {color:border}; border-left: 1px solid {color:border}; border-right: 1px solid {color:border}; @@ -377,6 +386,7 @@ QHeaderView { QHeaderView::section { background: {color:bg-view-header}; padding: 4px; + border-top: 0px; /* Substance Painter fix */ border-right: 1px solid {color:bg-view}; border-radius: 0px; text-align: center; @@ -712,6 +722,13 @@ OverlayMessageWidget[type="error"]:hover { background: {color:overlay-messages:bg-error-hover}; } +OverlayMessageWidget[type="info"] { + background: {color:overlay-messages:bg-info}; +} +OverlayMessageWidget[type="info"]:hover { + background: {color:overlay-messages:bg-info-hover}; +} + OverlayMessageWidget QWidget { background: transparent; } @@ -739,10 +756,11 @@ OverlayMessageWidget QWidget { } #InfoText { - padding-left: 30px; - padding-top: 20px; + padding-left: 0px; + padding-top: 0px; + padding-right: 20px; background: transparent; - border: 1px solid {color:border}; + border: none; } #TypeEditor, #ToolEditor, #NameEditor, #NumberEditor { @@ -904,7 +922,7 @@ PixmapButton{ background: {color:bg-buttons}; } PixmapButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; } PixmapButton:disabled { background: {color:bg-buttons-disabled}; @@ -915,7 +933,7 @@ PixmapButton:disabled { background: {color:bg-view}; } #ThumbnailPixmapHoverButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; } #CreatorDetailedDescription { @@ -936,7 +954,7 @@ PixmapButton:disabled { } #CreateDialogHelpButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; } #CreateDialogHelpButton QWidget { background: transparent; @@ -995,7 +1013,7 @@ PixmapButton:disabled { border-radius: 0.2em; } #CardViewWidget:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; } #CardViewWidget[state="selected"] { background: {color:bg-view-selection}; @@ -1022,7 +1040,7 @@ PixmapButton:disabled { } #PublishInfoFrame[state="3"], #PublishInfoFrame[state="4"] { - background: rgb(194, 226, 236); + background: {color:publisher:progress}; } #PublishInfoFrame QLabel { @@ -1030,6 +1048,11 @@ PixmapButton:disabled { font-style: bold; } +#PublishReportHeader { + font-size: 14pt; + font-weight: bold; +} + #PublishInfoMainLabel { font-size: 12pt; } @@ -1050,7 +1073,7 @@ ValidationArtistMessage QLabel { } #ValidationActionButton:hover { - background: {color:bg-button-hover}; + background: {color:bg-buttons-hover}; color: {color:font-hover}; } @@ -1080,6 +1103,35 @@ ValidationArtistMessage QLabel { border-left: 1px solid {color:border}; } +#PublishInstancesDetails { + border: 1px solid {color:border}; + border-radius: 0.3em; +} + +#InstancesLogsView { + border: 1px solid {color:border}; + background: {color:bg-view}; + border-radius: 0.3em; +} + +#PublishLogMessage { + font-family: "Noto Sans Mono"; +} + +#PublishInstanceLogsLabel { + font-weight: bold; +} + +#PublishCrashMainLabel{ + font-weight: bold; + font-size: 16pt; +} + +#PublishCrashReportLabel { + font-weight: bold; + font-size: 13pt; +} + #AssetNameInputWidget { background: {color:bg-inputs}; border: 1px solid {color:border}; diff --git a/openpype/tools/attribute_defs/files_widget.py b/openpype/tools/attribute_defs/files_widget.py index 
067866035f..076b33fb7c 100644 --- a/openpype/tools/attribute_defs/files_widget.py +++ b/openpype/tools/attribute_defs/files_widget.py @@ -198,29 +198,33 @@ class DropEmpty(QtWidgets.QWidget): def paintEvent(self, event): super(DropEmpty, self).paintEvent(event) - painter = QtGui.QPainter(self) + pen = QtGui.QPen() - pen.setWidth(1) pen.setBrush(QtCore.Qt.darkGray) pen.setStyle(QtCore.Qt.DashLine) - painter.setPen(pen) - content_margins = self.layout().contentsMargins() + pen.setWidth(1) - left_m = content_margins.left() - top_m = content_margins.top() - rect = QtCore.QRect( + content_margins = self.layout().contentsMargins() + rect = self.rect() + left_m = content_margins.left() + pen.width() + top_m = content_margins.top() + pen.width() + new_rect = QtCore.QRect( left_m, top_m, ( - self.rect().width() + rect.width() - (left_m + content_margins.right() + pen.width()) ), ( - self.rect().height() + rect.height() - (top_m + content_margins.bottom() + pen.width()) ) ) - painter.drawRect(rect) + + painter = QtGui.QPainter(self) + painter.setRenderHint(QtGui.QPainter.Antialiasing) + painter.setPen(pen) + painter.drawRect(new_rect) class FilesModel(QtGui.QStandardItemModel): diff --git a/openpype/tools/publisher/constants.py b/openpype/tools/publisher/constants.py index 660fccecf1..4630eb144b 100644 --- a/openpype/tools/publisher/constants.py +++ b/openpype/tools/publisher/constants.py @@ -35,9 +35,13 @@ ResetKeySequence = QtGui.QKeySequence( __all__ = ( "CONTEXT_ID", + "CONTEXT_LABEL", "VARIANT_TOOLTIP", + "INPUTS_LAYOUT_HSPACING", + "INPUTS_LAYOUT_VSPACING", + "INSTANCE_ID_ROLE", "SORT_VALUE_ROLE", "IS_GROUP_ROLE", @@ -47,4 +51,6 @@ __all__ = ( "FAMILY_ROLE", "GROUP_ROLE", "CONVERTER_IDENTIFIER_ROLE", + + "ResetKeySequence", ) diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index 4b083d4bc8..89c2343ef7 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -40,6 +40,7 @@ from openpype.pipeline.create.context import ( CreatorsOperationFailed, ConvertorsOperationFailed, ) +from openpype.pipeline.publish import get_publish_instance_label # Define constant for plugin orders offset PLUGIN_ORDER_OFFSET = 0.5 @@ -47,6 +48,7 @@ PLUGIN_ORDER_OFFSET = 0.5 class CardMessageTypes: standard = None + info = "info" error = "error" @@ -220,7 +222,12 @@ class PublishReportMaker: def _add_plugin_data_item(self, plugin): if plugin in self._stored_plugins: - raise ValueError("Plugin is already stored") + # A plugin would be processed more than once. 
What can cause it: + # - there is a bug in controller + # - plugin class is imported into multiple files + # - this can happen even with base classes from 'pyblish' + raise ValueError( + "Plugin '{}' is already stored".format(str(plugin))) self._stored_plugins.append(plugin) @@ -239,6 +246,7 @@ class PublishReportMaker: label = plugin.label return { + "id": plugin.id, "name": plugin.__name__, "label": label, "order": plugin.order, @@ -324,7 +332,7 @@ class PublishReportMaker: "instances": instances_details, "context": self._extract_context_data(self._current_context), "crashed_file_paths": crashed_file_paths, - "id": str(uuid.uuid4()), + "id": uuid.uuid4().hex, "report_version": "1.0.0" } @@ -339,10 +347,12 @@ class PublishReportMaker: def _extract_instance_data(self, instance, exists): return { "name": instance.data.get("name"), - "label": instance.data.get("label"), + "label": get_publish_instance_label(instance), "family": instance.data["family"], "families": instance.data.get("families") or [], - "exists": exists + "exists": exists, + "creator_identifier": instance.data.get("creator_identifier"), + "instance_id": instance.data.get("instance_id"), } def _extract_instance_log_items(self, result): @@ -388,8 +398,11 @@ class PublishReportMaker: exception = result.get("error") if exception: fname, line_no, func, exc = exception.traceback + # Action result does not have 'is_validation_error' + is_validation_error = result.get("is_validation_error", False) output.append({ "type": "error", + "is_validation_error": is_validation_error, "msg": str(exception), "filename": str(fname), "lineno": str(line_no), @@ -426,13 +439,15 @@ class PublishPluginsProxy: plugin_id = plugin.id plugins_by_id[plugin_id] = plugin - action_ids = set() + action_ids = [] action_ids_by_plugin_id[plugin_id] = action_ids actions = getattr(plugin, "actions", None) or [] for action in actions: action_id = action.id - action_ids.add(action_id) + if action_id in actions_by_id: + continue + action_ids.append(action_id) actions_by_id[action_id] = action self._plugins_by_id = plugins_by_id @@ -461,7 +476,7 @@ class PublishPluginsProxy: return plugin.id def get_plugin_action_items(self, plugin_id): - """Get plugin action items for plugin by it's id. + """Get plugin action items for plugin by its id. Args: plugin_id (str): Publish plugin id. 
@@ -568,7 +583,7 @@ class ValidationErrorItem: context_validation, title, description, - detail, + detail ): self.instance_id = instance_id self.instance_label = instance_label @@ -677,6 +692,8 @@ class PublishValidationErrorsReport: for title in titles: grouped_error_items.append({ + "id": uuid.uuid4().hex, + "plugin_id": plugin_id, "plugin_action_items": list(plugin_action_items), "error_items": error_items_by_title[title], "title": title @@ -2379,7 +2396,8 @@ class PublisherController(BasePublisherController): yield MainThreadItem(self.stop_publish) # Add plugin to publish report - self._publish_report.add_plugin_iter(plugin, self._publish_context) + self._publish_report.add_plugin_iter( + plugin, self._publish_context) # WARNING This is hack fix for optional plugins if not self._is_publish_plugin_active(plugin): @@ -2461,14 +2479,14 @@ class PublisherController(BasePublisherController): plugin, self._publish_context, instance ) - self._publish_report.add_result(result) - exception = result.get("error") if exception: + has_validation_error = False if ( isinstance(exception, PublishValidationError) and not self.publish_has_validated ): + has_validation_error = True self._add_validation_error(result) else: @@ -2482,6 +2500,10 @@ class PublisherController(BasePublisherController): self.publish_error_msg = msg self.publish_has_crashed = True + result["is_validation_error"] = has_validation_error + + self._publish_report.add_result(result) + self._publish_next_process() diff --git a/openpype/tools/publisher/publish_report_viewer/widgets.py b/openpype/tools/publisher/publish_report_viewer/widgets.py index dc449b6b69..02c9b63a4e 100644 --- a/openpype/tools/publisher/publish_report_viewer/widgets.py +++ b/openpype/tools/publisher/publish_report_viewer/widgets.py @@ -163,7 +163,11 @@ class ZoomPlainText(QtWidgets.QPlainTextEdit): super(ZoomPlainText, self).wheelEvent(event) return - degrees = float(event.delta()) / 8 + if hasattr(event, "angleDelta"): + delta = event.angleDelta().y() + else: + delta = event.delta() + degrees = float(delta) / 8 steps = int(ceil(degrees / 5)) self._scheduled_scalings += steps if (self._scheduled_scalings * steps < 0): diff --git a/openpype/tools/publisher/widgets/__init__.py b/openpype/tools/publisher/widgets/__init__.py index f18e6cc61e..87a5f3914a 100644 --- a/openpype/tools/publisher/widgets/__init__.py +++ b/openpype/tools/publisher/widgets/__init__.py @@ -18,7 +18,7 @@ from .help_widget import ( from .publish_frame import PublishFrame from .tabs_widget import PublisherTabsWidget from .overview_widget import OverviewWidget -from .validations_widget import ValidationsWidget +from .report_page import ReportPageWidget __all__ = ( @@ -40,5 +40,5 @@ __all__ = ( "PublisherTabsWidget", "OverviewWidget", - "ValidationsWidget", + "ReportPageWidget", ) diff --git a/openpype/tools/publisher/widgets/card_view_widgets.py b/openpype/tools/publisher/widgets/card_view_widgets.py index 13715bc73c..eae8e0420a 100644 --- a/openpype/tools/publisher/widgets/card_view_widgets.py +++ b/openpype/tools/publisher/widgets/card_view_widgets.py @@ -93,7 +93,7 @@ class BaseGroupWidget(QtWidgets.QWidget): return self._group def get_widget_by_item_id(self, item_id): - """Get instance widget by it's id.""" + """Get instance widget by its id.""" return self._widgets_by_id.get(item_id) @@ -702,8 +702,8 @@ class InstanceCardView(AbstractInstanceView): for group_name in sorted_group_names: group_icons = { - idenfier: self._controller.get_creator_icon(idenfier) - for idenfier in 
identifiers_by_group[group_name] + identifier: self._controller.get_creator_icon(identifier) + for identifier in identifiers_by_group[group_name] } if group_name in self._widgets_by_group: group_widget = self._widgets_by_group[group_name] diff --git a/openpype/tools/publisher/widgets/images/error.png b/openpype/tools/publisher/widgets/images/error.png new file mode 100644 index 0000000000..7b09a57d7d Binary files /dev/null and b/openpype/tools/publisher/widgets/images/error.png differ diff --git a/openpype/tools/publisher/widgets/images/success.png b/openpype/tools/publisher/widgets/images/success.png new file mode 100644 index 0000000000..291b442df4 Binary files /dev/null and b/openpype/tools/publisher/widgets/images/success.png differ diff --git a/openpype/tools/publisher/widgets/images/warning.png b/openpype/tools/publisher/widgets/images/warning.png index 76d1e34b6c..531f62b741 100644 Binary files a/openpype/tools/publisher/widgets/images/warning.png and b/openpype/tools/publisher/widgets/images/warning.png differ diff --git a/openpype/tools/publisher/widgets/publish_frame.py b/openpype/tools/publisher/widgets/publish_frame.py index e4e6740532..d423f97047 100644 --- a/openpype/tools/publisher/widgets/publish_frame.py +++ b/openpype/tools/publisher/widgets/publish_frame.py @@ -310,7 +310,7 @@ class PublishFrame(QtWidgets.QWidget): self._set_success_property() self._set_progress_visibility(True) - self._main_label.setText("Hit publish (play button)! If you want") + self._main_label.setText("") self._message_label_top.setText("") self._reset_btn.setEnabled(True) @@ -331,6 +331,7 @@ class PublishFrame(QtWidgets.QWidget): self._set_success_property(3) self._set_progress_visibility(True) self._set_main_label("Publishing...") + self._message_label_top.setText("") self._reset_btn.setEnabled(False) self._stop_btn.setEnabled(True) @@ -468,45 +469,14 @@ class PublishFrame(QtWidgets.QWidget): widget.setProperty("state", state) widget.style().polish(widget) - def _copy_report(self): - logs = self._controller.get_publish_report() - logs_string = json.dumps(logs, indent=4) - - mime_data = QtCore.QMimeData() - mime_data.setText(logs_string) - QtWidgets.QApplication.instance().clipboard().setMimeData( - mime_data - ) - - def _export_report(self): - default_filename = "publish-report-{}".format( - time.strftime("%y%m%d-%H-%M") - ) - default_filepath = os.path.join( - os.path.expanduser("~"), - default_filename - ) - new_filepath, ext = QtWidgets.QFileDialog.getSaveFileName( - self, "Save report", default_filepath, ".json" - ) - if not ext or not new_filepath: - return - - logs = self._controller.get_publish_report() - full_path = new_filepath + ext - dir_path = os.path.dirname(full_path) - if not os.path.exists(dir_path): - os.makedirs(dir_path) - - with open(full_path, "w") as file_stream: - json.dump(logs, file_stream) - def _on_report_triggered(self, identifier): if identifier == "export_report": - self._export_report() + self._controller.event_system.emit( + "export_report.request", {}, "publish_frame") elif identifier == "copy_report": - self._copy_report() + self._controller.event_system.emit( + "copy_report.request", {}, "publish_frame") elif identifier == "go_to_report": self.details_page_requested.emit() diff --git a/openpype/tools/publisher/widgets/report_page.py b/openpype/tools/publisher/widgets/report_page.py new file mode 100644 index 0000000000..50a619f0a8 --- /dev/null +++ b/openpype/tools/publisher/widgets/report_page.py @@ -0,0 +1,1876 @@ +# -*- coding: utf-8 -*- +import collections 
+import logging + +try: + import commonmark +except Exception: + commonmark = None + +from qtpy import QtWidgets, QtCore, QtGui + +from openpype.style import get_objected_colors +from openpype.tools.utils import ( + BaseClickableFrame, + ClickableFrame, + ExpandingTextEdit, + FlowLayout, + ClassicExpandBtn, + paint_image_with_color, + SeparatorWidget, +) +from .widgets import IconValuePixmapLabel +from .icons import ( + get_pixmap, + get_image, +) +from ..constants import ( + INSTANCE_ID_ROLE, + CONTEXT_ID, + CONTEXT_LABEL, +) + +LOG_DEBUG_VISIBLE = 1 << 0 +LOG_INFO_VISIBLE = 1 << 1 +LOG_WARNING_VISIBLE = 1 << 2 +LOG_ERROR_VISIBLE = 1 << 3 +LOG_CRITICAL_VISIBLE = 1 << 4 +ERROR_VISIBLE = 1 << 5 +INFO_VISIBLE = 1 << 6 + + +class VerticalScrollArea(QtWidgets.QScrollArea): + """Scroll area for validation error titles. + + The biggest difference is that the scroll area has scroll bar on left side + and resize of content will also resize scrollarea itself. + + Resize if deferred by 100ms because at the moment of resize are not yet + propagated sizes and visibility of scroll bars. + """ + + def __init__(self, *args, **kwargs): + super(VerticalScrollArea, self).__init__(*args, **kwargs) + + self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded) + self.setLayoutDirection(QtCore.Qt.RightToLeft) + + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + # Background of scrollbar will be transparent + scrollbar_bg = self.verticalScrollBar().parent() + if scrollbar_bg: + scrollbar_bg.setAttribute(QtCore.Qt.WA_TranslucentBackground) + self.setViewportMargins(0, 0, 0, 0) + + self.verticalScrollBar().installEventFilter(self) + + # Timer with 100ms offset after changing size + size_changed_timer = QtCore.QTimer() + size_changed_timer.setInterval(100) + size_changed_timer.setSingleShot(True) + + size_changed_timer.timeout.connect(self._on_timer_timeout) + self._size_changed_timer = size_changed_timer + + def setVerticalScrollBar(self, widget): + old_widget = self.verticalScrollBar() + if old_widget: + old_widget.removeEventFilter(self) + + super(VerticalScrollArea, self).setVerticalScrollBar(widget) + if widget: + widget.installEventFilter(self) + + def setWidget(self, widget): + old_widget = self.widget() + if old_widget: + old_widget.removeEventFilter(self) + + super(VerticalScrollArea, self).setWidget(widget) + if widget: + widget.installEventFilter(self) + + def _on_timer_timeout(self): + width = self.widget().width() + if self.verticalScrollBar().isVisible(): + width += self.verticalScrollBar().width() + self.setMinimumWidth(width) + + def eventFilter(self, obj, event): + if ( + event.type() == QtCore.QEvent.Resize + and (obj is self.widget() or obj is self.verticalScrollBar()) + ): + self._size_changed_timer.start() + return super(VerticalScrollArea, self).eventFilter(obj, event) + + +# --- Publish actions widget --- +class ActionButton(BaseClickableFrame): + """Plugin's action callback button. + + Action may have label or icon or both. + + Args: + plugin_action_item (PublishPluginActionItem): Action item that can be + triggered by its id. 
+ """ + + action_clicked = QtCore.Signal(str, str) + + def __init__(self, plugin_action_item, parent): + super(ActionButton, self).__init__(parent) + + self.setObjectName("ValidationActionButton") + + self.plugin_action_item = plugin_action_item + + action_label = plugin_action_item.label + action_icon = plugin_action_item.icon + label_widget = QtWidgets.QLabel(action_label, self) + icon_label = None + if action_icon: + icon_label = IconValuePixmapLabel(action_icon, self) + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(5, 0, 5, 0) + layout.addWidget(label_widget, 1) + if icon_label: + layout.addWidget(icon_label, 0) + + self.setSizePolicy( + QtWidgets.QSizePolicy.Minimum, + self.sizePolicy().verticalPolicy() + ) + + def _mouse_release_callback(self): + self.action_clicked.emit( + self.plugin_action_item.plugin_id, + self.plugin_action_item.action_id + ) + + +class ValidateActionsWidget(QtWidgets.QFrame): + """Wrapper widget for plugin actions. + + Change actions based on selected validation error. + """ + + def __init__(self, controller, parent): + super(ValidateActionsWidget, self).__init__(parent) + + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + content_widget = QtWidgets.QWidget(self) + content_layout = FlowLayout(content_widget) + content_layout.setContentsMargins(0, 0, 0, 0) + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(content_widget) + + self._controller = controller + self._content_widget = content_widget + self._content_layout = content_layout + + self._actions_mapping = {} + + self._visible_mode = True + + def _update_visibility(self): + self.setVisible( + self._visible_mode + and self._content_layout.count() > 0 + ) + + def set_visible_mode(self, visible): + if self._visible_mode is visible: + return + self._visible_mode = visible + self._update_visibility() + + def _clear(self): + """Remove actions from widget.""" + while self._content_layout.count(): + item = self._content_layout.takeAt(0) + widget = item.widget() + if widget: + widget.setVisible(False) + widget.deleteLater() + self._actions_mapping = {} + + def set_error_info(self, error_info): + """Set selected plugin and show it's actions. + + Clears current actions from widget and recreate them from the plugin. + + Args: + Dict[str, Any]: Object holding error items, title and possible + actions to run. + """ + + self._clear() + + if not error_info: + self.setVisible(False) + return + + plugin_action_items = error_info["plugin_action_items"] + for plugin_action_item in plugin_action_items: + if not plugin_action_item.active: + continue + + if plugin_action_item.on_filter not in ("failed", "all"): + continue + + action_id = plugin_action_item.action_id + self._actions_mapping[action_id] = plugin_action_item + + action_btn = ActionButton(plugin_action_item, self._content_widget) + action_btn.action_clicked.connect(self._on_action_click) + self._content_layout.addWidget(action_btn) + + self._update_visibility() + + def _on_action_click(self, plugin_id, action_id): + self._controller.run_action(plugin_id, action_id) + + +# --- Validation error titles --- +class ValidationErrorInstanceList(QtWidgets.QListView): + """List of publish instances that caused a validation error. + + Instances are collected per plugin's validation error title. 
+ """ + def __init__(self, *args, **kwargs): + super(ValidationErrorInstanceList, self).__init__(*args, **kwargs) + + self.setObjectName("ValidationErrorInstanceList") + + self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + self.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) + + def minimumSizeHint(self): + return self.sizeHint() + + def sizeHint(self): + result = super(ValidationErrorInstanceList, self).sizeHint() + row_count = self.model().rowCount() + height = 0 + if row_count > 0: + height = self.sizeHintForRow(0) * row_count + result.setHeight(height) + return result + + +class ValidationErrorTitleWidget(QtWidgets.QWidget): + """Title of validation error. + + Widget is used as radio button so requires clickable functionality and + changing style on selection/deselection. + + Has toggle button to show/hide instances on which validation error happened + if there is a list (Valdation error may happen on context). + """ + + selected = QtCore.Signal(str) + instance_changed = QtCore.Signal(str) + + def __init__(self, title_id, error_info, parent): + super(ValidationErrorTitleWidget, self).__init__(parent) + + self._title_id = title_id + self._error_info = error_info + self._selected = False + + title_frame = ClickableFrame(self) + title_frame.setObjectName("ValidationErrorTitleFrame") + + toggle_instance_btn = QtWidgets.QToolButton(title_frame) + toggle_instance_btn.setObjectName("ArrowBtn") + toggle_instance_btn.setArrowType(QtCore.Qt.RightArrow) + toggle_instance_btn.setMaximumWidth(14) + + label_widget = QtWidgets.QLabel(error_info["title"], title_frame) + + title_frame_layout = QtWidgets.QHBoxLayout(title_frame) + title_frame_layout.addWidget(label_widget, 1) + title_frame_layout.addWidget(toggle_instance_btn, 0) + + instances_model = QtGui.QStandardItemModel() + + instance_ids = [] + + items = [] + context_validation = False + for error_item in error_info["error_items"]: + context_validation = error_item.context_validation + if context_validation: + toggle_instance_btn.setArrowType(QtCore.Qt.NoArrow) + instance_ids.append(CONTEXT_ID) + # Add fake item to have minimum size hint of view widget + items.append(QtGui.QStandardItem(CONTEXT_LABEL)) + continue + + label = error_item.instance_label + item = QtGui.QStandardItem(label) + item.setFlags( + QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable + ) + item.setData(label, QtCore.Qt.ToolTipRole) + item.setData(error_item.instance_id, INSTANCE_ID_ROLE) + items.append(item) + instance_ids.append(error_item.instance_id) + + if items: + root_item = instances_model.invisibleRootItem() + root_item.appendRows(items) + + instances_view = ValidationErrorInstanceList(self) + instances_view.setModel(instances_model) + + self.setLayoutDirection(QtCore.Qt.LeftToRight) + + view_widget = QtWidgets.QWidget(self) + view_layout = QtWidgets.QHBoxLayout(view_widget) + view_layout.setContentsMargins(0, 0, 0, 0) + view_layout.setSpacing(0) + view_layout.addSpacing(14) + view_layout.addWidget(instances_view, 0) + + layout = QtWidgets.QVBoxLayout(self) + layout.setSpacing(0) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(title_frame, 0) + layout.addWidget(view_widget, 0) + view_widget.setVisible(False) + + if not context_validation: + toggle_instance_btn.clicked.connect(self._on_toggle_btn_click) + + title_frame.clicked.connect(self._mouse_release_callback) + instances_view.selectionModel().selectionChanged.connect( + self._on_selection_change + ) + + self._title_frame = title_frame + + self._toggle_instance_btn = 
toggle_instance_btn + + self._view_widget = view_widget + + self._instances_model = instances_model + self._instances_view = instances_view + + self._context_validation = context_validation + + self._instance_ids = instance_ids + self._expanded = False + + def sizeHint(self): + result = super(ValidationErrorTitleWidget, self).sizeHint() + expected_width = max( + self._view_widget.minimumSizeHint().width(), + self._view_widget.sizeHint().width() + ) + + if expected_width < 200: + expected_width = 200 + + if result.width() < expected_width: + result.setWidth(expected_width) + + return result + + def minimumSizeHint(self): + return self.sizeHint() + + def _mouse_release_callback(self): + """Mark this widget as selected on click.""" + + self.set_selected(True) + + @property + def is_selected(self): + """Is widget marked a selected. + + Returns: + bool: Item is selected or not. + """ + + return self._selected + + @property + def id(self): + return self._title_id + + def _change_style_property(self, selected): + """Change style of widget based on selection.""" + + value = "1" if selected else "" + self._title_frame.setProperty("selected", value) + self._title_frame.style().polish(self._title_frame) + + def set_selected(self, selected=None): + """Change selected state of widget.""" + + if selected is None: + selected = not self._selected + + # Clear instance view selection on deselect + if not selected: + self._instances_view.clearSelection() + + # Skip if has same value + if selected == self._selected: + return + + self._selected = selected + self._change_style_property(selected) + if selected: + self.selected.emit(self._title_id) + self._set_expanded(True) + + def _on_toggle_btn_click(self): + """Show/hide instances list.""" + + self._set_expanded() + + def _set_expanded(self, expanded=None): + if expanded is None: + expanded = not self._expanded + + elif expanded is self._expanded: + return + + if expanded and self._context_validation: + return + + self._expanded = expanded + self._view_widget.setVisible(expanded) + if expanded: + self._toggle_instance_btn.setArrowType(QtCore.Qt.DownArrow) + else: + self._toggle_instance_btn.setArrowType(QtCore.Qt.RightArrow) + + def _on_selection_change(self): + self.instance_changed.emit(self._title_id) + + def get_selected_instances(self): + if self._context_validation: + return [CONTEXT_ID] + sel_model = self._instances_view.selectionModel() + return [ + index.data(INSTANCE_ID_ROLE) + for index in sel_model.selectedIndexes() + if index.isValid() + ] + + def get_available_instances(self): + return list(self._instance_ids) + + +class ValidationArtistMessage(QtWidgets.QWidget): + def __init__(self, message, parent): + super(ValidationArtistMessage, self).__init__(parent) + + artist_msg_label = QtWidgets.QLabel(message, self) + artist_msg_label.setAlignment(QtCore.Qt.AlignCenter) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget( + artist_msg_label, 1, QtCore.Qt.AlignCenter + ) + + +class ValidationErrorsView(QtWidgets.QWidget): + selection_changed = QtCore.Signal() + + def __init__(self, parent): + super(ValidationErrorsView, self).__init__(parent) + + errors_scroll = VerticalScrollArea(self) + errors_scroll.setWidgetResizable(True) + + errors_widget = QtWidgets.QWidget(errors_scroll) + errors_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + errors_scroll.setWidget(errors_widget) + + errors_layout = QtWidgets.QVBoxLayout(errors_widget) + errors_layout.setContentsMargins(0, 0, 0, 0) + 
+ layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(errors_scroll, 1) + + self._errors_widget = errors_widget + self._errors_layout = errors_layout + self._title_widgets = {} + self._previous_select = None + + def _clear(self): + """Delete all dynamic widgets and hide all wrappers.""" + + self._title_widgets = {} + self._previous_select = None + while self._errors_layout.count(): + item = self._errors_layout.takeAt(0) + widget = item.widget() + if widget: + widget.deleteLater() + + def set_errors(self, grouped_error_items): + """Set errors into context and created titles. + + Args: + validation_error_report (PublishValidationErrorsReport): Report + with information about validation errors and publish plugin + actions. + """ + + self._clear() + + first_id = None + for title_item in grouped_error_items: + title_id = title_item["id"] + if first_id is None: + first_id = title_id + widget = ValidationErrorTitleWidget(title_id, title_item, self) + widget.selected.connect(self._on_select) + widget.instance_changed.connect(self._on_instance_change) + self._errors_layout.addWidget(widget) + self._title_widgets[title_id] = widget + + self._errors_layout.addStretch(1) + + if first_id: + self._title_widgets[first_id].set_selected(True) + else: + self.selection_changed.emit() + + self.updateGeometry() + + def _on_select(self, title_id): + if self._previous_select: + if self._previous_select.id == title_id: + return + self._previous_select.set_selected(False) + + self._previous_select = self._title_widgets[title_id] + self.selection_changed.emit() + + def _on_instance_change(self, title_id): + if self._previous_select and self._previous_select.id != title_id: + self._title_widgets[title_id].set_selected(True) + else: + self.selection_changed.emit() + + def get_selected_items(self): + if not self._previous_select: + return None, [] + + title_id = self._previous_select.id + instance_ids = self._previous_select.get_selected_instances() + if not instance_ids: + instance_ids = self._previous_select.get_available_instances() + return title_id, instance_ids + + +# ----- Publish instance report ----- +class _InstanceItem: + """Publish instance item for report UI. + + Contains only data related to an instance in publishing. Has implemented + sorting methods and prepares information, e.g. if contains error or + warnings. 
+ """ + + _attrs = ( + "creator_identifier", + "family", + "label", + "name", + ) + + def __init__( + self, + instance_id, + creator_identifier, + family, + name, + label, + exists, + logs, + errored, + warned + ): + self.id = instance_id + self.creator_identifier = creator_identifier + self.family = family + self.name = name + self.label = label + self.exists = exists + self.logs = logs + self.errored = errored + self.warned = warned + + def __eq__(self, other): + for attr in self._attrs: + if getattr(self, attr) != getattr(other, attr): + return False + return True + + def __ne__(self, other): + return not self.__eq__(other) + + def __gt__(self, other): + for attr in self._attrs: + self_value = getattr(self, attr) + other_value = getattr(other, attr) + if self_value == other_value: + continue + values = [self_value, other_value] + values.sort() + return values[0] == other_value + return None + + def __lt__(self, other): + for attr in self._attrs: + self_value = getattr(self, attr) + other_value = getattr(other, attr) + if self_value == other_value: + continue + if self_value is None: + return False + if other_value is None: + return True + values = [self_value, other_value] + values.sort() + return values[0] == self_value + return None + + def __ge__(self, other): + if self == other: + return True + return self.__gt__(other) + + def __le__(self, other): + if self == other: + return True + return self.__lt__(other) + + @classmethod + def from_report(cls, instance_id, instance_data, logs): + errored, warned = cls.extract_basic_log_info(logs) + + return cls( + instance_id, + instance_data["creator_identifier"], + instance_data["family"], + instance_data["name"], + instance_data["label"], + instance_data["exists"], + logs, + errored, + warned, + ) + + @classmethod + def create_context_item(cls, context_label, logs): + errored, warned = cls.extract_basic_log_info(logs) + return cls( + CONTEXT_ID, + None, + "", + CONTEXT_LABEL, + context_label, + True, + logs, + errored, + warned + ) + + @staticmethod + def extract_basic_log_info(logs): + warned = False + errored = False + for log in logs: + if log["type"] == "error": + errored = True + elif log["type"] == "record": + level_no = log["levelno"] + if level_no and level_no >= logging.WARNING: + warned = True + + if warned and errored: + break + return errored, warned + + +class FamilyGroupLabel(QtWidgets.QWidget): + def __init__(self, family, parent): + super(FamilyGroupLabel, self).__init__(parent) + + self.setLayoutDirection(QtCore.Qt.LeftToRight) + + label_widget = QtWidgets.QLabel(family, self) + + line_widget = QtWidgets.QWidget(self) + line_widget.setObjectName("Separator") + line_widget.setMinimumHeight(2) + line_widget.setMaximumHeight(2) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setAlignment(QtCore.Qt.AlignVCenter) + main_layout.setSpacing(10) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(label_widget, 0) + main_layout.addWidget(line_widget, 1) + + +class PublishInstanceCardWidget(BaseClickableFrame): + selection_requested = QtCore.Signal(str) + + _warning_pix = None + _error_pix = None + _success_pix = None + _in_progress_pix = None + + def __init__(self, instance, icon, publish_finished, parent): + super(PublishInstanceCardWidget, self).__init__(parent) + + self.setObjectName("CardViewWidget") + + icon_widget = IconValuePixmapLabel(icon, self) + icon_widget.setObjectName("FamilyIconLabel") + + label_widget = QtWidgets.QLabel(instance.label, self) + + if instance.errored: + state_pix = 
self.get_error_pix() + elif instance.warned: + state_pix = self.get_warning_pix() + elif publish_finished: + state_pix = self.get_success_pix() + else: + state_pix = self.get_in_progress_pix() + + state_label = IconValuePixmapLabel(state_pix, self) + + layout = QtWidgets.QHBoxLayout(self) + layout.setContentsMargins(10, 7, 10, 7) + layout.addWidget(icon_widget, 0) + layout.addWidget(label_widget, 1) + layout.addWidget(state_label, 0) + + # Change direction -> parent is scroll area where scrolls are on + # left side + self.setLayoutDirection(QtCore.Qt.LeftToRight) + + self._id = instance.id + + self._selected = False + + self._update_style_state() + + @classmethod + def _prepare_pixes(cls): + publisher_colors = get_objected_colors("publisher") + cls._warning_pix = paint_image_with_color( + get_image("warning"), + publisher_colors["warning"].get_qcolor() + ) + cls._error_pix = paint_image_with_color( + get_image("error"), + publisher_colors["error"].get_qcolor() + ) + cls._success_pix = paint_image_with_color( + get_image("success"), + publisher_colors["success"].get_qcolor() + ) + cls._in_progress_pix = paint_image_with_color( + get_image("success"), + publisher_colors["progress"].get_qcolor() + ) + + @classmethod + def get_warning_pix(cls): + if cls._warning_pix is None: + cls._prepare_pixes() + return cls._warning_pix + + @classmethod + def get_error_pix(cls): + if cls._error_pix is None: + cls._prepare_pixes() + return cls._error_pix + + @classmethod + def get_success_pix(cls): + if cls._success_pix is None: + cls._prepare_pixes() + return cls._success_pix + + @classmethod + def get_in_progress_pix(cls): + if cls._in_progress_pix is None: + cls._prepare_pixes() + return cls._in_progress_pix + + @property + def id(self): + """Id of card. + + Returns: + str: Id of item. + """ + + return self._id + + @property + def is_selected(self): + """Is card selected. + + Returns: + bool: Item widget is marked as selected. + """ + + return self._selected + + def set_selected(self, selected): + """Set card as selected. + + Args: + selected (bool): Item should be marked as selected. 
+ """ + + if selected == self._selected: + return + self._selected = selected + self._update_style_state() + + def _update_style_state(self): + state = "" + if self._selected: + state = "selected" + + self.setProperty("state", state) + self.style().polish(self) + + def _mouse_release_callback(self): + """Trigger selected signal.""" + + self.selection_requested.emit(self.id) + + +class PublishInstancesViewWidget(QtWidgets.QWidget): + # Sane minimum width of instance cards - size calulated using font metrics + _min_width_measure_string = 24 * "O" + selection_changed = QtCore.Signal() + + def __init__(self, controller, parent): + super(PublishInstancesViewWidget, self).__init__(parent) + + scroll_area = VerticalScrollArea(self) + scroll_area.setWidgetResizable(True) + scroll_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + scroll_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded) + scrollbar_bg = scroll_area.verticalScrollBar().parent() + if scrollbar_bg: + scrollbar_bg.setAttribute(QtCore.Qt.WA_TranslucentBackground) + scroll_area.setViewportMargins(0, 0, 0, 0) + + instance_view = QtWidgets.QWidget(scroll_area) + + scroll_area.setWidget(instance_view) + + instance_layout = QtWidgets.QVBoxLayout(instance_view) + instance_layout.setContentsMargins(0, 0, 0, 0) + instance_layout.addStretch(1) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(scroll_area, 1) + + self._controller = controller + self._scroll_area = scroll_area + self._instance_view = instance_view + self._instance_layout = instance_layout + + self._context_widget = None + + self._widgets_by_instance_id = {} + self._group_widgets = [] + self._ordered_widgets = [] + + self._explicitly_selected_instance_ids = [] + + self.setSizePolicy( + QtWidgets.QSizePolicy.Minimum, + self.sizePolicy().verticalPolicy() + ) + + def sizeHint(self): + """Modify sizeHint based on visibility of scroll bars.""" + # Calculate width hint by content widget and vertical scroll bar + scroll_bar = self._scroll_area.verticalScrollBar() + view_size = self._instance_view.sizeHint().width() + fm = self._instance_view.fontMetrics() + width = ( + max(view_size, fm.width(self._min_width_measure_string)) + + scroll_bar.sizeHint().width() + ) + + result = super(PublishInstancesViewWidget, self).sizeHint() + result.setWidth(width) + return result + + def _get_selected_widgets(self): + return [ + widget + for widget in self._ordered_widgets + if widget.is_selected + ] + + def get_selected_instance_ids(self): + return [ + widget.id + for widget in self._get_selected_widgets() + ] + + def clear(self): + """Remove actions from widget.""" + while self._instance_layout.count(): + item = self._instance_layout.takeAt(0) + widget = item.widget() + if widget: + widget.setVisible(False) + widget.deleteLater() + self._ordered_widgets = [] + self._group_widgets = [] + self._widgets_by_instance_id = {} + + def update_instances(self, instance_items): + self.clear() + identifiers = { + instance_item.creator_identifier + for instance_item in instance_items + } + identifier_icons = { + identifier: self._controller.get_creator_icon(identifier) + for identifier in identifiers + } + + widgets = [] + group_widgets = [] + + publish_finished = ( + self._controller.publish_has_crashed + or self._controller.publish_has_validation_errors + or self._controller.publish_has_finished + ) + instances_by_family = collections.defaultdict(list) + for instance_item in instance_items: + if not instance_item.exists: + continue + 
instances_by_family[instance_item.family].append(instance_item) + + sorted_by_family = sorted( + instances_by_family.items(), key=lambda i: i[0] + ) + for family, instance_items in sorted_by_family: + # Only instance without family is context + if family: + group_widget = FamilyGroupLabel(family, self._instance_view) + self._instance_layout.addWidget(group_widget, 0) + group_widgets.append(group_widget) + + sorted_items = sorted(instance_items, key=lambda i: i.label) + for instance_item in sorted_items: + icon = identifier_icons[instance_item.creator_identifier] + + widget = PublishInstanceCardWidget( + instance_item, icon, publish_finished, self._instance_view + ) + widget.selection_requested.connect(self._on_selection_request) + self._instance_layout.addWidget(widget, 0) + + widgets.append(widget) + self._widgets_by_instance_id[widget.id] = widget + self._instance_layout.addStretch(1) + self._ordered_widgets = widgets + self._group_widgets = group_widgets + + def _on_selection_request(self, instance_id): + instance_widget = self._widgets_by_instance_id[instance_id] + selected_widgets = self._get_selected_widgets() + if instance_widget in selected_widgets: + instance_widget.set_selected(False) + else: + instance_widget.set_selected(True) + for widget in selected_widgets: + widget.set_selected(False) + self.selection_changed.emit() + + +class LogIconFrame(QtWidgets.QFrame): + """Draw log item icon next to message. + + Todos: + Paint event could be slow, maybe we could cache the image into pixmaps + so each item does not have to redraw it again. + """ + + info_color = QtGui.QColor("#ffffff") + error_color = QtGui.QColor("#ff4a4a") + level_to_color = dict(( + (10, QtGui.QColor("#ff66e8")), + (20, QtGui.QColor("#66abff")), + (30, QtGui.QColor("#ffba66")), + (40, QtGui.QColor("#ff4d58")), + (50, QtGui.QColor("#ff4f75")), + )) + _error_pix = None + _validation_error_pix = None + + def __init__(self, parent, log_type, log_level, is_validation_error): + super(LogIconFrame, self).__init__(parent) + + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + self._is_record = log_type == "record" + self._is_error = log_type == "error" + self._is_validation_error = bool(is_validation_error) + self._log_color = self.level_to_color.get(log_level) + + @classmethod + def get_validation_error_icon(cls): + if cls._validation_error_pix is None: + cls._validation_error_pix = get_pixmap("warning") + return cls._validation_error_pix + + @classmethod + def get_error_icon(cls): + if cls._error_pix is None: + cls._error_pix = get_pixmap("error") + return cls._error_pix + + def minimumSizeHint(self): + fm = self.fontMetrics() + size = fm.height() + return QtCore.QSize(size, size) + + def paintEvent(self, event): + painter = QtGui.QPainter(self) + painter.setRenderHints( + QtGui.QPainter.Antialiasing + | QtGui.QPainter.SmoothPixmapTransform + ) + painter.setPen(QtCore.Qt.NoPen) + rect = self.rect() + new_size = min(rect.width(), rect.height()) + new_rect = QtCore.QRect(1, 1, new_size - 2, new_size - 2) + if self._is_error: + if self._is_validation_error: + error_icon = self.get_validation_error_icon() + else: + error_icon = self.get_error_icon() + scaled_error_icon = error_icon.scaled( + new_rect.size(), + QtCore.Qt.KeepAspectRatio, + QtCore.Qt.SmoothTransformation + ) + painter.drawPixmap(new_rect, scaled_error_icon) + + else: + if self._is_record: + color = self._log_color + else: + color = QtGui.QColor(255, 255, 255) + painter.setBrush(color) + painter.drawEllipse(new_rect) + painter.end() + + +class 
LogItemWidget(QtWidgets.QWidget): + log_level_to_flag = { + 10: LOG_DEBUG_VISIBLE, + 20: LOG_INFO_VISIBLE, + 30: LOG_WARNING_VISIBLE, + 40: LOG_ERROR_VISIBLE, + 50: LOG_CRITICAL_VISIBLE, + } + + def __init__(self, log, parent): + super(LogItemWidget, self).__init__(parent) + + type_flag, level_n = self._get_log_info(log) + icon_label = LogIconFrame( + self, log["type"], level_n, log.get("is_validation_error")) + message_label = QtWidgets.QLabel(log["msg"].rstrip(), self) + message_label.setObjectName("PublishLogMessage") + message_label.setTextInteractionFlags( + QtCore.Qt.TextBrowserInteraction) + message_label.setCursor(QtGui.QCursor(QtCore.Qt.IBeamCursor)) + message_label.setWordWrap(True) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.setSpacing(8) + main_layout.addWidget(icon_label, 0) + main_layout.addWidget(message_label, 1) + + self._type_flag = type_flag + self._plugin_id = log["plugin_id"] + self._log_type_filtered = False + self._plugin_filtered = False + + @property + def type_flag(self): + return self._type_flag + + @property + def plugin_id(self): + return self._plugin_id + + def _get_log_info(self, log): + log_type = log["type"] + if log_type == "error": + return ERROR_VISIBLE, None + + if log_type != "record": + return INFO_VISIBLE, None + + level_n = log["levelno"] + if level_n < 10: + level_n = 10 + elif level_n % 10 != 0: + level_n -= (level_n % 10) + 10 + + flag = self.log_level_to_flag.get(level_n, LOG_CRITICAL_VISIBLE) + return flag, level_n + + def _update_visibility(self): + self.setVisible( + not self._log_type_filtered + and not self._plugin_filtered + ) + + def set_log_type_filtered(self, filtered): + if filtered is self._log_type_filtered: + return + self._log_type_filtered = filtered + self._update_visibility() + + def set_plugin_filtered(self, filtered): + if filtered is self._plugin_filtered: + return + self._plugin_filtered = filtered + self._update_visibility() + + +class LogsWithIconsView(QtWidgets.QWidget): + """Show logs in a grid with 2 columns. + + First column is for icon second is for message. + + Todos: + Add filtering by type (exception, debug, info, etc.). 
+ """ + + def __init__(self, logs, parent): + super(LogsWithIconsView, self).__init__(parent) + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + logs_layout = QtWidgets.QVBoxLayout(self) + logs_layout.setContentsMargins(0, 0, 0, 0) + logs_layout.setSpacing(4) + + widgets_by_flag = collections.defaultdict(list) + widgets_by_plugins_id = collections.defaultdict(list) + + for log in logs: + widget = LogItemWidget(log, self) + widgets_by_flag[widget.type_flag].append(widget) + widgets_by_plugins_id[widget.plugin_id].append(widget) + logs_layout.addWidget(widget, 0) + + self._widgets_by_flag = widgets_by_flag + self._widgets_by_plugins_id = widgets_by_plugins_id + + self._visibility_by_flags = { + LOG_DEBUG_VISIBLE: True, + LOG_INFO_VISIBLE: True, + LOG_WARNING_VISIBLE: True, + LOG_ERROR_VISIBLE: True, + LOG_CRITICAL_VISIBLE: True, + ERROR_VISIBLE: True, + INFO_VISIBLE: True, + } + self._flags_filter = sum(self._visibility_by_flags.keys()) + self._plugin_ids_filter = None + + def _update_flags_filtering(self): + for flag in ( + LOG_DEBUG_VISIBLE, + LOG_INFO_VISIBLE, + LOG_WARNING_VISIBLE, + LOG_ERROR_VISIBLE, + LOG_CRITICAL_VISIBLE, + ERROR_VISIBLE, + INFO_VISIBLE, + ): + visible = (self._flags_filter & flag) != 0 + if visible is not self._visibility_by_flags[flag]: + self._visibility_by_flags[flag] = visible + for widget in self._widgets_by_flag[flag]: + widget.set_log_type_filtered(not visible) + + def _update_plugin_filtering(self): + if self._plugin_ids_filter is None: + for widgets in self._widgets_by_plugins_id.values(): + for widget in widgets: + widget.set_plugin_filtered(False) + + else: + for plugin_id, widgets in self._widgets_by_plugins_id.items(): + filtered = plugin_id not in self._plugin_ids_filter + for widget in widgets: + widget.set_plugin_filtered(filtered) + + def set_log_filters(self, visibility_filter, plugin_ids): + if self._flags_filter != visibility_filter: + self._flags_filter = visibility_filter + self._update_flags_filtering() + + if self._plugin_ids_filter != plugin_ids: + if plugin_ids is not None: + plugin_ids = set(plugin_ids) + self._plugin_ids_filter = plugin_ids + self._update_plugin_filtering() + + +class InstanceLogsWidget(QtWidgets.QWidget): + """Widget showing logs of one publish instance. + + Args: + instance (_InstanceItem): Item of instance used as data source. + parent (QtWidgets.QWidget): Parent widget. + """ + + def __init__(self, instance, parent): + super(InstanceLogsWidget, self).__init__(parent) + + self.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + label_widget = QtWidgets.QLabel(instance.label, self) + label_widget.setObjectName("PublishInstanceLogsLabel") + logs_grid = LogsWithIconsView(instance.logs, self) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(label_widget, 0) + layout.addWidget(logs_grid, 0) + + self._logs_grid = logs_grid + + def set_log_filters(self, visibility_filter, plugin_ids): + """Change logs filter. + + Args: + visibility_filter (int): Number contained of flags for each log + type and level. + plugin_ids (Iterable[str]): Plugin ids to which are logs filtered. 
+ """ + + self._logs_grid.set_log_filters(visibility_filter, plugin_ids) + + +class InstancesLogsView(QtWidgets.QFrame): + """Publish instances logs view widget.""" + + def __init__(self, parent): + super(InstancesLogsView, self).__init__(parent) + self.setObjectName("InstancesLogsView") + + scroll_area = QtWidgets.QScrollArea(self) + scroll_area.setWidgetResizable(True) + scroll_area.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + scroll_area.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded) + scroll_area.setAttribute(QtCore.Qt.WA_TranslucentBackground) + scrollbar_bg = scroll_area.verticalScrollBar().parent() + if scrollbar_bg: + scrollbar_bg.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + content_wrap_widget = QtWidgets.QWidget(scroll_area) + content_wrap_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + content_widget = QtWidgets.QWidget(content_wrap_widget) + content_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + content_layout = QtWidgets.QVBoxLayout(content_widget) + content_layout.setSpacing(15) + + scroll_area.setWidget(content_wrap_widget) + + content_wrap_layout = QtWidgets.QVBoxLayout(content_wrap_widget) + content_wrap_layout.setContentsMargins(0, 0, 0, 0) + content_wrap_layout.addWidget(content_widget, 0) + content_wrap_layout.addStretch(1) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(scroll_area, 1) + + self._visible_filters = ( + LOG_INFO_VISIBLE + | LOG_WARNING_VISIBLE + | LOG_ERROR_VISIBLE + | LOG_CRITICAL_VISIBLE + | ERROR_VISIBLE + | INFO_VISIBLE + ) + + self._content_widget = content_widget + self._content_layout = content_layout + + self._instances_order = [] + self._instances_by_id = {} + self._views_by_instance_id = {} + self._is_showed = False + self._clear_needed = False + self._update_needed = False + self._instance_ids_filter = [] + self._plugin_ids_filter = None + + def showEvent(self, event): + super(InstancesLogsView, self).showEvent(event) + self._is_showed = True + self._update_instances() + + def hideEvent(self, event): + super(InstancesLogsView, self).hideEvent(event) + self._is_showed = False + + def closeEvent(self, event): + super(InstancesLogsView, self).closeEvent(event) + self._is_showed = False + + def _update_instances(self): + if not self._is_showed: + return + + if self._clear_needed: + self._clear_widgets() + self._clear_needed = False + + if not self._update_needed: + return + self._update_needed = False + + instance_ids = self._instance_ids_filter + to_hide = set() + if not instance_ids: + instance_ids = self._instances_by_id + else: + to_hide = set(self._instances_by_id) - set(instance_ids) + + for instance_id in instance_ids: + widget = self._views_by_instance_id.get(instance_id) + if widget is None: + instance = self._instances_by_id[instance_id] + widget = InstanceLogsWidget(instance, self._content_widget) + self._views_by_instance_id[instance_id] = widget + self._content_layout.addWidget(widget, 0) + + widget.setVisible(True) + widget.set_log_filters( + self._visible_filters, self._plugin_ids_filter + ) + + for instance_id in to_hide: + widget = self._views_by_instance_id.get(instance_id) + if widget is not None: + widget.setVisible(False) + + def _clear_widgets(self): + """Remove all widgets from layout and from cache.""" + + while self._content_layout.count(): + item = self._content_layout.takeAt(0) + widget = item.widget() + if widget: + widget.setVisible(False) + widget.deleteLater() + self._views_by_instance_id = {} + + def 
update_instances(self, instances): + """Update publish instance from report. + + Args: + instances (list[_InstanceItem]): Instance data from report. + """ + + self._instances_order = [ + instance.id for instance in instances + ] + self._instances_by_id = { + instance.id: instance + for instance in instances + } + self._instance_ids_filter = [] + self._plugin_ids_filter = None + self._clear_needed = True + self._update_needed = True + self._update_instances() + + def set_instances_filter(self, instance_ids=None): + """Set instance filter. + + Args: + instance_ids (Optional[list[str]]): List of instances to keep + visible. Pass empty list to hide all items. + """ + + self._instance_ids_filter = instance_ids + self._update_needed = True + self._update_instances() + + def set_plugins_filter(self, plugin_ids=None): + if self._plugin_ids_filter == plugin_ids: + return + self._plugin_ids_filter = plugin_ids + self._update_needed = True + self._update_instances() + + +class CrashWidget(QtWidgets.QWidget): + """Widget shown when publishing crashes. + + Contains only minimal information for artist with easy access to report + actions. + """ + + def __init__(self, controller, parent): + super(CrashWidget, self).__init__(parent) + + main_label = QtWidgets.QLabel("This is not your fault", self) + main_label.setAlignment(QtCore.Qt.AlignCenter) + main_label.setObjectName("PublishCrashMainLabel") + + report_label = QtWidgets.QLabel( + ( + "Please report the error to your pipeline support" + " using one of the options below." + ), + self + ) + report_label.setAlignment(QtCore.Qt.AlignCenter) + report_label.setWordWrap(True) + report_label.setObjectName("PublishCrashReportLabel") + + btns_widget = QtWidgets.QWidget(self) + copy_clipboard_btn = QtWidgets.QPushButton( + "Copy to clipboard", btns_widget) + save_to_disk_btn = QtWidgets.QPushButton( + "Save to disk", btns_widget) + + btns_layout = QtWidgets.QHBoxLayout(btns_widget) + btns_layout.addStretch(1) + btns_layout.addWidget(copy_clipboard_btn, 0) + btns_layout.addSpacing(20) + btns_layout.addWidget(save_to_disk_btn, 0) + btns_layout.addStretch(1) + + layout = QtWidgets.QVBoxLayout(self) + layout.addStretch(1) + layout.addWidget(main_label, 0) + layout.addSpacing(20) + layout.addWidget(report_label, 0) + layout.addSpacing(20) + layout.addWidget(btns_widget, 0) + layout.addStretch(2) + + copy_clipboard_btn.clicked.connect(self._on_copy_to_clipboard) + save_to_disk_btn.clicked.connect(self._on_save_to_disk_click) + + self._controller = controller + + def _on_copy_to_clipboard(self): + self._controller.event_system.emit( + "copy_report.request", {}, "report_page") + + def _on_save_to_disk_click(self): + self._controller.event_system.emit( + "export_report.request", {}, "report_page") + + +class ErrorDetailsWidget(QtWidgets.QWidget): + def __init__(self, parent): + super(ErrorDetailsWidget, self).__init__(parent) + + inputs_widget = QtWidgets.QWidget(self) + # Error 'Description' input + error_description_input = ExpandingTextEdit(inputs_widget) + error_description_input.setObjectName("InfoText") + error_description_input.setTextInteractionFlags( + QtCore.Qt.TextBrowserInteraction + ) + + # Error 'Details' widget -> Collapsible + error_details_widget = QtWidgets.QWidget(inputs_widget) + + error_details_top = ClickableFrame(error_details_widget) + + error_details_expand_btn = ClassicExpandBtn(error_details_top) + error_details_expand_label = QtWidgets.QLabel( + "Details", error_details_top) + + line_widget = SeparatorWidget(1, parent=error_details_top) + + 
error_details_top_l = QtWidgets.QHBoxLayout(error_details_top) + error_details_top_l.setContentsMargins(0, 0, 10, 0) + error_details_top_l.addWidget(error_details_expand_btn, 0) + error_details_top_l.addWidget(error_details_expand_label, 0) + error_details_top_l.addWidget(line_widget, 1) + + error_details_input = ExpandingTextEdit(error_details_widget) + error_details_input.setObjectName("InfoText") + error_details_input.setTextInteractionFlags( + QtCore.Qt.TextBrowserInteraction + ) + error_details_input.setVisible(not error_details_expand_btn.collapsed) + + error_details_layout = QtWidgets.QVBoxLayout(error_details_widget) + error_details_layout.setContentsMargins(0, 0, 0, 0) + error_details_layout.addWidget(error_details_top, 0) + error_details_layout.addWidget(error_details_input, 0) + error_details_layout.addStretch(1) + + # Description and Details layout + inputs_layout = QtWidgets.QVBoxLayout(inputs_widget) + inputs_layout.setContentsMargins(0, 0, 0, 0) + inputs_layout.setSpacing(10) + inputs_layout.addWidget(error_description_input, 0) + inputs_layout.addWidget(error_details_widget, 1) + + main_layout = QtWidgets.QHBoxLayout(self) + main_layout.setContentsMargins(0, 0, 0, 0) + main_layout.addWidget(inputs_widget, 1) + + error_details_top.clicked.connect(self._on_detail_toggle) + + self._error_details_widget = error_details_widget + self._error_description_input = error_description_input + self._error_details_expand_btn = error_details_expand_btn + self._error_details_input = error_details_input + + def _on_detail_toggle(self): + self._error_details_expand_btn.set_collapsed() + self._error_details_input.setVisible( + not self._error_details_expand_btn.collapsed) + + def set_error_item(self, error_item): + detail = "" + description = "" + if error_item: + description = error_item.description or description + detail = error_item.detail or detail + + if commonmark: + self._error_description_input.setHtml( + commonmark.commonmark(description) + ) + self._error_details_input.setHtml( + commonmark.commonmark(detail) + ) + + elif hasattr(self._error_details_input, "setMarkdown"): + self._error_description_input.setMarkdown(description) + self._error_details_input.setMarkdown(detail) + + else: + self._error_description_input.setText(description) + self._error_details_input.setText(detail) + + self._error_details_widget.setVisible(bool(detail)) + + +class ReportsWidget(QtWidgets.QWidget): + """ + # Crash layout + ┌──────┬─────────┬─────────┐ + │Views │ Logs │ Details │ + │ │ │ │ + │ │ │ │ + └──────┴─────────┴─────────┘ + # Success layout + ┌──────┬───────────────────┐ + │View │ Logs │ + │ │ │ + │ │ │ + └──────┴───────────────────┘ + # Validation errors layout + ┌──────┬─────────┬─────────┐ + │Views │ Actions │ │ + │ ├─────────┤ Details │ + │ │ Logs │ │ + │ │ │ │ + └──────┴─────────┴─────────┘ + """ + + def __init__(self, controller, parent): + super(ReportsWidget, self).__init__(parent) + + # Instances view + views_widget = QtWidgets.QWidget(self) + + instances_view = PublishInstancesViewWidget(controller, views_widget) + + validation_error_view = ValidationErrorsView(views_widget) + + views_layout = QtWidgets.QStackedLayout(views_widget) + views_layout.setContentsMargins(0, 0, 0, 0) + views_layout.addWidget(instances_view) + views_layout.addWidget(validation_error_view) + + views_layout.setCurrentWidget(instances_view) + + # Error description with actions and optional detail + details_widget = QtWidgets.QFrame(self) + details_widget.setObjectName("PublishInstancesDetails") + + # Actions 
widget + actions_widget = ValidateActionsWidget(controller, details_widget) + + pages_widget = QtWidgets.QWidget(details_widget) + + # Logs view + logs_view = InstancesLogsView(pages_widget) + + # Validation details + # Description and details inputs are in scroll + # - single scroll for both inputs, they are forced to not use theirs + detail_inputs_spacer = QtWidgets.QWidget(pages_widget) + detail_inputs_spacer.setMinimumWidth(30) + detail_inputs_spacer.setMaximumWidth(30) + + detail_input_scroll = QtWidgets.QScrollArea(pages_widget) + + detail_inputs_widget = ErrorDetailsWidget(detail_input_scroll) + detail_inputs_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) + + detail_input_scroll.setWidget(detail_inputs_widget) + detail_input_scroll.setWidgetResizable(True) + detail_input_scroll.setViewportMargins(0, 0, 0, 0) + + # Crash information + crash_widget = CrashWidget(controller, details_widget) + + # Layout pages + pages_layout = QtWidgets.QHBoxLayout(pages_widget) + pages_layout.setContentsMargins(0, 0, 0, 0) + pages_layout.addWidget(logs_view, 1) + pages_layout.addWidget(detail_inputs_spacer, 0) + pages_layout.addWidget(detail_input_scroll, 1) + pages_layout.addWidget(crash_widget, 1) + + details_layout = QtWidgets.QVBoxLayout(details_widget) + margins = details_layout.contentsMargins() + margins.setTop(margins.top() * 2) + margins.setBottom(margins.bottom() * 2) + details_layout.setContentsMargins(margins) + details_layout.setSpacing(margins.top()) + details_layout.addWidget(actions_widget, 0) + details_layout.addWidget(pages_widget, 1) + + content_layout = QtWidgets.QHBoxLayout(self) + content_layout.setContentsMargins(0, 0, 0, 0) + content_layout.addWidget(views_widget, 0) + content_layout.addWidget(details_widget, 1) + + instances_view.selection_changed.connect(self._on_instance_selection) + validation_error_view.selection_changed.connect( + self._on_error_selection) + + self._views_layout = views_layout + self._instances_view = instances_view + self._validation_error_view = validation_error_view + + self._actions_widget = actions_widget + self._detail_inputs_widget = detail_inputs_widget + self._logs_view = logs_view + self._detail_inputs_spacer = detail_inputs_spacer + self._detail_input_scroll = detail_input_scroll + self._crash_widget = crash_widget + + self._controller = controller + + self._validation_errors_by_id = {} + + def _get_instance_items(self): + report = self._controller.get_publish_report() + context_label = report["context"]["label"] or CONTEXT_LABEL + instances_by_id = report["instances"] + plugins_info = report["plugins_data"] + logs_by_instance_id = collections.defaultdict(list) + for plugin_info in plugins_info: + plugin_id = plugin_info["id"] + for instance_info in plugin_info["instances_data"]: + instance_id = instance_info["id"] or CONTEXT_ID + for log in instance_info["logs"]: + log["plugin_id"] = plugin_id + logs_by_instance_id[instance_id].extend(instance_info["logs"]) + + context_item = _InstanceItem.create_context_item( + context_label, logs_by_instance_id[CONTEXT_ID]) + instance_items = [ + _InstanceItem.from_report( + instance_id, instance, logs_by_instance_id[instance_id] + ) + for instance_id, instance in instances_by_id.items() + if instance["exists"] + ] + instance_items.sort() + instance_items.insert(0, context_item) + return instance_items + + def update_data(self): + view = self._instances_view + validation_error_mode = False + if ( + not self._controller.publish_has_crashed + and self._controller.publish_has_validation_errors + ): + 
view = self._validation_error_view
+            validation_error_mode = True
+
+        self._actions_widget.set_visible_mode(validation_error_mode)
+        self._detail_inputs_spacer.setVisible(validation_error_mode)
+        self._detail_input_scroll.setVisible(validation_error_mode)
+        self._views_layout.setCurrentWidget(view)
+
+        self._crash_widget.setVisible(self._controller.publish_has_crashed)
+        self._logs_view.setVisible(not self._controller.publish_has_crashed)
+
+        # Instance view & logs update
+        instance_items = self._get_instance_items()
+        self._instances_view.update_instances(instance_items)
+        self._logs_view.update_instances(instance_items)
+
+        # Validation errors
+        validation_errors = self._controller.get_validation_errors()
+        grouped_error_items = validation_errors.group_items_by_title()
+
+        validation_errors_by_id = {
+            title_item["id"]: title_item
+            for title_item in grouped_error_items
+        }
+
+        self._validation_errors_by_id = validation_errors_by_id
+        self._validation_error_view.set_errors(grouped_error_items)
+
+    def _on_instance_selection(self):
+        instance_ids = self._instances_view.get_selected_instance_ids()
+        self._logs_view.set_instances_filter(instance_ids)
+
+    def _on_error_selection(self):
+        title_id, instance_ids = (
+            self._validation_error_view.get_selected_items())
+        error_info = self._validation_errors_by_id.get(title_id)
+        if error_info is None:
+            self._actions_widget.set_error_info(None)
+            self._detail_inputs_widget.set_error_item(None)
+            return
+
+        self._logs_view.set_instances_filter(instance_ids)
+        self._logs_view.set_plugins_filter([error_info["plugin_id"]])
+
+        match_error_item = None
+        for error_item in error_info["error_items"]:
+            instance_id = error_item.instance_id or CONTEXT_ID
+            if instance_id in instance_ids:
+                match_error_item = error_item
+                break
+
+        self._actions_widget.set_error_info(error_info)
+        self._detail_inputs_widget.set_error_item(match_error_item)
+
+
+class ReportPageWidget(QtWidgets.QFrame):
+    """Widget showing the publish report for the artist.
+
+    There are 5 possible states:
+    1. Publishing did not start yet.         > Only label.
+    2. Publishing is paused.                 ┐
+    3. Publishing successfully finished.     │> Instances with logs.
+    4. Publishing crashed.                   ┘
+    5. Crashed because of validation error.  > Errors with logs.
+
+    Validation errors are shown if they happened during the validation part.
+
+    Shows validation error titles with the instances on which they happened
+    and the validation error detail with possible actions (repair).
+ """ + + def __init__(self, controller, parent): + super(ReportPageWidget, self).__init__(parent) + + header_label = QtWidgets.QLabel(self) + header_label.setAlignment(QtCore.Qt.AlignCenter) + header_label.setObjectName("PublishReportHeader") + + publish_instances_widget = ReportsWidget(controller, self) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(header_label, 0) + layout.addWidget(publish_instances_widget, 0) + + controller.event_system.add_callback( + "publish.process.started", self._on_publish_start + ) + controller.event_system.add_callback( + "publish.reset.finished", self._on_publish_reset + ) + controller.event_system.add_callback( + "publish.process.stopped", self._on_publish_stop + ) + + self._header_label = header_label + self._publish_instances_widget = publish_instances_widget + + self._controller = controller + + def _update_label(self): + if not self._controller.publish_has_started: + # This probably never happen when this widget is visible + header_label = "Nothing to report until you run publish" + elif self._controller.publish_has_crashed: + header_label = "Publish error report" + elif self._controller.publish_has_validation_errors: + header_label = "Publish validation report" + elif self._controller.publish_has_finished: + header_label = "Publish success report" + else: + header_label = "Publish report" + self._header_label.setText(header_label) + + def _update_state(self): + self._update_label() + publish_started = self._controller.publish_has_started + self._publish_instances_widget.setVisible(publish_started) + if publish_started: + self._publish_instances_widget.update_data() + + self.updateGeometry() + + def _on_publish_start(self): + self._update_state() + + def _on_publish_reset(self): + self._update_state() + + def _on_publish_stop(self): + self._update_state() diff --git a/openpype/tools/publisher/widgets/thumbnail_widget.py b/openpype/tools/publisher/widgets/thumbnail_widget.py index e234f4cdc1..b17ca0adc8 100644 --- a/openpype/tools/publisher/widgets/thumbnail_widget.py +++ b/openpype/tools/publisher/widgets/thumbnail_widget.py @@ -75,6 +75,7 @@ class ThumbnailPainterWidget(QtWidgets.QWidget): painter = QtGui.QPainter() painter.begin(self) + painter.setRenderHint(QtGui.QPainter.Antialiasing) painter.drawPixmap(0, 0, self._cached_pix) painter.end() @@ -183,6 +184,18 @@ class ThumbnailPainterWidget(QtWidgets.QWidget): backgrounded_images.append(new_pix) return backgrounded_images + def _paint_dash_line(self, painter, rect): + pen = QtGui.QPen() + pen.setWidth(1) + pen.setBrush(QtCore.Qt.darkGray) + pen.setStyle(QtCore.Qt.DashLine) + + new_rect = rect.adjusted(1, 1, -1, -1) + painter.setPen(pen) + painter.setBrush(QtCore.Qt.transparent) + # painter.drawRect(rect) + painter.drawRect(new_rect) + def _cache_pix(self): rect = self.rect() rect_width = rect.width() @@ -264,13 +277,7 @@ class ThumbnailPainterWidget(QtWidgets.QWidget): # Draw drop enabled dashes if used_default_pix: - pen = QtGui.QPen() - pen.setWidth(1) - pen.setBrush(QtCore.Qt.darkGray) - pen.setStyle(QtCore.Qt.DashLine) - final_painter.setPen(pen) - final_painter.setBrush(QtCore.Qt.transparent) - final_painter.drawRect(rect) + self._paint_dash_line(final_painter, rect) final_painter.end() diff --git a/openpype/tools/publisher/widgets/validations_widget.py b/openpype/tools/publisher/widgets/validations_widget.py deleted file mode 100644 index 0abe85c0b8..0000000000 --- a/openpype/tools/publisher/widgets/validations_widget.py +++ /dev/null @@ 
-1,715 +0,0 @@ -# -*- coding: utf-8 -*- -try: - import commonmark -except Exception: - commonmark = None - -from qtpy import QtWidgets, QtCore, QtGui - -from openpype.tools.utils import BaseClickableFrame, ClickableFrame -from .widgets import ( - IconValuePixmapLabel -) -from ..constants import ( - INSTANCE_ID_ROLE -) - - -class ValidationErrorInstanceList(QtWidgets.QListView): - """List of publish instances that caused a validation error. - - Instances are collected per plugin's validation error title. - """ - def __init__(self, *args, **kwargs): - super(ValidationErrorInstanceList, self).__init__(*args, **kwargs) - - self.setObjectName("ValidationErrorInstanceList") - - self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) - self.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) - - def minimumSizeHint(self): - return self.sizeHint() - - def sizeHint(self): - result = super(ValidationErrorInstanceList, self).sizeHint() - row_count = self.model().rowCount() - height = 0 - if row_count > 0: - height = self.sizeHintForRow(0) * row_count - result.setHeight(height) - return result - - -class ValidationErrorTitleWidget(QtWidgets.QWidget): - """Title of validation error. - - Widget is used as radio button so requires clickable functionality and - changing style on selection/deselection. - - Has toggle button to show/hide instances on which validation error happened - if there is a list (Valdation error may happen on context). - """ - - selected = QtCore.Signal(int) - instance_changed = QtCore.Signal(int) - - def __init__(self, index, error_info, parent): - super(ValidationErrorTitleWidget, self).__init__(parent) - - self._index = index - self._error_info = error_info - self._selected = False - - title_frame = ClickableFrame(self) - title_frame.setObjectName("ValidationErrorTitleFrame") - - toggle_instance_btn = QtWidgets.QToolButton(title_frame) - toggle_instance_btn.setObjectName("ArrowBtn") - toggle_instance_btn.setArrowType(QtCore.Qt.RightArrow) - toggle_instance_btn.setMaximumWidth(14) - - label_widget = QtWidgets.QLabel(error_info["title"], title_frame) - - title_frame_layout = QtWidgets.QHBoxLayout(title_frame) - title_frame_layout.addWidget(label_widget, 1) - title_frame_layout.addWidget(toggle_instance_btn, 0) - - instances_model = QtGui.QStandardItemModel() - - help_text_by_instance_id = {} - - items = [] - context_validation = False - for error_item in error_info["error_items"]: - context_validation = error_item.context_validation - if context_validation: - toggle_instance_btn.setArrowType(QtCore.Qt.NoArrow) - description = self._prepare_description(error_item) - help_text_by_instance_id[None] = description - # Add fake item to have minimum size hint of view widget - items.append(QtGui.QStandardItem("Context")) - continue - - label = error_item.instance_label - item = QtGui.QStandardItem(label) - item.setFlags( - QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable - ) - item.setData(label, QtCore.Qt.ToolTipRole) - item.setData(error_item.instance_id, INSTANCE_ID_ROLE) - items.append(item) - description = self._prepare_description(error_item) - help_text_by_instance_id[error_item.instance_id] = description - - if items: - root_item = instances_model.invisibleRootItem() - root_item.appendRows(items) - - instances_view = ValidationErrorInstanceList(self) - instances_view.setModel(instances_model) - - self.setLayoutDirection(QtCore.Qt.LeftToRight) - - view_widget = QtWidgets.QWidget(self) - view_layout = QtWidgets.QHBoxLayout(view_widget) - 
view_layout.setContentsMargins(0, 0, 0, 0) - view_layout.setSpacing(0) - view_layout.addSpacing(14) - view_layout.addWidget(instances_view, 0) - - layout = QtWidgets.QVBoxLayout(self) - layout.setSpacing(0) - layout.setContentsMargins(0, 0, 0, 0) - layout.addWidget(title_frame, 0) - layout.addWidget(view_widget, 0) - view_widget.setVisible(False) - - if not context_validation: - toggle_instance_btn.clicked.connect(self._on_toggle_btn_click) - - title_frame.clicked.connect(self._mouse_release_callback) - instances_view.selectionModel().selectionChanged.connect( - self._on_seleciton_change - ) - - self._title_frame = title_frame - - self._toggle_instance_btn = toggle_instance_btn - - self._view_widget = view_widget - - self._instances_model = instances_model - self._instances_view = instances_view - - self._context_validation = context_validation - self._help_text_by_instance_id = help_text_by_instance_id - - self._expanded = False - - def sizeHint(self): - result = super(ValidationErrorTitleWidget, self).sizeHint() - expected_width = max( - self._view_widget.minimumSizeHint().width(), - self._view_widget.sizeHint().width() - ) - - if expected_width < 200: - expected_width = 200 - - if result.width() < expected_width: - result.setWidth(expected_width) - - return result - - def minimumSizeHint(self): - return self.sizeHint() - - def _prepare_description(self, error_item): - """Prepare description text for detail intput. - - Args: - error_item (ValidationErrorItem): Item which hold information about - validation error. - - Returns: - str: Prepared detailed description. - """ - - dsc = error_item.description - detail = error_item.detail - if detail: - dsc += "
{}".format(detail) - - description = dsc - if commonmark: - description = commonmark.commonmark(dsc) - return description - - def _mouse_release_callback(self): - """Mark this widget as selected on click.""" - - self.set_selected(True) - - def current_description_text(self): - if self._context_validation: - return self._help_text_by_instance_id[None] - index = self._instances_view.currentIndex() - # TODO make sure instance is selected - if not index.isValid(): - index = self._instances_model.index(0, 0) - - indence_id = index.data(INSTANCE_ID_ROLE) - return self._help_text_by_instance_id[indence_id] - - @property - def is_selected(self): - """Is widget marked a selected. - - Returns: - bool: Item is selected or not. - """ - - return self._selected - - @property - def index(self): - """Widget's index set by parent. - - Returns: - int: Index of widget. - """ - - return self._index - - def set_index(self, index): - """Set index of widget (called by parent). - - Args: - int: New index of widget. - """ - - self._index = index - - def _change_style_property(self, selected): - """Change style of widget based on selection.""" - - value = "1" if selected else "" - self._title_frame.setProperty("selected", value) - self._title_frame.style().polish(self._title_frame) - - def set_selected(self, selected=None): - """Change selected state of widget.""" - - if selected is None: - selected = not self._selected - - # Clear instance view selection on deselect - if not selected: - self._instances_view.clearSelection() - - # Skip if has same value - if selected == self._selected: - return - - self._selected = selected - self._change_style_property(selected) - if selected: - self.selected.emit(self._index) - self._set_expanded(True) - - def _on_toggle_btn_click(self): - """Show/hide instances list.""" - - self._set_expanded() - - def _set_expanded(self, expanded=None): - if expanded is None: - expanded = not self._expanded - - elif expanded is self._expanded: - return - - if expanded and self._context_validation: - return - - self._expanded = expanded - self._view_widget.setVisible(expanded) - if expanded: - self._toggle_instance_btn.setArrowType(QtCore.Qt.DownArrow) - else: - self._toggle_instance_btn.setArrowType(QtCore.Qt.RightArrow) - - def _on_seleciton_change(self): - sel_model = self._instances_view.selectionModel() - if sel_model.selectedIndexes(): - self.instance_changed.emit(self._index) - - -class ActionButton(BaseClickableFrame): - """Plugin's action callback button. - - Action may have label or icon or both. - - Args: - plugin_action_item (PublishPluginActionItem): Action item that can be - triggered by it's id. 
- """ - - action_clicked = QtCore.Signal(str, str) - - def __init__(self, plugin_action_item, parent): - super(ActionButton, self).__init__(parent) - - self.setObjectName("ValidationActionButton") - - self.plugin_action_item = plugin_action_item - - action_label = plugin_action_item.label - action_icon = plugin_action_item.icon - label_widget = QtWidgets.QLabel(action_label, self) - icon_label = None - if action_icon: - icon_label = IconValuePixmapLabel(action_icon, self) - - layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(5, 0, 5, 0) - layout.addWidget(label_widget, 1) - if icon_label: - layout.addWidget(icon_label, 0) - - self.setSizePolicy( - QtWidgets.QSizePolicy.Minimum, - self.sizePolicy().verticalPolicy() - ) - - def _mouse_release_callback(self): - self.action_clicked.emit( - self.plugin_action_item.plugin_id, - self.plugin_action_item.action_id - ) - - -class ValidateActionsWidget(QtWidgets.QFrame): - """Wrapper widget for plugin actions. - - Change actions based on selected validation error. - """ - - def __init__(self, controller, parent): - super(ValidateActionsWidget, self).__init__(parent) - - self.setAttribute(QtCore.Qt.WA_TranslucentBackground) - - content_widget = QtWidgets.QWidget(self) - content_layout = QtWidgets.QVBoxLayout(content_widget) - - layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 0, 0, 0) - layout.addWidget(content_widget) - - self._controller = controller - self._content_widget = content_widget - self._content_layout = content_layout - self._actions_mapping = {} - - def clear(self): - """Remove actions from widget.""" - while self._content_layout.count(): - item = self._content_layout.takeAt(0) - widget = item.widget() - if widget: - widget.setVisible(False) - widget.deleteLater() - self._actions_mapping = {} - - def set_error_item(self, error_item): - """Set selected plugin and show it's actions. - - Clears current actions from widget and recreate them from the plugin. - - Args: - Dict[str, Any]: Object holding error items, title and possible - actions to run. - """ - - self.clear() - - if not error_item: - self.setVisible(False) - return - - plugin_action_items = error_item["plugin_action_items"] - for plugin_action_item in plugin_action_items: - if not plugin_action_item.active: - continue - - if plugin_action_item.on_filter not in ("failed", "all"): - continue - - action_id = plugin_action_item.action_id - self._actions_mapping[action_id] = plugin_action_item - - action_btn = ActionButton(plugin_action_item, self._content_widget) - action_btn.action_clicked.connect(self._on_action_click) - self._content_layout.addWidget(action_btn) - - if self._content_layout.count() > 0: - self.setVisible(True) - self._content_layout.addStretch(1) - else: - self.setVisible(False) - - def _on_action_click(self, plugin_id, action_id): - self._controller.run_action(plugin_id, action_id) - - -class VerticallScrollArea(QtWidgets.QScrollArea): - """Scroll area for validation error titles. - - The biggest difference is that the scroll area has scroll bar on left side - and resize of content will also resize scrollarea itself. - - Resize if deferred by 100ms because at the moment of resize are not yet - propagated sizes and visibility of scroll bars. 
- """ - - def __init__(self, *args, **kwargs): - super(VerticallScrollArea, self).__init__(*args, **kwargs) - - self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) - self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAsNeeded) - self.setLayoutDirection(QtCore.Qt.RightToLeft) - - self.setAttribute(QtCore.Qt.WA_TranslucentBackground) - # Background of scrollbar will be transparent - scrollbar_bg = self.verticalScrollBar().parent() - if scrollbar_bg: - scrollbar_bg.setAttribute(QtCore.Qt.WA_TranslucentBackground) - self.setViewportMargins(0, 0, 0, 0) - - self.verticalScrollBar().installEventFilter(self) - - # Timer with 100ms offset after changing size - size_changed_timer = QtCore.QTimer() - size_changed_timer.setInterval(100) - size_changed_timer.setSingleShot(True) - - size_changed_timer.timeout.connect(self._on_timer_timeout) - self._size_changed_timer = size_changed_timer - - def setVerticalScrollBar(self, widget): - old_widget = self.verticalScrollBar() - if old_widget: - old_widget.removeEventFilter(self) - - super(VerticallScrollArea, self).setVerticalScrollBar(widget) - if widget: - widget.installEventFilter(self) - - def setWidget(self, widget): - old_widget = self.widget() - if old_widget: - old_widget.removeEventFilter(self) - - super(VerticallScrollArea, self).setWidget(widget) - if widget: - widget.installEventFilter(self) - - def _on_timer_timeout(self): - width = self.widget().width() - if self.verticalScrollBar().isVisible(): - width += self.verticalScrollBar().width() - self.setMinimumWidth(width) - - def eventFilter(self, obj, event): - if ( - event.type() == QtCore.QEvent.Resize - and (obj is self.widget() or obj is self.verticalScrollBar()) - ): - self._size_changed_timer.start() - return super(VerticallScrollArea, self).eventFilter(obj, event) - - -class ValidationArtistMessage(QtWidgets.QWidget): - def __init__(self, message, parent): - super(ValidationArtistMessage, self).__init__(parent) - - artist_msg_label = QtWidgets.QLabel(message, self) - artist_msg_label.setAlignment(QtCore.Qt.AlignCenter) - - main_layout = QtWidgets.QHBoxLayout(self) - main_layout.setContentsMargins(0, 0, 0, 0) - main_layout.addWidget( - artist_msg_label, 1, QtCore.Qt.AlignCenter - ) - - -class ValidationsWidget(QtWidgets.QFrame): - """Widgets showing validation error. - - This widget is shown if validation error/s happened during validation part. - - Shows validation error titles with instances on which happened and - validation error detail with possible actions (repair). 
- - ┌──────┬────────────────┬───────┐ - │titles│ │actions│ - │ │ │ │ - │ │ Error detail │ │ - │ │ │ │ - │ │ │ │ - └──────┴────────────────┴───────┘ - """ - - def __init__(self, controller, parent): - super(ValidationsWidget, self).__init__(parent) - - # Before publishing - before_publish_widget = ValidationArtistMessage( - "Nothing to report until you run publish", self - ) - # After success publishing - publish_started_widget = ValidationArtistMessage( - "So far so good", self - ) - # After success publishing - publish_stop_ok_widget = ValidationArtistMessage( - "Publishing finished successfully", self - ) - # After failed publishing (not with validation error) - publish_stop_fail_widget = ValidationArtistMessage( - "This is not your fault...", self - ) - - # Validation errors - validations_widget = QtWidgets.QWidget(self) - - content_widget = QtWidgets.QWidget(validations_widget) - - errors_scroll = VerticallScrollArea(content_widget) - errors_scroll.setWidgetResizable(True) - - errors_widget = QtWidgets.QWidget(errors_scroll) - errors_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground) - errors_layout = QtWidgets.QVBoxLayout(errors_widget) - errors_layout.setContentsMargins(0, 0, 0, 0) - - errors_scroll.setWidget(errors_widget) - - error_details_frame = QtWidgets.QFrame(content_widget) - error_details_input = QtWidgets.QTextEdit(error_details_frame) - error_details_input.setObjectName("InfoText") - error_details_input.setTextInteractionFlags( - QtCore.Qt.TextBrowserInteraction - ) - - actions_widget = ValidateActionsWidget(controller, content_widget) - actions_widget.setMinimumWidth(140) - - error_details_layout = QtWidgets.QHBoxLayout(error_details_frame) - error_details_layout.addWidget(error_details_input, 1) - error_details_layout.addWidget(actions_widget, 0) - - content_layout = QtWidgets.QHBoxLayout(content_widget) - content_layout.setSpacing(0) - content_layout.setContentsMargins(0, 0, 0, 0) - - content_layout.addWidget(errors_scroll, 0) - content_layout.addWidget(error_details_frame, 1) - - top_label = QtWidgets.QLabel( - "Publish validation report", content_widget - ) - top_label.setObjectName("PublishInfoMainLabel") - top_label.setAlignment(QtCore.Qt.AlignCenter) - - validation_layout = QtWidgets.QVBoxLayout(validations_widget) - validation_layout.setContentsMargins(0, 0, 0, 0) - validation_layout.addWidget(top_label, 0) - validation_layout.addWidget(content_widget, 1) - - main_layout = QtWidgets.QStackedLayout(self) - main_layout.addWidget(before_publish_widget) - main_layout.addWidget(publish_started_widget) - main_layout.addWidget(publish_stop_ok_widget) - main_layout.addWidget(publish_stop_fail_widget) - main_layout.addWidget(validations_widget) - - main_layout.setCurrentWidget(before_publish_widget) - - controller.event_system.add_callback( - "publish.process.started", self._on_publish_start - ) - controller.event_system.add_callback( - "publish.reset.finished", self._on_publish_reset - ) - controller.event_system.add_callback( - "publish.process.stopped", self._on_publish_stop - ) - - self._main_layout = main_layout - - self._before_publish_widget = before_publish_widget - self._publish_started_widget = publish_started_widget - self._publish_stop_ok_widget = publish_stop_ok_widget - self._publish_stop_fail_widget = publish_stop_fail_widget - self._validations_widget = validations_widget - - self._top_label = top_label - self._errors_widget = errors_widget - self._errors_layout = errors_layout - self._error_details_frame = error_details_frame - 
self._error_details_input = error_details_input - self._actions_widget = actions_widget - - self._title_widgets = {} - self._error_info = {} - self._previous_select = None - - self._controller = controller - - def clear(self): - """Delete all dynamic widgets and hide all wrappers.""" - self._title_widgets = {} - self._error_info = {} - self._previous_select = None - while self._errors_layout.count(): - item = self._errors_layout.takeAt(0) - widget = item.widget() - if widget: - widget.deleteLater() - - self._top_label.setVisible(False) - self._error_details_frame.setVisible(False) - self._errors_widget.setVisible(False) - self._actions_widget.setVisible(False) - - def _set_errors(self, validation_error_report): - """Set errors into context and created titles. - - Args: - validation_error_report (PublishValidationErrorsReport): Report - with information about validation errors and publish plugin - actions. - """ - - self.clear() - if not validation_error_report: - return - - self._top_label.setVisible(True) - self._error_details_frame.setVisible(True) - self._errors_widget.setVisible(True) - - grouped_error_items = validation_error_report.group_items_by_title() - for idx, error_info in enumerate(grouped_error_items): - widget = ValidationErrorTitleWidget(idx, error_info, self) - widget.selected.connect(self._on_select) - widget.instance_changed.connect(self._on_instance_change) - self._errors_layout.addWidget(widget) - self._title_widgets[idx] = widget - self._error_info[idx] = error_info - - self._errors_layout.addStretch(1) - - if self._title_widgets: - self._title_widgets[0].set_selected(True) - - self.updateGeometry() - - def _set_current_widget(self, widget): - self._main_layout.setCurrentWidget(widget) - - def _on_publish_start(self): - self._set_current_widget(self._publish_started_widget) - - def _on_publish_reset(self): - self._set_current_widget(self._before_publish_widget) - - def _on_publish_stop(self): - if self._controller.publish_has_crashed: - self._set_current_widget(self._publish_stop_fail_widget) - return - - if self._controller.publish_has_validation_errors: - validation_errors = self._controller.get_validation_errors() - self._set_current_widget(self._validations_widget) - self._set_errors(validation_errors) - return - - if self._controller.publish_has_finished: - self._set_current_widget(self._publish_stop_ok_widget) - return - - self._set_current_widget(self._publish_started_widget) - - def _on_select(self, index): - if self._previous_select: - if self._previous_select.index == index: - return - self._previous_select.set_selected(False) - - self._previous_select = self._title_widgets[index] - - error_item = self._error_info[index] - - self._actions_widget.set_error_item(error_item) - - self._update_description() - - def _on_instance_change(self, index): - if self._previous_select and self._previous_select.index != index: - self._title_widgets[index].set_selected(True) - else: - self._update_description() - - def _update_description(self): - description = self._previous_select.current_description_text() - if commonmark: - html = commonmark.commonmark(description) - self._error_details_input.setHtml(html) - elif hasattr(self._error_details_input, "setMarkdown"): - self._error_details_input.setMarkdown(description) - else: - self._error_details_input.setText(description) diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py index cd1f1f5a96..0b13f26d57 100644 --- a/openpype/tools/publisher/widgets/widgets.py +++ 
b/openpype/tools/publisher/widgets/widgets.py @@ -40,6 +40,41 @@ from ..constants import ( INPUTS_LAYOUT_VSPACING, ) +FA_PREFIXES = ["", "fa.", "fa5.", "fa5b.", "fa5s.", "ei.", "mdi."] + + +def parse_icon_def( + icon_def, default_width=None, default_height=None, color=None +): + if not icon_def: + return None + + if isinstance(icon_def, QtGui.QPixmap): + return icon_def + + color = color or "white" + default_width = default_width or 512 + default_height = default_height or 512 + + if isinstance(icon_def, QtGui.QIcon): + return icon_def.pixmap(default_width, default_height) + + try: + if os.path.exists(icon_def): + return QtGui.QPixmap(icon_def) + except Exception: + # TODO logging + pass + + for prefix in FA_PREFIXES: + try: + icon_name = "{}{}".format(prefix, icon_def) + icon = qtawesome.icon(icon_name, color=color) + return icon.pixmap(default_width, default_height) + except Exception: + # TODO logging + continue + class PublishPixmapLabel(PixmapLabel): def _get_pix_size(self): @@ -54,7 +89,6 @@ class IconValuePixmapLabel(PublishPixmapLabel): Handle icon parsing from creators/instances. Using of QAwesome module of path to images. """ - fa_prefixes = ["", "fa."] default_size = 200 def __init__(self, icon_def, parent): @@ -77,31 +111,9 @@ class IconValuePixmapLabel(PublishPixmapLabel): return pix def _parse_icon_def(self, icon_def): - if not icon_def: - return self._default_pixmap() - - if isinstance(icon_def, QtGui.QPixmap): - return icon_def - - if isinstance(icon_def, QtGui.QIcon): - return icon_def.pixmap(self.default_size, self.default_size) - - try: - if os.path.exists(icon_def): - return QtGui.QPixmap(icon_def) - except Exception: - # TODO logging - pass - - for prefix in self.fa_prefixes: - try: - icon_name = "{}{}".format(prefix, icon_def) - icon = qtawesome.icon(icon_name, color="white") - return icon.pixmap(self.default_size, self.default_size) - except Exception: - # TODO logging - continue - + icon = parse_icon_def(icon_def, self.default_size, self.default_size) + if icon: + return icon return self._default_pixmap() @@ -692,6 +704,7 @@ class TasksCombobox(QtWidgets.QComboBox): style.drawControl( QtWidgets.QStyle.CE_ComboBoxLabel, opt, painter, self ) + painter.end() def is_valid(self): """Are all selected items valid.""" diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index b3471163ae..6ab444109e 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -1,3 +1,6 @@ +import os +import json +import time import collections import copy from qtpy import QtWidgets, QtCore, QtGui @@ -15,10 +18,11 @@ from openpype.tools.utils import ( from .constants import ResetKeySequence from .publish_report_viewer import PublishReportViewerWidget +from .control import CardMessageTypes from .control_qt import QtPublisherController from .widgets import ( OverviewWidget, - ValidationsWidget, + ReportPageWidget, PublishFrame, PublisherTabsWidget, @@ -182,7 +186,7 @@ class PublisherWindow(QtWidgets.QDialog): controller, content_stacked_widget ) - report_widget = ValidationsWidget(controller, parent) + report_widget = ReportPageWidget(controller, parent) # Details - Publish details publish_details_widget = PublishReportViewerWidget( @@ -313,6 +317,13 @@ class PublisherWindow(QtWidgets.QDialog): controller.event_system.add_callback( "convertors.find.failed", self._on_convertor_error ) + controller.event_system.add_callback( + "export_report.request", self._export_report + ) + controller.event_system.add_callback( + 
"copy_report.request", self._copy_report + ) + # Store extra header widget for TrayPublisher # - can be used to add additional widgets to header between context @@ -665,7 +676,15 @@ class PublisherWindow(QtWidgets.QDialog): self._tabs_widget.set_current_tab(identifier) def set_current_tab(self, tab): - self._set_current_tab(tab) + if tab == "create": + self._go_to_create_tab() + elif tab == "publish": + self._go_to_publish_tab() + elif tab == "report": + self._go_to_report_tab() + elif tab == "details": + self._go_to_details_tab() + if not self._window_is_visible: self.set_tab_on_reset(tab) @@ -675,6 +694,12 @@ class PublisherWindow(QtWidgets.QDialog): def _go_to_create_tab(self): if self._create_tab.isEnabled(): self._set_current_tab("create") + return + + self._overlay_object.add_message( + "Can't switch to Create tab because publishing is paused.", + message_type="info" + ) def _go_to_publish_tab(self): self._set_current_tab("publish") @@ -825,6 +850,9 @@ class PublisherWindow(QtWidgets.QDialog): self._validate_btn.setEnabled(validate_enabled) self._publish_btn.setEnabled(publish_enabled) + if not publish_enabled: + self._publish_frame.set_shrunk_state(True) + self._update_publish_details_widget() def _validate_create_instances(self): @@ -941,6 +969,46 @@ class PublisherWindow(QtWidgets.QDialog): under_mouse = widget_x < global_pos.x() self._create_overlay_button.set_under_mouse(under_mouse) + def _copy_report(self): + logs = self._controller.get_publish_report() + logs_string = json.dumps(logs, indent=4) + + mime_data = QtCore.QMimeData() + mime_data.setText(logs_string) + QtWidgets.QApplication.instance().clipboard().setMimeData( + mime_data + ) + self._controller.emit_card_message( + "Report added to clipboard", + CardMessageTypes.info) + + def _export_report(self): + default_filename = "publish-report-{}".format( + time.strftime("%y%m%d-%H-%M") + ) + default_filepath = os.path.join( + os.path.expanduser("~"), + default_filename + ) + new_filepath, ext = QtWidgets.QFileDialog.getSaveFileName( + self, "Save report", default_filepath, ".json" + ) + if not ext or not new_filepath: + return + + logs = self._controller.get_publish_report() + full_path = new_filepath + ext + dir_path = os.path.dirname(full_path) + if not os.path.exists(dir_path): + os.makedirs(dir_path) + + with open(full_path, "w") as file_stream: + json.dump(logs, file_stream) + + self._controller.emit_card_message( + "Report saved", + CardMessageTypes.info) + class ErrorsMessageBox(ErrorMessageBox): def __init__(self, error_title, failed_info, message_start, parent): diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py index 3279be6094..73d33392b9 100644 --- a/openpype/tools/sceneinventory/view.py +++ b/openpype/tools/sceneinventory/view.py @@ -791,7 +791,7 @@ class SceneInventoryView(QtWidgets.QTreeView): else: version_str = version - dialog = QtWidgets.QMessageBox() + dialog = QtWidgets.QMessageBox(self) dialog.setIcon(QtWidgets.QMessageBox.Warning) dialog.setStyleSheet(style.load_stylesheet()) dialog.setWindowTitle("Update failed") diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py index 4149763f80..10bd527692 100644 --- a/openpype/tools/utils/__init__.py +++ b/openpype/tools/utils/__init__.py @@ -1,13 +1,16 @@ +from .layouts import FlowLayout from .widgets import ( FocusSpinBox, FocusDoubleSpinBox, ComboBox, CustomTextComboBox, PlaceholderLineEdit, + ExpandingTextEdit, BaseClickableFrame, ClickableFrame, ClickableLabel, ExpandBtn, + 
ClassicExpandBtn, PixmapLabel, IconButton, PixmapButton, @@ -37,15 +40,19 @@ from .overlay_messages import ( __all__ = ( + "FlowLayout", + "FocusSpinBox", "FocusDoubleSpinBox", "ComboBox", "CustomTextComboBox", "PlaceholderLineEdit", + "ExpandingTextEdit", "BaseClickableFrame", "ClickableFrame", "ClickableLabel", "ExpandBtn", + "ClassicExpandBtn", "PixmapLabel", "IconButton", "PixmapButton", diff --git a/openpype/tools/utils/layouts.py b/openpype/tools/utils/layouts.py new file mode 100644 index 0000000000..65ea087c27 --- /dev/null +++ b/openpype/tools/utils/layouts.py @@ -0,0 +1,150 @@ +from qtpy import QtWidgets, QtCore + + +class FlowLayout(QtWidgets.QLayout): + """Layout that organize widgets by minimum size into a flow layout. + + Layout is putting widget from left to right and top to bottom. When widget + can't fit a row it is added to next line. Minimum size matches widget with + biggest 'sizeHint' width and height using calculated geometry. + + Content margins are part of calculations. It is possible to define + horizontal and vertical spacing. + + Layout does not support stretch and spacing items. + + Todos: + Unified width concept -> use width of largest item so all of them are + same. This could allow to have minimum columns option too. + """ + + def __init__(self, parent=None): + super(FlowLayout, self).__init__(parent) + + # spaces between each item + self._horizontal_spacing = 5 + self._vertical_spacing = 5 + + self._items = [] + + def __del__(self): + while self.count(): + self.takeAt(0, False) + + def isEmpty(self): + for item in self._items: + if not item.isEmpty(): + return False + return True + + def setSpacing(self, spacing): + self._horizontal_spacing = spacing + self._vertical_spacing = spacing + self.invalidate() + + def setHorizontalSpacing(self, spacing): + self._horizontal_spacing = spacing + self.invalidate() + + def setVerticalSpacing(self, spacing): + self._vertical_spacing = spacing + self.invalidate() + + def addItem(self, item): + self._items.append(item) + self.invalidate() + + def count(self): + return len(self._items) + + def itemAt(self, index): + if 0 <= index < len(self._items): + return self._items[index] + return None + + def takeAt(self, index, invalidate=True): + if 0 <= index < len(self._items): + item = self._items.pop(index) + if invalidate: + self.invalidate() + return item + return None + + def expandingDirections(self): + return QtCore.Qt.Orientations(QtCore.Qt.Vertical) + + def hasHeightForWidth(self): + return True + + def heightForWidth(self, width): + return self._setup_geometry(QtCore.QRect(0, 0, width, 0), True) + + def setGeometry(self, rect): + super(FlowLayout, self).setGeometry(rect) + self._setup_geometry(rect) + + def sizeHint(self): + return self.minimumSize() + + def minimumSize(self): + size = QtCore.QSize(0, 0) + for item in self._items: + widget = item.widget() + if widget is not None: + parent = widget.parent() + if not widget.isVisibleTo(parent): + continue + size = size.expandedTo(item.minimumSize()) + + if size.width() < 1 or size.height() < 1: + return size + l_margin, t_margin, r_margin, b_margin = self.getContentsMargins() + size += QtCore.QSize(l_margin + r_margin, t_margin + b_margin) + return size + + def _setup_geometry(self, rect, only_calculate=False): + h_spacing = self._horizontal_spacing + v_spacing = self._vertical_spacing + l_margin, t_margin, r_margin, b_margin = self.getContentsMargins() + + left_x = rect.x() + l_margin + top_y = rect.y() + t_margin + pos_x = left_x + pos_y = top_y + row_height = 0 + for 
item in self._items: + item_hint = item.sizeHint() + item_width = item_hint.width() + item_height = item_hint.height() + if item_width < 1 or item_height < 1: + continue + + end_x = pos_x + item_width + + wrap = ( + row_height > 0 + and ( + end_x > rect.right() + or (end_x + r_margin) > rect.right() + ) + ) + if not wrap: + next_pos_x = end_x + h_spacing + else: + pos_x = left_x + pos_y += row_height + v_spacing + next_pos_x = pos_x + item_width + h_spacing + row_height = 0 + + if not only_calculate: + item.setGeometry( + QtCore.QRect(pos_x, pos_y, item_width, item_height) + ) + + pos_x = next_pos_x + row_height = max(row_height, item_height) + + height = (pos_y - top_y) + row_height + if height > 0: + height += b_margin + return height diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py index 950c782727..58ece7c68f 100644 --- a/openpype/tools/utils/lib.py +++ b/openpype/tools/utils/lib.py @@ -872,7 +872,6 @@ class WrappedCallbackItem: self.log.warning("- item is already processed") return - self.log.debug("Running callback: {}".format(str(self._callback))) try: result = self._callback(*self._args, **self._kwargs) self._result = result diff --git a/openpype/tools/utils/overlay_messages.py b/openpype/tools/utils/overlay_messages.py index 180d7eae97..4da266bcf7 100644 --- a/openpype/tools/utils/overlay_messages.py +++ b/openpype/tools/utils/overlay_messages.py @@ -127,8 +127,7 @@ class OverlayMessageWidget(QtWidgets.QFrame): if timeout: self._timeout_timer.setInterval(timeout) - if message_type: - set_style_property(self, "type", message_type) + set_style_property(self, "type", message_type) self._timeout_timer.start() diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index bae89aeb09..5a8104611b 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -101,6 +101,46 @@ class PlaceholderLineEdit(QtWidgets.QLineEdit): self.setPalette(filter_palette) +class ExpandingTextEdit(QtWidgets.QTextEdit): + """QTextEdit which does not have sroll area but expands height.""" + + def __init__(self, parent=None): + super(ExpandingTextEdit, self).__init__(parent) + + size_policy = self.sizePolicy() + size_policy.setHeightForWidth(True) + size_policy.setVerticalPolicy(QtWidgets.QSizePolicy.Preferred) + self.setSizePolicy(size_policy) + + self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) + + doc = self.document() + doc.contentsChanged.connect(self._on_doc_change) + + def _on_doc_change(self): + self.updateGeometry() + + def hasHeightForWidth(self): + return True + + def heightForWidth(self, width): + margins = self.contentsMargins() + + document_width = 0 + if width >= margins.left() + margins.right(): + document_width = width - margins.left() - margins.right() + + document = self.document().clone() + document.setTextWidth(document_width) + + return margins.top() + document.size().height() + margins.bottom() + + def sizeHint(self): + width = super(ExpandingTextEdit, self).sizeHint().width() + return QtCore.QSize(width, self.heightForWidth(width)) + + class BaseClickableFrame(QtWidgets.QFrame): """Widget that catch left mouse click and can trigger a callback. 
@@ -161,19 +201,34 @@ class ClickableLabel(QtWidgets.QLabel): class ExpandBtnLabel(QtWidgets.QLabel): """Label showing expand icon meant for ExpandBtn.""" + state_changed = QtCore.Signal() + + def __init__(self, parent): super(ExpandBtnLabel, self).__init__(parent) - self._source_collapsed_pix = QtGui.QPixmap( - get_style_image_path("branch_closed") - ) - self._source_expanded_pix = QtGui.QPixmap( - get_style_image_path("branch_open") - ) + self._source_collapsed_pix = self._create_collapsed_pixmap() + self._source_expanded_pix = self._create_expanded_pixmap() self._current_image = self._source_collapsed_pix self._collapsed = True - def set_collapsed(self, collapsed): + def _create_collapsed_pixmap(self): + return QtGui.QPixmap( + get_style_image_path("branch_closed") + ) + + def _create_expanded_pixmap(self): + return QtGui.QPixmap( + get_style_image_path("branch_open") + ) + + @property + def collapsed(self): + return self._collapsed + + def set_collapsed(self, collapsed=None): + if collapsed is None: + collapsed = not self._collapsed if self._collapsed == collapsed: return self._collapsed = collapsed @@ -182,6 +237,7 @@ class ExpandBtnLabel(QtWidgets.QLabel): else: self._current_image = self._source_expanded_pix self._set_resized_pix() + self.state_changed.emit() def resizeEvent(self, event): self._set_resized_pix() @@ -203,21 +259,55 @@ class ExpandBtnLabel(QtWidgets.QLabel): class ExpandBtn(ClickableFrame): + state_changed = QtCore.Signal() + def __init__(self, parent=None): super(ExpandBtn, self).__init__(parent) - pixmap_label = ExpandBtnLabel(self) + pixmap_label = self._create_pix_widget(self) layout = QtWidgets.QHBoxLayout(self) layout.setContentsMargins(0, 0, 0, 0) layout.addWidget(pixmap_label) + pixmap_label.state_changed.connect(self.state_changed) + self._pixmap_label = pixmap_label - def set_collapsed(self, collapsed): + def _create_pix_widget(self, parent=None): + if parent is None: + parent = self + return ExpandBtnLabel(parent) + + @property + def collapsed(self): + return self._pixmap_label.collapsed + + def set_collapsed(self, collapsed=None): self._pixmap_label.set_collapsed(collapsed) +class ClassicExpandBtnLabel(ExpandBtnLabel): + def _create_collapsed_pixmap(self): + return QtGui.QPixmap( + get_style_image_path("right_arrow") + ) + + def _create_expanded_pixmap(self): + return QtGui.QPixmap( + get_style_image_path("down_arrow") + ) + + +class ClassicExpandBtn(ExpandBtn): + """Same as 'ExpandBtn' but with arrow images.""" + + def _create_pix_widget(self, parent=None): + if parent is None: + parent = self + return ClassicExpandBtnLabel(parent) + + class ImageButton(QtWidgets.QPushButton): """PushButton with icon and size of font. 
diff --git a/openpype/vendor/python/common/scriptsmenu/scriptsmenu.py b/openpype/vendor/python/common/scriptsmenu/scriptsmenu.py index 6f6d0b5715..8ab621f757 100644 --- a/openpype/vendor/python/common/scriptsmenu/scriptsmenu.py +++ b/openpype/vendor/python/common/scriptsmenu/scriptsmenu.py @@ -19,9 +19,9 @@ class ScriptsMenu(QtWidgets.QMenu): Args: title (str): the name of the root menu which will be created - + parent (QtWidgets.QObject) : the QObject to parent the menu to - + Returns: None @@ -94,7 +94,7 @@ class ScriptsMenu(QtWidgets.QMenu): parent(QtWidgets.QWidget): the object to parent the menu to title(str): the title of the menu - + Returns: QtWidget.QMenu instance """ @@ -111,7 +111,7 @@ class ScriptsMenu(QtWidgets.QMenu): return menu def add_script(self, parent, title, command, sourcetype, icon=None, - tags=None, label=None, tooltip=None): + tags=None, label=None, tooltip=None, shortcut=None): """Create an action item which runs a script when clicked Args: @@ -134,6 +134,8 @@ class ScriptsMenu(QtWidgets.QMenu): tooltip (str): A tip for the user about the usage fo the tool + shortcut (str): A shortcut to run the command + Returns: QtWidget.QAction instance @@ -166,6 +168,9 @@ class ScriptsMenu(QtWidgets.QMenu): raise RuntimeError("Script action can't be " "processed: {}".format(e)) + if shortcut: + script_action.setShortcut(shortcut) + if icon: iconfile = os.path.expandvars(icon) script_action.iconfile = iconfile @@ -253,7 +258,7 @@ class ScriptsMenu(QtWidgets.QMenu): def _update_search(self, search): """Hide all the samples which do not match the user's import - + Returns: None diff --git a/openpype/version.py b/openpype/version.py index 72297a4430..c24388b2ff 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.6-nightly.2" +__version__ = "3.15.9-nightly.1" diff --git a/pyproject.toml b/pyproject.toml index 2f40d58f56..a72a3d66d7 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.15.5" # OpenPype +version = "3.15.8" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/tests/README.md b/tests/README.md index d36b6534f8..20847b2449 100644 --- a/tests/README.md +++ b/tests/README.md @@ -15,16 +15,16 @@ Structure: - openpype/modules/MODULE_NAME - structure follow directory structure in code base - fixture - sample data `(MongoDB dumps, test files etc.)` - `tests.py` - single or more pytest files for MODULE_NAME -- unit - quick unit test - - MODULE_NAME +- unit - quick unit test + - MODULE_NAME - fixture - `tests.py` - + How to run: ---------- - use Openpype command 'runtests' from command line (`.venv` in ${OPENPYPE_ROOT} must be activated to use configured Python!) -- `python ${OPENPYPE_ROOT}/start.py runtests` - + By default, this command will run all tests in ${OPENPYPE_ROOT}/tests. Specific location could be provided to this command as an argument, either as absolute path, or relative path to ${OPENPYPE_ROOT}. 
@@ -41,17 +41,15 @@ In some cases your tests might be so localized, that you don't care about all en In that case you might add this dummy configuration BEFORE any imports in your test file ``` import os -os.environ["AVALON_MONGO"] = "mongodb://localhost:27017" +os.environ["OPENPYPE_DEBUG"] = "1" os.environ["OPENPYPE_MONGO"] = "mongodb://localhost:27017" -os.environ["AVALON_DB"] = "avalon" os.environ["OPENPYPE_DATABASE_NAME"] = "openpype" -os.environ["AVALON_TIMEOUT"] = '3000' -os.environ["OPENPYPE_DEBUG"] = "3" -os.environ["AVALON_CONFIG"] = "pype" +os.environ["AVALON_DB"] = "avalon" +os.environ["AVALON_TIMEOUT"] = "3000" os.environ["AVALON_ASSET"] = "Asset" os.environ["AVALON_PROJECT"] = "test_project" ``` (AVALON_ASSET and AVALON_PROJECT values should exist in your environment) This might be enough to run your test file separately. Do not commit this skeleton though. -Use only when you know what you are doing! \ No newline at end of file +Use only when you know what you are doing! diff --git a/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py b/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py index d372efcb9a..0e9cd3b00d 100644 --- a/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py +++ b/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py @@ -9,6 +9,9 @@ log = logging.getLogger("test_publish_in_aftereffects") class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestClass): # noqa """est case for DL publishing in AfterEffects with multiple compositions. + Workfile contains 2 prepared `render` instances. First has review set, + second doesn't. + Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. @@ -68,7 +71,7 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla name="renderTest_taskMain2")) failures.append( - DBAssert.count_of_types(dbcon, "representation", 7)) + DBAssert.count_of_types(dbcon, "representation", 5)) additional_args = {"context.subset": "workfileTest_task", "context.ext": "aep"} @@ -105,13 +108,13 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla additional_args = {"context.subset": "renderTest_taskMain2", "name": "thumbnail"} failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, + DBAssert.count_of_types(dbcon, "representation", 0, additional_args=additional_args)) additional_args = {"context.subset": "renderTest_taskMain2", "name": "png_exr"} failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, + DBAssert.count_of_types(dbcon, "representation", 0, additional_args=additional_args)) assert not any(failures) diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py new file mode 100644 index 0000000000..1594b36dec --- /dev/null +++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py @@ -0,0 +1,93 @@ +import logging + +from tests.lib.assert_classes import DBAssert +from tests.integration.hosts.photoshop.lib import PhotoshopTestClass + +log = logging.getLogger("test_publish_in_photoshop") + + +class TestPublishInPhotoshopAutoImage(PhotoshopTestClass): + """Test for publish in Phohoshop with different review configuration. + + Workfile contains 3 layers, auto image and review instances created. 
+ + Test contains updates to Settings!!! + + """ + PERSIST = True + + TEST_FILES = [ + ("1iLF6aNI31qlUCD1rGg9X9eMieZzxL-rc", + "test_photoshop_publish_auto_image.zip", "") + ] + + APP_GROUP = "photoshop" + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" + + APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT) + + TIMEOUT = 120 # publish timeout + + def test_db_asserts(self, dbcon, publish_finished): + """Host and input data dependent expected results in DB.""" + print("test_db_asserts") + failures = [] + + failures.append(DBAssert.count_of_types(dbcon, "version", 3)) + + failures.append( + DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 0, + name="imageMainForeground")) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 0, + name="imageMainBackground")) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="workfileTest_task")) + + failures.append( + DBAssert.count_of_types(dbcon, "representation", 5)) + + additional_args = {"context.subset": "imageMainForeground", + "context.ext": "png"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 0, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainBackground", + "context.ext": "png"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 0, + additional_args=additional_args)) + + # review from image + additional_args = {"context.subset": "imageBeautyMain", + "context.ext": "jpg", + "name": "jpg_jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageBeautyMain", + "context.ext": "jpg", + "name": "jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "review"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + assert not any(failures) + + +if __name__ == "__main__": + test_case = TestPublishInPhotoshopAutoImage() diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py new file mode 100644 index 0000000000..64b6868d7c --- /dev/null +++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py @@ -0,0 +1,111 @@ +import logging + +from tests.lib.assert_classes import DBAssert +from tests.integration.hosts.photoshop.lib import PhotoshopTestClass + +log = logging.getLogger("test_publish_in_photoshop") + + +class TestPublishInPhotoshopImageReviews(PhotoshopTestClass): + """Test for publish in Phohoshop with different review configuration. + + Workfile contains 2 image instance, one has review flag, second doesn't. + + Regular `review` family is disabled. + + Expected result is to `imageMainForeground` to have additional file with + review, `imageMainBackground` without. No separate `review` family. + + `test_project_test_asset_imageMainForeground_v001_jpg.jpg` is expected name + of imageForeground review, `_jpg` suffix is needed to differentiate between + image and review file. 
+ + """ + PERSIST = True + + TEST_FILES = [ + ("12WGbNy9RJ3m9jlnk0Ib9-IZmONoxIz_p", + "test_photoshop_publish_review.zip", "") + ] + + APP_GROUP = "photoshop" + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" + + APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT) + + TIMEOUT = 120 # publish timeout + + def test_db_asserts(self, dbcon, publish_finished): + """Host and input data dependent expected results in DB.""" + print("test_db_asserts") + failures = [] + + failures.append(DBAssert.count_of_types(dbcon, "version", 3)) + + failures.append( + DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="imageMainForeground")) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="imageMainBackground")) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="workfileTest_task")) + + failures.append( + DBAssert.count_of_types(dbcon, "representation", 6)) + + additional_args = {"context.subset": "imageMainForeground", + "context.ext": "png"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainForeground", + "context.ext": "jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 2, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainForeground", + "context.ext": "jpg", + "context.representation": "jpg_jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainBackground", + "context.ext": "png"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainBackground", + "context.ext": "jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "imageMainBackground", + "context.ext": "jpg", + "context.representation": "jpg_jpg"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 0, + additional_args=additional_args)) + + additional_args = {"context.subset": "review"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 0, + additional_args=additional_args)) + + assert not any(failures) + + +if __name__ == "__main__": + test_case = TestPublishInPhotoshopImageReviews() diff --git a/tests/unit/openpype/plugins/publish/test_validate_sequence_frames.py b/tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py similarity index 100% rename from tests/unit/openpype/plugins/publish/test_validate_sequence_frames.py rename to tests/unit/openpype/hosts/unreal/plugins/publish/test_validate_sequence_frames.py diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md index f0b8710246..700822843f 100644 --- a/website/docs/admin_hosts_maya.md +++ b/website/docs/admin_hosts_maya.md @@ -247,15 +247,24 @@ Fill in the necessary fields (the optional fields are regex filters) ![new place holder](assets/maya-placeholder_new.png) - - Builder type: Whether the the placeholder should load current asset representations or linked assets representations + - ***Builder type***: Whether the the placeholder should load current asset representations or linked assets representations - - Representation: Representation that will be loaded (ex: ma, 
abc, png, etc...) + - ***Representation***: Representation that will be loaded (ex: ma, abc, png, etc...) - - Family: Family of the representation to load (main, look, image, etc ...) + - ***Family***: Family of the representation to load (main, look, image, etc ...) - - Loader: Placeholder loader name that will be used to load corresponding representations + - ***Loader***: Placeholder loader name that will be used to load corresponding representations + + - ***Order***: Priority for current placeholder loader (priority is lowest first, highest last) + + - ***Loader arguments***: Loader arguments dictionary can be used to pass optional data to loaders. + One use case is to define a custom Subset name for the animation instances created while loading Rig references.This follows the custom namespace system used by loaders. + + **Example** + ``` + {"animationSubsetName": "{asset_name}_animation_{subset}_##_"} + ``` - - Order: Priority for current placeholder loader (priority is lowest first, highet last) - **Save your template** diff --git a/website/docs/admin_hosts_photoshop.md b/website/docs/admin_hosts_photoshop.md new file mode 100644 index 0000000000..de684f01d2 --- /dev/null +++ b/website/docs/admin_hosts_photoshop.md @@ -0,0 +1,127 @@ +--- +id: admin_hosts_photoshop +title: Photoshop Settings +sidebar_label: Photoshop +--- + +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +## Photoshop settings + +There is a couple of settings that could configure publishing process for **Photoshop**. +All of them are Project based, eg. each project could have different configuration. + +Location: Settings > Project > Photoshop + +![AfterEffects Project Settings](assets/admin_hosts_photoshop_settings.png) + +## Color Management (ImageIO) + +Placeholder for Color Management. Currently not implemented yet. + +## Creator plugins + +Contains configurable items for creators used during publishing from Photoshop. + +### Create Image + +Provides list of [variants](artist_concepts.md#variant) that will be shown to an artist in Publisher. Default value `Main`. + +### Create Flatten Image + +Provides simplified publishing process. It will create single `image` instance for artist automatically. This instance will +produce flatten image from all visible layers in a workfile. + +- Subset template for flatten image - provide template for subset name for this instance (example `imageBeauty`) +- Review - should be separate review created for this instance + +### Create Review + +Creates single `review` instance automatically. This allows artists to disable it if needed. + +### Create Workfile + +Creates single `workfile` instance automatically. This allows artists to disable it if needed. + +## Publish plugins + +Contains configurable items for publish plugins used during publishing from Photoshop. + +### Collect Color Coded Instances + +Used only in remote publishing! + +Allows to create automatically `image` instances for configurable highlight color set on layer or group in the workfile. 
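As a rough illustration of the subset template option described below, the template is resolved with plain Python string formatting; treat `{layer}` as the only key assumed here, since the exact keys depend on the collector implementation:

```python
# Illustration only: '{layer}' is substituted with the layer/group name.
template = "image{layer}Main"
print(template.format(layer="Foreground"))  # -> "imageForegroundMain"
```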
+ +#### Create flatten image + - Flatten with images - produce additional `image` with all published `image` instances merged + - Flatten only - produce only merged `image` instance + - No - produce only separate `image` instances + +#### Subset template for flatten image + +Template used to create subset name automatically (example `image{layer}Main` - uses layer name in subset name) + +### Collect Review + +Disable if no review should be created + +### Collect Version + +If enabled it will push version from workfile name to all published items. Eg. if artist is publishing `test_asset_workfile_v005.psd` +produced `image` and `review` files will contain `v005` (even if some previous version were skipped for particular family). + +### Validate Containers + +Checks if all imported assets to the workfile through `Loader` are in latest version. Limits cases that older version of asset would be used. + +If enabled, artist might still decide to disable validation for each publish (for special use cases). +Limit this optionality by toggling `Optional`. +`Active` toggle denotes that by default artists sees that optional validation as enabled. + +### Validate naming of subsets and layers + +Subset cannot contain invalid characters or extract to file would fail + +#### Regex pattern of invalid characters + +Contains weird characters like `/`, `/`, these might cause an issue when file (which contains subset name) is created on OS disk. + +#### Replacement character + +Replace all offending characters with this one. `_` is default. + +### Extract Image + +Controls extension formats of published instances of `image` family. `png` and `jpg` are by default. + +### Extract Review + +Controls output definitions of extracted reviews to upload on Asset Management (AM). + +#### Makes an image sequence instead of flatten image + +If multiple `image` instances are produced, glue created images into image sequence (`mov`) to review all of them separetely. +Without it only flatten image would be produced. + +#### Maximum size of sources for review + +Set Byte limit for review file. Applicable if gigantic `image` instances are produced, full image size is unnecessary to upload to AM. + +#### Extract jpg Options + +Handles tags for produced `.jpg` representation. `Create review` and `Add review to Ftrack` are defaults. + +#### Extract mov Options + +Handles tags for produced `.mov` representation. `Create review` and `Add review to Ftrack` are defaults. + + +### Workfile Builder + +Allows to open prepared workfile for an artist when no workfile exists. Useful to share standards, additional helpful content in the workfile. + +Could be configured per `Task type`, eg. `composition` task type could use different `.psd` template file than `art` task. +Workfile template must be accessible for all artists. +(Currently not handled by [SiteSync](module_site_sync.md)) \ No newline at end of file diff --git a/website/docs/artist_hosts_aftereffects.md b/website/docs/artist_hosts_aftereffects.md index d9522d5765..d415a1d47d 100644 --- a/website/docs/artist_hosts_aftereffects.md +++ b/website/docs/artist_hosts_aftereffects.md @@ -15,18 +15,18 @@ sidebar_label: AfterEffects ## Setup -To install the extension, download, install [Anastasyi's Extension Manager](https://install.anastasiy.com/). Open Anastasyi's Extension Manager and select AfterEffects in menu. Then go to `{path to pype}hosts/aftereffects/api/extension.zxp`. +To install the extension, download, install [Anastasyi's Extension Manager](https://install.anastasiy.com/). 
Open Anastasyi's Extension Manager and select AfterEffects in menu. Then go to `{path to pype}hosts/aftereffects/api/extension.zxp`. -Drag extension.zxp and drop it to Anastasyi's Extension Manager. The extension will install itself. +Drag extension.zxp and drop it to Anastasyi's Extension Manager. The extension will install itself. ## Implemented functionality AfterEffects implementation currently allows you to import and add various media to composition (image plates, renders, audio files, video files etc.) -and send prepared composition for rendering to Deadline or render locally. +and send prepared composition for rendering to Deadline or render locally. ## Usage -When you launch AfterEffects you will be met with the Workfiles app. If don't +When you launch AfterEffects you will be met with the Workfiles app. If don't have any previous workfiles, you can just close this window. Workfiles tools takes care of saving your .AEP files in the correct location and under @@ -34,7 +34,7 @@ a correct name. You should use it instead of standard file saving dialog. In AfterEffects you'll find the tools in the `OpenPype` extension: -![Extension](assets/photoshop_extension.png) +![Extension](assets/photoshop_extension.png) You can show the extension panel by going to `Window` > `Extensions` > `OpenPype`. @@ -58,6 +58,9 @@ Name of publishable instance (eg. subset name) could be configured with a templa Trash icon under the list of instances allows to delete any selected `render` instance. +Frame information (frame start, duration, fps) and resolution (width and height) is applied to selected composition from Asset Management System (Ftrack or DB) automatically! +(Eg. number of rendered frames is controlled by settings inserted from supervisor. Artist can override this by disabling validation only in special cases.) + Workfile instance will be automatically recreated though. If you do not want to publish it, use pill toggle on the instance item. If you would like to modify publishable instance, click on `Publish` tab at the top. This would allow you to change name of publishable @@ -67,7 +70,7 @@ Publisher allows publishing into different context, just click on any instance, #### RenderQueue -AE's Render Queue is required for publishing locally or on a farm. Artist needs to configure expected result format (extension, resolution) in the Render Queue in an Output module. +AE's Render Queue is required for publishing locally or on a farm. Artist needs to configure expected result format (extension, resolution) in the Render Queue in an Output module. Currently its expected to have only single render item per composition in the Render Queue. @@ -151,3 +154,25 @@ You can switch to a previous version of the image or update to the latest. ![Loader](assets/photoshop_manage_switch.gif) ![Loader](assets/photoshop_manage_update.gif) + + +### Setting section + +Composition properties should be controlled by state in Asset Management System (Ftrack etc). Extension provides couple of buttons to trigger this propagation. + +#### Set Resolution + +Set width and height from AMS to composition. + +#### Set Frame Range + +Start frame and duration in workarea is set according to the settings in AMS. Handles are incorporated (not inclusive). +It is expected that composition(s) is selected first before pushing this button! + +#### Apply All Settings + +Both previous settings are triggered at same time. + +### Experimental tools + +Currently empty. 
Could contain special tools available only for specific hosts for early access testing. diff --git a/website/docs/artist_hosts_substancepainter.md b/website/docs/artist_hosts_substancepainter.md new file mode 100644 index 0000000000..86bcbba82e --- /dev/null +++ b/website/docs/artist_hosts_substancepainter.md @@ -0,0 +1,107 @@ +--- +id: artist_hosts_substancepainter +title: Substance Painter +sidebar_label: Substance Painter +--- + +## OpenPype global tools + +- [Work Files](artist_tools.md#workfiles) +- [Load](artist_tools.md#loader) +- [Manage (Inventory)](artist_tools.md#inventory) +- [Publish](artist_tools.md#publisher) +- [Library Loader](artist_tools.md#library-loader) + +## Working with OpenPype in Substance Painter + +The Substance Painter OpenPype integration allows you to: + +- Set the project mesh and easily keep it in sync with updates of the model +- Easily export your textures as versioned publishes for others to load and update. + +## Setting the project mesh + +Substance Painter requires a project file to have a mesh path configured. +As such, you can't start a workfile without choosing a mesh path. + +To start a new project using a published model you can _without an open project_ +use OpenPype > Load.. > Load Mesh on a supported publish. This will prompt you +with a New Project prompt preset to that particular mesh file. + +If you already have a project open, you can also replace (reload) your mesh +using the same Load Mesh functionality. + +After having the project mesh loaded or reloaded through the loader +tool the mesh will be _managed_ by OpenPype. For example, you'll be notified +on workfile open whether the mesh in your workfile is outdated. You can also +set it to specific version using OpenPype > Manage.. where you can right click +on the project mesh to perform _Set Version_ + +:::info +A Substance Painter project will always have only one mesh set. Whenever you +trigger _Load Mesh_ from the loader this will **replace** your currently loaded +mesh for your open project. +::: + +## Publishing textures + +To publish your textures we must first create a `textureSet` +publish instance. + +To create a **TextureSet instance** we will use OpenPype's publisher tool. Go +to **OpenPype → Publish... → TextureSet** + +The texture set instance will define what Substance Painter export template (`.spexp`) to +use and thus defines what texture maps will be exported from your workfile. This +can be set with the **Output Template** attribute on the instance. + +:::info +The TextureSet instance gets saved with your Substance Painter project. As such, +you will only need to configure this once for your workfile. Next time you can +just click **OpenPype → Publish...** and start publishing directly with the +same settings. +::: + +#### Publish per output map of the Substance Painter preset + +The Texture Set instance generates a publish per output map that is defined in +the Substance Painter's export preset. For example a publish from a default +PBR Metallic Roughness texture set results in six separate published subsets +(if all the channels exist in your file). 
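As a rough sketch of the naming rule explained further below (the `$...` tokens and the optional parenthesized parts of the export preset's filename template are ignored, only the static part is kept), the reduction could be pictured like this. This is illustrative code, not the integration's actual implementation:

```python
import re


def static_map_name(filename_template):
    """Illustration only: reduce an export preset filename template to
    its static part, e.g. "$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)"
    becomes "BaseColor".
    """
    # drop optional parenthesized tokens such as (_$colorSpace) or (.$udim)
    result = re.sub(r"\([^)]*\)", "", filename_template)
    # drop $tokens such as $mesh or $textureSet
    result = re.sub(r"\$\w+", "", result)
    # whatever remains between the separators is the static part
    return result.strip("_. ")


print(static_map_name("$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)"))
# -> "BaseColor"
```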
+ +![Substance Painter PBR Metallic Roughness Export Preset](assets/substancepainter_pbrmetallicroughness_export_preset.png) + +When publishing for example a texture set with variant **Main** six instances will +be published with the variants: +- Main.**BaseColor** +- Main.**Emissive** +- Main.**Height** +- Main.**Metallic** +- Main.**Normal** +- Main.**Roughness** + +The bold output map name for the publish is based on the string that is pulled +from the what is considered to be the static part of the filename templates in +the export preset. The tokens like `$mesh` and `(_$colorSpace)` are ignored. +So `$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)` becomes `BaseColor`. + +An example output for PBR Metallic Roughness would be: + +![Substance Painter PBR Metallic Roughness Publish Example in Loader](assets/substancepainter_pbrmetallicroughness_published.png) + +## Known issues + +#### Can't see the OpenPype menu? + +If you're unable to see the OpenPype top level menu in Substance Painter make +sure you have launched Substance Painter through OpenPype and that the OpenPype +Integration plug-in is loaded inside Substance Painter: **Python > openpype_plugin** + +#### Substance Painter + Steam + +Running the steam version of Substance Painter within OpenPype will require you +to close the Steam executable before launching Substance Painter through OpenPype. +Otherwise the Substance Painter process is launched using Steam's existing +environment and thus will not be able to pick up the pipeline integration. + +This appears to be a limitation of how Steam works. \ No newline at end of file diff --git a/website/docs/assets/admin_hosts_photoshop_settings.png b/website/docs/assets/admin_hosts_photoshop_settings.png new file mode 100644 index 0000000000..aaa6ecbed7 Binary files /dev/null and b/website/docs/assets/admin_hosts_photoshop_settings.png differ diff --git a/website/docs/assets/aftereffects_extension.png b/website/docs/assets/aftereffects_extension.png new file mode 100644 index 0000000000..b14992471a Binary files /dev/null and b/website/docs/assets/aftereffects_extension.png differ diff --git a/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png b/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png new file mode 100644 index 0000000000..35a4545f83 Binary files /dev/null and b/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png differ diff --git a/website/docs/assets/substancepainter_pbrmetallicroughness_published.png b/website/docs/assets/substancepainter_pbrmetallicroughness_published.png new file mode 100644 index 0000000000..15b0e5b876 Binary files /dev/null and b/website/docs/assets/substancepainter_pbrmetallicroughness_published.png differ diff --git a/website/docs/dev_publishing.md b/website/docs/dev_publishing.md index 2c57537223..3ef6272373 100644 --- a/website/docs/dev_publishing.md +++ b/website/docs/dev_publishing.md @@ -506,6 +506,67 @@ or the scene file was copy pasted from different context. #### *Known errors* When there is a known error that can't be fixed by the user (e.g. can't connect to deadline service, etc.) `KnownPublishError` should be raised. The only difference is that its message is shown in UI to the artist otherwise a neutral message without context is shown. +### Plugins +Plugin is a single processing unit that can work with publish context and instances. + +#### Plugin types +There are 2 types of plugins - `InstancePlugin` and `ContextPlugin`. 
Be aware that inheritance of plugin from `InstancePlugin` or `ContextPlugin` actually does not affect if plugin is instance or context plugin, that is affected by argument name in `process` method. + +```python +import pyblish.api + + +# Context plugin +class MyContextPlugin(pyblish.api.ContextPlugin): + def process(self, context): + ... + +# Instance plugin +class MyInstancePlugin(pyblish.api.InstancePlugin): + def process(self, instance): + ... + +# Still an instance plugin +class MyOtherInstancePlugin(pyblish.api.ContextPlugin): + def process(self, instance): + ... +``` + +#### Plugin filtering +By pyblish logic, plugins have predefined filtering class attributes `hosts`, `targets` and `families`. Filter by `hosts` and `targets` are filters that are applied for current publishing process. Both filters are registered in `pyblish` module, `hosts` filtering may not match OpenPype host name (e.g. farm publishing uses `shell` in pyblish). Filter `families` works only on instance plugins and is dynamic during publish process by changing families of an instance. + +All filters are list of a strings `families = ["image"]`. Empty list is invalid filter and plugin will be skipped, to allow plugin for all values use a start `families = ["*"]`. For more detailed filtering options check [pyblish documentation](https://api.pyblish.com/pluginsystem). + +Each plugin must have order, there are 4 order milestones - Collect, Validate, Extract, Integration. Any plugin below collection order won't be processed. for more details check [pyblish documentation](https://api.pyblish.com/ordering). + +#### Plugin settings +Pyblish plugins may have settings. There are 2 ways how settings are applied, first is automated, and it's logic is based on function `filter_pyblish_plugins` in `./openpype/pipeline/publish/lib.py`, second is explicit by implementing class method `apply_settings` on a plugin. + + +Automated logic is expecting specific structure of project settings `project_settings[{category}]["plugins"]["publish"][{plugin class name}]`. The category is a key in root of project settings. There are currently 3 ways how the category key is received. +1. Use `settings_category` class attribute value from plugin. If `settings_category` is not `None` there is not any fallback to other way. +2. Use currently registered pyblish host. This will be probably deprecated soon. +3. Use 3rd folder name from a plugin filepath. From path `./maya/plugins/publish/collect_render.py` is used `maya` as the key. + +For any other use-case is recommended to use explicit approach by implementing `apply_settings` method. Must use `@classmethod` decorator and expect arguments for project settings and system settings. We're planning to support single argument with only project settings. +```python +import pyblish.api + + +class MyPlugin(pyblish.api.InstancePlugin): + profiles = [] + + @classmethod + def apply_settings(cls, project_settings, system_settings): + cls.profiles = ( + project_settings + ["addon"] + ["plugins"] + ["publish"] + ["vfx_profiles"] + ) +``` + ### Plugin extension Publish plugins can be extended by additional logic when inheriting from `OpenPypePyblishPluginMixin` which can be used as mixin (additional inheritance of class). Publish plugins that inherit from this mixin can define attributes that will be shown in **CreatedInstance**. One of the most important usages is to be able turn on/off optional plugins. 
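A minimal sketch of such a plugin, assuming the attribute definition classes exposed from `openpype.lib` (e.g. `BoolDef`) and the `get_attr_values_from_data` helper provided by the mixin; the plugin and attribute names are made up for the example:

```python
import pyblish.api

from openpype.lib import BoolDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin


class ValidateSomething(
    pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
    """Hypothetical validator with an artist-facing toggle."""

    order = pyblish.api.ValidatorOrder
    families = ["image"]

    @classmethod
    def get_attribute_defs(cls):
        # Attributes shown on matching instances in the Publisher UI
        return [
            BoolDef("check_names", label="Check names", default=True)
        ]

    def process(self, instance):
        # Values filled by the artist end up in 'publish_attributes'
        attr_values = self.get_attr_values_from_data(instance.data)
        if not attr_values.get("check_names", True):
            self.log.debug("Name check disabled by artist, skipping.")
            return
        # ... the actual validation would go here
```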
@@ -596,4 +657,4 @@ Publish attributes work the same way as create attributes but the source of attr ### Create dialog ![Publisher UI - Create dialog](assets/publisher_create_dialog.png) -Create dialog is used by artist to create new instances in a context. The context selection can be enabled/disabled by changing `create_allow_context_change` on [creator plugin](#creator). In the middle part the artist selects what will be created and what variant it is. On the right side is information about the selected creator and its pre-create attributes. There is also a question mark button which extends the window and displays more detailed information about the creator. \ No newline at end of file +Create dialog is used by artist to create new instances in a context. The context selection can be enabled/disabled by changing `create_allow_context_change` on [creator plugin](#creator). In the middle part the artist selects what will be created and what variant it is. On the right side is information about the selected creator and its pre-create attributes. There is also a question mark button which extends the window and displays more detailed information about the creator. diff --git a/website/docs/module_deadline.md b/website/docs/module_deadline.md index 94b6a381c2..bca2a83936 100644 --- a/website/docs/module_deadline.md +++ b/website/docs/module_deadline.md @@ -22,6 +22,9 @@ For [AWS Thinkbox Deadline](https://www.awsthinkbox.com/deadline) support you ne 5. Install our custom plugin and scripts to your deadline repository. It should be as simple as copying content of `openpype/modules/deadline/repository/custom` to `path/to/your/deadline/repository/custom`. +Multiple different DL webservice could be configured. First set them in point 4., then they could be configured per project in `project_settings/deadline/deadline_servers`. +Only single webservice could be a target of publish though. + ## Configuration diff --git a/website/docs/module_site_sync.md b/website/docs/module_site_sync.md index 3e5794579c..68f56cb548 100644 --- a/website/docs/module_site_sync.md +++ b/website/docs/module_site_sync.md @@ -7,80 +7,112 @@ sidebar_label: Site Sync import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; +Site Sync allows users and studios to synchronize published assets between +multiple 'sites'. Site denotes a storage location, +which could be a physical disk, server, cloud storage. To be able to use site +sync, it first needs to be configured. -:::warning -**This feature is** currently **in a beta stage** and it is not recommended to rely on it fully for production. -::: - -Site Sync allows users and studios to synchronize published assets between multiple 'sites'. Site denotes a storage location, -which could be a physical disk, server, cloud storage. To be able to use site sync, it first needs to be configured. - -The general idea is that each user acts as an individual site and can download and upload any published project files when they are needed. that way, artist can have access to the whole project, but only every store files that are relevant to them on their home workstation. +The general idea is that each user acts as an individual site and can download +and upload any published project files when they are needed. that way, artist +can have access to the whole project, but only every store files that are +relevant to them on their home workstation. :::note -At the moment site sync is only able to deal with publishes files. 
No workfiles will be synchronized unless they are published. We are working on making workfile synchronization possible as well. +At the moment site sync is only able to deal with publishes files. No workfiles +will be synchronized unless they are published. We are working on making +workfile synchronization possible as well. ::: ## System Settings -To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype Settings/System/Modules/Site Sync**. +To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype +Settings/System/Modules/Site Sync**. ![Configure module](assets/site_sync_system.png) -### Sites +### Sites By default there are two sites created for each OpenPype installation: -- **studio** - default site - usually a centralized mounted disk accessible to all artists. Studio site is used if Site Sync is disabled. -- **local** - each workstation or server running OpenPype Tray receives its own with unique site name. Workstation refers to itself as "local"however all other sites will see it under it's unique ID. -Artists can explore their site ID by opening OpenPype Info tool by clicking on a version number in the tray app. +- **studio** - default site - usually a centralized mounted disk accessible to + all artists. Studio site is used if Site Sync is disabled. +- **local** - each workstation or server running OpenPype Tray receives its own + with unique site name. Workstation refers to itself as "local"however all + other sites will see it under it's unique ID. -Many different sites can be created and configured on the system level, and some or all can be assigned to each project. +Artists can explore their site ID by opening OpenPype Info tool by clicking on +a version number in the tray app. -Each OpenPype Tray app works with two sites at one time. (Sites can be the same, and no syncing is done in this setup). +Many different sites can be created and configured on the system level, and +some or all can be assigned to each project. -Sites could be configured differently per project basis. +Each OpenPype Tray app works with two sites at one time. (Sites can be the +same, and no syncing is done in this setup). -Each new site needs to be created first in `System Settings`. Most important feature of site is its Provider, select one from already prepared Providers. +Sites could be configured differently per project basis. -#### Alternative sites +Each new site needs to be created first in `System Settings`. Most important +feature of site is its Provider, select one from already prepared Providers. + +#### Alternative sites This attribute is meant for special use cases only. -One of the use cases is sftp site vendoring (exposing) same data as regular site (studio). Each site is accessible for different audience. 'studio' for artists in a studio via shared disk, 'sftp' for externals via sftp server with mounted 'studio' drive. +One of the use cases is sftp site vendoring (exposing) same data as regular +site (studio). Each site is accessible for different audience. 'studio' for +artists in a studio via shared disk, 'sftp' for externals via sftp server with +mounted 'studio' drive. -Change of file status on one site actually means same change on 'alternate' site occurred too. (eg. artists publish to 'studio', 'sftp' is using -same location >> file is accessible on 'sftp' site right away, no need to sync it anyhow.) +Change of file status on one site actually means same change on 'alternate' +site occurred too. (eg. 
artists publish to 'studio', 'sftp' is using +same location >> file is accessible on 'sftp' site right away, no need to sync +it anyhow.) ##### Example + ![Configure module](assets/site_sync_system_sites.png) -Admin created new `sftp` site which is handled by `SFTP` provider. Somewhere in the studio SFTP server is deployed on a machine that has access to `studio` drive. +Admin created new `sftp` site which is handled by `SFTP` provider. Somewhere in +the studio SFTP server is deployed on a machine that has access to `studio` +drive. Alternative sites work both way: + - everything published to `studio` is accessible on a `sftp` site too -- everything published to `sftp` (most probably via artist's local disk - artists publishes locally, representation is marked to be synced to `sftp`. Immediately after it is synced, it is marked to be available on `studio` too for artists in the studio to use.) +- everything published to `sftp` (most probably via artist's local disk - + artists publishes locally, representation is marked to be synced to `sftp`. + Immediately after it is synced, it is marked to be available on `studio` too + for artists in the studio to use.) ## Project Settings -Sites need to be made available for each project. Of course this is possible to do on the default project as well, in which case all other projects will inherit these settings until overridden explicitly. +Sites need to be made available for each project. Of course this is possible to +do on the default project as well, in which case all other projects will +inherit these settings until overridden explicitly. You'll find the setting in **Settings/Project/Global/Site Sync** -The attributes that can be configured will vary between sites and their providers. +The attributes that can be configured will vary between sites and their +providers. ## Local settings -Each user should configure root folder for their 'local' site via **Local Settings** in OpenPype Tray. This folder will be used for all files that the user publishes or downloads while working on a project. Artist has the option to set the folder as "default"in which case it is used for all the projects, or it can be set on a project level individually. +Each user should configure root folder for their 'local' site via **Local +Settings** in OpenPype Tray. This folder will be used for all files that the +user publishes or downloads while working on a project. Artist has the option +to set the folder as "default"in which case it is used for all the projects, or +it can be set on a project level individually. -Artists can also override which site they use as active and remote if need be. +Artists can also override which site they use as active and remote if need be. ![Local overrides](assets/site_sync_local_setting.png) - ## Providers -Each site implements a so called `provider` which handles most common operations (list files, copy files etc.) and provides interface with a particular type of storage. (disk, gdrive, aws, etc.) -Multiple configured sites could share the same provider with different settings (multiple mounted disks - each disk can be a separate site, while +Each site implements a so called `provider` which handles most common +operations (list files, copy files etc.) and provides interface with a +particular type of storage. (disk, gdrive, aws, etc.) +Multiple configured sites could share the same provider with different +settings (multiple mounted disks - each disk can be a separate site, while all share the same provider). 
**Currently implemented providers:**
@@ -89,21 +121,30 @@ all share the same provider).
Handles files stored on disk storage.
-Local drive provider is the most basic one that is used for accessing all standard hard disk storage scenarios. It will work with any storage that can be mounted on your system in a standard way. This could correspond to a physical external hard drive, network mounted storage, internal drive or even VPN connected network drive. It doesn't care about how the drive is mounted, but you must be able to point to it with a simple directory path.
+The local drive provider is the most basic one that is used for accessing all
+standard hard disk storage scenarios. It will work with any storage that can be
+mounted on your system in a standard way. This could correspond to a physical
+external hard drive, network mounted storage, internal drive or even a VPN
+connected network drive. It doesn't care about how the drive is mounted, but
+you must be able to point to it with a simple directory path.
Default sites `local` and `studio` both use local drive provider.
-
### Google Drive
-Handles files on Google Drive (this). GDrive is provided as a production example for implementing other cloud providers
+Handles files on Google Drive. GDrive is provided as a production
+example for implementing other cloud providers.
-Let's imagine a small globally distributed studio which wants all published work for all their freelancers uploaded to Google Drive folder.
+Let's imagine a small globally distributed studio which wants all published
+work for all their freelancers uploaded to a Google Drive folder.
For this use case admin needs to configure:
-- how many times it tries to synchronize file in case of some issue (network, permissions)
+
+- how many times it tries to synchronize a file in case of some issue (network,
+ permissions)
- how often should synchronization check for new assets
-- sites for synchronization - 'local' and 'gdrive' (this can be overridden in local settings)
+- sites for synchronization - 'local' and 'gdrive' (this can be overridden in
+ local settings)
- user credentials
- root folder location on Google Drive side
@@ -111,30 +152,43 @@ Configuration would look like this:
![Configure project](assets/site_sync_project_settings.png)
-*Site Sync* for Google Drive works using its API: https://developers.google.com/drive/api/v3/about-sdk
+*Site Sync* for Google Drive works using its
+API: https://developers.google.com/drive/api/v3/about-sdk
-To configure Google Drive side you would need to have access to Google Cloud Platform project: https://console.cloud.google.com/
+To configure the Google Drive side you would need to have access to a Google Cloud
+Platform project: https://console.cloud.google.com/
To get working connection to Google Drive there are some necessary steps:
-- first you need to enable GDrive API: https://developers.google.com/drive/api/v3/enable-drive-api
-- next you need to create user, choose **Service Account** (for basic configuration no roles for account are necessary)
+
+- first you need to enable the GDrive
+ API: https://developers.google.com/drive/api/v3/enable-drive-api
+- next you need to create a user, choose **Service Account** (for basic
+ configuration no roles for the account are necessary)
- add new key for created account and download .json file with credentials
-- share destination folder on the Google Drive with created account (directly in GDrive web application)
-- add new site back in OpenPype Settings, name as you want, provider needs to be 'gdrive'
+- share destination folder on the Google Drive with the created account (directly
+ in the GDrive web application)
+- add a new site back in OpenPype Settings, name it as you want, the provider needs to
+ be 'gdrive'
- distribute credentials file via shared mounted disk location
:::note
-If you are using regular personal GDrive for testing don't forget adding `/My Drive` as the prefix in root configuration. Business accounts and share drives don't need this.
+If you are using a regular personal GDrive for testing, don't forget
+to add `/My Drive` as the prefix in the root configuration. Business accounts and
+share drives don't need this.
:::
### SFTP
-SFTP provider is used to connect to SFTP server. Currently authentication with `user:password` or `user:ssh key` is implemented.
-Please provide only one combination, don't forget to provide password for ssh key if ssh key was created with a passphrase.
+The SFTP provider is used to connect to an SFTP server. Currently authentication
+with `user:password` or `user:ssh key` is implemented.
+Please provide only one combination; don't forget to provide a password for the ssh
+key if the ssh key was created with a passphrase.
-(SFTP connection could be a bit finicky, use FileZilla or WinSCP for testing connection, it will be mush faster.)
+(SFTP connection could be a bit finicky, use FileZilla or WinSCP for testing the
+connection, it will be much faster.)
-Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!
+Beware that the ssh key expects OpenSSH format (`.pem`), not Putty
+format (`.ppk`)!
#### How to set SFTP site
@@ -143,60 +197,101 @@ Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!
![Enable syncing and create site](assets/site_sync_sftp_system.png)
-- In Projects setting enable Site Sync (on default project - all project will be synched, or on specific project)
-Configure SFTP connection and destination folder on a SFTP server (in screenshot `/upload`)
+- In Project settings enable Site Sync (on the default project - all projects will
+ be synced, or on a specific project)
+- Configure the SFTP connection and destination folder on the SFTP server (in the
+ screenshot `/upload`)
![SFTP connection](assets/site_sync_project_sftp_settings.png)
-
-- if you want to force syncing between local and sftp site for all users, use combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
-- if you want to allow only specific users to use SFTP syncing (external users, not located in the office), use `active site: studio`, `remote site: studio`.
+
+- if you want to force syncing between the local and sftp sites for all users, use the
+ combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
+- if you want to allow only specific users to use SFTP syncing (external users,
+ not located in the office), use `active site: studio`, `remote site: studio`.
![Select active and remote site on a project](assets/site_sync_sftp_project_setting_not_forced.png)
-- Each artist can decide and configure syncing from his/her local to SFTP via `Local Settings`
+- Each artist can decide and configure syncing from their local site to SFTP
+ via `Local Settings`
![Select active and remote site on a project](assets/site_sync_sftp_settings_local.png)
-
+
### Custom providers
-If a studio needs to use other services for cloud storage, or want to implement totally different storage providers, they can do so by writing their own provider plugin. We're working on a developer documentation, however, for now we recommend looking at `abstract_provider.py`and `gdrive.py` inside `openpype/modules/sync_server/providers` and using it as a template.
+If a studio needs to use other services for cloud storage, or wants to implement
+totally different storage providers, they can do so by writing their own
+provider plugin. We're working on developer documentation, however, for now
+we recommend looking at `abstract_provider.py` and `gdrive.py`
+inside `openpype/modules/sync_server/providers` and using them as a template.
### Running Site Sync in background
-Site Sync server synchronizes new published files from artist machine into configured remote location by default.
+The Site Sync server synchronizes newly published files from the artist machine into
+the configured remote location by default.
-There might be a use case where you need to synchronize between "non-artist" sites, for example between studio site and cloud. In this case
-you need to run Site Sync as a background process from a command line (via service etc) 24/7.
+There might be a use case where you need to synchronize between "non-artist"
+sites, for example between the studio site and the cloud. In this case
+you need to run Site Sync as a background process from a command line (via a
+service etc.) 24/7.
-To configure all sites where all published files should be synced eventually you need to configure `project_settings/global/sync_server/config/always_accessible_on` property in Settings (per project) first.
+To configure all sites where all published files should be synced eventually
+you need to
+configure the `project_settings/global/sync_server/config/always_accessible_on`
+property in Settings (per project) first.
![Set another non artist remote site](assets/site_sync_always_on.png)
This is an example of:
+
- Site Sync is enabled for a project
-- default active and remote sites are set to `studio` - eg. standard process: everyone is working in a studio, publishing to shared location etc.
-- (but this also allows any of the artists to work remotely, they would change their active site in their own Local Settings to `local` and configure local root. This would result in everything artist publishes is saved first onto his local folder AND synchronized to `studio` site eventually.)
+- default active and remote sites are set to `studio` - e.g. the standard process:
+ everyone is working in a studio, publishing to a shared location etc.
+- (but this also allows any of the artists to work remotely, they would change
+ their active site in their own Local Settings to `local` and configure a local
+ root.
+ This would result in everything the artist publishes being saved first into their
+ local folder AND synchronized to the `studio` site eventually.)
- everything exported must also be eventually uploaded to `sftp` site
-This eventual synchronization between `studio` and `sftp` sites must be physically handled by background process.
+This eventual synchronization between `studio` and `sftp` sites must be
+physically handled by a background process.
-As current implementation relies heavily on Settings and Local Settings, background process for a specific site ('studio' for example) must be configured via Tray first to `syncserver` command to work.
+As the current implementation relies heavily on Settings and Local Settings,
+the background process for a specific site ('studio' for example) must be
+configured via Tray first for the `syncserver` command to work.
To do this:
-- run OP `Tray` with environment variable OPENPYPE_LOCAL_ID set to name of active (source) site. In most use cases it would be studio (for cases of backups of everything published to studio site to different cloud site etc.)
+- run OP `Tray` with the environment variable OPENPYPE_LOCAL_ID set to the name of
+ the active (source) site. In most use cases it would be studio (for cases of
+ backups of everything published to the studio site to a different cloud site etc.)
- start `Tray`
-- check `Local ID` in information dialog after clicking on version number in the Tray
+- check `Local ID` in the information dialog after clicking on the version number in
+ the Tray
- open `Local Settings` in the `Tray`
- configure for each project necessary active site and remote site
- close `Tray`
- run OP from a command line with `syncserver` and `--active_site` arguments
-
-This is an example how to trigger background syncing process where active (source) site is `studio`.
-(It is expected that OP is installed on a machine, `openpype_console` is on PATH. If not, add full path to executable.
+This is an example of how to trigger the background syncing process where the
+active (source) site is `studio`.
+(It is expected that OP is installed on the machine and `openpype_console` is on
+PATH. If not, add the full path to the executable.
)
+
```shell
openpype_console syncserver --active_site studio
-```
\ No newline at end of file
+```
+
+### Syncing of last published workfile
+
+Some DCCs might have
+`project_settings/global/tools/Workfiles/last_workfile_on_startup` enabled,
+e.g. open the DCC with the last opened workfile.
+
+The flag `use_last_published_workfile` tells OpenPype that the last published
+workfile should be used if no workfile is present locally.
+This use case could happen if an artist starts working on a new task locally and
+doesn't have any workfile present. In that case the last published workfile will be
+synchronized locally and its version bumped by 1 (as the workfile's version is
+always +1 from the published version).
\ No newline at end of file
diff --git a/website/sidebars.js b/website/sidebars.js
index 93887e00f6..4874782197 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -126,6 +126,7 @@ module.exports = {
"admin_hosts_nuke",
"admin_hosts_resolve",
"admin_hosts_harmony",
+ "admin_hosts_photoshop",
"admin_hosts_aftereffects",
"admin_hosts_tvpaint"
],
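To illustrate the version handling described in the "Syncing of last published workfile" section above, here is a small hedged Python sketch. Only the "+1 from the published version" rule comes from the documentation text; the function names and the filename pattern are assumptions for illustration, not OpenPype's actual workfile template.

```python
def next_local_workfile_version(last_published_version: int) -> int:
    # The local workfile is always one version above the last published one.
    return last_published_version + 1


def build_workfile_name(task_name: str, last_published_version: int) -> str:
    # Hypothetical filename pattern; the real template is configurable.
    version = next_local_workfile_version(last_published_version)
    return "{}_v{:03d}".format(task_name, version)


if __name__ == "__main__":
    # The last published workfile was v003, so the synced local copy becomes v004.
    print(build_workfile_name("modeling", 3))  # modeling_v004
```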