diff --git a/CHANGELOG.md b/CHANGELOG.md index 4a38bbf7af..fc14b5f507 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,134 +1,78 @@ # Changelog -## [3.7.0-nightly.6](https://github.com/pypeclub/OpenPype/tree/HEAD) +## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04) -[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD) - -### πŸ“– Documentation - -- docs\[website\]: Add Ellipse Studio \(logo\) as an OpenPype contributor [\#2324](https://github.com/pypeclub/OpenPype/pull/2324) - -**πŸ†• New features** - -- Settings UI use OpenPype styles [\#2296](https://github.com/pypeclub/OpenPype/pull/2296) +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...3.7.0) **πŸš€ Enhancements** -- Settings: Webpublisher in hosts enum [\#2367](https://github.com/pypeclub/OpenPype/pull/2367) +- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462) +- Photoshop: New style validations for New publisher [\#2429](https://github.com/pypeclub/OpenPype/pull/2429) +- General: Environment variables groups [\#2424](https://github.com/pypeclub/OpenPype/pull/2424) +- Unreal: Dynamic menu created in Python [\#2422](https://github.com/pypeclub/OpenPype/pull/2422) +- Settings UI: Hyperlinks to settings [\#2420](https://github.com/pypeclub/OpenPype/pull/2420) +- Modules: JobQueue module moved one hierarchy level higher [\#2419](https://github.com/pypeclub/OpenPype/pull/2419) +- TimersManager: Start timer post launch hook [\#2418](https://github.com/pypeclub/OpenPype/pull/2418) +- General: Run applications as separate processes under linux [\#2408](https://github.com/pypeclub/OpenPype/pull/2408) +- Ftrack: Check existence of object type on recreation [\#2404](https://github.com/pypeclub/OpenPype/pull/2404) +- Enhancement: Global cleanup plugin that explicitly remove paths from context [\#2402](https://github.com/pypeclub/OpenPype/pull/2402) +- General: MongoDB ability to specify replica set groups [\#2401](https://github.com/pypeclub/OpenPype/pull/2401) +- Flame: moving `utility\_scripts` to api folder also with `scripts` [\#2385](https://github.com/pypeclub/OpenPype/pull/2385) +- Centos 7 dependency compatibility [\#2384](https://github.com/pypeclub/OpenPype/pull/2384) +- Enhancement: Settings: Use project settings values from another project [\#2382](https://github.com/pypeclub/OpenPype/pull/2382) +- Blender 3: Support auto install for new blender version [\#2377](https://github.com/pypeclub/OpenPype/pull/2377) +- Maya add render image path to settings [\#2375](https://github.com/pypeclub/OpenPype/pull/2375) - Hiero: python3 compatibility [\#2365](https://github.com/pypeclub/OpenPype/pull/2365) -- Burnins: Be able recognize mxf OPAtom format [\#2361](https://github.com/pypeclub/OpenPype/pull/2361) -- Local settings: Copyable studio paths [\#2349](https://github.com/pypeclub/OpenPype/pull/2349) -- Assets Widget: Clear model on project change [\#2345](https://github.com/pypeclub/OpenPype/pull/2345) -- General: OpenPype default modules hierarchy [\#2338](https://github.com/pypeclub/OpenPype/pull/2338) -- General: FFprobe error exception contain original error message [\#2328](https://github.com/pypeclub/OpenPype/pull/2328) -- Resolve: Add experimental button to menu [\#2325](https://github.com/pypeclub/OpenPype/pull/2325) -- Hiero: Add experimental tools action [\#2323](https://github.com/pypeclub/OpenPype/pull/2323) -- Input links: Cleanup and unification of differences [\#2322](https://github.com/pypeclub/OpenPype/pull/2322) -- 
General: Don't validate vendor bin with executing them [\#2317](https://github.com/pypeclub/OpenPype/pull/2317) -- General: Multilayer EXRs support [\#2315](https://github.com/pypeclub/OpenPype/pull/2315) -- General: Run process log stderr as info log level [\#2309](https://github.com/pypeclub/OpenPype/pull/2309) -- General: Reduce vendor imports [\#2305](https://github.com/pypeclub/OpenPype/pull/2305) -- Tools: Cleanup of unused classes [\#2304](https://github.com/pypeclub/OpenPype/pull/2304) -- Project Manager: Added ability to delete project [\#2298](https://github.com/pypeclub/OpenPype/pull/2298) -- Nuke: extract baked review videos presets [\#2248](https://github.com/pypeclub/OpenPype/pull/2248) +- Maya: Add is\_static\_image\_plane and is\_in\_all\_views option in imagePlaneLoader [\#2356](https://github.com/pypeclub/OpenPype/pull/2356) +- TVPaint: Move implementation to OpenPype [\#2336](https://github.com/pypeclub/OpenPype/pull/2336) **πŸ› Bug fixes** +- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471) +- Short Pyblish plugin path [\#2428](https://github.com/pypeclub/OpenPype/pull/2428) +- PS: Introduced settings for invalid characters to use in ValidateNaming plugin [\#2417](https://github.com/pypeclub/OpenPype/pull/2417) +- Settings UI: Breadcrumbs path does not create new entities [\#2416](https://github.com/pypeclub/OpenPype/pull/2416) +- AfterEffects: Variant 2022 is in defaults but missing in schemas [\#2412](https://github.com/pypeclub/OpenPype/pull/2412) +- Nuke: baking representations was not additive [\#2406](https://github.com/pypeclub/OpenPype/pull/2406) +- General: Fix access to environments from default settings [\#2403](https://github.com/pypeclub/OpenPype/pull/2403) +- Fix: Placeholder Input color set fix [\#2399](https://github.com/pypeclub/OpenPype/pull/2399) +- Settings: Fix state change of wrapper label [\#2396](https://github.com/pypeclub/OpenPype/pull/2396) - Flame: fix ftrack publisher [\#2381](https://github.com/pypeclub/OpenPype/pull/2381) - hiero: solve custom ocio path [\#2379](https://github.com/pypeclub/OpenPype/pull/2379) - hiero: fix workio and flatten [\#2378](https://github.com/pypeclub/OpenPype/pull/2378) - Nuke: fixing menu re-drawing during context change [\#2374](https://github.com/pypeclub/OpenPype/pull/2374) - Webpublisher: Fix assignment of families of TVpaint instances [\#2373](https://github.com/pypeclub/OpenPype/pull/2373) - Nuke: fixing node name based on switched asset name [\#2369](https://github.com/pypeclub/OpenPype/pull/2369) -- JobQueue: Fix loading of settings [\#2362](https://github.com/pypeclub/OpenPype/pull/2362) -- Tools: Placeholder color [\#2359](https://github.com/pypeclub/OpenPype/pull/2359) -- Launcher: Minimize button on MacOs [\#2355](https://github.com/pypeclub/OpenPype/pull/2355) -- StandalonePublisher: Fix import of constant [\#2354](https://github.com/pypeclub/OpenPype/pull/2354) -- Adobe products show issue [\#2347](https://github.com/pypeclub/OpenPype/pull/2347) -- Maya Look Assigner: Fix Python 3 compatibility [\#2343](https://github.com/pypeclub/OpenPype/pull/2343) -- Remove wrongly used host for hook [\#2342](https://github.com/pypeclub/OpenPype/pull/2342) -- Tools: Use Qt context on tools show [\#2340](https://github.com/pypeclub/OpenPype/pull/2340) -- Flame: Fix default argument value in custom dictionary [\#2339](https://github.com/pypeclub/OpenPype/pull/2339) -- Timers Manager: Disable auto stop timer on linux platform 
[\#2334](https://github.com/pypeclub/OpenPype/pull/2334) -- nuke: bake preset single input exception [\#2331](https://github.com/pypeclub/OpenPype/pull/2331) -- Hiero: fixing multiple templates at a hierarchy parent [\#2330](https://github.com/pypeclub/OpenPype/pull/2330) -- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326) -- InputLinks: Typo in "inputLinks" key [\#2314](https://github.com/pypeclub/OpenPype/pull/2314) -- Deadline timeout and logging [\#2312](https://github.com/pypeclub/OpenPype/pull/2312) -- nuke: do not multiply representation on class method [\#2311](https://github.com/pypeclub/OpenPype/pull/2311) -- Workfiles tool: Fix task formatting [\#2306](https://github.com/pypeclub/OpenPype/pull/2306) -- Delivery: Fix delivery paths created on windows [\#2302](https://github.com/pypeclub/OpenPype/pull/2302) -- Maya: Deadline - fix limit groups [\#2295](https://github.com/pypeclub/OpenPype/pull/2295) -- Royal Render: Fix plugin order and OpenPype auto-detection [\#2291](https://github.com/pypeclub/OpenPype/pull/2291) -- Alternate site for site sync doesnt work for sequences [\#2284](https://github.com/pypeclub/OpenPype/pull/2284) +- Houdini: Fix HDA creation [\#2350](https://github.com/pypeclub/OpenPype/pull/2350) **Merged pull requests:** +- Forced cx\_freeze to include sqlite3 into build [\#2432](https://github.com/pypeclub/OpenPype/pull/2432) +- Maya: Replaced PATH usage with vendored oiio path for maketx utility [\#2405](https://github.com/pypeclub/OpenPype/pull/2405) +- \[Fix\]\[MAYA\] Handle message type attribute within CollectLook [\#2394](https://github.com/pypeclub/OpenPype/pull/2394) +- Add validator to check correct version of extension for PS and AE [\#2387](https://github.com/pypeclub/OpenPype/pull/2387) - Linux : flip updating submodules logic [\#2357](https://github.com/pypeclub/OpenPype/pull/2357) -- Update of avalon-core [\#2346](https://github.com/pypeclub/OpenPype/pull/2346) -- Maya: configurable model top level validation [\#2321](https://github.com/pypeclub/OpenPype/pull/2321) ## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4) -**πŸ› Bug fixes** - -- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294) - ## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3) -**πŸ› Bug fixes** - -- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280) - ## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2) -**πŸš€ Enhancements** - -- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265) -- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262) -- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256) -- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255) -- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251) -- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244) - -**πŸ› Bug fixes** - -- Tools: Parenting of tools in Nuke and Hiero [\#2266](https://github.com/pypeclub/OpenPype/pull/2266) -- limiting 
validator to specific editorial hosts [\#2264](https://github.com/pypeclub/OpenPype/pull/2264)
-- Tools: Select Context dialog attribute fix [\#2261](https://github.com/pypeclub/OpenPype/pull/2261)
-- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
-- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
-- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
-- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
-
 ## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)

-**πŸ› Bug fixes**
-
-- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
-
 ## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)

-**πŸš€ Enhancements**
-
-- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
-- General: Skip module directories without init file [\#2239](https://github.com/pypeclub/OpenPype/pull/2239)
-
-**πŸ› Bug fixes**
-
-- Ftrack: Sync project ftrack id cache issue [\#2250](https://github.com/pypeclub/OpenPype/pull/2250)
-- Ftrack: Session creation and Prepare project [\#2245](https://github.com/pypeclub/OpenPype/pull/2245)
-
 ## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)

 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)
diff --git a/Dockerfile.centos7 b/Dockerfile.centos7
index f3b257e66b..736a42663c 100644
--- a/Dockerfile.centos7
+++ b/Dockerfile.centos7
@@ -41,6 +41,8 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
     ncurses \
     ncurses-devel \
     qt5-qtbase-devel \
+    xcb-util-wm \
+    xcb-util-renderutil \
     && yum clean all

 # we need to build our own patchelf
@@ -92,7 +94,8 @@ RUN source $HOME/.bashrc \
 RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
     && cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
     && cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
-    && cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib
+    && cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib \
+    && cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.7/vendor/python/PySide2/Qt/lib

 RUN cd /opt/openpype \
     rm -rf ./vendor/bin
diff --git a/app_launcher.py b/app_launcher.py
new file mode 100644
index 0000000000..6dc1518370
--- /dev/null
+++ b/app_launcher.py
@@ -0,0 +1,49 @@
+"""Launch a process that is not a child process of python or OpenPype.
+
+This is written for linux distributions where the process tree may affect
+what gets closed, or blocked from being closed, when a parent process exits.
+"""
+
+import os
+import sys
+import subprocess
+import json
+
+
+def main(input_json_path):
+    """Read launch arguments from a json file and launch the process.
+
+    It is expected that the json contains an "args" key with a string or a
+    list of strings.
+
+    Arguments are converted to a single string using `list2cmdline`. A
+    trailing `&` is appended, which causes the launched process to be
+    detached and to run as a "background" process.
+
+    ## Notes
+    @iLLiCiT: This should be possible to do with 'disown' or double forking,
+    but I didn't find a way to do it properly. Disown didn't work as expected
+    for me, and double forking killed the parent process, which is unexpected
+    too.
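+
+    Example json content (a minimal sketch; the env and args values below
+    are illustrative only):
+
+        {
+            "env": {"AVALON_PROJECT": "ExampleProject"},
+            "args": ["/path/to/executable", "--some-flag"]
+        }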
+    """
+    with open(input_json_path, "r") as stream:
+        data = json.load(stream)
+
+    # Change environment variables
+    env = data.get("env") or {}
+    for key, value in env.items():
+        os.environ[key] = value
+
+    # Prepare launch arguments
+    args = data["args"]
+    if isinstance(args, list):
+        args = subprocess.list2cmdline(args)
+
+    # Run the command as background process
+    shell_cmd = args + " &"
+    os.system(shell_cmd)
+    sys.exit(0)
+
+
+if __name__ == "__main__":
+    # Expect that last argument is path to a json with launch args information
+    main(sys.argv[-1])
diff --git a/openpype/api.py b/openpype/api.py
index a6529202ff..51854492ab 100644
--- a/openpype/api.py
+++ b/openpype/api.py
@@ -31,8 +31,6 @@ from .lib import (
 )

 from .lib.mongo import (
-    decompose_url,
-    compose_url,
     get_default_components
 )

@@ -84,8 +82,6 @@ __all__ = [
     "Anatomy",
     "config",
     "execute",
-    "decompose_url",
-    "compose_url",
     "get_default_components",
     "ApplicationManager",
     "BuildWorkfile",
diff --git a/openpype/cli.py b/openpype/cli.py
index 4c4dc1a3c6..6e9c237b0e 100644
--- a/openpype/cli.py
+++ b/openpype/cli.py
@@ -138,7 +138,10 @@ def webpublisherwebserver(debug, executable, upload_dir, host=None, port=None):
 @click.option("--asset", help="Asset name", default=None)
 @click.option("--task", help="Task name", default=None)
 @click.option("--app", help="Application name", default=None)
-def extractenvironments(output_json_path, project, asset, task, app):
+@click.option(
+    "--envgroup", help="Environment group (e.g. \"farm\")", default=None
+)
+def extractenvironments(output_json_path, project, asset, task, app, envgroup):
     """Extract environment variables for entered context to a json file.

     Entered output filepath will be created if does not exists.
@@ -149,7 +152,7 @@ def extractenvironments(output_json_path, project, asset, task, app):
     Context options are "project", "asset", "task", "app"
     """
     PypeCommands.extractenvironments(
-        output_json_path, project, asset, task, app
+        output_json_path, project, asset, task, app, envgroup
     )


@@ -356,9 +359,22 @@ def run(script):
               "--pyargs",
               help="Run tests from package",
               default=None)
-def runtests(folder, mark, pyargs):
+@click.option("-t",
+              "--test_data_folder",
+              help="Directory path to unzipped test files",
+              default=None)
+@click.option("-s",
+              "--persist",
+              help="Persist test DB and published files after test end",
+              default=None)
+@click.option("-a",
+              "--app_variant",
+              help="Provide specific app variant for test, empty for latest",
+              default=None)
+def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant):
     """Run all automatic tests after proper initialization via start.py"""
-    PypeCommands().run_tests(folder, mark, pyargs)
+    PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
+                             persist, app_variant)


 @main.command()
diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py
new file mode 100644
index 0000000000..d79c5831ee
--- /dev/null
+++ b/openpype/hooks/pre_create_extra_workdir_folders.py
@@ -0,0 +1,33 @@
+import os
+from openpype.lib import (
+    PreLaunchHook,
+    create_workdir_extra_folders
+)
+
+
+class CreateWorkdirExtraFolders(PreLaunchHook):
+    """Create extra folders in the work directory before application launch.
+
+    Folders to create are resolved by the `create_workdir_extra_folders`
+    helper from the current project name, host name, task type and task name.
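+
+    A minimal usage sketch of the helper call this hook performs (all
+    values below are illustrative only):
+
+        create_workdir_extra_folders(
+            workdir="/projects/ExampleProject/shots/sh010/work/compositing",
+            host_name="nuke",
+            task_type="Compositing",
+            task_name="compositing",
+            project_name="ExampleProject"
+        )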
+ """ + + # Execute after workfile template copy + order = 15 + + def execute(self): + if not self.application.is_host: + return + + env = self.data.get("env") or {} + workdir = env.get("AVALON_WORKDIR") + if not workdir or not os.path.exists(workdir): + return + + host_name = self.application.host_name + task_type = self.data["task_type"] + task_name = self.data["task_name"] + project_name = self.data["project_name"] + + create_workdir_extra_folders( + workdir, host_name, task_type, task_name, project_name, + ) diff --git a/openpype/hooks/pre_global_host_data.py b/openpype/hooks/pre_global_host_data.py index b32fb5e44a..6b08cdb444 100644 --- a/openpype/hooks/pre_global_host_data.py +++ b/openpype/hooks/pre_global_host_data.py @@ -48,7 +48,7 @@ class GlobalHostDataHook(PreLaunchHook): "log": self.log }) - prepare_host_environments(temp_data) + prepare_host_environments(temp_data, self.launch_context.env_group) prepare_context_environments(temp_data) temp_data.pop("log") diff --git a/openpype/hosts/aftereffects/plugins/publish/closeAE.py b/openpype/hosts/aftereffects/plugins/publish/closeAE.py new file mode 100644 index 0000000000..21bedf0125 --- /dev/null +++ b/openpype/hosts/aftereffects/plugins/publish/closeAE.py @@ -0,0 +1,27 @@ +# -*- coding: utf-8 -*- +"""Close AE after publish. For Webpublishing only.""" +import pyblish.api + +from avalon import aftereffects + + +class CloseAE(pyblish.api.ContextPlugin): + """Close AE after publish. For Webpublishing only. + """ + + order = pyblish.api.IntegratorOrder + 14 + label = "Close AE" + optional = True + active = True + + hosts = ["aftereffects"] + targets = ["remotepublish"] + + def process(self, context): + self.log.info("CloseAE") + + stub = aftereffects.stub() + self.log.info("Shutting down AE") + stub.save() + stub.close() + self.log.info("AE closed") diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_current_file.py b/openpype/hosts/aftereffects/plugins/publish/collect_current_file.py index b59ff41a0e..51f6f5c844 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_current_file.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_current_file.py @@ -8,7 +8,7 @@ from avalon import aftereffects class CollectCurrentFile(pyblish.api.ContextPlugin): """Inject the current working file into context""" - order = pyblish.api.CollectorOrder - 0.5 + order = pyblish.api.CollectorOrder - 0.49 label = "Current File" hosts = ["aftereffects"] diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_extension_version.py b/openpype/hosts/aftereffects/plugins/publish/collect_extension_version.py new file mode 100644 index 0000000000..4e74252043 --- /dev/null +++ b/openpype/hosts/aftereffects/plugins/publish/collect_extension_version.py @@ -0,0 +1,56 @@ +import os +import re +import pyblish.api + +from avalon import aftereffects + + +class CollectExtensionVersion(pyblish.api.ContextPlugin): + """ Pulls and compares version of installed extension. + + It is recommended to use same extension as in provided Openpype code. + + Please use Anastasiy’s Extension Manager or ZXPInstaller to update + extension in case of an error. 
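+
+    The expected version is read from the extension manifest
+    (`extension/CSXS/manifest.xml`), e.g. the attribute
+    `ExtensionBundleVersion="1.0.1"` (version value illustrative).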
+ + You can locate extension.zxp in your installed Openpype code in + `repos/avalon-core/avalon/aftereffects` + """ + # This technically should be a validator, but other collectors might be + # impacted with usage of obsolete extension, so collector that runs first + # was chosen + order = pyblish.api.CollectorOrder - 0.5 + label = "Collect extension version" + hosts = ["aftereffects"] + + optional = True + active = True + + def process(self, context): + installed_version = aftereffects.stub().get_extension_version() + + if not installed_version: + raise ValueError("Unknown version, probably old extension") + + manifest_url = os.path.join(os.path.dirname(aftereffects.__file__), + "extension", "CSXS", "manifest.xml") + + if not os.path.exists(manifest_url): + self.log.debug("Unable to locate extension manifest, not checking") + return + + expected_version = None + with open(manifest_url) as fp: + content = fp.read() + found = re.findall(r'(ExtensionBundleVersion=")([0-9\.]+)(")', + content) + if found: + expected_version = found[0][1] + + if expected_version != installed_version: + msg = ( + "Expected version '{}' found '{}'\n Please update" + " your installed extension, it might not work properly." + ).format(expected_version, installed_version) + + raise ValueError(msg) diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index 37337e7fee..b36ab24bde 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -19,10 +19,9 @@ class ExtractLocalRender(openpype.api.Extractor): staging_dir = instance.data["stagingDir"] self.log.info("staging_dir::{}".format(staging_dir)) - stub.render(staging_dir) - # pull file name from Render Queue Output module render_q = stub.get_render_info() + stub.render(staging_dir) if not render_q: raise ValueError("No file extension set in Render Queue") _, ext = os.path.splitext(os.path.basename(render_q.file_name)) diff --git a/openpype/hosts/blender/hooks/pre_pyside_install.py b/openpype/hosts/blender/hooks/pre_pyside_install.py index 6d253300d9..e2a419c8ef 100644 --- a/openpype/hosts/blender/hooks/pre_pyside_install.py +++ b/openpype/hosts/blender/hooks/pre_pyside_install.py @@ -32,7 +32,7 @@ class InstallPySideToBlender(PreLaunchHook): def inner_execute(self): # Get blender's python directory - version_regex = re.compile(r"^2\.[0-9]{2}$") + version_regex = re.compile(r"^[2-3]\.[0-9]+$") executable = self.launch_context.executable.executable_path if os.path.basename(executable).lower() != "blender.exe": diff --git a/openpype/hosts/flame/__init__.py b/openpype/hosts/flame/__init__.py index 48e8dc86c9..da28170679 100644 --- a/openpype/hosts/flame/__init__.py +++ b/openpype/hosts/flame/__init__.py @@ -19,7 +19,7 @@ from .api.lib import ( maintain_current_timeline, get_project_manager, get_current_project, - get_current_timeline, + get_current_sequence, create_bin, ) @@ -51,6 +51,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") app_framework = None apps = [] +selection = None __all__ = [ @@ -65,6 +66,7 @@ __all__ = [ "app_framework", "apps", + "selection", # pipeline "install", @@ -86,7 +88,7 @@ __all__ = [ "maintain_current_timeline", "get_project_manager", "get_current_project", - "get_current_timeline", + "get_current_sequence", "create_bin", # menu diff --git a/openpype/hosts/flame/api/lib.py b/openpype/hosts/flame/api/lib.py index 89e020b329..96bffab774 
100644 --- a/openpype/hosts/flame/api/lib.py +++ b/openpype/hosts/flame/api/lib.py @@ -220,10 +220,10 @@ def maintain_current_timeline(to_timeline, from_timeline=None): timeline2 >>> with maintain_current_timeline(to_timeline): - ... print(get_current_timeline().GetName()) + ... print(get_current_sequence().GetName()) timeline2 - >>> print(get_current_timeline().GetName()) + >>> print(get_current_sequence().GetName()) timeline1 """ # todo: this is still Resolve's implementation @@ -256,9 +256,28 @@ def get_current_project(): return -def get_current_timeline(new=False): - # TODO: get_current_timeline - return +def get_current_sequence(selection): + import flame + + def segment_to_sequence(_segment): + track = _segment.parent + version = track.parent + return version.parent + + process_timeline = None + + if len(selection) == 1: + if isinstance(selection[0], flame.PySequence): + process_timeline = selection[0] + if isinstance(selection[0], flame.PySegment): + process_timeline = segment_to_sequence(selection[0]) + else: + for segment in selection: + if isinstance(segment, flame.PySegment): + process_timeline = segment_to_sequence(segment) + break + + return process_timeline def create_bin(name, root=None): @@ -272,3 +291,46 @@ def rescan_hooks(): flame.execute_shortcut('Rescan Python Hooks') except Exception: pass + + +def get_metadata(project_name, _log=None): + from adsk.libwiretapPythonClientAPI import ( + WireTapClient, + WireTapServerHandle, + WireTapNodeHandle, + WireTapStr + ) + + class GetProjectColorPolicy(object): + def __init__(self, host_name=None, _log=None): + # Create a connection to the Backburner manager using the Wiretap + # python API. + # + self.log = _log or log + self.host_name = host_name or "localhost" + self._wiretap_client = WireTapClient() + if not self._wiretap_client.init(): + raise Exception("Could not initialize Wiretap Client") + self._server = WireTapServerHandle( + "{}:IFFFS".format(self.host_name)) + + def process(self, project_name): + policy_node_handle = WireTapNodeHandle( + self._server, + "/projects/{}/syncolor/policy".format(project_name) + ) + self.log.info(policy_node_handle) + + policy = WireTapStr() + if not policy_node_handle.getNodeTypeStr(policy): + self.log.warning( + "Could not retrieve policy of '%s': %s" % ( + policy_node_handle.getNodeId().id(), + policy_node_handle.lastError() + ) + ) + + return policy.c_str() + + policy_wiretap = GetProjectColorPolicy(_log=_log) + return policy_wiretap.process(project_name) diff --git a/openpype/hosts/flame/api/menu.py b/openpype/hosts/flame/api/menu.py index b4f1728acf..fef6dbfa35 100644 --- a/openpype/hosts/flame/api/menu.py +++ b/openpype/hosts/flame/api/menu.py @@ -4,7 +4,6 @@ from copy import deepcopy from openpype.tools.utils.host_tools import HostToolsHelper - menu_group_name = 'OpenPype' default_flame_export_presets = { @@ -26,6 +25,13 @@ default_flame_export_presets = { } +def callback_selection(selection, function): + import openpype.hosts.flame as opflame + opflame.selection = selection + print(opflame.selection) + function() + + class _FlameMenuApp(object): def __init__(self, framework): self.name = self.__class__.__name__ @@ -97,9 +103,6 @@ class FlameMenuProjectConnect(_FlameMenuApp): if not self.flame: return [] - flame_project_name = self.flame_project_name - self.log.info("______ {} ______".format(flame_project_name)) - menu = deepcopy(self.menu) menu['actions'].append({ @@ -108,11 +111,13 @@ class FlameMenuProjectConnect(_FlameMenuApp): }) menu['actions'].append({ "name": "Create ...", - 
"execute": lambda x: self.tools_helper.show_creator() + "execute": lambda x: callback_selection( + x, self.tools_helper.show_creator) }) menu['actions'].append({ "name": "Publish ...", - "execute": lambda x: self.tools_helper.show_publish() + "execute": lambda x: callback_selection( + x, self.tools_helper.show_publish) }) menu['actions'].append({ "name": "Load ...", @@ -128,9 +133,6 @@ class FlameMenuProjectConnect(_FlameMenuApp): }) return menu - def get_projects(self, *args, **kwargs): - pass - def refresh(self, *args, **kwargs): self.rescan() @@ -165,18 +167,17 @@ class FlameMenuTimeline(_FlameMenuApp): if not self.flame: return [] - flame_project_name = self.flame_project_name - self.log.info("______ {} ______".format(flame_project_name)) - menu = deepcopy(self.menu) menu['actions'].append({ "name": "Create ...", - "execute": lambda x: self.tools_helper.show_creator() + "execute": lambda x: callback_selection( + x, self.tools_helper.show_creator) }) menu['actions'].append({ "name": "Publish ...", - "execute": lambda x: self.tools_helper.show_publish() + "execute": lambda x: callback_selection( + x, self.tools_helper.show_publish) }) menu['actions'].append({ "name": "Load ...", @@ -189,9 +190,6 @@ class FlameMenuTimeline(_FlameMenuApp): return menu - def get_projects(self, *args, **kwargs): - pass - def refresh(self, *args, **kwargs): self.rescan() diff --git a/openpype/hosts/flame/scripts/wiretap_com.py b/openpype/hosts/flame/api/scripts/wiretap_com.py similarity index 100% rename from openpype/hosts/flame/scripts/wiretap_com.py rename to openpype/hosts/flame/api/scripts/wiretap_com.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_thumbnails_jpg.xml b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_thumbnails_jpg.xml similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_thumbnails_jpg.xml rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_thumbnails_jpg.xml diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_video_h264.xml b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_video_h264.xml similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_video_h264.xml rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/export_preset/openpype_seg_video_h264.xml diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/__init__.py b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/__init__.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/__init__.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/__init__.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/app_utils.py b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/app_utils.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/app_utils.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/app_utils.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/ftrack_lib.py 
b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/ftrack_lib.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/ftrack_lib.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/ftrack_lib.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/panel_app.py b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/panel_app.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/panel_app.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/panel_app.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/uiwidgets.py b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/uiwidgets.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/modules/uiwidgets.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/modules/uiwidgets.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/openpype_flame_to_ftrack.py b/openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/openpype_flame_to_ftrack.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_flame_to_ftrack/openpype_flame_to_ftrack.py rename to openpype/hosts/flame/api/utility_scripts/openpype_flame_to_ftrack/openpype_flame_to_ftrack.py diff --git a/openpype/hosts/flame/utility_scripts/openpype_in_flame.py b/openpype/hosts/flame/api/utility_scripts/openpype_in_flame.py similarity index 100% rename from openpype/hosts/flame/utility_scripts/openpype_in_flame.py rename to openpype/hosts/flame/api/utility_scripts/openpype_in_flame.py diff --git a/openpype/hosts/flame/api/utils.py b/openpype/hosts/flame/api/utils.py index a750046362..201c7d2fac 100644 --- a/openpype/hosts/flame/api/utils.py +++ b/openpype/hosts/flame/api/utils.py @@ -27,6 +27,7 @@ def _sync_utility_scripts(env=None): fsd_paths = [os.path.join( HOST_DIR, + "api", "utility_scripts" )] diff --git a/openpype/hosts/flame/hooks/pre_flame_setup.py b/openpype/hosts/flame/hooks/pre_flame_setup.py index 718c4b574c..159fb37410 100644 --- a/openpype/hosts/flame/hooks/pre_flame_setup.py +++ b/openpype/hosts/flame/hooks/pre_flame_setup.py @@ -22,7 +22,7 @@ class FlamePrelaunch(PreLaunchHook): flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7" wtc_script_path = os.path.join( - opflame.HOST_DIR, "scripts", "wiretap_com.py") + opflame.HOST_DIR, "api", "scripts", "wiretap_com.py") def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) diff --git a/openpype/modules/default_modules/log_viewer/tray/__init__.py b/openpype/hosts/flame/otio/__init__.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/tray/__init__.py rename to openpype/hosts/flame/otio/__init__.py diff --git a/openpype/hosts/flame/otio/flame_export.py b/openpype/hosts/flame/otio/flame_export.py new file mode 100644 index 0000000000..aea1f387e8 --- /dev/null +++ b/openpype/hosts/flame/otio/flame_export.py @@ -0,0 +1,657 @@ +""" compatibility OpenTimelineIO 0.12.0 and newer +""" + +import os +import re +import json +import logging +import opentimelineio as otio +from . 
import utils + +import flame +from pprint import pformat + +reload(utils) # noqa + +log = logging.getLogger(__name__) + + +TRACK_TYPES = { + "video": otio.schema.TrackKind.Video, + "audio": otio.schema.TrackKind.Audio +} +MARKERS_COLOR_MAP = { + (1.0, 0.0, 0.0): otio.schema.MarkerColor.RED, + (1.0, 0.5, 0.0): otio.schema.MarkerColor.ORANGE, + (1.0, 1.0, 0.0): otio.schema.MarkerColor.YELLOW, + (1.0, 0.5, 1.0): otio.schema.MarkerColor.PINK, + (1.0, 1.0, 1.0): otio.schema.MarkerColor.WHITE, + (0.0, 1.0, 0.0): otio.schema.MarkerColor.GREEN, + (0.0, 1.0, 1.0): otio.schema.MarkerColor.CYAN, + (0.0, 0.0, 1.0): otio.schema.MarkerColor.BLUE, + (0.5, 0.0, 0.5): otio.schema.MarkerColor.PURPLE, + (0.5, 0.0, 1.0): otio.schema.MarkerColor.MAGENTA, + (0.0, 0.0, 0.0): otio.schema.MarkerColor.BLACK +} +MARKERS_INCLUDE = True + + +class CTX: + _fps = None + _tl_start_frame = None + project = None + clips = None + + @classmethod + def set_fps(cls, new_fps): + if not isinstance(new_fps, float): + raise TypeError("Invalid fps type {}".format(type(new_fps))) + if cls._fps != new_fps: + cls._fps = new_fps + + @classmethod + def get_fps(cls): + return cls._fps + + @classmethod + def set_tl_start_frame(cls, number): + if not isinstance(number, int): + raise TypeError("Invalid timeline start frame type {}".format( + type(number))) + if cls._tl_start_frame != number: + cls._tl_start_frame = number + + @classmethod + def get_tl_start_frame(cls): + return cls._tl_start_frame + + +def flatten(_list): + for item in _list: + if isinstance(item, (list, tuple)): + for sub_item in flatten(item): + yield sub_item + else: + yield item + + +def get_current_flame_project(): + project = flame.project.current_project + return project + + +def create_otio_rational_time(frame, fps): + return otio.opentime.RationalTime( + float(frame), + float(fps) + ) + + +def create_otio_time_range(start_frame, frame_duration, fps): + return otio.opentime.TimeRange( + start_time=create_otio_rational_time(start_frame, fps), + duration=create_otio_rational_time(frame_duration, fps) + ) + + +def _get_metadata(item): + if hasattr(item, 'metadata'): + if not item.metadata: + return {} + return {key: value for key, value in dict(item.metadata)} + return {} + + +def create_time_effects(otio_clip, item): + # todo #2426: add retiming effects to export + # get all subtrack items + # subTrackItems = flatten(track_item.parent().subTrackItems()) + # speed = track_item.playbackSpeed() + + # otio_effect = None + # # retime on track item + # if speed != 1.: + # # make effect + # otio_effect = otio.schema.LinearTimeWarp() + # otio_effect.name = "Speed" + # otio_effect.time_scalar = speed + # otio_effect.metadata = {} + + # # freeze frame effect + # if speed == 0.: + # otio_effect = otio.schema.FreezeFrame() + # otio_effect.name = "FreezeFrame" + # otio_effect.metadata = {} + + # if otio_effect: + # # add otio effect to clip effects + # otio_clip.effects.append(otio_effect) + + # # loop trought and get all Timewarps + # for effect in subTrackItems: + # if ((track_item not in effect.linkedItems()) + # and (len(effect.linkedItems()) > 0)): + # continue + # # avoid all effect which are not TimeWarp and disabled + # if "TimeWarp" not in effect.name(): + # continue + + # if not effect.isEnabled(): + # continue + + # node = effect.node() + # name = node["name"].value() + + # # solve effect class as effect name + # _name = effect.name() + # if "_" in _name: + # effect_name = re.sub(r"(?:_)[_0-9]+", "", _name) # more numbers + # else: + # effect_name = re.sub(r"\d+", "", 
_name) # one number + + # metadata = {} + # # add knob to metadata + # for knob in ["lookup", "length"]: + # value = node[knob].value() + # animated = node[knob].isAnimated() + # if animated: + # value = [ + # ((node[knob].getValueAt(i)) - i) + # for i in range( + # track_item.timelineIn(), + # track_item.timelineOut() + 1) + # ] + + # metadata[knob] = value + + # # make effect + # otio_effect = otio.schema.TimeEffect() + # otio_effect.name = name + # otio_effect.effect_name = effect_name + # otio_effect.metadata = metadata + + # # add otio effect to clip effects + # otio_clip.effects.append(otio_effect) + pass + + +def _get_marker_color(flame_colour): + # clamp colors to closes half numbers + _flame_colour = [ + (lambda x: round(x * 2) / 2)(c) + for c in flame_colour] + + for color, otio_color_type in MARKERS_COLOR_MAP.items(): + if _flame_colour == list(color): + return otio_color_type + + return otio.schema.MarkerColor.RED + + +def _get_flame_markers(item): + output_markers = [] + + time_in = item.record_in.relative_frame + + for marker in item.markers: + log.debug(marker) + start_frame = marker.location.get_value().relative_frame + + start_frame = (start_frame - time_in) + 1 + + marker_data = { + "name": marker.name.get_value(), + "duration": marker.duration.get_value().relative_frame, + "comment": marker.comment.get_value(), + "start_frame": start_frame, + "colour": marker.colour.get_value() + } + + output_markers.append(marker_data) + + return output_markers + + +def create_otio_markers(otio_item, item): + markers = _get_flame_markers(item) + for marker in markers: + frame_rate = CTX.get_fps() + + marked_range = otio.opentime.TimeRange( + start_time=otio.opentime.RationalTime( + marker["start_frame"], + frame_rate + ), + duration=otio.opentime.RationalTime( + marker["duration"], + frame_rate + ) + ) + + # testing the comment if it is not containing json string + check_if_json = re.findall( + re.compile(r"[{:}]"), + marker["comment"] + ) + + # to identify this as json, at least 3 items in the list should + # be present ["{", ":", "}"] + metadata = {} + if len(check_if_json) >= 3: + # this is json string + try: + # capture exceptions which are related to strings only + metadata.update( + json.loads(marker["comment"]) + ) + except ValueError as msg: + log.error("Marker json conversion: {}".format(msg)) + else: + metadata["comment"] = marker["comment"] + + otio_marker = otio.schema.Marker( + name=marker["name"], + color=_get_marker_color( + marker["colour"]), + marked_range=marked_range, + metadata=metadata + ) + + otio_item.markers.append(otio_marker) + + +def create_otio_reference(clip_data): + metadata = _get_metadata(clip_data) + + # get file info for path and start frame + frame_start = 0 + fps = CTX.get_fps() + + path = clip_data["fpath"] + + reel_clip = None + match_reel_clip = [ + clip for clip in CTX.clips + if clip["fpath"] == path + ] + if match_reel_clip: + reel_clip = match_reel_clip.pop() + fps = reel_clip["fps"] + + file_name = os.path.basename(path) + file_head, extension = os.path.splitext(file_name) + + # get padding and other file infos + log.debug("_ path: {}".format(path)) + + is_sequence = padding = utils.get_frame_from_path(path) + if is_sequence: + number = utils.get_frame_from_path(path) + file_head = file_name.split(number)[:-1] + frame_start = int(number) + + frame_duration = clip_data["source_duration"] + + if is_sequence: + metadata.update({ + "isSequence": True, + "padding": padding + }) + + otio_ex_ref_item = None + + if is_sequence: + # if it is file 
sequence try to create `ImageSequenceReference` + # the OTIO might not be compatible so return nothing and do it old way + try: + dirname = os.path.dirname(path) + otio_ex_ref_item = otio.schema.ImageSequenceReference( + target_url_base=dirname + os.sep, + name_prefix=file_head, + name_suffix=extension, + start_frame=frame_start, + frame_zero_padding=padding, + rate=fps, + available_range=create_otio_time_range( + frame_start, + frame_duration, + fps + ) + ) + except AttributeError: + pass + + if not otio_ex_ref_item: + reformat_path = utils.get_reformated_path(path, padded=False) + # in case old OTIO or video file create `ExternalReference` + otio_ex_ref_item = otio.schema.ExternalReference( + target_url=reformat_path, + available_range=create_otio_time_range( + frame_start, + frame_duration, + fps + ) + ) + + # add metadata to otio item + add_otio_metadata(otio_ex_ref_item, clip_data, **metadata) + + return otio_ex_ref_item + + +def create_otio_clip(clip_data): + segment = clip_data["PySegment"] + + # create media reference + media_reference = create_otio_reference(clip_data) + + # calculate source in + first_frame = utils.get_frame_from_path(clip_data["fpath"]) or 0 + source_in = int(clip_data["source_in"]) - int(first_frame) + + # creatae source range + source_range = create_otio_time_range( + source_in, + clip_data["record_duration"], + CTX.get_fps() + ) + + otio_clip = otio.schema.Clip( + name=clip_data["segment_name"], + source_range=source_range, + media_reference=media_reference + ) + + # Add markers + if MARKERS_INCLUDE: + create_otio_markers(otio_clip, segment) + + return otio_clip + + +def create_otio_gap(gap_start, clip_start, tl_start_frame, fps): + return otio.schema.Gap( + source_range=create_otio_time_range( + gap_start, + (clip_start - tl_start_frame) - gap_start, + fps + ) + ) + + +def get_clips_in_reels(project): + output_clips = [] + project_desktop = project.current_workspace.desktop + + for reel_group in project_desktop.reel_groups: + for reel in reel_group.reels: + for clip in reel.clips: + clip_data = { + "PyClip": clip, + "fps": float(str(clip.frame_rate)[:-4]) + } + + attrs = [ + "name", "width", "height", + "ratio", "sample_rate", "bit_depth" + ] + + for attr in attrs: + val = getattr(clip, attr) + clip_data[attr] = val + + version = clip.versions[-1] + track = version.tracks[-1] + for segment in track.segments: + segment_data = _get_segment_attributes(segment) + clip_data.update(segment_data) + + output_clips.append(clip_data) + + return output_clips + + +def _get_colourspace_policy(): + + output = {} + # get policies project path + policy_dir = "/opt/Autodesk/project/{}/synColor/policy".format( + CTX.project.name + ) + log.debug(policy_dir) + policy_fp = os.path.join(policy_dir, "policy.cfg") + + if not os.path.exists(policy_fp): + return output + + with open(policy_fp) as file: + dict_conf = dict(line.strip().split(' = ', 1) for line in file) + output.update( + {"openpype.flame.{}".format(k): v for k, v in dict_conf.items()} + ) + return output + + +def _create_otio_timeline(sequence): + + metadata = _get_metadata(sequence) + + # find colour policy files and add them to metadata + colorspace_policy = _get_colourspace_policy() + metadata.update(colorspace_policy) + + metadata.update({ + "openpype.timeline.width": int(sequence.width), + "openpype.timeline.height": int(sequence.height), + "openpype.timeline.pixelAspect": 1 + }) + + rt_start_time = create_otio_rational_time( + CTX.get_tl_start_frame(), CTX.get_fps()) + + return otio.schema.Timeline( + 
name=str(sequence.name)[1:-1], + global_start_time=rt_start_time, + metadata=metadata + ) + + +def create_otio_track(track_type, track_name): + return otio.schema.Track( + name=track_name, + kind=TRACK_TYPES[track_type] + ) + + +def add_otio_gap(clip_data, otio_track, prev_out): + gap_length = clip_data["record_in"] - prev_out + if prev_out != 0: + gap_length -= 1 + + gap = otio.opentime.TimeRange( + duration=otio.opentime.RationalTime( + gap_length, + CTX.get_fps() + ) + ) + otio_gap = otio.schema.Gap(source_range=gap) + otio_track.append(otio_gap) + + +def add_otio_metadata(otio_item, item, **kwargs): + metadata = _get_metadata(item) + + # add additional metadata from kwargs + if kwargs: + metadata.update(kwargs) + + # add metadata to otio item metadata + for key, value in metadata.items(): + otio_item.metadata.update({key: value}) + + +def _get_shot_tokens_values(clip, tokens): + old_value = None + output = {} + + if not clip.shot_name: + return output + + old_value = clip.shot_name.get_value() + + for token in tokens: + clip.shot_name.set_value(token) + _key = re.sub("[ <>]", "", token) + + try: + output[_key] = int(clip.shot_name.get_value()) + except ValueError: + output[_key] = clip.shot_name.get_value() + + clip.shot_name.set_value(old_value) + + return output + + +def _get_segment_attributes(segment): + # log.debug(dir(segment)) + + if str(segment.name)[1:-1] == "": + return None + + # Add timeline segment to tree + clip_data = { + "segment_name": segment.name.get_value(), + "segment_comment": segment.comment.get_value(), + "tape_name": segment.tape_name, + "source_name": segment.source_name, + "fpath": segment.file_path, + "PySegment": segment + } + + # add all available shot tokens + shot_tokens = _get_shot_tokens_values(segment, [ + "", "", "", "", + ]) + clip_data.update(shot_tokens) + + # populate shot source metadata + segment_attrs = [ + "record_duration", "record_in", "record_out", + "source_duration", "source_in", "source_out" + ] + segment_attrs_data = {} + for attr in segment_attrs: + if not hasattr(segment, attr): + continue + _value = getattr(segment, attr) + segment_attrs_data[attr] = str(_value).replace("+", ":") + + if attr in ["record_in", "record_out"]: + clip_data[attr] = _value.relative_frame + else: + clip_data[attr] = _value.frame + + clip_data["segment_timecodes"] = segment_attrs_data + + return clip_data + + +def create_otio_timeline(sequence): + log.info(dir(sequence)) + log.info(sequence.attributes) + + CTX.project = get_current_flame_project() + CTX.clips = get_clips_in_reels(CTX.project) + + log.debug(pformat( + CTX.clips + )) + + # get current timeline + CTX.set_fps( + float(str(sequence.frame_rate)[:-4])) + + tl_start_frame = utils.timecode_to_frames( + str(sequence.start_time).replace("+", ":"), + CTX.get_fps() + ) + CTX.set_tl_start_frame(tl_start_frame) + + # convert timeline to otio + otio_timeline = _create_otio_timeline(sequence) + + # create otio tracks and clips + for ver in sequence.versions: + for track in ver.tracks: + if len(track.segments) == 0 and track.hidden: + return None + + # convert track to otio + otio_track = create_otio_track( + "video", str(track.name)[1:-1]) + + all_segments = [] + for segment in track.segments: + clip_data = _get_segment_attributes(segment) + if not clip_data: + continue + all_segments.append(clip_data) + + segments_ordered = { + itemindex: clip_data + for itemindex, clip_data in enumerate( + all_segments) + } + log.debug("_ segments_ordered: {}".format( + pformat(segments_ordered) + )) + if not 
segments_ordered: + continue + + for itemindex, segment_data in segments_ordered.items(): + log.debug("_ itemindex: {}".format(itemindex)) + + # Add Gap if needed + if itemindex == 0: + # if it is first track item at track then add + # it to previouse item + prev_item = segment_data + + else: + # get previouse item + prev_item = segments_ordered[itemindex - 1] + + log.debug("_ segment_data: {}".format(segment_data)) + + # calculate clip frame range difference from each other + clip_diff = segment_data["record_in"] - prev_item["record_out"] + + # add gap if first track item is not starting + # at first timeline frame + if itemindex == 0 and segment_data["record_in"] > 0: + add_otio_gap(segment_data, otio_track, 0) + + # or add gap if following track items are having + # frame range differences from each other + elif itemindex and clip_diff != 1: + add_otio_gap( + segment_data, otio_track, prev_item["record_out"]) + + # create otio clip and add it to track + otio_clip = create_otio_clip(segment_data) + otio_track.append(otio_clip) + + log.debug("_ otio_clip: {}".format(otio_clip)) + + # create otio marker + # create otio metadata + + # add track to otio timeline + otio_timeline.tracks.append(otio_track) + + return otio_timeline + + +def write_to_file(otio_timeline, path): + otio.adapters.write_to_file(otio_timeline, path) diff --git a/openpype/hosts/flame/otio/utils.py b/openpype/hosts/flame/otio/utils.py new file mode 100644 index 0000000000..229946343b --- /dev/null +++ b/openpype/hosts/flame/otio/utils.py @@ -0,0 +1,95 @@ +import re +import opentimelineio as otio +import logging +log = logging.getLogger(__name__) + + +def timecode_to_frames(timecode, framerate): + rt = otio.opentime.from_timecode(timecode, framerate) + return int(otio.opentime.to_frames(rt)) + + +def frames_to_timecode(frames, framerate): + rt = otio.opentime.from_frames(frames, framerate) + return otio.opentime.to_timecode(rt) + + +def frames_to_seconds(frames, framerate): + rt = otio.opentime.from_frames(frames, framerate) + return otio.opentime.to_seconds(rt) + + +def get_reformated_path(path, padded=True): + """ + Return fixed python expression path + + Args: + path (str): path url or simple file name + + Returns: + type: string with reformated path + + Example: + get_reformated_path("plate.1001.exr") > plate.%04d.exr + + """ + padding = get_padding_from_path(path) + found = get_frame_from_path(path) + + if not found: + log.info("Path is not sequence: {}".format(path)) + return path + + if padded: + path = path.replace(found, "%0{}d".format(padding)) + else: + path = path.replace(found, "%d") + + return path + + +def get_padding_from_path(path): + """ + Return padding number from Flame path style + + Args: + path (str): path url or simple file name + + Returns: + int: padding number + + Example: + get_padding_from_path("plate.0001.exr") > 4 + + """ + found = get_frame_from_path(path) + + if found: + return len(found) + else: + return None + + +def get_frame_from_path(path): + """ + Return sequence number from Flame path style + + Args: + path (str): path url or simple file name + + Returns: + int: sequence frame number + + Example: + def get_frame_from_path(path): + ("plate.0001.exr") > 0001 + + """ + frame_pattern = re.compile(r"[._](\d+)[.]") + + found = re.findall(frame_pattern, path) + + if found: + return found.pop() + else: + return None diff --git a/openpype/hosts/flame/plugins/publish/collect_test_selection.py b/openpype/hosts/flame/plugins/publish/collect_test_selection.py new file mode 100644 index 
0000000000..9a80a92414 --- /dev/null +++ b/openpype/hosts/flame/plugins/publish/collect_test_selection.py @@ -0,0 +1,26 @@ +import pyblish.api +import openpype.hosts.flame as opflame +from openpype.hosts.flame.otio import flame_export as otio_export +from openpype.hosts.flame.api import lib +from pprint import pformat +reload(lib) # noqa +reload(otio_export) # noqa + + +@pyblish.api.log +class CollectTestSelection(pyblish.api.ContextPlugin): + """testing selection sharing + """ + + order = pyblish.api.CollectorOrder + label = "test selection" + hosts = ["flame"] + + def process(self, context): + self.log.info(opflame.selection) + + sequence = lib.get_current_sequence(opflame.selection) + + otio_timeline = otio_export.create_otio_timeline(sequence) + + self.log.info(pformat(otio_timeline)) diff --git a/openpype/hosts/houdini/plugins/create/create_hda.py b/openpype/hosts/houdini/plugins/create/create_hda.py index 2af1e4a257..459da8bfdf 100644 --- a/openpype/hosts/houdini/plugins/create/create_hda.py +++ b/openpype/hosts/houdini/plugins/create/create_hda.py @@ -45,19 +45,14 @@ class CreateHDA(plugin.Creator): if (self.options or {}).get("useSelection") and self.nodes: # if we have `use selection` enabled and we have some # selected nodes ... - to_hda = self.nodes[0] - if len(self.nodes) > 1: - # if there is more then one node, create subnet first - subnet = out.createNode( - "subnet", node_name="{}_subnet".format(self.name)) - to_hda = subnet - else: - # in case of no selection, just create subnet node - subnet = out.createNode( - "subnet", node_name="{}_subnet".format(self.name)) + subnet = out.collapseIntoSubnet( + self.nodes, + subnet_name="{}_subnet".format(self.name)) subnet.moveToGoodPosition() to_hda = subnet - + else: + to_hda = out.createNode( + "subnet", node_name="{}_subnet".format(self.name)) if not to_hda.type().definition(): # if node type has not its definition, it is not user # created hda. We test if hda can be created from the node. @@ -69,13 +64,12 @@ class CreateHDA(plugin.Creator): name=subset_name, hda_file_name="$HIP/{}.hda".format(subset_name) ) - hou.moveNodesTo(self.nodes, hda_node) hda_node.layoutChildren() + elif self._check_existing(subset_name): + raise plugin.OpenPypeCreatorError( + ("subset {} is already published with different HDA" + "definition.").format(subset_name)) else: - if self._check_existing(subset_name): - raise plugin.OpenPypeCreatorError( - ("subset {} is already published with different HDA" - "definition.").format(subset_name)) hda_node = to_hda hda_node.setName(subset_name) diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index fdad0e0989..a5f03cd576 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -100,6 +100,13 @@ class ReferenceLoader(api.Loader): "offset", label="Position Offset", help="Offset loaded models for easier selection." + ), + qargparse.Boolean( + "attach_to_root", + label="Group imported asset", + default=True, + help="Should a group be created to encapsulate" + " imported representation ?" 
) ] diff --git a/openpype/hosts/maya/plugins/load/load_image_plane.py b/openpype/hosts/maya/plugins/load/load_image_plane.py index f2640dc2eb..eea5844e8b 100644 --- a/openpype/hosts/maya/plugins/load/load_image_plane.py +++ b/openpype/hosts/maya/plugins/load/load_image_plane.py @@ -13,10 +13,14 @@ class CameraWindow(QtWidgets.QDialog): self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint) self.camera = None + self.static_image_plane = False + self.show_in_all_views = False self.widgets = { "label": QtWidgets.QLabel("Select camera for image plane."), "list": QtWidgets.QListWidget(), + "staticImagePlane": QtWidgets.QCheckBox(), + "showInAllViews": QtWidgets.QCheckBox(), "warning": QtWidgets.QLabel("No cameras selected!"), "buttons": QtWidgets.QWidget(), "okButton": QtWidgets.QPushButton("Ok"), @@ -31,6 +35,9 @@ class CameraWindow(QtWidgets.QDialog): for camera in cameras: self.widgets["list"].addItem(camera) + self.widgets["staticImagePlane"].setText("Make Image Plane Static") + self.widgets["showInAllViews"].setText("Show Image Plane in All Views") + # Build buttons. layout = QtWidgets.QHBoxLayout(self.widgets["buttons"]) layout.addWidget(self.widgets["okButton"]) @@ -40,6 +47,8 @@ class CameraWindow(QtWidgets.QDialog): layout = QtWidgets.QVBoxLayout(self) layout.addWidget(self.widgets["label"]) layout.addWidget(self.widgets["list"]) + layout.addWidget(self.widgets["staticImagePlane"]) + layout.addWidget(self.widgets["showInAllViews"]) layout.addWidget(self.widgets["buttons"]) layout.addWidget(self.widgets["warning"]) @@ -54,6 +63,8 @@ class CameraWindow(QtWidgets.QDialog): if self.camera is None: self.widgets["warning"].setVisible(True) return + self.show_in_all_views = self.widgets["showInAllViews"].isChecked() + self.static_image_plane = self.widgets["staticImagePlane"].isChecked() self.close() @@ -65,15 +76,15 @@ class CameraWindow(QtWidgets.QDialog): class ImagePlaneLoader(api.Loader): """Specific loader of plate for image planes on selected camera.""" - families = ["plate", "render"] + families = ["image", "plate", "render"] label = "Load imagePlane." representations = ["mov", "exr", "preview", "png"] icon = "image" color = "orange" - def load(self, context, name, namespace, data): + def load(self, context, name, namespace, data, options=None): import pymel.core as pm - + new_nodes = [] image_plane_depth = 1000 asset = context['asset']['name'] @@ -85,17 +96,23 @@ class ImagePlaneLoader(api.Loader): # Get camera from user selection. 
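+        # When a caller provides `data`, its "camera", "static_image_plane"
+        # and "in_all_views" keys are used directly and the camera selection
+        # dialog below is skipped.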
camera = None - default_cameras = [ - "frontShape", "perspShape", "sideShape", "topShape" - ] - cameras = [ - x for x in pm.ls(type="camera") if x.name() not in default_cameras - ] - camera_names = {x.getParent().name(): x for x in cameras} - camera_names["Create new camera."] = "create_camera" - window = CameraWindow(camera_names.keys()) - window.exec_() - camera = camera_names[window.camera] + is_static_image_plane = None + is_in_all_views = None + if data: + camera = pm.PyNode(data.get("camera")) + is_static_image_plane = data.get("static_image_plane") + is_in_all_views = data.get("in_all_views") + + if not camera: + cameras = pm.ls(type="camera") + camera_names = {x.getParent().name(): x for x in cameras} + camera_names["Create new camera."] = "create_camera" + window = CameraWindow(camera_names.keys()) + window.exec_() + camera = camera_names[window.camera] + + is_static_image_plane = window.static_image_plane + is_in_all_views = window.show_in_all_views if camera == "create_camera": camera = pm.createNode("camera") @@ -111,13 +128,14 @@ class ImagePlaneLoader(api.Loader): # Create image plane image_plane_transform, image_plane_shape = pm.imagePlane( - camera=camera, showInAllViews=False + fileName=context["representation"]["data"]["path"], + camera=camera, showInAllViews=is_in_all_views ) image_plane_shape.depth.set(image_plane_depth) - image_plane_shape.imageName.set( - context["representation"]["data"]["path"] - ) + if is_static_image_plane: + image_plane_shape.detach() + image_plane_transform.setRotation(camera.getRotation()) start_frame = pm.playbackOptions(q=True, min=True) end_frame = pm.playbackOptions(q=True, max=True) diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index cfe8149218..dd64fd0a16 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -40,85 +40,88 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): except ValueError: family = "model" + group_name = "{}:_GRP".format(namespace) + # True by default to keep legacy behaviours + attach_to_root = options.get("attach_to_root", True) + with maya.maintained_selection(): - groupName = "{}:_GRP".format(namespace) cmds.loadPlugin("AbcImport.mll", quiet=True) nodes = cmds.file(self.fname, namespace=namespace, sharedReferenceFile=False, - groupReference=True, - groupName=groupName, reference=True, - returnNewNodes=True) - - # namespace = cmds.referenceQuery(nodes[0], namespace=True) + returnNewNodes=True, + groupReference=attach_to_root, + groupName=group_name) shapes = cmds.ls(nodes, shapes=True, long=True) - newNodes = (list(set(nodes) - set(shapes))) + new_nodes = (list(set(nodes) - set(shapes))) current_namespace = pm.namespaceInfo(currentNamespace=True) if current_namespace != ":": - groupName = current_namespace + ":" + groupName + group_name = current_namespace + ":" + group_name - groupNode = pm.PyNode(groupName) - roots = set() + self[:] = new_nodes - for node in newNodes: - try: - roots.add(pm.PyNode(node).getAllParents()[-2]) - except: # noqa: E722 - pass + if attach_to_root: + group_node = pm.PyNode(group_name) + roots = set() - if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]: + for node in new_nodes: + try: + roots.add(pm.PyNode(node).getAllParents()[-2]) + except: # noqa: E722 + pass + + if family not in ["layout", "setdress", + "mayaAscii", "mayaScene"]: + for root in roots: + root.setParent(world=True) + + 
group_node.zeroTransformPivots() for root in roots: - root.setParent(world=True) + root.setParent(group_node) - groupNode.zeroTransformPivots() - for root in roots: - root.setParent(groupNode) + cmds.setAttr(group_name + ".displayHandle", 1) - cmds.setAttr(groupName + ".displayHandle", 1) + settings = get_project_settings(os.environ['AVALON_PROJECT']) + colors = settings['maya']['load']['colors'] + c = colors.get(family) + if c is not None: + group_node.useOutlinerColor.set(1) + group_node.outlinerColor.set( + (float(c[0]) / 255), + (float(c[1]) / 255), + (float(c[2]) / 255)) - settings = get_project_settings(os.environ['AVALON_PROJECT']) - colors = settings['maya']['load']['colors'] - c = colors.get(family) - if c is not None: - groupNode.useOutlinerColor.set(1) - groupNode.outlinerColor.set( - (float(c[0])/255), - (float(c[1])/255), - (float(c[2])/255) - ) - - self[:] = newNodes - - cmds.setAttr(groupName + ".displayHandle", 1) - # get bounding box - bbox = cmds.exactWorldBoundingBox(groupName) - # get pivot position on world space - pivot = cmds.xform(groupName, q=True, sp=True, ws=True) - # center of bounding box - cx = (bbox[0] + bbox[3]) / 2 - cy = (bbox[1] + bbox[4]) / 2 - cz = (bbox[2] + bbox[5]) / 2 - # add pivot position to calculate offset - cx = cx + pivot[0] - cy = cy + pivot[1] - cz = cz + pivot[2] - # set selection handle offset to center of bounding box - cmds.setAttr(groupName + ".selectHandleX", cx) - cmds.setAttr(groupName + ".selectHandleY", cy) - cmds.setAttr(groupName + ".selectHandleZ", cz) + cmds.setAttr(group_name + ".displayHandle", 1) + # get bounding box + bbox = cmds.exactWorldBoundingBox(group_name) + # get pivot position on world space + pivot = cmds.xform(group_name, q=True, sp=True, ws=True) + # center of bounding box + cx = (bbox[0] + bbox[3]) / 2 + cy = (bbox[1] + bbox[4]) / 2 + cz = (bbox[2] + bbox[5]) / 2 + # add pivot position to calculate offset + cx = cx + pivot[0] + cy = cy + pivot[1] + cz = cz + pivot[2] + # set selection handle offset to center of bounding box + cmds.setAttr(group_name + ".selectHandleX", cx) + cmds.setAttr(group_name + ".selectHandleY", cy) + cmds.setAttr(group_name + ".selectHandleZ", cz) if family == "rig": self._post_process_rig(name, namespace, context, options) else: - if "translate" in options: - cmds.setAttr(groupName + ".t", *options["translate"]) - return newNodes + if "translate" in options: + cmds.setAttr(group_name + ".t", *options["translate"]) + + return new_nodes def switch(self, container, representation): self.update(container, representation) diff --git a/openpype/hosts/maya/plugins/publish/collect_history.py b/openpype/hosts/maya/plugins/publish/collect_history.py index 16c8e4342e..71f0169971 100644 --- a/openpype/hosts/maya/plugins/publish/collect_history.py +++ b/openpype/hosts/maya/plugins/publish/collect_history.py @@ -22,15 +22,22 @@ class CollectMayaHistory(pyblish.api.InstancePlugin): def process(self, instance): - # Collect the history with long names - history = cmds.listHistory(instance, leaf=False) or [] - history = cmds.ls(history, long=True) + kwargs = {} + if int(cmds.about(version=True)) >= 2020: + # New flag since Maya 2020 which makes cmds.listHistory faster + kwargs = {"fastIteration": True} + else: + self.log.debug("Ignoring `fastIteration` flag before Maya 2020..") - # Remove invalid node types (like renderlayers) - invalid = cmds.ls(history, type="renderLayer", long=True) - if invalid: - invalid = set(invalid) # optimize lookup - history = [x for x in history if x not in invalid] + # 
Collect the history with long names + history = set(cmds.listHistory(instance, leaf=False, **kwargs) or []) + history = cmds.ls(list(history), long=True) + + # Exclude invalid nodes (like renderlayers) + exclude = cmds.ls(type="renderLayer", long=True) + if exclude: + exclude = set(exclude) # optimize lookup + history = [x for x in history if x not in exclude] # Combine members with history members = instance[:] + history diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py index 53897b21f6..d39750e917 100644 --- a/openpype/hosts/maya/plugins/publish/collect_look.py +++ b/openpype/hosts/maya/plugins/publish/collect_look.py @@ -492,6 +492,8 @@ class CollectLook(pyblish.api.InstancePlugin): if not cmds.attributeQuery(attr, node=node, exists=True): continue attribute = "{}.{}".format(node, attr) + if cmds.getAttr(attribute, type=True) == "message": + continue node_attributes[attr] = cmds.getAttr(attribute) attributes.append({"name": node, diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index 345f5264b7..ac1e495f08 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -224,14 +224,19 @@ class CollectMayaRender(pyblish.api.ContextPlugin): # append full path full_exp_files = [] aov_dict = {} - + default_render_file = context.data.get('project_settings')\ + .get('maya')\ + .get('create')\ + .get('CreateRender')\ + .get('default_render_image_folder') # replace relative paths with absolute. Render products are # returned as list of dictionaries. publish_meta_path = None for aov in exp_files: full_paths = [] for file in aov[aov.keys()[0]]: - full_path = os.path.join(workspace, "renders", file) + full_path = os.path.join(workspace, default_render_file, + file) full_path = full_path.replace("\\", "/") full_paths.append(full_path) publish_meta_path = os.path.dirname(full_path) diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index 2407617b6f..953539f65c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -55,8 +55,16 @@ def maketx(source, destination, *args): str: Output of `maketx` command. 
""" + from openpype.lib import get_oiio_tools_path + + maketx_path = get_oiio_tools_path("maketx") + if not os.path.exists(maketx_path): + print( + "OIIO tool not found in {}".format(maketx_path)) + raise AssertionError("OIIO tool not found") + cmd = [ - "maketx", + maketx_path, "-v", # verbose "-u", # update mode # unpremultiply before conversion (recommended when alpha present) diff --git a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py index dad1691149..642ca9e25d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py @@ -23,11 +23,24 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin): def process(self, instance): - assert get_file_rule("images") == "renders", ( - "Workspace's `images` file rule must be set to: renders" + default_render_file = self.get_default_render_image_folder(instance) + + assert get_file_rule("images") == default_render_file, ( + "Workspace's `images` file rule must be set to: {}".format( + default_render_file + ) ) @classmethod def repair(cls, instance): - pm.workspace.fileRules["images"] = "renders" + default = cls.get_default_render_image_folder(instance) + pm.workspace.fileRules["images"] = default pm.system.Workspace.save() + + @staticmethod + def get_default_render_image_folder(instance): + return instance.context.data.get('project_settings')\ + .get('maya') \ + .get('create') \ + .get('CreateRender') \ + .get('default_render_image_folder') diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py index 4e028d1d24..d5a1fd3529 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py @@ -164,7 +164,8 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): continue # Ignore proxy connections. - if cmds.addAttr(plug, query=True, usedAsProxy=True): + if (cmds.addAttr(plug, query=True, exists=True) and + cmds.addAttr(plug, query=True, usedAsProxy=True)): continue # Check for incoming connections diff --git a/openpype/hosts/nuke/plugins/publish/extract_render_local.py b/openpype/hosts/nuke/plugins/publish/extract_render_local.py index bc7b41c733..50a5d01483 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_render_local.py +++ b/openpype/hosts/nuke/plugins/publish/extract_render_local.py @@ -42,10 +42,14 @@ class NukeRenderLocal(openpype.api.Extractor): self.log.info("Start frame: {}".format(first_frame)) self.log.info("End frame: {}".format(last_frame)) + # write node url might contain nuke's ctl expressin + # as [python ...]/path... + path = node["file"].evaluate() + # Ensure output directory exists. 
- directory = os.path.dirname(node["file"].value()) - if not os.path.exists(directory): - os.makedirs(directory) + out_dir = os.path.dirname(path) + if not os.path.exists(out_dir): + os.makedirs(out_dir) # Render frames nuke.execute( @@ -58,15 +62,12 @@ class NukeRenderLocal(openpype.api.Extractor): if "slate" in families: first_frame += 1 - path = node['file'].value() - out_dir = os.path.dirname(path) ext = node["file_type"].value() if "representations" not in instance.data: instance.data["representations"] = [] collected_frames = os.listdir(out_dir) - if len(collected_frames) == 1: repre = { 'name': ext, diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index 261fca6583..32962b57a6 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -42,6 +42,7 @@ class ExtractReviewDataMov(openpype.api.Extractor): # generate data with anlib.maintained_selection(): + generated_repres = [] for o_name, o_data in self.outputs.items(): f_families = o_data["filter"]["families"] f_task_types = o_data["filter"]["task_types"] @@ -112,11 +113,13 @@ class ExtractReviewDataMov(openpype.api.Extractor): }) else: data = exporter.generate_mov(**o_data) + generated_repres.extend(data["representations"]) - self.log.info(data["representations"]) + self.log.info(generated_repres) - # assign to representations - instance.data["representations"] += data["representations"] + if generated_repres: + # assign to representations + instance.data["representations"] += generated_repres self.log.debug( "_ representations: {}".format( diff --git a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py index 29faf867d2..af5e8e9d27 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py +++ b/openpype/hosts/nuke/plugins/publish/validate_rendered_frames.py @@ -67,7 +67,9 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin): if not repre.get("files"): msg = ("no frames were collected, " - "you need to render them") + "you need to render them.\n" + "Check properties of write node (group) and" + "select 'Local' option in 'Publish' dropdown.") self.log.error(msg) raise ValidationException(msg) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_current_file.py b/openpype/hosts/photoshop/plugins/publish/collect_current_file.py index 3cc3e3f636..4d4829555e 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_current_file.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_current_file.py @@ -8,7 +8,7 @@ from avalon import photoshop class CollectCurrentFile(pyblish.api.ContextPlugin): """Inject the current working file into context""" - order = pyblish.api.CollectorOrder - 0.5 + order = pyblish.api.CollectorOrder - 0.49 label = "Current File" hosts = ["photoshop"] diff --git a/openpype/hosts/photoshop/plugins/publish/collect_extension_version.py b/openpype/hosts/photoshop/plugins/publish/collect_extension_version.py new file mode 100644 index 0000000000..f07ff0b0ff --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_extension_version.py @@ -0,0 +1,57 @@ +import os +import re +import pyblish.api + +from avalon import photoshop + + +class CollectExtensionVersion(pyblish.api.ContextPlugin): + """ Pulls and compares version of installed extension. + + It is recommended to use same extension as in provided Openpype code. 
+ + Please use Anastasiy’s Extension Manager or ZXPInstaller to update + extension in case of an error. + + You can locate extension.zxp in your installed Openpype code in + `repos/avalon-core/avalon/photoshop` + """ + # This technically should be a validator, but other collectors might be + # impacted with usage of obsolete extension, so collector that runs first + # was chosen + order = pyblish.api.CollectorOrder - 0.5 + label = "Collect extension version" + hosts = ["photoshop"] + + optional = True + active = True + + def process(self, context): + installed_version = photoshop.stub().get_extension_version() + + if not installed_version: + raise ValueError("Unknown version, probably old extension") + + manifest_url = os.path.join(os.path.dirname(photoshop.__file__), + "extension", "CSXS", "manifest.xml") + + if not os.path.exists(manifest_url): + self.log.debug("Unable to locate extension manifest, not checking") + return + + expected_version = None + with open(manifest_url) as fp: + content = fp.read() + + found = re.findall(r'(ExtensionBundleVersion=")([0-10\.]+)(")', + content) + if found: + expected_version = found[0][1] + + if expected_version != installed_version: + msg = "Expected version '{}' found '{}'\n".format( + expected_version, installed_version) + msg += "Please update your installed extension, it might not work " + msg += "properly." + + raise ValueError(msg) diff --git a/openpype/hosts/photoshop/plugins/publish/validate_naming.py b/openpype/hosts/photoshop/plugins/publish/validate_naming.py index 0fd6794313..1635096f4b 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_naming.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_naming.py @@ -1,3 +1,5 @@ +import re + import pyblish.api import openpype.api from avalon import photoshop @@ -19,20 +21,33 @@ class ValidateNamingRepair(pyblish.api.Action): and result["instance"] not in failed): failed.append(result["instance"]) + invalid_chars, replace_char = plugin.get_replace_chars() + self.log.info("{} --- {}".format(invalid_chars, replace_char)) + # Apply pyblish.logic to get the instances for the plug-in instances = pyblish.api.instances_by_plugin(failed, plugin) stub = photoshop.stub() for instance in instances: self.log.info("validate_naming instance {}".format(instance)) - name = instance.data["name"].replace(" ", "_") - name = name.replace(instance.data["family"], '') - instance[0].Name = name - data = stub.read(instance[0]) - data["subset"] = "image" + name - stub.imprint(instance[0], data) + metadata = stub.read(instance[0]) + self.log.info("metadata instance {}".format(metadata)) + layer_name = None + if metadata.get("uuid"): + layer_data = stub.get_layer(metadata["uuid"]) + self.log.info("layer_data {}".format(layer_data)) + if layer_data: + layer_name = re.sub(invalid_chars, + replace_char, + layer_data.name) - name = stub.PUBLISH_ICON + name - stub.rename_layer(instance.data["uuid"], name) + stub.rename_layer(instance.data["uuid"], layer_name) + + subset_name = re.sub(invalid_chars, replace_char, + instance.data["name"]) + + instance[0].Name = layer_name or subset_name + metadata["subset"] = subset_name + stub.imprint(instance[0], metadata) return True @@ -49,12 +64,21 @@ class ValidateNaming(pyblish.api.InstancePlugin): families = ["image"] actions = [ValidateNamingRepair] + # configured by Settings + invalid_chars = '' + replace_char = '' + def process(self, instance): help_msg = ' Use Repair action (A) in Pyblish to fix it.' 
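        # A minimal sketch of the Settings-driven pattern, with illustrative
        # values (not the studio defaults):
        #     invalid_chars, replace_char = r"[ /\\]", "_"
        #     re.sub(invalid_chars, replace_char, "main beauty/v01")
        #     # -> "main_beauty_v01"
        # The same two values drive the assertions below and the Repair
        # action above through `get_replace_chars()`.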
msg = "Name \"{}\" is not allowed.{}".format(instance.data["name"], help_msg) - assert " " not in instance.data["name"], msg + assert not re.search(self.invalid_chars, instance.data["name"]), msg msg = "Subset \"{}\" is not allowed.{}".format(instance.data["subset"], help_msg) - assert " " not in instance.data["subset"], msg + assert not re.search(self.invalid_chars, instance.data["subset"]), msg + + @classmethod + def get_replace_chars(cls): + """Pass values configured in Settings for Repair.""" + return cls.invalid_chars, cls.replace_char diff --git a/openpype/hosts/tvpaint/__init__.py b/openpype/hosts/tvpaint/__init__.py index 0e793fcf9f..09b7c52cd1 100644 --- a/openpype/hosts/tvpaint/__init__.py +++ b/openpype/hosts/tvpaint/__init__.py @@ -1,3 +1,6 @@ +import os + + def add_implementation_envs(env, _app): """Modify environments to contain all required for implementation.""" defaults = { @@ -6,3 +9,12 @@ def add_implementation_envs(env, _app): for key, value in defaults.items(): if not env.get(key): env[key] = value + + +def get_launch_script_path(): + current_dir = os.path.dirname(os.path.abspath(__file__)) + return os.path.join( + current_dir, + "api", + "launch_script.py" + ) diff --git a/openpype/hosts/tvpaint/api/__init__.py b/openpype/hosts/tvpaint/api/__init__.py index 1c50987d6d..c461b33f4b 100644 --- a/openpype/hosts/tvpaint/api/__init__.py +++ b/openpype/hosts/tvpaint/api/__init__.py @@ -1,93 +1,49 @@ -import os -import logging +from .communication_server import CommunicationWrapper +from . import lib +from . import launch_script +from . import workio +from . import pipeline +from . import plugin +from .pipeline import ( + install, + uninstall, + maintained_selection, + remove_instance, + list_instances, + ls +) -import requests - -import avalon.api -import pyblish.api -from avalon.tvpaint import pipeline -from avalon.tvpaint.communication_server import register_localization_file -from .lib import set_context_settings - -from openpype.hosts import tvpaint -from openpype.api import get_current_project_settings - -log = logging.getLogger(__name__) - -HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__)) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") -PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") -LOAD_PATH = os.path.join(PLUGINS_DIR, "load") -CREATE_PATH = os.path.join(PLUGINS_DIR, "create") +from .workio import ( + open_file, + save_file, + current_file, + has_unsaved_changes, + file_extensions, + work_root, +) -def on_instance_toggle(instance, old_value, new_value): - # Review may not have real instance in wokrfile metadata - if not instance.data.get("uuid"): - return +__all__ = ( + "CommunicationWrapper", - instance_id = instance.data["uuid"] - found_idx = None - current_instances = pipeline.list_instances() - for idx, workfile_instance in enumerate(current_instances): - if workfile_instance["uuid"] == instance_id: - found_idx = idx - break + "lib", + "launch_script", + "workio", + "pipeline", + "plugin", - if found_idx is None: - return + "install", + "uninstall", + "maintained_selection", + "remove_instance", + "list_instances", + "ls", - if "active" in current_instances[found_idx]: - current_instances[found_idx]["active"] = new_value - pipeline._write_instances(current_instances) - - -def initial_launch(): - # Setup project settings if its the template that's launched. 
- # TODO also check for template creation when it's possible to define - # templates - last_workfile = os.environ.get("AVALON_LAST_WORKFILE") - if not last_workfile or os.path.exists(last_workfile): - return - - log.info("Setting up project...") - set_context_settings() - - -def application_exit(): - data = get_current_project_settings() - stop_timer = data["tvpaint"]["stop_timer_on_application_exit"] - - if not stop_timer: - return - - # Stop application timer. - webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL") - rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url) - requests.post(rest_api_url) - - -def install(): - log.info("OpenPype - Installing TVPaint integration") - localization_file = os.path.join(HOST_DIR, "resources", "avalon.loc") - register_localization_file(localization_file) - - pyblish.api.register_plugin_path(PUBLISH_PATH) - avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH) - avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH) - - registered_callbacks = ( - pyblish.api.registered_callbacks().get("instanceToggled") or [] - ) - if on_instance_toggle not in registered_callbacks: - pyblish.api.register_callback("instanceToggled", on_instance_toggle) - - avalon.api.on("application.launched", initial_launch) - avalon.api.on("application.exit", application_exit) - - -def uninstall(): - log.info("OpenPype - Uninstalling TVPaint integration") - pyblish.api.deregister_plugin_path(PUBLISH_PATH) - avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH) - avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH) + # Workfiles API + "open_file", + "save_file", + "current_file", + "has_unsaved_changes", + "file_extensions", + "work_root" +) diff --git a/openpype/hosts/tvpaint/api/communication_server.py b/openpype/hosts/tvpaint/api/communication_server.py new file mode 100644 index 0000000000..6c8aca5445 --- /dev/null +++ b/openpype/hosts/tvpaint/api/communication_server.py @@ -0,0 +1,939 @@ +import os +import json +import time +import subprocess +import collections +import asyncio +import logging +import socket +import platform +import filecmp +import tempfile +import threading +import shutil +from queue import Queue +from contextlib import closing + +from aiohttp import web +from aiohttp_json_rpc import JsonRpc +from aiohttp_json_rpc.protocol import ( + encode_request, encode_error, decode_msg, JsonRpcMsgTyp +) +from aiohttp_json_rpc.exceptions import RpcError + +from avalon import api +from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path + +log = logging.getLogger(__name__) +log.setLevel(logging.DEBUG) + + +class CommunicationWrapper: + # TODO add logs and exceptions + communicator = None + + log = logging.getLogger("CommunicationWrapper") + + @classmethod + def create_qt_communicator(cls, *args, **kwargs): + """Create communicator for Artist usage.""" + communicator = QtCommunicator(*args, **kwargs) + cls.set_communicator(communicator) + return communicator + + @classmethod + def set_communicator(cls, communicator): + if not cls.communicator: + cls.communicator = communicator + else: + cls.log.warning("Communicator was set multiple times.") + + @classmethod + def client(cls): + if not cls.communicator: + return None + return cls.communicator.client() + + @classmethod + def execute_george(cls, george_script): + """Execute passed goerge script in TVPaint.""" + if not cls.communicator: + return + return cls.communicator.execute_george(george_script) + + +class WebSocketServer: + def __init__(self): + 
self.client = None + + self.loop = asyncio.new_event_loop() + self.app = web.Application(loop=self.loop) + self.port = self.find_free_port() + self.websocket_thread = WebsocketServerThread( + self, self.port, loop=self.loop + ) + + @property + def server_is_running(self): + return self.websocket_thread.server_is_running + + def add_route(self, *args, **kwargs): + self.app.router.add_route(*args, **kwargs) + + @staticmethod + def find_free_port(): + with closing( + socket.socket(socket.AF_INET, socket.SOCK_STREAM) + ) as sock: + sock.bind(("", 0)) + sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) + port = sock.getsockname()[1] + return port + + def start(self): + self.websocket_thread.start() + + def stop(self): + try: + if self.websocket_thread.is_running: + log.debug("Stopping websocket server") + self.websocket_thread.is_running = False + self.websocket_thread.stop() + except Exception: + log.warning( + "Error has happened during Killing websocket server", + exc_info=True + ) + + +class WebsocketServerThread(threading.Thread): + """ Listener for websocket rpc requests. + + It would be probably better to "attach" this to main thread (as for + example Harmony needs to run something on main thread), but currently + it creates separate thread and separate asyncio event loop + """ + def __init__(self, module, port, loop): + super(WebsocketServerThread, self).__init__() + self.is_running = False + self.server_is_running = False + self.port = port + self.module = module + self.loop = loop + self.runner = None + self.site = None + self.tasks = [] + + def run(self): + self.is_running = True + + try: + log.debug("Starting websocket server") + + self.loop.run_until_complete(self.start_server()) + + log.info( + "Running Websocket server on URL:" + " \"ws://localhost:{}\"".format(self.port) + ) + + asyncio.ensure_future(self.check_shutdown(), loop=self.loop) + + self.server_is_running = True + self.loop.run_forever() + + except Exception: + log.warning( + "Websocket Server service has failed", exc_info=True + ) + finally: + self.server_is_running = False + # optional + self.loop.close() + + self.is_running = False + log.info("Websocket server stopped") + + async def start_server(self): + """ Starts runner and TCPsite """ + self.runner = web.AppRunner(self.module.app) + await self.runner.setup() + self.site = web.TCPSite(self.runner, "localhost", self.port) + await self.site.start() + + def stop(self): + """Sets is_running flag to false, 'check_shutdown' shuts server down""" + self.is_running = False + + async def check_shutdown(self): + """ Future that is running and checks if server should be running + periodically. 
+ """ + while self.is_running: + while self.tasks: + task = self.tasks.pop(0) + log.debug("waiting for task {}".format(task)) + await task + log.debug("returned value {}".format(task.result)) + + await asyncio.sleep(0.5) + + log.debug("## Server shutdown started") + + await self.site.stop() + log.debug("# Site stopped") + await self.runner.cleanup() + log.debug("# Server runner stopped") + tasks = [ + task for task in asyncio.all_tasks() + if task is not asyncio.current_task() + ] + list(map(lambda task: task.cancel(), tasks)) # cancel all the tasks + results = await asyncio.gather(*tasks, return_exceptions=True) + log.debug(f"Finished awaiting cancelled tasks, results: {results}...") + await self.loop.shutdown_asyncgens() + # to really make sure everything else has time to stop + await asyncio.sleep(0.07) + self.loop.stop() + + +class BaseTVPaintRpc(JsonRpc): + def __init__(self, communication_obj, route_name="", **kwargs): + super().__init__(**kwargs) + self.requests_ids = collections.defaultdict(lambda: 0) + self.waiting_requests = collections.defaultdict(list) + self.responses = collections.defaultdict(list) + + self.route_name = route_name + self.communication_obj = communication_obj + + async def _handle_rpc_msg(self, http_request, raw_msg): + # This is duplicated code from super but there is no way how to do it + # to be able handle server->client requests + host = http_request.host + if host in self.waiting_requests: + try: + _raw_message = raw_msg.data + msg = decode_msg(_raw_message) + + except RpcError as error: + await self._ws_send_str(http_request, encode_error(error)) + return + + if msg.type in (JsonRpcMsgTyp.RESULT, JsonRpcMsgTyp.ERROR): + msg_data = json.loads(_raw_message) + if msg_data.get("id") in self.waiting_requests[host]: + self.responses[host].append(msg_data) + return + + return await super()._handle_rpc_msg(http_request, raw_msg) + + def client_connected(self): + # TODO This is poor check. 
Add check it is client from TVPaint + if self.clients: + return True + return False + + def send_notification(self, client, method, params=None): + if params is None: + params = [] + asyncio.run_coroutine_threadsafe( + client.ws.send_str(encode_request(method, params=params)), + loop=self.loop + ) + + def send_request(self, client, method, params=None, timeout=0): + if params is None: + params = [] + + client_host = client.host + + request_id = self.requests_ids[client_host] + self.requests_ids[client_host] += 1 + + self.waiting_requests[client_host].append(request_id) + + log.debug("Sending request to client {} ({}, {}) id: {}".format( + client_host, method, params, request_id + )) + future = asyncio.run_coroutine_threadsafe( + client.ws.send_str(encode_request(method, request_id, params)), + loop=self.loop + ) + result = future.result() + + not_found = object() + response = not_found + start = time.time() + while True: + if client.ws.closed: + return None + + for _response in self.responses[client_host]: + _id = _response.get("id") + if _id == request_id: + response = _response + break + + if response is not not_found: + break + + if timeout > 0 and (time.time() - start) > timeout: + raise Exception("Timeout passed") + return + + time.sleep(0.1) + + if response is not_found: + raise Exception("Connection closed") + + self.responses[client_host].remove(response) + + error = response.get("error") + result = response.get("result") + if error: + raise Exception("Error happened: {}".format(error)) + return result + + +class QtTVPaintRpc(BaseTVPaintRpc): + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + + from openpype.tools.utils import host_tools + self.tools_helper = host_tools.HostToolsHelper() + + route_name = self.route_name + + # Register methods + self.add_methods( + (route_name, self.workfiles_tool), + (route_name, self.loader_tool), + (route_name, self.creator_tool), + (route_name, self.subset_manager_tool), + (route_name, self.publish_tool), + (route_name, self.scene_inventory_tool), + (route_name, self.library_loader_tool), + (route_name, self.experimental_tools) + ) + + # Panel routes for tools + async def workfiles_tool(self): + log.info("Triggering Workfile tool") + item = MainThreadItem(self.tools_helper.show_workfiles) + self._execute_in_main_thread(item) + return + + async def loader_tool(self): + log.info("Triggering Loader tool") + item = MainThreadItem(self.tools_helper.show_loader) + self._execute_in_main_thread(item) + return + + async def creator_tool(self): + log.info("Triggering Creator tool") + item = MainThreadItem(self.tools_helper.show_creator) + await self._async_execute_in_main_thread(item, wait=False) + + async def subset_manager_tool(self): + log.info("Triggering Subset Manager tool") + item = MainThreadItem(self.tools_helper.show_subset_manager) + # Do not wait for result of callback + self._execute_in_main_thread(item, wait=False) + return + + async def publish_tool(self): + log.info("Triggering Publish tool") + item = MainThreadItem(self.tools_helper.show_publish) + self._execute_in_main_thread(item) + return + + async def scene_inventory_tool(self): + """Open Scene Inventory tool. + + Funciton can't confirm if tool was opened becauise one part of + SceneInventory initialization is calling websocket request to host but + host can't response because is waiting for response from this call. 
+ """ + log.info("Triggering Scene inventory tool") + item = MainThreadItem(self.tools_helper.show_scene_inventory) + # Do not wait for result of callback + self._execute_in_main_thread(item, wait=False) + return + + async def library_loader_tool(self): + log.info("Triggering Library loader tool") + item = MainThreadItem(self.tools_helper.show_library_loader) + self._execute_in_main_thread(item) + return + + async def experimental_tools(self): + log.info("Triggering Library loader tool") + item = MainThreadItem(self.tools_helper.show_experimental_tools_dialog) + self._execute_in_main_thread(item) + return + + async def _async_execute_in_main_thread(self, item, **kwargs): + await self.communication_obj.async_execute_in_main_thread( + item, **kwargs + ) + + def _execute_in_main_thread(self, item, **kwargs): + return self.communication_obj.execute_in_main_thread(item, **kwargs) + + +class MainThreadItem: + """Structure to store information about callback in main thread. + + Item should be used to execute callback in main thread which may be needed + for execution of Qt objects. + + Item store callback (callable variable), arguments and keyword arguments + for the callback. Item hold information about it's process. + """ + not_set = object() + sleep_time = 0.1 + + def __init__(self, callback, *args, **kwargs): + self.done = False + self.exception = self.not_set + self.result = self.not_set + self.callback = callback + self.args = args + self.kwargs = kwargs + + def execute(self): + """Execute callback and store it's result. + + Method must be called from main thread. Item is marked as `done` + when callback execution finished. Store output of callback of exception + information when callback raise one. + """ + log.debug("Executing process in main thread") + if self.done: + log.warning("- item is already processed") + return + + callback = self.callback + args = self.args + kwargs = self.kwargs + log.info("Running callback: {}".format(str(callback))) + try: + result = callback(*args, **kwargs) + self.result = result + + except Exception as exc: + self.exception = exc + + finally: + self.done = True + + def wait(self): + """Wait for result from main thread. + + This method stops current thread until callback is executed. + + Returns: + object: Output of callback. May be any type or object. + + Raises: + Exception: Reraise any exception that happened during callback + execution. + """ + while not self.done: + time.sleep(self.sleep_time) + + if self.exception is self.not_set: + return self.result + raise self.exception + + async def async_wait(self): + """Wait for result from main thread. + + Returns: + object: Output of callback. May be any type or object. + + Raises: + Exception: Reraise any exception that happened during callback + execution. + """ + while not self.done: + await asyncio.sleep(self.sleep_time) + + if self.exception is self.not_set: + return self.result + raise self.exception + + +class BaseCommunicator: + def __init__(self): + self.process = None + self.websocket_server = None + self.websocket_rpc = None + self.exit_code = None + self._connected_client = None + + @property + def server_is_running(self): + if self.websocket_server is None: + return False + return self.websocket_server.server_is_running + + def _windows_file_process(self, src_dst_mapping, to_remove): + """Windows specific file processing asking for admin permissions. + + It is required to have administration permissions to modify plugin + files in TVPaint installation folder. 
+ + Method requires `pywin32` python module. + + Args: + src_dst_mapping (list, tuple, set): Mapping of source file to + destination. Both must be full path. Each item must be iterable + of size 2 `(C:/src/file.dll, C:/dst/file.dll)`. + to_remove (list): Fullpath to files that should be removed. + """ + + import pythoncom + from win32comext.shell import shell + + # Create temp folder where plugin files are temporary copied + # - reason is that copy to TVPaint requires administartion permissions + # but admin may not have access to source folder + tmp_dir = os.path.normpath( + tempfile.mkdtemp(prefix="tvpaint_copy_") + ) + + # Copy source to temp folder and create new mapping + dst_folders = collections.defaultdict(list) + new_src_dst_mapping = [] + for old_src, dst in src_dst_mapping: + new_src = os.path.join(tmp_dir, os.path.split(old_src)[1]) + shutil.copy(old_src, new_src) + new_src_dst_mapping.append((new_src, dst)) + + for src, dst in new_src_dst_mapping: + src = os.path.normpath(src) + dst = os.path.normpath(dst) + dst_filename = os.path.basename(dst) + dst_folder_path = os.path.dirname(dst) + dst_folders[dst_folder_path].append((dst_filename, src)) + + # create an instance of IFileOperation + fo = pythoncom.CoCreateInstance( + shell.CLSID_FileOperation, + None, + pythoncom.CLSCTX_ALL, + shell.IID_IFileOperation + ) + # Add delete command to file operation object + for filepath in to_remove: + item = shell.SHCreateItemFromParsingName( + filepath, None, shell.IID_IShellItem + ) + fo.DeleteItem(item) + + # here you can use SetOperationFlags, progress Sinks, etc. + for folder_path, items in dst_folders.items(): + # create an instance of IShellItem for the target folder + folder_item = shell.SHCreateItemFromParsingName( + folder_path, None, shell.IID_IShellItem + ) + for _dst_filename, source_file_path in items: + # create an instance of IShellItem for the source item + copy_item = shell.SHCreateItemFromParsingName( + source_file_path, None, shell.IID_IShellItem + ) + # queue the copy operation + fo.CopyItem(copy_item, folder_item, _dst_filename, None) + + # commit + fo.PerformOperations() + + # Remove temp folder + shutil.rmtree(tmp_dir) + + def _prepare_windows_plugin(self, launch_args): + """Copy plugin to TVPaint plugins and set PATH to dependencies. + + Check if plugin in TVPaint's plugins exist and match to plugin + version to current implementation version. Based on 64-bit or 32-bit + version of the plugin. Path to libraries required for plugin is added + to PATH variable. + """ + + host_executable = launch_args[0] + executable_file = os.path.basename(host_executable) + if "64bit" in executable_file: + subfolder = "windows_x64" + elif "32bit" in executable_file: + subfolder = "windows_x86" + else: + raise ValueError( + "Can't determine if executable " + "leads to 32-bit or 64-bit TVPaint!" 
+ ) + + plugin_files_path = get_plugin_files_path() + # Folder for right windows plugin files + source_plugins_dir = os.path.join(plugin_files_path, subfolder) + + # Path to libraies (.dll) required for plugin library + # - additional libraries can be copied to TVPaint installation folder + # (next to executable) or added to PATH environment variable + additional_libs_folder = os.path.join( + source_plugins_dir, + "additional_libraries" + ) + additional_libs_folder = additional_libs_folder.replace("\\", "/") + if additional_libs_folder not in os.environ["PATH"]: + os.environ["PATH"] += (os.pathsep + additional_libs_folder) + + # Path to TVPaint's plugins folder (where we want to add our plugin) + host_plugins_path = os.path.join( + os.path.dirname(host_executable), + "plugins" + ) + + # Files that must be copied to TVPaint's plugin folder + plugin_dir = os.path.join(source_plugins_dir, "plugin") + + to_copy = [] + to_remove = [] + # Remove old plugin name + deprecated_filepath = os.path.join( + host_plugins_path, "AvalonPlugin.dll" + ) + if os.path.exists(deprecated_filepath): + to_remove.append(deprecated_filepath) + + for filename in os.listdir(plugin_dir): + src_full_path = os.path.join(plugin_dir, filename) + dst_full_path = os.path.join(host_plugins_path, filename) + if dst_full_path in to_remove: + to_remove.remove(dst_full_path) + + if ( + not os.path.exists(dst_full_path) + or not filecmp.cmp(src_full_path, dst_full_path) + ): + to_copy.append((src_full_path, dst_full_path)) + + # Skip copy if everything is done + if not to_copy and not to_remove: + return + + # Try to copy + try: + self._windows_file_process(to_copy, to_remove) + except Exception: + log.error("Plugin copy failed", exc_info=True) + + # Validate copy was done + invalid_copy = [] + for src, dst in to_copy: + if not os.path.exists(dst) or not filecmp.cmp(src, dst): + invalid_copy.append((src, dst)) + + # Validate delete was dones + invalid_remove = [] + for filepath in to_remove: + if os.path.exists(filepath): + invalid_remove.append(filepath) + + if not invalid_remove and not invalid_copy: + return + + msg_parts = [] + if invalid_remove: + msg_parts.append( + "Failed to remove files: {}".format(", ".join(invalid_remove)) + ) + + if invalid_copy: + _invalid = [ + "\"{}\" -> \"{}\"".format(src, dst) + for src, dst in invalid_copy + ] + msg_parts.append( + "Failed to copy files: {}".format(", ".join(_invalid)) + ) + raise RuntimeError(" & ".join(msg_parts)) + + def _launch_tv_paint(self, launch_args): + flags = ( + subprocess.DETACHED_PROCESS + | subprocess.CREATE_NEW_PROCESS_GROUP + ) + env = os.environ.copy() + # Remove QuickTime from PATH on windows + # - quicktime overrides TVPaint's ffmpeg encode/decode which may + # cause issues on loading + if platform.system().lower() == "windows": + new_path = [] + for path in env["PATH"].split(os.pathsep): + if path and "quicktime" not in path.lower(): + new_path.append(path) + env["PATH"] = os.pathsep.join(new_path) + + kwargs = { + "env": env, + "creationflags": flags + } + self.process = subprocess.Popen(launch_args, **kwargs) + + def _create_routes(self): + self.websocket_rpc = BaseTVPaintRpc( + self, loop=self.websocket_server.loop + ) + self.websocket_server.add_route( + "*", "/", self.websocket_rpc.handle_request + ) + + def _start_webserver(self): + self.websocket_server.start() + # Make sure RPC is using same loop as websocket server + while not self.websocket_server.server_is_running: + time.sleep(0.1) + + def _stop_webserver(self): + self.websocket_server.stop() + + 
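    # Hedged note on the flags used in `_launch_tv_paint` above:
    # subprocess.DETACHED_PROCESS and subprocess.CREATE_NEW_PROCESS_GROUP
    # are Windows-only stdlib constants; combined they detach TVPaint from
    # the launcher's console and process group so the host can outlive it:
    #     flags = (subprocess.DETACHED_PROCESS
    #              | subprocess.CREATE_NEW_PROCESS_GROUP)
    #     subprocess.Popen(launch_args, env=env, creationflags=flags)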
def _exit(self, exit_code=None): + self._stop_webserver() + if exit_code is not None: + self.exit_code = exit_code + + def stop(self): + """Stop communication and currently running python process.""" + log.info("Stopping communication") + self._exit() + + def launch(self, launch_args): + """Prepare all required data and launch host. + + First is prepared websocket server as communication point for host, + when server is ready to use host is launched as subprocess. + """ + if platform.system().lower() == "windows": + self._prepare_windows_plugin(launch_args) + + # Launch TVPaint and the websocket server. + log.info("Launching TVPaint") + self.websocket_server = WebSocketServer() + + self._create_routes() + + os.environ["WEBSOCKET_URL"] = "ws://localhost:{}".format( + self.websocket_server.port + ) + + log.info("Added request handler for url: {}".format( + os.environ["WEBSOCKET_URL"] + )) + + self._start_webserver() + + # Start TVPaint when server is running + self._launch_tv_paint(launch_args) + + log.info("Waiting for client connection") + while True: + if self.process.poll() is not None: + log.debug("Host process is not alive. Exiting") + self._exit(1) + return + + if self.websocket_rpc.client_connected(): + log.info("Client has connected") + break + time.sleep(0.5) + + self._on_client_connect() + + api.emit("application.launched") + + def _on_client_connect(self): + self._initial_textfile_write() + + def _initial_textfile_write(self): + """Show popup about Write to file at start of TVPaint.""" + tmp_file = tempfile.NamedTemporaryFile( + mode="w", prefix="a_tvp_", suffix=".txt", delete=False + ) + tmp_file.close() + tmp_filepath = tmp_file.name.replace("\\", "/") + george_script = ( + "tv_writetextfile \"strict\" \"append\" \"{}\" \"empty\"" + ).format(tmp_filepath) + + result = CommunicationWrapper.execute_george(george_script) + + # Remote the file + os.remove(tmp_filepath) + + if result is None: + log.warning( + "Host was probably closed before plugin was initialized." + ) + elif result.lower() == "forbidden": + log.warning("User didn't confirm saving files.") + + def _client(self): + if not self.websocket_rpc: + log.warning("Communicator's server did not start yet.") + return None + + for client in self.websocket_rpc.clients: + if not client.ws.closed: + return client + log.warning("Client is not yet connected to Communicator.") + return None + + def client(self): + if not self._connected_client or self._connected_client.ws.closed: + self._connected_client = self._client() + return self._connected_client + + def send_request(self, method, params=None): + client = self.client() + if not client: + return + + return self.websocket_rpc.send_request( + client, method, params + ) + + def send_notification(self, method, params=None): + client = self.client() + if not client: + return + + self.websocket_rpc.send_notification( + client, method, params + ) + + def execute_george(self, george_script): + """Execute passed goerge script in TVPaint.""" + return self.send_request( + "execute_george", [george_script] + ) + + def execute_george_through_file(self, george_script): + """Execute george script with temp file. + + Allows to execute multiline george script without stopping websocket + client. + + On windows make sure script does not contain paths with backwards + slashes in paths, TVPaint won't execute properly in that case. + + Args: + george_script (str): George script to execute. May be multilined. 
+ """ + temporary_file = tempfile.NamedTemporaryFile( + mode="w", prefix="a_tvp_", suffix=".grg", delete=False + ) + temporary_file.write(george_script) + temporary_file.close() + temp_file_path = temporary_file.name.replace("\\", "/") + self.execute_george("tv_runscript {}".format(temp_file_path)) + os.remove(temp_file_path) + + +class QtCommunicator(BaseCommunicator): + menu_definitions = { + "title": "OpenPype Tools", + "menu_items": [ + { + "callback": "workfiles_tool", + "label": "Workfiles", + "help": "Open workfiles tool" + }, { + "callback": "loader_tool", + "label": "Load", + "help": "Open loader tool" + }, { + "callback": "creator_tool", + "label": "Create", + "help": "Open creator tool" + }, { + "callback": "scene_inventory_tool", + "label": "Scene inventory", + "help": "Open scene inventory tool" + }, { + "callback": "publish_tool", + "label": "Publish", + "help": "Open publisher" + }, { + "callback": "library_loader_tool", + "label": "Library", + "help": "Open library loader tool" + }, { + "callback": "subset_manager_tool", + "label": "Subset Manager", + "help": "Open subset manager tool" + }, { + "callback": "experimental_tools", + "label": "Experimental tools", + "help": "Open experimental tools dialog" + } + ] + } + + def __init__(self, qt_app): + super().__init__() + self.callback_queue = Queue() + self.qt_app = qt_app + + def _create_routes(self): + self.websocket_rpc = QtTVPaintRpc( + self, loop=self.websocket_server.loop + ) + self.websocket_server.add_route( + "*", "/", self.websocket_rpc.handle_request + ) + + def execute_in_main_thread(self, main_thread_item, wait=True): + """Add `MainThreadItem` to callback queue and wait for result.""" + self.callback_queue.put(main_thread_item) + if wait: + return main_thread_item.wait() + return + + async def async_execute_in_main_thread(self, main_thread_item, wait=True): + """Add `MainThreadItem` to callback queue and wait for result.""" + self.callback_queue.put(main_thread_item) + if wait: + return await main_thread_item.async_wait() + + def main_thread_listen(self): + """Get last `MainThreadItem` from queue. + + Must be called from main thread. + + Method checks if host process is still running as it may cause + issues if not. 
+ """ + # check if host still running + if self.process.poll() is not None: + self._exit() + return None + + if self.callback_queue.empty(): + return None + return self.callback_queue.get() + + def _on_client_connect(self): + super()._on_client_connect() + self._build_menu() + + def _build_menu(self): + self.send_request( + "define_menu", [self.menu_definitions] + ) + + def _exit(self, *args, **kwargs): + super()._exit(*args, **kwargs) + api.emit("application.exit") + self.qt_app.exit(self.exit_code) diff --git a/openpype/hosts/tvpaint/api/launch_script.py b/openpype/hosts/tvpaint/api/launch_script.py new file mode 100644 index 0000000000..e66bf61df6 --- /dev/null +++ b/openpype/hosts/tvpaint/api/launch_script.py @@ -0,0 +1,84 @@ +import os +import sys +import signal +import traceback +import ctypes +import platform +import logging + +from Qt import QtWidgets, QtCore, QtGui + +from avalon import api +from openpype import style +from openpype.hosts.tvpaint.api.communication_server import ( + CommunicationWrapper +) +from openpype.hosts.tvpaint import api as tvpaint_host + +log = logging.getLogger(__name__) + + +def safe_excepthook(*args): + traceback.print_exception(*args) + + +def main(launch_args): + # Be sure server won't crash at any moment but just print traceback + sys.excepthook = safe_excepthook + + # Create QtApplication for tools + # - QApplicaiton is also main thread/event loop of the server + qt_app = QtWidgets.QApplication([]) + + # Execute pipeline installation + api.install(tvpaint_host) + + # Create Communicator object and trigger launch + # - this must be done before anything is processed + communicator = CommunicationWrapper.create_qt_communicator(qt_app) + communicator.launch(launch_args) + + def process_in_main_thread(): + """Execution of `MainThreadItem`.""" + item = communicator.main_thread_listen() + if item: + item.execute() + + timer = QtCore.QTimer() + timer.setInterval(100) + timer.timeout.connect(process_in_main_thread) + timer.start() + + # Register terminal signal handler + def signal_handler(*_args): + print("You pressed Ctrl+C. Process ended.") + communicator.stop() + + signal.signal(signal.SIGINT, signal_handler) + signal.signal(signal.SIGTERM, signal_handler) + + qt_app.setQuitOnLastWindowClosed(False) + qt_app.setStyleSheet(style.load_stylesheet()) + + # Load avalon icon + icon_path = style.app_icon_path() + if icon_path: + icon = QtGui.QIcon(icon_path) + qt_app.setWindowIcon(icon) + + # Set application name to be able show application icon in task bar + if platform.system().lower() == "windows": + ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID( + u"WebsocketServer" + ) + + # Run Qt application event processing + sys.exit(qt_app.exec_()) + + +if __name__ == "__main__": + args = list(sys.argv) + if os.path.abspath(__file__) == os.path.normpath(args[0]): + # Pop path to script + args.pop(0) + main(args) diff --git a/openpype/hosts/tvpaint/api/lib.py b/openpype/hosts/tvpaint/api/lib.py index 539cebe646..654aff19d8 100644 --- a/openpype/hosts/tvpaint/api/lib.py +++ b/openpype/hosts/tvpaint/api/lib.py @@ -1,85 +1,534 @@ -from PIL import Image +import os +import logging +import tempfile import avalon.io -from avalon.tvpaint.lib import execute_george + +from . import CommunicationWrapper + +log = logging.getLogger(__name__) -def composite_images(input_image_paths, output_filepath): - """Composite images in order from passed list. 
+def execute_george(george_script, communicator=None): + if not communicator: + communicator = CommunicationWrapper.communicator + return communicator.execute_george(george_script) - Raises: - ValueError: When entered list is empty. + +def execute_george_through_file(george_script, communicator=None): + """Execute george script with temp file. + + Allows to execute multiline george script without stopping websocket + client. + + On windows make sure script does not contain paths with backwards + slashes in paths, TVPaint won't execute properly in that case. + + Args: + george_script (str): George script to execute. May be multilined. """ - if not input_image_paths: - raise ValueError("Nothing to composite.") + if not communicator: + communicator = CommunicationWrapper.communicator - img_obj = None - for image_filepath in input_image_paths: - _img_obj = Image.open(image_filepath) - if img_obj is None: - img_obj = _img_obj - else: - img_obj.alpha_composite(_img_obj) - img_obj.save(output_filepath) + return communicator.execute_george_through_file(george_script) -def set_context_settings(asset_doc=None): - """Set workfile settings by asset document data. +def parse_layers_data(data): + """Parse layers data loaded in 'get_layers_data'.""" + layers = [] + layers_raw = data.split("\n") + for layer_raw in layers_raw: + layer_raw = layer_raw.strip() + if not layer_raw: + continue + ( + layer_id, group_id, visible, position, opacity, name, + layer_type, + frame_start, frame_end, prelighttable, postlighttable, + selected, editable, sencil_state + ) = layer_raw.split("|") + layer = { + "layer_id": int(layer_id), + "group_id": int(group_id), + "visible": visible == "ON", + "position": int(position), + "opacity": int(opacity), + "name": name, + "type": layer_type, + "frame_start": int(frame_start), + "frame_end": int(frame_end), + "prelighttable": prelighttable == "1", + "postlighttable": postlighttable == "1", + "selected": selected == "1", + "editable": editable == "1", + "sencil_state": sencil_state + } + layers.append(layer) + return layers - Change fps, resolution and frame start/end. 
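# A hedged example of one record consumed by `parse_layers_data` above
# (field values are illustrative):
#     "10|0|ON|2|100|BG paint|Color|0|24|0|0|1|1|off"
# splits into 14 "|"-separated fields in the order the george script below
# writes them, producing:
#     {"layer_id": 10, "group_id": 0, "visible": True, "position": 2,
#      "opacity": 100, "name": "BG paint", "type": "Color",
#      "frame_start": 0, "frame_end": 24, "prelighttable": False,
#      "postlighttable": False, "selected": True, "editable": True,
#      "sencil_state": "off"}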
+ +def get_layers_data_george_script(output_filepath, layer_ids=None): + """Prepare george script which will collect all layers from workfile.""" + output_filepath = output_filepath.replace("\\", "/") + george_script_lines = [ + # Variable containing full path to output file + "output_path = \"{}\"".format(output_filepath), + # Get Current Layer ID + "tv_LayerCurrentID", + "current_layer_id = result" + ] + # Script part for getting and storing layer information to temp + layer_data_getter = ( + # Get information about layer's group + "tv_layercolor \"get\" layer_id", + "group_id = result", + "tv_LayerInfo layer_id", + ( + "PARSE result visible position opacity name" + " type startFrame endFrame prelighttable postlighttable" + " selected editable sencilState" + ), + # Check if layer ID match `tv_LayerCurrentID` + "IF CMP(current_layer_id, layer_id)==1", + # - mark layer as selected if layer id match to current layer id + "selected=1", + "END", + # Prepare line with data separated by "|" + ( + "line = layer_id'|'group_id'|'visible'|'position'|'opacity'|'" + "name'|'type'|'startFrame'|'endFrame'|'prelighttable'|'" + "postlighttable'|'selected'|'editable'|'sencilState" + ), + # Write data to output file + "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line", + ) + + # Collect data for all layers if layers are not specified + if layer_ids is None: + george_script_lines.extend(( + # Layer loop variables + "loop = 1", + "idx = 0", + # Layers loop + "WHILE loop", + "tv_LayerGetID idx", + "layer_id = result", + "idx = idx + 1", + # Stop loop if layer_id is "NONE" + "IF CMP(layer_id, \"NONE\")==1", + "loop = 0", + "ELSE", + *layer_data_getter, + "END", + "END" + )) + else: + for layer_id in layer_ids: + george_script_lines.append("layer_id = {}".format(layer_id)) + george_script_lines.extend(layer_data_getter) + + return "\n".join(george_script_lines) + + +def layers_data(layer_ids=None, communicator=None): + """Backwards compatible function of 'get_layers_data'.""" + return get_layers_data(layer_ids, communicator) + + +def get_layers_data(layer_ids=None, communicator=None): + """Collect all layers information from currently opened workfile.""" + output_file = tempfile.NamedTemporaryFile( + mode="w", prefix="a_tvp_", suffix=".txt", delete=False + ) + output_file.close() + if layer_ids is not None and isinstance(layer_ids, int): + layer_ids = [layer_ids] + + output_filepath = output_file.name + + george_script = get_layers_data_george_script(output_filepath, layer_ids) + + execute_george_through_file(george_script, communicator) + + with open(output_filepath, "r") as stream: + data = stream.read() + + output = parse_layers_data(data) + os.remove(output_filepath) + return output + + +def parse_group_data(data): + """Paser group data collected in 'get_groups_data'.""" + output = [] + groups_raw = data.split("\n") + for group_raw in groups_raw: + group_raw = group_raw.strip() + if not group_raw: + continue + + parts = group_raw.split(" ") + # Check for length and concatenate 2 last items until length match + # - this happens if name contain spaces + while len(parts) > 6: + last_item = parts.pop(-1) + parts[-1] = " ".join([parts[-1], last_item]) + clip_id, group_id, red, green, blue, name = parts + + group = { + "group_id": int(group_id), + "name": name, + "clip_id": int(clip_id), + "red": int(red), + "green": int(green), + "blue": int(blue), + } + output.append(group) + return output + + +def groups_data(communicator=None): + """Backwards compatible function of 'get_groups_data'.""" + 
+
+
+def groups_data(communicator=None):
+    """Backwards compatible function of 'get_groups_data'."""
+    return get_groups_data(communicator)
+
+
+def get_groups_data(communicator=None):
+    """Collect information about groups from the current workfile."""
+    output_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    output_file.close()
+
+    output_filepath = output_file.name.replace("\\", "/")
+    george_script_lines = (
+        # Variable containing full path to output file
+        "output_path = \"{}\"".format(output_filepath),
+        "loop = 1",
+        "FOR idx = 1 TO 12",
+        "tv_layercolor \"getcolor\" 0 idx",
+        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result",
+        "END"
+    )
+    george_script = "\n".join(george_script_lines)
+    execute_george_through_file(george_script, communicator)
+
+    with open(output_filepath, "r") as stream:
+        data = stream.read()
+
+    output = parse_group_data(data)
+    os.remove(output_filepath)
+    return output
+
+
+def get_layers_pre_post_behavior(layer_ids, communicator=None):
+    """Collect data about pre and post behavior of layer ids.
+
+    Pre and post behavior is an enumeration of possible values:
+    - "none"
+    - "repeat" / "loop"
+    - "pingpong"
+    - "hold"
+
+    Example output:
+    ```json
+    {
+        0: {
+            "pre": "none",
+            "post": "loop"
+        }
+    }
+    ```
+
+    Returns:
+        dict: Keys are layer ids, values are dictionaries with "pre" and
+            "post" keys.
     """
-    if asset_doc is None:
-        # Use current session asset if not passed
-        asset_doc = avalon.io.find_one({
-            "type": "asset",
-            "name": avalon.io.Session["AVALON_ASSET"]
-        })
+    # Skip if entered layer ids are empty
+    if not layer_ids:
+        return {}
 
-    project_doc = avalon.io.find_one({"type": "project"})
+    # Auto convert to list
+    if not isinstance(layer_ids, (list, set, tuple)):
+        layer_ids = [layer_ids]
 
-    framerate = asset_doc["data"].get("fps")
-    if framerate is None:
-        framerate = project_doc["data"].get("fps")
+    # Prepare temp file
+    output_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    output_file.close()
 
-    if framerate is not None:
-        execute_george(
-            "tv_framerate {} \"timestretch\"".format(framerate)
-        )
-    else:
-        print("Framerate was not found!")
+    output_filepath = output_file.name.replace("\\", "/")
+    george_script_lines = [
+        # Variable containing full path to output file
+        "output_path = \"{}\"".format(output_filepath),
+    ]
+    for layer_id in layer_ids:
+        george_script_lines.extend([
+            "layer_id = {}".format(layer_id),
+            "tv_layerprebehavior layer_id",
+            "pre_beh = result",
+            "tv_layerpostbehavior layer_id",
+            "post_beh = result",
+            "line = layer_id'|'pre_beh'|'post_beh",
+            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
+        ])
 
-    width_key = "resolutionWidth"
-    height_key = "resolutionHeight"
+    george_script = "\n".join(george_script_lines)
+    execute_george_through_file(george_script, communicator)
 
-    width = asset_doc["data"].get(width_key)
-    height = asset_doc["data"].get(height_key)
-    if width is None or height is None:
-        width = project_doc["data"].get(width_key)
-        height = project_doc["data"].get(height_key)
+    # Read data
+    with open(output_filepath, "r") as stream:
+        data = stream.read()
 
-    if width is None or height is None:
-        print("Resolution was not found!")
-    else:
-        execute_george("tv_resizepage {} {} 0".format(width, height))
+    # Remove temp file
+    os.remove(output_filepath)
 
-    frame_start = asset_doc["data"].get("frameStart")
-    frame_end = asset_doc["data"].get("frameEnd")
+    # Parse data
+    output = {}
+    raw_lines = data.split("\n")
+    for raw_line in raw_lines:
+        line = raw_line.strip()
+        if not line:
+            continue
+        parts = line.split("|")
+        if len(parts) != 3:
+            continue
+        layer_id, pre_beh, post_beh = parts
+        output[int(layer_id)] = {
+            "pre": pre_beh.lower(),
+            "post": post_beh.lower()
+        }
+    return output
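A hedged usage sketch; the layer ids and resulting values are hypothetical and a running TVPaint session is required:

```python
# Query pre/post behavior for two hypothetical layer ids
behavior_by_layer_id = get_layers_pre_post_behavior([0, 1])
for layer_id, behavior in behavior_by_layer_id.items():
    # e.g. behavior == {"pre": "hold", "post": "loop"}
    print(layer_id, behavior["pre"], behavior["post"])
```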
 
-    if frame_start is None or frame_end is None:
-        print("Frame range was not found!")
-        return
 
-    handles = asset_doc["data"].get("handles") or 0
-    handle_start = asset_doc["data"].get("handleStart")
-    handle_end = asset_doc["data"].get("handleEnd")
+def get_layers_exposure_frames(layer_ids, layers_data=None, communicator=None):
+    """Get exposure frames.
 
-    if handle_start is None or handle_end is None:
-        handle_start = handles
-        handle_end = handles
+    Simply put, returns the frames where keyframes are. Recognized with the
+    george function `tv_exposureinfo` returning "Head".
 
-    # Always start from 0 Mark In and set only Mark Out
-    mark_in = 0
-    mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
+    Args:
+        layer_ids (list): Ids of layers for which to look up exposure
+            frames.
+        layers_data (list): Precollected layers data. If not passed,
+            'get_layers_data' is used.
+        communicator (BaseCommunicator): Communicator used for communication
+            with TVPaint.
 
-    execute_george("tv_markin {} set".format(mark_in))
-    execute_george("tv_markout {} set".format(mark_out))
+    Returns:
+        dict: Frames where exposure is set to "Head" by layer id.
+    """
+
+    if layers_data is None:
+        layers_data = get_layers_data(layer_ids)
+    _layers_by_id = {
+        layer["layer_id"]: layer
+        for layer in layers_data
+    }
+    layers_by_id = {
+        layer_id: _layers_by_id.get(layer_id)
+        for layer_id in layer_ids
+    }
+    tmp_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    tmp_file.close()
+    tmp_output_path = tmp_file.name.replace("\\", "/")
+    george_script_lines = [
+        "output_path = \"{}\"".format(tmp_output_path)
+    ]
+
+    output = {}
+    layer_id_mapping = {}
+    for layer_id, layer_data in layers_by_id.items():
+        layer_id_mapping[str(layer_id)] = layer_id
+        output[layer_id] = []
+        if not layer_data:
+            continue
+        first_frame = layer_data["frame_start"]
+        last_frame = layer_data["frame_end"]
+        george_script_lines.extend([
+            "line = \"\"",
+            "layer_id = {}".format(layer_id),
+            "line = line''layer_id",
+            "tv_layerset layer_id",
+            "frame = {}".format(first_frame),
+            "WHILE (frame <= {})".format(last_frame),
+            "tv_exposureinfo frame",
+            "exposure = result",
+            "IF (CMP(exposure, \"Head\") == 1)",
+            "line = line'|'frame",
+            "END",
+            "frame = frame + 1",
+            "END",
+            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' line"
+        ])
+
+    execute_george_through_file("\n".join(george_script_lines), communicator)
+
+    with open(tmp_output_path, "r") as stream:
+        data = stream.read()
+
+    os.remove(tmp_output_path)
+
+    lines = []
+    for line in data.split("\n"):
+        line = line.strip()
+        if line:
+            lines.append(line)
+
+    for line in lines:
+        line_items = list(line.split("|"))
+        layer_id = line_items.pop(0)
+        _layer_id = layer_id_mapping[layer_id]
+        output[_layer_id] = [int(frame) for frame in line_items]
+
+    return output
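A hedged sketch showing how precollected layer data can be reused to avoid a second george round-trip (requires a running TVPaint session; the resulting values are invented):

```python
# Collect layers once, then reuse them for the exposure query
layers = get_layers_data()
layer_ids = [layer["layer_id"] for layer in layers]
exposure_frames_by_id = get_layers_exposure_frames(
    layer_ids, layers_data=layers
)
# e.g. {0: [0, 5, 12], 1: [0]} - frames whose exposure info is "Head"
```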
+
+
+def get_exposure_frames(
+    layer_id, first_frame=None, last_frame=None, communicator=None
+):
+    """Get exposure frames.
+
+    Simply put, returns the frames where keyframes are. Recognized with the
+    george function `tv_exposureinfo` returning "Head".
+
+    Args:
+        layer_id (int): Id of the layer for which to look up exposure
+            frames.
+        first_frame (int): Frame from which to start looking for exposure
+            frames. Layer's first frame is used if not entered.
+        last_frame (int): Last frame where to look for exposure frames.
+            Layer's last frame is used if not entered.
+
+    Returns:
+        list: Frames where exposure is set to "Head".
+    """
+    if first_frame is None or last_frame is None:
+        layer = layers_data(layer_id)[0]
+        if first_frame is None:
+            first_frame = layer["frame_start"]
+        if last_frame is None:
+            last_frame = layer["frame_end"]
+
+    tmp_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    tmp_file.close()
+    tmp_output_path = tmp_file.name.replace("\\", "/")
+    george_script_lines = [
+        "tv_layerset {}".format(layer_id),
+        "output_path = \"{}\"".format(tmp_output_path),
+        "output = \"\"",
+        "frame = {}".format(first_frame),
+        "WHILE (frame <= {})".format(last_frame),
+        "tv_exposureinfo frame",
+        "exposure = result",
+        "IF (CMP(exposure, \"Head\") == 1)",
+        "IF (CMP(output, \"\") == 1)",
+        "output = output''frame",
+        "ELSE",
+        "output = output'|'frame",
+        "END",
+        "END",
+        "frame = frame + 1",
+        "END",
+        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' output"
+    ]
+
+    execute_george_through_file("\n".join(george_script_lines), communicator)
+
+    with open(tmp_output_path, "r") as stream:
+        data = stream.read()
+
+    os.remove(tmp_output_path)
+
+    lines = []
+    for line in data.split("\n"):
+        line = line.strip()
+        if line:
+            lines.append(line)
+
+    exposure_frames = []
+    for line in lines:
+        for frame in line.split("|"):
+            exposure_frames.append(int(frame))
+    return exposure_frames
+
+
+def get_scene_data(communicator=None):
+    """Scene data of currently opened scene.
+
+    Result contains resolution, pixel aspect, fps, mark in/out with states,
+    frame start and background color.
+
+    Returns:
+        dict: Scene data collected with several george calls.
+    """
+    workfile_info = execute_george("tv_projectinfo", communicator)
+    workfile_info_parts = workfile_info.split(" ")
+
+    # Project frame start - not used
+    workfile_info_parts.pop(-1)
+    field_order = workfile_info_parts.pop(-1)
+    frame_rate = float(workfile_info_parts.pop(-1))
+    pixel_aspect = float(workfile_info_parts.pop(-1))
+    height = int(workfile_info_parts.pop(-1))
+    width = int(workfile_info_parts.pop(-1))
+
+    # Marks are returned as "{frame - 1} {state} ", for example "0 set".
+    result = execute_george("tv_markin", communicator)
+    mark_in_frame, mark_in_state, _ = result.split(" ")
+
+    result = execute_george("tv_markout", communicator)
+    mark_out_frame, mark_out_state, _ = result.split(" ")
+
+    start_frame = execute_george("tv_startframe", communicator)
+    return {
+        "width": width,
+        "height": height,
+        "pixel_aspect": pixel_aspect,
+        "fps": frame_rate,
+        "field_order": field_order,
+        "mark_in": int(mark_in_frame),
+        "mark_in_state": mark_in_state,
+        "mark_in_set": mark_in_state == "set",
+        "mark_out": int(mark_out_frame),
+        "mark_out_state": mark_out_state,
+        "mark_out_set": mark_out_state == "set",
+        "start_frame": int(start_frame),
+        "bg_color": get_scene_bg_color(communicator)
+    }
+
+
+def get_scene_bg_color(communicator=None):
+    """Background color set on scene.
+
+    Important for review export where the scene bg color is used as
+    background.
+    """
+    output_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    output_file.close()
+    output_filepath = output_file.name.replace("\\", "/")
+    george_script_lines = [
+        # Variable containing full path to output file
+        "output_path = \"{}\"".format(output_filepath),
+        "tv_background",
+        "bg_color = result",
+        # Write data to output file
+        "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' bg_color"
+    ]
+
+    george_script = "\n".join(george_script_lines)
+    execute_george_through_file(george_script, communicator)
+
+    with open(output_filepath, "r") as stream:
+        data = stream.read()
+
+    os.remove(output_filepath)
+    data = data.strip()
+    if not data:
+        return None
+    return data.split(" ")
diff --git a/openpype/hosts/tvpaint/api/pipeline.py b/openpype/hosts/tvpaint/api/pipeline.py
new file mode 100644
index 0000000000..e7c5159bbc
--- /dev/null
+++ b/openpype/hosts/tvpaint/api/pipeline.py
@@ -0,0 +1,491 @@
+import os
+import json
+import contextlib
+import tempfile
+import logging
+
+import requests
+
+import pyblish.api
+import avalon.api
+
+from avalon import io
+from avalon.pipeline import AVALON_CONTAINER_ID
+
+from openpype.hosts import tvpaint
+from openpype.api import get_current_project_settings
+
+from .lib import (
+    execute_george,
+    execute_george_through_file
+)
+
+log = logging.getLogger(__name__)
+
+HOST_DIR = os.path.dirname(os.path.abspath(tvpaint.__file__))
+PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
+PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
+LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
+CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
+
+METADATA_SECTION = "avalon"
+SECTION_NAME_CONTEXT = "context"
+SECTION_NAME_INSTANCES = "instances"
+SECTION_NAME_CONTAINERS = "containers"
+# Maximum length of metadata chunk string
+# TODO find out the max (500 is safe enough)
+TVPAINT_CHUNK_LENGTH = 500
+
+"""TVPaint's Metadata
+
+Metadata are stored in TVPaint's workfile.
+
+The workfile format works similarly to an .ini file but has a few
+limitations. The most important one is that the value under a key has a
+limited length. Due to this limitation, each metadata section/key stores
+the number of "subkeys" that are related to the section.
+
+Example:
+Metadata key `"instances"` may store the value "2". In that case it is
+expected that there are also keys `["instances0", "instances1"]`.
+
+Workfile data looks like:
+```
+[avalon]
+instances0=[{{__dq__}id{__dq__}: {__dq__}pyblish.avalon.instance{__dq__...
+instances1=...more data...
+instances=2
+```
+"""
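As a rough sketch of the write side of this convention (the actual implementation lives in `write_workfile_metadata` further below), a JSON payload would be escaped, chunked and laid out like this; the sample payload is invented:

```python
# Illustration only: escape quotes, split into chunks, emit key/value pairs
payload = json.dumps([{"id": "pyblish.avalon.instance"}])
payload = payload.replace("'", "{__sq__}").replace("\"", "{__dq__}")
chunks = [
    payload[idx:idx + TVPAINT_CHUNK_LENGTH]
    for idx in range(0, len(payload), TVPAINT_CHUNK_LENGTH)
]
# The plain key stores the chunk count, indexed keys store the data
print("instances={}".format(len(chunks)))
for idx, chunk in enumerate(chunks):
    print("instances{}={}".format(idx, chunk))
```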
+
+
+def install():
+    """Install TVPaint-specific functionality of avalon-core.
+
+    This function is called automatically on calling `api.install(tvpaint)`.
+
+    """
+    log.info("OpenPype - Installing TVPaint integration")
+    io.install()
+
+    # Create workdir folder if does not exist yet
+    workdir = io.Session["AVALON_WORKDIR"]
+    if not os.path.exists(workdir):
+        os.makedirs(workdir)
+
+    pyblish.api.register_host("tvpaint")
+    pyblish.api.register_plugin_path(PUBLISH_PATH)
+    avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
+    avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
+
+    registered_callbacks = (
+        pyblish.api.registered_callbacks().get("instanceToggled") or []
+    )
+    if on_instance_toggle not in registered_callbacks:
+        pyblish.api.register_callback("instanceToggled", on_instance_toggle)
+
+    avalon.api.on("application.launched", initial_launch)
+    avalon.api.on("application.exit", application_exit)
+
+
+def uninstall():
+    """Uninstall TVPaint-specific functionality of avalon-core.
+
+    This function is called automatically on calling `api.uninstall()`.
+
+    """
+    log.info("OpenPype - Uninstalling TVPaint integration")
+    pyblish.api.deregister_host("tvpaint")
+    pyblish.api.deregister_plugin_path(PUBLISH_PATH)
+    avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
+    avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
+
+
+def containerise(
+    name, namespace, members, context, loader, current_containers=None
+):
+    """Add new container to metadata.
+
+    Args:
+        name (str): Container name.
+        namespace (str): Container namespace.
+        members (list): List of members that were loaded and belong
+            to the container (layer names).
+        context (dict): Context of the loaded representation.
+        loader (str): Name of the loader that created the container.
+        current_containers (list): Preloaded containers. Should be used only
+            on update/switch when containers were modified during the process.
+
+    Returns:
+        dict: Container data stored to workfile metadata.
+    """
+
+    container_data = {
+        "schema": "openpype:container-2.0",
+        "id": AVALON_CONTAINER_ID,
+        "members": members,
+        "name": name,
+        "namespace": namespace,
+        "loader": str(loader),
+        "representation": str(context["representation"]["_id"])
+    }
+    if current_containers is None:
+        current_containers = ls()
+
+    # Add container to containers list
+    current_containers.append(container_data)
+
+    # Store data to metadata
+    write_workfile_metadata(SECTION_NAME_CONTAINERS, current_containers)
+
+    return container_data
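A hedged sketch of how a loader might register its loaded layers through `containerise`; the `load` signature follows avalon's `Loader` convention and the member names are hypothetical:

```python
# Hypothetical loader method using containerise (illustration only)
def load(self, context, name=None, namespace=None, options=None):
    members = ["hero_render_001"]  # names of layers created by the load
    return containerise(
        name=name,
        namespace=namespace,
        members=members,
        context=context,
        loader=self.__class__.__name__
    )
```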
+
+
+@contextlib.contextmanager
+def maintained_selection():
+    # TODO implement logic
+    try:
+        yield
+    finally:
+        pass
+
+
+def split_metadata_string(text, chunk_length=None):
+    """Split string by length.
+
+    Split text into chunks of the entered length.
+    Example:
+        ```python
+        text = "ABCDEFGHIJKLM"
+        result = split_metadata_string(text, 3)
+        print(result)
+        >>> ['ABC', 'DEF', 'GHI', 'JKL', 'M']
+        ```
+
+    Args:
+        text (str): Text that will be split into chunks.
+        chunk_length (int): Single chunk size. Default chunk_length is
+            set to global variable `TVPAINT_CHUNK_LENGTH`.
+
+    Returns:
+        list: List of strings with at least one item.
+    """
+    if chunk_length is None:
+        chunk_length = TVPAINT_CHUNK_LENGTH
+    chunks = []
+    for idx in range(chunk_length, len(text) + chunk_length, chunk_length):
+        start_idx = idx - chunk_length
+        chunks.append(text[start_idx:idx])
+    return chunks
+
+
+def get_workfile_metadata_string_for_keys(metadata_keys):
+    """Read metadata for specific keys from current project workfile.
+
+    All values from the entered keys are stored into a single string without
+    a separator.
+
+    The function is designed to help get all values for one metadata key at
+    once, so the order of passed keys matters.
+
+    Args:
+        metadata_keys (list, str): Metadata keys for which data should be
+            retrieved. Order of keys matters! It is possible to enter only
+            a single key as a string.
+    """
+    # Add ability to pass only single key
+    if isinstance(metadata_keys, str):
+        metadata_keys = [metadata_keys]
+
+    output_file = tempfile.NamedTemporaryFile(
+        mode="w", prefix="a_tvp_", suffix=".txt", delete=False
+    )
+    output_file.close()
+    output_filepath = output_file.name.replace("\\", "/")
+
+    george_script_parts = []
+    george_script_parts.append(
+        "output_path = \"{}\"".format(output_filepath)
+    )
+    # Store data for each index of metadata key
+    for metadata_key in metadata_keys:
+        george_script_parts.append(
+            "tv_readprojectstring \"{}\" \"{}\" \"\"".format(
+                METADATA_SECTION, metadata_key
+            )
+        )
+        george_script_parts.append(
+            "tv_writetextfile \"strict\" \"append\" '\"'output_path'\"' result"
+        )
+
+    # Execute the script
+    george_script = "\n".join(george_script_parts)
+    execute_george_through_file(george_script)
+
+    # Load data from temp file
+    with open(output_filepath, "r") as stream:
+        file_content = stream.read()
+
+    # Remove `\n` from content
+    output_string = file_content.replace("\n", "")
+
+    # Delete temp file
+    os.remove(output_filepath)
+
+    return output_string
+
+
+def get_workfile_metadata_string(metadata_key):
+    """Read metadata for specific key from current project workfile."""
+    result = get_workfile_metadata_string_for_keys([metadata_key])
+    if not result:
+        return None
+
+    stripped_result = result.strip()
+    if not stripped_result:
+        return None
+
+    # NOTE Backwards compatibility for workfiles where the metadata key did
+    #   not store a count of key indexes but the value itself
+    # NOTE We don't have to care about negative values with `isdecimal` check
+    if not stripped_result.isdecimal():
+        metadata_string = result
+    else:
+        keys = []
+        for idx in range(int(stripped_result)):
+            keys.append("{}{}".format(metadata_key, idx))
+        metadata_string = get_workfile_metadata_string_for_keys(keys)
+
+    # Replace quote placeholders with their values
+    metadata_string = (
+        metadata_string
+        .replace("{__sq__}", "'")
+        .replace("{__dq__}", "\"")
+    )
+    return metadata_string
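A short hedged usage sketch tying the reader together with the constants defined above:

```python
# Read the raw string for one key and decode it as JSON
metadata_string = get_workfile_metadata_string(SECTION_NAME_INSTANCES)
instances = json.loads(metadata_string) if metadata_string else []
```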
+
+
+def get_workfile_metadata(metadata_key, default=None):
+    """Read and parse metadata for specific key from current project workfile.
+
+    The pipeline uses this function for loaded containers and created
+    instances, stored under the keys in the `SECTION_NAME_INSTANCES` and
+    `SECTION_NAME_CONTAINERS` constants.
+
+    Args:
+        metadata_key (str): Key defining which metadata should be read. The
+            stored value is expected to be a JSON serializable string.
+    """
+    if default is None:
+        default = []
+
+    json_string = get_workfile_metadata_string(metadata_key)
+    if json_string:
+        try:
+            return json.loads(json_string)
+        except json.decoder.JSONDecodeError:
+            # TODO remove when backwards compatibility of storing metadata
+            #   will be removed
+            print((
+                "Fixed invalid metadata in workfile."
+                " Not serializable string was: {}"
+            ).format(json_string))
+            write_workfile_metadata(metadata_key, default)
+    return default
+
+
+def write_workfile_metadata(metadata_key, value):
+    """Write metadata for specific key into current project workfile.
+
+    George scripts have a specific way of working with quotes, which this
+    function handles automatically.
+
+    Args:
+        metadata_key (str): Key under which the value will be stored.
+        value (dict, list, str): Data to store; must be JSON serializable.
+    """
+    if isinstance(value, (dict, list)):
+        value = json.dumps(value)
+
+    if not value:
+        value = ""
+
+    # Handle quotes in dumped json string
+    # - replace single and double quotes with placeholders
+    value = (
+        value
+        .replace("'", "{__sq__}")
+        .replace("\"", "{__dq__}")
+    )
+    chunks = split_metadata_string(value)
+    chunks_len = len(chunks)
+
+    write_template = "tv_writeprojectstring \"{}\" \"{}\" \"{}\""
+    george_script_parts = []
+    # Add information about chunks length to metadata key itself
+    george_script_parts.append(
+        write_template.format(METADATA_SECTION, metadata_key, chunks_len)
+    )
+    # Add chunk values to indexed metadata keys
+    for idx, chunk_value in enumerate(chunks):
+        sub_key = "{}{}".format(metadata_key, idx)
+        george_script_parts.append(
+            write_template.format(METADATA_SECTION, sub_key, chunk_value)
+        )
+
+    george_script = "\n".join(george_script_parts)
+
+    return execute_george_through_file(george_script)
+
+
+def get_current_workfile_context():
+    """Return the context in which the workfile was saved."""
+    return get_workfile_metadata(SECTION_NAME_CONTEXT, {})
+
+
+def save_current_workfile_context(context):
+    """Save the context that was used to create the workfile."""
+    return write_workfile_metadata(SECTION_NAME_CONTEXT, context)
+
+
+def remove_instance(instance):
+    """Remove instance from current workfile metadata."""
+    current_instances = get_workfile_metadata(SECTION_NAME_INSTANCES)
+    instance_id = instance.get("uuid")
+    found_idx = None
+    if instance_id:
+        for idx, _inst in enumerate(current_instances):
+            if _inst["uuid"] == instance_id:
+                found_idx = idx
+                break
+
+    if found_idx is None:
+        return
+    current_instances.pop(found_idx)
+    write_instances(current_instances)
+
+
+def list_instances():
+    """List all created instances from current workfile."""
+    return get_workfile_metadata(SECTION_NAME_INSTANCES)
+
+
+def write_instances(data):
+    return write_workfile_metadata(SECTION_NAME_INSTANCES, data)
+
+
+# Backwards compatibility
+def _write_instances(*args, **kwargs):
+    return write_instances(*args, **kwargs)
+
+
+def ls():
+    return get_workfile_metadata(SECTION_NAME_CONTAINERS)
+
+
+def on_instance_toggle(instance, old_value, new_value):
+    """Update instance data in workfile on publish toggle."""
+    # Review may not have real instance in workfile metadata
+    if not instance.data.get("uuid"):
+        return
+
+    instance_id = instance.data["uuid"]
+    found_idx = None
+    current_instances = list_instances()
+    for idx, workfile_instance in enumerate(current_instances):
+        if workfile_instance["uuid"] == instance_id:
+            found_idx = idx
+            break
+
+    if found_idx is None:
+        return
+
+    if "active" in current_instances[found_idx]:
+        current_instances[found_idx]["active"] = new_value
+        write_instances(current_instances)
+
+
+def initial_launch():
+    # Set up project settings if it's the template that's launched.
+    # TODO also check for template creation when it's possible to define
+    #   templates
+    last_workfile = os.environ.get("AVALON_LAST_WORKFILE")
+    if not last_workfile or os.path.exists(last_workfile):
+        return
+
+    log.info("Setting up project...")
+    set_context_settings()
+
+
+def application_exit():
+    data = get_current_project_settings()
+    stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
+
+    if not stop_timer:
+        return
+
+    # Stop application timer.
+ webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL") + rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url) + requests.post(rest_api_url) + + +def set_context_settings(asset_doc=None): + """Set workfile settings by asset document data. + + Change fps, resolution and frame start/end. + """ + if asset_doc is None: + # Use current session asset if not passed + asset_doc = avalon.io.find_one({ + "type": "asset", + "name": avalon.io.Session["AVALON_ASSET"] + }) + + project_doc = avalon.io.find_one({"type": "project"}) + + framerate = asset_doc["data"].get("fps") + if framerate is None: + framerate = project_doc["data"].get("fps") + + if framerate is not None: + execute_george( + "tv_framerate {} \"timestretch\"".format(framerate) + ) + else: + print("Framerate was not found!") + + width_key = "resolutionWidth" + height_key = "resolutionHeight" + + width = asset_doc["data"].get(width_key) + height = asset_doc["data"].get(height_key) + if width is None or height is None: + width = project_doc["data"].get(width_key) + height = project_doc["data"].get(height_key) + + if width is None or height is None: + print("Resolution was not found!") + else: + execute_george( + "tv_resizepage {} {} 0".format(width, height) + ) + + frame_start = asset_doc["data"].get("frameStart") + frame_end = asset_doc["data"].get("frameEnd") + + if frame_start is None or frame_end is None: + print("Frame range was not found!") + return + + handles = asset_doc["data"].get("handles") or 0 + handle_start = asset_doc["data"].get("handleStart") + handle_end = asset_doc["data"].get("handleEnd") + + if handle_start is None or handle_end is None: + handle_start = handles + handle_end = handles + + # Always start from 0 Mark In and set only Mark Out + mark_in = 0 + mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end + + execute_george("tv_markin {} set".format(mark_in)) + execute_george("tv_markout {} set".format(mark_out)) diff --git a/openpype/hosts/tvpaint/api/plugin.py b/openpype/hosts/tvpaint/api/plugin.py index e148e44a27..e65c25b8d1 100644 --- a/openpype/hosts/tvpaint/api/plugin.py +++ b/openpype/hosts/tvpaint/api/plugin.py @@ -1,8 +1,21 @@ +import re +import uuid + +import avalon.api + from openpype.api import PypeCreatorMixin -from avalon.tvpaint import pipeline +from openpype.hosts.tvpaint.api import ( + pipeline, + lib +) -class Creator(PypeCreatorMixin, pipeline.Creator): +class Creator(PypeCreatorMixin, avalon.api.Creator): + def __init__(self, *args, **kwargs): + super(Creator, self).__init__(*args, **kwargs) + # Add unified identifier created with `uuid` module + self.data["uuid"] = str(uuid.uuid4()) + @classmethod def get_dynamic_data(cls, *args, **kwargs): dynamic_data = super(Creator, cls).get_dynamic_data(*args, **kwargs) @@ -17,3 +30,95 @@ class Creator(PypeCreatorMixin, pipeline.Creator): if "task" not in dynamic_data and task_name: dynamic_data["task"] = task_name return dynamic_data + + @staticmethod + def are_instances_same(instance_1, instance_2): + """Compare instances but skip keys with unique values. + + During compare are skiped keys that will be 100% sure + different on new instance, like "id". + + Returns: + bool: True if instances are same. 
+        """
+        if (
+            not isinstance(instance_1, dict)
+            or not isinstance(instance_2, dict)
+        ):
+            return instance_1 == instance_2
+
+        checked_keys = set()
+        checked_keys.add("id")
+        for key, value in instance_1.items():
+            if key not in checked_keys:
+                if key not in instance_2:
+                    return False
+                if value != instance_2[key]:
+                    return False
+            checked_keys.add(key)
+
+        for key in instance_2.keys():
+            if key not in checked_keys:
+                return False
+        return True
+
+    def write_instances(self, data):
+        self.log.debug(
+            "Storing instance data to workfile. {}".format(str(data))
+        )
+        return pipeline.write_instances(data)
+
+    def process(self):
+        data = pipeline.list_instances()
+        data.append(self.data)
+        self.write_instances(data)
+
+
+class Loader(avalon.api.Loader):
+    hosts = ["tvpaint"]
+
+    @staticmethod
+    def get_members_from_container(container):
+        if "members" not in container and "objectName" in container:
+            # Backwards compatibility
+            layer_ids_str = container.get("objectName")
+            return [
+                int(layer_id) for layer_id in layer_ids_str.split("|")
+            ]
+        return container["members"]
+
+    def get_unique_layer_name(self, asset_name, name):
+        """Layer name with counter as suffix.
+
+        Finds the highest 3 digit suffix among all layer names in the scene
+        matching the regex `{asset_name}_{name}_{suffix}` and uses it as the
+        base for the next number. If the scene does not contain a matching
+        layer, `0` is used as the base.
+
+        Args:
+            asset_name (str): Name of subset's parent asset document.
+            name (str): Name of loaded subset.
+
+        Returns:
+            (str): `{asset_name}_{name}_{highest suffix + 1}`
+        """
+        layer_name_base = "{}_{}".format(asset_name, name)
+
+        counter_regex = re.compile(r"_(\d{3})$")
+
+        higher_counter = 0
+        for layer in lib.get_layers_data():
+            layer_name = layer["name"]
+            if not layer_name.startswith(layer_name_base):
+                continue
+            number_subpart = layer_name[len(layer_name_base):]
+            groups = counter_regex.findall(number_subpart)
+            if len(groups) != 1:
+                continue
+
+            counter = int(groups[0])
+            if counter > higher_counter:
+                higher_counter = counter
+            continue
+
+        return "{}_{:0>3d}".format(layer_name_base, higher_counter + 1)
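For example (hypothetical workfile state, where `loader` is an instance of a `Loader` subclass): with existing layers named `hero_render_001` and `hero_render_003`, the highest suffix wins:

```python
# Illustration only - the result depends on layers present in the workfile
loader.get_unique_layer_name("hero", "render")
# -> "hero_render_004"
```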
diff --git a/openpype/hosts/tvpaint/api/workio.py b/openpype/hosts/tvpaint/api/workio.py
new file mode 100644
index 0000000000..c513bec6cf
--- /dev/null
+++ b/openpype/hosts/tvpaint/api/workio.py
@@ -0,0 +1,55 @@
+"""Host API required for Work Files.
+# TODO @iLLiCiT implement functions:
+    has_unsaved_changes
+"""
+
+from avalon import api
+from .lib import (
+    execute_george,
+    execute_george_through_file
+)
+from .pipeline import save_current_workfile_context
+
+
+def open_file(filepath):
+    """Open the scene file in TVPaint."""
+    george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
+        filepath.replace("\\", "/")
+    )
+    return execute_george_through_file(george_script)
+
+
+def save_file(filepath):
+    """Save the open scene file."""
+    # Store context to workfile before save
+    context = {
+        "project": api.Session["AVALON_PROJECT"],
+        "asset": api.Session["AVALON_ASSET"],
+        "task": api.Session["AVALON_TASK"]
+    }
+    save_current_workfile_context(context)
+
+    # Execute george script to save workfile.
+    george_script = "tv_SaveProject {}".format(filepath.replace("\\", "/"))
+    return execute_george(george_script)
+
+
+def current_file():
+    """Return the path of the open scene file."""
+    george_script = "tv_GetProjectName"
+    return execute_george(george_script)
+
+
+def has_unsaved_changes():
+    """Does the open scene file have unsaved changes?"""
+    return False
+
+
+def file_extensions():
+    """Return the supported file extensions for TVPaint scene files."""
+    return api.HOST_WORKFILE_EXTENSIONS["tvpaint"]
+
+
+def work_root(session):
+    """Return the default root to browse for work files."""
+    return session["AVALON_WORKDIR"]
diff --git a/openpype/hosts/tvpaint/hooks/pre_launch_args.py b/openpype/hosts/tvpaint/hooks/pre_launch_args.py
index b0b13529ca..62fd662d79 100644
--- a/openpype/hosts/tvpaint/hooks/pre_launch_args.py
+++ b/openpype/hosts/tvpaint/hooks/pre_launch_args.py
@@ -44,10 +44,6 @@ class TvpaintPrelaunchHook(PreLaunchHook):
         self.launch_context.launch_args.extend(remainders)
 
     def launch_script_path(self):
-        avalon_dir = os.path.dirname(os.path.abspath(avalon.__file__))
-        script_path = os.path.join(
-            avalon_dir,
-            "tvpaint",
-            "launch_script.py"
-        )
-        return script_path
\ No newline at end of file
+        from openpype.hosts.tvpaint import get_launch_script_path
+
+        return get_launch_script_path()
diff --git a/openpype/hosts/tvpaint/plugins/create/create_render_layer.py b/openpype/hosts/tvpaint/plugins/create/create_render_layer.py
index af6c0f0eee..40a7d15990 100644
--- a/openpype/hosts/tvpaint/plugins/create/create_render_layer.py
+++ b/openpype/hosts/tvpaint/plugins/create/create_render_layer.py
@@ -1,11 +1,12 @@
 from avalon.api import CreatorError
-from avalon.tvpaint import (
+
+from openpype.lib import prepare_template_data
+from openpype.hosts.tvpaint.api import (
+    plugin,
     pipeline,
     lib,
     CommunicationWrapper
 )
-from openpype.hosts.tvpaint.api import plugin
-from openpype.lib import prepare_template_data
 
 
 class CreateRenderlayer(plugin.Creator):
@@ -56,7 +57,7 @@ class CreateRenderlayer(plugin.Creator):
         # Validate that communication is initialized
         if CommunicationWrapper.communicator:
             # Get currently selected layers
-            layers_data = lib.layers_data()
+            layers_data = lib.get_layers_data()
 
             selected_layers = [
                 layer
@@ -75,7 +76,7 @@ class CreateRenderlayer(plugin.Creator):
     def process(self):
         self.log.debug("Query data from workfile.")
         instances = pipeline.list_instances()
-        layers_data = lib.layers_data()
+        layers_data = lib.get_layers_data()
 
         self.log.debug("Checking for selection groups.")
         # Collect group ids from selection
@@ -102,7 +103,7 @@ class CreateRenderlayer(plugin.Creator):
         self.log.debug(f"Selected group id is \"{group_id}\".")
         self.data["group_id"] = group_id
 
-        group_data = lib.groups_data()
+        group_data = lib.get_groups_data()
         group_name = None
         for group in group_data:
             if group["group_id"] == group_id:
@@ -169,7 +170,7 @@ class CreateRenderlayer(plugin.Creator):
             return
 
         self.log.debug("Querying groups data from workfile.")
-        groups_data = lib.groups_data()
+        groups_data = lib.get_groups_data()
 
         self.log.debug("Changing name of the group.")
         selected_group = None
@@ -196,6 +197,7 @@ class CreateRenderlayer(plugin.Creator):
         )
 
     def _ask_user_subset_override(self, instance):
+        from Qt import QtCore
        from Qt.QtWidgets import QMessageBox
 
         title = "Subset \"{}\" already exist".format(instance["subset"])
@@ -205,6 +207,10 @@ class CreateRenderlayer(plugin.Creator):
         ).format(instance["subset"])
 
         dialog = QMessageBox()
+        dialog.setWindowFlags(
+            dialog.windowFlags()
+            | 
QtCore.Qt.WindowStaysOnTopHint + ) dialog.setWindowTitle(title) dialog.setText(text) dialog.setStandardButtons(QMessageBox.Yes | QMessageBox.No) diff --git a/openpype/hosts/tvpaint/plugins/create/create_render_pass.py b/openpype/hosts/tvpaint/plugins/create/create_render_pass.py index ad06520210..af962052fc 100644 --- a/openpype/hosts/tvpaint/plugins/create/create_render_pass.py +++ b/openpype/hosts/tvpaint/plugins/create/create_render_pass.py @@ -1,11 +1,11 @@ from avalon.api import CreatorError -from avalon.tvpaint import ( +from openpype.lib import prepare_template_data +from openpype.hosts.tvpaint.api import ( + plugin, pipeline, lib, CommunicationWrapper ) -from openpype.hosts.tvpaint.api import plugin -from openpype.lib import prepare_template_data class CreateRenderPass(plugin.Creator): diff --git a/openpype/hosts/tvpaint/plugins/load/load_image.py b/openpype/hosts/tvpaint/plugins/load/load_image.py index f77fab87f8..1246fe8248 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_image.py @@ -1,8 +1,8 @@ from avalon.vendor import qargparse -from avalon.tvpaint import lib, pipeline +from openpype.hosts.tvpaint.api import lib, plugin -class ImportImage(pipeline.Loader): +class ImportImage(plugin.Loader): """Load image or image sequence to TVPaint as new layer.""" families = ["render", "image", "background", "plate"] diff --git a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py index b8b20ed20a..b5e0a86686 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_reference_image.py +++ b/openpype/hosts/tvpaint/plugins/load/load_reference_image.py @@ -1,10 +1,10 @@ import collections from avalon.pipeline import get_representation_context from avalon.vendor import qargparse -from avalon.tvpaint import lib, pipeline +from openpype.hosts.tvpaint.api import lib, pipeline, plugin -class LoadImage(pipeline.Loader): +class LoadImage(plugin.Loader): """Load image or image sequence to TVPaint as new layer.""" families = ["render", "image", "background", "plate"] diff --git a/openpype/hosts/tvpaint/plugins/load/load_sound.py b/openpype/hosts/tvpaint/plugins/load/load_sound.py index c83748fe06..3f42370f5c 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_sound.py +++ b/openpype/hosts/tvpaint/plugins/load/load_sound.py @@ -1,9 +1,9 @@ import os import tempfile -from avalon.tvpaint import lib, pipeline +from openpype.hosts.tvpaint.api import lib, plugin -class ImportSound(pipeline.Loader): +class ImportSound(plugin.Loader): """Load sound to TVPaint. 
Sound layers does not have ids but only position index so we can't diff --git a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py index f410a1ab9d..33e2a76cc9 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py +++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py @@ -1,16 +1,16 @@ import getpass import os -from avalon.tvpaint import lib, pipeline, get_current_workfile_context from avalon import api, io from openpype.lib import ( get_workfile_template_key_from_context, get_workdir_data ) from openpype.api import Anatomy +from openpype.hosts.tvpaint.api import lib, pipeline, plugin -class LoadWorkfile(pipeline.Loader): +class LoadWorkfile(plugin.Loader): """Load workfile.""" families = ["workfile"] @@ -24,7 +24,7 @@ class LoadWorkfile(pipeline.Loader): host = api.registered_host() current_file = host.current_file() - context = get_current_workfile_context() + context = pipeline.get_current_workfile_context() filepath = self.fname.replace("\\", "/") diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py index 1d7a48e389..31d2fd1fd5 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py @@ -218,7 +218,7 @@ class CollectInstances(pyblish.api.ContextPlugin): # - not 100% working as it was found out that layer ids can't be # used as unified identifier across multiple workstations layers_by_id = { - layer["id"]: layer + layer["layer_id"]: layer for layer in layers_data } layer_ids = instance_data["layer_ids"] diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py b/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py index f4259f1b5f..f5c86c613b 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_workfile_data.py @@ -4,7 +4,7 @@ import tempfile import pyblish.api import avalon.api -from avalon.tvpaint import pipeline, lib +from openpype.hosts.tvpaint.api import pipeline, lib class ResetTVPaintWorkfileMetadata(pyblish.api.Action): diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py index 6235b6211d..b6b8bd0d9e 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py @@ -2,17 +2,18 @@ import os import copy import tempfile +from PIL import Image + import pyblish.api -from avalon.tvpaint import lib -from openpype.hosts.tvpaint.api.lib import composite_images +from openpype.hosts.tvpaint.api import lib from openpype.hosts.tvpaint.lib import ( calculate_layers_extraction_data, get_frame_filename_template, fill_reference_frames, composite_rendered_layers, - rename_filepaths_by_frame_start + rename_filepaths_by_frame_start, + composite_images ) -from PIL import Image class ExtractSequence(pyblish.api.Extractor): diff --git a/openpype/hosts/tvpaint/plugins/publish/increment_workfile_version.py b/openpype/hosts/tvpaint/plugins/publish/increment_workfile_version.py index a96a8e3d5d..c9f2434cef 100644 --- a/openpype/hosts/tvpaint/plugins/publish/increment_workfile_version.py +++ b/openpype/hosts/tvpaint/plugins/publish/increment_workfile_version.py @@ -1,7 +1,7 @@ import pyblish.api -from avalon.tvpaint import workio from openpype.api import version_up +from openpype.hosts.tvpaint.api import workio class 
IncrementWorkfileVersion(pyblish.api.ContextPlugin): diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py b/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py index 4ce8d5347d..0fdeba0a21 100644 --- a/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py +++ b/openpype/hosts/tvpaint/plugins/publish/validate_asset_name.py @@ -1,5 +1,5 @@ import pyblish.api -from avalon.tvpaint import pipeline +from openpype.hosts.tvpaint.api import pipeline class FixAssetNames(pyblish.api.Action): diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_marks.py b/openpype/hosts/tvpaint/plugins/publish/validate_marks.py index e2ef81e4a4..9d55bb21a9 100644 --- a/openpype/hosts/tvpaint/plugins/publish/validate_marks.py +++ b/openpype/hosts/tvpaint/plugins/publish/validate_marks.py @@ -1,7 +1,7 @@ import json import pyblish.api -from avalon.tvpaint import lib +from openpype.hosts.tvpaint.api import lib class ValidateMarksRepair(pyblish.api.Action): diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_start_frame.py b/openpype/hosts/tvpaint/plugins/publish/validate_start_frame.py index d769d47736..e2f8386757 100644 --- a/openpype/hosts/tvpaint/plugins/publish/validate_start_frame.py +++ b/openpype/hosts/tvpaint/plugins/publish/validate_start_frame.py @@ -1,5 +1,5 @@ import pyblish.api -from avalon.tvpaint import lib +from openpype.hosts.tvpaint.api import lib class RepairStartFrame(pyblish.api.Action): diff --git a/openpype/hosts/tvpaint/plugins/publish/validate_workfile_metadata.py b/openpype/hosts/tvpaint/plugins/publish/validate_workfile_metadata.py index 757da3294a..48fbeedb59 100644 --- a/openpype/hosts/tvpaint/plugins/publish/validate_workfile_metadata.py +++ b/openpype/hosts/tvpaint/plugins/publish/validate_workfile_metadata.py @@ -1,5 +1,5 @@ import pyblish.api -from avalon.tvpaint import save_file +from openpype.hosts.tvpaint.api import save_file class ValidateWorkfileMetadataRepair(pyblish.api.Action): diff --git a/openpype/hosts/tvpaint/resources/avalon.loc b/openpype/hosts/tvpaint/resources/avalon.loc deleted file mode 100644 index 3cfb7e9db4..0000000000 --- a/openpype/hosts/tvpaint/resources/avalon.loc +++ /dev/null @@ -1,37 +0,0 @@ -#------------------------------------------------- -#------------ AVALON PLUGIN LOC FILE ------------- -#------------------------------------------------- - -#Language : English -#Version : 1.0 -#Date : 27/10/2020 - -#------------------------------------------------- -#------------ COMMON ----------------------------- -#------------------------------------------------- - -$100 "OpenPype Tools" - -$10010 "Workfiles" -$10020 "Load" -$10030 "Create" -$10040 "Scene inventory" -$10050 "Publish" -$10060 "Library" - -#------------ Help ------------------------------- - -$20010 "Open workfiles tool" -$20020 "Open loader tool" -$20030 "Open creator tool" -$20040 "Open scene inventory tool" -$20050 "Open publisher" -$20060 "Open library loader tool" - -#------------ Errors ----------------------------- - -$30001 "Can't Open Requester !" 
- -#------------------------------------------------- -#------------ END -------------------------------- -#------------------------------------------------- diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/__init__.py b/openpype/hosts/tvpaint/tvpaint_plugin/__init__.py new file mode 100644 index 0000000000..59a7aaf99b --- /dev/null +++ b/openpype/hosts/tvpaint/tvpaint_plugin/__init__.py @@ -0,0 +1,6 @@ +import os + + +def get_plugin_files_path(): + current_dir = os.path.dirname(os.path.abspath(__file__)) + return os.path.join(current_dir, "plugin_files") diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/CMakeLists.txt b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/CMakeLists.txt new file mode 100644 index 0000000000..ecd94acc99 --- /dev/null +++ b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/CMakeLists.txt @@ -0,0 +1,45 @@ +cmake_minimum_required(VERSION 3.17) +project(OpenPypePlugin C CXX) + +set(CMAKE_CXX_STANDARD 17) +set(CMAKE_CXX_EXTENSIONS OFF) + +set(IP_ENABLE_UNICODE OFF) +set(IP_ENABLE_DOCTEST OFF) + +if(MSVC) + set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON) + add_definitions(-D_CRT_SECURE_NO_WARNINGS) +endif() + +# TODO better options +option(BOOST_ROOT "Path to root of Boost" "") + +option(OPENSSL_INCLUDE "OpenSSL include path" "") +option(OPENSSL_LIB_DIR "OpenSSL lib path" "") + +option(WEBSOCKETPP_INCLUDE "Websocketpp include path" "") +option(WEBSOCKETPP_LIB_DIR "Websocketpp lib path" "") + +option(JSONRPCPP_INCLUDE "Jsonrpcpp include path" "") + +find_package(Boost 1.72.0 COMPONENTS random) + +include_directories( + "${TVPAINT_SDK_INCLUDE}" + "${OPENSSL_INCLUDE}" + "${WEBSOCKETPP_INCLUDE}" + "${JSONRPCPP_INCLUDE}" + "${Boost_INCLUDE_DIR}" +) +link_directories( + "${OPENSSL_LIB_DIR}" + "${WEBSOCKETPP_LIB_DIR}" +) + +add_library(jsonrpcpp INTERFACE) + +add_library(${PROJECT_NAME} SHARED library.cpp library.def "${TVPAINT_SDK_LIB}/dllx.c") + +target_link_libraries(${PROJECT_NAME} ${Boost_LIBRARIES}) +target_link_libraries(${PROJECT_NAME} jsonrpcpp) diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/README.md b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/README.md new file mode 100644 index 0000000000..03b0a31f51 --- /dev/null +++ b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/README.md @@ -0,0 +1,34 @@ +README for TVPaint Avalon plugin +================================ +Introduction +------------ +This project is dedicated to integrate Avalon functionality to TVPaint. +This implementaiton is using TVPaint plugin (C/C++) which can communicate with python process. The communication should allow to trigger tools or pipeline functions from TVPaint and accept requests from python process at the same time. + +Current implementation is based on websocket protocol, using json-rpc communication (specification 2.0). Project is in beta stage, tested only on Windows. + +To be able to load plugin, environment variable `WEBSOCKET_URL` must be set otherwise plugin won't load at all. Plugin should not affect TVPaint if python server crash, but buttons won't work. + +## Requirements - Python server +- python >= 3.6 +- aiohttp +- aiohttp-json-rpc + +### Windows +- pywin32 - required only for plugin installation + +## Requirements - Plugin compilation +- TVPaint SDK - Ask for SDK on TVPaint support. 
+- Boost 1.72.0 - Boost is used across other plugins (Should be possible to use different version with CMakeLists modification) +- Websocket++/Websocketpp - Websocket library (https://github.com/zaphoyd/websocketpp) +- OpenSSL library - Required by Websocketpp +- jsonrpcpp - C++ library handling json-rpc 2.0 (https://github.com/badaix/jsonrpcpp) +- nlohmann/json - Required for jsonrpcpp (https://github.com/nlohmann/json) + +### jsonrpcpp +This library has `nlohmann/json` as it's part, but current `master` has old version which has bug and probably won't be possible to use library on windows without using last `nlohmann/json`. + +## TODO +- modify code and CMake to be able to compile on MacOS/Linux +- separate websocket logic from plugin logic +- hide buttons and show error message if server is closed diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp new file mode 100644 index 0000000000..a57124084b --- /dev/null +++ b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.cpp @@ -0,0 +1,790 @@ +#ifdef _WIN32 +// Include before +#include +#endif + +#include +#include +#include +#include +#include +#include +#include + +#include "plugdllx.h" + +#include + +#include +#include + +#include "json.hpp" +#include "jsonrpcpp.hpp" + + +// All functions not exported should be static. +// All global variables should be static. + +// mReq Identification of the requester. (=0 closed, !=0 requester ID) +static struct { + bool firstParams; + DWORD mReq; + void* mLocalFile; + PIFilter *current_filter; + // Id counter for client requests + int client_request_id; + // There are new menu items + bool newMenuItems; + // Menu item definitions received from connection + nlohmann::json menuItems; + // Menu items used in requester by their ID + nlohmann::json menuItemsById; + std::list menuItemsIds; + // Messages from server before processing. + // - messages can't be process at the moment of recieve as client is running in thread + std::queue messages; + // Responses to requests mapped by request id + std::map responses; + +} Data = { + true, + 0, + nullptr, + nullptr, + 1, + false, + nlohmann::json::object(), + nlohmann::json::object() +}; + +// Json rpc 2.0 parser - for handling messages and callbacks +jsonrpcpp::Parser parser; +typedef websocketpp::client client; + + +class connection_metadata { +private: + websocketpp::connection_hdl m_hdl; + client *m_endpoint; + std::string m_status; +public: + typedef websocketpp::lib::shared_ptr ptr; + + connection_metadata(websocketpp::connection_hdl hdl, client *endpoint) + : m_hdl(hdl), m_status("Connecting") { + m_endpoint = endpoint; + } + + void on_open(client *c, websocketpp::connection_hdl hdl) { + m_status = "Open"; + } + + void on_fail(client *c, websocketpp::connection_hdl hdl) { + m_status = "Failed"; + } + + void on_close(client *c, websocketpp::connection_hdl hdl) { + m_status = "Closed"; + } + + void on_message(websocketpp::connection_hdl, client::message_ptr msg) { + std::string json_str; + if (msg->get_opcode() == websocketpp::frame::opcode::text) { + json_str = msg->get_payload(); + } else { + json_str = websocketpp::utility::to_hex(msg->get_payload()); + } + process_message(json_str); + } + + void process_message(std::string msg) { + std::cout << "--> " << msg << "\n"; + try { + jsonrpcpp::entity_ptr entity = parser.do_parse(msg); + if (!entity) { + // Return error code? 
+ + } else if (entity->is_response()) { + jsonrpcpp::Response response = jsonrpcpp::Response(entity->to_json()); + Data.responses[response.id().int_id()] = response; + + } else if (entity->is_request() || entity->is_notification()) { + Data.messages.push(msg); + } + } + catch (const jsonrpcpp::RequestException &e) { + std::string message = e.to_json().dump(); + std::cout << "<-- " << e.to_json().dump() << "\n"; + send(message); + } + catch (const jsonrpcpp::ParseErrorException &e) { + std::string message = e.to_json().dump(); + std::cout << "<-- " << message << "\n"; + send(message); + } + catch (const jsonrpcpp::RpcException &e) { + std::cerr << "RpcException: " << e.what() << "\n"; + std::string message = jsonrpcpp::ParseErrorException(e.what()).to_json().dump(); + std::cout << "<-- " << message << "\n"; + send(message); + } + catch (const std::exception &e) { + std::cerr << "Exception: " << e.what() << "\n"; + } + } + + void send(std::string message) { + if (get_status() != "Open") { + return; + } + websocketpp::lib::error_code ec; + + m_endpoint->send(m_hdl, message, websocketpp::frame::opcode::text, ec); + if (ec) { + std::cout << "> Error sending message: " << ec.message() << std::endl; + return; + } + } + + void send_notification(jsonrpcpp::Notification *notification) { + send(notification->to_json().dump()); + } + + void send_response(jsonrpcpp::Response *response) { + send(response->to_json().dump()); + } + + void send_request(jsonrpcpp::Request *request) { + send(request->to_json().dump()); + } + + websocketpp::connection_hdl get_hdl() const { + return m_hdl; + } + + std::string get_status() const { + return m_status; + } +}; + + +class websocket_endpoint { +private: + client m_endpoint; + connection_metadata::ptr client_metadata; + websocketpp::lib::shared_ptr m_thread; + bool thread_is_running = false; + +public: + websocket_endpoint() { + m_endpoint.clear_access_channels(websocketpp::log::alevel::all); + m_endpoint.clear_error_channels(websocketpp::log::elevel::all); + } + + ~websocket_endpoint() { + close_connection(); + } + + void close_connection() { + m_endpoint.stop_perpetual(); + if (connected()) + { + // Close client + close(websocketpp::close::status::normal, ""); + } + if (thread_is_running) { + // Join thread + m_thread->join(); + thread_is_running = false; + } + } + + bool connected() + { + return (client_metadata && client_metadata->get_status() == "Open"); + } + int connect(std::string const &uri) { + if (client_metadata && client_metadata->get_status() == "Open") { + std::cout << "> Already connected" << std::endl; + return 0; + } + + m_endpoint.init_asio(); + m_endpoint.start_perpetual(); + + m_thread.reset(new websocketpp::lib::thread(&client::run, &m_endpoint)); + thread_is_running = true; + + websocketpp::lib::error_code ec; + + client::connection_ptr con = m_endpoint.get_connection(uri, ec); + + if (ec) { + std::cout << "> Connect initialization error: " << ec.message() << std::endl; + return -1; + } + + client_metadata = websocketpp::lib::make_shared(con->get_handle(), &m_endpoint); + + con->set_open_handler(websocketpp::lib::bind( + &connection_metadata::on_open, + client_metadata, + &m_endpoint, + websocketpp::lib::placeholders::_1 + )); + con->set_fail_handler(websocketpp::lib::bind( + &connection_metadata::on_fail, + client_metadata, + &m_endpoint, + websocketpp::lib::placeholders::_1 + )); + con->set_close_handler(websocketpp::lib::bind( + &connection_metadata::on_close, + client_metadata, + &m_endpoint, + websocketpp::lib::placeholders::_1 + )); + 
con->set_message_handler(websocketpp::lib::bind( + &connection_metadata::on_message, + client_metadata, + websocketpp::lib::placeholders::_1, + websocketpp::lib::placeholders::_2 + )); + + m_endpoint.connect(con); + + return 1; + } + + void close(websocketpp::close::status::value code, std::string reason) { + if (!client_metadata || client_metadata->get_status() != "Open") { + std::cout << "> Not connected yet" << std::endl; + return; + } + + websocketpp::lib::error_code ec; + + m_endpoint.close(client_metadata->get_hdl(), code, reason, ec); + if (ec) { + std::cout << "> Error initiating close: " << ec.message() << std::endl; + } + } + + void send(std::string message) { + if (!client_metadata || client_metadata->get_status() != "Open") { + std::cout << "> Not connected yet" << std::endl; + return; + } + + client_metadata->send(message); + } + + void send_notification(jsonrpcpp::Notification *notification) { + client_metadata->send_notification(notification); + } + + void send_response(jsonrpcpp::Response *response) { + client_metadata->send(response->to_json().dump()); + } + + void send_response(std::shared_ptr response) { + client_metadata->send(response->to_json().dump()); + } + + void send_request(jsonrpcpp::Request *request) { + client_metadata->send_request(request); + } +}; + +class Communicator { +private: + // URL to websocket server + std::string websocket_url; + // Should be avalon plugin available? + // - this may change during processing if websocketet url is not set or server is down + bool use_avalon; +public: + Communicator(); + websocket_endpoint endpoint; + bool is_connected(); + bool is_usable(); + void connect(); + void process_requests(); + jsonrpcpp::Response call_method(std::string method_name, nlohmann::json params); + void call_notification(std::string method_name, nlohmann::json params); +}; + +Communicator::Communicator() { + // URL to websocket server + websocket_url = std::getenv("WEBSOCKET_URL"); + // Should be avalon plugin available? 
+ // - this may change during processing if websocketet url is not set or server is down + if (websocket_url == "") { + use_avalon = false; + } else { + use_avalon = true; + } +} + +bool Communicator::is_connected(){ + return endpoint.connected(); +} + +bool Communicator::is_usable(){ + return use_avalon; +} + +void Communicator::connect() +{ + if (!use_avalon) { + return; + } + int con_result; + con_result = endpoint.connect(websocket_url); + if (con_result == -1) + { + use_avalon = false; + } else { + use_avalon = true; + } +} + +void Communicator::call_notification(std::string method_name, nlohmann::json params) { + if (!use_avalon || !is_connected()) {return;} + + jsonrpcpp::Notification notification = {method_name, params}; + endpoint.send_notification(¬ification); +} + +jsonrpcpp::Response Communicator::call_method(std::string method_name, nlohmann::json params) { + jsonrpcpp::Response response; + if (!use_avalon || !is_connected()) + { + return response; + } + int request_id = Data.client_request_id++; + jsonrpcpp::Request request = {request_id, method_name, params}; + endpoint.send_request(&request); + + bool found = false; + while (!found) { + std::map::iterator iter = Data.responses.find(request_id); + if (iter != Data.responses.end()) { + //element found == was found response + response = iter->second; + Data.responses.erase(request_id); + found = true; + } else { + std::this_thread::sleep_for(std::chrono::milliseconds(100)); + } + } + return response; +} + +void Communicator::process_requests() { + if (!use_avalon || !is_connected() || Data.messages.empty()) {return;} + + std::string msg = Data.messages.front(); + Data.messages.pop(); + std::cout << "Parsing: " << msg << std::endl; + // TODO: add try->except block + auto response = parser.parse(msg); + if (response->is_response()) { + endpoint.send_response(response); + } else { + jsonrpcpp::request_ptr request = std::dynamic_pointer_cast(response); + jsonrpcpp::Error error("Method \"" + request->method() + "\" not found", -32601); + jsonrpcpp::Response _response(request->id(), error); + endpoint.send_response(&_response); + } +} + +jsonrpcpp::response_ptr define_menu(const jsonrpcpp::Id &id, const jsonrpcpp::Parameter ¶ms) { + /* Define plugin menu. + + Menu is defined with json with "title" and "menu_items". + Each item in "menu_items" must have keys: + - "callback" - callback called with RPC when button is clicked + - "label" - label of button + - "help" - tooltip of button + ``` + { + "title": "< Menu title>", + "menu_items": [ + { + "callback": "workfiles_tool", + "label": "Workfiles", + "help": "Open workfiles tool" + }, + ... 
+jsonrpcpp::response_ptr define_menu(const jsonrpcpp::Id &id, const jsonrpcpp::Parameter &params) {
+    /* Define the plugin menu.
+
+    The menu is defined with json containing "title" and "menu_items".
+    Each item in "menu_items" must have the keys:
+    - "callback" - callback called over RPC when the button is clicked
+    - "label" - label of the button
+    - "help" - tooltip of the button
+    ```
+    {
+        "title": "<Menu title>",
+        "menu_items": [
+            {
+                "callback": "workfiles_tool",
+                "label": "Workfiles",
+                "help": "Open workfiles tool"
+            },
+            ...
+        ]
+    }
+    ```
+    */
+    Data.menuItems = params.to_json()[0];
+    Data.newMenuItems = true;
+
+    std::string output;
+
+    return std::make_shared<jsonrpcpp::Response>(id, output);
+}
+
+jsonrpcpp::response_ptr execute_george(const jsonrpcpp::Id &id, const jsonrpcpp::Parameter &params) {
+    const char *george_script;
+    char cmd_output[1024] = {0};
+    char empty_char = {0};
+    std::string std_george_script;
+    std::string output;
+
+    nlohmann::json json_params = params.to_json();
+    std_george_script = json_params[0];
+    george_script = std_george_script.c_str();
+
+    // Result of `TVSendCmd` is int with length of output string
+    TVSendCmd(Data.current_filter, george_script, cmd_output);
+
+    for (int i = 0; i < sizeof(cmd_output); i++)
+    {
+        if (cmd_output[i] == empty_char){
+            break;
+        }
+        output += cmd_output[i];
+    }
+    return std::make_shared<jsonrpcpp::Response>(id, output);
+}
+
+void register_callbacks(){
+    parser.register_request_callback("define_menu", define_menu);
+    parser.register_request_callback("execute_george", execute_george);
+}
+
+Communicator communication;
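On the Python side of the connection the callbacks registered above can be driven with ordinary JSON-RPC requests. A minimal sketch; `rpc_client` and its `call` method are hypothetical stand-ins for OpenPype's communication server:

```python
# Run a George command inside TVPaint and read its output.
# "tv_version" simply returns the TVPaint version string.
version = rpc_client.call("execute_george", ["tv_version"])
print(version)
```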
+
+////////////////////////////////////////////////////////////////////////////////////////
+
+static char* GetLocalString( PIFilter* iFilter, int iNum, char* iDefault )
+{
+    char* str;
+
+    if( Data.mLocalFile == NULL )
+        return iDefault;
+
+    str = TVGetLocalString( iFilter, Data.mLocalFile, iNum );
+    if( str == NULL || strlen( str ) == 0 )
+        return iDefault;
+
+    return str;
+}
+
+/**************************************************************************************/
+// Localisation
+
+// numbers (like 10011) are IDs in the localized file.
+// strings are the default values to use when the ID is not found
+// in the localized file (or the localized file doesn't exist).
+std::string label_from_evn()
+{
+    std::string _plugin_label = "Avalon";
+    const char *env_label = std::getenv("AVALON_LABEL");
+    if (env_label && strlen(env_label) > 0)
+    {
+        _plugin_label = env_label;
+    }
+    return _plugin_label;
+}
+std::string plugin_label = label_from_evn();
+
+#define TXT_REQUESTER GetLocalString( iFilter, 100, "OpenPype Tools" )
+
+#define TXT_REQUESTER_ERROR GetLocalString( iFilter, 30001, "Can't Open Requester !" )
+
+////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////
+////////////////////////////////////////////////////////////////////////////////////////
+
+// The functions directly called by Aura through the plugin interface
+
+
+/**************************************************************************************/
+// "About" function.
+
+
+void FAR PASCAL PI_About( PIFilter* iFilter )
+{
+    char text[256];
+
+    sprintf( text, "%s %d,%d", iFilter->PIName, iFilter->PIVersion, iFilter->PIRevision );
+
+    // Just open a warning popup with the filter name and version.
+    // You can open a much nicer requester if you want.
+    TVWarning( iFilter, text );
+}
+
+
+/**************************************************************************************/
+// Function called at Aura startup, when the filter is loaded.
+// Should do as little as possible to keep Aura's startup time short.
+
+int FAR PASCAL PI_Open( PIFilter* iFilter )
+{
+    Data.current_filter = iFilter;
+    char tmp[256];
+
+    strcpy( iFilter->PIName, plugin_label.c_str() );
+    iFilter->PIVersion = 1;
+    iFilter->PIRevision = 0;
+
+    // If this plugin was the one open at Aura shutdown, re-open it
+    TVReadUserString( iFilter, iFilter->PIName, "Open", tmp, "0", 255 );
+    if( atoi( tmp ) )
+    {
+        PI_Parameters( iFilter, NULL ); // NULL as iArg means "open the requester"
+    }
+
+    communication.connect();
+    register_callbacks();
+    return 1; // OK
+}
+
+
+/**************************************************************************************/
+// Aura shutdown: we make all the necessary cleanup
+
+void FAR PASCAL PI_Close( PIFilter* iFilter )
+{
+    if( Data.mLocalFile )
+    {
+        TVCloseLocalFile( iFilter, Data.mLocalFile );
+    }
+    if( Data.mReq )
+    {
+        TVCloseReq( iFilter, Data.mReq );
+    }
+    communication.endpoint.close_connection();
+}
+
+
+/**************************************************************************************/
+// we have something to do !
+
+int FAR PASCAL PI_Parameters( PIFilter* iFilter, char* iArg )
+{
+    if( !iArg )
+    {
+        // If the requester is not open, we open it.
+        if( Data.mReq == 0)
+        {
+            // Create an empty requester because menu items are defined with
+            // the `define_menu` callback
+            DWORD req = TVOpenFilterReqEx(
+                iFilter,
+                185,
+                20,
+                NULL,
+                NULL,
+                PIRF_STANDARD_REQ | PIRF_COLLAPSABLE_REQ,
+                FILTERREQ_NO_TBAR
+            );
+            if( req == 0 )
+            {
+                TVWarning( iFilter, TXT_REQUESTER_ERROR );
+                return 0;
+            }
+
+            Data.mReq = req;
+            // This is a very simple requester, so we create its content right here
+            // instead of waiting for the PICBREQ_OPEN message...
+            // Not recommended for more complex requesters. (see the other examples)
+
+            // Sets the title of the requester.
+            TVSetReqTitle( iFilter, Data.mReq, TXT_REQUESTER );
+            // Request to listen to ticks
+            TVGrabTicks(iFilter, req, PITICKS_FLAG_ON);
+        }
+        else
+        {
+            // If it is already open, we just put it in front of all other requesters.
+            TVReqToFront( iFilter, Data.mReq );
+        }
+    }
+
+    return 1;
+}
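The requester created by `PI_Parameters` starts out empty; buttons appear only after the Python side sends a `define_menu` request, which `newMenuItemsProcess` below converts into TVPaint buttons. A sketch of such a payload, matching the schema documented in `define_menu` (the `rpc_client` helper is again a hypothetical stand-in):

```python
menu = {
    "title": "OpenPype Tools",
    "menu_items": [
        {
            # Method invoked over RPC when the button is clicked
            "callback": "workfiles_tool",
            "label": "Workfiles",
            "help": "Open workfiles tool",
        },
    ],
}
# `define_menu` reads the payload from the first positional parameter.
rpc_client.call("define_menu", [menu])
```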
+int newMenuItemsProcess(PIFilter* iFilter) {
+    // Menu items defined with `define_menu` should be propagated.
+
+    // Change the flag that there are new menu items (avoids an infinite loop)
+    Data.newMenuItems = false;
+    // Skip if the requester does not exist
+    if (Data.mReq == 0) {
+        return 0;
+    }
+    // Remove all previous menu items
+    for (int menu_id : Data.menuItemsIds)
+    {
+        TVRemoveButtonReq(iFilter, Data.mReq, menu_id);
+    }
+    // Clear caches
+    Data.menuItemsById.clear();
+    Data.menuItemsIds.clear();
+
+    // We use a variable to contain the vertical position of the buttons.
+    // Each time we create a button, we add its size to this variable.
+    // This makes it very easy to add/remove/displace buttons in a requester.
+    int x_pos = 9;
+    int y_pos = 5;
+
+    // Menu width
+    int menu_width = 185;
+    // Single menu item width
+    int btn_width = menu_width - 19;
+    // Single row height (btn height is 18)
+    int row_height = 20;
+    // Additional height to menu
+    int height_offset = 5;
+
+    // This is a very simple requester, so we create its content right here
+    // instead of waiting for the PICBREQ_OPEN message...
+    // Not recommended for more complex requesters. (see the other examples)
+
+    const char *menu_title = TXT_REQUESTER;
+    if (Data.menuItems.contains("title"))
+    {
+        menu_title = Data.menuItems["title"].get_ptr<const std::string*>()->c_str();
+    }
+    // Sets the title of the requester.
+    TVSetReqTitle( iFilter, Data.mReq, menu_title );
+
+    // Resize menu
+    // First get current position and sizes (we only need the position)
+    int current_x = 0;
+    int current_y = 0;
+    int current_width = 0;
+    int current_height = 0;
+    TVInfoReq(iFilter, Data.mReq, &current_x, &current_y, &current_width, &current_height);
+
+    // Calculate new height
+    int menu_height = (row_height * Data.menuItems["menu_items"].size()) + height_offset;
+    // Resize
+    TVResizeReq(iFilter, Data.mReq, current_x, current_y, menu_width, menu_height);
+
+    // Add menu items
+    int item_counter = 1;
+    for (auto& item : Data.menuItems["menu_items"].items())
+    {
+        int item_id = item_counter * 10;
+        item_counter++;
+        std::string item_id_str = std::to_string(item_id);
+        nlohmann::json item_data = item.value();
+        const char *item_label = item_data["label"].get_ptr<const std::string*>()->c_str();
+        const char *help_text = item_data["help"].get_ptr<const std::string*>()->c_str();
+        std::string item_callback = item_data["callback"].get<std::string>();
+        TVAddButtonReq(iFilter, Data.mReq, x_pos, y_pos, btn_width, 0, item_id, PIRBF_BUTTON_NORMAL|PIRBF_BUTTON_ACTION, item_label);
+        TVSetButtonInfoText( iFilter, Data.mReq, item_id, help_text );
+        y_pos += row_height;
+
+        Data.menuItemsById[std::to_string(item_id)] = item_callback;
+        Data.menuItemsIds.push_back(item_id);
+    }
+
+    return 1;
+}
+/**************************************************************************************/
+// something happened that needs our attention.
+// Global variable where current button up data are stored
+std::string button_up_item_id_str;
+int FAR PASCAL PI_Msg( PIFilter* iFilter, INTPTR iEvent, INTPTR iReq, INTPTR* iArgs )
+{
+    Data.current_filter = iFilter;
+    // what did happen ?
+    switch( iEvent )
+    {
+        // The user just 'clicked' on a normal button
+        case PICBREQ_BUTTON_UP:
+            button_up_item_id_str = std::to_string(iArgs[0]);
+            if (Data.menuItemsById.contains(button_up_item_id_str))
+            {
+                std::string callback_name = Data.menuItemsById[button_up_item_id_str].get<std::string>();
+                communication.call_method(callback_name, nlohmann::json::array());
+            }
+            TVExecute( iFilter );
+            break;
+
+        // The requester was just closed.
+        case PICBREQ_CLOSE:
+            // requester doesn't exist anymore
+            Data.mReq = 0;
+
+            char tmp[256];
+            // Save the requester state (opened or closed)
+            // iArgs[4] contains a flag which tells us if the requester
+            // has been closed by the user (flag=0) or by Aura's shutdown (flag=1).
+            // If it was by Aura's shutdown, that means this requester was the
+            // last one open, so we should reopen this one the next time Aura
+            // is started. Else we won't open it next time.
+            sprintf( tmp, "%d", (int)(iArgs[4]) );
+
+            // Save it in Aura's init file.
+            TVWriteUserString( iFilter, iFilter->PIName, "Open", tmp );
+            break;
+
+        case PICBREQ_TICKS:
+            if (Data.newMenuItems)
+            {
+                newMenuItemsProcess(iFilter);
+            }
+            communication.process_requests();
+    }
+
+    return 1;
+}
+
+
+/**************************************************************************************/
+// Start of the 'execution' of the filter for a new sequence.
+// - iNumImages contains the total number of frames to be processed.
+// Here you should allocate memory that is used for all frames,
+// and precompute all the stuff that doesn't change from frame to frame.
+
+
+int FAR PASCAL PI_SequenceStart( PIFilter* iFilter, int iNumImages )
+{
+    // In this simple example we don't have anything to allocate/precompute.
+ + // 1 means 'continue', 0 means 'error, abort' (like 'not enough memory') + return 1; +} + + +// Here you should cleanup what you've done in PI_SequenceStart + +void FAR PASCAL PI_SequenceFinish( PIFilter* iFilter ) +{} + + +/**************************************************************************************/ +// This is called before each frame. +// Here you should allocate memory and precompute all the stuff you can. + +int FAR PASCAL PI_Start( PIFilter* iFilter, double iPos, double iSize ) +{ + return 1; +} + + +void FAR PASCAL PI_Finish( PIFilter* iFilter ) +{ + // nothing special to cleanup +} + + +/**************************************************************************************/ +// 'Execution' of the filter. +int FAR PASCAL PI_Work( PIFilter* iFilter ) +{ + return 1; +} diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.def b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.def new file mode 100644 index 0000000000..882f2b4719 --- /dev/null +++ b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_code/library.def @@ -0,0 +1,10 @@ +LIBRARY Avalonplugin +EXPORTS + PI_Msg + PI_Open + PI_About + PI_Parameters + PI_Start + PI_Work + PI_Finish + PI_Close diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/additional_libraries/boost_random-vc142-mt-x64-1_72.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/additional_libraries/boost_random-vc142-mt-x64-1_72.dll new file mode 100644 index 0000000000..46bd533b72 Binary files /dev/null and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/additional_libraries/boost_random-vc142-mt-x64-1_72.dll differ diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll new file mode 100644 index 0000000000..293a7b19b0 Binary files /dev/null and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x64/plugin/OpenPypePlugin.dll differ diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/additional_libraries/boost_random-vc142-mt-x32-1_72.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/additional_libraries/boost_random-vc142-mt-x32-1_72.dll new file mode 100644 index 0000000000..ccf2fd8562 Binary files /dev/null and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/additional_libraries/boost_random-vc142-mt-x32-1_72.dll differ diff --git a/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll new file mode 100644 index 0000000000..9671d8a27b Binary files /dev/null and b/openpype/hosts/tvpaint/tvpaint_plugin/plugin_files/windows_x86/plugin/OpenPypePlugin.dll differ diff --git a/openpype/hosts/tvpaint/worker/worker.py b/openpype/hosts/tvpaint/worker/worker.py index 738656fa91..cfd40bc7ba 100644 --- a/openpype/hosts/tvpaint/worker/worker.py +++ b/openpype/hosts/tvpaint/worker/worker.py @@ -2,7 +2,7 @@ import signal import time import asyncio -from avalon.tvpaint.communication_server import ( +from openpype.hosts.tvpaint.api.communication_server import ( BaseCommunicator, CommunicationWrapper ) diff --git a/openpype/hosts/tvpaint/worker/worker_job.py b/openpype/hosts/tvpaint/worker/worker_job.py index c3893b6f2e..519d42ce73 100644 --- a/openpype/hosts/tvpaint/worker/worker_job.py +++ b/openpype/hosts/tvpaint/worker/worker_job.py @@ -256,7 +256,7 @@ 
class CollectSceneData(BaseCommand): name = "collect_scene_data" def execute(self): - from avalon.tvpaint.lib import ( + from openpype.hosts.tvpaint.api.lib import ( get_layers_data, get_groups_data, get_layers_pre_post_behavior, diff --git a/openpype/hosts/unreal/api/tools_ui.py b/openpype/hosts/unreal/api/tools_ui.py new file mode 100644 index 0000000000..93361c3574 --- /dev/null +++ b/openpype/hosts/unreal/api/tools_ui.py @@ -0,0 +1,158 @@ +import sys +from Qt import QtWidgets, QtCore, QtGui + +from openpype import ( + resources, + style +) +from openpype.tools.utils import host_tools +from openpype.tools.utils.lib import qt_app_context + + +class ToolsBtnsWidget(QtWidgets.QWidget): + """Widget containing buttons which are clickable.""" + tool_required = QtCore.Signal(str) + + def __init__(self, parent=None): + super(ToolsBtnsWidget, self).__init__(parent) + + create_btn = QtWidgets.QPushButton("Create...", self) + load_btn = QtWidgets.QPushButton("Load...", self) + publish_btn = QtWidgets.QPushButton("Publish...", self) + manage_btn = QtWidgets.QPushButton("Manage...", self) + experimental_tools_btn = QtWidgets.QPushButton( + "Experimental tools...", self + ) + + layout = QtWidgets.QVBoxLayout(self) + layout.setContentsMargins(0, 0, 0, 0) + layout.addWidget(create_btn, 0) + layout.addWidget(load_btn, 0) + layout.addWidget(publish_btn, 0) + layout.addWidget(manage_btn, 0) + layout.addWidget(experimental_tools_btn, 0) + layout.addStretch(1) + + create_btn.clicked.connect(self._on_create) + load_btn.clicked.connect(self._on_load) + publish_btn.clicked.connect(self._on_publish) + manage_btn.clicked.connect(self._on_manage) + experimental_tools_btn.clicked.connect(self._on_experimental) + + def _on_create(self): + self.tool_required.emit("creator") + + def _on_load(self): + self.tool_required.emit("loader") + + def _on_publish(self): + self.tool_required.emit("publish") + + def _on_manage(self): + self.tool_required.emit("sceneinventory") + + def _on_experimental(self): + self.tool_required.emit("experimental_tools") + + +class ToolsDialog(QtWidgets.QDialog): + """Dialog with tool buttons that will stay opened until user close it.""" + def __init__(self, *args, **kwargs): + super(ToolsDialog, self).__init__(*args, **kwargs) + + self.setWindowTitle("OpenPype tools") + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + self.setWindowIcon(icon) + + self.setWindowFlags( + QtCore.Qt.Window + | QtCore.Qt.WindowStaysOnTopHint + ) + self.setFocusPolicy(QtCore.Qt.StrongFocus) + + tools_widget = ToolsBtnsWidget(self) + + layout = QtWidgets.QVBoxLayout(self) + layout.addWidget(tools_widget) + + tools_widget.tool_required.connect(self._on_tool_require) + self._tools_widget = tools_widget + + self._first_show = True + + def sizeHint(self): + result = super(ToolsDialog, self).sizeHint() + result.setWidth(result.width() * 2) + return result + + def showEvent(self, event): + super(ToolsDialog, self).showEvent(event) + if self._first_show: + self.setStyleSheet(style.load_stylesheet()) + self._first_show = False + + def _on_tool_require(self, tool_name): + host_tools.show_tool_by_name(tool_name, parent=self) + + +class ToolsPopup(ToolsDialog): + """Popup with tool buttons that will close when loose focus.""" + def __init__(self, *args, **kwargs): + super(ToolsPopup, self).__init__(*args, **kwargs) + + self.setWindowFlags( + QtCore.Qt.FramelessWindowHint + | QtCore.Qt.Popup + ) + + def showEvent(self, event): + super(ToolsPopup, self).showEvent(event) + app = QtWidgets.QApplication.instance() + 
app.processEvents()
+        pos = QtGui.QCursor.pos()
+        self.move(pos)
+
+
+class WindowCache:
+    """Cached objects and methods to be used in global scope."""
+    _dialog = None
+    _popup = None
+    _first_show = True
+
+    @classmethod
+    def _before_show(cls):
+        """Create QApplication if it does not exist yet."""
+        if not cls._first_show:
+            return
+
+        cls._first_show = False
+        if not QtWidgets.QApplication.instance():
+            QtWidgets.QApplication(sys.argv)
+
+    @classmethod
+    def show_popup(cls):
+        cls._before_show()
+        with qt_app_context():
+            if cls._popup is None:
+                cls._popup = ToolsPopup()
+
+            cls._popup.show()
+
+    @classmethod
+    def show_dialog(cls):
+        cls._before_show()
+        with qt_app_context():
+            if cls._dialog is None:
+                cls._dialog = ToolsDialog()
+
+            cls._dialog.show()
+            cls._dialog.raise_()
+            cls._dialog.activateWindow()
+
+
+def show_tools_popup():
+    WindowCache.show_popup()
+
+
+def show_tools_dialog():
+    WindowCache.show_dialog()
diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py
index efd2cddf7e..34926453cb 100644
--- a/openpype/lib/__init__.py
+++ b/openpype/lib/__init__.py
@@ -25,6 +25,7 @@ from .env_tools import (
 from .terminal import Terminal
 from .execute import (
     get_pype_execute_args,
+    get_linux_launcher_args,
     execute,
     run_subprocess,
     path_to_subprocess_arg,
@@ -32,8 +33,6 @@ from .execute import (
 )
 from .log import PypeLogger, timeit
 from .mongo import (
-    decompose_url,
-    compose_url,
     get_default_components,
     validate_mongo_connection,
     OpenPypeMongoConnection
@@ -150,7 +149,8 @@ from .path_tools import (
     get_version_from_path,
     get_last_version_from_path,
     create_project_folders,
-    get_project_basic_paths
+    create_workdir_extra_folders,
+    get_project_basic_paths,
 )
 
 from .editorial import (
@@ -174,6 +174,7 @@ terminal = Terminal
 
 __all__ = [
     "get_pype_execute_args",
+    "get_linux_launcher_args",
     "execute",
     "run_subprocess",
     "path_to_subprocess_arg",
@@ -276,8 +277,6 @@ __all__ = [
     "get_datetime_data",
 
     "PypeLogger",
-    "decompose_url",
-    "compose_url",
     "get_default_components",
     "validate_mongo_connection",
     "OpenPypeMongoConnection",
@@ -294,6 +293,7 @@ __all__ = [
     "frames_to_timecode",
     "make_sequence_collection",
     "create_project_folders",
+    "create_workdir_extra_folders",
     "get_project_basic_paths",
 
     "get_openpype_version",
diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py
index 30be92e886..d0438e12a6 100644
--- a/openpype/lib/applications.py
+++ b/openpype/lib/applications.py
@@ -1,8 +1,8 @@
 import os
 import sys
-import re
 import copy
 import json
+import tempfile
 import platform
 import collections
 import inspect
@@ -37,10 +37,102 @@ from .python_module_tools import (
     modules_from_path,
     classes_from_module
 )
+from .execute import get_linux_launcher_args
 
 _logger = None
 
+PLATFORM_NAMES = {"windows", "linux", "darwin"}
+DEFAULT_ENV_SUBGROUP = "standard"
+
+
+def parse_environments(env_data, env_group=None, platform_name=None):
+    """Parse environment values from settings by group and platform.
+
+    Data may contain up to 2 hierarchical levels of dictionaries. The last
+    level must hold a string or a list. Lists are joined using the platform
+    specific joiner (';' for windows and ':' for linux and mac).
+
+    Hierarchical levels can contain keys for subgroups and platform name.
+    Platform specific values must always be the last level of the
+    dictionary. Platform names are "windows" (MS Windows), "linux" (any
+    linux distribution) and "darwin" (any MacOS distribution).
+
+    Subgroups are helpers added mainly for standard and on farm usage. A
+    farm may require different environments for e.g. licence related values
+    or plugins. The default subgroup is "standard".
+
+    Examples:
+    ```
+    {
+        # Unchanged value
+        "ENV_KEY1": "value",
+        # Empty values are kept (unset environment variable)
+        "ENV_KEY2": "",
+
+        # Join list values with ':' or ';'
+        "ENV_KEY3": ["value1", "value2"],
+
+        # Environment groups
+        "ENV_KEY4": {
+            "standard": "DEMO_SERVER_URL",
+            "farm": "LICENCE_SERVER_URL"
+        },
+
+        # Platform specific (and only for windows and mac)
+        "ENV_KEY5": {
+            "windows": "windows value",
+            "darwin": ["value 1", "value 2"]
+        },
+
+        # Environment groups and platform combination
+        "ENV_KEY6": {
+            "farm": "FARM_VALUE",
+            "standard": {
+                "windows": ["value1", "value2"],
+                "linux": "value1",
+                "darwin": ""
+            }
+        }
+    }
+    ```
+    """
+    output = {}
+    if not env_data:
+        return output
+
+    if not env_group:
+        env_group = DEFAULT_ENV_SUBGROUP
+
+    if not platform_name:
+        platform_name = platform.system().lower()
+
+    for key, value in env_data.items():
+        if isinstance(value, dict):
+            # Check if any key is a platform key
+            # - if there are no platform keys, expect that the dictionary
+            #   represents an environment group
+            if not PLATFORM_NAMES.intersection(set(value.keys())):
+                # Skip the key if the group is not available
+                if env_group not in value:
+                    continue
+                value = value[env_group]
+
+            # Check again if value is a dictionary
+            # - this time there should be only platform keys
+            if isinstance(value, dict):
+                value = value.get(platform_name)
+
+        # Check if value is a list and join its values
+        # QUESTION Should empty values be skipped?
+        if isinstance(value, (list, tuple)):
+            value = os.pathsep.join(value)
+
+        # Set key to output if value is a string
+        if isinstance(value, six.string_types):
+            output[key] = value
+    return output
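A quick usage sketch of `parse_environments` defined above; the environment keys and values are made up for illustration:

```python
env_data = {
    "STUDIO_SERVER": {
        "standard": "https://server.local",
        "farm": "https://farm-server.local"
    },
    "TOOL_PATHS": {
        "windows": ["C:/tools", "C:/more-tools"],
        "linux": "/opt/tools",
        "darwin": ""
    }
}

# Default "standard" group on linux:
# {"STUDIO_SERVER": "https://server.local", "TOOL_PATHS": "/opt/tools"}
print(parse_environments(env_data, platform_name="linux"))

# The "farm" group swaps in the farm specific value:
# {"STUDIO_SERVER": "https://farm-server.local", "TOOL_PATHS": "/opt/tools"}
print(parse_environments(env_data, env_group="farm", platform_name="linux"))
```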
 
 
 def get_logger():
     """Global lib.applications logger getter."""
@@ -640,6 +732,10 @@ class LaunchHook:
     def app_name(self):
         return getattr(self.application, "full_name", None)
 
+    @property
+    def modules_manager(self):
+        return getattr(self.launch_context, "modules_manager", None)
+
     def validate(self):
         """Optional validation of launch hook on initialization.
@@ -701,21 +797,32 @@ class ApplicationLaunchContext:
     preparation to store objects usable in multiple places.
""" - def __init__(self, application, executable, **data): + def __init__(self, application, executable, env_group=None, **data): + from openpype.modules import ModulesManager + # Application object self.application = application + self.modules_manager = ModulesManager() + # Logger logger_name = "{}-{}".format(self.__class__.__name__, self.app_name) self.log = PypeLogger.get_logger(logger_name) self.executable = executable + if env_group is None: + env_group = DEFAULT_ENV_SUBGROUP + + self.env_group = env_group + self.data = dict(data) # subprocess.Popen launch arguments (first argument in constructor) self.launch_args = executable.as_args() self.launch_args.extend(application.arguments) + if self.data.get("app_args"): + self.launch_args.extend(self.data.pop("app_args")) # Handle launch environemtns env = self.data.pop("env", None) @@ -810,10 +917,7 @@ class ApplicationLaunchContext: paths.append(path) # Load modules paths - from openpype.modules import ModulesManager - - manager = ModulesManager() - paths.extend(manager.collect_launch_hook_paths()) + paths.extend(self.modules_manager.collect_launch_hook_paths()) return paths @@ -919,6 +1023,48 @@ class ApplicationLaunchContext: def manager(self): return self.application.manager + def _run_process(self): + # Windows and MacOS have easier process start + low_platform = platform.system().lower() + if low_platform in ("windows", "darwin"): + return subprocess.Popen(self.launch_args, **self.kwargs) + + # Linux uses mid process + # - it is possible that the mid process executable is not + # available for this version of OpenPype in that case use standard + # launch + launch_args = get_linux_launcher_args() + if launch_args is None: + return subprocess.Popen(self.launch_args, **self.kwargs) + + # Prepare data that will be passed to midprocess + # - store arguments to a json and pass path to json as last argument + # - pass environments to set + json_data = { + "args": self.launch_args, + "env": self.kwargs.pop("env", {}) + } + # Create temp file + json_temp = tempfile.NamedTemporaryFile( + mode="w", prefix="op_app_args", suffix=".json", delete=False + ) + json_temp.close() + json_temp_filpath = json_temp.name + with open(json_temp_filpath, "w") as stream: + json.dump(json_data, stream) + + launch_args.append(json_temp_filpath) + + # Create mid-process which will launch application + process = subprocess.Popen(launch_args, **self.kwargs) + # Wait until the process finishes + # - This is important! The process would stay in "open" state. + process.wait() + # Remove the temp file + os.remove(json_temp_filpath) + # Return process which is already terminated + return process + def launch(self): """Collect data for new process and then create it. @@ -955,8 +1101,10 @@ class ApplicationLaunchContext: self.app_name, args_len_str, args ) ) + self.launch_args = args + # Run process - self.process = subprocess.Popen(args, **self.kwargs) + self.process = self._run_process() # Process post launch hooks for postlaunch_hook in self.postlaunch_hooks: @@ -1045,7 +1193,7 @@ class EnvironmentPrepData(dict): def get_app_environments_for_context( - project_name, asset_name, task_name, app_name, env=None + project_name, asset_name, task_name, app_name, env_group=None, env=None ): """Prepare environment variables by context. 
Args: @@ -1097,8 +1245,8 @@ def get_app_environments_for_context( "env": env }) - prepare_host_environments(data) - prepare_context_environments(data) + prepare_host_environments(data, env_group) + prepare_context_environments(data, env_group) # Discard avalon connection dbcon.uninstall() @@ -1118,7 +1266,7 @@ def _merge_env(env, current_env): return result -def prepare_host_environments(data, implementation_envs=True): +def prepare_host_environments(data, env_group=None, implementation_envs=True): """Modify launch environments based on launched app and context. Args: @@ -1172,7 +1320,7 @@ def prepare_host_environments(data, implementation_envs=True): continue # Choose right platform - tool_env = acre.parse(_env_values) + tool_env = parse_environments(_env_values, env_group) # Merge dictionaries env_values = _merge_env(tool_env, env_values) @@ -1204,7 +1352,9 @@ def prepare_host_environments(data, implementation_envs=True): data["env"].pop(key, None) -def apply_project_environments_value(project_name, env, project_settings=None): +def apply_project_environments_value( + project_name, env, project_settings=None, env_group=None +): """Apply project specific environments on passed environments. The enviornments are applied on passed `env` argument value so it is not @@ -1232,14 +1382,15 @@ def apply_project_environments_value(project_name, env, project_settings=None): env_value = project_settings["global"]["project_environments"] if env_value: + parsed_value = parse_environments(env_value, env_group) env.update(acre.compute( - _merge_env(acre.parse(env_value), env), + _merge_env(parsed_value, env), cleanup=False )) return env -def prepare_context_environments(data): +def prepare_context_environments(data, env_group=None): """Modify launch environemnts with context data for launched host. Args: @@ -1269,7 +1420,7 @@ def prepare_context_environments(data): data["project_settings"] = project_settings # Apply project specific environments on current env value apply_project_environments_value( - project_name, data["env"], project_settings + project_name, data["env"], project_settings, env_group ) app = data["app"] diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index a8340d7d09..cb5bca133d 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -508,13 +508,18 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name): Returns: dict: Data prepared for filling workdir template. """ - hierarchy = "/".join(asset_doc["data"]["parents"]) - task_type = asset_doc['data']['tasks'].get(task_name, {}).get('type') project_task_types = project_doc["config"]["tasks"] task_code = project_task_types.get(task_type, {}).get("short_name") + asset_parents = asset_doc["data"]["parents"] + hierarchy = "/".join(asset_parents) + + parent_name = project_doc["name"] + if asset_parents: + parent_name = asset_parents[-1] + data = { "project": { "name": project_doc["name"], @@ -526,6 +531,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name): "short": task_code, }, "asset": asset_doc["name"], + "parent": parent_name, "app": host_name, "user": getpass.getuser(), "hierarchy": hierarchy, @@ -1427,7 +1433,11 @@ def get_creator_by_name(creator_name, case_sensitive=False): @with_avalon def change_timer_to_current_context(): - """Called after context change to change timers""" + """Called after context change to change timers. 
+
+    TODO:
+    - use TimersManager's static method instead of reimplementing it here
+    """
     webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
     if not webserver_url:
         log.warning("Couldn't find webserver url")
@@ -1442,8 +1452,7 @@ def change_timer_to_current_context():
     data = {
         "project_name": avalon.io.Session["AVALON_PROJECT"],
         "asset_name": avalon.io.Session["AVALON_ASSET"],
-        "task_name": avalon.io.Session["AVALON_TASK"],
-        "hierarchy": get_hierarchy()
+        "task_name": avalon.io.Session["AVALON_TASK"]
     }
 
     requests.post(rest_api_url, json=data)
diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py
index ad77b2f899..f97617d906 100644
--- a/openpype/lib/execute.py
+++ b/openpype/lib/execute.py
@@ -1,7 +1,6 @@
 import os
-import shlex
 import subprocess
-import platform
+import distutils.spawn
 
 from .log import PypeLogger as Logger
@@ -175,3 +174,46 @@ def get_pype_execute_args(*args):
     pype_args.extend(args)
 
     return pype_args
+
+
+def get_linux_launcher_args(*args):
+    """Path to application mid process executable.
+
+    This function is needed because the arguments differ when OpenPype is
+    used from code and when it is used from build.
+
+    It is possible that this function is used in an OpenPype build which
+    does not yet have the new executable. In that case 'None' is returned.
+
+    Args:
+        args (iterable): List of additional arguments added after executable
+            argument.
+
+    Returns:
+        list: Executable with a possible positional script argument when
+            called from code, or None when the mid process executable is
+            not available.
+    """
+    filename = "app_launcher"
+    openpype_executable = os.environ["OPENPYPE_EXECUTABLE"]
+
+    executable_filename = os.path.basename(openpype_executable)
+    if "python" in executable_filename.lower():
+        script_path = os.path.join(
+            os.environ["OPENPYPE_ROOT"],
+            "{}.py".format(filename)
+        )
+        launch_args = [openpype_executable, script_path]
+    else:
+        new_executable = os.path.join(
+            os.path.dirname(openpype_executable),
+            filename
+        )
+        executable_path = distutils.spawn.find_executable(new_executable)
+        if executable_path is None:
+            return None
+        launch_args = [executable_path]
+
+    if args:
+        launch_args.extend(args)
+
+    return launch_args
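For context: on linux the launcher executable resolved by `get_linux_launcher_args` receives the path to a JSON handoff file written by `_run_process` above, with the layout `{"args": [...], "env": {...}}`. A minimal sketch of what such a mid-process could do with it; this is an illustration under that assumption, not the actual `app_launcher` implementation:

```python
import json
import os
import subprocess
import sys


def main(json_filepath):
    """Launch the application described in the JSON handoff file."""
    with open(json_filepath, "r") as stream:
        data = json.load(stream)

    env = dict(os.environ)
    env.update(data.get("env") or {})

    # Start the application detached from the OpenPype process tree.
    subprocess.Popen(data["args"], env=env)


if __name__ == "__main__":
    # The handoff file path is passed as the last argument.
    main(sys.argv[-1])
```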
diff --git a/openpype/lib/log.py b/openpype/lib/log.py
index 85cbc733ba..a42faef008 100644
--- a/openpype/lib/log.py
+++ b/openpype/lib/log.py
@@ -27,7 +27,7 @@ import copy
 from . import Terminal
 from .mongo import (
     MongoEnvNotSet,
-    decompose_url,
+    get_default_components,
     OpenPypeMongoConnection
 )
 try:
@@ -202,8 +202,9 @@ class PypeLogger:
     use_mongo_logging = None
     mongo_process_id = None
 
-    # Information about mongo url
-    log_mongo_url = None
+    # Backwards compatibility - was used in start.py
+    # TODO remove when all old builds are replaced with new one
+    #   not using 'log_mongo_url_components'
     log_mongo_url_components = None
 
     # Database name in Mongo
@@ -282,9 +283,9 @@ class PypeLogger:
         if not cls.use_mongo_logging:
             return
 
-        components = cls.log_mongo_url_components
+        components = get_default_components()
         kwargs = {
-            "host": cls.log_mongo_url,
+            "host": components["host"],
             "database_name": cls.log_database_name,
             "collection": cls.log_collection_name,
             "username": components["username"],
@@ -324,6 +325,7 @@ class PypeLogger:
         # Change initialization state to prevent runtime changes
         # if is executed during runtime
         cls.initialized = False
+        cls.log_mongo_url_components = get_default_components()
 
         # Define if should logging to mongo be used
         use_mongo_logging = bool(log4mongo is not None)
@@ -354,14 +356,8 @@ class PypeLogger:
         # Define if is in OPENPYPE_DEBUG mode
         cls.pype_debug = int(os.getenv("OPENPYPE_DEBUG") or "0")
 
-        # Mongo URL where logs will be stored
-        cls.log_mongo_url = os.environ.get("OPENPYPE_MONGO")
-
-        if not cls.log_mongo_url:
+        if not os.environ.get("OPENPYPE_MONGO"):
             cls.use_mongo_logging = False
-        else:
-            # Decompose url
-            cls.log_mongo_url_components = decompose_url(cls.log_mongo_url)
 
         # Mark as initialized
         cls.initialized = True
@@ -474,7 +470,7 @@ class PypeLogger:
         if not cls.initialized:
             cls.initialize()
 
-        return OpenPypeMongoConnection.get_mongo_client(cls.log_mongo_url)
+        return OpenPypeMongoConnection.get_mongo_client()
 
 
 def timeit(method):
diff --git a/openpype/lib/mongo.py b/openpype/lib/mongo.py
index 0fd4517b5b..7e0bd4f796 100644
--- a/openpype/lib/mongo.py
+++ b/openpype/lib/mongo.py
@@ -15,7 +15,19 @@ class MongoEnvNotSet(Exception):
     pass
 
 
-def decompose_url(url):
+def _decompose_url(url):
+    """Decompose mongo url to basic components.
+
+    Used for creation of MongoHandler which expects mongo url components as
+    separate kwargs. The components are in the end not used, as we're
+    setting the connection directly; they are just dummy components so the
+    MongoHandler validation passes.
+    """
+    # Use first url from passed url
+    # - this is because it is possible to pass multiple urls for multiple
+    #   replica sets which would crash on urlparse otherwise
+    # - please don't use a comma in username or password
+    url = url.split(",")[0]
     components = {
         "scheme": None,
         "host": None,
@@ -48,42 +60,13 @@
     return components
 
 
-def compose_url(scheme=None,
-                host=None,
-                username=None,
-                password=None,
-                port=None,
-                auth_db=None):
-
-    url = "{scheme}://"
-
-    if username and password:
-        url += "{username}:{password}@"
-
-    url += "{host}"
-    if port:
-        url += ":{port}"
-
-    if auth_db:
-        url += "?authSource={auth_db}"
-
-    return url.format(**{
-        "scheme": scheme,
-        "host": host,
-        "username": username,
-        "password": password,
-        "port": port,
-        "auth_db": auth_db
-    })
-
-
 def get_default_components():
     mongo_url = os.environ.get("OPENPYPE_MONGO")
     if mongo_url is None:
         raise MongoEnvNotSet(
             "URL for Mongo logging connection is not set."
) - return decompose_url(mongo_url) + return _decompose_url(mongo_url) def should_add_certificate_path_to_mongo_url(mongo_url): diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 6fd0ad0dfe..12e9e2db9c 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -1,13 +1,15 @@ -import json -import logging import os import re import abc +import json +import logging import six +from openpype.settings import get_project_settings +from openpype.settings.lib import get_site_local_overrides from .anatomy import Anatomy -from openpype.settings import get_project_settings +from .profiles_filtering import filter_profiles log = logging.getLogger(__name__) @@ -200,6 +202,58 @@ def get_project_basic_paths(project_name): return _list_path_items(folder_structure) +def create_workdir_extra_folders( + workdir, host_name, task_type, task_name, project_name, + project_settings=None +): + """Create extra folders in work directory based on context. + + Args: + workdir (str): Path to workdir where workfiles is stored. + host_name (str): Name of host implementation. + task_type (str): Type of task for which extra folders should be + created. + task_name (str): Name of task for which extra folders should be + created. + project_name (str): Name of project on which task is. + project_settings (dict): Prepared project settings. Are loaded if not + passed. + """ + # Load project settings if not set + if not project_settings: + project_settings = get_project_settings(project_name) + + # Load extra folders profiles + extra_folders_profiles = ( + project_settings["global"]["tools"]["Workfiles"]["extra_folders"] + ) + # Skip if are empty + if not extra_folders_profiles: + return + + # Prepare profiles filters + filter_data = { + "task_types": task_type, + "task_names": task_name, + "hosts": host_name + } + profile = filter_profiles(extra_folders_profiles, filter_data) + if profile is None: + return + + for subfolder in profile["folders"]: + # Make sure backslashes are converted to forwards slashes + # and does not start with slash + subfolder = subfolder.replace("\\", "/").lstrip("/") + # Skip empty strings + if not subfolder: + continue + + fullpath = os.path.join(workdir, subfolder) + if not os.path.exists(fullpath): + os.makedirs(fullpath) + + @six.add_metaclass(abc.ABCMeta) class HostDirmap: """ @@ -307,7 +361,6 @@ class HostDirmap: mapping = {} if not project_settings["global"]["sync_server"]["enabled"]: - log.debug("Site Sync not enabled") return mapping from openpype.settings.lib import get_site_local_overrides diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 2a859da7cb..7c66f9760d 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -227,20 +227,27 @@ def filter_pyblish_plugins(plugins): # iterate over plugins for plugin in plugins[:]: - file = os.path.normpath(inspect.getsourcefile(plugin)) - file = os.path.normpath(file) - - # host determined from path - host_from_file = file.split(os.path.sep)[-4:-3][0] - plugin_kind = file.split(os.path.sep)[-2:-1][0] - - # TODO: change after all plugins are moved one level up - if host_from_file == "openpype": - host_from_file = "global" - try: config_data = presets[host]["publish"][plugin.__name__] except KeyError: + # host determined from path + file = os.path.normpath(inspect.getsourcefile(plugin)) + file = os.path.normpath(file) + + split_path = file.split(os.path.sep) + if len(split_path) < 4: + log.warning( + 'plugin path too short to extract host {}'.format(file) + ) + 
continue + + host_from_file = split_path[-4] + plugin_kind = split_path[-2] + + # TODO: change after all plugins are moved one level up + if host_from_file == "openpype": + host_from_file = "global" + try: config_data = presets[host_from_file][plugin_kind][plugin.__name__] # noqa: E501 except KeyError: diff --git a/openpype/lib/remote_publish.py b/openpype/lib/remote_publish.py index d7db4d1ab9..8074b2d112 100644 --- a/openpype/lib/remote_publish.py +++ b/openpype/lib/remote_publish.py @@ -2,6 +2,7 @@ import os from datetime import datetime import sys from bson.objectid import ObjectId +import collections import pyblish.util import pyblish.api @@ -11,6 +12,25 @@ from openpype.lib.mongo import OpenPypeMongoConnection from openpype.lib.plugin_tools import parse_json +def headless_publish(log, close_plugin_name=None, is_test=False): + """Runs publish in a opened host with a context and closes Python process. + + Host is being closed via ClosePS pyblish plugin which triggers 'exit' + method in ConsoleTrayApp. + """ + if not is_test: + dbcon = get_webpublish_conn() + _id = os.environ.get("BATCH_LOG_ID") + if not _id: + log.warning("Unable to store log records, " + "batch will be unfinished!") + return + + publish_and_log(dbcon, _id, log, close_plugin_name) + else: + publish(log, close_plugin_name) + + def get_webpublish_conn(): """Get connection to OP 'webpublishes' collection.""" mongo_client = OpenPypeMongoConnection.get_mongo_client() @@ -37,6 +57,33 @@ def start_webpublish_log(dbcon, batch_id, user): }).inserted_id +def publish(log, close_plugin_name=None): + """Loops through all plugins, logs to console. Used for tests. + + Args: + log (OpenPypeLogger) + close_plugin_name (str): name of plugin with responsibility to + close host app + """ + # Error exit as soon as any error occurs. + error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}" + + close_plugin = _get_close_plugin(close_plugin_name, log) + + for result in pyblish.util.publish_iter(): + for record in result["records"]: + log.info("{}: {}".format( + result["plugin"].label, record.msg)) + + if result["error"]: + log.error(error_format.format(**result)) + uninstall() + if close_plugin: # close host app explicitly after error + context = pyblish.api.Context() + close_plugin().process(context) + sys.exit(1) + + def publish_and_log(dbcon, _id, log, close_plugin_name=None): """Loops through all plugins, logs ok and fails into OP DB. 
@@ -140,7 +187,9 @@ def find_variant_key(application_manager, host): found_variant_key = None # finds most up-to-date variant if any installed - for variant_key, variant in app_group.variants.items(): + sorted_variants = collections.OrderedDict( + sorted(app_group.variants.items())) + for variant_key, variant in sorted_variants.items(): for executable in variant.executables: if executable.exists(): found_variant_key = variant_key diff --git a/openpype/modules/default_modules/avalon_apps/__init__.py b/openpype/modules/avalon_apps/__init__.py similarity index 100% rename from openpype/modules/default_modules/avalon_apps/__init__.py rename to openpype/modules/avalon_apps/__init__.py diff --git a/openpype/modules/default_modules/avalon_apps/avalon_app.py b/openpype/modules/avalon_apps/avalon_app.py similarity index 86% rename from openpype/modules/default_modules/avalon_apps/avalon_app.py rename to openpype/modules/avalon_apps/avalon_app.py index 9e650a097e..51a22323f1 100644 --- a/openpype/modules/default_modules/avalon_apps/avalon_app.py +++ b/openpype/modules/avalon_apps/avalon_app.py @@ -13,14 +13,6 @@ class AvalonModule(OpenPypeModule, ITrayModule): avalon_settings = modules_settings[self.name] - # Check if environment is already set - avalon_mongo_url = os.environ.get("AVALON_MONGO") - if not avalon_mongo_url: - avalon_mongo_url = avalon_settings["AVALON_MONGO"] - # Use pype mongo if Avalon's mongo not defined - if not avalon_mongo_url: - avalon_mongo_url = os.environ["OPENPYPE_MONGO"] - thumbnail_root = os.environ.get("AVALON_THUMBNAIL_ROOT") if not thumbnail_root: thumbnail_root = avalon_settings["AVALON_THUMBNAIL_ROOT"] @@ -31,7 +23,6 @@ class AvalonModule(OpenPypeModule, ITrayModule): avalon_mongo_timeout = avalon_settings["AVALON_TIMEOUT"] self.thumbnail_root = thumbnail_root - self.avalon_mongo_url = avalon_mongo_url self.avalon_mongo_timeout = avalon_mongo_timeout # Tray attributes @@ -51,12 +42,20 @@ class AvalonModule(OpenPypeModule, ITrayModule): def tray_init(self): # Add library tool try: + from Qt import QtCore from openpype.tools.libraryloader import LibraryLoaderWindow - self.libraryloader = LibraryLoaderWindow( + libraryloader = LibraryLoaderWindow( show_projects=True, show_libraries=True ) + # Remove always on top flag for tray + window_flags = libraryloader.windowFlags() + if window_flags | QtCore.Qt.WindowStaysOnTopHint: + window_flags ^= QtCore.Qt.WindowStaysOnTopHint + libraryloader.setWindowFlags(window_flags) + self.libraryloader = libraryloader + except Exception: self.log.warning( "Couldn't load Library loader tool for tray.", diff --git a/openpype/modules/default_modules/avalon_apps/rest_api.py b/openpype/modules/avalon_apps/rest_api.py similarity index 100% rename from openpype/modules/default_modules/avalon_apps/rest_api.py rename to openpype/modules/avalon_apps/rest_api.py diff --git a/openpype/modules/base.py b/openpype/modules/base.py index 7ecfeae7bd..b5c491a1c0 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -29,6 +29,22 @@ from openpype.settings.lib import ( from openpype.lib import PypeLogger +DEFAULT_OPENPYPE_MODULES = ( + "avalon_apps", + "clockify", + "log_viewer", + "muster", + "python_console_interpreter", + "slack", + "webserver", + "launcher_action", + "project_manager_action", + "settings_action", + "standalonepublish_action", + "job_queue", +) + + # Inherit from `object` for Python 2 hosts class _ModuleClass(object): """Fake module class for storing OpenPype modules. 
@@ -272,17 +288,12 @@ def _load_modules(): log = PypeLogger.get_logger("ModulesLoader") # Import default modules imported from 'openpype.modules' - for default_module_name in ( - "settings_action", - "launcher_action", - "project_manager_action", - "standalonepublish_action", - ): + for default_module_name in DEFAULT_OPENPYPE_MODULES: try: - default_module = __import__( - "openpype.modules.{}".format(default_module_name), - fromlist=("", ) - ) + import_str = "openpype.modules.{}".format(default_module_name) + new_import_str = "{}.{}".format(modules_key, default_module_name) + default_module = __import__(import_str, fromlist=("", )) + sys.modules[new_import_str] = default_module setattr(openpype_modules, default_module_name, default_module) except Exception: diff --git a/openpype/modules/default_modules/clockify/__init__.py b/openpype/modules/clockify/__init__.py similarity index 100% rename from openpype/modules/default_modules/clockify/__init__.py rename to openpype/modules/clockify/__init__.py diff --git a/openpype/modules/default_modules/clockify/clockify_api.py b/openpype/modules/clockify/clockify_api.py similarity index 100% rename from openpype/modules/default_modules/clockify/clockify_api.py rename to openpype/modules/clockify/clockify_api.py diff --git a/openpype/modules/default_modules/clockify/clockify_module.py b/openpype/modules/clockify/clockify_module.py similarity index 100% rename from openpype/modules/default_modules/clockify/clockify_module.py rename to openpype/modules/clockify/clockify_module.py diff --git a/openpype/modules/default_modules/clockify/constants.py b/openpype/modules/clockify/constants.py similarity index 100% rename from openpype/modules/default_modules/clockify/constants.py rename to openpype/modules/clockify/constants.py diff --git a/openpype/modules/default_modules/clockify/ftrack/server/action_clockify_sync_server.py b/openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py similarity index 100% rename from openpype/modules/default_modules/clockify/ftrack/server/action_clockify_sync_server.py rename to openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py diff --git a/openpype/modules/default_modules/clockify/ftrack/user/action_clockify_sync_local.py b/openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py similarity index 100% rename from openpype/modules/default_modules/clockify/ftrack/user/action_clockify_sync_local.py rename to openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py diff --git a/openpype/modules/default_modules/clockify/launcher_actions/ClockifyStart.py b/openpype/modules/clockify/launcher_actions/ClockifyStart.py similarity index 100% rename from openpype/modules/default_modules/clockify/launcher_actions/ClockifyStart.py rename to openpype/modules/clockify/launcher_actions/ClockifyStart.py diff --git a/openpype/modules/default_modules/clockify/launcher_actions/ClockifySync.py b/openpype/modules/clockify/launcher_actions/ClockifySync.py similarity index 100% rename from openpype/modules/default_modules/clockify/launcher_actions/ClockifySync.py rename to openpype/modules/clockify/launcher_actions/ClockifySync.py diff --git a/openpype/modules/default_modules/clockify/widgets.py b/openpype/modules/clockify/widgets.py similarity index 100% rename from openpype/modules/default_modules/clockify/widgets.py rename to openpype/modules/clockify/widgets.py diff --git a/openpype/modules/default_modules/deadline/plugins/publish/submit_maya_deadline.py 
b/openpype/modules/default_modules/deadline/plugins/publish/submit_maya_deadline.py index e6c42374ca..51a19e2aad 100644 --- a/openpype/modules/default_modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/default_modules/deadline/plugins/publish/submit_maya_deadline.py @@ -394,9 +394,14 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin): self.log.debug(filepath) # Gather needed data ------------------------------------------------ + default_render_file = instance.context.data.get('project_settings')\ + .get('maya')\ + .get('create')\ + .get('CreateRender')\ + .get('default_render_image_folder') filename = os.path.basename(filepath) comment = context.data.get("comment", "") - dirname = os.path.join(workspace, "renders") + dirname = os.path.join(workspace, default_render_file) renderlayer = instance.data['setMembers'] # rs_beauty deadline_user = context.data.get("user", getpass.getuser()) jobname = "%s - %s" % (filename, instance.name) diff --git a/openpype/modules/default_modules/ftrack/event_handlers_user/action_create_folders.py b/openpype/modules/default_modules/ftrack/event_handlers_user/action_create_folders.py index 994dbd90e4..8bbef9ad73 100644 --- a/openpype/modules/default_modules/ftrack/event_handlers_user/action_create_folders.py +++ b/openpype/modules/default_modules/ftrack/event_handlers_user/action_create_folders.py @@ -111,13 +111,6 @@ class CreateFolders(BaseAction): publish_template = publish_template[key] publish_has_apps = "{app" in publish_template - tools_settings = project_settings["global"]["tools"] - app_presets = tools_settings["Workfiles"]["sw_folders"] - app_manager_apps = None - if app_presets and (work_has_apps or publish_has_apps): - app_manager_apps = ApplicationManager().applications - - cached_apps = {} collected_paths = [] for entity in all_entities: if entity.entity_type.lower() == "project": @@ -143,26 +136,10 @@ class CreateFolders(BaseAction): if child["object_type"]["name"].lower() != "task": continue tasks_created = True - task_type_name = child["type"]["name"].lower() task_data = ent_data.copy() task_data["task"] = child["name"] apps = [] - if app_manager_apps: - possible_apps = app_presets.get(task_type_name) or [] - for app_name in possible_apps: - - if app_name in cached_apps: - apps.append(cached_apps[app_name]) - continue - - app_def = app_manager_apps.get(app_name) - if app_def and app_def.is_host: - app_dir = app_def.host_name - else: - app_dir = app_name - cached_apps[app_name] = app_dir - apps.append(app_dir) # Template wok if work_has_apps: diff --git a/openpype/modules/default_modules/ftrack/launch_hooks/post_ftrack_changes.py b/openpype/modules/default_modules/ftrack/launch_hooks/post_ftrack_changes.py index df16cde2b8..d5a95fad91 100644 --- a/openpype/modules/default_modules/ftrack/launch_hooks/post_ftrack_changes.py +++ b/openpype/modules/default_modules/ftrack/launch_hooks/post_ftrack_changes.py @@ -52,7 +52,7 @@ class PostFtrackHook(PostLaunchHook): ) if entity: self.ftrack_status_change(session, entity, project_name) - self.start_timer(session, entity, ftrack_api) + except Exception: self.log.warning( "Couldn't finish Ftrack procedure.", exc_info=True @@ -160,26 +160,3 @@ class PostFtrackHook(PostLaunchHook): " on Ftrack entity type \"{}\"" ).format(next_status_name, entity.entity_type) self.log.warning(msg) - - def start_timer(self, session, entity, _ftrack_api): - """Start Ftrack timer on task from context.""" - self.log.debug("Triggering timer start.") - - user_entity = session.query("User where 
username is \"{}\"".format(
-            os.environ["FTRACK_API_USER"]
-        )).first()
-        if not user_entity:
-            self.log.warning(
-                "Couldn't find user with username \"{}\" in Ftrack".format(
-                    os.environ["FTRACK_API_USER"]
-                )
-            )
-            return
-
-        try:
-            user_entity.start_timer(entity, force=True)
-            session.commit()
-            self.log.debug("Timer start triggered successfully.")
-
-        except Exception:
-            self.log.warning("Couldn't trigger Ftrack timer.", exc_info=True)
diff --git a/openpype/modules/default_modules/ftrack/lib/avalon_sync.py b/openpype/modules/default_modules/ftrack/lib/avalon_sync.py
index 3ba874281a..f58eb91485 100644
--- a/openpype/modules/default_modules/ftrack/lib/avalon_sync.py
+++ b/openpype/modules/default_modules/ftrack/lib/avalon_sync.py
@@ -360,6 +360,8 @@ class SyncEntitiesFactory:
         self._subsets_by_parent_id = None
         self._changeability_by_mongo_id = None
 
+        self._object_types_by_name = None
+
         self.all_filtered_entities = {}
         self.filtered_ids = []
         self.not_selected_ids = []
@@ -651,6 +653,18 @@ class SyncEntitiesFactory:
         self._bubble_changeability(list(self.subsets_by_parent_id.keys()))
         return self._changeability_by_mongo_id
 
+    @property
+    def object_types_by_name(self):
+        if self._object_types_by_name is None:
+            object_types_by_name = self.session.query(
+                "select id, name from ObjectType"
+            ).all()
+            self._object_types_by_name = {
+                object_type["name"]: object_type
+                for object_type in object_types_by_name
+            }
+        return self._object_types_by_name
+
     @property
     def all_ftrack_names(self):
         """
@@ -880,10 +894,7 @@ class SyncEntitiesFactory:
         custom_attrs, hier_attrs = get_openpype_attr(
             self.session, query_keys=self.cust_attr_query_keys
         )
-        ent_types = self.session.query("select id, name from ObjectType").all()
-        ent_types_by_name = {
-            ent_type["name"]: ent_type["id"] for ent_type in ent_types
-        }
+        ent_types_by_name = self.object_types_by_name
         # Custom attribute types
         cust_attr_types = self.session.query(
             "select id, name from CustomAttributeType"
@@ -2491,7 +2502,13 @@ class SyncEntitiesFactory:
             parent_entity = self.entities_dict[parent_id]["entity"]
 
         _name = av_entity["name"]
-        _type = av_entity["data"].get("entityType", "folder")
+        _type = av_entity["data"].get("entityType")
+        # Check existence of object type
+        if _type and _type not in self.object_types_by_name:
+            _type = None
+
+        if not _type:
+            _type = "Folder"
 
         self.log.debug((
             "Re-creating deleted entity {} <{}>"
diff --git a/openpype/modules/default_modules/timers_manager/exceptions.py b/openpype/modules/default_modules/timers_manager/exceptions.py
new file mode 100644
index 0000000000..5a9e00765d
--- /dev/null
+++ b/openpype/modules/default_modules/timers_manager/exceptions.py
@@ -0,0 +1,3 @@
+class InvalidContextError(ValueError):
+    """Context for which the timer should be started is invalid."""
+    pass
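A sketch of how a caller might handle the new exception; the import path and the `timers_manager` object are illustrative only:

```python
from openpype.modules.default_modules.timers_manager.exceptions import (
    InvalidContextError
)

try:
    timers_manager.start_timer("MyProject", "sh010", "animation")
except InvalidContextError as exc:
    # The project/asset/task combination could not be resolved.
    print("Timer was not started: {}".format(exc))
```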
+ """ + order = None + + def execute(self): + project_name = self.data.get("project_name") + asset_name = self.data.get("asset_name") + task_name = self.data.get("task_name") + + missing_context_keys = set() + if not project_name: + missing_context_keys.add("project_name") + if not asset_name: + missing_context_keys.add("asset_name") + if not task_name: + missing_context_keys.add("task_name") + + if missing_context_keys: + missing_keys_str = ", ".join([ + "\"{}\"".format(key) for key in missing_context_keys + ]) + self.log.debug("Hook {} skipped. Missing data keys: {}".format( + self.__class__.__name__, missing_keys_str + )) + return + + timers_manager = self.modules_manager.modules_by_name.get( + "timers_manager" + ) + if not timers_manager or not timers_manager.enabled: + self.log.info(( + "Skipping starting timer because" + " TimersManager is not available." + )) + return + + timers_manager.start_timer_with_webserver( + project_name, asset_name, task_name, logger=self.log + ) diff --git a/openpype/modules/default_modules/timers_manager/rest_api.py b/openpype/modules/default_modules/timers_manager/rest_api.py index 19b72d688b..f16cb316c3 100644 --- a/openpype/modules/default_modules/timers_manager/rest_api.py +++ b/openpype/modules/default_modules/timers_manager/rest_api.py @@ -39,17 +39,23 @@ class TimersManagerModuleRestApi: async def start_timer(self, request): data = await request.json() try: - project_name = data['project_name'] - asset_name = data['asset_name'] - task_name = data['task_name'] - hierarchy = data['hierarchy'] + project_name = data["project_name"] + asset_name = data["asset_name"] + task_name = data["task_name"] except KeyError: - log.error("Payload must contain fields 'project_name, " + - "'asset_name', 'task_name', 'hierarchy'") - return Response(status=400) + msg = ( + "Payload must contain fields 'project_name," + " 'asset_name' and 'task_name'" + ) + log.error(msg) + return Response(status=400, message=msg) self.module.stop_timers() - self.module.start_timer(project_name, asset_name, task_name, hierarchy) + try: + self.module.start_timer(project_name, asset_name, task_name) + except Exception as exc: + return Response(status=404, message=str(exc)) + return Response(status=200) async def stop_timer(self, request): diff --git a/openpype/modules/default_modules/timers_manager/timers_manager.py b/openpype/modules/default_modules/timers_manager/timers_manager.py index 0f165ff0ac..47d020104b 100644 --- a/openpype/modules/default_modules/timers_manager/timers_manager.py +++ b/openpype/modules/default_modules/timers_manager/timers_manager.py @@ -1,9 +1,15 @@ import os import platform -from openpype.modules import OpenPypeModule -from openpype_interfaces import ITrayService + from avalon.api import AvalonMongoDB +from openpype.modules import OpenPypeModule +from openpype_interfaces import ( + ITrayService, + ILaunchHookPaths +) +from .exceptions import InvalidContextError + class ExampleTimersManagerConnector: """Timers manager can handle timers of multiple modules/addons. @@ -64,7 +70,7 @@ class ExampleTimersManagerConnector: self._timers_manager_module.timer_stopped(self._module.id) -class TimersManager(OpenPypeModule, ITrayService): +class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths): """ Handles about Timers. Should be able to start/stop all timers at once. 
@@ -151,47 +157,112 @@ class TimersManager(OpenPypeModule, ITrayService): self._idle_manager.stop() self._idle_manager.wait() - def start_timer(self, project_name, asset_name, task_name, hierarchy): - """ - Start timer for 'project_name', 'asset_name' and 'task_name' + def get_timer_data_for_path(self, task_path): + """Convert a string path to timer data. - Called from REST api by hosts. - - Args: - project_name (string) - asset_name (string) - task_name (string) - hierarchy (string) + It is expected that the first item is the project name, the last item + is the task name and the parent asset name is right before the task + name, e.g. "project/assets/sq01/sh010/animation" resolves to project + "project", asset "sh010" and task "animation". """ + path_items = task_path.split("/") + if len(path_items) < 3: + raise InvalidContextError("Invalid path \"{}\"".format(task_path)) + task_name = path_items.pop(-1) + asset_name = path_items.pop(-1) + project_name = path_items.pop(0) + return self.get_timer_data_for_context( + project_name, asset_name, task_name, self.log + ) + + def get_launch_hook_paths(self): + """Implementation of `ILaunchHookPaths`.""" + return os.path.join( + os.path.dirname(os.path.abspath(__file__)), + "launch_hooks" + ) + + @staticmethod + def get_timer_data_for_context( + project_name, asset_name, task_name, logger=None + ): + """Prepare data for timer-related callbacks. + + TODO: + - return predefined object that has access to asset document etc. + """ + if not project_name or not asset_name or not task_name: + raise InvalidContextError(( + "Missing context information, got" + " Project: \"{}\" Asset: \"{}\" Task: \"{}\"" + ).format(str(project_name), str(asset_name), str(task_name))) + dbconn = AvalonMongoDB() dbconn.install() dbconn.Session["AVALON_PROJECT"] = project_name - asset_doc = dbconn.find_one({ - "type": "asset", "name": asset_name - }) + asset_doc = dbconn.find_one( + { + "type": "asset", + "name": asset_name + }, + { + "data.tasks": True, + "data.parents": True + } + ) if not asset_doc: - raise ValueError("Uknown asset {}".format(asset_name)) + dbconn.uninstall() + raise InvalidContextError(( + "Asset \"{}\" not found in project \"{}\"" + ).format(asset_name, project_name)) - task_type = '' + asset_data = asset_doc.get("data") or {} + asset_tasks = asset_data.get("tasks") or {} + if task_name not in asset_tasks: + dbconn.uninstall() + raise InvalidContextError(( + "Task \"{}\" not found on asset \"{}\" in project \"{}\"" + ).format(task_name, asset_name, project_name)) + + task_type = "" try: - task_type = asset_doc["data"]["tasks"][task_name]["type"] + task_type = asset_tasks[task_name]["type"] except KeyError: - self.log.warning("Couldn't find task_type for {}". - format(task_name)) + msg = "Couldn't find task_type for {}".format(task_name) + if logger is not None: + logger.warning(msg) + else: + print(msg) - hierarchy = hierarchy.split("\\") - hierarchy.append(asset_name) + hierarchy_items = asset_data.get("parents") or [] + hierarchy_items.append(asset_name) - data = { + dbconn.uninstall() + return { + "project_name": project_name, "task_name": task_name, "task_type": task_type, - "hierarchy": hierarchy + "hierarchy": hierarchy_items } + + def start_timer(self, project_name, asset_name, task_name): + """Start timer for passed context. + + Args: + project_name (str): Project name + asset_name (str): Asset name + task_name (str): Task name + """ + data = self.get_timer_data_for_context( + project_name, asset_name, task_name, self.log + ) self.timer_started(None, data) def get_task_time(self, project_name, asset_name, task_name): + """Get total time for passed context. 
+ + TODO: + - convert context to timer data + """ times = {} for module_id, connector in self._connectors_by_module_id.items(): if hasattr(connector, "get_task_time"): @@ -202,6 +273,10 @@ class TimersManager(OpenPypeModule, ITrayService): return times def timer_started(self, source_id, data): + """Connector triggered that timer has started. + + A new timer has started for the context in data. + """ for module_id, connector in self._connectors_by_module_id.items(): if module_id == source_id: continue @@ -219,6 +294,14 @@ class TimersManager(OpenPypeModule, ITrayService): self.is_running = True def timer_stopped(self, source_id): + """Connector triggered that its timer has stopped. + + Should stop all other timers. + + TODO: + - pass context for which timer has stopped to validate if timers are + same and valid + """ for module_id, connector in self._connectors_by_module_id.items(): if module_id == source_id: continue @@ -237,6 +320,7 @@ class TimersManager(OpenPypeModule, ITrayService): self.timer_started(None, self.last_task) def stop_timers(self): + """Stop all timers.""" if self.is_running is False: return @@ -295,18 +379,40 @@ class TimersManager(OpenPypeModule, ITrayService): self, server_manager ) - def change_timer_from_host(self, project_name, asset_name, task_name): - """Prepared method for calling change timers on REST api""" + @staticmethod + def start_timer_with_webserver( + project_name, asset_name, task_name, logger=None + ): + """Prepared method for starting a timer through the REST api. + + Webserver must be active. At the moment the webserver is running only + when OpenPype Tray is used. + + Args: + project_name (str): Project name. + asset_name (str): Asset name. + task_name (str): Task name. + logger (logging.Logger): Logger object. Using 'print' if not + passed. 
+ """ webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL") if not webserver_url: - self.log.warning("Couldn't find webserver url") + msg = "Couldn't find webserver url" + if logger is not None: + logger.warning(msg) + else: + print(msg) return rest_api_url = "{}/timers_manager/start_timer".format(webserver_url) try: import requests except Exception: - self.log.warning("Couldn't start timer") + msg = "Couldn't start timer ('requests' is not available)" + if logger is not None: + logger.warning(msg) + else: + print(msg) return data = { "project_name": project_name, @@ -314,4 +420,4 @@ class TimersManager(OpenPypeModule, ITrayService): "task_name": task_name } - requests.post(rest_api_url, json=data) + return requests.post(rest_api_url, json=data) diff --git a/openpype/modules/default_modules/job_queue/__init__.py b/openpype/modules/job_queue/__init__.py similarity index 100% rename from openpype/modules/default_modules/job_queue/__init__.py rename to openpype/modules/job_queue/__init__.py diff --git a/openpype/modules/default_modules/job_queue/job_server/__init__.py b/openpype/modules/job_queue/job_server/__init__.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/__init__.py rename to openpype/modules/job_queue/job_server/__init__.py diff --git a/openpype/modules/default_modules/job_queue/job_server/job_queue_route.py b/openpype/modules/job_queue/job_server/job_queue_route.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/job_queue_route.py rename to openpype/modules/job_queue/job_server/job_queue_route.py diff --git a/openpype/modules/default_modules/job_queue/job_server/jobs.py b/openpype/modules/job_queue/job_server/jobs.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/jobs.py rename to openpype/modules/job_queue/job_server/jobs.py diff --git a/openpype/modules/default_modules/job_queue/job_server/server.py b/openpype/modules/job_queue/job_server/server.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/server.py rename to openpype/modules/job_queue/job_server/server.py diff --git a/openpype/modules/default_modules/job_queue/job_server/utils.py b/openpype/modules/job_queue/job_server/utils.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/utils.py rename to openpype/modules/job_queue/job_server/utils.py diff --git a/openpype/modules/default_modules/job_queue/job_server/workers.py b/openpype/modules/job_queue/job_server/workers.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/workers.py rename to openpype/modules/job_queue/job_server/workers.py diff --git a/openpype/modules/default_modules/job_queue/job_server/workers_rpc_route.py b/openpype/modules/job_queue/job_server/workers_rpc_route.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_server/workers_rpc_route.py rename to openpype/modules/job_queue/job_server/workers_rpc_route.py diff --git a/openpype/modules/default_modules/job_queue/job_workers/__init__.py b/openpype/modules/job_queue/job_workers/__init__.py similarity index 100% rename from openpype/modules/default_modules/job_queue/job_workers/__init__.py rename to openpype/modules/job_queue/job_workers/__init__.py diff --git a/openpype/modules/default_modules/job_queue/job_workers/base_worker.py b/openpype/modules/job_queue/job_workers/base_worker.py similarity index 100% rename from 
openpype/modules/default_modules/job_queue/job_workers/base_worker.py rename to openpype/modules/job_queue/job_workers/base_worker.py diff --git a/openpype/modules/default_modules/job_queue/module.py b/openpype/modules/job_queue/module.py similarity index 100% rename from openpype/modules/default_modules/job_queue/module.py rename to openpype/modules/job_queue/module.py diff --git a/openpype/modules/default_modules/log_viewer/__init__.py b/openpype/modules/log_viewer/__init__.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/__init__.py rename to openpype/modules/log_viewer/__init__.py diff --git a/openpype/modules/default_modules/log_viewer/log_view_module.py b/openpype/modules/log_viewer/log_view_module.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/log_view_module.py rename to openpype/modules/log_viewer/log_view_module.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/.nojekyll b/openpype/modules/log_viewer/tray/__init__.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/.nojekyll rename to openpype/modules/log_viewer/tray/__init__.py diff --git a/openpype/modules/default_modules/log_viewer/tray/app.py b/openpype/modules/log_viewer/tray/app.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/tray/app.py rename to openpype/modules/log_viewer/tray/app.py diff --git a/openpype/modules/default_modules/log_viewer/tray/models.py b/openpype/modules/log_viewer/tray/models.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/tray/models.py rename to openpype/modules/log_viewer/tray/models.py diff --git a/openpype/modules/default_modules/log_viewer/tray/widgets.py b/openpype/modules/log_viewer/tray/widgets.py similarity index 100% rename from openpype/modules/default_modules/log_viewer/tray/widgets.py rename to openpype/modules/log_viewer/tray/widgets.py diff --git a/openpype/modules/default_modules/muster/__init__.py b/openpype/modules/muster/__init__.py similarity index 100% rename from openpype/modules/default_modules/muster/__init__.py rename to openpype/modules/muster/__init__.py diff --git a/openpype/modules/default_modules/muster/muster.py b/openpype/modules/muster/muster.py similarity index 100% rename from openpype/modules/default_modules/muster/muster.py rename to openpype/modules/muster/muster.py diff --git a/openpype/modules/default_modules/muster/rest_api.py b/openpype/modules/muster/rest_api.py similarity index 100% rename from openpype/modules/default_modules/muster/rest_api.py rename to openpype/modules/muster/rest_api.py diff --git a/openpype/modules/default_modules/muster/widget_login.py b/openpype/modules/muster/widget_login.py similarity index 100% rename from openpype/modules/default_modules/muster/widget_login.py rename to openpype/modules/muster/widget_login.py diff --git a/openpype/modules/default_modules/python_console_interpreter/__init__.py b/openpype/modules/python_console_interpreter/__init__.py similarity index 100% rename from openpype/modules/default_modules/python_console_interpreter/__init__.py rename to openpype/modules/python_console_interpreter/__init__.py diff --git a/openpype/modules/default_modules/python_console_interpreter/module.py b/openpype/modules/python_console_interpreter/module.py similarity index 100% rename from openpype/modules/default_modules/python_console_interpreter/module.py rename to 
openpype/modules/python_console_interpreter/module.py diff --git a/openpype/modules/default_modules/python_console_interpreter/window/__init__.py b/openpype/modules/python_console_interpreter/window/__init__.py similarity index 100% rename from openpype/modules/default_modules/python_console_interpreter/window/__init__.py rename to openpype/modules/python_console_interpreter/window/__init__.py diff --git a/openpype/modules/default_modules/python_console_interpreter/window/widgets.py b/openpype/modules/python_console_interpreter/window/widgets.py similarity index 100% rename from openpype/modules/default_modules/python_console_interpreter/window/widgets.py rename to openpype/modules/python_console_interpreter/window/widgets.py diff --git a/openpype/modules/default_modules/slack/README.md b/openpype/modules/slack/README.md similarity index 100% rename from openpype/modules/default_modules/slack/README.md rename to openpype/modules/slack/README.md diff --git a/openpype/modules/default_modules/slack/__init__.py b/openpype/modules/slack/__init__.py similarity index 100% rename from openpype/modules/default_modules/slack/__init__.py rename to openpype/modules/slack/__init__.py diff --git a/openpype/modules/default_modules/slack/launch_hooks/pre_python2_vendor.py b/openpype/modules/slack/launch_hooks/pre_python2_vendor.py similarity index 100% rename from openpype/modules/default_modules/slack/launch_hooks/pre_python2_vendor.py rename to openpype/modules/slack/launch_hooks/pre_python2_vendor.py diff --git a/openpype/modules/default_modules/slack/manifest.yml b/openpype/modules/slack/manifest.yml similarity index 100% rename from openpype/modules/default_modules/slack/manifest.yml rename to openpype/modules/slack/manifest.yml diff --git a/openpype/modules/default_modules/slack/plugins/publish/collect_slack_family.py b/openpype/modules/slack/plugins/publish/collect_slack_family.py similarity index 100% rename from openpype/modules/default_modules/slack/plugins/publish/collect_slack_family.py rename to openpype/modules/slack/plugins/publish/collect_slack_family.py diff --git a/openpype/modules/default_modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py similarity index 100% rename from openpype/modules/default_modules/slack/plugins/publish/integrate_slack_api.py rename to openpype/modules/slack/plugins/publish/integrate_slack_api.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.appveyor.yml b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.appveyor.yml similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.appveyor.yml rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.appveyor.yml diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.coveragerc b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.coveragerc similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.coveragerc rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.coveragerc diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.flake8 b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.flake8 similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.flake8 rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.flake8 diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/contributing.md b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/contributing.md similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/contributing.md rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/contributing.md diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/issue_template.md b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/issue_template.md similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/issue_template.md rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/issue_template.md diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/maintainers_guide.md b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/maintainers_guide.md similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/maintainers_guide.md rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/maintainers_guide.md diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/pull_request_template.md b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/pull_request_template.md similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.github/pull_request_template.md rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.github/pull_request_template.md diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.gitignore b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.gitignore similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.gitignore rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.gitignore diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.travis.yml b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/.travis.yml similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/.travis.yml rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/.travis.yml diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/LICENSE b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/LICENSE similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/LICENSE rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/LICENSE diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/MANIFEST.in b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/MANIFEST.in similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/MANIFEST.in rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/MANIFEST.in diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/README.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/README.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/README.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/README.rst diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/.gitignore b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/.gitignore similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/.gitignore rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/.gitignore diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/Makefile b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/Makefile similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/Makefile rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/Makefile diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/conf.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/conf.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/conf.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/conf.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/layout.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/layout.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/layout.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/layout.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/localtoc.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/localtoc.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/localtoc.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/localtoc.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/relations.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/relations.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/relations.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/relations.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/sidebar.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/sidebar.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/sidebar.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/sidebar.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/default.css_t b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/default.css_t similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/default.css_t rename to 
openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/default.css_t diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/docs.css_t b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/docs.css_t similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/docs.css_t rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/docs.css_t diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/pygments.css_t b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/pygments.css_t similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/pygments.css_t rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/static/pygments.css_t diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/theme.conf b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/theme.conf similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/theme.conf rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/_themes/slack/theme.conf diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/about.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/about.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/about.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/about.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/auth.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/auth.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/auth.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/auth.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/basic_usage.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/basic_usage.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/basic_usage.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/basic_usage.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/changelog.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/changelog.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/changelog.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/changelog.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conf.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conf.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conf.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conf.py diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conversations.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conversations.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conversations.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/conversations.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/faq.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/faq.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/faq.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/faq.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/index.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/index.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/index.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/index.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/make.bat b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/make.bat similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/make.bat rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/make.bat diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/metadata.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/metadata.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/metadata.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/metadata.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/real_time_messaging.rst b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/real_time_messaging.rst similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs-src/real_time_messaging.rst rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs-src/real_time_messaging.rst diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs.sh b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs.sh similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs.sh rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs.sh diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/.buildinfo b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/.buildinfo similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/.buildinfo rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/.buildinfo diff --git a/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/.nojekyll b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/.nojekyll new file mode 100644 index 0000000000..e69de29bb2 diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/ajax-loader.gif 
b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/ajax-loader.gif similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/ajax-loader.gif rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/ajax-loader.gif diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/basic.css b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/basic.css similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/basic.css rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/basic.css diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/classic.css b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/classic.css similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/classic.css rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/classic.css diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-bright.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-bright.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-bright.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-bright.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-close.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-close.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-close.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment-close.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/comment.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/default.css b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/default.css similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/default.css rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/default.css diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/docs.css b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/docs.css similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/docs.css rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/docs.css diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/doctools.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/doctools.js similarity index 100% rename from 
openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/doctools.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/doctools.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/documentation_options.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/documentation_options.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/documentation_options.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/documentation_options.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down-pressed.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down-pressed.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down-pressed.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down-pressed.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/down.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/file.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/file.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/file.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/file.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery-3.2.1.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery-3.2.1.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery-3.2.1.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery-3.2.1.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/jquery.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/language_data.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/language_data.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/language_data.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/language_data.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/minus.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/minus.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/minus.png rename to 
openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/minus.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/plus.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/plus.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/plus.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/plus.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/pygments.css b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/pygments.css similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/pygments.css rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/pygments.css diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/searchtools.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/searchtools.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/searchtools.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/searchtools.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/sidebar.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/sidebar.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/sidebar.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/sidebar.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore-1.3.1.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore-1.3.1.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore-1.3.1.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore-1.3.1.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/underscore.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up-pressed.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up-pressed.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up-pressed.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up-pressed.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/up.png diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/websupport.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/websupport.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/websupport.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/_static/websupport.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/about.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/about.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/about.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/about.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/auth.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/auth.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/auth.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/auth.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/basic_usage.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/basic_usage.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/basic_usage.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/basic_usage.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/changelog.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/changelog.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/changelog.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/changelog.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/conversations.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/conversations.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/conversations.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/conversations.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/faq.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/faq.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/faq.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/faq.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/genindex.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/genindex.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/genindex.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/genindex.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/index.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/index.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/index.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/index.html 
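The long run of renames here moves a batch of bundled modules (job_queue, log_viewer, muster, python_console_interpreter, slack, webserver) from `openpype/modules/default_modules/` one hierarchy level up to `openpype/modules/`. File contents are untouched (100% similarity), so only the package paths change. A hypothetical sketch of what that means for imports; `SlackModule` is an illustrative class name, not taken from this diff:

```python
# Illustration only: module files keep their names, the "default_modules"
# level disappears from the package path.

# before: from openpype.modules.default_modules.slack.slack_module import SlackModule
# after:  from openpype.modules.slack.slack_module import SlackModule
```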
diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/metadata.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/metadata.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/metadata.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/metadata.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/objects.inv b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/objects.inv similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/objects.inv rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/objects.inv diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/real_time_messaging.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/real_time_messaging.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/real_time_messaging.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/real_time_messaging.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/search.html b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/search.html similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/search.html rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/search.html diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/searchindex.js b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/searchindex.js similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/docs/searchindex.js rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/docs/searchindex.js diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/requirements.txt b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/requirements.txt similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/requirements.txt rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/requirements.txt diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/setup.cfg b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/setup.cfg similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/setup.cfg rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/setup.cfg diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/setup.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/setup.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/setup.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/setup.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/__init__.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/__init__.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/__init__.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/__init__.py diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/channel.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/channel.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/channel.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/channel.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/client.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/client.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/client.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/client.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/exceptions.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/exceptions.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/exceptions.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/exceptions.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/im.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/im.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/im.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/im.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/server.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/server.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/server.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/server.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/slackrequest.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/slackrequest.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/slackrequest.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/slackrequest.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/user.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/user.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/user.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/user.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/util.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/util.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/util.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/util.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/version.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/version.py similarity index 100% rename from 
openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/slackclient/version.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/slackclient/version.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/test_requirements.txt b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/test_requirements.txt similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/test_requirements.txt rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/test_requirements.txt diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/conftest.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/conftest.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/conftest.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/conftest.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/channel.created.json b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/channel.created.json similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/channel.created.json rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/channel.created.json diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/im.created.json b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/im.created.json similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/im.created.json rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/im.created.json diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/rtm.start.json b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/rtm.start.json similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/rtm.start.json rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/rtm.start.json diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/slack_logo.png b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/slack_logo.png similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/data/slack_logo.png rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/data/slack_logo.png diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_channel.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_channel.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_channel.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_channel.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_server.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_server.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_server.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_server.py diff --git 
a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackclient.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackclient.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackclient.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackclient.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackrequest.py b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackrequest.py similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackrequest.py rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tests/test_slackrequest.py diff --git a/openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tox.ini b/openpype/modules/slack/python2_vendor/python-slack-sdk-1/tox.ini similarity index 100% rename from openpype/modules/default_modules/slack/python2_vendor/python-slack-sdk-1/tox.ini rename to openpype/modules/slack/python2_vendor/python-slack-sdk-1/tox.ini diff --git a/openpype/modules/default_modules/slack/slack_module.py b/openpype/modules/slack/slack_module.py similarity index 100% rename from openpype/modules/default_modules/slack/slack_module.py rename to openpype/modules/slack/slack_module.py diff --git a/openpype/modules/default_modules/webserver/__init__.py b/openpype/modules/webserver/__init__.py similarity index 100% rename from openpype/modules/default_modules/webserver/__init__.py rename to openpype/modules/webserver/__init__.py diff --git a/openpype/modules/default_modules/webserver/base_routes.py b/openpype/modules/webserver/base_routes.py similarity index 100% rename from openpype/modules/default_modules/webserver/base_routes.py rename to openpype/modules/webserver/base_routes.py diff --git a/openpype/modules/default_modules/webserver/host_console_listener.py b/openpype/modules/webserver/host_console_listener.py similarity index 100% rename from openpype/modules/default_modules/webserver/host_console_listener.py rename to openpype/modules/webserver/host_console_listener.py diff --git a/openpype/modules/default_modules/webserver/server.py b/openpype/modules/webserver/server.py similarity index 100% rename from openpype/modules/default_modules/webserver/server.py rename to openpype/modules/webserver/server.py diff --git a/openpype/modules/default_modules/webserver/webserver_module.py b/openpype/modules/webserver/webserver_module.py similarity index 100% rename from openpype/modules/default_modules/webserver/webserver_module.py rename to openpype/modules/webserver/webserver_module.py diff --git a/openpype/plugins/load/delivery.py b/openpype/plugins/load/delivery.py index a8cb0070ee..1037d6dc16 100644 --- a/openpype/plugins/load/delivery.py +++ b/openpype/plugins/load/delivery.py @@ -1,13 +1,13 @@ -from collections import defaultdict import copy +from collections import defaultdict from Qt import QtWidgets, QtCore, QtGui -from avalon import api, style +from avalon import api from avalon.api import AvalonMongoDB from openpype.api import Anatomy, config -from openpype import resources +from openpype import resources, style from openpype.lib.delivery import ( sizeof_fmt, @@ -58,6 +58,18 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): def __init__(self, contexts, log=None, parent=None): super(DeliveryOptionsDialog, self).__init__(parent=parent) + 
self.setWindowTitle("OpenPype - Deliver versions") + icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) + self.setWindowIcon(icon) + + self.setWindowFlags( + QtCore.Qt.WindowStaysOnTopHint + | QtCore.Qt.WindowCloseButtonHint + | QtCore.Qt.WindowMinimizeButtonHint + ) + + self.setStyleSheet(style.load_stylesheet()) + project = contexts[0]["project"]["name"] self.anatomy = Anatomy(project) self._representations = None @@ -70,16 +82,6 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): self._set_representations(contexts) - self.setWindowTitle("OpenPype - Deliver versions") - icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) - self.setWindowIcon(icon) - - self.setWindowFlags( - QtCore.Qt.WindowCloseButtonHint | - QtCore.Qt.WindowMinimizeButtonHint - ) - self.setStyleSheet(style.load_stylesheet()) - dropdown = QtWidgets.QComboBox() self.templates = self._get_templates(self.anatomy) for name, _ in self.templates.items(): diff --git a/openpype/plugins/publish/cleanup.py b/openpype/plugins/publish/cleanup.py index b8104078d9..f29e6ccd4e 100644 --- a/openpype/plugins/publish/cleanup.py +++ b/openpype/plugins/publish/cleanup.py @@ -15,6 +15,25 @@ class CleanUp(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder + 10 label = "Clean Up" + hosts = [ + "aftereffects", + "blender", + "celaction", + "flame", + "fusion", + "harmony", + "hiero", + "houdini", + "maya", + "nuke", + "photoshop", + "resolve", + "tvpaint", + "unreal", + "standalonepublisher", + "webpublisher", + "shell" + ] exclude_families = ["clip"] optional = True active = True diff --git a/openpype/plugins/publish/cleanup_explicit.py b/openpype/plugins/publish/cleanup_explicit.py new file mode 100644 index 0000000000..88bba34532 --- /dev/null +++ b/openpype/plugins/publish/cleanup_explicit.py @@ -0,0 +1,152 @@ +# -*- coding: utf-8 -*- +"""Cleanup files when publishing is done.""" +import os +import shutil +import pyblish.api + + +class ExplicitCleanUp(pyblish.api.ContextPlugin): + """Cleans up the files and folder defined to be deleted. + + plugin is looking for 2 keys into context data: + - `cleanupFullPaths` - full paths that should be removed not matter if + is path to file or to directory + - `cleanupEmptyDirs` - full paths to directories that should be removed + only if do not contain any file in it but will be removed if contain + sub-folders + """ + + order = pyblish.api.IntegratorOrder + 10 + label = "Explicit Clean Up" + optional = True + active = True + + def process(self, context): + cleanup_full_paths = context.data.get("cleanupFullPaths") + cleanup_empty_dirs = context.data.get("cleanupEmptyDirs") + + self._remove_full_paths(cleanup_full_paths) + self._remove_empty_dirs(cleanup_empty_dirs) + + def _remove_full_paths(self, full_paths): + """Remove files and folders from disc. + + Folders are removed with whole content. 
+ """ + if not full_paths: + self.log.debug("No full paths to cleanup were collected.") + return + + # Separate paths into files and directories + filepaths = set() + dirpaths = set() + for path in full_paths: + # Skip empty items + if not path: + continue + # Normalize path + normalized = os.path.normpath(path) + # Check if path exists + if not os.path.exists(normalized): + continue + + if os.path.isfile(normalized): + filepaths.add(normalized) + else: + dirpaths.add(normalized) + + # Store failed paths with exception + failed = [] + # Store removed filepaths for logging + succeded_files = set() + # Remove file by file + for filepath in filepaths: + try: + os.remove(filepath) + succeded_files.add(filepath) + except Exception as exc: + failed.append((filepath, exc)) + + if succeded_files: + self.log.info( + "Removed files:\n{}".format("\n".join(succeded_files)) + ) + + # Delete folders with it's content + succeded_dirs = set() + for dirpath in dirpaths: + # Check if directory still exists + # - it is possible that directory was already deleted with + # different dirpath to delete + if os.path.exists(dirpath): + try: + shutil.rmtree(dirpath) + succeded_dirs.add(dirpath) + except Exception: + failed.append(dirpath) + + if succeded_dirs: + self.log.info( + "Removed direcoties:\n{}".format("\n".join(succeded_dirs)) + ) + + # Prepare lines for report of failed removements + lines = [] + for filepath, exc in failed: + lines.append("{}: {}".format(filepath, str(exc))) + + if lines: + self.log.warning( + "Failed to remove filepaths:\n{}".format("\n".join(lines)) + ) + + def _remove_empty_dirs(self, empty_dirpaths): + """Remove directories if do not contain any files.""" + if not empty_dirpaths: + self.log.debug("No empty dirs to cleanup were collected.") + return + + # First filtering of directories and making sure those are + # existing directories + filtered_dirpaths = set() + for path in empty_dirpaths: + if ( + path + and os.path.exists(path) + and os.path.isdir(path) + ): + filtered_dirpaths.add(os.path.normpath(path)) + + to_delete_dirpaths = set() + to_skip_dirpaths = set() + # Check if contain any files (or it's subfolders contain files) + for dirpath in filtered_dirpaths: + valid = True + for _, _, filenames in os.walk(dirpath): + if filenames: + valid = False + break + + if valid: + to_delete_dirpaths.add(dirpath) + else: + to_skip_dirpaths.add(dirpath) + + if to_skip_dirpaths: + self.log.debug( + "Skipped directories because contain files:\n{}".format( + "\n".join(to_skip_dirpaths) + ) + ) + + # Remove empty directies + for dirpath in to_delete_dirpaths: + if os.path.exists(dirpath): + shutil.rmtree(dirpath) + + if to_delete_dirpaths: + self.log.debug( + "Deleted empty directories:\n{}".format( + "\n".join(to_delete_dirpaths) + ) + ) diff --git a/openpype/plugins/publish/collect_anatomy_context_data.py b/openpype/plugins/publish/collect_anatomy_context_data.py index 6b95979b76..07de1b4420 100644 --- a/openpype/plugins/publish/collect_anatomy_context_data.py +++ b/openpype/plugins/publish/collect_anatomy_context_data.py @@ -49,24 +49,27 @@ class CollectAnatomyContextData(pyblish.api.ContextPlugin): project_entity = context.data["projectEntity"] asset_entity = context.data["assetEntity"] - hierarchy_items = asset_entity["data"]["parents"] - hierarchy = "" - if hierarchy_items: - hierarchy = os.path.join(*hierarchy_items) - asset_tasks = asset_entity["data"]["tasks"] task_type = asset_tasks.get(task_name, {}).get("type") project_task_types = project_entity["config"]["tasks"] task_code = 
+ asset_parents = asset_entity["data"]["parents"] + hierarchy = "/".join(asset_parents) + + parent_name = project_entity["name"] + if asset_parents: + parent_name = asset_parents[-1] + context_data = { "project": { "name": project_entity["name"], "code": project_entity["data"].get("code") }, "asset": asset_entity["name"], - "hierarchy": hierarchy.replace("\\", "/"), + "parent": parent_name, + "hierarchy": hierarchy, "task": { "name": task_name, "type": task_type, diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py index da6a2195ee..74b556e28a 100644 --- a/openpype/plugins/publish/collect_anatomy_instance_data.py +++ b/openpype/plugins/publish/collect_anatomy_instance_data.py @@ -242,7 +242,11 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin): asset_doc = instance.data.get("assetEntity") if asset_doc and asset_doc["_id"] != context_asset_doc["_id"]: parents = asset_doc["data"].get("parents") or list() + parent_name = project_doc["name"] + if parents: + parent_name = parents[-1] anatomy_updates["hierarchy"] = "/".join(parents) + anatomy_updates["parent"] = parent_name # Task task_name = instance.data.get("task") diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index 519e7c285b..e25b56744e 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -216,6 +216,7 @@ class PypeCommands: task_name, app_name ) + print("env:: {}".format(env)) os.environ.update(env) os.environ["OPENPYPE_PUBLISH_DATA"] = batch_dir @@ -304,13 +305,16 @@ class PypeCommands: log.info("Publish finished.") @staticmethod - def extractenvironments(output_json_path, project, asset, task, app): - env = os.environ.copy() + def extractenvironments( + output_json_path, project, asset, task, app, env_group + ): if all((project, asset, task, app)): from openpype.api import get_app_environments_for_context env = get_app_environments_for_context( - project, asset, task, app, env + project, asset, task, app, env_group ) + else: + env = os.environ.copy() output_dir = os.path.dirname(output_json_path) if not os.path.exists(output_dir): @@ -340,7 +344,8 @@ class PypeCommands: def validate_jsons(self): pass - def run_tests(self, folder, mark, pyargs): + def run_tests(self, folder, mark, pyargs, + test_data_folder, persist, app_variant): """ Runs tests from 'folder' @@ -348,25 +353,39 @@ class PypeCommands: folder (str): relative path to folder with tests mark (str): label to run tests marked by it (slow etc) pyargs (str): package path to test + test_data_folder (str): URL to an unzipped folder of test data + persist (bool): True to keep the test DB and published files + after the test ends + app_variant (str): variant (e.g. 2020 for AE); empty to use + the latest installed version """ print("run_tests") - import subprocess - if folder: folder = " ".join(list(folder)) else: folder = "../tests" - mark_str = pyargs_str = '' + # Disable warnings and show captured stdout even on success + args = ["--disable-pytest-warnings", "-rP", folder] + if mark: - mark_str = "-m {}".format(mark) + args.extend(["-m", mark]) if pyargs: - pyargs_str = "--pyargs {}".format(pyargs) + args.extend(["--pyargs", pyargs]) - cmd = "pytest {} {} {}".format(folder, mark_str, pyargs_str) - print("Running {}".format(cmd)) - subprocess.run(cmd) + if test_data_folder: + args.extend(["--test_data_folder", test_data_folder]) + + if persist: + args.extend(["--persist", persist]) + + if app_variant: +
args.extend(["--app_variant", app_variant]) + + print("run_tests args: {}".format(args)) + import pytest + pytest.main(args) def syncserver(self, active_site): """Start running sync_server in background.""" diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index 15a62ef38e..3fc1412e62 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -359,7 +359,8 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins): if frame_start is None: replacement_final = replacement_size = str(MISSING_KEY_VALUE) else: - replacement_final = "%{eif:n+" + str(frame_start) + ":d}" + replacement_final = "%{eif:n+" + str(frame_start) + ":d:" + \ + str(len(str(frame_end))) + "}" replacement_size = str(frame_end) final_text = final_text.replace( diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 55732f80ce..cff1259c98 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -291,21 +291,7 @@ "enabled": false } ], - "sw_folders": { - "compositing": [ - "nuke", - "ae" - ], - "modeling": [ - "maya", - "blender", - "zbrush" - ], - "lookdev": [ - "substance", - "textures" - ] - } + "extra_folders": [] }, "loader": { "family_filter_profiles": [ diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index f4b9760fe1..b75b0168ec 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -43,7 +43,8 @@ "defaults": [ "Main" ], - "aov_separator": "underscore" + "aov_separator": "underscore", + "default_render_image_folder": "renders" }, "CreateAnimation": { "enabled": true, diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json index 0c24c943ec..db9bf87268 100644 --- a/openpype/settings/defaults/project_settings/photoshop.json +++ b/openpype/settings/defaults/project_settings/photoshop.json @@ -7,11 +7,6 @@ } }, "publish": { - "ValidateContainers": { - "enabled": true, - "optional": true, - "active": true - }, "CollectRemoteInstances": { "color_code_mapping": [ { @@ -22,6 +17,15 @@ } ] }, + "ValidateContainers": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateNaming": { + "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,]", + "replace_char": "_" + }, "ExtractImage": { "formats": [ "png", diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index d536652581..1cbe09f576 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -142,7 +142,7 @@ "icon": "{}/app_icons/nuke.png", "host_name": "nuke", "environment": { - "NUKE_PATH": "{OPENPYPE_STUDIO_PLUGINS}/nuke" + "NUKE_PATH": ["{NUKE_PATH}", "{OPENPYPE_STUDIO_PLUGINS}/nuke"] }, "variants": { "13-0": { @@ -248,7 +248,7 @@ "icon": "{}/app_icons/nuke.png", "host_name": "nuke", "environment": { - "NUKE_PATH": "{OPENPYPE_STUDIO_PLUGINS}/nuke" + "NUKE_PATH": ["{NUKE_PATH}", "{OPENPYPE_STUDIO_PLUGINS}/nuke"] }, "variants": { "13-0": { @@ -1101,6 +1101,23 @@ "linux": [] }, "environment": {} + }, + "2022": { + "enabled": true, + "variant_label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2022\\Support Files\\AfterFX.exe" + ], 
+ "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": {} } } }, diff --git a/openpype/settings/entities/base_entity.py b/openpype/settings/entities/base_entity.py index 12754d345f..582937481a 100644 --- a/openpype/settings/entities/base_entity.py +++ b/openpype/settings/entities/base_entity.py @@ -235,6 +235,11 @@ class BaseItemEntity(BaseEntity): """Return system settings entity.""" pass + @abstractmethod + def has_child_with_key(self, key): + """Entity contains key as children.""" + pass + def schema_validations(self): """Validate schema of entity and it's hierachy. diff --git a/openpype/settings/entities/dict_conditional.py b/openpype/settings/entities/dict_conditional.py index 5f1c172f31..92512a6668 100644 --- a/openpype/settings/entities/dict_conditional.py +++ b/openpype/settings/entities/dict_conditional.py @@ -107,6 +107,9 @@ class DictConditionalEntity(ItemEntity): for _key, _value in new_value.items(): self.non_gui_children[self.current_enum][_key].set(_value) + def has_child_with_key(self, key): + return key in self.keys() + def _item_initialization(self): self._default_metadata = NOT_SET self._studio_override_metadata = NOT_SET diff --git a/openpype/settings/entities/dict_immutable_keys_entity.py b/openpype/settings/entities/dict_immutable_keys_entity.py index 6131fa2ac7..c477a0eb0f 100644 --- a/openpype/settings/entities/dict_immutable_keys_entity.py +++ b/openpype/settings/entities/dict_immutable_keys_entity.py @@ -205,6 +205,9 @@ class DictImmutableKeysEntity(ItemEntity): ) self.show_borders = self.schema_data.get("show_borders", True) + def has_child_with_key(self, key): + return key in self.non_gui_children + def collect_static_entities_by_path(self): output = {} if self.is_dynamic_item or self.is_in_dynamic_item: diff --git a/openpype/settings/entities/dict_mutable_keys_entity.py b/openpype/settings/entities/dict_mutable_keys_entity.py index cff346e9ea..08b0f75649 100644 --- a/openpype/settings/entities/dict_mutable_keys_entity.py +++ b/openpype/settings/entities/dict_mutable_keys_entity.py @@ -60,6 +60,12 @@ class DictMutableKeysEntity(EndpointEntity): def pop(self, key, *args, **kwargs): if key in self.required_keys: raise RequiredKeyModified(self.path, key) + + if self._override_state is OverrideState.STUDIO: + self._has_studio_override = True + elif self._override_state is OverrideState.PROJECT: + self._has_project_override = True + result = self.children_by_key.pop(key, *args, **kwargs) self.on_change() return result @@ -191,6 +197,9 @@ class DictMutableKeysEntity(EndpointEntity): child_entity = self.children_by_key[key] self.set_child_label(child_entity, label) + def has_child_with_key(self, key): + return key in self.children_by_key + def _item_initialization(self): self._default_metadata = {} self._studio_override_metadata = {} diff --git a/openpype/settings/entities/input_entities.py b/openpype/settings/entities/input_entities.py index a285bf3433..ff32df9262 100644 --- a/openpype/settings/entities/input_entities.py +++ b/openpype/settings/entities/input_entities.py @@ -118,6 +118,9 @@ class InputEntity(EndpointEntity): return self.value == other.value return self.value == other + def has_child_with_key(self, key): + return False + def get_child_path(self, child_obj): raise TypeError("{} can't have children".format( self.__class__.__name__ diff --git a/openpype/settings/entities/item_entities.py b/openpype/settings/entities/item_entities.py index ff0a982900..9c6f428b97 100644 --- 
a/openpype/settings/entities/item_entities.py +++ b/openpype/settings/entities/item_entities.py @@ -1,3 +1,7 @@ +import re + +import six + from .lib import ( NOT_SET, STRING_TYPE, @@ -48,6 +52,9 @@ class PathEntity(ItemEntity): raise AttributeError(self.attribute_error_msg.format("items")) return self.child_obj.items() + def has_child_with_key(self, key): + return self.child_obj.has_child_with_key(key) + def _item_initialization(self): if self.group_item is None and not self.is_group: self.is_group = True @@ -197,6 +204,7 @@ class PathEntity(ItemEntity): class ListStrictEntity(ItemEntity): schema_types = ["list-strict"] + _key_regex = re.compile(r"[0-9]+") def __getitem__(self, idx): if not isinstance(idx, int): @@ -216,6 +224,19 @@ class ListStrictEntity(ItemEntity): return self.children[idx] return default + def has_child_with_key(self, key): + if ( + key + and isinstance(key, six.string_types) + and self._key_regex.match(key) + ): + key = int(key) + + if not isinstance(key, int): + return False + + return 0 <= key < len(self.children) + def _item_initialization(self): self.valid_value_types = (list, ) self.require_key = True diff --git a/openpype/settings/entities/list_entity.py b/openpype/settings/entities/list_entity.py index 5d89a81351..0268c208bb 100644 --- a/openpype/settings/entities/list_entity.py +++ b/openpype/settings/entities/list_entity.py @@ -1,4 +1,6 @@ import copy +import six +import re from . import ( BaseEntity, EndpointEntity @@ -21,6 +23,7 @@ class ListEntity(EndpointEntity): "collapsible": True, "collapsed": False } + _key_regex = re.compile(r"[0-9]+") def __iter__(self): for item in self.children: @@ -144,6 +147,19 @@ class ListEntity(EndpointEntity): ) self.on_change() + def has_child_with_key(self, key): + if ( + key + and isinstance(key, six.string_types) + and self._key_regex.match(key) + ): + key = int(key) + + if not isinstance(key, int): + return False + + return 0 <= key < len(self.children) + def _convert_to_valid_type(self, value): if isinstance(value, (set, tuple)): return list(value) diff --git a/openpype/settings/entities/root_entities.py b/openpype/settings/entities/root_entities.py index b8baed8a93..687784a359 100644 --- a/openpype/settings/entities/root_entities.py +++ b/openpype/settings/entities/root_entities.py @@ -127,6 +127,9 @@ class RootEntity(BaseItemEntity): for _key, _value in new_value.items(): self.non_gui_children[_key].set(_value) + def has_child_with_key(self, key): + return key in self.non_gui_children + def keys(self): return self.non_gui_children.keys() diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json index ca388de60c..51ea5b3fe7 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json @@ -33,16 +33,6 @@ "key": "publish", "label": "Publish plugins", "children": [ - { - "type": "schema_template", - "name": "template_publish_plugin", - "template_data": [ - { - "key": "ValidateContainers", - "label": "ValidateContainers" - } - ] - }, { "type": "dict", "collapsible": true, @@ -108,6 +98,38 @@ } ] }, + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "key": "ValidateContainers", + "label": "ValidateContainers" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateNaming", + "label": "Validate naming of subsets and layers", 
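The `has_child_with_key` methods added across the entity classes let callers probe children without relying on exceptions; the breadcrumbs `get_valid_path` later in this patch uses exactly this walk. A condensed sketch of that traversal, assuming `root_entity` is any settings root such as `SystemSettings()`:

```python
def deepest_valid_entity(root_entity, path):
    """Return the entity at 'path', or the deepest existing ancestor.

    Mirrors how the breadcrumbs bar resolves partially valid paths
    like "general/environment/not_there".
    """
    entity = root_entity
    for key in path.split("/"):
        # Stop at the first key the current entity does not contain
        if not key or not entity.has_child_with_key(key):
            break
        entity = entity[key]
    return entity
```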
+ "children": [ + { + "type": "label", + "label": "Subset cannot contain invalid characters or extract to file would fail" + }, + { + "type": "text", + "key": "invalid_chars", + "label": "Regex pattern of invalid characters" + }, + { + "type": "text", + "key": "replace_char", + "label": "Replacement character" + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json index 26d3771d8a..bb71c9bde6 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json @@ -195,14 +195,48 @@ } }, { - "type": "dict-modifiable", + "type": "list", + "key": "extra_folders", + "label": "Extra work folders", "collapsible": true, - "key": "sw_folders", - "label": "Extra task folders", + "use_label_wrap": true, "is_group": true, "object_type": { - "type": "list", - "object_type": "text" + "type": "dict", + "children": [ + { + "type": "hosts-enum", + "key": "hosts", + "label": "Hosts", + "multiselection": true + }, + { + "type": "task-types-enum", + "key": "task_types", + "label": "Task types" + }, + { + "label": "Task names", + "key": "task_names", + "type": "list", + "object_type": "text" + }, + { + "type": "splitter" + }, + { + "type": "label", + "label": "Folders will be created in directory next to workfile. Items may contain nested directories (e.g. resources/images)." + }, + { + "key": "folders", + "label": "Folders", + "type": "list", + "highlight_content": true, + "collapsible": false, + "object_type": "text" + } + ] } } ] diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index e50357cc40..088d5d1f96 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -58,6 +58,11 @@ {"underscore": "_ (underscore)"}, {"dot": ". 
(dot)"} ] + }, + { + "type": "text", + "key": "default_render_image_folder", + "label": "Default render image folder" } ] }, diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_aftereffects.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_aftereffects.json index 6c36a9bb8a..334c9aa235 100644 --- a/openpype/settings/entities/schemas/system_schema/host_settings/schema_aftereffects.json +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_aftereffects.json @@ -36,6 +36,11 @@ "app_variant_label": "2021", "app_variant": "2021", "variant_skip_paths": ["use_python_2"] + }, + { + "app_variant_label": "2022", + "app_variant": "2022", + "variant_skip_paths": ["use_python_2"] } ] } diff --git a/openpype/settings/lib.py b/openpype/settings/lib.py index ff75562413..43489aecfd 100644 --- a/openpype/settings/lib.py +++ b/openpype/settings/lib.py @@ -933,8 +933,10 @@ def get_general_environments(): # - prevent to use `get_system_settings` where `get_default_settings` # is used default_values = load_openpype_default_settings() + system_settings = default_values["system_settings"] studio_overrides = get_studio_system_settings_overrides() - result = apply_overrides(default_values, studio_overrides) + + result = apply_overrides(system_settings, studio_overrides) environments = result["general"]["environment"] clear_metadata_from_settings(environments) diff --git a/openpype/style/style.css b/openpype/style/style.css index 9249db5f1e..3e95ece4b9 100644 --- a/openpype/style/style.css +++ b/openpype/style/style.css @@ -1065,16 +1065,45 @@ QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical { color: {color:settings:label-fg}; } #SettingsLabel:hover {color: {color:settings:label-fg-hover};} -#SettingsLabel[state="studio"] {color: {color:settings:studio-light};} -#SettingsLabel[state="studio"]:hover {color: {color:settings:studio-label-hover};} -#SettingsLabel[state="modified"] {color: {color:settings:modified-mid};} -#SettingsLabel[state="modified"]:hover {color: {color:settings:modified-light};} -#SettingsLabel[state="overriden-modified"] {color: {color:settings:modified-mid};} -#SettingsLabel[state="overriden-modified"]:hover {color: {color:settings:modified-light};} -#SettingsLabel[state="overriden"] {color: {color:settings:project-mid};} -#SettingsLabel[state="overriden"]:hover {color: {color:settings:project-light};} -#SettingsLabel[state="invalid"] {color:{color:settings:invalid-dark};} -#SettingsLabel[state="invalid"]:hover {color: {color:settings:invalid-dark};} + +#ExpandLabel { + font-weight: bold; + color: {color:settings:label-fg}; +} +#ExpandLabel:hover { + color: {color:settings:label-fg-hover}; +} + +#ExpandLabel[state="studio"], #SettingsLabel[state="studio"] { + color: {color:settings:studio-light}; +} +#ExpandLabel[state="studio"]:hover, #SettingsLabel[state="studio"]:hover { + color: {color:settings:studio-label-hover}; +} +#ExpandLabel[state="modified"], #SettingsLabel[state="modified"] { + color: {color:settings:modified-mid}; +} +#ExpandLabel[state="modified"]:hover, #SettingsLabel[state="modified"]:hover { + color: {color:settings:modified-light}; +} +#ExpandLabel[state="overriden-modified"], #SettingsLabel[state="overriden-modified"] { + color: {color:settings:modified-mid}; +} +#ExpandLabel[state="overriden-modified"]:hover, #SettingsLabel[state="overriden-modified"]:hover { + color: {color:settings:modified-light}; +} +#ExpandLabel[state="overriden"], #SettingsLabel[state="overriden"] { + color: 
{color:settings:project-mid}; +} +#ExpandLabel[state="overriden"]:hover, #SettingsLabel[state="overriden"]:hover { + color: {color:settings:project-light}; +} +#ExpandLabel[state="invalid"], #SettingsLabel[state="invalid"] { + color:{color:settings:invalid-dark}; +} +#ExpandLabel[state="invalid"]:hover, #SettingsLabel[state="invalid"]:hover { + color: {color:settings:invalid-dark}; +} /* TODO Replace these with explicit widget types if possible */ #SettingsMainWidget QWidget[input-state="modified"] { @@ -1106,14 +1135,6 @@ QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical { #DictKey[state="modified"] {border-color: {color:settings:modified-mid};} #DictKey[state="invalid"] {border-color: {color:settings:invalid-dark};} -#ExpandLabel { - font-weight: bold; - color: {color:settings:label-fg}; -} -#ExpandLabel:hover { - color: {color:settings:label-fg-hover}; -} - #ContentWidget { background-color: transparent; } diff --git a/openpype/tools/loader/app.py b/openpype/tools/loader/app.py index 583065633b..62bf5538de 100644 --- a/openpype/tools/loader/app.py +++ b/openpype/tools/loader/app.py @@ -363,7 +363,6 @@ class LoaderWindow(QtWidgets.QDialog): # Active must be in the selected rows otherwise we # assume it's not actually an "active" current index. - version_docs = None version_doc = None active = selection.currentIndex() rows = selection.selectedRows(column=active.column()) @@ -375,9 +374,10 @@ class LoaderWindow(QtWidgets.QDialog): not (item.get("isGroup") or item.get("isMerged")) ): version_doc = item["version_document"] + self._version_info_widget.set_version(version_doc) + version_docs = [] if rows: - version_docs = [] for index in rows: if not index or not index.isValid(): continue @@ -390,8 +390,6 @@ class LoaderWindow(QtWidgets.QDialog): else: version_docs.append(item["version_document"]) - self._version_info_widget.set_version(version_doc) - thumbnail_src_ids = [ version_doc["_id"] for version_doc in version_docs @@ -402,7 +400,7 @@ class LoaderWindow(QtWidgets.QDialog): self._thumbnail_widget.set_thumbnail(thumbnail_src_ids) if self._repres_widget is not None: - version_ids = [doc["_id"] for doc in version_docs or []] + version_ids = [doc["_id"] for doc in version_docs] self._repres_widget.set_version_ids(version_ids) # self._repres_widget.change_visibility("subset", len(rows) > 1) diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index 24ec9dcb0e..860c009f15 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -41,12 +41,15 @@ class MainThreadProcess(QtCore.QObject): This approach gives ability to update UI meanwhile plugin is in progress. 
""" + + timer_interval = 3 + def __init__(self): super(MainThreadProcess, self).__init__() self._items_to_process = collections.deque() timer = QtCore.QTimer() - timer.setInterval(50) + timer.setInterval(self.timer_interval) timer.timeout.connect(self._execute) diff --git a/openpype/tools/pyblish_pype/control.py b/openpype/tools/pyblish_pype/control.py index 2a9e67097e..d2b74e316a 100644 --- a/openpype/tools/pyblish_pype/control.py +++ b/openpype/tools/pyblish_pype/control.py @@ -9,6 +9,7 @@ import os import sys import inspect import logging +import collections from Qt import QtCore @@ -28,6 +29,74 @@ class IterationBreak(Exception): pass +class MainThreadItem: + """Callback with args and kwargs.""" + def __init__(self, callback, *args, **kwargs): + self.callback = callback + self.args = args + self.kwargs = kwargs + + def process(self): + self.callback(*self.args, **self.kwargs) + + +class MainThreadProcess(QtCore.QObject): + """Qt based main thread process executor. + + Has timer which controls each 50ms if there is new item to process. + + This approach gives ability to update UI meanwhile plugin is in progress. + """ + timer_interval = 3 + + def __init__(self): + super(MainThreadProcess, self).__init__() + self._items_to_process = collections.deque() + + timer = QtCore.QTimer() + timer.setInterval(self.timer_interval) + + timer.timeout.connect(self._execute) + + self._timer = timer + + def process(self, func, *args, **kwargs): + item = MainThreadItem(func, *args, **kwargs) + self.add_item(item) + + def add_item(self, item): + self._items_to_process.append(item) + + def _execute(self): + if not self._items_to_process: + return + + item = self._items_to_process.popleft() + item.process() + + def start(self): + if not self._timer.isActive(): + self._timer.start() + + def stop(self): + if self._timer.isActive(): + self._timer.stop() + + def clear(self): + if self._timer.isActive(): + self._timer.stop() + self._items_to_process = collections.deque() + + def stop_if_empty(self): + if self._timer.isActive(): + item = MainThreadItem(self._stop_if_empty) + self.add_item(item) + + def _stop_if_empty(self): + if not self._items_to_process: + self.stop() + + class Controller(QtCore.QObject): log = logging.getLogger("PyblishController") # Emitted when the GUI is about to start processing; @@ -71,6 +140,7 @@ class Controller(QtCore.QObject): self.plugins = {} self.optional_default = {} self.instance_toggled.connect(self._on_instance_toggled) + self._main_thread_processor = MainThreadProcess() def reset_variables(self): self.log.debug("Resetting pyblish context variables") @@ -169,7 +239,11 @@ class Controller(QtCore.QObject): def reset(self): """Discover plug-ins and run collection.""" + self._main_thread_processor.clear() + self._main_thread_processor.process(self._reset) + self._main_thread_processor.start() + def _reset(self): self.reset_context() self.reset_variables() @@ -210,21 +284,25 @@ class Controller(QtCore.QObject): if self.is_running: self.is_running = False self.was_finished.emit() + self._main_thread_processor.stop() def stop(self): self.log.debug("Stopping") self.stopped = True def act(self, plugin, action): - def on_next(): - result = pyblish.plugin.process( - plugin, self.context, None, action.id - ) - self.is_running = False - self.was_acted.emit(result) - self.is_running = True - util.defer(100, on_next) + item = MainThreadItem(self._process_action, plugin, action) + self._main_thread_processor.add_item(item) + self._main_thread_processor.start() + 
self._main_thread_processor.stop_if_empty() + + def _process_action(self, plugin, action): + result = pyblish.plugin.process( + plugin, self.context, None, action.id + ) + self.is_running = False + self.was_acted.emit(result) def emit_(self, signal, kwargs): pyblish.api.emit(signal, **kwargs) @@ -355,11 +433,13 @@ class Controller(QtCore.QObject): self.passed_group.emit(self.processing["next_group_order"]) - def iterate_and_process(self, on_finished=lambda: None): + def iterate_and_process(self, on_finished=None): """ Iterating inserted plugins with current context. Collectors do not contain instances, they are None when collecting! This process don't stop on one """ + self._main_thread_processor.start() + def on_next(): self.log.debug("Looking for next pair to process") try: @@ -371,13 +451,19 @@ class Controller(QtCore.QObject): self.log.debug("Iteration break was raised") self.is_running = False self.was_stopped.emit() + self._main_thread_processor.stop() return except StopIteration: self.log.debug("Iteration stop was raised") self.is_running = False # All pairs were processed successfully! - return util.defer(500, on_finished) + if on_finished is not None: + self._main_thread_processor.add_item( + MainThreadItem(on_finished) + ) + self._main_thread_processor.stop_if_empty() + return except Exception as exc: self.log.warning( @@ -385,12 +471,15 @@ class Controller(QtCore.QObject): exc_info=True ) exc_msg = str(exc) - return util.defer( - 500, lambda: on_unexpected_error(error=exc_msg) + self._main_thread_processor.add_item( + MainThreadItem(on_unexpected_error, error=exc_msg) ) + return self.about_to_process.emit(*self.current_pair) - util.defer(100, on_process) + self._main_thread_processor.add_item( + MainThreadItem(on_process) + ) def on_process(): try: @@ -411,11 +500,14 @@ class Controller(QtCore.QObject): exc_info=True ) exc_msg = str(exc) - return util.defer( - 500, lambda: on_unexpected_error(error=exc_msg) + self._main_thread_processor.add_item( + MainThreadItem(on_unexpected_error, error=exc_msg) ) + return - util.defer(10, on_next) + self._main_thread_processor.add_item( + MainThreadItem(on_next) + ) def on_unexpected_error(error): # TODO this should be handled much differently @@ -423,24 +515,42 @@ class Controller(QtCore.QObject): self.is_running = False self.was_stopped.emit() util.u_print(u"An unexpected error occurred:\n %s" % error) - return util.defer(500, on_finished) + if on_finished is not None: + self._main_thread_processor.add_item( + MainThreadItem(on_finished) + ) + self._main_thread_processor.stop_if_empty() self.is_running = True - util.defer(10, on_next) + self._main_thread_processor.add_item( + MainThreadItem(on_next) + ) def collect(self): """ Iterate and process Collect plugins - load_plugins method is launched again when finished """ - self.iterate_and_process() + self._main_thread_processor.process(self._start_collect) + self._main_thread_processor.start() def validate(self): """ Process plugins to validations_order value.""" - self.processing["stop_on_validation"] = True - self.iterate_and_process() + self._main_thread_processor.process(self._start_validate) + self._main_thread_processor.start() def publish(self): """ Iterate and process all remaining plugins.""" + self._main_thread_processor.process(self._start_publish) + self._main_thread_processor.start() + + def _start_collect(self): + self.iterate_and_process() + + def _start_validate(self): + self.processing["stop_on_validation"] = True + self.iterate_and_process() + + def _start_publish(self): 
self.processing["stop_on_validation"] = False self.iterate_and_process(self.on_published) diff --git a/openpype/tools/pyblish_pype/window.py b/openpype/tools/pyblish_pype/window.py index 536f793216..fdd2d80e23 100644 --- a/openpype/tools/pyblish_pype/window.py +++ b/openpype/tools/pyblish_pype/window.py @@ -1148,7 +1148,7 @@ class Window(QtWidgets.QDialog): self.comment_box.placeholder.setVisible(False) self.comment_box.placeholder.setVisible(True) # Launch controller reset - util.defer(500, self.controller.reset) + self.controller.reset() def validate(self): self.info(self.tr("Preparing validate..")) @@ -1159,7 +1159,7 @@ class Window(QtWidgets.QDialog): self.button_suspend_logs.setEnabled(False) - util.defer(5, self.controller.validate) + self.controller.validate() def publish(self): self.info(self.tr("Preparing publish..")) @@ -1170,7 +1170,7 @@ class Window(QtWidgets.QDialog): self.button_suspend_logs.setEnabled(False) - util.defer(5, self.controller.publish) + self.controller.publish() def act(self, plugin_item, action): self.info("%s %s.." % (self.tr("Preparing"), action)) @@ -1187,9 +1187,7 @@ class Window(QtWidgets.QDialog): ) # Give Qt time to draw - util.defer(100, lambda: self.controller.act( - plugin_item.plugin, action - )) + self.controller.act(plugin_item.plugin, action) self.info(self.tr("Action prepared.")) @@ -1267,7 +1265,7 @@ class Window(QtWidgets.QDialog): self.info(self.tr("..as soon as processing is finished..")) self.controller.stop() self.finished.connect(self.close) - util.defer(2000, on_problem) + util.defer(200, on_problem) return event.ignore() self.state["is_closing"] = True diff --git a/openpype/tools/settings/settings/base.py b/openpype/tools/settings/settings/base.py index 48c2b42ebd..e271585852 100644 --- a/openpype/tools/settings/settings/base.py +++ b/openpype/tools/settings/settings/base.py @@ -1,9 +1,15 @@ +import sys import json +import traceback from Qt import QtWidgets, QtGui, QtCore + +from openpype.settings.entities import ProjectSettings from openpype.tools.settings import CHILD_OFFSET + from .widgets import ExpandingWidget from .lib import create_deffered_value_change_timer +from .constants import DEFAULT_PROJECT_LABEL class BaseWidget(QtWidgets.QWidget): @@ -119,9 +125,10 @@ class BaseWidget(QtWidgets.QWidget): return def discard_changes(): - self.ignore_input_changes.set_ignore(True) - self.entity.discard_changes() - self.ignore_input_changes.set_ignore(False) + with self.category_widget.working_state_context(): + self.ignore_input_changes.set_ignore(True) + self.entity.discard_changes() + self.ignore_input_changes.set_ignore(False) action = QtWidgets.QAction("Discard changes") actions_mapping[action] = discard_changes @@ -133,8 +140,11 @@ class BaseWidget(QtWidgets.QWidget): if not self.entity.can_trigger_add_to_studio_default: return + def add_to_studio_default(): + with self.category_widget.working_state_context(): + self.entity.add_to_studio_default() action = QtWidgets.QAction("Add to studio default") - actions_mapping[action] = self.entity.add_to_studio_default + actions_mapping[action] = add_to_studio_default menu.addAction(action) def _remove_from_studio_default_action(self, menu, actions_mapping): @@ -142,9 +152,10 @@ class BaseWidget(QtWidgets.QWidget): return def remove_from_studio_default(): - self.ignore_input_changes.set_ignore(True) - self.entity.remove_from_studio_default() - self.ignore_input_changes.set_ignore(False) + with self.category_widget.working_state_context(): + self.ignore_input_changes.set_ignore(True) + 
self.entity.remove_from_studio_default() + self.ignore_input_changes.set_ignore(False) action = QtWidgets.QAction("Remove from studio default") actions_mapping[action] = remove_from_studio_default menu.addAction(action) @@ -153,8 +164,12 @@ if not self.entity.can_trigger_add_to_project_override: return + def add_to_project_override(): + with self.category_widget.working_state_context(): + self.entity.add_to_project_override() + action = QtWidgets.QAction("Add to project override") - actions_mapping[action] = self.entity.add_to_project_override + actions_mapping[action] = add_to_project_override menu.addAction(action) def _remove_from_project_override_action(self, menu, actions_mapping): @@ -162,9 +177,11 @@ return def remove_from_project_override(): - self.ignore_input_changes.set_ignore(True) - self.entity.remove_from_project_override() - self.ignore_input_changes.set_ignore(False) + with self.category_widget.working_state_context(): + self.ignore_input_changes.set_ignore(True) + self.entity.remove_from_project_override() + self.ignore_input_changes.set_ignore(False) + action = QtWidgets.QAction("Remove from project override") actions_mapping[action] = remove_from_project_override menu.addAction(action) @@ -266,14 +283,16 @@ # Simple paste value method def paste_value(): - _set_entity_value(self.entity, value) + with self.category_widget.working_state_context(): + _set_entity_value(self.entity, value) action = QtWidgets.QAction("Paste", menu) output.append((action, paste_value)) # Paste value to matching entity def paste_value_to_path(): - _set_entity_value(matching_entity, value) + with self.category_widget.working_state_context(): + _set_entity_value(matching_entity, value) if matching_entity is not None: action = QtWidgets.QAction("Paste to same place", menu) @@ -281,6 +300,68 @@ return output + def _apply_values_from_project_action(self, menu, actions_mapping): + for attr_name in ("project_name", "get_project_names"): + if not hasattr(self.category_widget, attr_name): + return + + if self.entity.is_dynamic_item or self.entity.is_in_dynamic_item: + return + + current_project_name = self.category_widget.project_name + project_names = [] + for project_name in self.category_widget.get_project_names(): + if project_name != current_project_name: + project_names.append(project_name) + + if not project_names: + return + + submenu = QtWidgets.QMenu("Apply values from", menu) + + for project_name in project_names: + if project_name is None: + project_name = DEFAULT_PROJECT_LABEL + + action = QtWidgets.QAction(project_name) + submenu.addAction(action) + actions_mapping[action] = ( + lambda project_name=project_name: + self._apply_values_from_project(project_name) + ) + menu.addMenu(submenu) + + def _apply_values_from_project(self, project_name): + with self.category_widget.working_state_context(): + try: + path_keys = [ + item + for item in self.entity.path.split("/") + if item + ] + entity = ProjectSettings(project_name) + for key in path_keys: + entity = entity[key] + self.entity.set(entity.value) + + except Exception: + if project_name is None: + project_name = DEFAULT_PROJECT_LABEL + + # TODO better message + title = "Applying values failed" + msg = "Applying values from project \"{}\" failed.".format( + project_name + ) + detail_msg = "".join( + traceback.format_exception(*sys.exc_info()) + ) + dialog = QtWidgets.QMessageBox(self) + dialog.setWindowTitle(title) +
dialog.setIcon(QtWidgets.QMessageBox.Warning) + dialog.setText(msg) + dialog.setDetailedText(detail_msg) + dialog.exec_() + def show_actions_menu(self, event=None): if event and event.button() != QtCore.Qt.RightButton: return @@ -299,6 +380,7 @@ class BaseWidget(QtWidgets.QWidget): self._remove_from_studio_default_action(menu, actions_mapping) self._add_to_project_override_action(menu, actions_mapping) self._remove_from_project_override_action(menu, actions_mapping) + self._apply_values_from_project_action(menu, actions_mapping) ui_actions = [] ui_actions.extend(self._copy_value_actions(menu)) @@ -481,7 +563,9 @@ class GUIWidget(BaseWidget): def _create_label_ui(self): label = self.entity["label"] label_widget = QtWidgets.QLabel(label, self) + label_widget.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction) label_widget.setObjectName("SettingsLabel") + label_widget.linkActivated.connect(self._on_link_activate) layout = QtWidgets.QHBoxLayout(self) layout.setContentsMargins(0, 5, 0, 5) @@ -497,6 +581,14 @@ class GUIWidget(BaseWidget): layout.setContentsMargins(5, 5, 5, 5) layout.addWidget(splitter_item) + def _on_link_activate(self, url): + if not url.startswith("settings://"): + QtGui.QDesktopServices.openUrl(url) + return + + path = url.replace("settings://", "") + self.category_widget.go_to_fullpath(path) + def set_entity_value(self): pass diff --git a/openpype/tools/settings/settings/breadcrumbs_widget.py b/openpype/tools/settings/settings/breadcrumbs_widget.py index d25cbdc8cb..7524bc61f0 100644 --- a/openpype/tools/settings/settings/breadcrumbs_widget.py +++ b/openpype/tools/settings/settings/breadcrumbs_widget.py @@ -71,17 +71,35 @@ class SettingsBreadcrumbs(BreadcrumbsModel): return True return False + def get_valid_path(self, path): + if not path: + return "" + + path_items = path.split("/") + new_path_items = [] + entity = self.entity + for item in path_items: + if not entity.has_child_with_key(item): + break + + new_path_items.append(item) + entity = entity[item] + + return "/".join(new_path_items) + def is_valid_path(self, path): if not path: return True path_items = path.split("/") - try: - entity = self.entity - for item in path_items: - entity = entity[item] - except Exception: - return False + + entity = self.entity + for item in path_items: + if not entity.has_child_with_key(item): + return False + + entity = entity[item] + return True @@ -436,6 +454,7 @@ class BreadcrumbsAddressBar(QtWidgets.QFrame): self.change_path(path) def change_path(self, path): + path = self._model.get_valid_path(path) if self._model and not self._model.is_valid_path(path): self._show_address_field() else: diff --git a/openpype/tools/settings/settings/categories.py b/openpype/tools/settings/settings/categories.py index af7e0bd742..adbde00bf1 100644 --- a/openpype/tools/settings/settings/categories.py +++ b/openpype/tools/settings/settings/categories.py @@ -1,6 +1,7 @@ import os import sys import traceback +import contextlib from enum import Enum from Qt import QtWidgets, QtCore, QtGui @@ -85,6 +86,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget): state_changed = QtCore.Signal() saved = QtCore.Signal(QtWidgets.QWidget) restart_required_trigger = QtCore.Signal() + full_path_requested = QtCore.Signal(str, str) def __init__(self, user_role, parent=None): super(SettingsCategoryWidget, self).__init__(parent) @@ -274,6 +276,37 @@ class SettingsCategoryWidget(QtWidgets.QWidget): # Scroll to widget self.scroll_widget.ensureWidgetVisible(widget) + def go_to_fullpath(self, full_path): + """Full path 
of a settings entity, which can lead to a different category. + + Args: + full_path (str): Full path to settings entity. It is expected that + the path starts with the category name ("system_settings" etc.). + """ + if not full_path: + return + items = full_path.split("/") + category = items[0] + path = "" + if len(items) > 1: + path = "/".join(items[1:]) + self.full_path_requested.emit(category, path) + + def contain_category_key(self, category): + """Parent widget asks if the category of a full path leads to this widget. + + Args: + category (str): The category name. + + Returns: + bool: Whether the passed category leads to this widget. + """ + return False + + def set_category_path(self, category, path): + """Change the path of the widget based on a category full path.""" + pass + def set_path(self, path): self.breadcrumbs_widget.set_path(path) @@ -316,6 +349,12 @@ ) self.content_layout.addWidget(widget, 0) + @contextlib.contextmanager + def working_state_context(self): + self.set_state(CategoryState.Working) + yield + self.set_state(CategoryState.Idle) + def save(self): if not self.items_are_valid(): return @@ -555,6 +594,14 @@ class SystemWidget(SettingsCategoryWidget): + def contain_category_key(self, category): + if category == "system_settings": + return True + return False + + def set_category_path(self, category, path): + self.breadcrumbs_widget.change_path(path) + def _create_root_entity(self): self.entity = SystemSettings(set_studio_state=False) self.entity.on_change_callbacks.append(self._on_entity_change) @@ -591,6 +638,21 @@ class ProjectWidget(SettingsCategoryWidget): + def contain_category_key(self, category): + if category in ("project_settings", "project_anatomy"): + return True + return False + + def set_category_path(self, category, path): + if path: + path_items = path.split("/") + if path_items[0] not in ("project_settings", "project_anatomy"): + path = "/".join([category, path]) + else: + path = category + + self.breadcrumbs_widget.change_path(path) + def initialize_attributes(self): self.project_name = None @@ -606,6 +668,14 @@ self.project_list_widget = project_list_widget + def get_project_names(self): + if ( + self.modify_defaults_checkbox + and self.modify_defaults_checkbox.isChecked() + ): + return [] + return self.project_list_widget.get_project_names() + def on_saved(self, saved_tab_widget): """Callback on any tab widget save.
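Note that `working_state_context` above restores the idle state with a bare `yield`, so an exception inside the wrapped block would leave the widget stuck in the working state. A sketch of the same helper hardened with `try/finally` (standalone variant; the import path for `CategoryState` is assumed):

```python
import contextlib

# Assumed import path; CategoryState is the enum passed to
# SettingsCategoryWidget.set_state in categories.py
from openpype.tools.settings.settings.categories import CategoryState


@contextlib.contextmanager
def working_state(widget):
    """Busy-state guard around long-running settings operations."""
    widget.set_state(CategoryState.Working)
    try:
        yield
    finally:
        # Restore the idle state even if the wrapped block raised
        widget.set_state(CategoryState.Idle)
```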
diff --git a/openpype/tools/settings/settings/widgets.py b/openpype/tools/settings/settings/widgets.py index cc6e396db0..b5c08ef79b 100644 --- a/openpype/tools/settings/settings/widgets.py +++ b/openpype/tools/settings/settings/widgets.py @@ -970,6 +970,13 @@ class ProjectListWidget(QtWidgets.QWidget): index, QtCore.QItemSelectionModel.SelectionFlag.SelectCurrent ) + def get_project_names(self): + output = [] + for row in range(self.project_proxy.rowCount()): + index = self.project_proxy.index(row, 0) + output.append(index.data(PROJECT_NAME_ROLE)) + return output + def refresh(self): selected_project = None for index in self.project_list.selectedIndexes(): diff --git a/openpype/tools/settings/settings/window.py b/openpype/tools/settings/settings/window.py index fd0cd1d7cd..c376e5e91e 100644 --- a/openpype/tools/settings/settings/window.py +++ b/openpype/tools/settings/settings/window.py @@ -63,7 +63,9 @@ class MainWidget(QtWidgets.QWidget): tab_widget.restart_required_trigger.connect( self._on_restart_required ) + tab_widget.full_path_requested.connect(self._on_full_path_request) + self._header_tab_widget = header_tab_widget self.tab_widgets = tab_widgets def _on_tab_save(self, source_widget): @@ -90,6 +92,14 @@ class MainWidget(QtWidgets.QWidget): if app: app.processEvents() + def _on_full_path_request(self, category, path): + for tab_widget in self.tab_widgets: + if tab_widget.contain_category_key(category): + idx = self._header_tab_widget.indexOf(tab_widget) + self._header_tab_widget.setCurrentIndex(idx) + tab_widget.set_category_path(category, path) + break + def showEvent(self, event): super(MainWidget, self).showEvent(event) if self._reset_on_show: diff --git a/openpype/tools/utils/assets_widget.py b/openpype/tools/utils/assets_widget.py index f310aafe89..1495586b04 100644 --- a/openpype/tools/utils/assets_widget.py +++ b/openpype/tools/utils/assets_widget.py @@ -306,6 +306,8 @@ class AssetModel(QtGui.QStandardItemModel): self._items_with_color_by_id = {} self._items_by_asset_id = {} + self._last_project_name = None + @property def refreshing(self): return self._refreshing @@ -347,7 +349,7 @@ class AssetModel(QtGui.QStandardItemModel): return self.get_indexes_by_asset_ids(asset_ids) - def refresh(self, force=False, clear=False): + def refresh(self, force=False): """Refresh the data for the model. 
Args: @@ -360,7 +362,13 @@ return self.stop_refresh() - if clear: + project_name = self.dbcon.Session.get("AVALON_PROJECT") + clear_model = False + if project_name != self._last_project_name: + clear_model = True + self._last_project_name = project_name + + if clear_model: self._clear_items() # Fetch documents from mongo @@ -401,11 +409,18 @@ self._clear_items() return + self._fill_assets(self._doc_payload) + + self.refreshed.emit(bool(self._items_by_asset_id)) + + self._stop_fetch_thread() + + def _fill_assets(self, asset_docs): # Collect asset documents as needed asset_ids = set() asset_docs_by_id = {} asset_ids_by_parents = collections.defaultdict(set) - for asset_doc in self._doc_payload: + for asset_doc in asset_docs: asset_id = asset_doc["_id"] asset_data = asset_doc.get("data") or {} parent_id = asset_data.get("visualParent") @@ -511,10 +526,6 @@ except Exception: pass - self.refreshed.emit(bool(self._items_by_asset_id)) - - self._stop_fetch_thread() - def _threaded_fetch(self): asset_docs = self._fetch_asset_docs() if not self._refreshing: @@ -582,11 +593,8 @@ self.dbcon = dbcon # Tree View - model = AssetModel(dbcon=self.dbcon, parent=self) - proxy = RecursiveSortFilterProxyModel() - proxy.setSourceModel(model) - proxy.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) - proxy.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + model = self._create_source_model() + proxy = self._create_proxy_model(model) view = AssetsView(self) view.setModel(proxy) @@ -628,7 +636,6 @@ selection_model.selectionChanged.connect(self._on_selection_change) refresh_btn.clicked.connect(self.refresh) current_asset_btn.clicked.connect(self.set_current_session_asset) - model.refreshed.connect(self._on_model_refresh) view.doubleClicked.connect(self.double_clicked) self._current_asset_btn = current_asset_btn @@ -639,17 +646,24 @@ self.model_selection = {} + def _create_source_model(self): + model = AssetModel(dbcon=self.dbcon, parent=self) + model.refreshed.connect(self._on_model_refresh) + return model + + def _create_proxy_model(self, source_model): + proxy = RecursiveSortFilterProxyModel() + proxy.setSourceModel(source_model) + proxy.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) + proxy.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) + return proxy + @property def refreshing(self): return self._model.refreshing def refresh(self): - project_name = self.dbcon.Session.get("AVALON_PROJECT") - clear_model = False - if project_name != self._last_project_name: - clear_model = True - self._last_project_name = project_name - self._refresh_model(clear_model) + self._refresh_model() def stop_refresh(self): self._model.stop_refresh() @@ -691,18 +705,24 @@ self._proxy.setFilterFixedString(new_text) def _on_model_refresh(self, has_item): + """This method should be triggered on model refresh. + + The default implementation registers this callback in + '_create_source_model', so if you're modifying the model keep in mind + that this method should be called when the refresh is done.
+ """ self._proxy.sort(0) self._set_loading_state(loading=False, empty=not has_item) self.refreshed.emit() - def _refresh_model(self, clear=False): + def _refresh_model(self): # Store selection self._set_loading_state(loading=True, empty=True) # Trigger signal before refresh is called self.refresh_triggered.emit() # Refresh model - self._model.refresh(clear=clear) + self._model.refresh() def _set_loading_state(self, loading, empty): self._view.set_loading_state(loading, empty) diff --git a/openpype/tools/utils/tasks_widget.py b/openpype/tools/utils/tasks_widget.py index 419e77c780..6e6cd17ffd 100644 --- a/openpype/tools/utils/tasks_widget.py +++ b/openpype/tools/utils/tasks_widget.py @@ -194,6 +194,8 @@ class TasksWidget(QtWidgets.QWidget): task_changed = QtCore.Signal() def __init__(self, dbcon, parent=None): + self._dbcon = dbcon + super(TasksWidget, self).__init__(parent) tasks_view = DeselectableTreeView(self) @@ -204,9 +206,8 @@ class TasksWidget(QtWidgets.QWidget): header_view = tasks_view.header() header_view.setSortIndicator(0, QtCore.Qt.AscendingOrder) - tasks_model = TasksModel(dbcon) - tasks_proxy = TasksProxyModel() - tasks_proxy.setSourceModel(tasks_model) + tasks_model = self._create_source_model() + tasks_proxy = self._create_proxy_model(tasks_model) tasks_view.setModel(tasks_proxy) layout = QtWidgets.QVBoxLayout(self) @@ -222,6 +223,19 @@ class TasksWidget(QtWidgets.QWidget): self._last_selected_task_name = None + def _create_source_model(self): + """Create source model of tasks widget. + + Model must have available 'refresh' method and 'set_asset_id' to change + context of asset. + """ + return TasksModel(self._dbcon) + + def _create_proxy_model(self, source_model): + proxy = TasksProxyModel() + proxy.setSourceModel(source_model) + return proxy + def refresh(self): self._tasks_model.refresh() diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index 009c1dc506..3bfa092a21 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -12,22 +12,17 @@ class PlaceholderLineEdit(QtWidgets.QLineEdit): """Set placeholder color of QLineEdit in Qt 5.12 and higher.""" def __init__(self, *args, **kwargs): super(PlaceholderLineEdit, self).__init__(*args, **kwargs) - self._first_show = True - - def showEvent(self, event): - super(PlaceholderLineEdit, self).showEvent(event) - if self._first_show: - self._first_show = False + # Change placeholder palette color + if hasattr(QtGui.QPalette, "PlaceholderText"): filter_palette = self.palette() - if hasattr(filter_palette, "PlaceholderText"): - color_obj = get_objected_colors()["font"] - color = color_obj.get_qcolor() - color.setAlpha(67) - filter_palette.setColor( - filter_palette.PlaceholderText, - color - ) - self.setPalette(filter_palette) + color_obj = get_objected_colors()["font"] + color = color_obj.get_qcolor() + color.setAlpha(67) + filter_palette.setColor( + QtGui.QPalette.PlaceholderText, + color + ) + self.setPalette(filter_palette) class ImageButton(QtWidgets.QPushButton): diff --git a/openpype/tools/workfiles/app.py b/openpype/tools/workfiles/app.py index d33294e4ad..0615ec0aca 100644 --- a/openpype/tools/workfiles/app.py +++ b/openpype/tools/workfiles/app.py @@ -12,7 +12,6 @@ from avalon import io, api, pipeline from openpype import style from openpype.tools.utils.lib import ( - schedule, qt_app_context ) from openpype.tools.utils import PlaceholderLineEdit @@ -25,7 +24,8 @@ from openpype.lib import ( get_workfile_doc, create_workfile_doc, save_workfile_data_to_doc, - 
get_workfile_template_key + get_workfile_template_key, + create_workdir_extra_folders ) from .model import FilesModel @@ -69,12 +69,16 @@ class NameWindow(QtWidgets.QDialog): "config.tasks": True, } ) + asset_doc = io.find_one( { "type": "asset", "name": asset_name }, - {"data.tasks": True} + { + "data.tasks": True, + "data.parents": True + } ) task_type = asset_doc["data"]["tasks"].get(task_name, {}).get("type") @@ -82,6 +86,11 @@ class NameWindow(QtWidgets.QDialog): project_task_types = project_doc["config"]["tasks"] task_short = project_task_types.get(task_type, {}).get("short_name") + asset_parents = asset_doc["data"]["parents"] + parent_name = project_doc["name"] + if asset_parents: + parent_name = asset_parents[-1] + self.data = { "project": { "name": project_doc["name"], @@ -93,6 +102,7 @@ class NameWindow(QtWidgets.QDialog): "type": task_type, "short": task_short, }, + "parent": parent_name, "version": 1, "user": getpass.getuser(), "comment": "", @@ -662,7 +672,13 @@ class FilesWidget(QtWidgets.QWidget): self.set_asset_task( self._asset_id, self._task_name, self._task_type ) - + create_workdir_extra_folders( + self.root, + api.Session["AVALON_APP"], + self._task_type, + self._task_name, + api.Session["AVALON_PROJECT"] + ) pipeline.emit("after.workfile.save", [file_path]) self.workfile_created.emit(file_path) @@ -719,7 +735,7 @@ class FilesWidget(QtWidgets.QWidget): self.files_model.refresh() if self.auto_select_latest_modified: - schedule(self._select_last_modified_file, 100) + self._select_last_modified_file() def on_context_menu(self, point): index = self.files_view.indexAt(point) @@ -924,8 +940,8 @@ class Window(QtWidgets.QMainWindow): # Connect signals set_context_timer.timeout.connect(self._on_context_set_timeout) - assets_widget.selection_changed.connect(self.on_asset_changed) - tasks_widget.task_changed.connect(self.on_task_changed) + assets_widget.selection_changed.connect(self._on_asset_changed) + tasks_widget.task_changed.connect(self._on_task_changed) files_widget.file_selected.connect(self.on_file_select) files_widget.workfile_created.connect(self.on_workfile_create) files_widget.file_opened.connect(self._on_file_opened) @@ -970,13 +986,6 @@ class Window(QtWidgets.QMainWindow): def set_save_enabled(self, enabled): self.files_widget.btn_save.setEnabled(enabled) - def on_task_changed(self): - # Since we query the disk give it slightly more delay - schedule(self._on_task_changed, 100, channel="mongo") - - def on_asset_changed(self): - schedule(self._on_asset_changed, 50, channel="mongo") - def on_file_select(self, filepath): asset_id = self.assets_widget.get_selected_asset_id() task_name = self.tasks_widget.get_selected_task_name() diff --git a/openpype/version.py b/openpype/version.py index 8909c5edac..8fac77bcdf 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.7.0-nightly.6" +__version__ = "3.7.0" diff --git a/pyproject.toml b/pyproject.toml index 0b2176d277..dd1f5c90b6 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.7.0-nightly.6" # OpenPype +version = "3.7.0" # OpenPype description = "Open VFX and Animation pipeline with support." 
authors = ["OpenPype Team "] license = "MIT License" diff --git a/repos/avalon-core b/repos/avalon-core index 85c656fcf9..ffe9e910f1 160000 --- a/repos/avalon-core +++ b/repos/avalon-core @@ -1 +1 @@ -Subproject commit 85c656fcf9beb06ab92d3d6ce47f6472cf88df54 +Subproject commit ffe9e910f1f382e222d457d8e4a8426c41ed43ae diff --git a/setup.py b/setup.py index cd3ed4f82c..41ed066693 100644 --- a/setup.py +++ b/setup.py @@ -3,6 +3,7 @@ import os import sys import re +import platform from pathlib import Path from cx_Freeze import setup, Executable @@ -18,8 +19,13 @@ with open(openpype_root / "openpype" / "version.py") as fp: version_match = re.search(r"(\d+\.\d+.\d+).*", version["__version__"]) __version__ = version_match.group(1) +low_platform_name = platform.system().lower() +IS_WINDOWS = low_platform_name == "windows" +IS_LINUX = low_platform_name == "linux" +IS_MACOS = low_platform_name == "darwin" + base = None -if sys.platform == "win32": +if IS_WINDOWS: base = "Win32GUI" # ----------------------------------------------------------------------- @@ -48,7 +54,8 @@ install_requires = [ "filecmp", "dns", # Python defaults (cx_Freeze skip them by default) - "dbm" + "dbm", + "sqlite3" ] includes = [] @@ -71,7 +78,7 @@ include_files = [ "README.md" ] -if sys.platform == "win32": +if IS_WINDOWS: install_requires.extend([ # `pywin32` packages "win32ctypes", @@ -103,6 +110,15 @@ executables = [ Executable("start.py", base=None, target_name="openpype_console", icon=icon_path.as_posix()) ] +if IS_LINUX: + executables.append( + Executable( + "app_launcher.py", + base=None, + target_name="app_launcher", + icon=icon_path.as_posix() + ) + ) setup( name="OpenPype", diff --git a/start.py b/start.py index 5a5039cd5c..b6c14526f9 100644 --- a/start.py +++ b/start.py @@ -340,13 +340,14 @@ def set_avalon_environments(): os.environ.get("AVALON_MONGO") or os.environ["OPENPYPE_MONGO"] ) + avalon_db = os.environ.get("AVALON_DB") or "avalon" # for tests os.environ.update({ # Mongo url (use same as OpenPype has) "AVALON_MONGO": avalon_mongo_url, "AVALON_SCHEMA": schema_path, # Mongo DB name where avalon docs are stored - "AVALON_DB": "avalon", + "AVALON_DB": avalon_db, # Name of config "AVALON_CONFIG": "openpype", "AVALON_LABEL": "OpenPype" @@ -918,7 +919,9 @@ def boot(): sys.exit(1) os.environ["OPENPYPE_MONGO"] = openpype_mongo - os.environ["OPENPYPE_DATABASE_NAME"] = "openpype" # name of Pype database + # name of Pype database + os.environ["OPENPYPE_DATABASE_NAME"] = \ + os.environ.get("OPENPYPE_DATABASE_NAME") or "openpype" _print(">>> run disk mapping command ...") run_disk_mapping_commands(openpype_mongo) @@ -1118,15 +1121,15 @@ def get_info(use_staging=None) -> list: # Reinitialize PypeLogger.initialize() - log_components = PypeLogger.log_mongo_url_components - if log_components["host"]: - inf.append(("Logging to MongoDB", log_components["host"])) - inf.append((" - port", log_components["port"] or "")) + mongo_components = get_default_components() + if mongo_components["host"]: + inf.append(("Logging to MongoDB", mongo_components["host"])) + inf.append((" - port", mongo_components["port"] or "")) inf.append((" - database", PypeLogger.log_database_name)) inf.append((" - collection", PypeLogger.log_collection_name)) - inf.append((" - user", log_components["username"] or "")) - if log_components["auth_db"]: - inf.append((" - auth source", log_components["auth_db"])) + inf.append((" - user", mongo_components["username"] or "")) + if mongo_components["auth_db"]: + inf.append((" - auth source", 
mongo_components["auth_db"])) maximum = max(len(i[0]) for i in inf) formatted = [] diff --git a/tests/README.md b/tests/README.md index 6317b2ab3c..d0578f8059 100644 --- a/tests/README.md +++ b/tests/README.md @@ -14,12 +14,12 @@ How to run: ---------- - a single test class can be run by PyCharm and its pytest runner directly - OR -- use Openpype command 'runtests' from command line --- `${OPENPYPE_ROOT}/start.py runtests` +- use OpenPype command 'runtests' from command line (`.venv` in ${OPENPYPE_ROOT} must be activated to use the configured Python!) +-- `python ${OPENPYPE_ROOT}/start.py runtests` By default, this command will run all tests in ${OPENPYPE_ROOT}/tests. A specific location can be provided to this command as an argument, either as an absolute path or a path relative to ${OPENPYPE_ROOT}. -(eg. `${OPENPYPE_ROOT}/start.py runtests ../tests/integration`) will trigger only tests in `integration` folder. +(eg. `python ${OPENPYPE_ROOT}/start.py runtests ../tests/integration`) will trigger only the tests in the `integration` folder. See `${OPENPYPE_ROOT}/cli.py:runtests` for other arguments. diff --git a/tests/integration/README.md b/tests/integration/README.md index 81c07ec50c..0b6a1804ae 100644 --- a/tests/integration/README.md +++ b/tests/integration/README.md @@ -5,33 +5,64 @@ Contains end-to-end tests for automatic testing of OP. Runs headless publishes on all hosts to cover basic publish use cases automatically and limit regression issues. +How to run +---------- +- activate `{OPENPYPE_ROOT}/.venv` +- run in cmd +`{OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests {OPENPYPE_ROOT}/tests/integration` + - add `hosts/APP_NAME` after the integration part to limit the run to a specific app (eg. `{OPENPYPE_ROOT}/tests/integration/hosts/maya`) + +OR use the built executables +`openpype_console runtests {ABS_PATH}/tests/integration` + +How to check logs/errors from app +-------------------------------- +Keep PERSIST set to True in the class and check the `test_openpype.logs` collection. + How to create test for publishing from host ------------------------------------------ -- Extend PublishTest +- Extend PublishTest in `tests/lib/testing_classes.py` - Use `resources\test_data.zip` skeleton file as a template for testing input data - Put workfile into `test_data.zip/input/workfile` - If you require DB dumps other than the base ones, provide them in `test_data.zip/input/dumps` -- (Check the commented code in `db_handler.py` for how to dump a specific DB. Currently all collections will be dumped.) - Implement `last_workfile_path` - `startup_scripts` - must point the host to the startup script saved into `test_data.zip/input/startup` - -- Script must contain something like + -- Script must contain something like (pseudocode) ``` import openpype +import pyblish.util from avalon import api, HOST + +from openpype.api import Logger + +log = Logger().get_logger(__name__) api.install(HOST) -pyblish.util.publish() +log_lines = [] +for result in pyblish.util.publish_iter(): + for record in result["records"]: # for logging to test_openpype DB + log_lines.append("{}: {}".format( + result["plugin"].label, record.msg)) + + if result["error"]: + err_fmt = "Failed {plugin.__name__}: {error} -- {error.traceback}" + log.error(err_fmt.format(**result)) EXIT_APP (command to exit host) ``` (Install and publish methods must be triggered only AFTER the host app is fully initialized!)
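A more concrete sketch of such a startup script, assuming Maya as the tested host (the `avalon.maya` import, the deferred execution and the `maya.cmds.quit` exit call are Maya-specific assumptions; other hosts need their own equivalents):

```python
# userSetup.py - hypothetical Maya startup script for a headless publish
# test. Assumes avalon-core, pyblish and openpype are on PYTHONPATH.
import pyblish.util

from avalon import api
from avalon import maya as host  # assumption: Maya is the tested host

from openpype.api import Logger

log = Logger().get_logger(__name__)


def _install_and_publish():
    # Install must run only AFTER the host is fully initialized
    api.install(host)

    log_lines = []
    for result in pyblish.util.publish_iter():
        for record in result["records"]:  # collected for test_openpype DB
            log_lines.append(
                "{}: {}".format(result["plugin"].label, record.msg))

        if result["error"]:
            err_fmt = "Failed {plugin.__name__}: {error} -- {error.traceback}"
            log.error(err_fmt.format(**result))

    # Maya-specific way to exit the host once publishing has finished
    import maya.cmds
    maya.cmds.quit(force=True)


# executeDeferred postpones the run until Maya finishes initializing
import maya.utils
maya.utils.executeDeferred(_install_and_publish)
```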
-- Zip `test_data.zip`, named it with descriptive name, upload it to Google Drive, right click - `Get link`, copy hash id +- If you would like to add any command line arguments for your host app, add them to `test_data.zip/input/app_args/app_args.json` (as a JSON list) +- Provide any required environment variables in `test_data.zip/input/env_vars/env_vars.json` (as a JSON dictionary) +- Zip `test_data.zip`, name it with a descriptive name, upload it to Google Drive, right click - `Get link`, copy the hash id (file must be accessible to anyone with a link!) - Put this hash id and zip file name into TEST_FILES [(HASH_ID, FILE_NAME, MD5_OPTIONAL)]. If you want to check the MD5 of the downloaded file, provide the md5 value of the zipped file. - Implement any assert checks you need in the extended class - Run the test class manually (via PyCharm or a pytest runner (TODO)) -- If you want test to compare expected files to published one, set PERSIST to True, run test manually +- If you want the test to visually compare expected files to published ones, set PERSIST to True and run the test manually -- Locate the temporary `publish` subfolder of the temporary folder (found in the debugging console log) -- Copy the whole folder content into a .zip file in the `expected` subfolder -- By default, tests compare only the structure of `expected` and the published output (eg. if you want to save space, replace published files with empty files, but with the expected names!) - -- Zip and upload again, change PERSIST to False \ No newline at end of file + -- Zip and upload again, change PERSIST to False + +- Use `TEST_DATA_FOLDER` variable in your class to reuse existing downloaded and unzipped test data (for faster creation of tests) +- Keep `APP_VARIANT` empty if you want to trigger the test on the latest installed version of the app, or provide an explicit value ('2022' for Photoshop, for example) \ No newline at end of file diff --git a/tests/integration/conftest.py b/tests/integration/conftest.py new file mode 100644 index 0000000000..400c0dcc2a --- /dev/null +++ b/tests/integration/conftest.py @@ -0,0 +1,35 @@ +# -*- coding: utf-8 -*- +# adds command line arguments for 'runtests' as fixtures +import pytest + + +def pytest_addoption(parser): + parser.addoption( + "--test_data_folder", action="store", default=None, + help="Provide url of a folder with unzipped test files" + ) + + parser.addoption( + "--persist", action="store", default=None, + help="True - keep test_db, test_openpype, outputted test files" + ) + + parser.addoption( + "--app_variant", action="store", default=None, + help="Keep empty to locate the latest installed variant, or provide an explicit one" + ) + + +@pytest.fixture(scope="module") +def test_data_folder(request): + return request.config.getoption("--test_data_folder") + + +@pytest.fixture(scope="module") +def persist(request): + return request.config.getoption("--persist") + + +@pytest.fixture(scope="module") +def app_variant(request): + return request.config.getoption("--app_variant") diff --git a/tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py new file mode 100644 index 0000000000..407c4f8a3a --- /dev/null +++ b/tests/integration/hosts/aftereffects/test_publish_in_aftereffects.py @@ -0,0 +1,102 @@ +import pytest +import os +import shutil + +from tests.lib.testing_classes import PublishTest + + +class TestPublishInAfterEffects(PublishTest): + """Basic test case for publishing in AfterEffects + + Uses generic TestCase to prepare fixtures for test data, testing DBs, + env vars.
+ + Opens AfterEffects, runs publish on a prepared workfile. + + Test zip file sets 3 required env vars: + - HEADLESS_PUBLISH - triggers publish immediately after the app is open + - IS_TEST - differentiates this from a regular webpublish + - PYBLISH_TARGETS + + Then checks the content of the DB (if subset, version and + representations were created). + Checks the tmp folder if all expected files were published. + + """ + PERSIST = True + + TEST_FILES = [ + ("1c8261CmHwyMgS-g7S4xL5epAp0jCBmhf", + "test_aftereffects_publish.zip", + "") + ] + + APP = "aftereffects" + APP_VARIANT = "2022" + + APP_NAME = "{}/{}".format(APP, APP_VARIANT) + + TIMEOUT = 120 # publish timeout + + @pytest.fixture(scope="module") + def last_workfile_path(self, download_test_data): + """Get last_workfile_path from source data. + + AfterEffects expects the workfile in a proper folder, so a copy is + done first. + """ + src_path = os.path.join(download_test_data, + "input", + "workfile", + "test_project_test_asset_TestTask_v001.aep") + dest_folder = os.path.join(download_test_data, + self.PROJECT, + self.ASSET, + "work", + self.TASK) + os.makedirs(dest_folder) + dest_path = os.path.join(dest_folder, + "test_project_test_asset_TestTask_v001.aep") + shutil.copy(src_path, dest_path) + + yield dest_path + + @pytest.fixture(scope="module") + def startup_scripts(self, monkeypatch_session, download_test_data): + """AfterEffects needs no startup script, so this fixture is a no-op.""" + pass + + def test_db_asserts(self, dbcon, publish_finished): + """Host and input data dependent expected results in DB.""" + print("test_db_asserts") + + assert 2 == dbcon.count_documents({"type": "version"}), \ + "Unexpected number of versions" + + assert 0 == dbcon.count_documents({"type": "version", + "name": {"$ne": 1}}), \ + "Only versions with name 1 expected" + + assert 1 == dbcon.count_documents({"type": "subset", + "name": "imageMainBackgroundcopy" + }), \ + "imageMainBackgroundcopy subset must be present" + + assert 1 == dbcon.count_documents({"type": "subset", + "name": "workfileTest_task"}), \ + "workfileTest_task subset must be present" + + assert 1 == dbcon.count_documents({"type": "subset", + "name": "reviewTesttask"}), \ + "reviewTesttask subset must be present" + + assert 4 == dbcon.count_documents({"type": "representation"}), \ + "Unexpected number of representations" + + assert 1 == dbcon.count_documents({"type": "representation", + "context.subset": "renderTestTaskDefault", # noqa E501 + "context.ext": "png"}), \ + "Unexpected number of representations with ext 'png'" + + +if __name__ == "__main__": + test_case = TestPublishInAfterEffects() diff --git a/tests/integration/hosts/maya/lib.py b/tests/integration/hosts/maya/lib.py new file mode 100644 index 0000000000..f3a438c065 --- /dev/null +++ b/tests/integration/hosts/maya/lib.py @@ -0,0 +1,41 @@ +import os +import pytest +import shutil + +from tests.lib.testing_classes import HostFixtures + + +class MayaTestClass(HostFixtures): + @pytest.fixture(scope="module") + def last_workfile_path(self, download_test_data, output_folder_url): + """Get last_workfile_path from source data. + + Maya expects the workfile in a proper folder, so a copy is done first.
+ """ + src_path = os.path.join(download_test_data, + "input", + "workfile", + "test_project_test_asset_TestTask_v001.mb") + dest_folder = os.path.join(output_folder_url, + self.PROJECT, + self.ASSET, + "work", + self.TASK) + os.makedirs(dest_folder) + dest_path = os.path.join(dest_folder, + "test_project_test_asset_TestTask_v001.mb") + shutil.copy(src_path, dest_path) + + yield dest_path + + @pytest.fixture(scope="module") + def startup_scripts(self, monkeypatch_session, download_test_data): + """Points Maya to userSetup file from input data""" + startup_path = os.path.join(download_test_data, + "input", + "startup") + original_pythonpath = os.environ.get("PYTHONPATH") + monkeypatch_session.setenv("PYTHONPATH", + "{}{}{}".format(startup_path, + os.pathsep, + original_pythonpath)) diff --git a/tests/integration/hosts/maya/test_publish_in_maya.py b/tests/integration/hosts/maya/test_publish_in_maya.py index 1babf30029..68b0564428 100644 --- a/tests/integration/hosts/maya/test_publish_in_maya.py +++ b/tests/integration/hosts/maya/test_publish_in_maya.py @@ -1,11 +1,7 @@ -import pytest -import os -import shutil - -from tests.lib.testing_classes import PublishTest +from tests.integration.hosts.maya.lib import MayaTestClass -class TestPublishInMaya(PublishTest): +class TestPublishInMaya(MayaTestClass): """Basic test case for publishing in Maya Shouldn't be run standalone, only via the 'runtests' pype command! (??) @@ -13,60 +9,31 @@ class TestPublishInMaya(PublishTest): Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. - Opens Maya, run publish on prepared workile. + Always pulls and uses test data from GDrive! + + Opens Maya, runs publish on a prepared workfile. Then checks the content of the DB (if subset, version and representations were created). Checks the tmp folder if all expected files were published. + How to run: + (in cmd with activated {OPENPYPE_ROOT}/.venv) + {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests ../tests/integration/hosts/maya # noqa: E501 + """ - PERSIST = True + PERSIST = False TEST_FILES = [ - ("1pOwjA_VVBc6ooTZyFxtAwLS2KZHaBlkY", "test_maya_publish.zip", "") + ("1BTSIIULJTuDc8VvXseuiJV_fL6-Bu7FP", "test_maya_publish.zip", "") ] APP = "maya" - APP_VARIANT = "2019" - - APP_NAME = "{}/{}".format(APP, APP_VARIANT) + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" TIMEOUT = 120 # publish timeout - @pytest.fixture(scope="module") - def last_workfile_path(self, download_test_data): - """Get last_workfile_path from source data. - - Maya expects workfile in proper folder, so copy is done first.
- """ - src_path = os.path.join(download_test_data, - "input", - "workfile", - "test_project_test_asset_TestTask_v001.mb") - dest_folder = os.path.join(download_test_data, - self.PROJECT, - self.ASSET, - "work", - self.TASK) - os.makedirs(dest_folder) - dest_path = os.path.join(dest_folder, - "test_project_test_asset_TestTask_v001.mb") - shutil.copy(src_path, dest_path) - - yield dest_path - - @pytest.fixture(scope="module") - def startup_scripts(self, monkeypatch_session, download_test_data): - """Points Maya to userSetup file from input data""" - startup_path = os.path.join(download_test_data, - "input", - "startup") - original_pythonpath = os.environ.get("PYTHONPATH") - monkeypatch_session.setenv("PYTHONPATH", - "{}{}{}".format(startup_path, - os.pathsep, - original_pythonpath)) - def test_db_asserts(self, dbcon, publish_finished): """Host and input data dependent expected results in DB.""" print("test_db_asserts") diff --git a/tests/integration/hosts/nuke/lib.py b/tests/integration/hosts/nuke/lib.py new file mode 100644 index 0000000000..d3c3d7ba81 --- /dev/null +++ b/tests/integration/hosts/nuke/lib.py @@ -0,0 +1,44 @@ +import os +import pytest +import shutil + +from tests.lib.testing_classes import HostFixtures + + +class NukeTestClass(HostFixtures): + @pytest.fixture(scope="module") + def last_workfile_path(self, download_test_data, output_folder_url): + """Get last_workfile_path from source data. + + """ + source_file_name = "test_project_test_asset_CompositingInNuke_v001.nk" + src_path = os.path.join(download_test_data, + "input", + "workfile", + source_file_name) + dest_folder = os.path.join(output_folder_url, + self.PROJECT, + self.ASSET, + "work", + self.TASK) + if not os.path.exists(dest_folder): + os.makedirs(dest_folder) + + dest_path = os.path.join(dest_folder, + source_file_name) + + shutil.copy(src_path, dest_path) + + yield dest_path + + @pytest.fixture(scope="module") + def startup_scripts(self, monkeypatch_session, download_test_data): + """Points Nuke to startup scripts from input data (via NUKE_PATH)""" + startup_path = os.path.join(download_test_data, + "input", + "startup") + original_nuke_path = os.environ.get("NUKE_PATH", "") + monkeypatch_session.setenv("NUKE_PATH", + "{}{}{}".format(startup_path, + os.pathsep, + original_nuke_path)) \ No newline at end of file diff --git a/tests/integration/hosts/nuke/test_publish_in_nuke.py b/tests/integration/hosts/nuke/test_publish_in_nuke.py new file mode 100644 index 0000000000..884160e0b5 --- /dev/null +++ b/tests/integration/hosts/nuke/test_publish_in_nuke.py @@ -0,0 +1,74 @@ +import logging + +from tests.lib.assert_classes import DBAssert +from tests.integration.hosts.nuke.lib import NukeTestClass + +log = logging.getLogger("test_publish_in_nuke") + + +class TestPublishInNuke(NukeTestClass): + """Basic test case for publishing in Nuke + + Uses generic TestCase to prepare fixtures for test data, testing DBs, + env vars. + + Opens Nuke, runs publish on a prepared workfile. + + Then checks the content of the DB (if subset, version and + representations were created). + Checks the tmp folder if all expected files were published. + + How to run: + (in cmd with activated {OPENPYPE_ROOT}/.venv) + {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests ../tests/integration/hosts/nuke # noqa: E501 + + To check logs/errors from the launched app's publish process, keep + PERSIST set to True and check the `test_openpype.logs` collection.
+ """ + # https://drive.google.com/file/d/1SUurHj2aiQ21ZIMJfGVBI2KjR8kIjBGI/view?usp=sharing # noqa: E501 + TEST_FILES = [ + ("1SUurHj2aiQ21ZIMJfGVBI2KjR8kIjBGI", "test_Nuke_publish.zip", "") + ] + + APP = "nuke" + + TIMEOUT = 120 # publish timeout + + # could be overwritten by command line arguments + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" + PERSIST = True # True - keep test_db, test_openpype, outputted test files + TEST_DATA_FOLDER = None + + def test_db_asserts(self, dbcon, publish_finished): + """Host and input data dependent expected results in DB.""" + print("test_db_asserts") + failures = [] + + failures.append(DBAssert.count_of_types(dbcon, "version", 2)) + + failures.append( + DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1})) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="renderCompositingInNukeMain")) + + failures.append( + DBAssert.count_of_types(dbcon, "subset", 1, + name="workfileTest_task")) + + failures.append( + DBAssert.count_of_types(dbcon, "representation", 4)) + + additional_args = {"context.subset": "renderCompositingInNukeMain", + "context.ext": "exr"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + assert not any(failures) + + +if __name__ == "__main__": + test_case = TestPublishInNuke() diff --git a/tests/integration/hosts/photoshop/lib.py b/tests/integration/hosts/photoshop/lib.py new file mode 100644 index 0000000000..16ef2d3ae6 --- /dev/null +++ b/tests/integration/hosts/photoshop/lib.py @@ -0,0 +1,34 @@ +import os +import pytest +import shutil + +from tests.lib.testing_classes import HostFixtures + + +class PhotoshopTestClass(HostFixtures): + @pytest.fixture(scope="module") + def last_workfile_path(self, download_test_data, output_folder_url): + """Get last_workfile_path from source data. + + Photoshop expects the workfile in a proper folder, so a copy is + done first. + """ + src_path = os.path.join(download_test_data, + "input", + "workfile", + "test_project_test_asset_TestTask_v001.psd") + dest_folder = os.path.join(output_folder_url, + self.PROJECT, + self.ASSET, + "work", + self.TASK) + os.makedirs(dest_folder) + dest_path = os.path.join(dest_folder, + "test_project_test_asset_TestTask_v001.psd") + shutil.copy(src_path, dest_path) + + yield dest_path + + @pytest.fixture(scope="module") + def startup_scripts(self, monkeypatch_session, download_test_data): + """Photoshop needs no startup script, so this fixture is a no-op.""" + pass diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop.py index 396468a966..32053cd9d4 100644 --- a/tests/integration/hosts/photoshop/test_publish_in_photoshop.py +++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop.py @@ -1,67 +1,54 @@ -import pytest -import os -import shutil - -from tests.lib.testing_classes import PublishTest +from tests.integration.hosts.photoshop.lib import PhotoshopTestClass -class TestPublishInPhotoshop(PublishTest): +class TestPublishInPhotoshop(PhotoshopTestClass): """Basic test case for publishing in Photoshop Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. - Opens Maya, run publish on prepared workile. + Opens Photoshop, runs publish on a prepared workfile.
+ + Always pulls and uses test data from GDrive! + + Test zip file sets 3 required env vars: + - HEADLESS_PUBLISH - triggers publish immediately after the app is open + - IS_TEST - differentiates this from a regular webpublish + - PYBLISH_TARGETS Then checks the content of the DB (if subset, version and representations were created). Checks the tmp folder if all expected files were published. + How to run: + (in cmd with activated {OPENPYPE_ROOT}/.venv) + {OPENPYPE_ROOT}/.venv/Scripts/python.exe {OPENPYPE_ROOT}/start.py runtests ../tests/integration/hosts/photoshop # noqa: E501 + """ - PERSIST = True + PERSIST = False TEST_FILES = [ - ("1Bciy2pCwMKl1UIpxuPnlX_LHMo_Xkq0K", "test_photoshop_publish.zip", "") + ("1zD2v5cBgkyOm_xIgKz3WKn8aFB_j8qC-", "test_photoshop_publish.zip", "") ] APP = "photoshop" - APP_VARIANT = "2020" + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" APP_NAME = "{}/{}".format(APP, APP_VARIANT) TIMEOUT = 120 # publish timeout - @pytest.fixture(scope="module") - def last_workfile_path(self, download_test_data): - """Get last_workfile_path from source data. - - Maya expects workfile in proper folder, so copy is done first. - """ - src_path = os.path.join(download_test_data, - "input", - "workfile", - "test_project_test_asset_TestTask_v001.psd") - dest_folder = os.path.join(download_test_data, - self.PROJECT, - self.ASSET, - "work", - self.TASK) - os.makedirs(dest_folder) - dest_path = os.path.join(dest_folder, - "test_project_test_asset_TestTask_v001.psd") - shutil.copy(src_path, dest_path) - - yield dest_path - - @pytest.fixture(scope="module") - def startup_scripts(self, monkeypatch_session, download_test_data): - """Points Maya to userSetup file from input data""" - os.environ["IS_HEADLESS"] = "true" def test_db_asserts(self, dbcon, publish_finished): """Host and input data dependent expected results in DB.""" print("test_db_asserts") - assert 5 == dbcon.count_documents({"type": "version"}), \ + assert 3 == dbcon.count_documents({"type": "version"}), \ "Unexpected number of versions" assert 0 == dbcon.count_documents({"type": "version", @@ -69,25 +56,21 @@ class TestPublishInPhotoshop(PublishTest): "Only versions with name 1 expected" assert 1 == dbcon.count_documents({"type": "subset", - "name": "modelMain"}), \ + "name": "imageMainBackgroundcopy"} + ), \ "imageMainBackgroundcopy subset must be present" assert 1 == dbcon.count_documents({"type": "subset", - "name": "workfileTest_task"}), \ + "name": "workfileTesttask"}), \ "workfileTesttask subset must be present" - assert 11 == dbcon.count_documents({"type": "representation"}), \ + assert 6 == dbcon.count_documents({"type": "representation"}), \ "Unexpected number of representations" - assert 2 == dbcon.count_documents({"type": "representation", - "context.subset": "modelMain", - "context.ext": "abc"}), \ - "Not expected no of representations with ext 'abc'" - - assert 2 == dbcon.count_documents({"type": "representation", - "context.subset": "modelMain", - "context.ext": "ma"}), \ - "Not expected no of representations with ext 'abc'" + assert 1 == dbcon.count_documents({"type": "representation", + "context.subset": "imageMainBackgroundcopy", # noqa: E501 + "context.ext": "png"}), \ + "Unexpected number of representations with ext 'png'" if __name__ == "__main__": diff --git a/tests/lib/assert_classes.py b/tests/lib/assert_classes.py new file
mode 100644 index 0000000000..7298853b67 --- /dev/null +++ b/tests/lib/assert_classes.py @@ -0,0 +1,45 @@ +"""Classes and methods for comparing expected and published items in DBs""" + +class DBAssert: + + @classmethod + def count_of_types(cls, dbcon, queried_type, expected, **kwargs): + """Queries 'dbcon' and counts documents of type 'queried_type' + + Args: + dbcon (AvalonMongoDB) + queried_type (str): type of document ("asset", "version"...) + expected (int): expected number of documents + any number of additional keyword arguments + + special handling of argument additional_args (dict) + with additional args like + {"context.subset": "XXX"} + """ + args = {"type": queried_type} + for key, val in kwargs.items(): + if key == "additional_args": + args.update(val) + else: + args[key] = val + + msg = None + no_of_docs = dbcon.count_documents(args) + if expected != no_of_docs: + msg = "Unexpected count of '{}' documents. "\ + "Expected {}, found {}".format(queried_type, + expected, no_of_docs) + + args.pop("type") + detail_str = " " + if args: + detail_str = " with {}".format(args) + + status = "successful" + if msg: + status = "failed" + + print("Comparing count of {}{} {}".format(queried_type, + detail_str, + status)) + + return msg diff --git a/tests/lib/db_handler.py b/tests/lib/db_handler.py index 9be70895da..b181055012 100644 --- a/tests/lib/db_handler.py +++ b/tests/lib/db_handler.py @@ -112,9 +112,17 @@ class DBHandler: source 'db_name' """ db_name_out = db_name_out or db_name - if self._db_exists(db_name) and not overwrite: - raise RuntimeError("DB {} already exists".format(db_name_out) + - "Run with overwrite=True") + if self._db_exists(db_name_out): + if not overwrite: + raise RuntimeError("DB {} already exists. ".format(db_name_out) + + "Run with overwrite=True") + else: + if collection: + db_out = self.client[db_name_out] + if collection in db_out.list_collection_names(): + db_out[collection].drop() + else: + self.teardown(db_name_out) dir_path = os.path.join(dump_dir, db_name) if not os.path.exists(dir_path): @@ -136,7 +144,8 @@ print("Dropping {} database".format(db_name)) self.client.drop_database(db_name) - def backup_to_dump(self, db_name, dump_dir, overwrite=False): + def backup_to_dump(self, db_name, dump_dir, overwrite=False, + collection=None): """ Helper method for running mongodump for specific 'db_name' """ @@ -148,7 +157,8 @@ raise RuntimeError("Backup already exists, " "run with overwrite=True") - query = self._dump_query(self.uri, dump_dir, db_name=db_name) + query = self._dump_query(self.uri, dump_dir, + db_name=db_name, collection=collection) print("Mongodump query:: {}".format(query)) subprocess.run(query) @@ -163,7 +173,7 @@ if collection: if not db_name: raise ValueError("db_name must be present") - coll_part = "--nsInclude={}.{}".format(db_name, collection) + coll_part = "--collection={}".format(collection) query = "\"{}\" --uri=\"{}\" --out={} {} {}".format( "mongodump", uri, output_path, db_part, coll_part ) @@ -187,7 +197,8 @@ drop_part = "--drop" if db_name_out: - db_part += " --nsTo={}.*".format(db_name_out) + collection_str = collection or '*' + db_part += " --nsTo={}.{}".format(db_name_out, collection_str) query = "\"{}\" --uri=\"{}\" --dir=\"{}\" {} {} {}".format( "mongorestore", uri, dump_dir, db_part, coll_part, drop_part @@ -217,15 +228,16 @@ return query +# Examples # handler = DBHandler(uri="mongodb://localhost:27017") # -# -# backup_dir = "c:\\projects\\dumps" # # -# handler.backup_to_dump("openpype", backup_dir, True) -# #
handler.setup_from_dump("test_db", backup_dir, True) -# # handler.setup_from_sql_file("test_db", "c:\\projects\\sql\\item.sql", -# # collection="test_project", -# # drop=False, mode="upsert") -# handler.setup_from_sql("test_db", "c:\\projects\\sql", +# backup_dir = "c:\\projects\\test_nuke_publish\\input\\dumps" +# # # +# handler.backup_to_dump("avalon", backup_dir, True, collection="test_project") +# handler.setup_from_dump("test_db", backup_dir, True, db_name_out="avalon", collection="test_project") +# handler.setup_from_sql_file("test_db", "c:\\projects\\sql\\item.sql", # collection="test_project", # drop=False, mode="upsert") +# handler.setup_from_sql("test_db", "c:\\projects\\sql", +# collection="test_project", +# drop=False, mode="upsert") diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index 59d4abb3aa..fa467acf9c 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -7,10 +7,13 @@ import pytest import tempfile import shutil import glob +import platform from tests.lib.db_handler import DBHandler from tests.lib.file_handler import RemoteFileHandler +from openpype.lib.remote_publish import find_variant_key + class BaseTest: """Empty base test class""" @@ -45,6 +48,8 @@ class ModuleUnitTest(BaseTest): ASSET = "test_asset" TASK = "test_task" + TEST_DATA_FOLDER = None + @pytest.fixture(scope='session') def monkeypatch_session(self): """Monkeypatch couldn't be used with module or session fixtures.""" @@ -54,25 +59,31 @@ class ModuleUnitTest(BaseTest): m.undo() @pytest.fixture(scope="module") - def download_test_data(self): - tmpdir = tempfile.mkdtemp() - for test_file in self.TEST_FILES: - file_id, file_name, md5 = test_file + def download_test_data(self, test_data_folder, persist=False): + test_data_folder = test_data_folder or self.TEST_DATA_FOLDER + if test_data_folder: + print("Using existing folder {}".format(test_data_folder)) + yield test_data_folder + else: + tmpdir = tempfile.mkdtemp() + for test_file in self.TEST_FILES: + file_id, file_name, md5 = test_file - f_name, ext = os.path.splitext(file_name) + f_name, ext = os.path.splitext(file_name) - RemoteFileHandler.download_file_from_google_drive(file_id, - str(tmpdir), - file_name) + RemoteFileHandler.download_file_from_google_drive(file_id, + str(tmpdir), + file_name) - if ext.lstrip('.') in RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS: - RemoteFileHandler.unzip(os.path.join(tmpdir, file_name)) - print("Temporary folder created:: {}".format(tmpdir)) - yield tmpdir + if ext.lstrip('.') in RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS: # noqa: E501 + RemoteFileHandler.unzip(os.path.join(tmpdir, file_name)) + print("Temporary folder created:: {}".format(tmpdir)) + yield tmpdir - if not self.PERSIST: - print("Removing {}".format(tmpdir)) - shutil.rmtree(tmpdir) + persist = persist or self.PERSIST + if not persist: + print("Removing {}".format(tmpdir)) + shutil.rmtree(tmpdir) @pytest.fixture(scope="module") def env_var(self, monkeypatch_session, download_test_data): @@ -97,13 +108,24 @@ class ModuleUnitTest(BaseTest): value = value.format(**all_vars) print("Setting {}:{}".format(key, value)) monkeypatch_session.setenv(key, str(value)) - import openpype + #reset connection to openpype DB with new env var + import openpype.settings.lib as sett_lib + sett_lib._SETTINGS_HANDLER = None + sett_lib._LOCAL_SETTINGS_HANDLER = None + sett_lib.create_settings_handler() + sett_lib.create_local_settings_handler() + + import openpype openpype_root = os.path.dirname(os.path.dirname(openpype.__file__)) + # 
?? why 2 of those monkeypatch_session.setenv("OPENPYPE_ROOT", openpype_root) monkeypatch_session.setenv("OPENPYPE_REPOS_ROOT", openpype_root) + # for remapping purposes (currently in Nuke) + monkeypatch_session.setenv("TEST_SOURCE_FOLDER", download_test_data) + @pytest.fixture(scope="module") def db_setup(self, download_test_data, env_var, monkeypatch_session): """Restore prepared MongoDB dumps into selected DB.""" @@ -111,10 +133,12 @@ class ModuleUnitTest(BaseTest): uri = os.environ.get("OPENPYPE_MONGO") db_handler = DBHandler(uri) - db_handler.setup_from_dump(self.TEST_DB_NAME, backup_dir, True, + db_handler.setup_from_dump(self.TEST_DB_NAME, backup_dir, + overwrite=True, db_name_out=self.TEST_DB_NAME) - db_handler.setup_from_dump("openpype", backup_dir, True, + db_handler.setup_from_dump("openpype", backup_dir, + overwrite=True, db_name_out=self.TEST_OPENPYPE_NAME) yield db_handler @@ -167,31 +191,76 @@ class PublishTest(ModuleUnitTest): """ APP = "" - APP_VARIANT = "" - - APP_NAME = "{}/{}".format(APP, APP_VARIANT) TIMEOUT = 120 # publish timeout - @pytest.fixture(scope="module") - def last_workfile_path(self, download_test_data): - raise NotImplementedError + # could be overwritten by command line arguments + # command line value takes precedence + + # keep empty to locate latest installed variant or explicit + APP_VARIANT = "" + PERSIST = True # True - keep test_db, test_openpype, outputted test files + TEST_DATA_FOLDER = None # use specific folder of unzipped test file @pytest.fixture(scope="module") - def startup_scripts(self, monkeypatch_session, download_test_data): - raise NotImplementedError + def app_name(self, app_variant): + """Returns calculated value for ApplicationManager. Eg.(nuke/12-2)""" + from openpype.lib import ApplicationManager + app_variant = app_variant or self.APP_VARIANT + + application_manager = ApplicationManager() + if not app_variant: + app_variant = find_variant_key(application_manager, self.APP) + + yield "{}/{}".format(self.APP, app_variant) + + @pytest.fixture(scope="module") + def output_folder_url(self, download_test_data): + """Returns location of published data, cleans it first if exists.""" + path = os.path.join(download_test_data, "output") + if os.path.exists(path): + print("Purging {}".format(path)) + shutil.rmtree(path) + yield path + + @pytest.fixture(scope="module") + def app_args(self, download_test_data): + """Returns additional application arguments from a test file. + + Test zip file should contain file at: + FOLDER_DIR/input/app_args/app_args.json + containing a list of command line arguments (like '-x' etc.) 
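+
+        Example 'app_args.json' content (the flags below are hypothetical
+        and host specific):
+            ["-x", "--no-splash"]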
+ """ + app_args = [] + args_url = os.path.join(download_test_data, "input", + "app_args", "app_args.json") + if not os.path.exists(args_url): + print("App argument file {} doesn't exist".format(args_url)) + else: + try: + with open(args_url) as json_file: + app_args = json.load(json_file) + + if not isinstance(app_args, list): + raise ValueError + except ValueError: + print("{} doesn't contain valid JSON".format(args_url)) + six.reraise(*sys.exc_info()) + + yield app_args @pytest.fixture(scope="module") def launched_app(self, dbcon, download_test_data, last_workfile_path, - startup_scripts): + startup_scripts, app_args, app_name, output_folder_url): """Launch host app""" # set publishing folders - root_key = "config.roots.work.{}".format("windows") # TEMP + platform_str = platform.system().lower() + root_key = "config.roots.work.{}".format(platform_str) dbcon.update_one( {"type": "project"}, {"$set": { - root_key: download_test_data + root_key: output_folder_url }} ) @@ -217,8 +286,11 @@ "asset_name": self.ASSET, "task_name": self.TASK } + if app_args: + data["app_args"] = app_args - yield application_manager.launch(self.APP_NAME, **data) + app_process = application_manager.launch(app_name, **data) + yield app_process @pytest.fixture(scope="module") def publish_finished(self, dbcon, launched_app, download_test_data): @@ -236,23 +308,26 @@ yield True def test_folder_structure_same(self, dbcon, publish_finished, - download_test_data): + download_test_data, output_folder_url): """Check if expected and published subfolders contain same files. Compares only presence, not size nor content! """ published_dir_base = download_test_data - published_dir = os.path.join(published_dir_base, + published_dir = os.path.join(output_folder_url, self.PROJECT, + self.ASSET, self.TASK, "**") expected_dir_base = os.path.join(published_dir_base, "expected") expected_dir = os.path.join(expected_dir_base, self.PROJECT, + self.ASSET, self.TASK, "**") - + print("Comparing published:'{}' : expected:'{}'".format(published_dir, + expected_dir)) published = set(f.replace(published_dir_base, '') for f in glob.glob(published_dir, recursive=True) if f != published_dir_base and os.path.exists(f)) @@ -262,3 +337,16 @@ not_matched = expected.difference(published) assert not not_matched, "Missing {} files".format(not_matched) + + +class HostFixtures(PublishTest): + """Host specific fixtures. Should be implemented once per host.""" + @pytest.fixture(scope="module") + def last_workfile_path(self, download_test_data, output_folder_url): + """Returns url of workfile""" + raise NotImplementedError + + @pytest.fixture(scope="module") + def startup_scripts(self, monkeypatch_session, download_test_data): + """Adds init scripts (like userSetup) to the expected location""" + raise NotImplementedError \ No newline at end of file diff --git a/tools/create_zip.ps1 b/tools/create_zip.ps1 index c27857b480..e33445d1fa 100644 --- a/tools/create_zip.ps1 +++ b/tools/create_zip.ps1 @@ -96,9 +96,9 @@ if (-not (Test-Path -PathType Container -Path "$($env:POETRY_HOME)\bin")) { Write-Host ">>> " -NoNewline -ForegroundColor green Write-Host "Cleaning cache files ...
" -NoNewline -Get-ChildItem $openpype_root -Filter "*.pyc" -Force -Recurse | Where-Object { $_.FullName -inotmatch 'build' } | Remove-Item -Force -Get-ChildItem $openpype_root -Filter "*.pyo" -Force -Recurse | Where-Object { $_.FullName -inotmatch 'build' } | Remove-Item -Force -Get-ChildItem $openpype_root -Filter "__pycache__" -Force -Recurse| Where-Object { $_.FullName -inotmatch 'build' } | Remove-Item -Force -Recurse +Get-ChildItem $openpype_root -Filter "__pycache__" -Force -Recurse| Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force -Recurse +Get-ChildItem $openpype_root -Filter "*.pyc" -Force -Recurse | Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force +Get-ChildItem $openpype_root -Filter "*.pyo" -Force -Recurse | Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force Write-Host "OK" -ForegroundColor green Write-Host ">>> " -NoNewline -ForegroundColor green diff --git a/vendor/deadline/custom/plugins/GlobalJobPreLoad.py b/vendor/deadline/custom/plugins/GlobalJobPreLoad.py index 0aa5adaa20..ba1e5f6c6a 100644 --- a/vendor/deadline/custom/plugins/GlobalJobPreLoad.py +++ b/vendor/deadline/custom/plugins/GlobalJobPreLoad.py @@ -48,6 +48,7 @@ def inject_openpype_environment(deadlinePlugin): add_args['asset'] = job.GetJobEnvironmentKeyValue('AVALON_ASSET') add_args['task'] = job.GetJobEnvironmentKeyValue('AVALON_TASK') add_args['app'] = job.GetJobEnvironmentKeyValue('AVALON_APP_NAME') + add_args["envgroup"] = "farm" if all(add_args.values()): for key, value in add_args.items(): diff --git a/website/docs/admin_settings_project_anatomy.md b/website/docs/admin_settings_project_anatomy.md index 30784686e2..1f742c31ed 100644 --- a/website/docs/admin_settings_project_anatomy.md +++ b/website/docs/admin_settings_project_anatomy.md @@ -60,6 +60,7 @@ We have a few required anatomy templates for OpenPype to work properly, however | `task[name]` | Name of task | | `task[type]` | Type of task | | `task[short]` | Shortname of task | +| `parent` | Name of hierarchical parent | | `version` | Version number | | `subset` | Subset name | | `family` | Main family name | diff --git a/website/yarn.lock b/website/yarn.lock index ae40005384..89da2289de 100644 --- a/website/yarn.lock +++ b/website/yarn.lock @@ -1961,9 +1961,9 @@ ajv@^6.1.0, ajv@^6.12.4, ajv@^6.12.5: uri-js "^4.2.2" algoliasearch-helper@^3.3.4: - version "3.4.4" - resolved "https://registry.yarnpkg.com/algoliasearch-helper/-/algoliasearch-helper-3.4.4.tgz#f2eb46bc4d2f6fed82c7201b8ac4ce0a1988ae67" - integrity sha512-OjyVLjykaYKCMxxRMZNiwLp8CS310E0qAeIY2NaublcmLAh8/SL19+zYHp7XCLtMem2ZXwl3ywMiA32O9jszuw== + version "3.6.2" + resolved "https://registry.yarnpkg.com/algoliasearch-helper/-/algoliasearch-helper-3.6.2.tgz#45e19b12589cfa0c611b573287f65266ea2cc14a" + integrity sha512-Xx0NOA6k4ySn+R2l3UMSONAaMkyfmrZ3AP1geEMo32MxDJQJesZABZYsldO9fa6FKQxH91afhi4hO1G0Zc2opg== dependencies: events "^1.1.1"